Elon Musk Slams Physician: “I Eat a Donut a Day and I’m Still Alive”

Elon Musk, who previously stated that he would rather “eat tasty food and live a shorter life,” has kept his word, saying he enjoys a breakfast donut daily.
In response to a tweet from Peter Diamandis, a physician and the CEO of the non-profit organization XPRIZE, the Twitter CEO revealed his sweet tooth.
On Tuesday, March 28, Diamandis tweeted, “Sugar is poison.” Musk replied: “I eat a donut every morning. Still alive.”
At press time, Musk’s tweet had been viewed more than 11.4 million times.
Musk’s daily donut diet revelation is unsurprising, given his previous remarks about his eating habits.
In 2020, Musk told podcaster Joe Rogan, “I’d rather eat tasty food and live a shorter life.” Musk said that while he works out, he “wouldn’t exercise at all” if he could.
According to CNBC, it’s unclear whether Musk’s diet was influenced by his mother, Maye Musk, a model who worked as a dietitian for 45 years.
Musk is not the only celebrity with unusual eating habits.
Rep. Nancy Pelosi, the former House Speaker, survives — and thrives — on a diet of breakfast ice cream, hot dogs, pasta, and chocolate.
Former President Donald Trump has a well-documented fondness for fast food, telling a McDonald’s employee in February that he knows the menu “better than anyone” who works there.
Amazon founder Jeff Bezos enjoys octopus for breakfast, and Meta CEO Mark Zuckerberg prefers to eat meat from animals he has slaughtered himself.
Musk representatives did not immediately respond to Insider’s request for comment, which was sent outside regular business hours.
Elon Musk Wants Pause On AI Work
Meanwhile, four artificial intelligence experts have expressed concern after their work was cited in an open letter, co-signed by Elon Musk, demanding an urgent pause in AI research.
The letter, dated March 22 and with over 1,800 signatures as of Friday, demanded a six-month moratorium on developing systems “more powerful” than Microsoft-backed (MSFT.O) OpenAI’s new GPT-4, which can hold human-like conversations, compose songs, and summarize lengthy documents.
Since the release of GPT-4’s predecessor, ChatGPT, last year, competitors have rushed to launch similar products.
According to the open letter, AI systems with “human-competitive intelligence” pose grave risks to humanity, citing 12 pieces of research from experts such as university academics and current and former employees of OpenAI, Google (GOOGL.O), and its subsidiary DeepMind.
Since then, civil society groups in the United States and the European Union have urged lawmakers to limit OpenAI’s research. OpenAI did not immediately return requests for comment.
Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is primarily funded by the Musk Foundation, of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited was “On the Dangers of Stochastic Parrots,” a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.
Mitchell, now the chief ethical scientist at Hugging Face, slammed the letter, telling Reuters that it was unclear what counted as “more powerful than GPT-4.”
“By taking a lot of dubious ideas for granted, the letter asserts a set of priorities and a narrative on AI that benefits FLI supporters,” she explained. “Ignoring current harms is a privilege some of us do not have.”
On Twitter, her co-authors Timnit Gebru and Emily M. Bender slammed the letter, calling some of its claims “unhinged.”
FLI president Max Tegmark told Reuters that the campaign was not an attempt to hinder OpenAI’s competitive advantage.
“It’s quite amusing; I’ve heard people say, ‘Elon Musk is trying to slow down the competition,’” he said, adding that Musk had no involvement in the letter’s creation. “This isn’t about a single company.”
RISKS RIGHT NOW
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, took issue with the letter mentioning her work. She co-authored a research paper last year arguing that the widespread use of AI already posed serious risks.
Her research claimed that the current use of AI systems could influence decision-making in the face of climate change, nuclear war, and other existential threats.
“AI does not need to reach human-level intelligence to exacerbate those risks,” she told Reuters.
“There are non-existential risks that are extremely important but don’t get the same level of Hollywood attention.”
When asked about the criticism, FLI’s Tegmark stated that AI’s short-term and long-term risks should be taken seriously.
“If we cite someone, it just means we claim they’re endorsing that sentence, not the letter or everything they think,” he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, also cited in the letter, defended its contents, telling Reuters that it was prudent to consider black swan events – those that appear unlikely but have catastrophic consequences.
According to the open letter, generative AI tools could be used to flood the internet with “propaganda and untruth.”
Dori-Hacohen called Musk’s signature “pretty rich,” citing a reported increase in misinformation on Twitter following his acquisition of the platform, as documented by the civil society group Common Cause and others.
Twitter will soon introduce a new fee structure for access to its data, which could hinder future research.
“That has had a direct impact on my lab’s work, as well as the work of others studying misinformation and disinformation,” Dori-Hacohen said. “We’re doing our work with one hand tied behind our back.”
Musk and Twitter did not respond immediately to requests for comment.
Amazon To Pay $31 Million In Privacy Violation Penalties For Alexa Voice Assistant And Ring Camera

WASHINGTON — Amazon has agreed to pay a $25 million civil penalty to settle Federal Trade Commission charges that it violated a children’s privacy law and misled parents by keeping for years the voice and location data of children recorded by its popular Alexa voice assistant.
In a separate agreement, the company agreed to pay $5.8 million in customer refunds over claims that its Ring doorbell cameras violated users’ privacy.
The Alexa-related action requires Amazon to overhaul its data deletion practices and implement stronger, clearer privacy controls. It also requires the tech giant to delete certain information gathered by its web-connected personal assistant, which users rely on for everything from playing games and queueing up music to checking the weather.
Samuel Levine, the FTC’s director of consumer protection, said in a statement that Amazon’s history of misleading parents, keeping children’s recordings indefinitely, and flouting deletion requests violated COPPA (the Children’s Online Privacy Protection Act) and sacrificed privacy for profits. The 1998 law was designed to shield children from online dangers.
According to a statement by FTC Commissioner Alvaro Bedoya, “when parents asked Amazon to delete their kids’ Alexa voice data, the company did not delete all of it.”
The agency ordered the company to delete certain voice and geolocation data, as well as dormant child accounts.
According to Bedoya, Amazon stored the children’s data to improve the voice recognition algorithm that powers Alexa, the artificial intelligence that runs Echo and other smart speakers. According to him, the FTC case sends a message to other tech firms that are “sprinting to do the same” in the face of intense competition when creating AI datasets.
Bedoya, a father of two young children, said on Twitter that “nothing is more visceral to a parent than the sound of their child’s voice.”
Amazon said it has sold more than half a billion Alexa-enabled devices globally and that use of the service rose 35% last year.
According to the FTC, in the Ring case, Amazon’s subsidiary for home security cameras gave employees and contractors access to customers’ private recordings and used insufficient security procedures that enabled hackers to take over certain accounts.
Many of the FTC’s claims against California-based Ring involve conduct that predates Amazon’s 2018 acquisition of the company. Under the FTC’s order, Ring must pay $5.8 million, which will be used for consumer refunds.
Amazon denied breaking the law and disagreed with the FTC’s allegations on Alexa and Ring. Nevertheless, it stated that the agreements “put these matters behind us.”
The Seattle-based business claimed that its “devices and services are built to protect customers’ privacy and to give customers control over their experience.”
In addition to the penalty in the Alexa case, the proposed order bars Amazon from using deleted voice and geolocation data to develop or improve any data products. The order also requires Amazon to create a privacy program governing its use of geolocation data.
Federal judges must approve the proposed orders.
The FTC commissioners voted unanimously to charge Amazon in both cases.
SOURCE – (AP)
Regulators Take Aim At AI To Protect Consumers And Workers

NEW YORK — The nation’s top financial watchdog has pledged to ensure that businesses comply with the law when using artificial intelligence, in light of rising concerns over increasingly capable AI systems like ChatGPT.
Automated systems and algorithms already heavily influence credit scores, loan conditions, bank account fees, and other monetary factors. Human resources, real estate, and working conditions are all impacted by AI.
According to Ben Winters, senior counsel at the Electronic Privacy Information Center, the federal agencies’ joint statement on enforcement released last month was a good first step.
However, “there’s this narrative that AI is entirely unregulated, which is not really true,” he argued. “What they’re saying is, ‘Just because you use AI to make a decision, it doesn’t mean you’re exempt from responsibility for the repercussions of that decision.’ This is how we feel about it. We are watching.”
Over the past year, the Consumer Financial Protection Bureau has fined financial institutions for relying on new technology and flawed algorithms, leading to wrongful home foreclosures, car repossessions, and lost benefit payments.
Regulators point to these enforcement actions as examples of how there will be no “AI exemptions” to consumer protection.
Rohit Chopra, director of the Consumer Financial Protection Bureau, stated that the agency is “continuing to identify potentially illegal activity” and has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists, and others to make sure we can confront these challenges.”
The Consumer Financial Protection Bureau (CFPB) joins the Federal Trade Commission, the Equal Employment Opportunity Commission, the Department of Justice, and others in claiming they are allocating resources and personnel to target emerging technologies and expose their potentially detrimental effects on consumers.
Chopra emphasized the importance of organizations understanding the decision-making process of their AI systems before implementing them. “In other cases, we are looking at how the use of all this data complies with our fair lending laws,” he said.
Under laws such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for instance, financial institutions are required to explain adverse credit decisions. Decisions about housing and employment are subject to similar rules. Regulators have warned against using AI systems whose decision-making processes are too complex to explain.
“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think what we’ve learned is that that’s not the case. The data itself may contain inherent biases.”
Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows has pledged enforcement action against AI recruiting technology that discriminates against people with disabilities, as well as so-called “bossware” that illegally monitors workers.
Burrows also discussed the potential for algorithms to dictate illegal working conditions and hours to people.
“You need a break if you have a disability or perhaps you’re pregnant,” she added. “The algorithm doesn’t necessarily account for that kind of accommodation. Those are the sorts of things we’re taking a careful look at. … The underlying message here is that the laws still apply, and we have the resources to enforce them; I don’t want anyone to misunderstand that just because the technology is changing.”
At a conference earlier this month, OpenAI’s top lawyer advocated for an industry-led approach to regulation.
Speaking at a technology summit in Washington, D.C., held by software industry group BSA, OpenAI general counsel Jason Kwon said that industry standards and a consensus around them would be a good place to start, and that more debate is warranted about whether they should be mandatory and how often they should be revised.
The CEO of OpenAI, the company responsible for creating ChatGPT, Sam Altman, recently stated that government action “will be critical to mitigate the risks of increasingly powerful” AI systems and advocated for establishing a U.S. or global body to license and regulate the technology.
Altman and other tech CEOs were invited to the White House this month to confront tough questions about the consequences of these tools, even though there is no indication that Congress will draft sweeping new AI legislation as European lawmakers are doing.
According to Winters of the Electronic Privacy Information Center, the agencies could do more to study and publish information on the relevant AI markets, how the industry is working, who the biggest players are, and how the information collected is being used, as they have in the past with new consumer financial products and technologies.
He said the Consumer Financial Protection Bureau had dealt effectively with “Buy Now, Pay Later” businesses. “The AI ecosystem has a great deal of undiscovered territory. Putting that knowledge out there would help.”
SOURCE – (AP)
Nvidia Signals How Artificial Intelligence Could Reshape Technology Sector

WASHINGTON — Shares of Nvidia, already one of the most valuable companies in the world, soared Thursday after the chipmaker forecast a massive increase in revenue, indicating how dramatically the expanding use of artificial intelligence could transform the computer sector.
After a 25% rise in early trading, the California company was on its way to joining the exclusive club of $1 trillion companies such as Alphabet, Apple, and Microsoft.
The developer of graphics chips for gaming and artificial intelligence posted a quarterly profit of more than $2 billion and revenue of $7 billion late Wednesday, above Wall Street projections.
However, Wall Street was caught off guard by its projection of $11 billion in sales this quarter. That would be a 64% increase over the same period last year and far above the $7.2 billion industry analysts had predicted.
“It appears that the new gold rush has begun, and NVIDIA is selling all the picks and shovels,” wrote Susquehanna Financial Group’s Christopher Rolland and Matt Myers on Thursday.
Chipmakers around the world were swept along. Taiwan Semiconductor rose 3.5%, SK Hynix in South Korea gained 5%, and the Netherlands-based ASML climbed 4.8%.
Jensen Huang, founder and CEO of Nvidia, stated that the world’s data centers require a makeover due to the transformation that AI technology will bring.
“The world’s $1 trillion data center is nearly entirely populated by (central processing units) today,” Huang remarked. “And $1 trillion, $250 billion a year, it’s growing, but over the last four years, call it $1 trillion in infrastructure installed, and it’s all based on CPUs and dumb NICs. It is essentially unaccelerated.”
AI chips are intended to carry out artificial intelligence tasks more quickly and efficiently. While general-purpose processors such as CPUs can handle lesser AI workloads, they are “becoming less and less useful as AI advances,” according to 2020 research from Georgetown University’s Center for Security and Emerging Technology.
“Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms,” the paper continues, saying that AI chips can also be more cost-effective than CPUs because of their higher efficiency.
According to analysts, Nvidia could be an early indicator of how AI will impact the tech sector.
“Last night, Nvidia gave jaw-dropping robust guidance that will be heard around the world and shows the historical demand for AI happening now in the enterprise and consumer landscape,” stated Wedbush analyst Dan Ives. “We would point any investor calling this an AI bubble to this Nvidia quarter, particularly guidance, which cements our bullish thesis around AI and speaks to the 4th Industrial Revolution now on the horizon with AI.”
SOURCE – (AP)