

Elon Musk Slams Physician: “I Eat a Donut a Day and I’m Still Alive”





Elon Musk, who previously stated that he would rather “eat tasty food and live a shorter life,” has kept his word, saying he enjoys a breakfast donut daily.

In response to a tweet from Peter Diamandis, a physician and the CEO of the non-profit organization XPRIZE, the Twitter CEO revealed his sweet tooth.

On Tuesday, Diamandis tweeted, “Sugar is poison.” Musk replied: “I eat a donut every morning. Still alive.”

At press time, Musk’s tweet had been viewed more than 11.4 million times.

Musk’s daily donut diet revelation is unsurprising, given his previous remarks about his eating habits.

In 2020, Musk told podcaster Joe Rogan, “I’d rather eat tasty food and live a shorter life.” Musk said that while he works out, he “wouldn’t exercise at all” if he could.

According to CNBC, it’s unclear whether Musk’s diet was influenced by his mother, Maye Musk, a model who worked as a dietitian for 45 years.

Musk is not the only celebrity with unusual eating habits.

Rep. Nancy Pelosi, the former House Speaker, survives — and thrives — on a diet of breakfast ice cream, hot dogs, pasta, and chocolate.

Former President Donald Trump has a well-documented fondness for fast food, telling a McDonald’s employee in February that he knows the menu “better than anyone” who works there.

Amazon founder Jeff Bezos enjoys octopus for breakfast, and Meta CEO Mark Zuckerberg prefers to eat meat from animals he has slaughtered himself.

Musk representatives did not respond immediately to Insider’s request for comment after regular business hours.

Elon Musk Wants a Pause on AI Work

Meanwhile, four artificial intelligence experts have expressed concern after their work was cited in an open letter co-signed by Elon Musk calling for an immediate halt to research.

The letter, dated March 22 and with over 1,800 signatures as of Friday, demanded a six-month moratorium on developing systems “more powerful” than Microsoft-backed (MSFT.O) OpenAI’s new GPT-4, which can hold human-like conversations, compose songs, and summarize lengthy documents.

Since the release of GPT-4’s predecessor, ChatGPT, last year, competitors have rushed to release similar products.

According to the open letter, AI systems with “human-competitive intelligence” pose grave risks to humanity, citing 12 pieces of research from experts such as university academics and current and former employees of OpenAI, Google (GOOGL.O), and its subsidiary DeepMind.

Since then, civil society groups in the United States and the European Union have urged lawmakers to limit OpenAI’s research. OpenAI did not immediately return requests for comment.

Critics have accused the Future of Life Institute (FLI), primarily funded by the Musk Foundation and behind the letter, of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.

“On the Dangers of Stochastic Parrots,” a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google, was cited.

Mitchell, now the chief ethics scientist at Hugging Face, slammed the letter, telling Reuters that it was unclear what constituted “more powerful than GPT-4.”

“By taking a lot of dubious ideas for granted, the letter asserts a set of priorities and a narrative on AI that benefits FLI supporters,” she explained. “Ignoring current harms is a privilege some of us do not have.”

On Twitter, her co-authors Timnit Gebru and Emily M. Bender slammed the letter, calling some of its claims “unhinged.”

FLI president Max Tegmark told Reuters that the campaign did not undermine OpenAI’s competitive advantage.

“It’s quite amusing; I’ve heard people say, ‘Elon Musk is trying to slow down the competition,'” he said, adding that Musk had no involvement in the letter’s creation. “This isn’t about a single company.”

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, took issue with the letter mentioning her work. She co-authored a research paper last year arguing that the widespread use of AI already posed serious risks.

Her research claimed that the current use of AI systems could influence decision-making in the face of climate change, nuclear war, and other existential threats.

“AI does not need to reach human-level intelligence to exacerbate those risks,” she told Reuters.

“There are existing risks that are extremely important but don’t get the same level of Hollywood attention.”

When asked about the criticism, FLI’s Tegmark stated that AI’s short-term and long-term risks should be taken seriously.

“If we cite someone, it just means we claim they’re endorsing that sentence, not the letter or everything they think,” he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, also cited in the letter, defended its contents, telling Reuters that it was prudent to consider black swan events – those that appear unlikely but have catastrophic consequences.

According to the open letter, generative AI tools could be used to flood the internet with “propaganda and untruth.”

Dori-Hacohen called Musk’s signature “pretty rich,” citing a reported increase in misinformation on Twitter following his acquisition of the platform, as documented by the civil society group Common Cause and others.

Twitter will soon introduce a new fee structure for access to its data, which could hinder future research.

“That has had a direct impact on my lab’s work, as well as the work of others studying misinformation and disinformation,” Dori-Hacohen said. “We’re doing our work with one hand tied behind our back.”

Musk and Twitter did not respond immediately to requests for comment.



Sony Is Once Again Facing A Potential Security Breach, This Time By A Ransomware Group





Sony once again faces the possibility of a security breach, this time from a ransomware group claiming to have compromised PlayStation systems. On Sunday, the group LAPSUS$ announced the alleged hack on its dark web site. This could have significant implications for PlayStation users, although details remain scant.

According to the ransomware group, it has compromised all Sony systems and seized valuable information, including game source code and firmware. As “proof,” the group has provided screen captures of what appear to be an internal login page, a PowerPoint presentation, and a file directory.

However, cybersecurity specialists say this evidence is far from conclusive. Cyber Security Connect stated, “None of it appears to be particularly compelling information.” They suspect that LAPSUS$ may have exaggerated the scope of its breach.

Based on the limited data available, it is difficult to assess the scope or credibility of the hackers’ claims. PlayStation’s online services do not appear to have been affected so far, and there is no word on whether user data is at risk.



This is not the first time Sony’s systems have been targeted. In 2011, the PlayStation Network was compromised, exposing the personal information of 77 million users. Sony ultimately took PSN offline for nearly a month to improve security.

In 2014, North Korea launched a devastating cyberattack against Sony Pictures in retaliation for the film The Interview. The attack led to the release of terabytes of sensitive data, including scripts for upcoming films and employees’ personal and medical information. Time will tell whether Sony can once again recover its systems from a significant cyberattack, but PlayStation users may need to prepare for potential consequences.

If LAPSUS$’s claims are accurate, this breach could have comparable repercussions. Sensitive source code and intellectual property could be compromised, and significant PlayStation Network service disruptions are possible. As with any hack, we recommend that users change any passwords used on PlayStation services, particularly if those passwords are reused on other online accounts.

CGMagazine has sought out Sony for comment, but at the time of publication, the company has neither confirmed nor denied the breach’s scope; we will update the article if the situation changes.

SOURCE – (cgmagonline)



Amazon Is Investing Up To $4 Billion In AI Startup Anthropic In Growing Tech Battle





Amazon is investing up to $4 billion in artificial intelligence startup Anthropic and acquiring a minority stake in the company, the two companies announced on Monday.

The investment underscores how Big Tech companies are pouring money into AI as they race to capitalize on the opportunities that the latest iteration of the technology is set to fuel.

According to Amazon and Anthropic, the agreement is part of a larger collaboration to develop so-called foundation models, which are the basis for the generative AI systems that have garnered worldwide attention.

Foundation models, also known as large language models, are trained on vast online information pools, such as blog posts, digital books, scientific articles, and pop songs, to generate text, images, and videos that resemble human labor.



Under the terms of the agreement, Anthropic will use Amazon as its primary cloud computing service and train and deploy its generative AI systems using Amazon’s custom processors.

Anthropic, based in San Francisco, was founded by former employees of OpenAI, the creator of the ChatGPT AI chatbot that made a global impact with its ability to generate responses that resembled human responses.

Anthropic has released Claude, its own ChatGPT competitor. The most recent version, available in the United States and the United Kingdom, is capable of “sophisticated dialogue, creative content generation, complex reasoning, and detailed instruction,” according to the company.

Amazon is racing to catch up with competitors such as Microsoft, which invested $1 billion in OpenAI in 2019 and made another multibillion-dollar investment at the beginning of this year.

Amazon has been releasing new services to keep pace in the AI arms race, such as an update to its popular assistant Alexa that enables more human-like conversations, and AI-generated summaries of consumer product reviews.




Photo Giant Getty Took A Leading AI Image-Maker To Court. Now It’s Also Embracing The Technology





Anyone seeking a gorgeous photograph of a desert landscape will find various options in the Getty Images stock photography collection.

But suppose you’re searching for a wide-angle image of a “hot pink plastic saguaro cactus with large, protruding arms, surrounded by sand, in a landscape at dawn.” According to Getty Images, you can now request that its AI-powered image generator create one on the spot.

The Seattle-based company employs a two-pronged strategy to address the threat and opportunity of artificial intelligence to its business. First, it filed a lawsuit against a prominent provider of AI-generated images earlier this year for what it claimed was a “stunning” violation of Getty’s image collection.

But on Monday, it joined the small but expanding market of AI image creators with a new service that enables its customers to create novel images trained on Getty’s vast library of human-made photographs.

According to Getty Images CEO Craig Peters, the distinction is that this new service is “commercially viable” for business clients and “wasn’t trained on the open internet with stolen imagery.”

He compared this to some pioneers in AI-generated imagery, such as OpenAI’s DALL-E, Midjourney, and Stability AI, the creator of Stable Diffusion.

“We have issues with those services, how they were built, what they were built upon, how they respect creator rights or not, and how they actually feed into deepfakes and other things like that,” Peters said in an interview.



In a lawsuit filed early this year in a Delaware federal court, Getty alleged that London-based Stability AI copied without permission more than 12 million photographs from its collection, along with captions and metadata, “as part of its efforts to build a competing business.”

Getty asserted in its lawsuit that it is entitled to damages of up to $150,000 per infringed work, which could total as much as $1.8 trillion. Stability has sought dismissal or transfer of the case but has not formally responded to the underlying allegations. A similar legal battle is still brewing in the United Kingdom.

Peters stated that the new service, dubbed Generative AI by Getty Images, grew out of a long-standing partnership with California-based chipmaker Nvidia that predates the legal challenge against Stability AI. It is based on Edify, an AI model from Picasso, Nvidia’s generative AI service.

It promises “full indemnification for commercial use” and is intended to eliminate the intellectual property risks that have made businesses hesitant to use generative AI tools.

Getty contributors whose images are included in the training set will also be compensated, with that compensation incorporated into the company’s royalty structure so that Getty is “actually sharing the revenue with them over time rather than paying a one-time fee or not paying that,” according to Peters.



Getty will compete with rivals such as Shutterstock, which has partnered with OpenAI’s DALL-E, and software company Adobe, which has developed its own AI image-generator Firefly, for brands seeking marketing materials and other creative imagery. It is unlikely to appeal to those seeking photojournalism or editorial content, where Getty competes with news organizations such as The Associated Press.

Peters stated that the new model cannot produce politically damaging “deepfake” images because it automatically blocks requests involving recognizable people and brands. As a demonstration for an AP reporter, he entered “President Joe Biden on a surfboard,” but the tool rejected the request.

“The positive news about this generative engine is that it cannot produce an image of the Pentagon being attacked. It cannot generate the pope wearing Balenciaga,” he said, referring to a widely shared fake image of Pope Francis wearing a fashionable puffer jacket generated by artificial intelligence.

Peters added that AI-generated content will not be added to Getty Images’ content libraries, which are reserved for “real people in real places doing real things.”


