AI Pioneer Resigns from Google to “Speak Freely” Over its Perils

Dr. Geoffrey Hinton, a creator of some of the fundamental technology behind today’s generative AI systems, has resigned from Google so he can “speak freely” about the potential risks posed by artificial intelligence. He believes AI products will have unintended repercussions ranging from disinformation to job losses, or even a threat to humanity.

“Look at how it was five years ago and how it is now,” Hinton said, according to the New York Times. “Take the difference and spread it around. That’s terrifying.”

Dr. Hinton’s artificial intelligence career dates back to 1972, and his achievements have shaped modern generative AI practices. Backpropagation, a key technique for training neural networks that is used in today’s generative AI models, was popularized by Hinton, David Rumelhart, and Ronald J. Williams in 1986.

Dr. Hinton, Alex Krizhevsky, and Ilya Sutskever created AlexNet in 2012, which is widely regarded as a breakthrough in machine vision and deep learning and is credited with kicking off the present era of generative AI. In 2018, Hinton, Yoshua Bengio, and Yann LeCun shared the Turing Award, dubbed the “Nobel Prize of Computing.”


Hinton joined Google in 2013, when the company acquired DNNresearch, the startup he co-founded. His departure a decade later represents a watershed moment for the tech industry, which is simultaneously hyping and warning about the possible consequences of increasingly complex automated systems.

For example, following the March release of OpenAI’s GPT-4, a group of tech researchers signed an open letter calling for a six-month freeze on developing new AI systems “more powerful” than GPT-4. However, some prominent critics believe that such concerns are exaggerated or misplaced.

Google and Microsoft Leading in AI

Hinton did not sign the open letter, but he believes that strong competition between digital behemoths such as Google and Microsoft might lead to a global AI race that can only be stopped by international legislation. He emphasizes the importance of collaboration among renowned scientists in preventing AI from becoming unmanageable.

“I don’t think [researchers] should scale this up any further until they know if they can control it,” he said.

Hinton is also concerned about the spread of fake information in photographs, videos, and text, which makes it harder for individuals to determine what is accurate. He also fears that AI will disrupt the employment market, initially supplementing but eventually replacing human workers in fields such as paralegal work, personal assistance, and translation, where workers perform repetitive tasks.


Hinton’s long-term concern is that future AI systems could endanger humans by learning unexpected behavior from massive volumes of data. “The idea that this stuff could actually get smarter than people—a few people believed that,” he told the New York Times. “However, most people thought it was a long shot. And I thought it was a long shot. I assumed it would be 30 to 50 years or possibly longer. Clearly, I no longer believe that.”

AI Is Becoming Dangerous

Hinton’s warnings stand out because he was formerly one of the field’s most vocal supporters. In a 2015 Toronto Star profile, Hinton expressed hope for the future of AI, saying, “I don’t think I’ll ever retire.” However, the New York Times reports that Hinton’s concerns about the future of AI have caused him to reconsider his life’s work. “I console myself with the standard excuse: if I hadn’t done it, someone else would,” he explained.

Some critics have questioned Hinton’s resignation and regrets. In reaction to the New York Times article, Hugging Face’s Dr. Sasha Luccioni tweeted, “People are referring to this to mean: look, AI is becoming so dangerous that even its pioneers are quitting. As I see it, the folks who caused the situation are now abandoning ship.”

Hinton explained his reasons for leaving Google on Monday. “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google,” he stated in a tweet. “Actually, I departed so that I could discuss the perils of AI without having to consider how this affects Google.”

Meanwhile, Elon Musk, a well-known advocate for the responsible development of artificial intelligence, has expressed concerns about the potential dangers of AI if it is not developed ethically and with caution.

He has stated that he believes AI has the potential to be more dangerous than nuclear weapons and has called for regulation and oversight of AI development.

Musk has also been involved in the development of AI through his companies, such as Tesla and SpaceX. Tesla, for example, uses AI in its autonomous driving technology, while SpaceX uses AI to automate certain processes in its rocket launches.

Musk has also co-founded other ventures focused on AI development, such as Neuralink, which aims to develop brain-machine interfaces to enhance human capabilities, and OpenAI, a research organization that aims to create safe and beneficial AI.



Sony Is Once Again Facing A Potential Security Breach, This Time By A Ransomware Group


Once more, Sony faces the possibility of a security breach, this time from a ransomware group claiming to have compromised PlayStation systems. On Sunday, the group LAPSUS$ announced the alleged hack on its dark web site. This could have significant implications for PlayStation users, although details remain scant.

According to the ransomware group, it has compromised all Sony systems and seized valuable information, including game source code and firmware. As “proof,” it has provided screen captures of what appear to be an internal login page, a PowerPoint presentation, and a file directory.

However, cybersecurity specialists find the evidence unconvincing. “None of it appears to be particularly compelling information,” Cyber Security Connect stated, suspecting that LAPSUS$ may have exaggerated the scope of its breach.

Based on the limited data available, it is extremely difficult to determine the scope or credibility of the hackers’ claims. PlayStation’s online services do not appear to have been impacted so far, and there is no word on whether user data is at risk.



This is not the first time Sony’s systems have been targeted. In 2011, the PlayStation Network was compromised, exposing the personal information of 77 million users. Sony ultimately locked down PSN for nearly a month to improve security.

In 2014, North Korea launched a devastating cyberattack against Sony Pictures in retaliation for the film The Interview, resulting in the release of terabytes of sensitive data, including scripts for upcoming films and employees’ personal and medical information. Time will tell whether Sony can once again recover its systems from a significant cyberattack, but PlayStation users may need to prepare for potential consequences.

If LAPSUS$’s claims are accurate, this breach could have comparable repercussions. There is a possibility that sensitive source code and intellectual property could be compromised, as well as significant PlayStation Network service disruptions. As with any hack, we recommend that users change any passwords used on PlayStation services, especially if those passwords are reused on other online accounts.

CGMagazine has sought out Sony for comment, but at the time of publication, the company has neither confirmed nor denied the breach’s scope; we will update the article if the situation changes.

SOURCE – (cgmagonline)



Amazon Is Investing Up To $4 Billion In AI Startup Anthropic In Growing Tech Battle


Amazon is investing up to $4 billion in artificial intelligence startup Anthropic and acquiring a minority stake in the company, the two companies announced on Monday.

The investment underscores how Big Tech companies are pouring money into AI as they race to capitalize on the opportunities that the latest iteration of the technology is set to fuel.

According to Amazon and Anthropic, the agreement is part of a larger collaboration to develop so-called foundation models, which are the basis for the generative AI systems that have garnered worldwide attention.

Foundation models, also known as large language models, are trained on vast online information pools, such as blog posts, digital books, scientific articles, and pop songs, to generate text, images, and videos that resemble human labor.



Under the terms of the agreement, Anthropic will use Amazon as its primary cloud computing service and train and deploy its generative AI systems using Amazon’s custom processors.

Anthropic, based in San Francisco, was founded by former employees of OpenAI, the creator of the ChatGPT AI chatbot that made a global impact with its ability to generate human-like responses.

Anthropic has released Claude, its own ChatGPT competitor. The most recent version, available in the United States and the United Kingdom, can handle “sophisticated dialogue, creative content generation, complex reasoning, and detailed instruction,” according to the company.

Amazon is racing to catch up to competitors such as Microsoft, which invested $1 billion in OpenAI in 2019 and made another multibillion-dollar investment earlier this year.

Amazon has been releasing new services to keep pace in the AI arms race, such as an update to its popular assistant Alexa that enables users to have more human-like conversations, and AI-generated summaries of consumer product reviews.




Photo Giant Getty Took A Leading AI Image-Maker To Court. Now It’s Also Embracing The Technology


Anyone seeking a gorgeous photograph of a desert landscape will find various options in the Getty Images stock photography collection.

But suppose you’re searching for a wide-angle image of a “hot pink plastic saguaro cactus with large, protruding arms, surrounded by sand, in a landscape at dawn.” According to Getty Images, you can now request that its AI-powered image generator create one on the spot.

The Seattle-based company employs a two-pronged strategy to address the threat and opportunity of artificial intelligence to its business. First, it filed a lawsuit against a prominent provider of AI-generated images earlier this year for what it claimed was a “stunning” violation of Getty’s image collection.

But on Monday, it joined the small but expanding market of AI image creators with a new service that enables its customers to create novel images using a model trained on Getty’s vast library of human-made photographs.

According to Getty Images CEO Craig Peters, the distinction is that this new service is “commercially viable” for business clients and “wasn’t trained on the open internet with stolen imagery.”

He contrasted this with some pioneers in AI-generated imagery, such as OpenAI’s DALL-E, Midjourney, and Stability AI, the creator of Stable Diffusion.

“We have issues with those services, how they were built, what they were built upon, how they respect creator rights or not, and how they actually feed into deepfakes and other things like that,” Peters said in an interview.



In a lawsuit filed early this year in a Delaware federal court, Getty alleged that London-based Stability AI copied without permission more than 12 million photographs from its collection, along with captions and metadata, “as part of its efforts to build a competing business.”

Getty asserted in its lawsuit that it is entitled to damages of up to $150,000 per infringed work, which could reach $1.8 trillion. Stability seeks dismissal or transfer of the case but has not formally responded to the underlying allegations. A similar court battle is still brewing in the United Kingdom.

Peters stated that the new service, dubbed Generative AI by Getty Images, resulted from a long-standing partnership with California-based tech company and chipmaker Nvidia, which predated the legal challenges against Stability AI. It is based on Edify, an AI model offered through Picasso, Nvidia’s generative AI service.

It promises “full indemnification for commercial use” and is intended to eliminate the intellectual property risks that have made businesses hesitant to use generative AI tools.

Getty contributors will also be compensated for having their images included in the training set, with payments folded into their royalties so that the company is “actually sharing the revenue with them over time rather than paying a one-time fee or not paying that,” according to Peters.



Getty will compete with rivals such as Shutterstock, which has partnered with OpenAI’s DALL-E, and software company Adobe, which has developed its own AI image-generator Firefly, for brands seeking marketing materials and other creative imagery. It is unlikely to appeal to those seeking photojournalism or editorial content, where Getty competes with news organizations such as The Associated Press.

Peters stated that the new model cannot produce politically damaging “deepfake” images because it automatically blocks requests involving images of recognizable people and brands. As a demonstration for an AP reporter, he entered “President Joe Biden on a surfboard,” but the tool rejected the request.

“The positive news about this generative engine is that it cannot depict the Pentagon being attacked. It cannot generate the pope wearing Balenciaga,” he said, referring to a widely shared fake image of Pope Francis wearing a fashionable puffer jacket generated by artificial intelligence.

Peters added that AI-generated content will not be added to Getty Images’ content libraries, which are reserved for “real people in real places doing real things.”

