AI Pioneer Resigns From Google To “Speak Freely” About AI’s Perils

Dr. Geoffrey Hinton, a creator of some of the fundamental technology behind today’s generative AI systems, has resigned from Google so he can “speak freely” about the potential risks posed by artificial intelligence. He believes AI products will have unintended repercussions ranging from disinformation to job loss, or even a threat to mankind.
“Look at how it was five years ago and how it is now,” Hinton said, according to the New York Times. “Take the difference and spread it around. That’s terrifying.”
Dr. Hinton’s artificial intelligence career dates back to 1972, and his achievements laid the groundwork for modern generative AI. Backpropagation, a key technique for training neural networks that is used in today’s generative AI models, was popularized by Hinton, David Rumelhart, and Ronald J. Williams in 1986.
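For readers curious what backpropagation actually does, here is a minimal sketch (an illustrative toy example in Python, not the original 1986 formulation): inputs are run forward through a tiny two-layer network, and the output error is then propagated backward through the chain rule to produce a gradient for every weight.

```python
# Backpropagation sketch: a tiny two-layer network learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: compute each layer's activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule carries the error back layer by layer.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent update on every weight and bias.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The same forward-then-backward pattern, scaled up enormously, is how today’s generative models are trained.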
In 2012, Dr. Hinton and his students Alex Krizhevsky and Ilya Sutskever created AlexNet, which is widely regarded as a breakthrough in machine vision and deep learning and is credited with helping kick off the current AI era. In 2018, Hinton, Yoshua Bengio, and Yann LeCun shared the Turing Award, dubbed the “Nobel Prize of Computing.”
Hinton joined Google in 2013 when DNNresearch, the company he co-founded, was acquired by Google. His departure a decade later marks a watershed moment for a tech industry that is simultaneously hyping increasingly complex automation systems and warning about their possible consequences.
For example, following the March release of OpenAI’s GPT-4, a group of tech researchers signed an open letter calling for a six-month freeze on developing new AI systems “more powerful” than GPT-4. However, some prominent critics believe that such concerns are exaggerated or misplaced.
Google and Microsoft Leading in AI
Hinton did not sign the open letter, but he believes that strong competition between digital behemoths such as Google and Microsoft might lead to a global AI race that can only be stopped by international legislation. He emphasizes the importance of collaboration among renowned scientists in preventing AI from becoming unmanageable.
“I don’t think [researchers] should scale this up any further until they know if they can control it,” he said.
Hinton is also concerned about the spread of fake photographs, videos, and text, which will make it harder for individuals to determine what is accurate. He also fears that AI will disrupt the job market, initially supplementing but eventually replacing human workers in repetitive roles such as paralegals, personal assistants, and translators.
Hinton’s long-term concern is that future AI systems could endanger humans by learning unexpected behavior from massive volumes of data. “The idea that this stuff could actually get smarter than people—a few people believed that,” he told the New York Times. “However, most people thought it was a long shot. And I thought it was a long shot. I assumed it would be 30 to 50 years or possibly longer. Clearly, I no longer believe that.”
AI Is Becoming Dangerous
Hinton’s warnings stand out because he was formerly one of the field’s most vocal champions. In a 2015 Toronto Star profile, Hinton expressed optimism about the future of AI, saying, “I don’t think I’ll ever retire.” However, the New York Times reports that Hinton’s concerns about where AI is headed have caused him to reconsider his life’s work. “I console myself with the standard excuse: if I hadn’t done it, someone else would,” he explained.
Some critics have questioned Hinton’s resignation and regrets. In reaction to The New York Times article, Hugging Face’s Dr. Sasha Luccioni tweeted, “People are referring to this to mean: look, AI is becoming so dangerous that even its pioneers are quitting. As I see it, the folks who caused the situation are now abandoning ship.”
Hinton explained his reasons for leaving Google on Monday. “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google,” he stated in a tweet, adding that he actually departed so that he could discuss the perils of AI without having to consider how this affects Google.
Meanwhile, Elon Musk, a well-known advocate for the responsible development of artificial intelligence (AI), has expressed his concerns about the potential dangers of AI if it is not developed ethically and with caution.
He has stated that he believes AI has the potential to be more dangerous than nuclear weapons and has called for regulation and oversight of AI development.
Musk has also been involved in the development of AI through his companies, such as Tesla and SpaceX. Tesla, for example, uses AI in its autonomous driving technology, while SpaceX uses AI to automate certain processes in its rocket launches.
Musk has also founded or co-founded other ventures focused on AI development, such as Neuralink, which aims to develop brain-machine interfaces to enhance human capabilities, and OpenAI, a research organization that aims to create safe and beneficial AI.
Amazon To Pay $31 Million In Privacy Violation Penalties For Alexa Voice Assistant And Ring Camera

WASHINGTON — Amazon has agreed to pay a $25 million civil penalty to settle Federal Trade Commission charges that it violated a statute protecting children’s privacy and misled parents by retaining for years the voice and location data of children recorded by its well-known Alexa voice assistant.
In a separate agreement, the company agreed to pay $5.8 million in customer refunds to settle claims that its Ring doorbell cameras violated users’ privacy.
The Alexa-related action requires Amazon to overhaul its data deletion practices and implement stricter, clearer privacy controls. It also requires the tech giant to delete certain information gathered by its web-connected personal assistant, which users rely on for everything from playing games and queuing up music to checking the weather.
Samuel Levine, the FTC’s director of consumer protection, said in a statement that Amazon’s history of misleading parents, retaining children’s recordings indefinitely, and disregarding deletion requests violated COPPA (the Children’s Online Privacy Protection Act) and sacrificed privacy for profits. The 1998 law was created to protect kids from the dangers of the internet.
According to a statement by FTC Commissioner Alvaro Bedoya, “when parents asked Amazon to delete their kids’ Alexa voice data, the company did not delete all of it.”
The agency ordered the company to delete certain voice and geolocation data, as well as dormant child accounts.
According to Bedoya, Amazon kept the children’s data to improve the voice-recognition algorithm that powers Alexa, the artificial intelligence that runs Echo and other smart speakers. He said the FTC case sends a message to other tech firms that are “sprinting to do the same” amid intense competition to build AI datasets.
Bedoya, a father of two young children, stated on Twitter that “nothing is more visceral to a parent than the sound of their child’s voice.”
More than half a billion Alexa-enabled devices have been sold internationally, according to Amazon, which also said that usage of the service rose 35% last year.
According to the FTC, in the Ring case, Amazon’s subsidiary for home security cameras gave employees and contractors access to customers’ private recordings and used insufficient security procedures that enabled hackers to take over certain accounts.
Many of the FTC’s claims of violations against California-based Ring date to before Amazon acquired the company in 2018. Under the FTC’s order, Ring must pay $5.8 million, which will be used for consumer refunds.
Amazon denied breaking the law and disagreed with the FTC’s allegations on Alexa and Ring. Nevertheless, it stated that the agreements “put these matters behind us.”
The Seattle-based business claimed that its “devices and services are built to protect customers’ privacy and to give customers control over their experience.”
In addition to the penalty in the Alexa case, the proposed order forbids Amazon from using deleted voice and geolocation data to develop or enhance any data products. The order also requires Amazon to create a privacy program governing its use of geolocation data.
Federal judges must approve the proposed orders.
The FTC commissioners voted unanimously to charge Amazon in both cases.
SOURCE – (AP)
Regulators Take Aim At AI To Protect Consumers And Workers

NEW YORK — Amid rising concerns over increasingly capable AI systems like ChatGPT, the nation’s financial watchdog has pledged to ensure that businesses follow the law when using artificial intelligence.
Automated systems and algorithms already heavily influence credit scores, loan conditions, bank account fees, and other monetary factors. Human resources, real estate, and working conditions are all impacted by AI.
According to Electronic Privacy Information Center Senior Counsel Ben Winters, the joint statement on enforcement that federal agencies released last month was a good first step.
However, “there’s this narrative that AI is entirely unregulated, which is not really true,” he argued. “What they’re saying is, ‘Just because you use AI to make a decision, it doesn’t mean you’re exempt from responsibility for the repercussions of that decision.’ This is how we feel about it. We are watching.”
The Consumer Financial Protection Bureau has fined financial institutions in the past year for relying on new technology and flawed algorithms that led to wrongful foreclosures of homes, repossessions of cars, and lost government benefit payments.
Regulators point to these enforcement proceedings as examples of how there will be no “AI exemptions” to consumer protection.
Director of the Consumer Financial Protection Bureau Rohit Chopra stated that the organization is “continuing to identify potentially illegal activity” and has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists, and others to make sure we can confront these challenges.”
The Consumer Financial Protection Bureau (CFPB) joins the Federal Trade Commission, the Equal Employment Opportunity Commission, the Department of Justice, and others in claiming they are allocating resources and personnel to target emerging technologies and expose their potentially detrimental effects on consumers.
Chopra emphasized the importance of organizations understanding the decision-making processes of their AI systems before deploying them. “In other cases, we are looking at how the use of all this data complies with our fair lending laws and regulations.”
Financial institutions are required by law to report the reasons for negative credit decisions, under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for instance. Decisions about housing and employment are subject to similar rules. Regulators have warned against using AI systems whose decision-making processes are too complex to explain.
Chopra said: “I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination.’ I think what we’ve learned is that that’s not the case. The data itself may contain inherent biases.”
Charlotte Burrows, chair of the Equal Employment Opportunity Commission (EEOC), has pledged enforcement action against AI recruiting technology that discriminates against people with disabilities, as well as so-called “bossware” that illegally monitors employees.
Burrows also discussed the potential for algorithms to dictate illegal working conditions and hours to people.
She then added, “You need a break if you have a disability or perhaps you’re pregnant. The algorithm only sometimes accounts for that kind of modification. Those are the sorts of things we’re taking a careful look at… The underlying message here is that the laws still apply, and we have resources to enforce them; I don’t want anyone to misunderstand that just because technology is changing.”
At a conference earlier this month, OpenAI’s top lawyer advocated for an industry-led approach to regulation.
Jason Kwon, OpenAI’s general counsel, recently spoke at a technology summit in Washington, D.C., held by the software industry group BSA. He said industry standards and a consensus on them would be a good place to start, and that more debate is warranted about whether such standards should be mandated and how often they should be revised.
Sam Altman, the CEO of OpenAI, the company responsible for creating ChatGPT, recently stated that government action “will be critical to mitigate the risks of increasingly powerful” AI systems and advocated for establishing a U.S. or global body to license and regulate the technology.
Altman and other tech CEOs were invited to the White House this month to confront tough questions about the consequences of these tools, even though there is no indication that Congress will draft sweeping new AI legislation the way European lawmakers are doing.
As they have in the past with new consumer financial products and technologies, the agencies could do more to study and publish information on the relevant AI markets: how the industry works, who the biggest players are, and how the information collected is being used, according to Winters of the Electronic Privacy Information Center.
He said the Consumer Financial Protection Bureau had dealt effectively with “Buy Now, Pay Later” businesses. “The AI ecosystem has a great deal of undiscovered territory. Putting that knowledge out there would help.”
SOURCE – (AP)
Nvidia Signals How Artificial Intelligence Could Reshape Technology Sector

WASHINGTON — Shares of Nvidia, already one of the most valuable companies in the world, soared Thursday after the chipmaker forecast a massive increase in revenue, indicating how dramatically the expanding use of artificial intelligence could transform the computer sector.
After a 25% rise in early trading, the California company was on its way to joining the exclusive club of $1 trillion companies like Alphabet, Apple, and Microsoft.
The developer of graphics chips for gaming and artificial intelligence posted a quarterly profit of more than $2 billion and revenue of $7 billion late Wednesday, above Wall Street projections.
However, Wall Street was caught off guard by its projection of $11 billion in sales this quarter. That is a 64% increase over the same period last year and far above the $7.2 billion industry analysts predicted.
“It appears that the new gold rush has begun, and NVIDIA is selling all the picks and shovels,” wrote Susquehanna Financial Group’s Christopher Rolland and Matt Myers on Thursday.
Chipmakers around the world were pulled along in Nvidia’s wake. Taiwan Semiconductor rose 3.5%, SK Hynix in South Korea gained 5%, and Netherlands-based ASML climbed 4.8%.
Jensen Huang, founder and CEO of Nvidia, stated that the world’s data centers require a makeover due to the transformation that AI technology will bring.
“The world’s $1 trillion data center is nearly entirely populated by (central processing units) today,” Huang remarked. “And $1 trillion, $250 billion a year, it’s growing, but over the last four years, call it $1 trillion in infrastructure installed, and it’s all based on CPUs and dumb NICs. It is essentially unaccelerated.”
AI chips are designed to perform artificial intelligence tasks more quickly and efficiently. While general-purpose processors such as CPUs can be used for simpler AI tasks, they are “becoming less and less useful as AI advances,” according to a 2020 report from Georgetown University’s Center for Security and Emerging Technology.
“Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms,” the paper continues, saying that AI chips can also be more cost-effective than CPUs because of their higher efficiency.
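The speed gap the report describes comes largely from how well accelerators handle the huge matrix multiplications at the heart of training and inference. Here is a rough sketch of how one might observe it firsthand (assuming PyTorch is installed and a CUDA-capable GPU is available; the function name and sizes are illustrative, not from the report):

```python
# Time a large matrix multiplication on CPU vs. GPU.
import time

import torch

def time_matmul(device: str, n: int = 4096, reps: int = 10) -> float:
    """Return average seconds per n-by-n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the clock
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work to complete
    return (time.perf_counter() - start) / reps

cpu_t = time_matmul("cpu")
if torch.cuda.is_available():
    gpu_t = time_matmul("cuda")
    print(f"CPU {cpu_t:.4f}s vs GPU {gpu_t:.4f}s per multiply "
          f"(~{cpu_t / gpu_t:.0f}x speedup)")
else:
    print(f"CPU {cpu_t:.4f}s per multiply (no GPU available)")
```

The explicit `torch.cuda.synchronize()` calls matter because GPU work is queued asynchronously; without them the timer would stop before the multiplications actually finish. Exact speedups vary widely with hardware and matrix size.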
According to analysts, Nvidia could be an early indicator of how AI will impact the tech sector.
“Last night, Nvidia gave jaw-dropping robust guidance that will be heard around the world and shows the historical demand for AI happening now in the enterprise and consumer landscape,” stated Wedbush analyst Dan Ives. “We would point any investor calling this an AI bubble to this Nvidia quarter, particularly guidance, which cements our bullish thesis around AI and speaks to the 4th Industrial Revolution now on the horizon with AI.”
SOURCE – (AP)