Tech
Twitter Labels NPR “State-Affiliated Media”

Twitter has labeled NPR, National Public Radio, as “state-affiliated media” on the social media platform, a designation some liberals feared could weaken public trust in the government-funded news organization.
National Public Radio said it was disturbed to see the description added to all of its tweets, and its president and CEO, John Lansing, called it “unacceptable for Twitter to label us this way.”
It was unclear why Twitter changed its policy. Elon Musk, the owner and CEO of Twitter, cited the definition of state-affiliated media in the company’s policy: “outlets where the state exercises control over editorial content through financial resources, direct or indirect political pressures, and control over production and distribution.”
“Seems accurate,” Musk responded to NPR on Twitter.
The United States government funds NPR through grants from federal agencies and departments and from the Corporation for Public Broadcasting. According to NPR, that funding amounts to less than 1% of its yearly operating budget.
Until Wednesday, however, the same Twitter policies said that “state-financed media organizations with editorial independence, such as the CBC in Canada, BBC in the United Kingdom or National Public Radio in the United States, are not defined as state-affiliated media for this policy.”
.@ElonMusk Is a ‘Hero’ for Exposing @NPR as State-Affiliated Media. @TuckerCarlson: “… requesting details about what in particular might have led to the new designation, @Twitter’s press account auto-replied with a poop emoji 💩.” pic.twitter.com/bZOevuNu0v
— The Vigilant Fox 🦊 (@VigilantFox) April 7, 2023
National Public Radio has since been removed from that sentence on Twitter’s website. When asked for comment, Twitter’s press office responded with an automated poop emoji.
The decision comes just days after Twitter removed The New York Times’ verification check mark.
“Millions of listeners rely on NPR and our member stations for the independent, fact-based journalism we provide,” Lansing said. “NPR supports free speech and holding the powerful accountable.”
In urging Twitter to change its decision, the literary organization PEN America emphasized that National Public Radio “assiduously maintains editorial independence.”
According to Liz Woolery, PEN America’s digital policy leader, Twitter’s decision is “a dangerous move that could further undermine public trust in reliable news sources.”
Smartphone
Amazon To Pay $31 Million In Privacy Violation Penalties For Alexa Voice Assistant And Ring Camera

WASHINGTON, D.C. — Amazon has agreed to pay a $25 million civil penalty to settle Federal Trade Commission charges that it violated a statute protecting children’s privacy and misled parents by retaining for years the voice and location data of children recorded by its well-known Alexa voice assistant.
In a separate agreement, the company acknowledged that its Ring doorbell camera unit may have violated customers’ privacy and agreed to pay $5.8 million in customer refunds.
The Alexa-related action requires Amazon to revise its data deletion procedures and implement tougher, more lucid privacy controls. Additionally, it requires the tech giant to remove certain information gathered by its web-connected personal assistant, which users use to do everything from playing games and queueing up music to checking the weather.
Samuel Levine, the FTC’s director of consumer protection, said in a statement that Amazon’s history of misleading parents, retaining children’s recordings indefinitely, and disobeying deletion requests violated COPPA (the Children’s Online Privacy Protection Act) and compromised privacy for money. The 1998 law was created to protect kids from the dangers of the internet.
According to a statement by FTC Commissioner Alvaro Bedoya, “when parents asked Amazon to delete their kids’ Alexa voice data, the company did not delete all of it.”
The agency ordered the company to delete certain voice and geolocation data, as well as dormant child accounts.
According to Bedoya, Amazon stored the children’s data to improve the voice recognition algorithm that powers Alexa, the artificial intelligence that runs Echo and other smart speakers. According to him, the FTC case sends a message to other tech firms that are “sprinting to do the same” in the face of intense competition when creating AI datasets.
Bedoya, the father of two young children, stated on Twitter that “nothing is more visceral to a parent than the sound of their child’s voice.”
More than half a billion Alexa-enabled devices have been sold worldwide, according to Amazon, which also said that use of the service rose 35% last year.
According to the FTC, in the Ring case, Amazon’s subsidiary for home security cameras gave employees and contractors access to customers’ private recordings and used insufficient security procedures that enabled hackers to take over certain accounts.
Many of the FTC’s claims against California-based Ring date to before Amazon’s 2018 acquisition of the company. Under the FTC’s order, Ring must pay $5.8 million, which will be used for consumer refunds.
Amazon denied breaking the law and disagreed with the FTC’s allegations on Alexa and Ring. Nevertheless, it stated that the agreements “put these matters behind us.”
The Seattle-based business claimed that its “devices and services are built to protect customers’ privacy and to give customers control over their experience.”
In addition to the penalty in the Alexa case, the proposed order forbids Amazon from using deleted voice and geolocation data to develop or enhance any data products. The order also requires Amazon to create a privacy program governing its use of geolocation data.
Federal judges must approve the proposed orders.
The FTC commissioners unanimously made the decision to charge Amazon in both cases.
SOURCE – (AP)
Business
Regulators Take Aim At AI To Protect Consumers And Workers

NEW YORK — The nation’s top financial regulator has pledged to ensure that businesses comply with the law when using artificial intelligence, in light of rising concerns over increasingly capable AI systems like ChatGPT.
Automated systems and algorithms already heavily influence credit scores, loan conditions, bank account fees, and other monetary factors. Human resources, real estate, and working conditions are all impacted by AI.
According to Ben Winters, senior counsel at the Electronic Privacy Information Center, the federal agencies’ joint statement on enforcement released last month was a good first step.
However, “there’s this narrative that AI is entirely unregulated, which is not really true,” he argued. “What they’re saying is, ‘Just because you use AI to make a decision, it doesn’t mean you’re exempt from responsibility for the repercussions of that decision.’ That’s how we feel about it. We are watching.”
The Consumer Financial Protection Bureau has fined financial institutions in the past year for relying on new technology and flawed algorithms, leading to improper foreclosures of homes, repossessions of cars, and lost government benefits payments.
Regulators point to these enforcement actions as examples of how there will be no “AI exemptions” to consumer protection.
Director of the Consumer Financial Protection Bureau Rohit Chopra stated that the organization is “continuing to identify potentially illegal activity” and has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists, and others to make sure we can confront these challenges.”
The Consumer Financial Protection Bureau (CFPB) joins the Federal Trade Commission, the Equal Employment Opportunity Commission, the Department of Justice, and others in claiming they are allocating resources and personnel to target emerging technologies and expose their potentially detrimental effects on consumers.
Chopra emphasized the importance of organizations understanding the decision-making process of their AI systems before implementing them. “In other cases, we are looking at how the use of all this data complies with our fair lending laws and regulations.”
Financial institutions are required by law to report the reasons for negative credit decisions, per the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for instance. Decisions about housing and employment are also subject to these rules. Regulators have warned against using AI systems whose decision-making processes are too complex to explain.
“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think what we’ve learned is that that’s not the case. The data itself may contain inherent biases.”
Charlotte Burrows, chair of the Equal Employment Opportunity Commission (EEOC), has pledged enforcement action against artificial intelligence (AI) recruiting technology that discriminates against people with disabilities, as well as so-called “bossware” that illegally monitors employees.
Burrows also discussed the potential for algorithms to dictate illegal working conditions and hours to people.
She added, “You need a break if you have a disability or perhaps you’re pregnant. The algorithm doesn’t always account for that kind of accommodation. Those are the sorts of things we’re taking a careful look at… The underlying message here is that laws still apply, and we have resources to enforce them; I don’t want anyone to misunderstand that just because technology is changing.”
At a conference earlier this month, OpenAI’s top lawyer advocated for an industry-led approach to regulation.
OpenAI’s general counsel, Jason Kwon, recently spoke at a technology summit in Washington, D.C., held by software industry group BSA. He suggested that industry standards and a consensus around them would be a good place to start, and that more debate is warranted about whether they should be mandatory and how often they should be revised.
The CEO of OpenAI, the company responsible for creating ChatGPT, Sam Altman, recently stated that government action “will be critical to mitigate the risks of increasingly powerful” AI systems and advocated for establishing a U.S. or global body to license and regulate the technology.
Altman and other tech CEOs were invited to the White House this month to confront tough questions about the consequences of these tools, even though there is no indication that Congress will draft sweeping new AI legislation the way European lawmakers are doing.
As they have in the past with new consumer financial products and technologies, the agencies could do more to study and publish information on the relevant AI markets, how the industry is working, who the biggest players are, and how the information collected is being used, according to Winters of the Electronic Privacy Information Center.
He said that “Buy Now, Pay Later” businesses had been dealt with effectively by the Consumer Financial Protection Bureau. “The AI ecosystem has a great deal of undiscovered territory. Putting that knowledge out there would help.”
SOURCE – (AP)
Cryptocurrency
2023: Nvidia Signals How Artificial Intelligence Could Reshape Technology Sector

WASHINGTON — Shares of Nvidia, already one of the most valuable businesses in the world, soared Thursday after the chipmaker forecast a massive increase in revenue, indicating how dramatically the expanding use of artificial intelligence might transform the computer sector.
After a 25% rise in early trade, the California corporation is on its way to joining the exclusive club of $1 trillion companies like Alphabet, Apple, and Microsoft.
The developer of graphics chips for gaming and artificial intelligence posted a quarterly profit of more than $2 billion and revenue of $7 billion late Wednesday, above Wall Street projections.
However, Wall Street was caught off guard by its projection of $11 billion in sales this quarter. That is a 64% increase over the same period last year and far above the $7.2 billion industry analysts had predicted.
“It appears that the new gold rush has begun, and NVIDIA is selling all the picks and shovels,” wrote Susquehanna Financial Group’s Christopher Rolland and Matt Myers on Thursday.
Chipmakers around the world were swept up in the rally. Taiwan Semiconductor rose 3.5%, SK Hynix in South Korea gained 5%, and Netherlands-based ASML climbed 4.8%.
Jensen Huang, founder and CEO of Nvidia, stated that the world’s data centers require a makeover due to the transformation that AI technology will bring.
“The world’s $1 trillion data center is nearly entirely populated by (central processing units) today,” Huang remarked. “And $1 trillion, $250 billion a year, it’s growing, but over the last four years, call it $1 trillion in infrastructure installed, and it’s all based on CPUs and dumb NICs. It is essentially unaccelerated.”
AI chips are intended to conduct artificial intelligence tasks more quickly and efficiently. While general-purpose processors, such as CPUs, can be used for lesser AI activities, they are “becoming less and less useful as AI advances,” according to 2020 research from Georgetown University’s Center for Security and Emerging Technology.
“Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms,” the paper continues, saying that AI chips can also be more cost-effective than CPUs because of their higher efficiency.
According to analysts, Nvidia could be an early indicator of how AI will impact the tech sector.
“Last night, Nvidia gave jaw-dropping robust guidance that will be heard around the world and shows the historical demand for AI happening now in the enterprise and consumer landscape,” stated Wedbush analyst Dan Ives. “We would point any investor calling this an AI bubble to this Nvidia quarter, particularly guidance, which cements our bullish thesis around AI and speaks to the 4th Industrial Revolution now on the horizon with AI.”
SOURCE – (AP)