In win for AI, Newsom vetoes driverless truck ban

Gov. Gavin Newsom made his choice Friday when confronted with a bill that pitted organized labor against the tech and business interests promoting artificial intelligence.

Newsom chose AI. 

He vetoed AB316, which would have required a human driver to be present aboard driverless heavy-duty trucks for at least five years while the state evaluates their safety.

Newsom called the proposed legislation “unnecessary” for the oversight of autonomous trucks, writing in his veto message that “existing law provides sufficient authority to create the appropriate regulatory framework.”

Newsom’s veto goes against the wishes of the International Brotherhood of Teamsters union, which has 250,000 members in the state and whose president, Sean O’Brien, traveled to California this week for rallies in favor of the legislation. Union officials said driverless trucks eventually could put 350,000 drivers out of work in California.

In his veto message, Newsom said he was directing the state’s Labor and Workforce Development Agency “to lead a stakeholder process next year to review and develop recommendations to mitigate the potential employment impact of testing and deployment of autonomous heavy-duty vehicles.”

The veto drew a swift rebuke Friday from Lorena Gonzalez, leader of the 2.1 million-member California Labor Federation, who called out Newsom for siding with his business world allies over his friends in labor. 


PENTAGON’S BUDGET IS SO BLOATED THAT IT NEEDS AN AI PROGRAM TO NAVIGATE IT

AS TECH LUMINARIES like Elon Musk issue solemn warnings about artificial intelligence’s threat of “civilizational destruction,” the U.S. military is using it for a decidedly more mundane purpose: understanding its sprawling $816.7 billion budget and figuring out its own policies.

Thanks to its bloat and political wrangling, the annual Department of Defense budget legislation includes hundreds of revisions and limitations telling the Pentagon what it can and cannot do. To make sense of all those provisions, the Pentagon created an AI program, codenamed GAMECHANGER. 

“In my comptroller role, I am, of course, the most excited about applying GAMECHANGER to gain better visibility and understanding across our various budget exhibits,” said Gregory Little, the deputy comptroller of the Pentagon, shortly after the program’s creation last year. 

“The fact that they have to go to such extraordinary measures to understand what their own policies are is an indictment of how they operate,” said William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft and expert on the defense budget. “It’s kind of similar to the problem with the budget as a whole: They don’t make tough decisions, they just layer on more policies, more weapons systems, more spending. Between the Pentagon and Congress, they’re not really getting rid of old stuff, they’re just adding more.”


Senators Call For Government Power To Hand Out Licenses To AI Companies, Curb “Deceptive” Election-Related AI and “Deepfakes”

This week, a Senate Judiciary hearing held by the Privacy, Technology, and the Law Subcommittee became the stage for bipartisan senators to unveil plans targeting the allegedly looming threats of manipulative artificial intelligence, especially in the realm of elections. A framework proposed by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) envisions a new government agency tasked with issuing licenses to entities working with AI systems.

Simultaneously, Senator Amy Klobuchar (D-Minn.) unveiled preliminary details of upcoming legislation, crafted in tandem with Hawley, along with Senators Chris Coons (D-Del.) and Susan Collins (R-Maine). This new proposal targets the prospect of AI technologies pervading the electoral process.

Apprehension regarding deceptive generative AI undermining democratic elections took center stage during the Senate hearing, with Klobuchar expressing a sense of urgency given the rapidly approaching electoral calendar.

Specifically, the newly minted legislation, titled the Protect Elections from Deceptive AI Act, is intended to clamp down on AI-assisted impersonation of federal political candidates in campaign ads.

Drafted as an amendment to the Federal Election Campaign Act of 1971, the legislation would give targeted candidates legal recourse in federal court to counter harmful AI-generated deceptive content.


Ex-Google executive fears AI will be used to create ‘more lethal pandemics’

A former Google executive who helped pioneer the company’s foray into artificial intelligence fears the technology will be used to create “more lethal pandemics.”

Mustafa Suleyman, co-founder and former head of applied AI at Google’s DeepMind, said the use of artificial intelligence will enable humans to access information with potentially deadly consequences.

“The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible,” Suleyman said on The Diary of a CEO podcast on Monday.

“They can spread faster or [be] more lethal … They cause more harm or potentially kill, like a pandemic,” he added, calling for tighter regulation of AI software.

Suleyman said his biggest fear is that within the next five years a “kid in Russia” could genetically engineer a pathogen and unleash it so as to trigger a pandemic that’s “more lethal” than anything the world has seen thus far.

“That’s where we need containment. We have to limit access to the tools and the know-how to carry out that kind of experimentation,” he said.


Amazon Is Filled with Bogus AI-Generated Mushroom Foraging Books that Could Cause Poisoning Deaths

A surge in AI-generated mushroom foraging books on Amazon has raised alarms among experts, who warn that such guides, filled with misinformation about poisonous mushrooms, could pose life-threatening risks to consumers.

404 Media reports that The New York Mycological Society has raised an alarm over the increasing number of AI-generated mushroom foraging books appearing on Amazon. According to the society, these books could pose serious risks to public health. “These AI-generated foraging books could actually kill people if they eat the wrong mushroom because a guidebook written by an AI prompt said it was safe,” the NYMS stated on social media.

Sigrid Jakob, president of the New York Mycological Society, elaborated on the risks involved in using AI-generated foraging guides. “There are hundreds of poisonous fungi in North America and several that are deadly,” Jakob said. “They can look similar to popular edible species. A poor description in a book can mislead someone to eat a poisonous mushroom.”

Text detection tools have indicated that many of these books are predominantly written by AI, with some showing more than 85 percent AI-generated content. Despite this, these books are often marketed as if they were written by humans, making it challenging for consumers to identify their true origin.

In response to the issue, Amazon has removed some of the flagged AI-generated books from its platform. “All publishers in the store must adhere to our content guidelines, regardless of how the content was created,” said Amazon spokesperson Ashley Vanicek. “We’re committed to providing a safe shopping and reading experience for our customers and we take matters like this seriously.”


Call of Duty to snoop on players for ‘hate speech’

Games publisher Activision has announced that it will use artificial intelligence to listen in on the voice chat of ‘Call of Duty’ players and punish anyone using “hate speech” or “discriminatory language.” 

The feature was rolled out on Wednesday for US players of Call of Duty’s ‘Modern Warfare II’ and ‘Warzone’ titles, and will be launched globally with ‘Call of Duty: Modern Warfare III’ on November 10.

In a statement on Wednesday, Activision said that it will use a machine learning tool to “identify in real-time and enforce against toxic speech,” including “hate speech, discriminatory language, harassment and more.” 

Activision’s announcement made no distinction between private chat among teammates and public chat audible to all players in a server. Presumably, both will be subject to the same monitoring.

Call of Duty’s code of conduct bans insults based on race, sexual orientation, gender identity, age, culture, faith, and country of origin. Players deemed to have breached this code of conduct will be punished with temporary voice chat restrictions, temporary account bans, or permanent bans for repeat offenders, Activision said.

Around 90 million gamers worldwide play Call of Duty every month. At present, Activision depends on reports from other players and text chat monitoring to weed out allegedly “toxic” conduct. According to the developer, around one million accounts have been penalized since the launch of ‘Modern Warfare II’ last October.


Ads for AI sex workers are flooding Instagram and TikTok

Facebook, Instagram and TikTok have tried to keep a tight lid on sexualized content in recent years, banning nudity outright in almost all cases, kicking off sex workers and even cracking down on some artists and educators who speak frankly about sexual health and safety. 

But a new kind of sexualized content has lately been getting through their moderation systems: ads for scantily clad and dirty-talking chatbots, powered by what their creators say is artificial intelligence. 

Dozens of tech startups have been running explicit advertisements on TikTok, Instagram and Facebook in recent months for apps that promote not-safe-for-work experiences. The ads promise “NSFW pics,” custom pinup girls and chats with “no censoring,” and many of them feature digitally created potential “girlfriends” with large breasts and tight clothing. 

Some ads use memes that include popular children’s TV characters, such as SpongeBob SquarePants, Bart Simpson or Cookie Monster, to promote apps with which people can create “NSFW pics.” Others feature digitally created girls who appear to be teenagers or younger, sometimes in the style of anime. 


Bizarre AI app stirs controversy as it lets you text ‘Jesus and Satan’

Artificial intelligence has brought along a wide range of weird and wonderful creations since its sudden growth in popularity, with some proving more useful than others as the technology develops. 

The AI app Text With Jesus was launched in July by Catloaf Software, which, according to its developers, is for “devoted Christians seeking a deeper connection with the Bible’s most iconic figures.”

Built on ChatGPT’s AI technology, the app reportedly lets users interact with chatbots representing biblical characters, offering an instant messaging service that draws its references from Bible study material.

The most notable figures include Jesus, Mary and Joseph, and, even more shockingly, the devil himself, Satan.

Since its public release, the bizarre service has gone viral.

Thousands of people have downloaded the app, which, given its sensitive religious theme, has naturally divided opinion.

Most reviews of the app are positive, with an overall rating of 4.2 out of a possible 5 on Apple’s App Store.


Armed with traffic cones, protesters are immobilizing driverless cars

Two people dressed in dark colors and wearing masks dart into a busy street on a hill in San Francisco. One of them hauls a big orange traffic cone. They sprint toward a driverless car and quickly set the cone on the hood.

The vehicle’s side lights burst on and start flashing orange. And then, it sits there immobile.

“All right, looks good,” one of them says after making sure no one is inside. “Let’s get out of here.” They hop on e-bikes and pedal off.

All it takes to render the technology-packed self-driving car inoperable is a traffic cone. If all goes according to plan, it will stay there, frozen, until someone comes and removes it.

An anonymous activist group called Safe Street Rebel is responsible for this so-called coning incident and dozens of others over the past few months. The group’s goal is to incapacitate the driverless cars roaming San Francisco’s streets as a protest against the city being used as a testing ground for this emerging technology.

Over the past couple of years, driverless cars have become ubiquitous throughout San Francisco. It began with human safety drivers on board who were there to make sure everything ran smoothly. And then, many cars started operating with no humans at all.

They’re mostly run by Cruise, which is owned by GM, and Waymo, which is owned by Google parent company Alphabet. Both companies have poured billions of dollars into developing these autonomous vehicles. Neither Cruise nor Waymo responded to questions about why the cars can be disabled by traffic cones.


AI-generated art cannot be copyrighted, rules a US federal judge

United States District Court Judge Beryl A. Howell ruled on Friday that AI-generated artwork can’t be copyrighted, as noted by The Hollywood Reporter. She was presiding over a lawsuit against the US Copyright Office after it refused a copyright to Stephen Thaler for an AI-generated image made with the Creativity Machine algorithm he’d created.

Thaler had tried multiple times to copyright the image “as a work-for-hire to the owner of the Creativity Machine,” which would have listed the Creativity Machine as the work’s author and Thaler as its owner, but he was repeatedly rejected.

After the Office’s final rejection last year, Thaler sued the Office, claiming its denial was “arbitrary, capricious … and not in accordance with the law,” but Judge Howell didn’t see it that way. In her decision, Judge Howell wrote that copyright has never been granted to work that was “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.”
