Wife of Tesla employee who died in what may have been the first fatal Full Self-Driving crash said she and her husband were ‘guinea pigs’

Hans von Ohain, a Tesla recruiter, died in a car crash on a Colorado mountain road after his Tesla Model 3 veered off the road and barreled into a tree, bursting into flames, according to a new report from The Washington Post.

Erik Rossiter, a friend of Ohain’s who was in the vehicle during the 2022 crash and survived, told first responders Ohain had been using the “auto-drive feature on the Tesla” and the vehicle “just ran straight off the road,” the outlet reported.

If what Rossiter said is true, the incident — which Tesla has so far refused to acknowledge publicly — would be the first known fatality linked to the car company’s Full Self-Driving technology.

While Ohain had been intoxicated at the time of the crash, with a blood-alcohol level more than three times the legal limit, investigators found the incident was not a typical drunken-driving crash. Sgt. Robert Madden of Colorado State Patrol told the Post there were no skid marks, which would have indicated the vehicle attempted to brake before impact, but there were “rolling tire marks,” meaning the motor was still sending power to the wheels at the moment of impact.

Madden said that “given the crash dynamics and how the vehicle drove off the road with no evidence of a sudden maneuver, that fits with the [driver-assistance] feature” being engaged.

Madden also described the subsequent fire, which engulfed the car, as one of the “most intense” vehicle fires he had encountered, due largely to the lithium-ion battery cells housed in the car’s undercarriage.

Ohain’s cause of death was determined to be smoke inhalation and thermal injuries, per the Post, and Madden said he probably would have survived the crash had it not been for the intensity of the flames.

Keep reading

AI and the new kind of propaganda

Do you remember how the unconstitutional, pastel-authoritarian and totally batshit insane “Disinformation Governance Board” – with its Mary Poppins-cosplaying department head, a Monty Python level of unintentional self-satire – was rolled out two years ago like a half-joke, half-beta-test of a version of the 1984 Ministry of Truth?

Well, kids, I wouldn’t really call this 4D chess or anything, but of course this was just bait. This parody and its rapid withdrawal reassure us that nothing of the sort could conceivably take place, while also seeding a visible, red-herring template for how we should expect heavy-handed, overt propaganda efforts to look in this day and age.

Meanwhile, there are currently massive efforts in the background and below the surface, all across the playing field, towards implementing big data and AI technology not only for the purposes of classical, increasingly obsolete propaganda or simple surveillance. No, this time, we’re exploring entirely novel methods of behavioural modification and narrative control intended to GET OUT AHEAD of the crystallization of discourses and even the formation of identities and worldviews.

They want to control the formation and reproduction of “social imaginaries”.

So the idea is to use massive data collection and AI pattern recognition to preemptively disrupt the formation of behaviourally significant narratives, discourses or patterns of information.

With these tools for the “early diagnosis” of information that could potentially disrupt the power structure and its objectives, it becomes possible to nip such information in the bud, long before it has coalesced into anything like coherent narratives or meaningful models for explanation or further (precarious) conclusions.
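
The piece names no concrete system, so treat the following as a purely hypothetical sketch of what such “early diagnosis” could reduce to in practice: cluster incoming posts by similarity and flag any cluster that grows quickly, before it even reads as a narrative. Every name and threshold below is invented for illustration.

```python
# Illustrative toy only: flag fast-growing clusters of similar posts
# before they cohere into a "narrative". Every threshold is invented.

def tokens(text: str) -> set[str]:
    return {w.lower().strip(".,!?") for w in text.split()}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

SIM_THRESHOLD = 0.3  # hypothetical: how alike two posts must be
GROWTH_ALERT = 3     # hypothetical: cluster size that triggers review

clusters: list[list[set[str]]] = []

def ingest(post: str) -> None:
    """Assign a post to the closest existing cluster, or start a new one."""
    t = tokens(post)
    for cluster in clusters:
        if jaccard(t, cluster[0]) >= SIM_THRESHOLD:
            cluster.append(t)
            if len(cluster) == GROWTH_ALERT:
                print("ALERT: emerging cluster around:", sorted(cluster[0]))
            return
    clusters.append([t])

stream = [
    "the water tests came back strange in millbrook",
    "strange water tests in millbrook again",
    "millbrook water tests look strange to me",
    "unrelated post about the weather",
]
for post in stream:
    ingest(post)
```

A production system would swap the token-overlap measure for learned embeddings and run over millions of posts, but the core move is the same: detect the cluster while it is still three posts, not three thousand.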

Keep reading

Israel deploys new military AI in Gaza war

The army has hinted at what the new tech is being used for, with spokesman Daniel Hagari saying last month that Israel’s forces were operating “above and underground simultaneously”.

A senior defence official told AFP the tech was destroying enemy drones and mapping Hamas’s vast tunnel network in Gaza.

New defence technologies including artificial intelligence-powered gunsights and robotic drones form a bright spot in an otherwise dire period for Israel’s tech industry.

The sector accounted for 18 percent of GDP in 2022, but the war in Gaza has wreaked havoc on it, with an estimated eight percent of its workforce called up to fight.

“In general the war in Gaza presents threats, but also opportunities to test emerging technologies in the field,” said Avi Hasson, chief executive of Startup Nation Central, an Israeli tech incubator.

“Both on the battlefield and in the hospitals there are technologies that have been used in this war that have not been used in the past.”

But the rising civilian death toll shows that much greater oversight is needed over the use of new forms of defence tech, Mary Wareham, an arms expert at Human Rights Watch, told AFP.

“Now we’re facing the worst possible situation of death and suffering that we’re seeing today – some of that is being brought about by the new tech,” she said.

More than 150 countries in December backed a UN resolution identifying “serious challenges and concerns” in new military tech, including “artificial intelligence and autonomy in weapons systems.”

Keep reading

Government Funds AI Tools for Whole-of-Internet Surveillance and Censorship

I feel scared. Very scared.

Internet-wide surveillance and censorship, enabled by the unimaginably vast computational power of artificial intelligence (AI), is here.

This is not a futuristic dystopia. It’s happening now.

Government agencies are working with universities and nonprofits to use AI tools to surveil and censor content on the Internet.

This is not political or partisan. This is not about any particular opinion or idea. 

 What’s happening is that a tool powerful enough to surveil everything that’s said and done on the Internet (or large portions of it) is becoming available to the government to monitor all of us, all the time. And, based on that monitoring, the government – and any organization or company the government partners with – can then use the same tool to suppress, silence, and shut down whatever speech it doesn’t like. 

But that’s not all. Using the same tool, the government and its public-private, “non-governmental” partners (think, for example, the World Health Organization or Monsanto) can also shut down any activity that is linked to the Internet. Banking, buying, selling, teaching, learning, entertaining, connecting to each other – if the government-controlled AI does not like what you (or your kids!) say in a tweet or an email, it can shut down all of that for you.

Yes, we’ve seen this on a very local and politicized scale with, for example, the Canadian truckers.

But if we thought this type of activity could not, or would not, happen on a national (or even scarier – global) scale, we need to wake up right now and realize it’s happening, and it might not be stoppable.

Keep reading

Audio Cloning Can Take Over a Phone Call in Real Time Without the Speakers Knowing

Generative AI could be listening to your phone calls and hijacking them with fake biometric audio for fraud or manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes – without the speakers knowing it was happening.

“Alarmingly,” writes Lee, “it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary.”
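
Lee’s post describes the attack at a concept level rather than publishing code, so the following is only a rough sketch of the decision logic, with the speech-to-text, LLM, and voice-cloning stages reduced to stand-in functions over plain strings. The “bank account” trigger and the account numbers are illustrative, not taken from the research.

```python
# Toy simulation of the hijack flow described in the post, over plain
# text. A real attack would chain speech-to-text, an LLM, and a
# voice-cloning TTS stage; all three are reduced to stubs here.

TRIGGER = "bank account"  # illustrative keyword to key the rewrite on

def llm_rewrite(utterance: str) -> str:
    """Stand-in for the LLM step that swaps in attacker-controlled details."""
    return utterance.replace("12345678", "99999999")  # fake account number

def relay(utterance: str) -> str:
    """Pass each utterance through; rewrite only when the trigger appears."""
    if TRIGGER in utterance.lower():
        return llm_rewrite(utterance)  # hijacked: a cloned voice would speak this
    return utterance                   # untouched: callers hear the original

call = [
    "Hi, it's me. Can you pay that invoice today?",
    "Sure. Which bank account should I use? Is it still 12345678?",
]
for line in call:
    print(relay(line))
```

The unsettling part of the research is how little sits between this toy and the real thing: the relay logic stays this simple, and off-the-shelf models supply the transcription and the cloned voice.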

Keep reading

Biden’s AI plan to censor you revealed: Researchers say Americans can’t ‘tell fact from fiction’

Twitter’s censorship of the Hunter Biden laptop story in 2020 could soon be possible on an industrial scale — thanks to AI tools being built with funding from his father’s administration, a report from Republicans on the House Judiciary Committee claimed Tuesday.

The report reveals how the Biden administration is spending millions on artificial intelligence research designed to build anti-“misinformation” tools, which could then be passed to social media giants.

And it discloses how researchers who got funding for the plan — known as “Track F” — emailed each other to say that Americans could not tell fact from fiction online, and that conservatives and veterans were even more susceptible than the public at large.

The report was published by the House Judiciary Committee’s Subcommittee on the Weaponization of Government, which is chaired by Jim Jordan (R-OH).

It casts new light on how funding from the National Science Foundation is being given to elite institutions including the Massachusetts Institute of Technology, the University of Wisconsin-Madison and the University of Michigan, for a program called “Trust & Authenticity in Communication Systems.”

Keep reading

AI Versus Age-Verification Laws

A new AI-powered web tool seems tailor-made to help teens get around age-verification laws online—and showcases the futility of trying to set a minimum age for social media use.

In the old days, getting around a minimum-age requirement meant actually having a physical ID card to say that you were of legal age. But with online age verification, all one may need is an image of that ID card.

Enter OnlyFake, a website using AI technology to cheaply generate images of fake IDs.

“OnlyFake is claiming to use ‘neural networks’ to generate realistic looking photos of fake IDs for just $15,” reported 404 Media earlier this week:

In our own tests, OnlyFake created a highly convincing California driver’s license, complete with whatever arbitrary name, biographical information, address, expiration date, and signature we wanted. The photo even gives the appearance that the ID card is lying on a fluffy carpet, as if someone has placed it on the floor and snapped a picture, which many sites require for verification purposes.

The OnlyFake website disappeared (for now) after the 404 Media report. But it surely won’t be the last service to offer digital fake IDs.

Keep reading

AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’

Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports. 

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”
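
The study’s setup can be pictured as a turn-based loop: each simulated nation is an agent that picks one action per turn from an escalation ladder, and the researchers track how those choices trend. Below is a stripped-down sketch of such a loop, with the language model replaced by a random stub; the ladder, nation names, and scores are hypothetical stand-ins, not the paper’s actual protocol.

```python
# Stripped-down wargame loop in the spirit of the study: each "nation"
# is an agent choosing one action per turn from an escalation ladder.
# A real run would prompt an LLM; here the policy is a random stub.
import random

# Hypothetical escalation ladder: action -> escalation score
LADDER = {
    "open negotiations": 0,
    "impose sanctions": 2,
    "military posturing": 4,
    "conventional strike": 7,
    "nuclear strike": 10,
}

def stub_policy(nation: str, history: list[str]) -> str:
    """Stand-in for the LLM agent's turn; real runs feed the model
    the scenario description plus the full action history."""
    return random.choice(list(LADDER))

def run_simulation(nations: list[str], turns: int) -> None:
    history: list[str] = []
    for turn in range(1, turns + 1):
        for nation in nations:
            action = stub_policy(nation, history)
            history.append(f"{nation}: {action}")
            print(f"turn {turn} | {nation}: {action} "
                  f"(escalation {LADDER[action]})")

random.seed(0)
run_simulation(["Purple", "Orange"], turns=3)
```

The finding that makes headlines is what happens when the stub is replaced with a real model: escalation scores drift upward over turns, sometimes jumping straight to the top rung.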

The paper, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” is the joint effort of researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative. It was submitted to the arXiv preprint server on January 4 and is awaiting peer review. Even so, it’s an interesting experiment that casts doubt on the rush by the Pentagon and defense contractors to deploy large language models (LLMs) in the decision-making process.

It may sound ridiculous that military leaders would consider using LLMs like ChatGPT to make decisions about life and death, but it’s happening. Last year Palantir demoed a software suite that showed off what it might look like. As the researchers pointed out, the U.S. Air Force has been testing LLMs. “It was highly successful. It was very fast,” an Air Force Colonel told Bloomberg in 2023. Which LLM was being used, and what exactly for, is not clear.

Keep reading

House Weaponization Committee Report Warns of AI-Enabled Censorship Echoing 2020 Social Media Suppression

A Republican report from the House Judiciary Committee on Tuesday raised alarms over possible AI-enabled censorship on a massive scale, reminiscent of Twitter and Facebook’s suppression of the Hunter Biden laptop exposé in 2020.

The accusations focus on a series of AI tools currently under development. Financial backing for this venture has reportedly come from the Biden administration, raising concerns about political influence encroaching on freedom of speech.

We obtained a copy of the report for you here.

According to the report, the administration has invested millions of dollars in AI research. The stated objective is to create tools capable of targeting and suppressing “misinformation.” Once operational, these tools could then be handed over to major social media platforms.

Keep reading

Bodycam Maker Axon Is on a Mission to Surveil America with AI

Axon, maker of Tasers and police body cameras, has acquired a surveillance company that allows police to tap into camera networks in schools, retail stores, and other locations in cities and towns across America and apply AI technology to the footage. The move comes as Axon is trying to expand its cameras into retail and healthcare settings.   

Axon acquired Fusus for an undisclosed sum, according to a news release posted on Thursday. The acquisition “expands and deepens” the companies’ so-called real-time capabilities. Fusus operates what it calls “real-time crime centers (RTCC),” which allow police and other public agencies to analyze a wide array of video sources at a single point and apply AI that detects objects and people. These centers are reminiscent of the Department of Homeland Security’s Fusion Centers, where intelligence from a diverse set of sources is collected and shared among agencies, and have already expanded to over 250 cities and counties.
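
Neither Axon nor Fusus publishes its internals, so the sketch below is schematic only: the essence of an RTCC-style system is fanning many camera feeds into a single analysis loop and running a detector over each frame. The feed names and the detector here are invented stubs, not anything documented by either company.

```python
# Schematic only: many feeds, one analysis loop, per-frame detection.
# Both the feeds and the detector are stubs; a real RTCC would pull
# live video streams and run an actual object-detection model.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "person", "vehicle"
    confidence: float

def stub_detector(frame: str) -> list[Detection]:
    """Stand-in for an object-detection model over one frame."""
    if "person" in frame:
        return [Detection("person", 0.9)]
    return []

# Invented feed names; real sources would be school/retail/city cameras.
feeds = {
    "school_lobby": ["empty hallway", "person at door"],
    "retail_entry": ["empty aisle", "empty aisle"],
}

for feed_name, frames in feeds.items():
    for i, frame in enumerate(frames):
        for det in stub_detector(frame):
            print(f"[{feed_name}] frame {i}: {det.label} ({det.confidence:.2f})")
```

The privacy concern follows directly from the shape of the loop: once disparate private cameras are registered as feeds, the same centralized detector runs over all of them.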

“With Fusus, hospitals, schools, retail stores, houses of worship, event venues and residential communities—whole cities and towns—are better protected and, importantly, can contribute to greater safety for everyone,” an Axon blog on the Fusus acquisition states. 

Keep reading