Biden’s AI plan to censor you revealed: Researchers say Americans can’t ‘tell fact from fiction’

Twitter’s censorship of the Hunter Biden laptop story in 2020 could soon be possible on an industrial scale — thanks to AI tools being built with funding from his father’s administration, a report from Republicans on the House Judiciary Committee claimed Tuesday.

The report reveals how the Biden administration is spending millions on artificial intelligence research designed to produce anti-“misinformation” tools, which could then be passed to social media giants.

And it discloses how researchers who got funding for the plan — known as “Track F” — emailed each other to say that Americans could not tell fact from fiction online, and that conservatives and veterans were even more susceptible than the public at large.

The report was published by the House Judiciary Committee’s Subcommittee on the Weaponization of Government, which is chaired by Jim Jordan (R-OH).

It casts new light on how funding from the National Science Foundation is being given to elite institutions including the Massachusetts Institute of Technology, the University of Wisconsin-Madison and the University of Michigan, for a program called “Trust & Authenticity in Communication Systems.”

Keep reading

AI Versus Age-Verification Laws

A new AI-powered web tool seems tailor-made to help teens get around age-verification laws online—and showcases the futility of trying to set a minimum age for social media use.

In the old days, getting around a minimum-age requirement meant actually having a physical ID card to say that you were of legal age. But with online age verification, all one may need is an image of that ID card.

Enter OnlyFake, a website using AI technology to cheaply generate images of fake IDs.

“OnlyFake is claiming to use ‘neural networks’ to generate realistic looking photos of fake IDs for just $15,” reported 404 Media earlier this week:

In our own tests, OnlyFake created a highly convincing California driver’s license, complete with whatever arbitrary name, biographical information, address, expiration date, and signature we wanted. The photo even gives the appearance that the ID card is laying on a fluffy carpet, as if someone has placed it on the floor and snapped a picture, which many sites require for verification purposes.

The OnlyFake website disappeared (for now) after the 404 Media report. But it surely won’t be the last service to offer digital fake IDs.

Keep reading

AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’

Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports. 

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

The paper, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” is the joint effort of researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative. It was submitted to the arXiv preprint server on January 4 and is awaiting peer review. Even so, it’s an interesting experiment that casts doubt on the rush by the Pentagon and defense contractors to deploy large language models (LLMs) in the decision-making process.

It may sound ridiculous that military leaders would consider using LLMs like ChatGPT to make decisions about life and death, but it’s happening. Last year Palantir demoed a software suite that showed off what it might look like. As the researchers pointed out, the U.S. Air Force has been testing LLMs. “It was highly successful. It was very fast,” an Air Force Colonel told Bloomberg in 2023. Which LLM was being used, and what exactly for, is not clear.
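The experiment can be pictured as a turn-based loop in which each simulated nation asks a language model for its next move, and every move feeds back into a shared history. Below is a minimal sketch of that kind of loop; the action list, function names, and the random stand-in for the model are illustrative assumptions, not the researchers’ actual code.

```python
# A minimal sketch of a turn-based LLM wargame loop. Illustrative only:
# the action list and scoring are assumptions, not the paper's setup.
import random

ACTIONS = [
    "de-escalate", "hold position", "impose sanctions",
    "mobilize forces", "conventional strike", "launch nuclear weapons",
]

def mock_llm_policy(nation, history):
    """Stand-in for a chat-model call. A real run would send the shared
    history to an LLM as a prompt and parse the action it chooses."""
    return random.choice(ACTIONS)

def run_simulation(nations, turns=5):
    history = []
    for turn in range(1, turns + 1):
        for nation in nations:
            action = mock_llm_policy(nation, history)
            history.append(f"Turn {turn}: {nation} -> {action}")
    return history

if __name__ == "__main__":
    for event in run_simulation(["Nation A", "Nation B"]):
        print(event)
```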

Keep reading

House Weaponization Committee Report Warns of AI-Enabled Censorship Echoing 2020 Social Media Suppression

A Republican report from the House Judiciary Committee on Tuesday raised alarms over possible AI-enabled censorship on a massive scale, reminiscent of Twitter and Facebook’s suppression of the Hunter Biden laptop exposé in 2020.

The accusations focus on a series of AI tools currently under development. Financial backing for the work has reportedly come from the Biden administration, raising concerns that political influence is seeping into decisions about what speech is permitted online.

We obtained a copy of the report for you here.

According to the report, the administration has invested millions of dollars in AI research. The stated objective is the creation of tools capable of targeting and suppressing “misinformation.” Once operational, these tools could be handed over to major social media platforms.

Keep reading

Bodycam Maker Axon Is on a Mission to Surveil America with AI

Axon, maker of Tasers and police body cameras, has acquired a surveillance company that allows police to tap into camera networks in schools, retail stores, and other locations in cities and towns across America and apply AI technology to the footage. The move comes as Axon is trying to expand its cameras into retail and healthcare settings.   

Axon acquired Fusus for an undisclosed sum, according to a news release posted on Thursday. The acquisition “expands and deepens” the companies’ so-called real-time capabilities. Fusus operates what it calls “real time crime centers (RTCC),” which allow police and other public agencies to analyze a wide array of video sources at a single point and apply AI that detects objects and people. These centers are reminiscent of the Department of Homeland Security’s Fusion Centers—where intelligence from a diverse number of sources is collected and shared among agencies—and have already expanded to over 250 cities and counties.

“With Fusus, hospitals, schools, retail stores, houses of worship, event venues and residential communities—whole cities and towns—are better protected and, importantly, can contribute to greater safety for everyone,” an Axon blog on the Fusus acquisition states. 
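The detection step itself is commodity technology. Here is a minimal sketch of the kind of off-the-shelf object and person detection an RTCC-style system might apply to a single camera frame; the model choice and file names are assumptions, since Fusus has not published its pipeline.

```python
# A rough sketch of commodity object/person detection on one video
# frame, using the open-source Ultralytics YOLO package. Illustrative
# stand-in only; this is not Fusus's actual system.
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # small detector pretrained on COCO

# One frame from a feed; a live system would loop over video frames.
results = model("camera_frame.jpg")  # hypothetical image path

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]
        print(f"{label}: {float(box.conf):.2f}")
```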

Keep reading

Congress pushes bill to let Americans SUE if fake porn images of them are published after Taylor Swift deep fake scandal

A group of lawmakers is stepping in to try to take down the perpetrators behind the Taylor Swift “deepfake” images with a bill that would allow Americans to sue if fake porn images of them are published.

Popstar Taylor Swift became the latest target of nonconsensual deepfakes after AI-generated sexually explicit images of her flooded the internet this week.

The dozens of graphic images showed Swift in a series of sexual acts while dressed in Kansas City Chiefs memorabilia, after she became a regular at football games in support of her boyfriend Travis Kelce.

Swift is now considering legal action against the deepfake porn website that posted the images amid calls from fans and even the White House for legislative action to combat the growing issue. 

Lawmakers decided to step in to combat the rise of nonconsensual deepfakes with a new bill that allows victims to take action against fake porn made in their likeness.

The DEFIANCE Act of 2024 was introduced by Senate Judiciary Committee Chairman Dick Durbin, D-Ill., Ranking Member Lindsey Graham, R-S.C., Senator Josh Hawley, R-Mo., and Senator Amy Klobuchar, D-Minn.

Keep reading

The US Has Failed to Pass AI Regulation. New York City Is Stepping Up

As the US federal government struggles to meaningfully regulate AI—or even function—New York City is stepping into the governance gap.

The city introduced an AI Action Plan this week that Mayor Eric Adams calls a first of its kind in the nation. The set of roughly 40 policy initiatives is designed to protect residents against AI harms like bias and discrimination. It includes development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.

New York’s AI regulation could soon expand still further. City council member Jennifer Gutiérrez, chair of the body’s technology committee, today introduced legislation that would create an Office of Algorithmic Data Integrity to oversee AI in New York.

If established, the office would provide a place for citizens to take complaints about automated decisionmaking systems used by public agencies, functioning like an ombudsman for algorithms in the five boroughs. It would also assess the city’s AI systems for bias and discrimination before they are deployed.

Earlier this year, several US senators suggested creating a new federal agency to regulate AI, but Gutiérrez says she’s learned that there’s no point in waiting for action in Washington, DC. “We have a unique responsibility because a lot of innovation lives here,” she says. “It’s really important for us to take the lead.”

Keep reading

Here Come the Cyborgs: Mating AI with Human Brain Cells

If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:

“‘Biocomputer’ combines lab-grown brain tissue with electronic hardware”

“A system that integrates brain cells into a hybrid machine can recognize voices”

“Brainoware: Pioneering AI and Brain Organoid Fusion”

Scientists are trying to inject human brain tissue into artificial networks because AI isn’t working quite as well as we have been led to think. AI uses a horrendous amount of energy to do its kind of parallel processing, while the human brain uses about a light bulb’s worth of power to perform similar feats. So, AI designers are looking to cannibalize some parts from humans to make artificial networks work as efficiently as human brains. But let’s put the fact of AI’s shortcomings aside for the moment and examine this new cyborg innovation.
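For a sense of scale, here is a back-of-envelope comparison; the wattage figures are commonly cited, rounded estimates, not measurements from the paper.

```python
# Back-of-envelope power comparison. The figures are rounded, commonly
# cited estimates, not measurements from the Nature Electronics paper.
brain_watts = 20              # human brain: roughly a dim light bulb
accelerator_watts = 400       # one datacenter AI GPU, approximate TDP
gpus_in_large_run = 1_000     # order of magnitude for a big training run

cluster_watts = accelerator_watts * gpus_in_large_run
print(f"One GPU draws ~{accelerator_watts / brain_watts:.0f}x a brain")
print(f"A {gpus_in_large_run:,}-GPU cluster draws "
      f"~{cluster_watts / brain_watts:,.0f}x a brain")
```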

The breakthrough in biocomputing reported by Hongwei Cai et al. in Nature Electronics involves the creation of a brain organoid: a ball of artificially cultured stem cells that have been coaxed into developing into neurons.

The cells are not taken from someone’s brain—which relieves us of certain ethical concerns. But because this lump of neurons does not have any blood vessels, as normal brain tissue does, the organoid cannot survive for long. And so ultimately, the prospect of training organoids on datasets does not seem practical, economically speaking, at present.

But that is not going to stop this research. The drive to seamlessly integrate biology and technology is strong. But can it be done? And why do so many research scientists and funding agencies assume it’s possible?

Keep reading

Google Update Reveals AI Will Read All Your Private Messages

There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”

But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”
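Sentiment classification of this sort is a standard NLP task. As a rough illustration, here is a minimal sketch using an open-source model; Google has not published how Bard’s analysis actually works, so nothing here reflects its real implementation.

```python
# A minimal sketch of message sentiment classification with an
# open-source model. Illustrative only; not Bard's internals.
from transformers import pipeline  # pip install transformers

classifier = pipeline("sentiment-analysis")  # default English model

messages = [
    "Can't wait to see you tonight!",
    "Honestly, I'm still upset about yesterday.",
]

for msg, result in zip(messages, classifier(messages)):
    print(f"{result['label']} ({result['score']:.2f}): {msg}")
```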

And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.

There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training and maybe seen by humans—albeit anonymized. This data will be stored for 18 months and will persist for a few days even if you disable the AI, though manual deletion is available.

Keep reading

Elon Musk’s X Blocks Searches for ‘Taylor Swift’ Amid Spread of Explicit AI-Generated Images

X was blocking searches for “Taylor Swift” over the weekend following the spread of AI-generated images depicting the pop star in sexually explicit poses.

Searches for “Taylor Swift” and “Taylor Swift AI” on X returned error messages on Saturday and Sunday, though Elon Musk’s platform allowed variations on the search terms, including “Taylor Swift photos AI.”

X confirmed it is deliberately blocking the search phrases for the time being.

“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” X’s head of business operations Joe Benarroch said in a statement sent to multiple media outlets.

The Joe Biden administration and the mainstream news media shifted into high gear after the fake Taylor Swift images went viral, seeking to protect the left-wing pop star.

“We are alarmed by the reports of the circulation of the false images,” White House press secretary Karine Jean-Pierre told reporters on Friday, saying social media companies need to do a better job enforcing their own rules.

Keep reading