Study: AI Search Engines Cite Incorrect Sources at a 60% Rate

A new study from Columbia Journalism Review’s Tow Center for Digital Journalism has uncovered serious accuracy issues with generative AI models used for news searches. According to the study, AI search engines have a startling error rate of 60 percent when queried about the news.

Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.

Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.
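As a back-of-the-envelope sketch (assuming, as the 1,600-query total implies, that each of the eight tools was given the same 200 excerpts), the reported error rates translate into rough per-tool query counts like this:

```python
# Convert the study's reported per-tool error rates into rough query counts.
# Assumption: each of the 8 tools received the same 200 excerpts (1,600 / 8).
QUERIES_PER_TOOL = 1600 // 8  # 200

error_rates_pct = {
    "Perplexity": 37,
    "ChatGPT Search": 67,
    "Grok 3": 94,
}

for tool, pct in error_rates_pct.items():
    wrong = QUERIES_PER_TOOL * pct // 100
    print(f"{tool}: roughly {wrong} of {QUERIES_PER_TOOL} answers incorrect")
```

By this estimate, Grok 3 would have missed nearly all of its 200 excerpts, while Perplexity missed well under half.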

The study found that rather than declining to respond when they lacked reliable information, the AI models often provided “confabulations” — plausible-sounding but incorrect or speculative answers. This behavior was seen across all models tested. Surprisingly, paid premium versions like Perplexity Pro ($20/month) and Grok 3 premium ($40/month) confidently delivered incorrect responses even more frequently than the free versions, though they did answer more total prompts correctly.

Evidence also emerged suggesting some AI tools ignored publishers’ Robot Exclusion Protocol settings meant to prevent unauthorized access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite the publisher explicitly blocking Perplexity’s web crawlers.

Even when the AI search tools did provide citations, they frequently directed users to syndicated versions on platforms like Yahoo News rather than to the original publisher sites — even in cases where publishers had formal licensing deals with the AI companies. URL fabrication was another major issue, with over half of citations from Google’s Gemini and Grok 3 leading to fabricated or broken URLs that resulted in error pages. Of the 200 Grok 3 citations tested, 154 (77 percent) led to broken links.

Keep reading

Welcome to Skynet: Google Unveils AI Models to Power Physical Robots

Google DeepMind has introduced two new AI models designed to bring artificial intelligence into the physical world by powering robots. Google is not the only company pursuing this goal at top speed — OpenAI and Tesla are also designing robots controlled entirely by AI.

CNBC reports that Google’s DeepMind has unveiled two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning), which are set to revolutionize the way robots interact with the physical world. These models, running on Google’s Gemini 2.0, are designed to adapt to different situations, understand and respond quickly to instructions, and manipulate objects with dexterity. The company’s goal is to have autonomous robots taking orders from AI without human control.

Google has announced a partnership with Apptronik, a Texas-based robotics developer, to build the next generation of humanoid robots powered by Gemini 2.0. Apptronik, known for its work with Nvidia and NASA, recently received funding from Google in a $350 million round.

Demonstration videos released by Google showcase Apptronik robots equipped with the new AI models performing various tasks, such as plugging items into power strips, filling lunchboxes, moving plastic vegetables, and zipping up bags, all in response to spoken commands. While no timeline has been provided for the technology’s market release, the videos offer a glimpse into the potential applications of these AI-powered robots.

Keep reading

What Are the Ethics of Strapping VR Headsets on Inmates in Solitary Confinement?

Right now in the US, the total population held in prison is nearly 2 million people, which accounts for over 20 percent of the world’s prisoners. Of that massive number, over 122,000 US citizens are forced to endure solitary confinement for at least 22 hours a day.

Solitary confinement is the brutal practice of stuffing people into closet-sized rooms without sunlight, stimulation, or human contact for hours, days, weeks, and sometimes years or decades at a time. It’s a practice that amounts to torture, according to the United Nations and the Geneva Convention.

It’s no wonder: research has shown that just hours of solitary confinement can cause serious and lasting psychological damage, potentially magnifying existing mental illness and significantly increasing a victim’s risk of suicide. All told, it’s a horrifying mark on an already dystopian carceral system.

Now take that grim situation and add a “Black Mirror”-esque wrinkle: prison officials in California are now offering some people held in solitary confinement an escape via virtual reality.

The program comes by way of Creative Acts, a social justice organization that leads art therapy workshops and educational initiatives in youth and adult prison systems. The nonprofit has previously used VR headsets as part of a general population reentry program, where incarcerated people visualize scenarios like their first steps outside the prison walls, before working through their emotional and physical response with volunteers.

The Guardian recently detailed how the program is working at Corcoran State Prison, where incarcerated people are plucked from 6ft by 11ft cells — where some had been for weeks — and chained to a metal seat inside of a “therapeutic module,” a metal cage no bigger than a phone booth.

From there, Creative Acts volunteers fit the participants with Oculus headsets loaded with virtual programming ranging from a ride through Thailand on a rickshaw to a stroll down the streets of Paris. Let’s get it out of the way: the optics of prisoners in small cages, outfitted with VR headsets, are pretty bleak.

Keep reading

Iran using drones and apps to enforce women’s dress code

Iran is using drones and intrusive digital technology to crush dissent, especially among women who refuse to obey the Islamic republic’s strict dress code, the United Nations has said.

Investigators say Iranian security officials are using a strategy of “state-sponsored vigilantism” to encourage people to use specialist phone apps to report women for alleged dress code violations in private vehicles such as taxis and ambulances.

Their new report also highlights the increasing use of drones and security cameras to monitor hijab compliance in Tehran and in southern Iran.

For women who defy the laws, or protest against them, the consequences are severe – arrest, beating, and even rape in custody.

The findings of the Independent International Fact-Finding Mission on the Islamic Republic of Iran come after it determined last year that the country’s theocracy was responsible for the “physical violence” that led to the death in custody of Mahsa Amini in 2022.

Witnesses said the 22-year-old Kurd was badly beaten by the morality police during her arrest, but authorities denied she was mistreated and blamed “sudden heart failure” for her death. Her killing sparked a massive wave of protests that continues today, despite threats of violent arrest and imprisonment.

“Two-and-a-half years after the protests began in September 2022, women and girls in Iran continue to face systematic discrimination, in law and in practice, that permeates all aspects of their lives, particularly with respect to the enforcement of the mandatory hijab,” the report said.

“The state is increasingly reliant on state-sponsored vigilantism in an apparent effort to enlist businesses and private individuals in hijab compliance, portraying it as a civic responsibility.”

Keep reading

App Allows You To Report Illegals In Real Time, Earn Cryptocurrency

ICERAID.US, a new interactive app operating on the blockchain, now lets everyday American citizens report illegal immigrants and illegal activity in real time. According to their website…

“ICERAID is a GovFi protocol that delegates intelligence gathering tasks to citizens that would otherwise be undertaken by law enforcement agencies. ICERAID rewards citizens for capturing and uploading images of criminal illegal alien activity as well as 6 other categories of criminal activity to the ICERAID Explorer.

The more images and locations you upload, the more ICERAID you earn.”

ICERAID Founder Jason Meyers, who has a long history in the cryptocurrency space, told this GP journalist that his app can help take the strain off law enforcement by letting citizens provide initial data in real time. He is currently talking to officials at DHS and at other federal, state, and local agencies.

Jason is the founder of Auditchain Labs AG, which developed Pacioli.ai and ICERAID, the world’s first GovFi and RegFi applications that leverage blockchain to delegate tasks to citizens that would otherwise be undertaken by government and regulatory agencies supported by borrowing and taxation.

It seems that GovFi has the potential to balance budgets, pay down debt and put the millions of people back to work who were abandoned by the Biden Administration in 2021.

Keep reading

Police Departments Launching Drone Programs Across The United States

Police across the U.S. are putting eyes in the sky, as the adoption of drones for law enforcement emerges as a trend. Forces deploying drones – also called Unmanned Aircraft Systems (UAS) or Unmanned Aerial Vehicles (UAVs) – for surveillance and public safety have promised not to use the flying robots for discriminatory policing.

So far the notion of facial recognition drones remains speculative. But the number of police departments adding drones to their technology arsenal – happening in parallel with a surge in AI-assisted fraud and the development of powerful facial recognition algorithms – makes the prospect of mobile biometric surveillance plausible.

Rules governing police drone deployments differ across jurisdictions

A host of recent reports shows just how many U.S. police forces use drones. A release from police in New Orleans describes a recent incident in which the use of drones for officer support led to the arrests of two 16-year-olds. The release cites a rule allowing the use of UAS to search for suspects using thermography.

Police in Washington, D.C. also launched a drone program in June. A notice on their website specifies that drones are used only in searches for missing persons, major traffic crash reconstruction, crowd management for large gatherings, and “situational and tactical purposes (such as a barricade).” It also lists things drones cannot be used for: namely, to record a particular person or group based solely on race, skin color, gender, personal appearance, and other identifiable traits.

West Hollywood is launching a one-year pilot program to deploy UAVs within a three-mile radius of the city. WeHo Online reports that the drones will operate at altitudes of 300 to 400 feet and integrate with the city’s new real-time watch center. Drone footage is retained for 30 days, “subject to policy updates and community feedback.” The force says the drones will not be used for surveillance, but rather as support for first responders to 911 calls.

In San Francisco, a March vote passed Proposition E, which lets law enforcement deploy surveillance cameras and drones. In this case, facial recognition is allowed; Prop E states that “drones and public surveillance cameras installed under these rules could include facial recognition technology and would not require Board approval.”

In Minnesota, an annual report released by the state’s Bureau of Criminal Apprehension (BCA) says police are buying and launching more drones than ever before. A post on the website for the Electronic Frontier Foundation, a nonprofit digital rights group, reports that “Minnesotan law enforcement flew their drones without a warrant 4,326 times in 2023, racking up a state-wide expense of over $1 million. The data show that more was spent on drones last year than in the previous two years combined.”

Keep reading

What happens if the robot army is defeated?

Many of the national security establishment’s leading voices say America’s military needs to rapidly modernize by embracing the digital future through adopting Artificial Intelligence, network-centric warfare, and uncrewed weapons.

Some even claim that such technology has already fundamentally changed the nature of war. The Pentagon’s technologists and the leaders of the tech industry envision a future of an AI-enabled military force wielding swarms of autonomous weapons on land, at sea, and in the skies.

However, before the United States fully mortgages its security to software code and integrated circuits, several questions must be addressed. Assuming the military does one day build a force with an uncrewed front rank, what happens if the robot army is defeated? Will the nation’s leaders surrender at that point, or do they then send in the humans?

The next major question is, what weapons will the humans wield? It is difficult to imagine the services will maintain parallel fleets of digital and analog weapons. Judging by current trends, Pentagon leaders are much more likely to invest the bulk of their procurement budgets in purchasing autonomous or “optionally manned” systems like the XM30 Mechanized Infantry Combat Vehicle.

Those promoting such a future appear ignorant of a very simple truth: war is a human endeavor. Wars are fought to serve human ends. The weapons used are only the means to achieve those ends.

The humans on both sides of a conflict will seek every advantage possible to secure a victory. When a weapon system is connected to the network, the means to remotely defeat it is already built into the design. The humans on the other side would be foolish not to unleash their cyber warriors to find any way to penetrate the network to disrupt cyber-physical systems.

The United States may find that the future military force may not even cross the line of departure because it has been remotely disabled in a digital Pearl Harbor-style attack.

Technology certainly has its place in the military. To take just one example, uncrewed aerial vehicles fill many of the roles traditionally performed by pilots flying expensive aircraft. In certain circumstances, troops on the front lines should have the ability to employ technology directly.

Keep reading

Someone Invited Kamala Harris To Speak At A Major AI Conference And Hoo Boy…

Someone thought it was a good idea to invite Kamala Harris to speak at AI conference HumanX at Fontainebleau in Las Vegas over the weekend, and it was obviously a disaster.

Harris delivered a word salad complete with Doritos on the side.

“Former Vice President Harris will share her vision for the future of AI, emphasizing the responsibility to shape this technology in a way that promotes human rights, privacy, and equal opportunity,” Business Wire suggested when Harris was announced as a speaker.

Instead she blathered on about how she is obsessed with nacho cheese Doritos.

“We did DoorDash ’cause I wanted Doritos. And the red carpet part was about to start and nobody wanted to leave to go to the grocery store,” Harris was filmed saying.

“So it was DoorDash … So I was willing to give up whatever might be the tracking of Kamala Harris’ particular fondness for nacho cheese Doritos for the sake of getting a big bag of Doritos as I watched the Oscars,” she further blathered, sounding completely drunk and breaking into inane cackling.

She continued, “And you can debate with me if it should be a right – I think it should. To expect that the innovation would also be weighted in terms of solving their everyday problems, which are beyond my craving for Doritos… but about whatever – and I know the work is happening – the scientific discoveries, for example to cure longstanding diseases I would love it if there was an investment in resources and solving the affordable housing issue in America.”

Keep reading

The Singularity Has Already Happened

In mathematics, a singularity is a point where the equations blow up: inputs go in, but the outputs are undefined or beyond understanding. In physics, it’s where the laws of three-dimensional Cartesian reality break down.
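For readers who want something concrete, a standard textbook example (not from the original piece) is the function $f(x) = 1/x$, which has a singularity at $x = 0$:

```latex
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} \frac{1}{x} = +\infty, \qquad \lim_{x \to 0^{-}} \frac{1}{x} = -\infty
```

The closer the input gets to zero, the more wildly the output blows up; at zero itself the function is simply undefined.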

Ray Kurzweil hijacked the term and applied it to AI and exponential computing power: Moore’s Law would bring us all past the point where technology accelerates beyond human comprehension.

For Techno-Utopians, among them the transhumanists, the Singularity will bring about a kind of eschatological event, a computerized “End Times,” a Rapture-like moment wherein technology acquires more brainpower than all of humanity combined and deftly solves all our problems (unless the techno-doomers are right, and it annihilates us instead).

In Frazzledrip Overdrive, a recent piece that looked at ever-accelerating Future Shock and the iterations of mass formation psychosis that it causes, I was probably guilty of “burying the lede”.

The lede was this: my current working theory is that “The Singularity” has already occurred:

I think that we are already in the post-singularity era. Right now.

It happened within the last couple of years with the AI wave – the third major technological wave of this century (internet, crypto, now AI) and the eighth major technological leap since the industrial revolution (electricity, radio/TV, telephones, semiconductors, personal computers).

Now we’re past the point where the code is coding. The feedback loop there is already underway, and it too is accelerating…

What I mean is this: because we’re using AI and LLMs to cut code (create programming source code) – and beyond that, those AI engines are producing computer code that is itself generating more code – we’ve essentially crossed the defining barrier of what the Singularity is supposed to be, which is technology creating more intelligent versions of itself.

Granted – it has not resulted in an instantaneous Rapture-like moment of technological quantum leap across all aspects of our reality yet.

But if you haven’t noticed – since the advent of GPT-3 in 2020, itself an iteration of previous LLMs that had been in development for at least a decade, the newer, faster, smarter iterations have been coming at a faster cadence and at shorter intervals.

Keep reading

Laser Light Transformed Into a Supersolid in Groundbreaking Experiment

An international team of physicists has transformed laser light into a supersolid, marking an entirely new process for achieving this mysterious state of matter.

On the quantum level, matter often exhibits strange behaviors, and the supersolid state is one of the most counterintuitive examples. In this state, atoms arrange into a crystal lattice like a solid but also flow without friction, a property typically associated with liquids.

The Quest to Understand Supersolids

Scientists first proposed the idea of a solid that could demonstrate fluid-like flow in the 1960s, with theoretical exploration intensifying in the 1970s.

Helium was initially considered the most promising candidate for achieving this exotic phase of matter. However, early experiments attempting to produce a solid with superfluid properties yielded disappointing results. In the 1980s, physicist John Goodkind used ultrasound techniques to identify anomalies in matter that suggested supersolids might be feasible.

By the 2000s, new experimental data provided stronger hints of supersolid behavior, though some findings conflicted with theoretical predictions, making the state even more elusive.

Creating a Supersolid With Laser Light

For decades, researchers believed that achieving a supersolid state required ultracold atomic Bose-Einstein condensates combined with electromagnetic fields. This method, which was only successfully demonstrated in recent years, produced a material structured like table salt but also capable of flowing.

The latest research, however, takes an entirely different approach, creating a supersolid without using atoms at all.

The team began with a piece of gallium oxide designed with precise ridges to interact with an incoming laser beam. When the laser light struck the semiconductor’s ridges, it produced a quasiparticle known as a polariton. The shape of the ridges then constrained the polariton’s motion, forcing it into a supersolid state.

Keep reading