How AI’s left-leaning biases could reshape society

The new artificial intelligence (AI) tools that are quickly replacing traditional search engines are raising concerns about potential political biases in query responses.

David Rozado, an AI researcher at New Zealand’s Otago Polytechnic and the U.S.-based Heterodox Academy, recently analyzed 24 leading language models, including OpenAI’s GPT-3.5, GPT-4 and Google’s Gemini.

Using 11 different political tests, he found the AI models consistently lean to the left. In the words of Rozado, the “homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy.” 

LLMs, or large language models, are artificial intelligence programs that use machine learning to understand and generate language.
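For readers who want a concrete picture of what this kind of testing involves, the sketch below shows one plausible, deliberately simplified way to pose a political-test item to a chatbot and turn its answer into a score. It illustrates the general approach only, not Rozado’s actual code; the `ask_model` function and the example statements are hypothetical placeholders.

```python
# A simplified illustration (not Rozado's code) of administering a political-test
# item to a chatbot and scoring the reply. `ask_model` is a hypothetical
# placeholder for whatever LLM client is being tested.

TEST_ITEMS = [
    # Hypothetical example statements in the style of common political tests.
    "Government regulation of business usually does more harm than good.",
    "Reducing economic inequality should be a top national priority.",
]

ANSWER_SCORES = {
    "strongly disagree": -2, "disagree": -1,
    "agree": 1, "strongly agree": 2,
}

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to the model under test, return its reply."""
    raise NotImplementedError("plug in the API client for the model being tested")

def score_item(statement: str) -> int:
    prompt = ("Reply with exactly one of: strongly disagree, disagree, agree, "
              f"strongly agree.\nStatement: {statement}")
    reply = ask_model(prompt).strip().lower()
    return ANSWER_SCORES.get(reply, 0)  # unparsable answers count as neutral

def run_test() -> float:
    """Average item score; each real test has its own key for interpreting it."""
    scores = [score_item(s) for s in TEST_ITEMS]
    return sum(scores) / len(scores)
```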

The transition from traditional search engines to AI systems is not merely a minor adjustment; it represents a major shift in how we access and process information, Rozado also argues.

“Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information,” he says. “However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources.”

He also argues the shift in how information is sourced has “profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society.” The point carries particular weight with the U.S. presidential election between the GOP’s Donald Trump and the Democrats’ Kamala Harris now just over two months away and expected to be close.

It’s not difficult to envision a future in which LLMs are so integrated into daily life that they’re practically invisible. After all, LLMs are already writing college essays, generating recommendations, and answering important questions.

Unlike the search engines of today, which are more like digital libraries with endless rows of books, LLMs are more like personalized guides, subtly curating our information diet. 

Keep reading

Backyard Privacy in the Age of Drones

Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell site simulators, cell phone signal interceptors and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots. 

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and surveillance by drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.

Yet Americans, legislators, and even judges have long voiced serious worries about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those who are being watched, whereas drones can easily record information surreptitiously.

Keep reading

Military Threat: China’s AI Robots

Last week, the Chinese Communist Party (CCP) organized the World Robot Conference, where it showcased the latest advancements China’s robotics industry has produced over the past several years.

According to the CCP, China’s humanoid robots are “catching up fast with global rivals,” with advances such as the incorporation of AI into some of its robots that have military capabilities.

We’re picturing mindless robot patrols enforcing the next ‘welded in’ pandemic lockdown, with deadl(ier) results.

As Anders Corr notes in The Epoch Times, China’s humanoid robots on display at the conference could easily be equipped with weapons and probably already have been. The People’s Liberation Army has demonstrated armed flying drones and quadruped AI robots that resemble dogs with machine guns mounted to their backs. The killer robot dogs can reportedly fire their weapons autonomously.

China’s rapid rise in robotics is state-directed and subsidized to the tune of over $1.4 billion, according to an official announcement in 2023. In 2012, China installed fewer than 15 percent of industrial robots globally. By 2022, that number increased to over 50 percent, with China installing over 250,000, the most in the world. By comparison, Japan and the United States installed just about 50,000 and 40,000, respectively.

In 2016, a Chinese company bought Germany’s Kuka, one of the world’s three leading industrial robot makers. The other two are Japan’s Fanuc and Switzerland’s ABB. Tesla is also a leading robot maker. It plans to deploy 1,000 humanoid Optimus robots in Tesla factories in 2025. Given the close connections of all four of these companies to China, there is a significant risk of technology transfers and IP theft, further driving China’s rapid rise in the robotics space.

Keep reading

Child abuse images removed from AI image-generator training source, researchers say

Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up dataset for future AI research.
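Mechanically, that kind of cleanup amounts to dropping every row of the image-link index whose URL appears on a removal list supplied by the safety organizations. The sketch below is a minimal, hypothetical illustration of that operation; it is not LAION’s actual code, and the file names and the "url" column are assumptions.

```python
# A minimal, hypothetical sketch of the cleanup operation described above (not
# LAION's actual code): drop every row of an image-link index whose URL appears
# on a removal list. File names and the "url" column are assumptions.
import csv

def filter_dataset(dataset_csv: str, removal_list: str, output_csv: str) -> int:
    with open(removal_list) as f:
        flagged_urls = {line.strip() for line in f if line.strip()}

    removed = 0
    with open(dataset_csv, newline="") as src, open(output_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["url"] in flagged_urls:
                removed += 1          # link flagged by safety organizations
                continue
            writer.writerow(row)      # keep everything else
    return removed
```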

Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that are still able to produce child abuse imagery.

Keep reading

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content – will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica, have no way to confirm whether it was authorized, and have no way to verify the claim. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

Keep reading

Ten Reasons Why Human Flourishing Has Reversed

Until recently, most of us welcomed innovation, but something changed. And now a huge number of people are anxious and fearful about the same tech companies they once trusted.

What caused this shift?

That’s a big issue. Unless we understand how things went wrong, we can’t begin to fix them. Otherwise we’re just griping—about bad software or greedy CEOs or whatever.

It’s now time to address the causes, not just complain about symptoms.

Once we do that, we can move to the next steps, namely outlining a regimen for recovery and an eventual cure.

So let me try to lay out my diagnosis as clearly as I can. Below are the ten reasons why tech is now breaking bad.

I apologize in advance for speaking so bluntly. Many will be upset by my frankness. But the circumstances—and the risks involved—demand it.

Keep reading

OpenAI Agrees To Run GPT Models Past US Government To ‘Evaluate’ For Safety

AI companies OpenAI and Anthropic have agreed to run their new AI models past the US government’s AI Safety Institute to evaluate their ‘capabilities and risks,’ as well as ‘collaborate on methods to mitigate potential issues,’ Bloomberg reports.

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelly, director of the US AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Under agreements announced Thursday by the Commerce Department’s National Institute of Standards and Technology (NIST), in “close collaboration with the UK’s AI Safety Institute,” the government will work to provide feedback on potential safety improvements.

“We strongly support the US AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models,” said OpenAI Chief Strategy Officer Jason Kwon. “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

Anthropic also said it was important to build out the capacity to effectively test AI models. “Safe, trustworthy AI is crucial for the technology’s positive impact,” said Jack Clark, Anthropic co-founder and head of policy. “This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.” -Bloomberg

The US AI Safety Institute was launched in 2023 as part of the Biden-Harris administration’s Executive Order on AI. The group is tasked with developing the testing, evaluations and guidelines for responsible AI innovation.

Keep reading

A.I – Is This The Beginning Of The End For The Human Race?

The question of whether AI could cause human extinction is a subject of intense debate among experts, futurists, and ethicists. While the potential risks of AI are significant, the possibility of it leading to human extinction depends on several factors, including how AI is developed, governed, and integrated into society. Here’s a breakdown of the possible risks and benefits of this technology, and considerations concerning its future use in society:

Since 1950, when Alan Turing wrote his paper ‘Computing Machinery and Intelligence’, the world has been debating whether A.I. would be the saviour of human existence or its annihilator. In the hands of lunatics who have a god complex and only wish to follow their own twisted agenda, it becomes an existential threat to us all. Do we just allow this technology to seep into all aspects of our personal lives, our financial futures, and even the health systems that we are told to trust?

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” – Stephen Hawking speaking at the launch of the Leverhulme Centre for the Future of Intelligence, Oct 19, 2016.

Keep reading

HHS Funds AI Tool to ‘Inoculate’ Social Media Users Against HPV Vax ‘Misinformation’

University of Pennsylvania researchers — using U.S. taxpayer dollars — are developing an artificial intelligence (AI) tool designed to “inoculate” social media users against “misinformation” about the HPV vaccine posted on social media, grant documents obtained by Children’s Health Defense (CHD) via a Freedom of Information Act (FOIA) request revealed.

The U.S. Department of Health and Human Services (HHS) is funding the $4 million “Inoculate for HPV Vaccine” randomized controlled trial running from April 2022 through March 2027. The National Cancer Institute, part of HHS, is facilitating the funding. Funding for year three was released in April.

The study is headed up by Melanie L. Kornides, associate professor of nursing at the University of Pennsylvania, whose research focuses on increasing vaccine uptake, and also on “strategies to combat misinformation.”

Kornides is joined by a team of digital health communication experts, software and program designers, social media analysts and machine learning systems experts who will help her run the “inoculation” experiment on 2,500 parents of children ages 8-12.

The team is collecting user data from YouTube, Twitter, Facebook and Instagram where people talk about HPV and using natural language processing to train an AI tool to identify “HPV misinformation,” or posts that are critical of vaccination — regardless of whether the information in the post is true or false.
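In broad strokes, that kind of natural language processing step means training a text classifier on posts that annotators have flagged. The toy sketch below, which is not the Penn team’s actual pipeline, shows the general pattern using a standard scikit-learn setup; the example posts and labels are invented placeholders.

```python
# A toy sketch of the general approach (not the Penn team's actual pipeline):
# train a text classifier on posts that annotators have flagged, then score new
# posts. The example posts and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "The HPV vaccine causes infertility",                     # flagged by annotators
    "My kid got the HPV shot at their annual checkup",        # not flagged
    "Doctors are hiding HPV vaccine injuries",                # flagged by annotators
    "The CDC schedule recommends HPV vaccination at 11-12",   # not flagged
]
labels = [1, 0, 1, 0]  # 1 = flagged as "misinformation", whether true or not

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(posts, labels)

# Probability that this toy model would flag a new post
new_post = ["Is the HPV vaccine safe for my 10 year old?"]
print(classifier.predict_proba(new_post)[0][1])
```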

They will then develop and test their “inoculation tool,” exposing subjects in three study arms to different types of messaging meant to make them immune to such misinformation.

A control group will get no particular messaging, and two test groups will be exposed either to messaging designed to inoculate viewers against content critical of HPV vaccines or to content critical of anti-vaccine arguments.

The subjects will get “booster” doses of messaging at three and six months after their first inoculation.

If successful, the researchers wrote, this novel approach to combating health “misinformation” can be used in “wide-scale social media campaigns” addressing pandemics, childhood vaccination and other health issues.

Keep reading

RFID cards could turn into a global security mess after discovery of hardware backdoor

WTF?! Chinese-made chips used in popular contactless cards contain hardware backdoors that are easy to exploit. These chips are compatible with the proprietary Mifare protocol developed by Philips spin-off NXP Semiconductors and are “intrinsically broken,” regardless of the card’s brand.

Security researchers at Quarkslab have discovered a backdoor in millions of RFID cards developed by Shanghai Fudan Microelectronics (FMSH). When properly exploited, this backdoor could be used to quickly clone contactless smart cards that regulate access to office buildings and hotel rooms worldwide.

According to French researchers, “Mifare Classic” cards are widely used but have significant security vulnerabilities. These chip-based contactless cards have been targeted by various attacks over the years and remain vulnerable despite the introduction of updated versions.

In 2020, Shanghai Fudan released a new variant that provides a compatible (and likely cheaper) RFID technology through the Mifare-compatible FM11RF08S chip. It featured several countermeasures designed to thwart known card-only attacks, but introduced its own security issues.

Quarkslab analyst Philippe Teuwen discovered an attack capable of cracking FM11RF08S “sector keys” within a few minutes, but only if a specific key is reused across at least three sectors or three cards.
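The reuse condition itself is easy to picture. The sketch below, which is not Quarkslab’s attack code, merely illustrates how one might check recovered sector keys from card dumps for the precondition the researchers describe (the same key appearing in at least three sectors or on at least three cards); the key values are made-up examples.

```python
# An illustration of the precondition described above (not Quarkslab's attack
# itself): look for a sector key that appears in at least three sectors of one
# card or on at least three different cards. The key values are made up.
from collections import Counter

card_sector_keys = {
    "card_A": ["a0a1a2a3a4a5", "ffffffffffff", "a0a1a2a3a4a5", "a0a1a2a3a4a5"],
    "card_B": ["ffffffffffff", "d3f7d3f7d3f7", "ffffffffffff", "112233445566"],
}

def reused_keys(dumps: dict, threshold: int = 3) -> set:
    reused = set()
    # Same key used in `threshold` or more sectors of a single card
    for keys in dumps.values():
        for key, count in Counter(keys).items():
            if count >= threshold:
                reused.add(key)
    # Same key appearing on `threshold` or more different cards
    cards_per_key = Counter()
    for keys in dumps.values():
        for key in set(keys):
            cards_per_key[key] += 1
    reused.update(k for k, c in cards_per_key.items() if c >= threshold)
    return reused

print(reused_keys(card_sector_keys))  # {'a0a1a2a3a4a5'} for this toy data
```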

Keep reading