House Judiciary Committee Investigates Biden-Harris AI Censorship Allegations with New Subpoenas

Political trends and circumstances change, as do US administrations – but the House Judiciary Committee, chaired by Jim Jordan, continues to “soldier on” in its multi-year, comprehensive bid to get to the bottom of the Biden-Harris White House’s efforts to pressure tech companies to its political advantage.

In the past, these investigations produced some spectacular results – such as Meta CEO Mark Zuckerberg publicly admitting that his company relented to that pressure, stating that he regretted it and that the tech giant would reverse the policies that facilitated compliance with the former government.

The latest set of the Committee’s subpoenas concerns companies developing AI tech. The subpoenas have been sent to Adobe, Alphabet, Amazon, Anthropic PBC, Apple, Cohere, International Business Machines Corp., Inflection AI, Meta, Microsoft, Nvidia, OpenAI, Palantir Technologies, Salesforce, Scale AI, and Stability AI.

We obtained a copy of one of the letters for you here.

The Committee wants all documents and communications that the previous administration had with these companies concerning content “moderation and suppression” – i.e., collusion with the aim of censoring lawful speech – to be preserved and presented. The timeframe is January 2020 to January 2025.

Keep reading

Medical Surveillance Part 2: Tracking the Unvaccinated

Part 1 of “Medical Surveillance” revealed how contact tracing evolved into databases known as real-time AI ecosystems. The data stored in these ecosystems ranges from medical records to genomic sequences, largely collected via Covid-19 PCR tests. Health privacy laws were revised to enable an alarming amount of data sharing with public and private intelligence agencies for military operations. Using the Covid-19 scamdemic as a front, the military worked with so-called health authorities to weaponize Covid-19 statistics and target non-compliant or undesirable groups with mRNA vaccines, ventilators, and Remdesivir. In other words, it was a military operation that used covertly collected private medical and genetic data to deploy bioweapons. Targets were acquired using AI-generated predictive behavior models provided by government intelligence contractors like Palantir. If that sounds disturbing to you, keep reading, because that was just a warm-up.

The Delayed Reaction That Enabled the Illusion of the Pandemic of the Unvaccinated

As contact tracing phased into the background and the genome-collection method known as PCR testing was normalized, one more important piece of data needed to be collected: vaccination status.

The mockingbird media foreshadowed that vaccination status must be made public information because during a public health emergency everyone has a right to know their risk. Soon everyone would need to have a Covid-19 shot to travel, work, go to school, and participate in society. All this would inevitably lead to a vaccine passport. Yet there was no official way to track who was vaccinated in the healthcare industry.

The CDC and Medicare (CMS) announced new codes for tracking vaccination status that would go live on April 1, 2022. The update occurred exactly two years after the Covid-19 diagnosis code went live – on April Fools’ Day. This time, the emergency update was for the purpose of tracking vaccination status. It just wasn’t treated as an emergency during the most aggressive portion of the vaccine campaign – the part where everyone had to get the shot in order for society to come out of lockdown and “go back to normal”. At any point during 2021, the CDC, CMS, or the AMA could have stopped the presses to do another emergency update introducing a new code for vaccination status (or for adverse events, for that matter). They did not.

Keep reading

Erik Prince On AI, Kamikaze Drones, Future Warfare, & DOGE

Erik Prince, the founder and former CEO of the military contractor Blackwater, recently spoke at a seminar at Hillsdale College titled “AI and the Future Battlefield.” In his speech, he discussed the evolution of warfare, the impact of drones and AI, the changing dynamics of global power, and the importance of innovation – particularly in the private sector. He also praised Elon Musk’s Department of Government Efficiency (DOGE).

“We don’t have a monopoly in innovation, but we have a critical mass of it, and a lot of that still resides in the military,” Prince, a 1992 Hillsdale College graduate and founder of Blackwater Worldwide, told students. 

Prince said, “As long as DoD, just a little bit, opens the tap of money, redirecting from the nonsense, hyper-overpriced programs that they like to spend money on, we can certainly not just catch up but surpass any capability that we have to worry about with China.”

So, less DoD funding for the military-industrial complex—such as legacy defense giants like Lockheed and Boeing, often seen as innovation killers—and more support for emerging startups like

The magic of innovation: more startups = more competition … who would’ve ever thought? 

Keep reading

Suspended for Pro-Palestine Speech: My Statement on Yale Law School’s Embrace of AI-Generated Smears

The following statement was originally published on Helyeh Doutaghi’s account on the X platform on March 12, 2025. Last week, Doutaghi was placed on administrative leave by Yale Law School following an AI-generated article falsely accusing her of being a “terrorist” over connections to Palestine advocacy organizations. (Reprinted from Mondoweiss.)

My name is Helyeh Doutaghi. I am a scholar of international law and geopolitical economy. My research engages with Third World Approaches to International Law (TWAIL), postcolonial critiques of law, and the global political economy of sanctions. I have specifically examined the mechanisms and consequences of economic warfare on Iran, as well as the forms of knowledge produced in International Humanitarian Law (IHL) to obscure and shield U.S. military operations from accountability. On October 1, 2023, I was appointed Deputy Director of the Law and Political Economy (LPE) Project and joined the team. I also held the position of Associate Research Scholar at Yale Law School (YLS), a non-tenured faculty role without teaching responsibilities.

On the morning of March 3rd, I was notified of an online report about me. An obscure AI-powered right-wing Zionist platform called “Jewish Onliner” published a report falsely accusing me of being a “terrorist”. Rather than defend me, Yale Law School moved within less than 24 hours of learning about the report to place me on leave.

I was given only a few hours’ notice by the administration to attend an interrogation based on far-right AI-generated allegations against me, while enduring a flood of online harassment, death threats, and abuse by Zionist trolls, exacerbating ongoing unprecedented distress and complications both at work and at home. I endured all of this while fasting, and my request for religious accommodations during Ramadan was dismissed. Just a few hours later, YLS placed me on leave, revoked my IT access – including email – and banned me from campus. I was afforded no due process and no reasonable time to consult with my attorney.

Rather than investigate the source of these allegations first, the nation’s “top law school” accepted them at face value, and shifted the burden of proof from the accuser to the accused, treating me, prima facie, as guilty until proven otherwise. Whether Yale Law School’s attorneys knowingly relied on AI-fabricated claims or simply chose willful ignorance remains unanswered.

To conduct the interrogation, Yale Law School retained David Ring from the law firm Wiggin and Dana – an attorney whose public profile includes “Israel” listed as a “service” he provides and whose portfolio boasts advising “the world’s largest aerospace and defense companies.” Twice appointed by the U.S. State Department as a Special Compliance Officer, his career is deeply embedded in the very industries that sustain genocide and war crimes in Palestine. When I raised my concerns about the potential conflict of interest posed by his participation in this process, YLS dismissed them, stating there was “no concern with his ability to conduct a fair interview.” It is reprehensible that YLS would appoint a counsel who profits from the machinery of Palestinian death to “interview” an employee about their public anti-genocide and pro-Palestine positions.

Keep reading

Study: AI Search Engines Cite Incorrect Sources at a 60% Rate

A new study from Columbia Journalism Review’s Tow Center for Digital Journalism has uncovered serious accuracy issues with generative AI models used for news searches. According to the study, AI search engines have a startling error rate of 60 percent when queried about the news.

Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.

Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.

The study found that rather than declining to respond when they lacked reliable information, the AI models often provided “confabulations” — plausible-sounding but incorrect or speculative answers. This behavior was seen across all models tested. Surprisingly, paid premium versions like Perplexity Pro ($20/month) and Grok 3 premium ($40/month) confidently delivered incorrect responses even more frequently than the free versions, though they did answer more total prompts correctly.

Evidence also emerged suggesting some AI tools ignored publishers’ Robots Exclusion Protocol settings meant to prevent unauthorized access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite the publisher explicitly blocking Perplexity’s web crawlers.
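For context, the Robots Exclusion Protocol works through a plain-text robots.txt file served at a site’s root, which tells crawlers what they may fetch. A publisher wanting to block Perplexity’s crawler (which Perplexity’s own documentation identifies as PerplexityBot) might serve something like this sketch – the paths and comments here are illustrative, not taken from any publisher’s actual file:

```
# Hypothetical robots.txt at https://example.com/robots.txt
# Block Perplexity's crawler from the entire site
User-agent: PerplexityBot
Disallow: /

# All other crawlers may index everything
User-agent: *
Allow: /
```

Note that robots.txt is purely advisory – nothing in the protocol technically prevents a crawler from fetching disallowed pages, which is exactly the behavior the study flagged.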

Even when the AI search tools did provide citations, they frequently directed users to syndicated versions on platforms like Yahoo News rather than to the original publisher sites – even in cases where publishers had formal licensing deals with the AI companies. URL fabrication was another major issue, with over half of citations from Google’s Gemini and Grok 3 leading to fabricated or broken URLs that resulted in error pages. Of the 200 Grok 3 citations tested, 154 led to broken links.

Keep reading

Welcome to Skynet: Google Unveils AI Models to Power Physical Robots

Google DeepMind has introduced two new AI models designed to bring artificial intelligence into the physical world by powering robots. Google is not the only company pursuing this goal at top speed – OpenAI and Tesla are also designing robots controlled entirely by AI.

CNBC reports that Google’s DeepMind has unveiled two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning), which are set to revolutionize the way robots interact with the physical world. These models, running on Google’s Gemini 2.0, are designed to adapt to different situations, understand and respond quickly to instructions, and manipulate objects with dexterity. The company’s goal is to have autonomous robots taking orders from AI without human control.

Google has announced a partnership with Apptronik, a Texas-based robotics developer, to build the next generation of humanoid robots powered by Gemini 2.0. Apptronik, known for its work with Nvidia and NASA, recently received funding from Google in a $350 million round.

Demonstration videos released by Google showcase Apptronik robots equipped with the new AI models performing various tasks, such as plugging items into power strips, filling lunchboxes, moving plastic vegetables, and zipping up bags, all in response to spoken commands. While no timeline has been provided for the technology’s market release, the videos offer a glimpse into the potential applications of these AI-powered robots.

Keep reading

The Take It Down Act: A Censorship Weapon Disguised As Protection

President Trump has thrown his support behind the Take It Down Act, a bill designed to combat the spread of non-consensual intimate imagery (NCII), including AI-generated deepfakes. The legislation has gained momentum, particularly with First Lady Melania Trump backing the effort, and Trump himself endorsing it during his March 4 address to Congress.

We obtained a copy of the bill for you here.

During that address, Trump said: “The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.”

While this comment was likely tongue-in-cheek, it highlights an important question: how will this law be enforced, and who will benefit the most from it?

A Necessary Law with Potential Pitfalls

The rise of AI-generated explicit content and the increasing problem of revenge porn are serious concerns. Victims of NCII have long struggled to get harmful content removed, often facing bureaucratic roadblocks while the damage continues to spread. The Take It Down Act aims to give individuals more power to protect themselves online.

However, as with many internet regulations, the challenge is in the details. Laws designed to curb harmful content often run the risk of being too broad, potentially leading to overreach. Critics warn that, without clear safeguards, the legislation could be used beyond its intended purpose.

Keep reading

Someone Invited Kamala Harris To Speak At A Major AI Conference And Hoo Boy…

Someone thought it was a good idea to invite Kamala Harris to speak at the HumanX AI conference at the Fontainebleau in Las Vegas over the weekend, and it was, obviously, a disaster.

Harris delivered a word salad complete with Doritos on the side.

“Former Vice President Harris will share her vision for the future of AI, emphasizing the responsibility to shape this technology in a way that promotes human rights, privacy, and equal opportunity,” Business Wire suggested when Harris was announced as a speaker.

Instead she blathered on about how she is obsessed with nacho cheese Doritos.

“We did DoorDash ’cause I wanted Doritos. And the red carpet part was about to start and nobody wanted to leave to go to the grocery store,” Harris was filmed saying.

“So it was DoorDash … So I was willing to give up whatever might be the tracking of Kamala Harris’ particular fondness for nacho cheese Doritos for the sake of getting a big bag of Doritos as I watched the Oscars,” she further blathered, sounding completely drunk and breaking into inane cackling.

She continued, “And you can debate with me if it should be a right – I think it should. To expect that the innovation would also be weighted in terms of solving their everyday problems, which are beyond my craving for Doritos… but about whatever – and I know the work is happening – the scientific discoveries, for example to cure longstanding diseases I would love it if there was an investment in resources and solving the affordable housing issue in America.”

Keep reading

The Singularity Has Already Happened

In mathematics, a singularity is a point where the equations break down: inputs go in, but the outputs are undefined or beyond understanding. In physics, it’s where the laws of three-dimensional Cartesian reality break down.

Ray Kurzweil hijacked the term and applied it to AI and exponential computing power: Moore’s Law would bring us all past the point where technology accelerates beyond human comprehension.

For Techno-Utopians – among them the transhumanists – the Singularity will bring about a kind of eschatological event, a computerized “End Times” like a Rapture moment wherein technology acquires more brainpower than all of humanity combined and deftly solves all our problems (unless the techno-doomers are right, and it annihilates us instead).

In Frazzledrip Overdrive, a recent piece that looked at ever-accelerating Future Shock and the iterations of mass formation psychosis that it causes, I was probably guilty of “burying the lede”.

The buried lede: my current working theory is that “The Singularity” has already occurred:

I think that we are already in the post-singularity era. Right now.

It happened within the last couple of years with the AI wave – the third major technological wave of this century (internet, crypto, now AI) and the eighth major technological leap since the industrial revolution (electricity, radio/TV, telephones, semiconductors, personal computers).

Now we’re past the point where the code is coding. The feedback loop there is already underway, and it too is accelerating…

What I mean by that is this: because we’re using AI and LLMs to cut code (create programming source code), and beyond that, those AI engines are producing computer code that itself generates more code, we’ve essentially crossed the defining barrier of what the Singularity is supposed to be – technology creating more intelligent versions of itself.

Granted – it has not resulted in an instantaneous Rapture-like moment of technological quantum leap across all aspects of our reality yet.

But if you haven’t noticed – since the advent of ChatGPT in late 2022, itself an iteration of LLMs that had been in development for at least a decade, the newer, faster, smarter iterations have been coming at a faster cadence and at shorter intervals.

Keep reading

Trump Jr. Team Slams AI-Generated Fake Video as ‘100% False’

A sophisticated AI-fabricated video deceptively portraying Donald Trump Jr. advocating for U.S. arms support to Russia instead of Ukraine has ignited controversy across social media platforms. Multiple confirmations from Trump Jr.’s team have definitively established that the video and accompanying audio were entirely AI-generated. Which prominent social media accounts shared the fake content?

Anatomy of a Digital Deception

A viral video purportedly showing Donald Trump Jr. advocating for the United States to send weapons to Russia instead of Ukraine has been confirmed as entirely fake by multiple sources. The sophisticated AI-generated content was widely shared across social media platforms, including accounts with large followings on X.

The falsified clip depicted Trump Jr. making controversial statements about Ukraine, suggesting Russia would be a better ally due to its substantial resources and size. In the fabricated audio, a voice mimicking Trump Jr. could be heard saying, “I can’t imagine anyone in their right mind picking Ukraine as an ally when Russia is the other option” and “Honestly, the U.S. should have been sending weapons to Russia.”

Keep reading