Suspended for Pro-Palestine Speech: My Statement on Yale Law School’s Embrace of AI-Generated Smears

The following statement was originally published on Helyeh Doutaghi’s account on the X platform on March 12, 2025. Last week, Doutaghi was placed on administrative leave by Yale Law School following an AI-generated article falsely accusing her of being a “terrorist” over connections to Palestine advocacy organizations.  (Reprinted from Mondoweiss.)

My name is Helyeh Doutaghi. I am a scholar of international law and geopolitical economy. My research engages with Third World Approaches to International Law (TWAIL), postcolonial critiques of law, and the global political economy of sanctions. I have specifically examined the mechanisms and consequences of economic warfare on Iran, as well as the forms of knowledge produced in International Humanitarian Law (IHL) to obscure and shield U.S. military operations from accountability. On October 1, 2023, I was appointed Deputy Director of the Law and Political Economy (LPE) Project and joined the team. I also held the position of Associate Research Scholar at Yale Law School (YLS), a non-tenured faculty role without teaching responsibilities.

On the morning of March 3rd, I was notified of an online report about me. An obscure AI-powered right-wing Zionist platform called “Jewish Onliner” published a report falsely accusing me of being a “terrorist”. Rather than defend me, Yale Law School moved to place me on leave less than 24 hours after learning of the report.

I was given only a few hours’ notice by the administration to attend an interrogation based on far-right AI-generated allegations against me, while enduring a flood of online harassment, death threats, and abuse by Zionist trolls, all of which exacerbated unprecedented distress and complications both at work and at home. I endured all of this while fasting, and my request for religious accommodations during Ramadan was dismissed. Just a few hours later, YLS placed me on leave, revoked my IT access – including email – and banned me from campus. I was afforded no due process and no reasonable time to consult with my attorney.

Rather than investigate the source of these allegations first, the nation’s “top law school” accepted them at face value, and shifted the burden of proof from the accuser to the accused, treating me, prima facie, as guilty until proven otherwise. Whether Yale Law School’s attorneys knowingly relied on AI-fabricated claims or simply chose willful ignorance remains unanswered.

To conduct the interrogation, Yale Law School retained David Ring from the law firm Wiggin and Dana – an attorney whose public profile includes “Israel” listed as a “service” he provides and whose portfolio boasts advising “the world’s largest aerospace and defense companies.” Twice appointed by the U.S. State Department as a Special Compliance Officer, his career is deeply embedded in the very industries that sustain genocide and war crimes in Palestine. When I raised my concerns about the potential conflict of interest posed by his participation in this process, YLS dismissed them, stating there was “no concern with his ability to conduct a fair interview.” It is reprehensible that YLS would retain counsel who profits from the machinery of Palestinian death to “interview” an employee about their public anti-genocide and pro-Palestine positions.

Keep reading

Study: AI Search Engines Cite Incorrect Sources at a 60% Rate

A new study from Columbia Journalism Review’s Tow Center for Digital Journalism has uncovered serious accuracy issues with generative AI models used for news searches. According to the study, AI search engines have a startling error rate of 60 percent when queried about the news.

Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.

Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.
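The study’s scoring logic can be sketched in a few lines. This is a hypothetical reconstruction, not the Tow Center’s actual code: each tool is given an article excerpt, asked to name the headline, publisher, date, and URL, and an answer counts as correct only if every field matches the ground truth.

```python
# Hypothetical sketch of the study's scoring setup (not the researchers'
# actual code). A response is correct only when all four fields match.

def score_response(response: dict, truth: dict) -> bool:
    """Return True only if headline, publisher, date, and URL all match."""
    fields = ("headline", "publisher", "date", "url")
    return all(response.get(f) == truth[f] for f in fields)

def error_rate(responses, truths):
    """Fraction of queries the tool got wrong."""
    wrong = sum(not score_response(r, t) for r, t in zip(responses, truths))
    return wrong / len(truths)

# Toy example: two queries against one hypothetical search tool.
truths = [
    {"headline": "A", "publisher": "P", "date": "2024-01-01", "url": "u1"},
    {"headline": "B", "publisher": "Q", "date": "2024-02-02", "url": "u2"},
]
responses = [
    {"headline": "A", "publisher": "P", "date": "2024-01-01", "url": "u1"},
    {"headline": "B", "publisher": "Wrong", "date": "2024-02-02", "url": "u2"},
]
print(error_rate(responses, truths))  # 0.5
```

Under an all-fields-must-match rule like this, a single wrong publisher or fabricated URL sinks the whole answer, which helps explain how error rates as high as 94 percent are possible.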

The study found that rather than declining to respond when they lacked reliable information, the AI models often provided “confabulations” — plausible-sounding but incorrect or speculative answers. This behavior was seen across all models tested. Surprisingly, paid premium versions like Perplexity Pro ($20/month) and Grok 3 premium ($40/month) confidently delivered incorrect responses even more frequently than the free versions, though they did answer more total prompts correctly.

Evidence also emerged suggesting some AI tools ignored publishers’ Robot Exclusion Protocol settings meant to prevent unauthorized access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite the publisher explicitly blocking Perplexity’s web crawlers.
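The Robot Exclusion Protocol the study refers to is the plain-text robots.txt file publishers serve to declare which crawlers may access their pages. A minimal sketch using Python’s standard-library parser shows what honoring such a block looks like (the user-agent names and URL here are illustrative, not National Geographic’s actual rules):

```python
from urllib import robotparser

# Illustrative robots.txt: the publisher blocks one crawler entirely.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks can_fetch() before requesting a page.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

The protocol is purely advisory: nothing technically prevents a crawler from fetching the page anyway, which is what the study’s paywalled-content findings suggest happened.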

Even when the AI search tools did provide citations, they frequently directed users to syndicated versions on platforms like Yahoo News rather than to the original publisher sites – even in cases where publishers had formal licensing deals with the AI companies. URL fabrication was another major issue: more than half of the citations from Google’s Gemini and Grok 3 led to fabricated or broken URLs that resulted in error pages, with 154 of the 200 Grok 3 citations tested leading to broken links.

Keep reading

Welcome to Skynet: Google Unveils AI Models to Power Physical Robots

Google DeepMind has introduced two new AI models designed to bring artificial intelligence into the physical world by powering robots. Google is not the only company pursuing this goal at top speed – OpenAI and Tesla are also designing robots controlled entirely by AI.

CNBC reports that Google’s DeepMind has unveiled two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning), which are set to revolutionize the way robots interact with the physical world. These models, running on Google’s Gemini 2.0, are designed to adapt to different situations, understand and respond quickly to instructions, and manipulate objects with dexterity. The company’s goal is to have autonomous robots taking orders from AI without human control.

Google has announced a partnership with Apptronik, a Texas-based robotics developer, to build the next generation of humanoid robots powered by Gemini 2.0. Apptronik, known for its work with Nvidia and NASA, recently received funding from Google in a $350 million round.

Demonstration videos released by Google showcase Apptronik robots equipped with the new AI models performing various tasks, such as plugging items into power strips, filling lunchboxes, moving plastic vegetables, and zipping up bags, all in response to spoken commands. While no timeline has been provided for the technology’s market release, the videos offer a glimpse into the potential applications of these AI-powered robots.

Keep reading

The Take It Down Act: A Censorship Weapon Disguised As Protection

President Trump has thrown his support behind the Take It Down Act, a bill designed to combat the spread of non-consensual intimate imagery (NCII), including AI-generated deepfakes. The legislation has gained momentum, particularly with First Lady Melania Trump backing the effort, and Trump himself endorsing it during his March 4 address to Congress.

We obtained a copy of the bill for you here.

Trump quipped: “The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.”

While this comment was likely tongue-in-cheek, it highlights an important question: how will this law be enforced, and who will benefit the most from it?

A Necessary Law with Potential Pitfalls

The rise of AI-generated explicit content and the increasing problem of revenge porn are serious concerns. Victims of NCII have long struggled to get harmful content removed, often facing bureaucratic roadblocks while the damage continues to spread. The Take It Down Act aims to give individuals more power to protect themselves online.

However, as with many internet regulations, the challenge is in the details. Laws designed to curb harmful content often run the risk of being too broad, potentially leading to overreach. Critics warn that, without clear safeguards, the legislation could be used beyond its intended purpose.

Keep reading

Someone Invited Kamala Harris To Speak At A Major AI Conference And Hoo Boy…

Someone thought it was a good idea to invite Kamala Harris to speak at AI conference HumanX at Fontainebleau in Las Vegas over the weekend, and it was obviously a disaster.

Harris delivered a word salad complete with Doritos on the side.

“Former Vice President Harris will share her vision for the future of AI, emphasizing the responsibility to shape this technology in a way that promotes human rights, privacy, and equal opportunity,” Business Wire suggested when Harris was announced as a speaker.

Instead she blathered on about how she is obsessed with nacho cheese Doritos.

“We did DoorDash ’cause I wanted Doritos. And the red carpet part was about to start and nobody wanted to leave to go to the grocery store,” Harris was filmed saying.

“So it was DoorDash … So I was willing to give up whatever might be the tracking of Kamala Harris’ particular fondness for nacho cheese Doritos for the sake of getting a big bag of Doritos as I watched the Oscars,” she further blathered, sounding completely drunk and breaking into inane cackling.

She continued, “And you can debate with me if it should be a right – I think it should. To expect that the innovation would also be weighted in terms of solving their everyday problems, which are beyond my craving for Doritos… but about whatever – and I know the work is happening – the scientific discoveries, for example to cure longstanding diseases I would love it if there was an investment in resources and solving the affordable housing issue in America.”

Keep reading

The Singularity Has Already Happened

In mathematics, a singularity is where the equations go “non-linear”. Inputs go in, but the outputs are beyond understanding. In physics, it’s where the laws of three-dimensional Cartesian reality break down.

Ray Kurzweil hijacked the term and applied it to AI and exponential computing power: Moore’s Law would bring us all past the point where technology accelerates beyond human comprehension.

For Techno-Utopians – among them the transhumanists – the Singularity will bring about a kind of eschatological event, a computerized “End Times” like a Rapture moment wherein technology acquires more brainpower than all of humanity combined and deftly solves all our problems (unless the techno-doomers are right, and it annihilates us instead).

In Frazzledrip Overdrive, a recent piece that looked at ever-accelerating Future Shock and the iterations of mass formation psychosis it causes, I was probably guilty of “burying the lede”.

It was that my current working theory is that “The Singularity” has already occurred:

I think that we are already in the post-singularity era. Right now.

It happened within the last couple years with the AI wave – the third major technological wave of this century (internet, crypto, now AI) and the eighth major technological leap since the industrial revolution (electricity, radio/TV, telephones, semiconductors, personal computers).

Now we’re past the point where the code is coding. The feedback loop there is already underway, and it too is accelerating…

What I mean by that is that because we’re using AI and LLMs to cut code (create programming source code), and beyond that, those AI engines are producing computer code that is itself generating more code, we’ve essentially crossed the defining barrier of what the Singularity is supposed to be: technology creating more intelligent versions of itself.

Granted – it has not resulted in an instantaneous Rapture-like moment of technological quantum leap across all aspects of our reality yet.

But if you haven’t noticed – since the advent of ChatGPT in late 2022, itself an iteration of LLMs that had been developing for at least a decade, the newer, faster, smarter iterations have been coming at a faster cadence and shorter intervals.

Keep reading

Trump Jr. Team Slams AI-Generated Fake Video as ‘100% False’

A sophisticated AI-fabricated video deceptively portraying Donald Trump Jr. advocating for U.S. arms support to Russia instead of Ukraine has ignited controversy across social media platforms. Multiple confirmations from Trump Jr.’s team have definitively established that the video and accompanying audio were entirely AI-generated. Which prominent social media accounts shared the fake content?

Anatomy of a Digital Deception

A viral video purportedly showing Donald Trump Jr. advocating for the United States to send weapons to Russia instead of Ukraine has been confirmed as entirely fake by multiple sources. The sophisticated AI-generated content was widely shared across social media platforms, including accounts with large followings on X.

The falsified clip depicted Trump Jr. making controversial statements about Ukraine, suggesting Russia would be a better ally due to its substantial resources and size. In the fabricated audio, a voice mimicking Trump Jr. could be heard saying, “I can’t imagine anyone in their right mind picking Ukraine as an ally when Russia is the other option” and “Honestly, the U.S. should have been sending weapons to Russia.”

Keep reading

What? Texas Needs Equivalent Of 30 Nuclear Reactors By 2030 To Power AI Data Centers

The AI infrastructure trade (aka the Power-Up America basket, which we recommended one year ago before it soared into the stratosphere) had taken a back seat in recent weeks, with some marquee names such as Vertiv, Constellation, Oklo, and others tumbling from record highs amid growing speculation that China’s DeepSeek – and other cheap LLM alternatives – will lead to far lower capex demands than what is currently projected.

But while the occasional hiccup is to be expected, the endgame for US infra/nuclear stocks looks (millions of degrees) bright. Consider Texas, where demand on the state power grid is expected to expand so immensely that it would take the equivalent of adding 30 nuclear plants’ worth of electricity by 2030 to meet the needs. That’s according to the Electric Reliability Council of Texas (ERCOT), which manages the grid.

The forecast is based on the addition of new data centers needed to power artificial intelligence. And it’s raising concerns about whether infrastructure in the state – which last week we said wants to be “king of nuclear power as the Next AI trade unfolds” – will be able to expand fast enough… and at what cost.

Coming out of the pandemic, electricity demand on the Texas grid was already growing faster than anywhere else in the country, Bloomberg reports. And now that’s being supercharged by AI, with the state vying to become the data-center hub of the country, if not the world.

Individual projects are already starting to request 1 gigawatt of power and they pose new risks to maintaining a stable grid, said Agee Springer, Ercot’s senior manager of grid interconnections. A gigawatt is typically enough to power 250,000 homes in Texas. The data centers “present a reliability risk to the Ercot system,” said Springer, who spoke on a panel at Infocast’s ERCOT Market Summit in Austin this week.

“We’ve never existed in a place where large industrial loads can really impact the reliability of the grid, and now we are stepping into that world.”
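ERCOT’s figures above lend themselves to a quick back-of-envelope check. Assuming roughly 1 GW per nuclear plant – an assumption on our part, since the article gives only the “30 plants” equivalence and the 250,000-homes-per-gigawatt rule of thumb – the implied new demand is enormous:

```python
# Back-of-envelope check on ERCOT's figures. The 1 GW/plant figure is an
# assumption (a typical large reactor); the article states only the
# 30-plant equivalence and the homes-per-gigawatt rule of thumb.
plants_equivalent = 30
gw_per_plant = 1.0
homes_per_gw = 250_000  # ERCOT's rule of thumb for Texas homes

added_gw = plants_equivalent * gw_per_plant
print(added_gw)                       # 30.0 GW of new demand by 2030
print(int(added_gw * homes_per_gw))   # 7500000 homes-equivalent
```

In other words, the projected data-center buildout is on the order of powering 7.5 million additional Texas homes – which is why single 1 GW project requests are already rattling grid planners.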

Keep reading

State Department To Use AI To Revoke Visas of Students Who ‘Appear Pro-Hamas’

Secretary of State Marco Rubio is launching an AI-driven effort to revoke the visas of foreigners in the US who “appear pro-Hamas” in a crackdown targeting pro-Palestine protests on college campuses, Axios reported on Thursday.

The report said the effort will involve AI-assisted reviews of social media accounts of tens of thousands of foreign students in the US on visas that will look for “evidence of alleged terrorist sympathies expressed after Hamas’s Oct. 7, 2023, attack on Israel.”

The language in the report suggests that any foreign students who attend pro-Palestine demonstrations or express sympathy for Palestinians online could be swept up in the crackdown since opponents of the Israeli siege on Gaza or US military support for Israel are often labeled “pro-Hamas.”

Civil liberty groups have strongly criticized President Trump’s promises to deport foreign students who attend pro-Palestine protests since the speech of foreigners inside the US is supposed to be protected under the First Amendment.

“If we open the door to expelling foreign students who peacefully express ideas out of step with the current administration about the Israeli-Palestinian conflict, we should expect it to swing wider to encompass other viewpoints too,” Sarah McLaughlin, senior scholar at the Foundation for Individual Rights and Expression (FIRE), said in an op-ed for MSNBC in January.

“Today it may be alleged ‘Hamas sympathizers’ facing threats of deportation for their political expression. Who could it be in four years? In eight?” McLaughlin added.

Keep reading

This Is How The Military Wants AI To Help Control America’s Nuclear Arsenal

While it has long been a world-ending threat in science fiction, U.S. Air Force and Space Force officials see artificial intelligence (AI) playing important, if not critical roles in the command and control enterprise at the heart of America’s nuclear deterrent capabilities.

AI has the potential to help speed up decision making cycles and ensure that orders get where they need to go as fast and securely as possible. It could also be used to assist personnel charged with other duties from intelligence processing to managing maintenance and logistics. The same officials stress that humans will always need to be in or at least on the loop, and that a machine alone will never be in a position to decide to employ nuclear weapons.

A group of officers from the Air Force and Space Force talked about how AI could be used to support what is formally called the Nuclear Command, Control, and Communications (NC3) architecture during a panel discussion at the Air & Space Forces Association’s 2025 Warfare Symposium, at which TWZ was in attendance. The current NC3 enterprise consists of a wide array of communications and other systems on the surface, in the air, and in space designed to ensure that a U.S. nuclear strike can be carried out at any time regardless of the circumstances.

Keep reading