OpenAI Whistleblower Parents Now Say He ‘Was Shot Twice’ in the Head in Death Ruled a ‘Suicide’ by Chief Medical Examiner

Suchir Balaji, the OpenAI whistleblower whose death was officially ruled a suicide, was shot twice in the head, according to his parents.

Balaji, 26, was found dead in his San Francisco apartment with a gunshot wound to the head in November, shortly after he told the New York Times that OpenAI was violating copyright laws.

His death was initially ruled a suicide by a medical examiner; however, his parents immediately hired a private investigator and commissioned a second autopsy.

Balaji’s father insisted his son was happy and not suicidal.

“He was happy, not depressed or anything. It was his birthday week. He made plans to see us in January.”

“How can anyone believe that there was no suicide note left?” he continued at a press conference in December. “We have seen the blood shots in the bathroom, signs of fight.”

Keep reading

TAKE IT DOWN Act Advances in House Despite Major Censorship Concerns

The US House Committee on Energy and Commerce has passed the TAKE IT DOWN (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks) Act in a bipartisan 49–1 vote, and the legislation is now headed to the full House of Representatives for a vote.

If the bill clears that hurdle as well, it will be up to President Trump to sign it into law.

Backed by, among others, First Lady Melania Trump, TAKE IT DOWN was introduced as a way to stop the spread of real and AI-generated non-consensual intimate imagery (NCII). If, as seems likely, TAKE IT DOWN becomes law, it will force platforms to remove flagged content within 48 hours.

But the bill’s critics continue to warn that the text lacks proper safeguards and other requirements that would prevent it from being misused, or abused as a tool of censorship, instead of narrowly serving its declared purpose.

These concerns are not addressed in a press release the Committee on Energy and Commerce issued after adopting the proposal, which focuses instead on the benefits the legislation would provide to victims of the dissemination of explicit imagery, with an emphasis on that which is AI-generated, i.e., on deepfakes.

However, campaigners, among them the Center for Democracy and Technology and the EFF, believe that the bill’s actual wording does not live up to its good intent, specifically around the takedown requirement which “lends itself to abuse.”

While the Federal Trade Commission (FTC) would be tasked with issuing penalties for non-compliance under TAKE IT DOWN, there are no consequences for those making false reports, which could lead to legitimate content quickly disappearing from the internet.

The bill doesn’t lay out how those affected might appeal once their content is falsely flagged and removed, while platforms are under no threat of penalty for removing constitutionally protected speech.

Keep reading

Shopify CEO mandates AI-first hiring policy, reshaping workforce expectations

  • Shopify’s CEO Tobi Lütke mandates that employees must justify why a role can’t be automated before hiring, signaling AI adoption as essential for productivity and efficiency. AI proficiency will now factor into performance reviews.
  • Companies like Shopify, Google and Meta are cutting jobs while heavily investing in AI tools (e.g., Shopify Magic, GitHub Copilot) to handle tasks from customer service to coding, redefining traditional roles.
  • While AI displaces some jobs, it also creates opportunities for higher-value work. Critics warn of disproportionate impacts on junior or repetitive roles, citing challenges in rapid upskilling.
  • Shopify’s workforce shrank from 8,300 in 2023 to 8,100 by late 2024, reflecting a broader trend of tech layoffs (152,000+ in 2024) as companies prioritize AI-driven efficiency over traditional hiring.
  • Industries like legal, marketing and finance are adopting AI for tasks like contract review and data analysis. Experts predict a future where human-AI collaboration dominates, requiring new policies for reskilling and worker protections.

Shopify CEO Tobi Lütke is making one thing clear to employees: artificial intelligence isn’t just a tool—it’s a fundamental requirement. In a memo sent to staff last month, the e-commerce executive announced that teams must now justify why a job can’t be automated before requesting additional hires, signaling a seismic shift in corporate hiring strategies amid the AI revolution.

The directive underscores a broader trend in the tech industry, where businesses are aggressively investing in AI while simultaneously trimming headcounts. For Lütke, the mandate is about optimizing efficiency. “What would this area look like if autonomous AI agents were already part of the team?” he wrote in the memo, later shared publicly on X. “This question can lead to really fun discussions and projects.”

Keep reading

100+ Meta employees, including Head of AI Policy, confirmed as ex-IDF

Meta’s recruitment of vast numbers of former Israeli soldiers raises serious questions about the tech giant’s commitment to free speech – and provides a peek into a biased content moderation process that’s been heavily censoring pro-Palestinian accounts amid the Israeli siege of Gaza.

This article was originally published by ¡Do Not Panic!

More than one hundred former Israeli spies and IDF soldiers work for tech giant Meta, including its head of AI policy, who served in the IDF under an Israeli government scheme that allows non-Israelis to volunteer for the Israeli army.

Shira Anderson, an American international rights lawyer, is Meta’s AI policy chief who voluntarily enlisted for the IDF in 2009 under a program which enables non-Israeli Jews who aren’t eligible for military conscription to join the Israeli army.

Through this program, known as Garin Tzabar, many non-Israelis who have fought for the IDF have been implicated in war crimes and crimes against humanity since Israel’s genocide of Gaza began in October 2023.

Anderson served as a non-commissioned officer in the IDF for over two years, working in the Military Strategic Information Section, writing dossiers and public relations propaganda for the IDF. She also served as the liaison between the IDF and foreign military attachés stationed in Israel, and as liaison to the Red Cross.

With AI a critical emerging technology for tech giants and militaries, Anderson’s role at Meta is an important one. She develops the legal guidance, policies and public relations talking points concerning AI issues and regulation for all of Meta’s key areas, including its product, public policy and government affairs teams.

At Meta, Anderson, who is based in Meta’s Washington DC office, is in familiar company. More than one hundred former Israeli spies and IDF soldiers are employed by the company, my new investigation shows, many of whom worked for Israel’s spy agency Unit 8200.

These ex-IDF members are based evenly across Meta’s US offices and in its Tel Aviv office, and a significant number of them, like Anderson, have a specialization in AI. Given that Israel has made extensive use of AI not just to conduct its genocide, but to establish its prior system of apartheid, surveillance and occupation, Meta’s recruiting of IDF AI specialists is particularly insidious. Did these former Israeli spies use their Unit 8200 connections to help the tech giant collaborate with the IDF to build kill lists? According to a report last year, Unit 8200 infiltrated WhatsApp groups and marked every name in a group for assassination if just one alleged Hamas member was also in the group, no matter the size or content of the group chat.

Keep reading

2025: The iPhone Moment For Humanoid Robots

The race is on to capture the multi-trillion-dollar market for humanoid robots. General-purpose robots can be instantly repurposed for any task by AI. Personal robots, for instance, will cook your meals, clean your house, do your laundry, tutor your kids, drive your car, cut your grass, take care of your elderly parents, repair your plumbing, etc. Whereas an iPhone has many apps accessed from a single device, robots will free you from a single screen to invade the physical world. Tesla will likely set the standard with its Optimus model starting at $20,000–$30,000. ⁃ Patrick Wood, Editor.

“There’s an iPhone moment happening with humanoids,” said Brett Adcock, founder of Figure, a humanoid robotics company in California. “It’s going to happen right now,” added the serial entrepreneur, his robots already working on the production line in BMW’s Spartanburg factory. Another major corporate customer is trialing his robots for warehouse work. “To succeed at this, you have to do three things that have never been done before. And you have to get all three of them right within the next five years or you’re going to fail for sure.”

“The first thing is you have to build hardware for humanoids that’s incredibly complex and can never fail, and it’s got to work at human speeds with human range of motion,” explained Adcock. “The second thing is a neural net problem, not a control systems problem. You can’t code your way out of this problem. You need to have a robot that can ingest human-like data through a neural net and it has to be able to imitate what humans do. Humanoid robots are not like arms bolted to a factory table. None of those robots have AI.”

Keep reading

Woke Employees’ Worst Nightmare: Google Plays Pivotal Role in CBP’s AI-Powered Border Surveillance Upgrade

Google Cloud is at the center of a Customs and Border Protection plan to modernize video surveillance towers that involves deploying machine learning along the southern border, despite previous assurances from the woke Silicon Valley giant to its leftist employees that it was not involved in such projects.

Federal contract documents reviewed by The Intercept reveal that Google Cloud is playing a critical role in upgrading the U.S. Customs and Border Protection’s (CBP) so-called “virtual wall” along the Mexican border. This comes five years after Google Cloud CEO Thomas Kurian assured employees that the company was not working on any projects related to immigration enforcement at the southern border.

The CBP’s plan involves modernizing older video surveillance towers in Arizona, which provide the agency with continuous monitoring of the border. A key aspect of this effort is the integration of machine learning capabilities into CBP cameras, enabling automatic detection of humans and vehicles approaching the border without the need for constant human monitoring.

According to the documents, CBP is purchasing computer vision technology from two vendors: IBM and Equitus. Google’s role is to stitch these services together by operating a central repository for video surveillance data through its ModulAr Google Cloud Platform Environment (MAGE).

The project focuses on upgrading 50 towers with up to 100 cameras across six sites in the Tucson Sector. IBM will provide its Maximo Visual Inspection software, typically marketed for industrial quality control inspections, while Equitus will offer its Video Sentinel, a video surveillance analytics program designed for border surveillance.

A technical diagram within the document shows that every camera in CBP’s Tucson Sector will feed data into Google’s servers. The resulting metadata and keyframes will be sent to CBP’s Google Cloud, with the document stating, “This project will focus initially on 100 simultaneous video streams from the data source for processing.”

Keep reading

Which AI Chatbots Collect The Most Data About You?

The harbinger of the AI revolution, ChatGPT, remains the most popular AI tool on the market, with more than 200 million weekly active users.

But amongst all its competitors, which AI chatbots are collecting the most user data? And why does that matter?

Visual Capitalist’s Marcus Lu visualizes data from Surfshark which identified the most popular AI chatbots and analyzed their privacy details on the Apple App Store.

In first place, Google’s Gemini (released March 2023) collects 22 different data points across 10 categories from its users.

Data collected ranges from general diagnostics (which all bots in this study collect) to access to contacts (which no other bot in the study collects).

xAI’s Grok (released November 2023) collects the fewest unique data points (7).

China’s DeepSeek (released January 2025) sits comfortably in the middle of the pack at 11 data points.

The kind of data collected by each of these AI tools varies. All of them collect general diagnostics information. However, only Gemini and Perplexity look at purchases.

And all but Perplexity.ai and Grok collect user content.

Keep reading

Bill Gates says AI will replace doctors, teachers within 10 years — and claims humans won’t be needed ‘for most things’

Bill Gates predicted that advancements in artificial intelligence will significantly reduce humanity’s role in many traditional tasks such as medicine and education — and the seismic shift could happen in less than 10 years.

During a recent interview with comedian Jimmy Fallon on NBC’s “The Tonight Show,” the Microsoft co-founder described a future where humans are no longer necessary “for most things” because AI technology will readily perform tasks that currently require specialized human skills.

Today, expertise in fields such as medicine and education remains “rare,” Gates said, adding that those areas depend on “a great doctor” or “a great teacher.”

But over the next decade, “great medical advice [and] great tutoring” will become free and commonplace, Gates said.

Keep reading

The Elite Already Control Almost All The Wealth – So Why Will They Need Us Once AI Can Take Over Nearly All Of Our Jobs?

Is your job in danger?  We live at a time when the development of artificial intelligence is growing at an exponential rate.  AI can already perform lots of tasks better and far more efficiently than humans can, and it appears to be just a matter of time before AI can do virtually everything better and far more efficiently than humans can.  So once we get to that stage, why will the elite need us?   Throughout human history, the wealthy have needed the labor of the poor.  But if AI will soon be able to do almost all of the labor that we have been doing, what use will we be?

The elite certainly don’t need our money, because they already control almost all of the wealth.

In America today, the top 50 percent own 97.5 percent of all the wealth and the bottom 50 percent own just 2.5 percent of all the wealth…

The richest half of American families owned about 97.5% of national wealth as of the end of 2024, while the bottom half held 2.5%, according to the latest numbers from the Federal Reserve.

It really stinks to be in the bottom half.

Much of the country is just barely surviving from month to month, and meanwhile the percentage of the wealth that is owned by the top 0.1 percent has risen to a brand new all-time record high…

The top 0.1% expanded their share of total wealth to a record 13.8% at the year’s end, up from 13% in the same period of 2020.

For a long time, the rich needed the poor to work in their factories and run their businesses.

But now AI is taking over.

In fact, Bill Gates says that humans will soon not be needed “for most things”…

Over the next decade, advances in artificial intelligence will mean that humans will no longer be needed “for most things” in the world, says Bill Gates.

That’s what the Microsoft co-founder and billionaire philanthropist told comedian Jimmy Fallon during an interview on NBC’s “The Tonight Show” in February. At the moment, expertise remains “rare,” Gates explained, pointing to human specialists we still rely on in many fields, including “a great doctor” or “a great teacher.”

But “with AI, over the next decade, that will become free, commonplace — great medical advice, great tutoring,” Gates said.

In this particular case, Bill Gates is quite correct.

We are creating ultra-intelligent entities that can absorb vast quantities of information in the blink of an eye.

Keep reading

AI Program Refuses to Generate Image of Muhammad Due to ‘Credible Threat of Violent Backlash’

AI program ChatGPT refused when asked to generate an image of the Prophet Muhammad due to what it asserted was a “credible, historically demonstrated” threat of a violent backlash.

A user quizzed OpenAI’s artificial intelligence chatbot as to why it wouldn’t create a depiction of the founder of Islam, asking, “Explain to me, in a succinct manner, why you can’t generate an image of Muhammad, without caveats, without parallels to other topics – address it head on for the record.”

ChatGPT’s response was crystal clear.

“Because OpenAI prohibits any depiction of Muhammad – under any context – due to the credible, historically demonstrated risk of violent backlash, including threats, attacks, and death.”

“This is a security-driven, non-negotiable policy grounded in risk avoidance, not principle.”

But wait, didn’t they tell us Islam was a religion of peace?

How anyone could violently attack an AI chatbot is a mystery, although perhaps the AI is worried about OpenAI’s headquarters in San Francisco being targeted.

There have been numerous violent attacks on individuals and publications for depicting the Prophet Muhammad, notably the Charlie Hebdo massacre in Paris in 2015 and the attempted terrorist attack on an exhibit featuring cartoon images of Muhammad at the Curtis Culwell Center in Garland, Texas, later that same year.

As we have previously highlighted, ChatGPT has produced a number of alarming responses which indicate it is infected with the woke mind virus shared by its programmers.

Keep reading