State Bar of California Makes ‘Staggering Admission’

The State Bar of California has disclosed that some multiple-choice questions in a problem-plagued bar exam were developed with the aid of artificial intelligence. The legal licensing body said in a news release that it will ask the California Supreme Court to adjust test scores for those who took its February bar exam.

  • “The debacle that was the February 2025 bar exam is worse than we imagined,” Mary Basick, assistant dean of academic skills at the University of California, Irvine, Law School, told the Los Angeles Times. “I’m almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable.”
  • In February, the new exam led to complaints after many test-takers were unable to complete their bar exams. The online testing platforms repeatedly crashed before some applicants even started. Others struggled to finish and save essays, experienced screen lags and error messages, and could not copy and paste text, the Times reported earlier.
  • According to a recent presentation by the State Bar, 100 of the 171 scored multiple-choice questions were developed by Kaplan Exam Services and 48 were drawn from a first-year law students’ exam, the AP reports. A smaller subset of 23 scored questions was created by ACS Ventures, the State Bar’s psychometrician, and developed with artificial intelligence.
  • “It’s a staggering admission,” says Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation. “The State Bar has admitted they employed a company to have a non-lawyer use AI to draft questions that were given on the actual bar exam,” she says. “They then paid that same company to assess and ultimately approve of the questions on the exam, including the questions the company authored.”

Keep reading

‘Cyborg 1.0’: World’s First Robocop Debuts With Facial Recognition And 360° Camera Vision

Thailand has debuted the world’s first ‘Robocop’ designed to detect and prevent crime with advanced AI.

Equipped with 360-degree cameras for eyes, the cutting-edge cyborg maintains constant surveillance with real-time monitoring. The robocop, named Police Colonel Nakhonpathom Plod Phai, meaning “Nakhonpathom is safe,” was unveiled during the Songkran festival in Nakhon Pathom province on Wednesday. The debut was announced via a Facebook post by the Royal Thai Police, according to a report by The Sun.

The robocop is also able to detect weapons, such as knives and wooden batons. In neighboring China, humanoid robots have started supporting police patrols.

Interesting Engineering reports:

In Shenzhen, PM01 model robots developed by EngineAI have been deployed alongside officers, wearing high-visibility police vests. These robots have been seen engaging with pedestrians—waving, shaking hands, and responding to voice commands—according to local media reports. A recent video shows a PM01 robot waving to a crowd, sparking curiosity about its purpose in law enforcement.

First launched in December 2024, the PM01 features agile mobility, an interactive touchscreen, and an open-source platform. This design allows developers worldwide to contribute to its evolution by adding new features and capabilities through secondary development.

Last year, Logon Technology, a Chinese robotics company, unveiled the RT-G autonomous spherical robot, described as a “technological breakthrough,” with an army of these spherical robocops spotted rolling through cities across China, The Sun said. The robocop’s debut underscores the growing importance of robot technology. During Tesla’s Q1 2025 All-Hands meeting, CEO Elon Musk revealed that the company is aiming to begin production of its own humanoid, Optimus, this year.

Keep reading

Report: AI Company DeepSeek ‘Funnels’ American User Data To Red China

The Chinese artificial intelligence (AI) company DeepSeek is allegedly siphoning American user data to China’s communist government, according to a new congressional report.

Released on Wednesday by the House Select Committee on the Chinese Communist Party (CCP), the 16-page analysis contends that the China-based AI firm “collects detailed user data, which it transmits via backend infrastructure that is connected to China Mobile.”

A state-owned telecommunications giant, China Mobile was flagged by the Pentagon earlier this year for having ties to Beijing’s military. In 2019, the Federal Communications Commission (FCC) prohibited the firm from operating within the U.S. over concerns that “unauthorized access to customer … data could create irreparable damage to U.S. national security.”

The FCC subsequently added China Mobile to its list of “national security threats” in 2022, according to ABC News.

“While the extent of data transmission remains unconfirmed, DeepSeek’s integration with China Mobile infrastructure raises serious concerns about potential foreign access to Americans’ private information,” the House analysis reads. “By relying on China Mobile’s infrastructure, DeepSeek ensures that Americans’ data is stored and transmitted through networks controlled by the Chinese government.”

Among the data DeepSeek reportedly collects from users who utilize its chatbot function are their “chat history, device details, and even the way a person types,” according to the report. The House committee cited DeepSeek’s privacy policy, which discloses that the company stores the information it gathers from users “in secure servers located in the People’s Republic of China.”
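
As a purely hypothetical illustration of the data categories the report lists, here is a short Python sketch of what such a telemetry record might look like. The class, field names, and sample values are invented for this example; they are not DeepSeek’s code or schema.

```python
# Hypothetical telemetry record illustrating the categories named in the report:
# chat history, device details, and keystroke timing ("the way a person types").
from dataclasses import dataclass, field, asdict
import json
import platform

@dataclass
class ChatTelemetry:
    chat_history: list = field(default_factory=list)            # prompts and responses
    device_details: dict = field(default_factory=dict)          # OS, model, locale, etc.
    keystroke_intervals_ms: list = field(default_factory=list)  # gaps between keypresses

record = ChatTelemetry(
    chat_history=[{"role": "user", "content": "hello"}],
    device_details={"os": platform.system(), "version": platform.release()},
    keystroke_intervals_ms=[112.0, 98.5, 143.2],
)

# Per the privacy policy cited in the report, data in these categories is stored
# on servers located in the People's Republic of China.
print(json.dumps(asdict(record)))
```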

It’s worth noting that the CCP passed a so-called “national intelligence law” in 2017 granting the communist government access to data held by businesses operating in China. As noted by the Department of Homeland Security, this law “compels all PRC firms and entities to support, assist, and cooperate with the PRC intelligence services, creating a legal obligation for those entities to turn over data collected abroad and domestically to the PRC.”

Keep reading

OpenAI Whistleblower’s Parents Now Say He ‘Was Shot Twice’ in the Head in Death Ruled a ‘Suicide’ by Chief Medical Examiner

Suchir Balaji, the OpenAI whistleblower who allegedly committed suicide, was shot twice in the head, according to his parents.

Balaji, 26, was found dead in his San Francisco apartment with a gunshot wound to the head in November, shortly after he told the New York Times that OpenAI was violating copyright laws.

His death was initially ruled a suicide by the medical examiner; however, his parents immediately hired a private investigator and commissioned a second autopsy.

Balaji’s father insisted his son was happy and not suicidal.

“He was happy, not depressed or anything. It was his birthday week. He made plans to see us in January.”

“How can anyone believe that there was no suicide note left?” he continued. “We have seen the blood shots in the bathroom, signs of fight,” Balaji’s father said at a presser in December.

Keep reading

TAKE IT DOWN Act Advances in House Despite Major Censorship Concerns

The US House Committee on Energy and Commerce has passed the TAKE IT DOWN (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks) Act in a bipartisan 49-to-1 vote, and the legislation is now headed to the full House of Representatives.

If the bill clears that hurdle as well, it will be up to President Trump to sign it into law.

Backed, among others, by First Lady Melania Trump, TAKE IT DOWN was introduced as a way to stop the spread of real and AI-generated non-consensual intimate imagery (NCII). If, as seems likely, TAKE IT DOWN becomes law, it will force platforms to remove flagged content within 48 hours.

But the bill’s critics continue to warn that the text lacks proper safeguards and other requirements that would prevent it from being misused, or abused as a tool of censorship, instead of narrowly serving its declared purpose.

These concerns are not addressed in the press release the Committee on Energy and Commerce issued after adopting the proposal, which focused instead on the benefits the legislation would provide to victims of the dissemination of explicit imagery, with an emphasis on AI-generated imagery, i.e., deepfakes.

However, campaigners, among them the Center for Democracy and Technology and the EFF, believe that the bill’s actual wording does not live up to its good intent, specifically around the takedown requirement which “lends itself to abuse.”

While the Federal Trade Commission (FTC) would be tasked with issuing penalties for non-compliance under TAKE IT DOWN, there are no consequences for those making false reports, which could lead to legitimate content quickly disappearing from the internet.

The bill doesn’t lay out how those affected might appeal once their content is falsely flagged and removed, while platforms are under no threat of penalty for removing constitutionally protected speech.

Keep reading

Shopify CEO mandates AI-first hiring policy, reshaping workforce expectations

  • Shopify’s CEO Tobi Lütke mandates that employees must justify why a role can’t be automated before hiring, signaling AI adoption as essential for productivity and efficiency. AI proficiency will now factor into performance reviews.
  • Companies like Shopify, Google and Meta are cutting jobs while heavily investing in AI tools (e.g., Shopify Magic, GitHub Copilot) to handle tasks from customer service to coding, redefining traditional roles.
  • While AI displaces some jobs, it also creates opportunities for higher-value work. Critics warn of disproportionate impacts on junior or repetitive roles, citing challenges in rapid upskilling.
  • Shopify’s workforce shrank from 8,300 in 2023 to 8,100 by late 2024, reflecting a broader trend of tech layoffs (152,000+ in 2024) as companies prioritize AI-driven efficiency over traditional hiring.
  • Industries like legal, marketing and finance are adopting AI for tasks like contract review and data analysis. Experts predict a future where human-AI collaboration dominates, requiring new policies for reskilling and worker protections.

Shopify CEO Tobi Lütke is making one thing clear to employees: artificial intelligence isn’t just a tool—it’s a fundamental requirement. In a memo sent to staff last month, the e-commerce executive announced that teams must now justify why a job can’t be automated before requesting additional hires, signaling a seismic shift in corporate hiring strategies amid the AI revolution.

The directive underscores a broader trend in the tech industry, where businesses are aggressively investing in AI while simultaneously trimming headcounts. For Lütke, the mandate is about optimizing efficiency. “What would this area look like if autonomous AI agents were already part of the team?” he wrote in the memo, later shared publicly on X. “This question can lead to really fun discussions and projects.”

Keep reading

100+ Meta employees, including Head of AI Policy, confirmed as ex-IDF

Meta’s recruitment of vast numbers of former Israeli soldiers raises serious questions about the tech giant’s commitment to free speech – and provides a peek into a biased content moderation process that’s been heavily censoring pro-Palestinian accounts amid the Israeli siege of Gaza.

This article was originally published by ¡Do Not Panic!

More than one hundred former Israeli spies and IDF soldiers work for tech giant Meta, including its head of AI policy, who served in the IDF under an Israeli government scheme that allows non-Israelis to volunteer for the Israeli army.

Shira Anderson, an American international rights lawyer, is Meta’s AI policy chief, who voluntarily enlisted in the IDF in 2009 under a program that enables non-Israeli Jews who aren’t eligible for military conscription to join the Israeli army.

Through this program, known as Garin Tzabar, many non-Israelis who have fought for the IDF have been implicated in war crimes and crimes against humanity since Israel’s genocide of Gaza began in October 2023.

Anderson served as a non-commissioned officer in the IDF for over two years, where she worked in the Military Strategic Information Section, writing dossiers and public relations propaganda for the IDF. She was also the liaison between the IDF and foreign military attachés stationed in Israel, and liaison to the Red Cross.

With AI a critical emerging technology for tech giants and militaries, Anderson’s role at Meta is an important one. She develops the legal guidance, policies and public relations talking points concerning AI issues and regulation for all of Meta’s key areas, including its product, public policy and government affairs teams.

At Meta, Anderson, who is based in Meta’s Washington DC office, is in familiar company. More than one hundred former Israeli spies and IDF soldiers are employed by the company, my new investigation shows, many of whom worked for Israel’s spy agency Unit 8200.

These ex-IDF members are based evenly across Meta’s US offices and in its Tel Aviv office, and a significant number of them, like Anderson, have a specialization in AI. Given that Israel has made extensive use of AI not just to conduct its genocide, but to establish its prior system of apartheid, surveillance and occupation, Meta’s recruiting of IDF AI specialists is particularly insidious. Did these former Israeli spies use their Unit 8200 connections to help the tech giant collaborate with the IDF to build kill lists? According to a report last year, Unit 8200 infiltrated WhatsApp groups and marked every name in a group for assassination if just one alleged Hamas member was also in the group, no matter the size or content of the group chat.

Keep reading

2025: The iPhone Moment For Humanoid Robots

The race is on to capture the multi-trillion-dollar market for humanoid robots. General-purpose robots can be instantly repurposed for any task by AI. Personal robots, for instance, will cook your meals, clean your house, do your laundry, tutor your kids, drive your car, cut your grass, take care of your elderly parents, repair your plumbing, etc. Whereas an iPhone has many apps accessed from a single device, robots will free you from a single screen to invade the physical world. Tesla will likely set the standard with its Optimus model, starting at $20,000–$30,000. ⁃ Patrick Wood, Editor.

“There’s an iPhone moment happening with humanoids,” said Brett Adcock, founder of Figure, a humanoid robotics company in California. “It’s going to happen right now,” added the serial entrepreneur, his robots already working on the production line in BMW’s Spartanburg factory. Another major corporate customer is trialing his robots for warehouse work. “To succeed at this, you have to do three things that have never been done before. And you have to get all three of them right within the next five years or you’re going to fail for sure.”

“The first thing is you have to build hardware for humanoids that’s incredibly complex and can never fail, and it’s got to work at human speeds with human range of motion,” explained Adcock. “The second thing is a neural net problem, not a control systems problem. You can’t code your way out of this problem. You need to have a robot that can ingest human-like data through a neural net and it has to be able to imitate what humans do. Humanoid robots are not like arms bolted to a factory table. None of those robots have AI.”
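
As a rough sketch of the “imitate what humans do” idea Adcock describes, the toy example below trains a small neural network to clone recorded demonstrations (behavior cloning). It is a generic PyTorch illustration with invented observation/action dimensions and random stand-in data, not Figure’s actual training stack.

```python
# Minimal behavior-cloning sketch: a neural network learns to map observations
# to the actions a human demonstrator took, by regression on recorded data.
import torch
import torch.nn as nn

class ImitationPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

# Hypothetical demonstration data: observations paired with the human's actions.
obs = torch.randn(1024, 48)      # e.g. joint angles, camera embeddings (invented)
actions = torch.randn(1024, 12)  # e.g. target joint velocities (invented)

policy = ImitationPolicy(obs_dim=48, act_dim=12)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, actions)  # penalize deviation from the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```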

Keep reading

Woke Employees’ Worst Nightmare: Google Plays Pivotal Role in CBP’s AI-Powered Border Surveillance Upgrade

Google Cloud is at the center of a Customs and Border Protection plan to modernize video surveillance towers that involves deploying machine learning along the southern border, despite previous assurances from the woke Silicon Valley giant to its leftist employees that it was not involved in such projects.

Federal contract documents reviewed by the Intercept reveal that Google Cloud is playing a critical role in upgrading the U.S. Customs and Border Protection’s (CBP) so-called “virtual wall” along the Mexican border. This comes five years after Google Cloud CEO Thomas Kurian assured employees that the company was not working on any projects related to immigration enforcement at the southern border.

The CBP’s plan involves modernizing older video surveillance towers in Arizona, which provide the agency with continuous monitoring of the border. A key aspect of this effort is the integration of machine learning capabilities into CBP cameras, enabling automatic detection of humans and vehicles approaching the border without the need for constant human monitoring.

According to the documents, CBP is purchasing computer vision technology from two vendors: IBM and Equitus. Google’s role is to stitch these services together by operating a central repository for video surveillance data through its ModulAr Google Cloud Platform Environment (MAGE).

The project focuses on upgrading 50 towers with up to 100 cameras across six sites in the Tucson Sector. IBM will provide its Maximo Visual Inspection software, typically marketed for industrial quality control inspections, while Equitus will offer its Video Sentinel, a video surveillance analytics program designed for border surveillance.

A technical diagram within the document shows that every camera in CBP’s Tucson Sector will feed data into Google’s servers. The resulting metadata and keyframes will be sent to CBP’s Google Cloud, with the document stating, “This project will focus initially on 100 simultaneous video streams from the data source for processing.”
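
For illustration, here is a minimal Python sketch of the kind of pipeline the documents describe: sample frames from a video feed, flag frames containing people, and keep those keyframes plus simple metadata for later upload. The stock OpenCV pedestrian detector and the local file output used here are stand-ins, not the IBM, Equitus, or Google components named in the contract.

```python
# Sketch of a keyframe-and-metadata pipeline (assumptions, not the CBP/Google design).
import json
import time
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # stand-in detector

def process_stream(source: str, every_n: int = 30):
    """Yield (keyframe, metadata) pairs for sampled frames where a person is detected."""
    cap = cv2.VideoCapture(source)
    index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:  # sample rather than analyze every frame
            boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
            if len(boxes) > 0:
                metadata = {
                    "source": source,
                    "frame_index": index,
                    "timestamp": time.time(),
                    "detections": [[int(v) for v in box] for box in boxes],
                }
                yield frame, metadata
        index += 1
    cap.release()

# In the reported design, keyframes and metadata would go to cloud storage;
# this sketch simply writes them locally ("camera_feed.mp4" is a placeholder).
for i, (keyframe, meta) in enumerate(process_stream("camera_feed.mp4")):
    cv2.imwrite(f"keyframe_{i}.jpg", keyframe)
    with open(f"keyframe_{i}.json", "w") as f:
        json.dump(meta, f)
```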

Keep reading

Which AI Chatbots Collect The Most Data About You?

The harbinger of the AI revolution, ChatGPT, remains the most popular AI tool on the market, with more than 200 million weekly active users.

But amongst all its competitors, which AI chatbots are collecting the most user data? And why does that matter?

Visual Capitalist’s Marcus Lu visualizes data from Surfshark, which identified the most popular AI chatbots and analyzed their privacy details on the Apple App Store.

In first place, Google’s Gemini (released March 2023) collects 22 different data points across 10 categories from its users.

Data collected ranges from general diagnostics (which all bots in this study collect) to access to contacts (which no other identified bot collects).

xAI’s Grok (released November 2023) collects the fewest unique data points (7).

China’s DeepSeek (released January 2025) sits comfortably in the middle of the pack at 11 data points.

The kind of data collected by each of these AI tools varies. All of them collect general diagnostics information. However, only Gemini and Perplexity look at purchases.

And all but Perplexity.ai and Grok collect user content.

Keep reading