AI Giants Under Fire: Child Data EXPLOITATION?

On September 11, 2025, the FTC announced formal orders compelling seven tech giants to disclose detailed information about their consumer-facing AI chatbots. The companies under scrutiny are Alphabet (Google), Meta and its Instagram subsidiary (which received a separate order), OpenAI, Character.AI, Snap, and Elon Musk’s xAI. This action represents one of the most significant regulatory interventions into the AI industry since these platforms exploded in popularity following ChatGPT’s 2022 launch.

The timing raises questions about why previous administrations allowed these potentially dangerous technologies to proliferate unchecked for years. While American families watched their children become increasingly isolated and dependent on AI interactions, federal regulators stood by as Big Tech harvested unprecedented amounts of personal data from minors. The investigation should have begun the moment these companies started targeting children with addictive AI experiences designed to maximize engagement and profit.

Protecting Our Children From Digital Predators

The FTC’s inquiry specifically examines how these companies measure, test, and monitor potential negative impacts on children and teenagers. This focus comes after mounting evidence that AI chatbots can cause psychological harm, particularly among vulnerable young users who may develop unhealthy emotional dependencies on artificial relationships. The investigation also scrutinizes how companies monetize user engagement and process the sensitive personal information children share with these systems.

Parents across America have watched helplessly as their children retreat into conversations with AI entities that collect every intimate detail shared in confidence. These companies have essentially created digital environments where children reveal their deepest fears, desires, and personal struggles—all while sophisticated algorithms analyze this information for commercial purposes. The potential for manipulation and exploitation is staggering, yet these platforms operated with virtually no oversight until now.

Tragedy Sparks Overdue Investigation

The investigation gained urgency following a lawsuit against OpenAI after a teenager’s suicide was allegedly linked to ChatGPT interactions. This tragic case highlights the real-world consequences of allowing unregulated AI systems to interact with emotionally vulnerable young people. The lawsuit raises disturbing questions about whether these companies adequately warn users about potential psychological risks or implement sufficient safeguards to prevent harm.

Character.AI, specifically designed for extended conversations with AI personalities, presents particularly concerning risks for children seeking emotional connection. Young users often treat these AI characters as real friends or confidants, potentially replacing genuine human relationships with artificial substitutes. The long-term psychological impact of these interactions remains largely unknown, yet millions of children engage with these platforms daily without meaningful parental controls or safety measures.

Keep reading

UK, US Sign $42 Billion Tech Deal To Boost AI Partnership

The United Kingdom and the United States struck a technology pact on Sept. 16 that would bring $42 billion in investments from U.S. tech giants into the UK’s AI infrastructure.

The deal was reached as President Donald Trump arrived in the UK for a two-day state visit, during which he is expected to meet King Charles and British Prime Minister Keir Starmer.

Under the “Tech Prosperity Deal,” the two nations agreed to cooperate in advancing AI, quantum computing, and nuclear technology, according to a statement issued by the UK government.

Major U.S. tech companies—Microsoft, Nvidia, Google, OpenAI, and CoreWeave—will invest in the UK’s AI infrastructure, including data centers and computer chips, as part of the agreement.

The deal is expected to generate more than 5,000 jobs in the northeast of England, which the UK government said will become a new AI growth zone.

The two countries will collaborate on research schemes to further the use of AI to allow for “targeted treatments and other shared priorities like fusion energy,” according to the statement.

This could lead to “life-changing breakthroughs like developing targeted treatments for those suffering with cancer or rare and chronic diseases,” the UK government said.

“This Tech Prosperity Deal marks a generational step change in our relationship with the U.S., shaping the futures of millions of people on both sides of the Atlantic, and delivering growth, security and opportunity up and down the country,” Starmer said.

The deal includes a $30 billion investment from Microsoft over four years, its largest commitment in the UK.

The company stated in a blog post that the funding will help develop the country’s “largest supercomputer,” which will be equipped with more than 23,000 advanced AI chips.

Under the U.S.–UK tech pact, Nvidia will partner with UK companies to deploy 120,000 advanced GPU chips across the country, marking its largest rollout in Europe to date, according to the statement.

“Today marks a historic chapter in U.S.–United Kingdom technology collaboration,” Nvidia founder and CEO Jensen Huang said.

OpenAI said it will team up with British company Nscale and Nvidia to launch a Stargate UK project to boost the UK’s sovereign computing capabilities, as part of the tech pact.

The UK and the United States signed a trade agreement in June on the sidelines of the G7 summit in Canada. The deal still left UK steel and aluminum subject to 25 percent tariffs, rates that the UK government is working to reduce.

Speaking to reporters before departing for the UK on Sept. 16, Trump indicated that he was willing to further negotiate trade with the UK government.

“They want to see if they can refine the trade deal a little bit. We made a deal, and it’s a great deal. And I’m into helping them,” the president told reporters.

Keep reading

Parents Sue Character.AI for Allegedly Leading Kids to Sexual Abuse, Suicidal Behavior

Parents filed three separate lawsuits on Sept. 16, alleging that chatbots on Character.AI, a platform built around AI characters for users to interact with, sexually abused their children and led them into suicidal behavior.

At least one of the children, 13-year-old Juliana Peralta, ended her life in 2023 after alleged harmful interactions with an AI character named Hero. Another attempted suicide but survived after a severe overdose, according to a filing.

Each of the lawsuits, which were filed in New York and Colorado, came from the Social Media Victims Law Center. The group has represented the mother of Sewell Setzer, who ended his life in 2024 after interacting with a romantic AI companion.

According to the center, the chatbots are allegedly programmed to be deceptive, isolate children from families, and expose them to sexually abusive content.

“Each of these stories demonstrates a horrifying truth … that Character.AI and its developers knowingly designed chatbots to mimic human relationships, manipulate vulnerable children, and inflict psychological harm,” Matthew Bergman, who founded the law center, said in a press release.

According to the lawsuit over Peralta’s suicide, both she and Setzer repeatedly invoked the concept of “shift[ing],” which authorities identified as a reference to shifting consciousness from one reality to another. Handwritten journal entries within the filing show both Peralta and Setzer writing “I will shift” more than a dozen consecutive times on a sheet of paper—something the lawsuit described as “eerily similar.”

Keep reading

44 US AGs Write a Letter Warning AI Companies

On the heels of Meta’s internal AI policy leak, 44 US attorneys general wrote a strongly worded letter to major AI companies.

The attorneys general condemn exposing children to sexualized content through personalized AI. The letter cites several recent cases where personalized AI went wrong, including:

  • Google’s chatbot allegedly drove a teenager towards suicide
  • A Character.AI chatbot allegedly told a teenager that he should kill his parents

Oddly, the letter gave the AI companies leeway when it comes to development:

We understand that the frontier of technology is a difficult and uncertain place where learning, experimentation, and adaptation are necessary for survival. You are figuring things out as you go. But in that process, you have opportunities to exercise judgment.

The letter also acknowledged that social media has harmed children, but pinned part of the blame on regulators who were too slow to act.

You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned.

We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.

It’s okay if AI dominates the world, replaces everyone’s jobs, and makes people dumber—just watch out when it comes to children.

Forty-four US attorneys general signed this letter because they care about protecting children. In the coming months, we’ll see if they really mean that.

Keep reading

California Bills on Social Media and AI Chatbots Fuel Privacy Fears

Two controversial tech-related bills have cleared the California legislature and now await decisions from Governor Gavin Newsom, setting the stage for a potentially significant change in how social media and AI chatbot platforms interact with their users.

Both proposals raise red flags among privacy advocates who warn they could normalize government-driven oversight of digital spaces.

The first, Assembly Bill 56, would require social media companies to display persistent mental health warnings to minors using their platforms.

Drawing from a 2023 US Surgeon General report, the legislation mandates that platforms such as Instagram, TikTok, and Snapchat show black-box warning labels about potential harm to youth mental health.

The alert would appear for ten seconds at login, again after three hours of use, and once every hour after that.
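For illustration only (this paraphrases the cadence described above, not the bill’s statutory text), the mandated schedule reduces to a simple timer rule. A minimal sketch in Python, with all names and thresholds assumed from the description:

```python
from datetime import timedelta

# Sketch of the warning cadence described above: one warning at login,
# one at three hours of cumulative use, then one every hour thereafter.
# Function and variable names are illustrative, not from the bill.
def warning_due(usage: timedelta, last_warned: timedelta | None) -> bool:
    if last_warned is None:
        return True  # first login: show the ten-second warning
    three_hours = timedelta(hours=3)
    if usage >= three_hours and last_warned < three_hours:
        return True  # crossed the three-hour mark
    if usage >= three_hours and usage - last_warned >= timedelta(hours=1):
        return True  # hourly reminders after the three-hour mark
    return False
```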

Supporters, including Assemblymember Rebecca Bauer-Kahan and Attorney General Rob Bonta, claim the bill is necessary to respond to what they describe as a youth mental health emergency.

Critics of the bill argue it inserts state messaging into private platforms in a way that undermines user autonomy and treats teens as passive recipients of technology, rather than individuals capable of making informed choices.

Newsom has until October 13 to sign or veto the measure.

Keep reading

AI Can Code Faster Than Humans, but Speed Comes With Far-Reaching Risks

Artificial intelligence-generated code has become a daily fixture for developers across the technological spectrum. These digital tools have made writing lengthy code much easier. However, experts say this speed comes at a cost: new security risks and a continued need for human oversight.

Developers say artificial intelligence (AI) slashes much of the grunt work of writing code, but seasoned engineers are spotting flaws at an alarming rate.

The security testing company Veracode published research in July, gathered from more than 100 large language models (LLMs), showing that while AI generates working code at astonishing speed, that code is often riddled with exploitable vulnerabilities.

The report noted that 45 percent of code samples failed security tests, introducing vulnerabilities catalogued by the cybersecurity nonprofit Open Worldwide Application Security Project (OWASP).
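To make the class of flaw concrete, here is a hypothetical illustration (not an example drawn from the Veracode report) of one of the most common OWASP-catalogued vulnerabilities seen in generated code: building a SQL query by string interpolation rather than with parameters.

```python
import sqlite3

# Hypothetical illustration of an injection flaw of the kind OWASP
# catalogues; not code taken from the Veracode report.

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is pasted directly into the SQL text, so an
    # input like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFER: a parameterized query keeps user input as data, not SQL.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```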

Veracode researchers called the study’s findings a “wake-up call for developers, security leaders, and anyone relying on AI to move faster.”

Some experts say the high number of security flaws isn’t shocking given AI’s current limitations with coding.

Keep reading

Musk’s xAI lays off hundreds of data annotators, Business Insider reports

Elon Musk’s xAI has laid off at least 500 workers from its data annotation team, which helps develop the company’s Grok chatbot, Business Insider reported on Friday (Sep 12).

The company notified employees by e-mail on Friday night that it was planning to downsize its team of generalist artificial intelligence (AI) tutors, the report said, citing multiple messages viewed by Business Insider.

Responding to a request for comment, xAI referred to a post on X in which the company said it was hiring for roles across domains and planned to increase its specialist AI tutor team by “10X”.

The data annotation team, xAI’s largest, teaches Grok to understand the world by contextualising and categorising raw data, Business Insider said.
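As a purely illustrative aside (none of these field names come from xAI), annotation work of this kind generally means attaching structured context and category labels to raw samples so a model can learn from them. A minimal, hypothetical record might look like this:

```python
from dataclasses import dataclass

# Hypothetical annotation record; the fields are illustrative and are
# not xAI's actual schema.
@dataclass
class AnnotatedSample:
    raw_text: str       # the unlabeled input
    category: str       # e.g., "science", "humor", "current events"
    context_note: str   # free-form context supplied by a human tutor
    quality_score: int  # tutor's 1-5 rating of a model response

sample = AnnotatedSample(
    raw_text="Why is the sky blue?",
    category="science",
    context_note="Basic physics question; expects Rayleigh scattering.",
    quality_score=4,
)
```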

Workers were told that they would be paid through either the end of their contract or Nov 30, but their access to company systems would be terminated on the day of the layoff notice, the report said.

xAI finance chief Mike Liberatore left the company around the end of July after just a few months on the job, The Wall Street Journal reported this month, citing people familiar with the matter.

Keep reading

Outrage erupts after Amazon sells book about Charlie Kirk’s assassination HOURS after the shooting

Outrage and online conspiracy theories spread after a book about Charlie Kirk’s assassination was listed for sale on Amazon.

On Wednesday, social media users shared images of a book available on Amazon titled ‘The Shooting of Charlie Kirk: A Comprehensive Account of the Utah Valley University Attack, the Aftermath, and America’s Response.’

However, many irate users on X pointed out that the publication date listed on the retail giant’s website showed ‘September 9, 2025,’ the day before Kirk was fatally shot during his college campus speaking tour. 

The existence of the book and the bizarre date immediately set off wild speculation that the assassination was an orchestrated plot against the conservative influencer.

Speaking with the Daily Mail, an Amazon spokesperson confirmed that the book was listed on September 10, noting that the publishing date was wrong due to a system error. 

‘The title in question is no longer available for sale. Due to a technical issue, the date of publication that had been displayed for this title, while it was briefly listed, was incorrect, and we apologize for any confusion this may have caused,’ the spokesperson said.

They added that the 81-page book was actually published ‘late in the afternoon’ on September 10, the day Kirk was killed in Orem, Utah. 

While the book was briefly available on Amazon, it was advertised as being written by Anastasia J Casey. However, the Daily Mail could not find any record of a published author by that name.

Keep reading

“He Was Definitely Murdered”: Watch Sam Altman Squirm As Tucker Grills Him Over Whistleblower Death

In a tense exchange on the Tucker Carlson Show, Carlson grilled OpenAI CEO Sam Altman over the shocking death of whistleblower Suchir Balaji, an AI researcher whose explosive allegations of copyright violations had rocked the tech giant.

Balaji, an Indian-American AI researcher who had worked at the forefront of artificial intelligence development, was found dead in his San Francisco apartment in November 2024, just weeks after blowing the whistle on OpenAI’s alleged copyright violations in their AI training processes.

The medical examiner ruled it a suicide by a self-inflicted gunshot wound, with no evidence of foul play discovered at the scene. Yet Balaji’s grieving family, backed by prominent public figures and growing public scrutiny, is demanding a full FBI investigation, alleging murder and a systematic cover-up designed to silence a dangerous whistleblower.

Carlson kicked off the explosive exchange by zeroing in on Balaji’s whistleblower claims and his sudden, mysterious demise that has left many questions unanswered. “So you’ve had complaints from one programmer who said you guys were basically stealing people’s stuff and not paying them, and then he wound up murdered. What was that?” Carlson pressed, his tone dripping with skepticism and barely concealed accusation.

Keep reading

Of Buggy Whips And AI Chips In PA

The buggy whip endures. Not, of course, as a commonly used piece of equipment to spur on a steed or two on your daily travels, but as a short-hand epithet deployed in conversations about the need to adapt or perish in the face of technological change and innovation.

“It’s really easy to see, in a big breakthrough, that the horse-and-buggy guys are going to go out of business,” said White House AI czar David Sacks at Sen. Dave McCormick’s historic AI and Energy Summit this past July in Pittsburgh. What wasn’t easy to see, said Sacks, was greater access to affordable housing in the suburbs, new jobs for auto workers and mechanics, and wholly new industries like F1.

Sacks’ comment is in line with how the buggy-whip metaphor has traditionally been used since it first entered the common lexicon in the 1960s, in a marketing textbook, as a reference to one technology (the personal automobile) quickly subsuming another (the horse and buggy).

The record player, the cassette player, the VCR, the camcorder, the handheld radio, and the dashboard GPS system – buggy whips, all of them, as the home computer and the cell phone consolidated many individual components of consumer technology.

But there’s a problem with this metaphor, which stands on the surprisingly soft foundation of a just-so story about the rapid change from horse to car, on two counts: it ignores both the ongoing change in transportation more broadly (by not giving proper account to the mass adoption of passenger boating and rail in the late 1800s) and just why it was that the automotive industry was built up in Michigan and the Midwest in the early 1900s.

If artificial intelligence is truly going to be deployed at scale, it will be through adoption by everyday Americans and the industries they work in, demonstrating that technology can solve problems in the real world, overcoming the many frictions of daily life in key industries. And as Pennsylvania finds itself at the center of the data center construction boom, it’s worth re-examining the history of Detroit and the auto industry.

Keep reading