Google agrees to $68m settlement over claims it recorded private conversations

Google was facing a class action after users claimed the company was spying on them.

The company has agreed to pay $68 million to settle the lawsuit, in which users accused it of violating their personal privacy.

Google’s virtual assistant, AI-powered software available on Android phones and tablets, has been accused of recording private conversations.

The software activates when users say a “wake word”, a verbal cue such as “Hey Google” or “Okay Google” that prompts the device to actively listen for commands.

The assistant is designed to switch from passive monitoring to active listening only when it hears a wake word.
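That passive-to-active switch is the technical crux of the complaint. As a rough sketch only (not Google’s actual implementation, and with audio frames simplified to text strings), the gating logic described above looks something like this:

```python
# Illustrative only: a toy version of wake-word gating. The recognizer and
# "frames" are stand-ins, not any real Google Assistant API.

WAKE_WORDS = {"hey google", "okay google"}

def recognize(frame: str) -> str:
    """Stand-in for a speech recognizer; frames here are already text."""
    return frame.lower().strip()

def assistant_loop(frames: list[str]) -> list[str]:
    """Ignore audio until a wake word is heard, then capture the next
    frame as a command; return the commands that were acted on."""
    commands: list[str] = []
    active = False
    for frame in frames:
        text = recognize(frame)
        if active:
            commands.append(text)  # active listening: the command is captured
            active = False         # then drop back to passive monitoring
        elif text in WAKE_WORDS:
            active = True          # wake word heard: start actively listening
        # otherwise: passive monitoring, where audio should be discarded
    return commands

# Only the utterance after "Hey Google" is captured:
print(assistant_loop(["chat over dinner", "Hey Google", "set a timer"]))
# -> ['set a timer']
```

In these terms, the lawsuit alleges the device behaved as if `active` were true without a wake word ever being heard.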

Keep reading

EU Targets X (Again) in Grok AI Probe

European regulators have launched a new investigation into Elon Musk’s X, focusing on alleged failures to control sexually explicit imagery generated by the company’s AI chatbot, Grok.

The case is being pursued under the European Union’s Digital Services Act (DSA), a law that grants the European Commission expansive powers to police digital platforms for potential “harms.”

In a statement, the Commission said, “The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok’s functionalities into X in the EU.”

The agency added that the review includes “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.” Officials stated that these threats “seem to have materialized, exposing citizens in the EU to serious harm.”

Keep reading

Meta, TikTok, YouTube Face Trial Over Youth Addiction Claims

Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.

The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out.

She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits.

This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-infused eyewear that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. The purchase, he says, opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in dangerous journeys into the desert to await alien visitors and a belief that he was tasked with ushering forth a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading

The $134 Billion Betrayal: Inside Elon Musk’s Explosive Lawsuit With OpenAI

Elon Musk’s lawsuit against OpenAI and Microsoft has evolved into a high-stakes dispute over whether OpenAI stayed true to the mission it was founded on or quietly outgrew it while relying on that original promise.

Musk is seeking between $79 billion and $134 billion in damages, a figure derived from an expert valuation that treats his early funding and contributions as foundational to what OpenAI later became. While the number is enormous, the heart of the case is simpler: Musk argues he helped create and fund a nonprofit dedicated to AI for the public good, and that OpenAI later abandoned that commitment in a way that amounted to fraud.

According to Musk’s filings, his roughly $38 million in early funding was not just a donation but the financial backbone of OpenAI’s formative years, supplemented by recruiting help, strategic guidance, and credibility. His damages theory, prepared by financial economist C. Paul Wazzan, ties those early inputs to OpenAI’s current valuation of around $500 billion.

The claim is framed as disgorgement rather than repayment, with Musk arguing that the vast gains realized by OpenAI and Microsoft flowed from a nonprofit story that attracted support and trust, only to be discarded once the company reached scale, according to TechCrunch.

Much of the public attention has centered on internal documents uncovered during discovery, particularly private notes from OpenAI co-founder Greg Brockman in 2017.

One line has become central to Musk’s argument: “I cannot believe that we committed to non-profit if three months later we’re doing b-corp then it was a lie.”

Keep reading

“Emergency Intervention”: Trump To Cap Residential Electric Bills By Forcing Tech Giants To Pay For Soaring Power Costs

Back in August, when the American population was just waking up to the dire consequences for residential electricity bills of the exponentially growing army of data centers spawned across the country, we said that the chart of US CPI would soon become the most popular (not in a good way) chart in the financial realm.

One month later we added that it was only a matter of time before Trump, realizing that soaring electricity costs would almost certainly cost Republicans the midterms, would enforce price caps.

Turns out we were right.

And while Trump obviously cannot pull a communist rabbit out of his hat and centrally plan the entire US power grid, what he can do is precisely what he is about to announce.

According to Bloomberg, Trump and the governors of several US Northeastern states agreed to push for an emergency wholesale electricity auction that would compel technology companies to fund new power plants, effectively putting a cap on residential power prices at the expense of hyperscalers and data centers. Which, come to think of it, we also proposed back in October.

Keep reading

Florida’s “App Store Accountability Act” Would Deputize Big Tech to Verify User IDs for App Access

In Florida, Senator Alexis Calatayud has introduced a proposal that could quietly reshape how millions of Americans experience the digital world.

The App Store Accountability Act (SB 1722), presented as a safeguard for children, would require every app marketplace to identify users by age category, verify that data through “commercially available methods,” and secure recurring parental consent whenever an app’s policies change.

The legislation is ambitious. If enacted, it would take effect in July 2027, with enforcement beginning the following year.

Each violation could carry penalties of up to $7,500, along with injunctions and attorney fees.
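To make the mechanics concrete, here is a minimal sketch of the consent flow the bill appears to describe; the field names, age categories, and versioning scheme are illustrative assumptions, since SB 1722 does not specify an implementation:

```python
# Hypothetical sketch of SB 1722's consent flow; names and categories are
# illustrative assumptions, not language from the bill.

from dataclasses import dataclass

AGE_CATEGORIES = ("under_13", "13_to_15", "16_to_17", "adult")

@dataclass
class Account:
    age_category: str     # verified through "commercially available methods"
    consent_version: int  # latest app policy version a parent approved

    def __post_init__(self):
        assert self.age_category in AGE_CATEGORIES

def may_access_app(account: Account, app_policy_version: int) -> bool:
    """Adults pass through; minors need parental consent matching the app's
    current policy version, so every policy change re-gates access."""
    if account.age_category == "adult":
        return True
    return account.consent_version >= app_policy_version

# A policy update (version 2) re-gates a minor whose parent approved version 1.
minor = Account(age_category="13_to_15", consent_version=1)
print(may_access_app(minor, app_policy_version=1))  # True
print(may_access_app(minor, app_policy_version=2))  # False: new consent needed
```

The recurring-consent requirement is what gives the bill its reach: every policy revision by every app re-opens the verification loop.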

On its surface, this is a regulatory measure aimed at strengthening parental oversight and protecting minors from online harms. Yet it runs up against a larger philosophical and rights struggle.

For much of modern political thought, the relationship between authority and liberty has revolved around who decides what constitutes protection. Florida’s proposal situates that question in the hands of private corporations. The bill effectively deputizes Big Tech app store operators, such as Apple and Google, as arbiters of digital identity, compelling them to verify user ages and manage parental permissions across every platform.

Millions of Floridians could be required to submit identifying details or official documents simply to access or update apps. This process, while justified as a security measure, reintroduces the age-old tension between the protective role of the state and the autonomy of the citizen.

By making identity verification the gateway to digital access, the law risks creating an infrastructure in which surveillance becomes a condition of participation. It is a move from voluntary oversight to systemic authentication, merging the roles of government and corporation in a single mechanism of control.

The proposal may collide with long-established constitutional principles. One of the objections lies in the concept of prior restraint. By conditioning minors’ ability to download or continue using apps on verified Big Tech platforms, the bill requires permission before access, effectively placing all expressive content behind a regulatory gate.

Apps today are not mere entertainment; they are conduits of news, art, religion, and political discourse. Restricting that access risks transforming a parental safeguard into a systemic filter for speech.

The burden falls most heavily on minors, whose First Amendment protections are often ignored in public debate.

Keep reading

Democratic Senators Urge Tech Platforms to Restrict AI Images, Including Altered Clothing and Body-Shape Edits

Democratic senators are broadening the definition of what counts as restricted online content, moving from earlier efforts focused on explicit deepfakes to a new campaign against what they call “non-nude sexualized” material.

The new language dramatically expands the category of what can be censored, reaching beyond pornography or criminal exploitation to include images with altered clothing, edited body shapes, or suggestive visual effects.

Senator Lisa Blunt Rochester of Delaware led a group of seven fellow Democrats in signing a letter to Alphabet, Meta, Reddit, Snap, TikTok, and X.

We obtained a copy of the letter for you here.

The signatories — Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff — are asking for records that define how each company classifies and removes this type of content, as well as any internal documents or moderator guidance about “virtual undressing” and similar AI edits.

“We are particularly alarmed by reports of users exploiting generative AI tools to produce sexualized ‘bikini’ or ‘non-nude’ images of individuals without their consent and distributing them on platforms including X and others,” the senators wrote.

“These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.”

Their argument rests on reports describing AI tools that can transform photos of clothed women into revealing deepfakes or fabricate images of sexualized poses. The senators describe this as evidence of a growing “crisis of image-based abuse” that undermines trust and safety online.

But the language of the letter goes further than earlier initiatives that targeted explicit content. It introduces a much wider standard where mere suggestion or aesthetic change could qualify as “sexualized.” The call to prohibit “altered clothing” or “body-shape edits” effectively merges real abuse prevention with subjective judgments about appearance.

Keep reading

Data Centers Use Lots of Electricity. This Bill Would Let Them Go Off the Grid.

Tech companies are building data centers as quickly as possible to run AI. These facilities are controversial because they use copious amounts of electricity and might tax an electrical grid that in some areas is already straining.

In a bill introduced last week, Sen. Tom Cotton (R–Ark.) proposed an idea: letting these companies get off the grid altogether.

“Power officials have been raising concerns that the grid isn’t equipped to handle the sheer number of data centers tech companies are seeking to build,” Katherine Blunt wrote last week at The Wall Street Journal. “They say it will take many years to build new transmission lines and power plants needed to support the surge in demand while keeping the lights on for other customers.” Some officials, Blunt noted, “have proposed either requiring or encouraging data centers to stop using [the grid] when there is a risk of blackouts, either by powering down or switching to backup electricity supplies.”

Jowi Morales of Tom’s Hardware reports that companies are “looking at alternative power sources to bring their projects online, regardless of the availability of power from the grid.” Microsoft, for example, is recommissioning the Three Mile Island nuclear plant in Pennsylvania to generate 835 megawatts of power for its data centers (though not without a $1 billion loan from U.S. taxpayers).

“These initiatives will take years to take off, though,” Morales adds. “The Three Mile Island plant is expected to be operational only by 2028.”

Last week, Cotton introduced the Decentralized Access to Technology Alternatives (DATA) Act of 2026. Under the bill, “a consumer-regulated electric utility” would be “exempt from regulation” under federal law so long as it doesn’t connect to the overall electrical grid.

When one company contracts to sell electricity to another company, “that retail transaction presently would put you under the jurisdiction of a bunch of people” at the state and federal levels, says Travis Fisher, director of energy and environmental policy studies at the Cato Institute.

And that brings a cumbersome level of red tape. “The rapid pace of innovation means the AI revolution won’t wait for multi-year permitting fights, cost-of-service hearings held by regulators, or planning processes built for the analog era,” Fisher pointed out last year in an article co-written by Cato’s Jennifer Huddleston. “And yet those are the structures that still govern electricity in much of the country. Building a new transmission line in the US now takes about 10 years, while generation projects spend multiple years stuck in interconnection queues, with more than 2,600 gigawatts of capacity now in queues nationwide.”

The DATA Act would lower the level of regulatory intrusion for enclosed systems that don’t connect to the grid. “It just serves data centers that are probably going to be clustered around it without taking electricity supply off the market for Arkansas families and businesses,” Cotton told the Arkansas Democrat-Gazette.

Keep reading

Democrats Demand Apple and Google Ban X From App Stores

Apple and Google are under mounting political pressure from Democrats over X’s AI chatbot, Grok, after lawmakers accused the platform of producing images of women, and allegedly of minors, in bikinis.

While the outrage targets X specifically, the ability to generate such material is not unique to one platform. Similar image manipulation and synthetic content creation can be found across nearly every major AI system available today.

Yet, the letter sent to Apple CEO Tim Cook and Google CEO Sundar Pichai by Senators Ron Wyden, Ben Ray Luján, and Ed Markey asked the tech giants only about X and demanded that the companies remove X from their app stores entirely.

X has around 557 million users.

We obtained a copy of the letter for you here.

The lawmakers wrote that “X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms.”

They pointed to Google’s developer rules, which prohibit apps that facilitate “the exploitation or abuse of children,” and Apple’s policy against apps that are “offensive” or “just plain creepy.”

Ignoring the First Amendment completely, the letter states that “Apple and Google must remove these apps from the app stores until X’s policy violations are addressed.”

Dozens of generative systems, including open-source image models that can’t be controlled or limited by anyone, can produce the same kinds of bikini images with minimal prompting.

The senators cited prior examples of Apple and Google removing apps such as ICEBlock and Red Dot under government pressure.

“Unlike Grok’s sickening content generation, these apps were not creating or hosting harmful or illegal content, and yet, based entirely on the Administration’s claims that they posed a risk to immigration enforcers, you removed them from your stores,” the letter stated.

Keep reading