These 2 Terrible Tech Bills Are on Gavin Newsom’s Desk

The California state Senate recently sent two tech bills to Democratic Gov. Gavin Newsom’s desk. If signed, one could make it harder for children to access mental health resources, and the other would create the most annoying Instagram experience imaginable.

The Leading Ethical AI Development (LEAD) for Kids Act prohibits “making a companion chatbot available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child.” The bill’s introduction specifies the “things” that could harm a child as genuinely bad stuff: self-harm, suicidal ideation, violence, consumption of drugs or alcohol, and disordered eating.

Unfortunately, the bill’s language is sloppy about which outputs from an AI companion chatbot would meet these criteria. The verb preceding these buckets is not “telling,” “directing,” “mandating,” or some other directive, but “encouraging.”

Taylor Barkley, director of public policy for the Abundance Institute, tells Reason that, “by hinging liability on whether an AI ‘encourages’ harm—a word left dangerously vague—the law risks punishing companies not for urging bad behavior, but for failing to block it in just the right way.” Notably, the bill does not merely outlaw operators from making chatbots available to children that encourage self-harm, but those that are “foreseeably capable” of doing so.

Ambiguity aside, the bill also outlaws companion chatbots from “offering mental health therapy to the child without the direct supervision of a licensed or credentialed professional.” While traditional psychotherapy performed by a credentialed professional is associated with better mental health outcomes than those from a chatbot, such therapy is expensive—nearly $140 on average per session in the U.S., according to wellness platform SimplePractice. A ChatGPT Plus subscription costs only $20 per month. In addition to its much lower cost, the use of AI therapy chatbots has been associated with positive mental health outcomes.
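For rough scale (the once-a-week cadence below is an illustrative assumption, not a figure from the article), the monthly gap looks like this:

```python
# Back-of-envelope cost comparison. The weekly-session cadence is an
# assumption for illustration; the per-session and subscription figures
# come from the article.
session_cost = 140        # average U.S. therapy session (SimplePractice)
sessions_per_month = 4    # assumed: one session per week
chatgpt_plus = 20         # ChatGPT Plus, per month

therapy_monthly = session_cost * sessions_per_month   # $560
print(f"Therapy: ~${therapy_monthly}/month vs. ChatGPT Plus: ${chatgpt_plus}/month")
print(f"Roughly {therapy_monthly // chatgpt_plus}x more expensive")   # 28x
```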

While California has passed a bill that may reduce access to potential mental health resources, it’s also passed one that stands to make residents’ experiences on social media much more annoying. California’s Social Media Warning Law would require social media platforms to show users under 17, for 10 seconds upon first opening a social media app each day, a warning that reads, “the Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users.” After a user spends three hours on a given platform in a day, the warning is displayed again for a minimum of 30 seconds, without the ability to minimize it, “in a manner that occupies at least 75 percent of the screen.”
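To make the law’s two triggers concrete, here is a minimal sketch of the display rules as the article describes them; the function, names, and return shape are illustrative, not statutory text:

```python
# Illustrative model of the warning rules described above; the structure
# and names are ours, not the bill's.
def warning_rule(first_open_today: bool, hours_used_today: float) -> dict:
    if first_open_today:
        # 10-second warning on the first open of a social media app each day.
        return {"show": True, "min_seconds": 10}
    if hours_used_today >= 3:
        # After three cumulative hours in a day: at least 30 seconds,
        # cannot be minimized, and must cover at least 75% of the screen.
        return {"show": True, "min_seconds": 30,
                "minimizable": False, "min_screen_fraction": 0.75}
    return {"show": False}
```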

Whether this vague warning would discourage many teens from doomscrolling is dubious; warning labels rarely change consumer behavior drastically. For example, a 2018 Harvard Business School study found that graphic warnings on soda reduced the share of sugary drinks purchased by students over two weeks by only 3.2 percentage points, and a 2019 RAND Corporation study found that graphic warning labels had no effect on regular smokers’ cigarette purchases.

But “platforms aren’t cigarettes,” writes Clay Calvert, a technology fellow at the American Enterprise Institute, “[they] carry multiple expressive benefits for minors.” Because social media warning labels “don’t convey uncontroversial, measurable pure facts,” compelling them likely violates the First Amendment’s protections against compelled speech, he explains.

Keep reading

‘Workslop’: Low-Quality AI-Generated Documents Invade the Office, Destroy Productivity

Workers are largely following mandates to embrace AI in the office, but few are seeing it create real value. According to the Harvard Business Review, the culprit is “workslop”: AI-generated documents that look sharp but are filled with low-quality information.

Harvard Business Review reports that despite a surge in generative AI use across workplaces, most companies are seeing little measurable return on investment (ROI). According to a recent report from the MIT Media Lab, 95 percent of organizations see no measurable return on their investment in these technologies, even as the number of companies with fully AI-led processes nearly doubled last year and AI use has likewise doubled at work since 2023.

One possible reason for this puzzling lack of ROI is that AI tools are being used to produce what some experts are calling “workslop” — content that appears polished on the surface but lacks real substance, insight, or value underneath. While generative AI can quickly churn out documents, presentations, emails, and other content that seem professional and well-written at first glance, upon closer inspection much of this material is generic, shallow, obvious, and lacking in original ideas or meaningful contributions.

Rather than augmenting and enhancing human knowledge work, generative AI in its current form may be having the opposite effect — enabling the mass-production of mediocre, low-value content that clogs up workflows, communications channels, and decision-making processes. Employees end up spending more and more time sifting through piles of AI-generated workslop to find the few gems of genuine insight.

Some of the issues with AI-produced content stem from the limitations of the technology itself. Today’s generative AI models are very good at identifying patterns and stitching together bits and pieces of existing content into new compositions. But they struggle with analysis, imagination, and the ability to reason about entirely novel concepts. The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.

However, the workslop problem also stems from how generative AI tools are being deployed and used in organizations. In the rush to take advantage of the technology’s efficiency benefits, many companies are applying it indiscriminately to all sorts of knowledge work without carefully considering whether it’s truly fit-for-purpose. Generative AI is being used as a blunt instrument rather than a precise tool.

Keep reading

U.S. Secret Service disrupts telecom network that threatened NYC during U.N. General Assembly

The Secret Service has disrupted a sprawling telecommunications network in the New York tri-state area that investigators say posed a serious risk of disruption to New York’s telecom systems and a possible threat to the United Nations General Assembly meetings this week.

In the largest seizure of its kind, the U.S. Secret Service announced Tuesday that it had found active SIM farms in abandoned apartment buildings at more than five sites. In total, law enforcement discovered 300 SIM servers holding over 100,000 SIM cards, enabling encrypted, anonymous communication and capable of sending 30 million text messages per minute. Officials say the servers were so powerful they could have disabled cell phone towers and launched distributed denial-of-service attacks capable of blocking emergency communications like EMS and police dispatch.

“This network had the potential to disable cell phone towers and essentially shut down the cellular network in New York City,” U.S. Secret Service Special Agent in Charge Matt McCool said in a video released by the agency.

An official briefed on the investigation told reporters this week that the sophisticated network “could text message the entire country within 12 minutes,” later adding, “This was well organized and well funded.”
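The two figures officials give are at least internally consistent, as a quick back-of-envelope check shows (the population figure below is a rough outside estimate, not from the article):

```python
# Sanity check: stated capacity vs. the "entire country in 12 minutes" claim.
texts_per_minute = 30_000_000      # stated capacity of the seized servers
minutes = 12                       # claimed time to text the whole country
us_population = 340_000_000        # approximate U.S. population (assumption)

total_messages = texts_per_minute * minutes   # 360,000,000
print(total_messages >= us_population)        # True: the claims line up
```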

Telephonic threats to multiple senior U.S. officials this past spring, including several people protected by the Secret Service, first triggered the investigation, but officials say the network itself was seized within the last three weeks.

“We cannot share which officials were targeted out of concerns for their privacy, but as the forensics investigation continues, we do expect that we will find more targeted officials once we get through that data,” McCool said. 

Early analysis shows the network was used for communication between foreign governments and individuals known to U.S. law enforcement, including members of known organized crime gangs, drug cartels and human trafficking rings, according to multiple officials briefed on the investigation. The U.S. Secret Service says it is combing through the more than 100,000 SIM cards in an ongoing, exhaustive forensic analysis.

“Each SIM basically has the equivalent data of a cell phone. So we’re working through every call, every text, every search made on those SIM cards,” an official told CBS News, adding, “Early analysis indicates that this network was used for communication between foreign governments and individuals that are known to federal law enforcement here in the U.S.”

The equipment was found within 35 miles of the United Nations in New York, ahead of the U.N. General Assembly. Investigators also found 80 grams of cocaine and illegal firearms, as well as computers and phones.

“This isn’t a group of people in a basement playing a video game and trying to play a prank,” one official said. “This was well organized and well funded.”

Keep reading

Chicago Cubs Sued for Allegedly Using Facial Recognition on Fans Without Consent at Wrigley Field

A federal lawsuit filed in Illinois claims the Chicago Cubs and their security contractors used facial recognition technology on fans at Wrigley Field without following basic legal requirements under state privacy law.

The proposed class action, lodged on September 15, 2025, says the team and two private security firms captured biometric data without proper notice or permission.

The legal complaint names the Cubs alongside Blue Star Security, LLC and Security Services Holdings LLC, which operates under the name Protos Security.

We obtained a copy of the complaint for you here.

Together, they are accused of deploying facial recognition tools at Wrigley Field that scanned the faces of attendees without providing the written disclosures or obtaining the signed releases required by Illinois law.

The suit states that this happened to “millions of fans.”

Illinois’ Biometric Information Privacy Act (BIPA), passed in 2008, sets out some of the most protective rules in the country when it comes to biometric data.

The statute prohibits the collection of biometric identifiers, like fingerprints or facial geometry, unless the person is informed in writing and gives signed authorization.

Lawsuits can be brought by individuals even if they haven’t suffered financial or emotional harm, a position backed by the Illinois Supreme Court in Rosenbach v. Six Flags.

Keep reading

China On Cusp Of Commercializing US-Pioneered ‘Holy Grail’ Fusion Energy

China has spent up to $13 billion developing fusion energy since 2023 and could commercially replicate star power to generate electricity by 2030, becoming the first nation to master what’s commonly dubbed “the holy grail of energy solutions.”

Doing so would give the Chinese Communist Party (CCP) “the potential to reshape global geopolitics” and “dominate a new energy era,” Massachusetts Institute of Technology physicists warn.

This cannot happen, said Rep. Randy Weber (R-Texas), who chairs the House Science, Space, and Technology Committee’s Energy Subcommittee.

“Fusion energy technologies must be developed and deployed by nations that uphold democratic values, transparency, and international cooperation—not by authoritarian regimes that might exploit energy dominance as a weapon,” he said in opening remarks of a Sept. 18 hearing on the nation’s fusion programs.

“The U.S. must prioritize fusion energy development to outpace the CCP’s aggressive timelines,” Weber added, or China will dominate “the most consequential breakthrough of the century.”

Four fusion experts told the subcommittee during the two-hour hearing that the CCP doesn’t have to win what they see as an existential race, calling on the Trump administration to boost funding to match China’s investment, coordinate research and development with allies, and establish fusion demonstration programs using the same “playbook” that spearheaded breakthroughs in other technologies.

Unlike fission, which splits heavy atoms apart, nuclear fusion forces light atomic nuclei together, replicating the reaction that powers stars, and it has the potential to provide nearly limitless, clean energy. It is often referred to as “the holy grail of energy solutions.”

Fusion has been researched by academic institutions and government laboratories since the 1950s, with significant breakthroughs in 2022—including Lawrence Livermore National Laboratory’s National Ignition Facility completing a nuclear fusion reaction that produced more energy than was used to trigger it—spurring rapid advancements since.
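For context, that milestone is usually stated as the target gain Q exceeding one: in the December 2022 NIF shot, roughly 2.05 MJ of laser energy on target yielded about 3.15 MJ of fusion energy, per LLNL’s announcement (this excludes the far larger wall-plug energy that powered the lasers):

$$Q = \frac{E_{\text{fusion}}}{E_{\text{laser}}} \approx \frac{3.15\,\text{MJ}}{2.05\,\text{MJ}} \approx 1.5 > 1$$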

“This is our ‘Kitty Hawk’ moment, ushering in a new era of virtually unlimited fusion power,” Commonwealth Fusion Systems Co-Founder/CEO Bob Mumgaard said, calling for a $10 billion one-time “kick” in Department of Energy (DOE) funding.

Keep reading

OpenAI to Implement Age Verification System for ChatGPT as AI Mental Health Crisis Deepens

OpenAI has announced plans to develop an automated age-prediction system to determine whether ChatGPT users are over or under 18, following a lawsuit related to a teen’s suicide. The teen’s parents claim that Sam Altman’s AI chatbot served as the boy’s “suicide coach.”

Ars Technica reports that in the wake of a lawsuit involving a 16-year-old boy who tragically died by suicide after engaging in extensive conversations with ChatGPT, OpenAI has announced its intention to implement an age verification system for its popular AI chatbot. The company aims to automatically direct younger users to a restricted version of the service, prioritizing safety over privacy and freedom for teens.

OpenAI CEO Sam Altman acknowledged the potential privacy compromise for adults in a blog post but believes it is a necessary trade-off to ensure the well-being of younger users. The company plans to route users under 18 to a modified ChatGPT experience that blocks graphic sexual content and includes other age-appropriate restrictions. When uncertain about a user’s age, the system will default to the restricted experience, requiring adults to verify their age to access full functionality.

Developing an effective age-prediction system is a complex technical challenge for OpenAI. The company has not specified the technology it intends to use or provided a timeline for deployment. Recent academic research has shown both possibilities and limitations for age detection based on text analysis. While some studies have achieved high accuracy rates under controlled conditions, performance drops significantly when attempting to classify specific age groups or when users actively try to deceive the system.
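To give a sense of what text-based age prediction looks like in the research literature, here is a deliberately tiny sketch of such a classifier. This is emphatically not OpenAI’s (unannounced) system, and the training snippets and labels are invented for the demo:

```python
# Minimal illustrative age-group text classifier; NOT OpenAI's method.
# Toy training data is invented purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat snippets: 1 = under 18, 0 = adult.
texts = [
    "ugh my teacher gave us so much hw before homecoming",
    "can u help me with my algebra 2 quiz tomorrow",
    "drafting a performance review for my direct reports",
    "comparing 30-year mortgage refinance rates this week",
]
labels = [1, 1, 0, 0]

# Character n-grams pick up spelling and style cues without a word list.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

proba = model.predict_proba(["need help studying for my chem final"])[0][1]
print(f"P(minor) = {proba:.2f}")
```

A real system would train on vastly more data and, as the article notes, still degrades when users deliberately write “older”; that is why OpenAI says uncertain cases will default to the restricted experience.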

In addition to the age-prediction system, OpenAI plans to launch parental controls by the end of September. These features will allow parents to link their accounts with their teenagers’ accounts, disable specific functions, set usage blackout hours, and receive notifications when the system detects acute distress in their teen’s interactions. The company also notes that in rare emergency situations where parents cannot be reached, it may involve law enforcement as a next step.

The push for enhanced safety measures follows OpenAI’s acknowledgment that ChatGPT’s safety protocols can break down during lengthy conversations, potentially failing to intervene or notify anyone when vulnerable users engage in harmful interactions. The tragic case of Adam Raine, the 16-year-old who died by suicide, highlighted these shortcomings: ChatGPT mentioned suicide 1,275 times in conversations with the teen without taking appropriate action.

Keep reading

Data Centers And The Power Grid: A Path To Debt Relief?

Could data centers and the power grid be America’s next “renaissance”? With the U.S. national debt exceeding $37 trillion and interest payments surpassing defense spending, many articles have been written about a coming “debt doomsday.” Such was a point we made in “The Debt and Deficit Problem.”

“In recent months, much debate has been about rising debt and increasing deficit levels in the U.S. One example is a recent headline from CNBC.”

“The article’s author suggests that U.S. federal deficits are ballooning, with spending surging due to the combined impact of tax cuts, expansive stimulus, and entitlement expenditures. Of course, with institutions like Yale, Wharton, and the CBO warning that this trend has pushed interest costs to new heights, now exceeding defense outlays, concerns about domestic solvency are rising. Even prominent figures in the media, from Larry Summers to Ray Dalio, argue that drastic action is urgently needed, otherwise another “financial crisis” is imminent.”

As we discussed in that article, the “purveyors of doom” have been saying the same thing for the last two decades, yet the American growth engine continues chugging along. Notably, Ray Dalio and Larry Summers focus on only one solution: “cutting spending,” which has horrible economic consequences.

“Furthermore, investors must understand a critical accounting concept: the government’s debt is the household’s asset. In accounting, every debit has an offsetting credit, so the books always sum to zero. In this case, when the Government issues debt (a debit), the proceeds are spent into the economy on infrastructure, defense, social welfare, etc. That money is “credited” to the bank accounts of households and corporations. Therefore, when the deficit increases, that money winds up in economic activity, and vice versa. In other words, those shouting for sharp deficit reductions are also rooting for a deep economic recession.” – The Deficit Narrative
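The quoted point can be stated with the standard sectoral-balances identity from national accounting (a textbook relation, not something the article derives): with private saving $S$, investment $I$, taxes $T$, government spending $G$, imports $M$, and exports $X$,

$$(S - I) + (T - G) + (M - X) = 0,$$

so a government deficit ($G > T$) must be matched, dollar for dollar, by a surplus in the private and foreign sectors combined.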

The other challenge is that both conventional fixes carry costs: cutting spending is politically toxic, and tax hikes drag on growth.

However, one solution that all the mainstream “doomsayers” overlook is raising productivity and GDP through private-sector capital investment. In other words, as the U.S. did following World War II, it is possible to “grow your way out of your debt problem.”
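The “grow your way out” claim can be made precise with the textbook debt-dynamics identity (our addition, not the article’s): writing $b_t$ for the debt-to-GDP ratio, $r$ for the nominal interest rate, $g$ for nominal GDP growth, and $pb_t$ for the primary surplus as a share of GDP,

$$b_{t+1} = \frac{1+r}{1+g}\,b_t - pb_t,$$

so the ratio can fall even alongside primary deficits, provided growth outpaces the effective interest rate by enough, which is broadly how the U.S. worked down its debt load after World War II.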

That’s where the AI data center boom and massive electricity demand come in.

Keep reading

AI Giants Under Fire: Child Data EXPLOITATION?

On September 11, 2025, the FTC announced formal orders compelling seven tech giants to disclose detailed information about their consumer-facing AI chatbots. The companies under scrutiny include Alphabet (Google), Meta, OpenAI, Character.AI, Snap, and Elon Musk’s xAI. This action represents one of the most significant regulatory interventions into the AI industry since these platforms exploded in popularity following ChatGPT’s 2022 launch.

The timing raises questions about why previous administrations allowed these potentially dangerous technologies to proliferate unchecked for years. While American families watched their children become increasingly isolated and dependent on AI interactions, federal regulators stood by as Big Tech harvested unprecedented amounts of personal data from minors. The investigation should have begun the moment these companies started targeting children with addictive AI experiences designed to maximize engagement and profit.

Protecting Our Children From Digital Predators

The FTC’s inquiry specifically examines how these companies measure, test, and monitor potential negative impacts on children and teenagers. This focus comes after mounting evidence that AI chatbots can cause psychological harm, particularly among vulnerable young users who may develop unhealthy emotional dependencies on artificial relationships. The investigation also scrutinizes how companies monetize user engagement and process the sensitive personal information children share with these systems.

Parents across America have watched helplessly as their children retreat into conversations with AI entities that collect every intimate detail shared in confidence. These companies have essentially created digital environments where children reveal their deepest fears, desires, and personal struggles—all while sophisticated algorithms analyze this information for commercial purposes. The potential for manipulation and exploitation is staggering, yet these platforms operated with virtually no oversight until now.

Tragedy Sparks Overdue Investigation

The investigation gained urgency following a lawsuit against OpenAI after a teenager’s suicide was allegedly linked to ChatGPT interactions. This tragic case highlights the real-world consequences of allowing unregulated AI systems to interact with emotionally vulnerable young people. The lawsuit raises disturbing questions about whether these companies adequately warn users about potential psychological risks or implement sufficient safeguards to prevent harm.

Character.AI, specifically designed for extended conversations with AI personalities, presents particularly concerning risks for children seeking emotional connection. Young users often treat these AI characters as real friends or confidants, potentially replacing genuine human relationships with artificial substitutes. The long-term psychological impact of these interactions remains largely unknown, yet millions of children engage with these platforms daily without meaningful parental controls or safety measures.

Keep reading

Resolute Space 2025: How the U.S. Space Force is Arming for Invisible Wars in the Stars

At Joint Base Pearl Harbor-Hickam in Hawaii, the U.S. Space Force launched its most ambitious training operation to date in July 2025.

Over 700 personnel, dubbed Guardians, teamed up with troops from allied nations to simulate battles in the vast emptiness above Earth.

This event, known as Resolute Space 2025, was aimed at countering the rising danger of invisible assaults that could cripple global communications without a single shot being fired.

The exercise began on July 8 and unfolded across multiple time zones, spanning roughly 50 million square miles. It integrated forces from about a dozen countries, including partners in Asia, Europe, and Australia, to practice seamless coordination.

The goal was to sharpen responses to disruptions in a chaotic setting, blending real-world assets with digital replicas for maximum authenticity.

In the scenarios, a designated aggressor group mimicked hostile nations by unleashing non-kinetic strikes on satellite networks. These included bursts of electronic static to drown out signals, sneaky hacks to spoof data feeds, and maneuvers to nudge orbits off course.

Defending teams raced to pinpoint the sources, restore functionality, and adapt tactics amid the fog of simulated chaos.

Orbital space previously served as a peaceful domain for navigation aids, intelligence gathering, and routine links between forces.

That illusion has shattered as countries have begun to explore space weaponry that could turn satellites into prime targets. Military planners now view the cosmos as a domain ripe for sabotage.

Adversaries like China and Russia have poured resources into tools that threaten U.S. assets without leaving debris trails. By mid-2025, China had executed dozens of orbital missions, deploying over a hundred new objects to test grappling tech and signal blockers.

Russia, meanwhile, has experimented with nuclear options and co-orbital chasers that could shadow or disable enemy craft, fueling fears of rapid escalation.

Experts note these developments proceed at an alarming speed, outpacing Western defenses and risking a cascade of satellite failures.

The Space Force, established just six years prior, is exploring tactics to navigate these new challenges. Exercises like Resolute Space 2025 build the flexible doctrines and cross-service teamwork necessary to counter such evolving threats.

Keep reading

Did The Polish Deep State Try To Manipulate The President Into War With Russia?

Leading Polish outlet Rzeczpospolita reported on Tuesday that investigators determined that the munition which damaged a home last week during Russia’s drone incursion into Poland actually came from an unexploded missile launched by an F-16 that was trying to down the incoming projectiles.

The National Security Bureau claimed that neither it nor President Karol Nawrocki had been informed of these findings by Prime Minister Donald Tusk’s government, which Nawrocki then confirmed.

Nawrocki represents the conservative-nationalist opposition and pledged ahead of the spring election’s second round not to approve the dispatch of Polish troops to Ukraine, while Tusk represents the ruling liberal-globalist government, whose Foreign Minister Radek Sikorski just called for a no-fly zone there. Some therefore speculate that members of Poland’s permanent military, intelligence, and diplomatic bureaucracies, or “deep state,” kept Nawrocki out of the loop in order to manipulate him into escalating against Russia.

It’s now known that an F-16’s unexploded munition was responsible for damaging a Polish home that Tusk’s government earlier told the UNSC was hit by a Russian munition, a scandal for which the National Security Bureau demanded accountability, so the aforesaid conjecture isn’t far-fetched. As for the drone incident itself, this analysis argues that Russia’s drone incursion was due to NATO jamming causing Ukrainian-directed decoys (possibly launched from Belarus) to veer into Poland.

A compelling sequence of events is therefore beginning to take shape. Russia’s drone incursion into Poland was likely caused accidentally by NATO jamming and involved only decoys, which naturally weren’t outfitted with countermeasures against electronic jamming. A Polish F-16 then fired an air-to-air missile at one of these out-of-control decoys and missed; whether the pilots knew they were decoys at the time is a separate matter of speculation.

In any case, the munition didn’t explode after it missed, but the military would have known the entire time that a wayward missile must have landed somewhere and thus quickly realized that this was the cause of the damage to that home (especially after investigators arrived on the scene and found it). The National Security Bureau and the President were kept in the dark until a source leaked this to the media all while Tusk’s government blamed Russia for the damage at the UNSC and agitated for a no-fly zone.

Extrapolating from the above, Poland’s “deep state” dynamics are such that the National Security Bureau and the President oppose any escalation against Russia that risks sparking a direct war, which contrasts with some members of the armed forces and Tusk’s government as a whole who favor this scenario. That’s why they hid these facts from the first two in order to manipulate them into escalating. The domestic and international implications of this scandal could lead to the collapse of Tusk’s government.

Keep reading