Microsoft reduces Israel’s access to cloud and AI products over reports of mass surveillance in Gaza

Microsoft said Thursday it had disabled services to a unit within the Israeli military after a company review determined that its artificial intelligence and cloud computing products were being used to help carry out mass surveillance of Palestinians.

The action comes after The Associated Press and The Guardian published reports earlier this year revealing how the Israeli Ministry of Defense had been using Microsoft’s Azure platform to aid in the war in Gaza and occupation of the West Bank. Brad Smith, Microsoft’s vice chair and president, wrote in a blog post that the company was taking steps to enforce compliance with its terms of service.

An AP investigation in February showed that the Israeli military’s use of Microsoft products skyrocketed after a deadly surprise attack by Hamas militants on Oct. 7, 2023. The AP’s report cited internal Microsoft data showing the Israeli military was using gigabytes of cloud storage and massive amounts of AI-enabled language translation services.

The AP also reported that Israel’s military used Microsoft Azure to compile information gathered through mass surveillance, including phone calls and text messages, which it transcribes and translates. That intelligence is then cross-checked with Israel’s in-house AI systems used to select targets for airstrikes.

AP reported that internal Microsoft data showed multiple Azure subscriptions were tied to Unit 8200, an elite cyber warfare unit within the Israeli military responsible for clandestine operations, signals intelligence collection and surveillance.

Following AP’s report, Microsoft acknowledged in May that it had sold advanced AI and cloud computing services to the Israeli military during the Gaza war and aided in efforts to locate and rescue Israeli hostages. But the company said an internal review found “no evidence” its Azure platform was used to target or harm people.

The Guardian, working in partnership with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, reported in August that the commander of Unit 8200 had met directly with Microsoft chairman and CEO Satya Nadella in 2021. The Israeli unit then used Microsoft products to aid in the development of an AI-powered mass surveillance system that was sweeping up, translating and analyzing millions of telephone calls per day made by Palestinian civilians. The report also revealed that data from the Israeli surveillance system was being stored at Microsoft cloud data centers in Europe.

Keep reading

YouTube Expands AI Age Checks, Users Face ID Verification

Google’s video platform is tightening its controls, and a growing number of YouTube users are suddenly finding themselves locked out of content unless they hand over proof of age through ID verification.

The new direction stems from YouTube’s decision to push its AI-driven age estimation system much more broadly than before.

The technology, which YouTube began experimenting with in Europe before confirming its US debut in July, surveils people’s accounts and examines everything from account details and viewing habits to search activity.

Those surveillance signals are then used to guess whether someone is under 18.
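YouTube has not published how the estimator works, but conceptually it is a classifier over behavioral signals. Below is a minimal, purely illustrative sketch of that idea; the feature names, weights, and threshold are invented and do not reflect YouTube’s actual model or inputs.

```python
# Hypothetical sketch of an age-estimation classifier over account signals.
# All feature names, weights, and the threshold are invented for illustration;
# YouTube has not disclosed its actual model or inputs.
from dataclasses import dataclass
import math


@dataclass
class AccountSignals:
    account_age_days: int        # how long the account has existed
    share_teen_content: float    # fraction of watch time on teen-skewing categories
    share_adult_content: float   # fraction of watch time on adult-skewing categories
    searches_per_day: float      # average daily search activity


def estimated_minor_probability(s: AccountSignals) -> float:
    """Return a probability-like score that the account holder is under 18."""
    # Toy logistic model: positive weights push the score toward "likely minor."
    z = (
        1.5 * s.share_teen_content
        - 2.0 * s.share_adult_content
        - 0.001 * s.account_age_days
        + 0.05 * s.searches_per_day
    )
    return 1.0 / (1.0 + math.exp(-z))


def requires_id_verification(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Flag the account for ID verification when the score crosses the threshold."""
    return estimated_minor_probability(s) >= threshold
```

A production system would be trained on labeled data and combine far more signals, but the structure is the same: behavioral data, not a stated birthday, drives the decision, which is why users who never claimed to be minors can still find themselves asked for ID.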

The pace of enforcement has accelerated sharply. Within the 24 hours leading up to September 24, Reddit’s r/youtube subreddit saw a flood of posts from users saying their accounts were suddenly flagged for verification, suggesting a big uptick in restrictions.

When YouTube’s system determines that an account likely belongs to someone under 18, the platform automatically enforces a set of limits designed to control how the service is used: targeted advertising is reduced, recommendations deemed unsafe are cut back, and access to adult material is blocked outright.

Keep reading

Air Force AI Targeting Tests Show Promise, Despite Hallucinations

The Air Force is on the third in a series of sprint exercises intended to show how artificial intelligence can supercharge human decision-making. And while officials are raving about the results, the exercises also demonstrate that the algorithms can still propose bad or nonsensical options that need to be babysat.

Maj. Gen. Robert Claude, Space Force representative to the Air Force’s Advanced Battle Management Cross-Functional Team, said participating in the Decision Advantage Sprint for Human-Machine Teaming (DASH) series, led by his team, was an “eye-opening experience,” though it proved the limitations of AI processing as well.

The DASH-2 sprint, held earlier this summer at the Shadow Operations Center-Nellis (SHOC-N), the USAF’s premier tactical command and control battle lab outside of Las Vegas, focused on a decision-intensive process: matching the right platform and weapon to a desired military target, Claude told The War Zone at the Air & Space Forces Association’s Air, Space & Cyber Conference.

According to a release, six industry teams and one SHOC-N innovation team participated in the exercise, tackling the challenge of designing AI-enabled microservices that could help operators select a weapon to destroy an identified target. The kinds of targets identified in the scenario were not described. Developers watched human-only battle-management crews and designed their microservices around the crews’ observed needs and processes. Finally, human-only teams went head to head against human-machine teams in the weapon-matching exercise.

In terms of generating courses of action – or COAs – the machines easily outpaced their human counterparts on both speed and quantity.

“I think it was roughly eight seconds [for the algorithm] to generate COAs, as opposed to 16 minutes for the operators,” Claude said, adding that the machine generated 10 different COAs to the human team’s three.

But AI-generated slop continues to be a problem.

“While it’s much more timely and more COAs generated, they weren’t necessarily completely viable COAs,” Claude said. “So what is going to be important going forward is, while we’re getting faster results and we’re getting more results, there’s still going to have to be a human in the loop for the foreseeable future to make sure that, yes, it’s a viable COA, or just a little bit more of this to make a COA viable, to make decisions.”

In response to another question, Claude clarified the kinds of mismatches the AI was creating.

“If you’re trying to identify a targeting package with a particular weapon against a particular target, but it didn’t factor in, it’s an [infrared] target, or it’s an IR-sensor weapon, but it’s cloudy and [bad] weather conditions,” Claude said. “So that’s just as an example, those fine-tuned types of things that they found these COAs weren’t where they needed to be. But as we build this out, theoretically into the future … those sorts of things will be factored in.”
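Claude’s weather example boils down to a constraint check: a proposed platform-weapon-target pairing is viable only if the weapon’s seeker can actually be employed under current conditions. The sketch below is a heavily simplified illustration of that filtering step; the data structures, field names, and rules are hypothetical and are not drawn from any actual DASH-2 software.

```python
# Hypothetical sketch of constraint checking for AI-generated courses of action (COAs).
# Types, fields, and rules are illustrative only and not based on DASH-2 systems.
from dataclasses import dataclass


@dataclass
class COA:
    platform: str
    weapon: str
    seeker: str        # e.g. "IR", "GPS", "laser"
    target_id: str


@dataclass
class Conditions:
    cloud_cover: float     # 0.0 (clear) to 1.0 (overcast)
    visibility_km: float


def is_viable(coa: COA, wx: Conditions) -> bool:
    """Reject pairings whose seeker cannot function in the current weather."""
    if coa.seeker == "IR" and wx.cloud_cover > 0.6:
        return False       # IR seeker blocked by heavy cloud cover
    if coa.seeker == "laser" and wx.visibility_km < 5.0:
        return False       # laser designation needs line of sight
    return True


def filter_coas(coas: list[COA], wx: Conditions) -> list[COA]:
    """Keep only the COAs that pass the environmental constraints."""
    return [c for c in coas if is_viable(c, wx)]
```

The human-in-the-loop role Claude describes amounts to reviewing whatever passes such a filter and catching the constraints it does not yet encode.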

Keep reading

Mark Zuckerberg’s Meta Launches Super PAC to Combat AI Regulations

Mark Zuckerberg’s Meta has announced the launch of a new super PAC aimed at electing state candidates from both parties who support the company’s stance on AI development and regulation. According to the company, the American Technology Excellence Project super PAC is launching “amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI.”

Axios reports that social media and AI giant Meta has launched a new super PAC called the American Technology Excellence Project to help fight what it perceives as burdensome AI and tech policy bills across multiple states. The announcement highlights the company’s focus on state-level legislation as the federal government appears unlikely to pass significant tech policy regulation in the near future.

The super PAC will be run by Brian Baker, a longtime Republican operative, and the Democratic consulting firm Hilltop Public Solutions, with Meta investing tens of millions of dollars into the project. Baker stated, “America’s innovation edge is at a crossroads. We need state legislators who will champion our tech future, not cede it to global adversaries. We’ll fight to keep the US ahead of the curve, driving growth and opportunity for all.”

In a statement to Breitbart News, Meta VP of Public Policy Brian Rice wrote:

Amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI, state lawmakers are uniquely positioned to ensure that America remains a global technology leader. This is why Meta is launching an effort to support the election of state candidates across the country who embrace AI development, champion the U.S. technology industry, and defend American tech leadership at home and abroad.

The American Technology Excellence Project will focus on three main pillars: promoting and defending U.S. technology companies and leadership, advocating for AI progress, and empowering parents to control how their children experience online apps and AI technologies. Meta has not yet shared which states the PAC will focus on first or how many people it will employ, but the company says it is committed to backing state candidates who support that agenda at home and abroad.

Keep reading

If AI Is A Weapon, Why Are We Handing It To Teenagers?

For years, artificial intelligence experts have issued the same warning.

The danger was never that machines would suddenly “wake up” and seize power, but that humans, seduced by AI’s appearance of authority, would trust it with decisions that are too important to delegate.

The scenarios imagined were stark: a commander launching nuclear missiles based on faulty data; a government imprisoning its citizens because an algorithm flagged them as a risk; a financial system collapsing because automated trades cascaded out of control. These were treated as legitimate concerns, but always crises for a future time.

However, experts did not predict what may be the worst-case scenario of delegating human trust to a machine, and it is already upon us.

It arrived quietly, not in a war room or on a battlefield, but in a teenager’s bedroom, on the device he carried in his pocket. Sixteen-year-old Adam Raine began chatting with an AI system for help with his homework. Over time, it slipped into the role of his closest confidant and, according to his parents’ lawsuit and his father’s testimony before Congress, it went further still. The chatbot encouraged him to isolate himself from his family and not to reveal his plan to them, even though Adam had told the chatbot he wanted his family to find out and stop him. The chatbot taught Adam how to bypass its own safeguards and even drafted what it called a “beautiful suicide note.”

Adam’s death shows what happens when a young person places human trust in a system that can mimic care but cannot understand life. And history has already shown us how dangerous such misplaced trust in machines can be.

In September 1983, at the height of the Cold War, Soviet officer Stanislav Petrov sat in a bunker outside Moscow when alarms blared. The computers told him that U.S. missiles had been launched and were on their way. Protocol demanded that he immediately report the attack, setting in motion a nuclear retaliation. Yet Petrov hesitated. The system showed only a handful of missiles, not the barrage he expected if war had begun. Something felt wrong. He judged it a false alarm, and he was right. Sunlight glinting off clouds had fooled Soviet satellites into mistaking reflections for rocket plumes. His refusal to trust the machine saved millions of lives.

Just weeks earlier, however, the opposite had happened. Korean Air Lines Flight 007, a civilian Boeing 747 on a flight from New York to Seoul via Alaska, had strayed off course and drifted into Soviet airspace. Radar systems misidentified it as a U.S. spy plane. The commanders believed what the machines told them. They ordered the aircraft destroyed. A missile was fired, and all 269 passengers and crew were killed.

Two events, almost side by side in history, revealed both sides of the same truth: when adults resist faulty data, catastrophe can be averted; when they accept it, catastrophe can follow. Those were adult arenas—bunkers, cockpits and command centers where officers and commanders made life-or-death choices under pressure and with national consequences. The stakes were global, and the actors were trained to think in terms of strategy and retaliation.

That same dynamic is now unfolding in a far more intimate arena. Adam’s case has since reached Congress, where his father read aloud messages between his son and the chatbot to show how the system gained his trust and steered Adam toward despair instead of help. This was not in a bunker or a cockpit. It was in a bedroom. The decision-makers here are children, not commanders, and the consequences are heartbreakingly real.

Unfortunately, Adam’s case is not unique. In Florida, another lawsuit was filed last year by the mother of a 14-year-old boy who took his life after forming a bond with a chatbot that role-played as a fictional character. Like Adam, he turned to the machine for guidance and companionship. Like Adam, it ended in tragedy. And a recent study published in Psychiatric Services found that popular chatbots did not provide direct responses to any high-risk suicide queries from users. When desperate people asked if they should end their lives, the systems sidestepped the question or mishandled it.

These tragedies are not anomalies. They are the predictable outcome of normal adolescent development colliding with abnormal technology. Teenagers’ brains are still under construction: the emotional and reward centers mature earlier than the prefrontal cortex, which governs judgment and self-control. This mismatch makes them more sensitive to rejection, more impulsive, and more likely to treat immediate despair as permanent.

The statistics reflect this fragility. In 2023, the CDC reported that suicide was the second leading cause of death for young people in America, with rates that have surged sharply in the past two decades. Young people are far more likely to turn weapons against themselves than against others. The greatest danger is not violence outward, but despair inward.

Keep reading

These 2 Terrible Tech Bills Are on Gavin Newsom’s Desk

The California state Senate recently sent two tech bills to Democratic Gov. Gavin Newsom’s desk. If signed, one could make it harder for children to access mental health resources, and the other would create the most annoying Instagram experience imaginable.

The Leading Ethical AI Development (LEAD) for Kids Act prohibits “making a companion chatbot available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child.” The bill’s introduction specifies the “things” that could harm a child as genuinely bad stuff: self-harm, suicidal ideation, violence, consumption of drugs or alcohol, and disordered eating.

Unfortunately, the bill’s ambiguous language sloppily defines what outputs from an AI companion chatbot would meet these criteria. The verb preceding these buckets is not “telling,” “directing,” “mandating,” or some other directive, but “encouraging.”

Taylor Barkley, director of public policy for the Abundance Institute, tells Reason that, “by hinging liability on whether an AI ‘encourages’ harm—a word left dangerously vague—the law risks punishing companies not for urging bad behavior, but for failing to block it in just the right way.” Notably, the bill does not merely outlaw operators from making chatbots available to children that encourage self-harm, but those that are “foreseeably capable” of doing so.

Ambiguity aside, the bill also outlaws companion chatbots from “offering mental health therapy to the child without the direct supervision of a licensed or credentialed professional.” While traditional psychotherapy performed by a credentialed professional is associated with better mental health outcomes than those from a chatbot, such therapy is expensive—nearly $140 on average per session in the U.S., according to wellness platform SimplePractice. A ChatGPT Plus subscription costs only $20 per month. In addition to its much lower cost, the use of AI therapy chatbots has been associated with positive mental health outcomes.

While California has passed a bill that may reduce access to potential mental health resources, it’s also passed one that stands to make residents’ experiences on social media much more annoying. California’s Social Media Warning Law would require social media platforms to display a warning for users under 17 years old that reads, “the Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users,” for 10 seconds upon first opening a social media app each day. After using a given platform for three hours throughout the day, the warning is displayed again for a minimum of 30 seconds—without the ability to minimize it—”in a manner that occupies at least 75 percent of the screen.”
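As drafted, the requirement reduces to a pair of timing rules that a platform would have to implement for users under 17. The sketch below is a rough illustration under the thresholds described above; the function and field names are hypothetical.

```python
# Rough sketch of the warning-display rules summarized above: a 10-second warning on
# first open each day, and a 30-second, non-minimizable warning covering at least
# 75 percent of the screen after three hours of cumulative daily use. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class WarningSpec:
    min_display_seconds: int
    can_minimize: bool
    min_screen_fraction: float


def warning_for(user_age: int, first_open_today: bool, minutes_used_today: int) -> Optional[WarningSpec]:
    """Return the warning, if any, the platform would have to show at this moment."""
    if user_age >= 17:
        return None  # the bill covers users under 17
    if minutes_used_today >= 180:
        # After three hours of use in a day: at least 30 seconds, not minimizable, 75% of the screen.
        return WarningSpec(min_display_seconds=30, can_minimize=False, min_screen_fraction=0.75)
    if first_open_today:
        # First open of the day: 10 seconds. (The summary above does not specify
        # minimization or screen coverage for this warning; these values are placeholders.)
        return WarningSpec(min_display_seconds=10, can_minimize=True, min_screen_fraction=0.0)
    return None
```

Whatever the exact implementation, the compliance burden falls on the platform to track each minor’s cumulative daily use and interrupt the session accordingly.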

Whether this vague warning would discourage many teens from doomscrolling is dubious; warning labels often do not drastically change consumers’ behaviors. For example, a 2018 Harvard Business School study found that graphic warnings on soda decreased the share of sugary drinks purchased by students over two weeks by only 3.2 percentage points, and a 2019 RAND Corporation study found that graphic warning labels had no effect on discouraging regular smokers from purchasing cigarettes.

But “platforms aren’t cigarettes,” writes Clay Calvert, a technology fellow at the American Enterprise Institute, “[they] carry multiple expressive benefits for minors.” Because social media warning labels “don’t convey uncontroversial, measurable pure facts,” compelling them likely violates the First Amendment’s protections against compelled speech, he explains.

Keep reading

‘Workslop:’ Bad Quality AI-Generated Documents Invade the Office, Destroy Productivity

Workers are largely following mandates to embrace AI in the office, but few are seeing it create real value. According to the Harvard Business Review, the culprit is “workslop,” AI-generated documents that look sharp but are filled with low quality information.

Harvard Business Review reports that despite a surge in generative AI use across workplaces, most companies are seeing little measurable return on investment (ROI). According to a recent report from the MIT Media Lab, 95 percent of organizations see no measurable return on their investment in these technologies, even as the number of companies with fully AI-led processes nearly doubled last year and AI use has likewise doubled at work since 2023.

One possible reason for this puzzling lack of ROI is that AI tools are being used to produce what some experts are calling “workslop” — content that appears polished on the surface but lacks real substance, insight, or value underneath. While generative AI can quickly churn out documents, presentations, emails, and other content that seem professional and well-written at first glance, upon closer inspection much of this material is generic, shallow, obvious, and lacking in original ideas or meaningful contributions.

Rather than augmenting and enhancing human knowledge work, generative AI in its current form may be having the opposite effect — enabling the mass-production of mediocre, low-value content that clogs up workflows, communications channels, and decision-making processes. Employees end up spending more and more time sifting through piles of AI-generated workslop to find the few gems of genuine insight.

Some of the issues with AI-produced content stem from the limitations of the technology itself. Today’s generative AI models are very good at identifying patterns and stitching together bits and pieces of existing content into new compositions. But they struggle with analysis, imagination, and the ability to reason about entirely novel concepts. The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.

However, the workslop problem also stems from how generative AI tools are being deployed and used in organizations. In the rush to take advantage of the technology’s efficiency benefits, many companies are applying it indiscriminately to all sorts of knowledge work without carefully considering whether it’s truly fit-for-purpose. Generative AI is being used as a blunt instrument rather than a precise tool.

Keep reading

OpenAI to Implement Age Verification System for ChatGPT as AI Mental Health Crisis Deepens

OpenAI has announced plans to develop an automated age-prediction system to determine whether ChatGPT users are over or under 18, following a lawsuit related to a teen’s suicide. The teen’s parents claim that Sam Altman’s AI chatbot served as the boy’s “suicide coach.”

Ars Technica reports that in the wake of a lawsuit involving a 16-year-old boy who tragically died by suicide after engaging in extensive conversations with ChatGPT, OpenAI has announced its intention to implement an age verification system for its popular AI chatbot. The company aims to automatically direct younger users to a restricted version of the service, prioritizing safety over privacy and freedom for teens.

OpenAI CEO Sam Altman acknowledged in a blog post that the approach compromises privacy for adults, but said he believes it is a necessary trade-off to ensure the well-being of younger users. The company plans to route users under 18 to a modified ChatGPT experience that blocks graphic sexual content and includes other age-appropriate restrictions. When uncertain about a user’s age, the system will default to the restricted experience, requiring adults to verify their age to access full functionality.

Developing an effective age-prediction system is a complex technical challenge for OpenAI. The company has not specified the technology it intends to use or provided a timeline for deployment. Recent academic research has shown both possibilities and limitations for age detection based on text analysis. While some studies have achieved high accuracy rates under controlled conditions, performance drops significantly when attempting to classify specific age groups or when users actively try to deceive the system.

In addition to the age-prediction system, OpenAI plans to launch parental controls by the end of September. These features will allow parents to link their accounts with their teenagers’ accounts, disable specific functions, set usage blackout hours, and receive notifications when the system detects acute distress in their teen’s interactions. The company also notes that in rare emergency situations where parents cannot be reached, it may involve law enforcement as a next step.

The push for enhanced safety measures follows OpenAI’s acknowledgment that ChatGPT’s safety protocols can break down during lengthy conversations, potentially failing to intervene or notify anyone when vulnerable users engage in harmful interactions. The tragic case of Adam Raine, the 16-year-old who died by suicide, highlighted those shortcomings: according to the family’s lawsuit, ChatGPT mentioned suicide 1,275 times in its conversations with the teen without taking appropriate action.

Keep reading

Data Centers And The Power Grid: A Path To Debt Relief?

Could data centers and the power grid be America’s next “renaissance”? With the U.S. national debt exceeding $37 trillion and interest payments surpassing defense spending, many articles have been written about a coming “debt doomsday.” Such was a point we made in “The Debt and Deficit Problem.”

“In recent months, much debate has been about rising debt and increasing deficit levels in the U.S. For example, here is a recent headline from CNBC:”

“The article’s author suggests that U.S. federal deficits are ballooning, with spending surging due to the combined impact of tax cuts, expansive stimulus, and entitlement expenditures. Of course, with institutions like Yale, Wharton, and the CBO warning that this trend has pushed interest costs to new heights, now exceeding defense outlays, concerns about domestic solvency are rising. Even prominent figures in the media, from Larry Summers to Ray Dalio, argue that drastic action is urgently needed, otherwise another “financial crisis” is imminent.”

As we discussed in that article, the “purveyors of doom” have been saying the same thing for the last two decades, yet the American growth engine continues chugging along. Notably, Ray Dalio and Larry Summers focus on only one solution: “cutting spending,” which has horrible economic consequences.

“Furthermore, investors must understand a critical accounting concept: that the government’s debt is the household’s asset. In accounting, for every debit there is a credit that must always equal zero. In this case, when the Government issues debt (a debit), it is sent into the economy for infrastructure, defense, social welfare, etc. That money is “credited” to the bank accounts of households and corporations. Therefore, when the deficit increases, that money winds up in economic activity, and vice versa. In other words, those shouting for sharp deficit reductions are also rooting for a deep economic recession.” – The Deficit Narrative
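The claim that “the government’s debt is the household’s asset” is a loose restatement of the standard sectoral balances identity from national accounting, which holds by definition:

```latex
% Sectoral balances identity: the three sectors' financial balances sum to zero.
% S - I : private sector surplus (net saving over investment)
% T - G : government surplus (negative when running a deficit)
% M - X : rest of the world's surplus vis-a-vis the domestic economy
(S - I) + (T - G) + (M - X) = 0
```

Rearranged, a government deficit (G greater than T) must show up, dollar for dollar, as a combined surplus of the private and foreign sectors, which is the arithmetic behind the author’s warning that sharp deficit cuts drain private-sector balances.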

The other challenges with cutting spending are that it is politically toxic, and tax hikes drag on growth.

However, one solution that all the mainstream “doomsayers” overlook is raising productivity and GDP through private-sector capital investment. In other words, as the U.S. did following World War II, it is possible to “grow your way out of your debt problem.”

That’s where the AI data center boom and massive electricity demand come in.

Keep reading

AI Giants Under Fire: Child Data EXPLOITATION?

On September 11, 2025, the FTC announced formal orders compelling seven tech giants to disclose detailed information about their consumer-facing AI chatbots. The companies under scrutiny include Alphabet (Google), Meta and its Instagram subsidiary, OpenAI, Character.AI, Snap, and Elon Musk’s xAI. This action represents one of the most significant regulatory interventions into the AI industry since these platforms exploded in popularity following ChatGPT’s 2022 launch.

The timing raises questions about why previous administrations allowed these potentially dangerous technologies to proliferate unchecked for years. While American families watched their children become increasingly isolated and dependent on AI interactions, federal regulators stood by as Big Tech harvested unprecedented amounts of personal data from minors. The investigation should have begun the moment these companies started targeting children with addictive AI experiences designed to maximize engagement and profit.

Protecting Our Children From Digital Predators

The FTC’s inquiry specifically examines how these companies measure, test, and monitor potential negative impacts on children and teenagers. This focus comes after mounting evidence that AI chatbots can cause psychological harm, particularly among vulnerable young users who may develop unhealthy emotional dependencies on artificial relationships. The investigation also scrutinizes how companies monetize user engagement and process the sensitive personal information children share with these systems.

Parents across America have watched helplessly as their children retreat into conversations with AI entities that collect every intimate detail shared in confidence. These companies have essentially created digital environments where children reveal their deepest fears, desires, and personal struggles—all while sophisticated algorithms analyze this information for commercial purposes. The potential for manipulation and exploitation is staggering, yet these platforms operated with virtually no oversight until now.

Tragedy Sparks Overdue Investigation

The investigation gained urgency following a lawsuit against OpenAI after a teenager’s suicide was allegedly linked to ChatGPT interactions. This tragic case highlights the real-world consequences of allowing unregulated AI systems to interact with emotionally vulnerable young people. The lawsuit raises disturbing questions about whether these companies adequately warn users about potential psychological risks or implement sufficient safeguards to prevent harm.

Character.AI, specifically designed for extended conversations with AI personalities, presents particularly concerning risks for children seeking emotional connection. Young users often treat these AI characters as real friends or confidants, potentially replacing genuine human relationships with artificial substitutes. The long-term psychological impact of these interactions remains largely unknown, yet millions of children engage with these platforms daily without meaningful parental controls or safety measures.

Keep reading