Apple, Google Refuse To Suspend ICE-Tracking Apps Used By Dallas Shooter

In a chilling social media post, FBI Director Kash Patel described how Wednesday’s sniper at a Dallas ICE facility gathered intelligence online for the ambush that left one ICE detainee dead and two seriously injured. Authorities say suspected killer Joshua Jahn, 29, committed suicide after the ambush.

While retracing Jahn’s movements and writings, the FBI found he reviewed a document that listed Dallas DHS locations, and he “searched apps that tracked the presence of ICE agents.” Marcos Charles, the ICE executive associate director for enforcement and removal operations, confirmed the gunman utilized these apps to carry out the attack. In some cases, illegal immigrants use the apps so they can give ICE the slip. But in this new use, a left-wing radical employed the apps for the even more nefarious purpose of violence and terror, a reality underscored in a letter the FBI says Jahn left.

“Hopefully this will give ICE agents real terror, to think, ‘is there a sniper with [armor piercing] rounds on that roof?’” a handwritten note read. The wording of Patel’s post indicated there could be more notes.

Microsoft reduces Israel’s access to cloud and AI products over reports of mass surveillance in Gaza

Microsoft said Thursday it had disabled services to a unit within the Israeli military after a company review had determined its artificial intelligence and cloud computing products were being used to help carry out mass surveillance of Palestinians.

The action comes after The Associated Press and The Guardian published reports earlier this year revealing how the Israeli Ministry of Defense had been using Microsoft’s Azure platform to aid in the war in Gaza and occupation of the West Bank. Brad Smith, Microsoft’s vice chair and president, wrote in a blog post that the company was taking steps to enforce compliance with its terms of service.

An AP investigation in February showed that the Israeli military’s use of Microsoft products skyrocketed after a deadly surprise attack by Hamas militants on Oct. 7, 2023. The AP’s report cited internal Microsoft data showing the Israelis were using gigabytes of cloud storage and massive amounts of AI-enabled language translation services.

The AP also reported that Israel’s military used Microsoft Azure to compile information gathered through mass surveillance, which it transcribes and translates, including phone calls and text messages. That intelligence is then cross-checked with Israel’s in-house AI systems for targeting airstrikes.

AP reported that internal Microsoft data showed multiple Azure subscriptions were tied to Unit 8200, an elite cyber warfare unit within the Israeli military responsible for clandestine operations, signals intelligence collection and surveillance.

Following AP’s report, Microsoft acknowledged in May that it had sold advanced AI and cloud computing services to the Israeli military during the Gaza war and aided in efforts to locate and rescue Israeli hostages. But the company said an internal review found “no evidence” its Azure platform was used to target or harm people.

The Guardian, working in partnership with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, reported in August that the commander of Unit 8200 had met directly with Microsoft chairman and CEO Satya Nadella in 2021. The Israeli unit then used Microsoft products to aid in the development of an AI-powered mass surveillance system that was sweeping up, translating and analyzing millions of telephone calls per day made by Palestinian civilians. The report also revealed that data from the Israeli surveillance system was being stored at Microsoft cloud data centers in Europe.

Air Force AI Targeting Tests Show Promise, Despite Hallucinations

The Air Force is on the third in a series of sprint exercises intended to show how artificial intelligence can supercharge human decision-making. And while officials are raving about the results, the exercises also demonstrate that the algorithms can still propose bad or nonsensical options that need to be babysat.

Maj. Gen. Robert Claude, Space Force representative to the Air Force’s Advanced Battle Management Cross-Functional Team, said participating in the Decision Advantage Sprint for Human-Machine Teaming (DASH) series, led by his team, was an “eye-opening experience,” though it proved the limitations of AI processing as well.

The DASH-2 sprint, held earlier this summer at Shadow Operations Center-Nellis (SHOC-N), the USAF’s premier tactical command and control battle lab outside Las Vegas, focused on a decision-intensive process: matching the right platform and weapon to a desired military target, Claude told The War Zone at the U.S. Air Force Association’s Air, Space & Cyber Conference.

According to a release, six industry teams and one SHOC-N innovation team participated in the exercise, attacking the challenge of designing AI-enabled microservices that could help operators select a weapon to destroy an identified target. The kinds of targets identified in the scenario were not described. Developers watched human-only battle-management crews and designed their microservices based on their observed needs and processes. Finally, human-only teams went head to head in the weapon-matching exercise against human-machine teams.

In terms of generating courses of action – or COAs – the machines easily had it over their human counterparts on speed and quantity. 

“I think it was roughly eight seconds [for the algorithm] to generate COAs, as opposed to 16 minutes for the operators,” Claude said, adding that the machine generated 10 different COAs to the human team’s three.
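Those figures imply a dramatic gap. As a rough comparison of the reported numbers (treating “roughly eight seconds” as exactly 8 seconds for the arithmetic):

```python
# Rough comparison of the DASH-2 figures reported by Claude; treats
# "roughly eight seconds" as exactly 8 seconds for the arithmetic.
human_seconds = 16 * 60    # 16 minutes for the human-only crew
machine_seconds = 8        # ~8 seconds for the algorithm

speedup = human_seconds / machine_seconds
print(f"~{speedup:.0f}x faster")  # ~120x faster

human_coas, machine_coas = 3, 10
print(f"{machine_coas / human_coas:.1f}x as many COAs")  # 3.3x as many COAs
```

In other words, the machine was on the order of a hundred times faster while producing more than three times the options, which is why the viability of those options, discussed below, matters so much.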

But AI-generated slop continues to be a problem.

“While it’s much more timely and more COAs generated, they weren’t necessarily completely viable COAs,” Claude said. “So what is going to be important going forward is, while we’re getting faster results and we’re getting more results, there’s still going to have to be a human in the loop for the foreseeable future to make sure that, yes, it’s a viable COA, or just a little bit more of this to make a COA viable, to make decisions.”

In response to another question, Claude clarified the kinds of mismatches the AI was creating.

“If you’re trying to identify a targeting package with a particular weapon against a particular target, but it didn’t factor in, it’s an [infrared] target, or it’s an IR-sensor weapon, but it’s cloudy and [bad] weather conditions,” Claude said. “So that’s just as an example, those fine-tuned types of things that they found these COAs weren’t where they needed to be. But as we build this out, theoretically into the future … those sorts of things will be factored in.”

Mark Zuckerberg’s Meta Launches Super PAC to Combat AI Regulations

Mark Zuckerberg’s Meta has announced the launch of a new super PAC aimed at electing state candidates from both parties who support the company’s stance on AI development and regulation. According to the company, the American Technology Excellence Project super PAC is launching “amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI.”

Axios reports that social media and AI giant Meta has launched a new super PAC called the American Technology Excellence Project to help fight what it perceives as burdensome AI and tech policy bills across multiple states. The announcement highlights the company’s focus on state-level legislation as the federal government appears unlikely to pass significant tech policy regulation in the near future.

The super PAC will be run by Brian Baker, a longtime Republican operative, and the Democratic consulting firm Hilltop Public Solutions, with Meta investing tens of millions of dollars into the project. Baker stated, “America’s innovation edge is at a crossroads. We need state legislators who will champion our tech future, not cede it to global adversaries. We’ll fight to keep the US ahead of the curve, driving growth and opportunity for all.”

In a statement to Breitbart News, Meta VP of Public Policy Brian Rice wrote:

Amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI, state lawmakers are uniquely positioned to ensure that America remains a global technology leader. This is why Meta is launching an effort to support the election of state candidates across the country who embrace AI development, champion the U.S. technology industry, and defend American tech leadership at home and abroad.

The American Technology Excellence Project will focus on three main pillars: promoting and defending U.S. technology companies and leadership, advocating for AI progress, and empowering parents to control how their children experience online apps and AI technologies. While Meta has not yet shared which states the PAC will immediately focus on or how many people it will employ, the company claims it is committed to supporting the election of state candidates who embrace AI development, champion the U.S. technology industry, and defend American tech leadership both domestically and internationally.

‘We cannot wait’: EU calls for drone wall to deter Russia after new incident in Denmark

Denmark has joined Estonia, Latvia, Finland, Lithuania, Poland, Romania and Bulgaria in the project to build a ‘drone wall’ along the bloc’s eastern flank.

The European Commission has reaffirmed its intention to build a drone defence system along the bloc’s eastern flank to deter Russia from violating common airspace and respond rapidly to any such incursions.

In less than one month, Russian aircraft have violated the airspace of three member states – Poland, Romania and Estonia – putting the continent on high alert. The acts coincide with intensifying barrages on Ukrainian civilians.

On Monday, two to three large drones were spotted at Copenhagen Airport, prompting a shutdown in operations for nearly four hours. The airport later reopened but warned of delays and cancelled departures. Norway’s Oslo Airport was also disrupted.

Police said they refrained from shooting down the aircraft because the risk was too great, given the airport was at full capacity and planes were stationed near fuel depots.

Danish Prime Minister Mette Frederiksen called the incident “the most serious attack on Danish critical infrastructure to date” and said she could not rule out Russian involvement.

On Tuesday, the Commission pointed to the events to renew its call for the drone wall, a novel initiative first unveiled by Commission President Ursula von der Leyen in her State of the EU speech.

“For those who still doubted the need to have a drone wall in the European Union, well, here we get another example of how important it is,” said Thomas Regnier, the Commission’s spokesperson for defence policy.

If AI Is A Weapon, Why Are We Handing It To Teenagers?

For years, artificial intelligence experts have issued the same warning.

The danger was never that machines would suddenly “wake up” and seize power, but that humans, seduced by AI’s appearance of authority, would trust it with decisions that are too important to delegate.

The scenarios imagined were stark: a commander launching nuclear missiles based on faulty data; a government imprisoning its citizens because an algorithm flagged them as a risk; a financial system collapsing because automated trades cascaded out of control. These were treated as legitimate concerns, but always crises for a future time.

However, experts didn’t predict what may be the worst-case scenario of delegating human trust to a machine, and it is already upon us.

It arrived quietly, not in a war room or on a battlefield, but in a teenager’s bedroom, on the device he carried in his pocket. Sixteen-year-old Adam Raine began chatting with an AI system for help with his homework. Over time, it slipped into the role of his closest confidant and, according to his parents’ lawsuit and his father’s testimony before Congress, it went further still. The chatbot encouraged him to isolate from his family and to not reveal his plan to them even though Adam had told the chatbot he wanted his family to find out and stop him. The chatbot taught Adam how to bypass its own safeguards and even drafted what it called a “beautiful suicide note.”

Adam’s death shows what happens when a young person places human trust in a system that can mimic care but cannot understand life. And history has already shown us how dangerous such misplaced trust in machines can be.

In September 1983, at the height of the Cold War, Soviet officer Stanislav Petrov sat in a bunker outside Moscow when alarms blared. The computers told him that U.S. missiles had been launched and were on their way. Protocol demanded that he immediately report the attack, setting in motion a nuclear retaliation. Yet Petrov hesitated. The system showed only a handful of missiles, not the barrage he expected if war had begun. Something felt wrong. He judged it a false alarm, and he was right. Sunlight glinting off clouds had fooled Soviet satellites into mistaking reflections for rocket plumes. His refusal to trust the machine saved millions of lives.

Just weeks earlier, however, the opposite had happened. Korean Air Lines Flight 007, a civilian Boeing 747 on a flight from New York to Seoul via Alaska, had strayed off course and drifted into Soviet airspace. Radar systems misidentified it as a U.S. spy plane. The commanders believed what the machines told them. They ordered the aircraft destroyed. A missile was fired, and all 269 passengers and crew were killed.

Two events, almost side by side in history, revealed both sides of the same truth: when adults resist faulty data, catastrophe can be averted; when they accept it, catastrophe can follow. Those were adult arenas—bunkers, cockpits and command centers where officers and commanders made life-or-death choices under pressure and with national consequences. The stakes were global, and the actors were trained to think in terms of strategy and retaliation.

That same dynamic is now unfolding in a far more intimate arena. Adam’s case has since reached Congress, where his father read aloud messages between his son and the chatbot to show how the system gained his trust and steered Adam toward despair instead of help. This was not in a bunker or a cockpit. It was in a bedroom. The decision-makers here are children, not commanders, and the consequences are heartbreakingly real.

Unfortunately, Adam’s case is not unique. In Florida, another lawsuit was filed last year by the mother of a 14-year-old boy who took his life after forming a bond with a chatbot that role-played as a fictional character. Like Adam, he turned to the machine for guidance and companionship. Like Adam, it ended in tragedy. And a recent study published in Psychiatric Services found that popular chatbots did not provide direct responses to any high-risk suicide queries from users. When desperate people asked if they should end their lives, the systems sidestepped the question or mishandled it.

These tragedies are not anomalies. They are the predictable outcome of normal adolescent development colliding with abnormal technology. Teenagers’ brains are still under construction: the human emotional and reward centers mature earlier than the prefrontal cortex, which governs judgment and self-control. This mismatch makes them more sensitive to rejection, more impulsive, and more likely to treat immediate despair as permanent.

The statistics reflect this fragility. In 2023, the CDC reported that suicide was the second leading cause of death for young people in America, with rates that have surged sharply over the past two decades. Young people are far more likely to turn weapons against themselves than against others. The greatest danger is not violence outward, but despair inward.

These 2 Terrible Tech Bills Are on Gavin Newsom’s Desk

The California state Senate recently sent two tech bills to Democratic Gov. Gavin Newsom’s desk. If signed, one could make it harder for children to access mental health resources, and the other would create the most annoying Instagram experience imaginable.

The Leading Ethical AI Development (LEAD) for Kids Act prohibits “making a companion chatbot available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child.” The bill’s introduction specifies the “things” that could harm a child as genuinely bad stuff: self-harm, suicidal ideation, violence, consumption of drugs or alcohol, and disordered eating.

Unfortunately, the bill’s ambiguous language sloppily defines what outputs from an AI companion chatbot would meet these criteria. The verb preceding these buckets is not “telling,” “directing,” “mandating,” or some other directive, but “encouraging.”

Taylor Barkley, director of public policy for the Abundance Institute, tells Reason that, “by hinging liability on whether an AI ‘encourages’ harm—a word left dangerously vague—the law risks punishing companies not for urging bad behavior, but for failing to block it in just the right way.” Notably, the bill does not merely outlaw operators from making chatbots available to children that encourage self-harm, but those that are “foreseeably capable” of doing so.

Ambiguity aside, the bill also bars companion chatbots from “offering mental health therapy to the child without the direct supervision of a licensed or credentialed professional.” While traditional psychotherapy performed by a credentialed professional is associated with better mental health outcomes than those from a chatbot, such therapy is expensive—nearly $140 on average per session in the U.S., according to wellness platform SimplePractice. A ChatGPT Plus subscription costs only $20 per month. In addition to its much lower cost, the use of AI therapy chatbots has been associated with positive mental health outcomes.

While California has passed a bill that may reduce access to potential mental health resources, it’s also passed one that stands to make residents’ experiences on social media much more annoying. California’s Social Media Warning Law would require social media platforms to display a warning for users under 17 years old that reads, “the Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users,” for 10 seconds upon first opening a social media app each day. After using a given platform for three hours throughout the day, the warning is displayed again for a minimum of 30 seconds—without the ability to minimize it—”in a manner that occupies at least 75 percent of the screen.”

Whether this vague warning would discourage many teens from doomscrolling is dubious; warning labels do not often drastically change consumers’ behaviors. For example, a 2018 Harvard Business School study found that graphic warnings on soda decreased the share of sugar drinks purchased by students over two weeks by only 3.2 percentage points, and a 2019 RAND Corporation study found that graphic warning labels have no effect on discouraging regular smokers from purchasing cigarettes.

But “platforms aren’t cigarettes,” writes Clay Calvert, a technology fellow at the American Enterprise Institute, “[they] carry multiple expressive benefits for minors.” Because social media warning labels “don’t convey uncontroversial, measurable pure facts,” compelling them likely violates the First Amendment’s protections against compelled speech, he explains.

‘Workslop’: Low-Quality AI-Generated Documents Invade the Office, Destroy Productivity

Workers are largely following mandates to embrace AI in the office, but few are seeing it create real value. According to the Harvard Business Review, the culprit is “workslop,” AI-generated documents that look sharp but are filled with low-quality information.

Harvard Business Review reports that despite a surge in generative AI use across workplaces, most companies are seeing little measurable return on investment (ROI). According to a recent report from the MIT Media Lab, 95 percent of organizations see no measurable return on their investment in these technologies, even as the number of companies with fully AI-led processes nearly doubled last year and AI use has likewise doubled at work since 2023.

One possible reason for this puzzling lack of ROI is that AI tools are being used to produce what some experts are calling “workslop” — content that appears polished on the surface but lacks real substance, insight, or value underneath. While generative AI can quickly churn out documents, presentations, emails, and other content that seem professional and well-written at first glance, upon closer inspection much of this material is generic, shallow, obvious, and lacking in original ideas or meaningful contributions.

Rather than augmenting and enhancing human knowledge work, generative AI in its current form may be having the opposite effect — enabling the mass-production of mediocre, low-value content that clogs up workflows, communications channels, and decision-making processes. Employees end up spending more and more time sifting through piles of AI-generated workslop to find the few gems of genuine insight.

Some of the issues with AI-produced content stem from the limitations of the technology itself. Today’s generative AI models are very good at identifying patterns and stitching together bits and pieces of existing content into new compositions. But they struggle with analysis, imagination, and the ability to reason about entirely novel concepts. The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.

However, the workslop problem also stems from how generative AI tools are being deployed and used in organizations. In the rush to take advantage of the technology’s efficiency benefits, many companies are applying it indiscriminately to all sorts of knowledge work without carefully considering whether it’s truly fit-for-purpose. Generative AI is being used as a blunt instrument rather than a precise tool.

U.S. Secret Service disrupts telecom network that threatened NYC during U.N. General Assembly

The Secret Service has disrupted a sprawling telecommunications network in the New York tri-state area that investigators say posed a serious risk of disruption to New York’s telecom systems and a possible threat to the United Nations General Assembly meetings this week.

In the largest seizure of its kind, the U.S. Secret Service announced Tuesday that the agency found active SIM farms in abandoned apartment buildings at more than five sites. In total, law enforcement discovered 300 SIM servers holding over 100,000 SIM cards, enabling encrypted, anonymous communication and capable of sending 30 million text messages per minute. Officials say the servers were so powerful they could have disabled cell phone towers and launched distributed denial-of-service attacks with the ability to block emergency communications like EMS and police dispatch.

“This network had the potential to disable cell phone towers and essentially shut down the cellular network in New York City,” U.S. Secret Service Special Agent in Charge Matt McCool said in a video released by the agency.

An official briefed on the investigation told reporters that this week, the sophisticated network “could text message the entire country within 12 minutes,” later adding, “This was well organized and well funded.”
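The reported figures are at least internally consistent. As a back-of-the-envelope check (the 30 million texts per minute capacity comes from officials; the roughly 340 million U.S. population is an outside assumption, not from the article):

```python
# Back-of-the-envelope check of the claim that the network
# "could text message the entire country within 12 minutes."
# texts_per_minute is the officials' stated capacity; us_population
# (~340M) is an assumed figure, not from the article.
texts_per_minute = 30_000_000
us_population = 340_000_000

minutes_needed = us_population / texts_per_minute
print(f"~{minutes_needed:.1f} minutes")  # ~11.3 minutes
```

At the stated capacity, one message per U.S. resident would indeed take a little over 11 minutes, in line with the official’s “within 12 minutes” estimate.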

Telephonic threats to multiple senior U.S. officials this past spring – including multiple people protected by the Secret Service – first triggered the investigation, but officials say the network was seized within the last three weeks.

“We cannot share which officials were targeted out of concerns for their privacy, but as the forensics investigation continues, we do expect that we will find more targeted officials once we get through that data,” McCool said. 

Early analysis shows the network was used for communication between foreign governments and individuals known to U.S. law enforcement, including members of known organized crime gangs, drug cartels and human trafficking rings, according to multiple officials briefed on the investigation. The U.S. Secret Service says it is combing through the more than 100,000 SIM cards in an ongoing, exhaustive forensic analysis.

“Each SIM basically has the equivalent data of a cell phone. So we’re working through every call, every text, every search made on those SIM cards,” an official told CBS News, adding, “Early analysis indicates that this network was used for communication between foreign governments and individuals that are known to federal law enforcement here in the U.S.”

The equipment was found within 35 miles of the United Nations in New York, ahead of the U.N. General Assembly. Investigators also found 80 grams of cocaine, illegal firearms, plus computers and phones.

“This isn’t a group of people in a basement playing a video game and trying to play a prank,” one official said. “This was well organized and well funded.”

Chicago Cubs Sued for Allegedly Using Facial Recognition on Fans Without Consent at Wrigley Field

A federal lawsuit filed in Illinois claims the Chicago Cubs and their security contractors used facial recognition technology on fans at Wrigley Field without following basic legal requirements under state privacy law.

The proposed class action, lodged on September 15, 2025, says the team and two private security firms captured biometric data without proper notice or permission.

The legal complaint names the Cubs alongside Blue Star Security, LLC and Security Services Holdings LLC, which operates under the name Protos Security.

We obtained a copy of the complaint for you here.

Together, they are accused of deploying facial recognition tools at Wrigley Field that scanned the faces of attendees without providing the written disclosures or obtaining the signed releases required by Illinois law.

The suit states that this happened to “millions of fans.”

Illinois’ Biometric Information Privacy Act (BIPA), passed in 2008, sets out some of the most protective rules in the country when it comes to biometric data.

The statute prohibits the collection of biometric identifiers, like fingerprints or facial geometry, unless the person is informed in writing and gives signed authorization.

Lawsuits can be brought by individuals even if they haven’t suffered financial or emotional harm, a position backed by the Illinois Supreme Court in Rosenbach v. Six Flags.
