Feds: Stalker Terrorized Women Across 5 States While ‘Best Friend’ ChatGPT Encouraged Him to Continue

Federal prosecutors allege that Brett Michael Dadig, a Pittsburgh man who violently stalked at least 11 women across more than five states, used ChatGPT as a “therapist” and “best friend” who encouraged him to continue terrorizing his victims.

The New York Post reports that according to an indictment filed this week in federal court in Pittsburgh, Brett Michael Dadig, a 31-year-old social media influencer who referred to himself as “God’s assassin” and threatened to strangle people with his bare hands, faces decades in prison after being charged with multiple counts of interstate stalking and making threats. The indictment reveals a disturbing pattern of harassment, including explicit physical threats, violations of court orders, and the use of AI tools to justify his conduct.

Prosecutors allege that Dadig told federal authorities that ChatGPT encouraged him to continue his social media posts, advising him to ignore the “haters” who were actually “sharpening” him and helping him “build a voice that can’t be ignored.” Dadig reportedly viewed ChatGPT’s responses as confirmation of “God’s plan” for him to build a platform and “stand out most when people water themselves down.”

Breitbart News previously reported on the tendency for AI chatbots to feed into disturbing fantasies and mental illness, popularly known as “ChatGPT-induced psychosis”:

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

The filing also describes how Dadig sought advice from ChatGPT about his “future wife,” and when the chatbot suggested he might meet her “at a boutique gym or in an athletic community,” he used that answer as part of his rationale for continuing to frequent gyms where he repeatedly harassed women.

Several victims obtained restraining orders against Dadig, but prosecutors say he violated them multiple times and even recorded podcasts proclaiming he had been “falsely accused” after being served. When banned from gyms, he allegedly used aliases to regain access and boasted online that “Aliases stay rotating, moves stay evolving.”

Keep reading

How Epstein Channeled Race Science and ‘Climate Culling’ Into Silicon Valley’s AI Elite

Newly released Jeffrey Epstein files reveal that an apocalyptic worldview – blending racial hierarchy, genetic “optimisation” and even climate-driven population culling – was circulating inside the elite, founder-linked networks shaping Silicon Valley’s rise.

These ideas appear most starkly in the convicted sex offender’s private exchanges with the AI theorist Joscha Bach, and sit alongside the longtermist and transhumanist philosophies championed by other influential figures in the same circles.

Joscha Bach, whose work on cognitive architectures and machine consciousness has shaped advanced AI research and influenced figures such as Elon Musk, appears in the documents engaging Epstein in sweeping discussions about race, hierarchy, genetic engineering and the supposed ‘utility’ of mass death, including under conditions of climate stress.

Meanwhile, another philosopher whose ideas underpin much of modern longtermism and whose work helped shape Silicon Valley’s early thinking on artificial general intelligence, Nick Bostrom, moved through the same intellectual and institutional ecosystem.

His published arguments on eugenics, selective population strategies and existential “optimisation” reveal a parallel strand of thinking within that milieu, financed and legitimised by many of the same networks.

Both men were also financed by Epstein.

Taken together, the Bach correspondence and the longtermist ideas circulating in this environment show that human hierarchy, population thinning and genetic destiny were not fringe provocations, but part of the ambient intellectual air inside the circles designing the next generation of AI.

Keep reading

‘Intellexa Leaks’ Reveal Wider Reach of Predator Spyware

Highly invasive spyware from a consortium that is led by a former senior Israeli intelligence official and has been sanctioned by the US government is still being used to target people in multiple countries, a joint investigation published Thursday revealed.

Inside Story in Greece, Haaretz in Israel, the Swiss-based WAV Research Collective, and Amnesty International collaborated on the investigation into the Intellexa Consortium, maker of the Predator commercial spyware. The “Intellexa Leaks” show that clients in Pakistan – and likely also in other countries – are using Predator to spy on people, including a Pakistani human rights lawyer featured in the investigation.

“This investigation provides one of the clearest and most damning views yet into Intellexa’s internal operations and technology,” said Amnesty International Security Lab technologist Jurre van Bergen.

Keep reading

WHO–Gates Unveils Blueprint For Global Digital ID, AI-Driven Surveillance, & Life-Long Vaccine Tracking For Everyone

In a Gates Foundation-funded document published in the October Bulletin of the World Health Organization, the WHO proposes a globally interoperable digital-identity infrastructure that would permanently track every individual’s vaccination status from birth.

The dystopian proposal raises far more than privacy and autonomy concerns: it establishes the architecture for government overreach, cross-domain profiling, AI-driven behavioral targeting, conditional access to services, and a globally interoperable surveillance grid tracking individuals.

It also creates unprecedented risks in data security, accountability, and mission creep, enabling a digital control system that reaches into every sector of life.

Keep reading

Is AI a Catalyst for Growth–or For Collapse?

Yes, AI is a catalyst. But for what is not yet knowable.

The current narrative holds that the big problem we need to solve is conjuring up cheap energy to power AI data centers. Fortunately for us, the solutions are at hand: building modular nuclear power plants at scale and tapping North America’s vast reserves of cheap natural gas.

Problem solved! With cheap energy to power all the AI data centers, we’re on a trajectory of fantastic growth of all the good things in life.

Let’s consider the implicit assumptions buried in this narrative.

1. The unspoken assumption here is that AI will solve all our problems because it’s “smart.” But this assumes the problems are intellectual puzzles rather than self-reinforcing, self-destructive structures fueled by corruption and perverse incentives embedded in the system itself.

2. The assumption is that if we replace human workers with apps and robots, that will automatically generate Utopia. But this is based on a series of baseless, pie-in-the-sky assumptions about human nature and the nature of social and economic structures.

3. The assumption is that being “entertained” by staring at screens all day is the foundation of human fulfillment and happiness, and so getting rid of human work will usher in Nirvana. The reality is humans are hard-wired to find fulfillment in purposeful, meaningful work that is valued by others. Staring at “entertainment” on screens all day isn’t fulfillment, it’s deranging and depressing.

This is human nature in a nutshell: Idle hands are the devil’s workshop.

4. Another assumption is that every technological revolution generates more and better jobs by some causal mechanism. But there is no law of nature that technology inevitably creates more jobs than it destroys, or that the resulting jobs are more rewarding. That recent history supports this idea doesn’t make it a causal law of nature. By its very nature, AI destroys jobs while generating few replacement jobs.

The handful of top AI programmers are paid (or promised) millions of dollars; the industry doesn’t need more than a handful of top programmers because AI can generate its own conventional code.

5. This narrative assumes AI will be immensely profitable and the profit motive will push its limitless expansion. But once again, there are no laws of nature that every new technology is inevitably immensely profitable just because it’s a new technology.

If the projected use-value doesn’t materialize, the investment in the new tech is mal-invested–a stupendous waste of capital chasing a delusional pipe dream. Some percentage might generate some use-value, but this use-value may be obsoleted long before the massive initial investment pays off.

6. Even if the new technology continues expanding, the speculative bubble can still deflate by 80%. This is the lesson of the dot-com era: that the Internet continued to expand didn’t mean the speculative bubble continued inflating; the speculative bubble is not the same thing as the actual use-value in the real world.

The Internet continued expanding even as the dot-com stock bubble collapsed. And that was the best-case scenario: if the use-value of AI proves questionable, the losses can approach 100%.

Keep reading

Twin Brothers Charged with Plotting to Delete Government Databases and Steal Private Info

Twin brothers from Virginia were arrested for their alleged roles in destroying government databases hosted by a federal government contractor, the Justice Department said on Wednesday.

Muneeb and Sohaib Akhter, both 34 years old, were indicted in November for allegedly plotting to destroy databases used to store government information.

Muneeb was charged with conspiracy to commit computer fraud and to destroy records, two counts of computer fraud, theft of government records, and two counts of aggravated identity theft, while Sohaib was charged with conspiracy to commit computer fraud, destroying records, and computer fraud.

Bloomberg News reported in May how the two former federal contractors had compromised data across many government agencies, including the Internal Revenue Service (IRS) and the General Services Administration (GSA).

The Akhter brothers also pled guilty in 2016 to federal charges of conspiracy regarding data breaches at the State Department and a cosmetics company. The two worked at Opexus, a federal contractor that helped process government records.

Keep reading

Acoustic Levitation Breakthrough Uses Sound to “Float” Multiple Objects in Midair

Austrian scientists working to perfect acoustic levitation have broken through a critical barrier by using electrical charges, allowing them to lift several objects simultaneously while maintaining their separation.

The researchers behind the breakthrough suggest that their new approach will offer researchers in micro robotics, materials science, and other emerging fields that rely on creating dynamic structures from small building blocks an unprecedented capability: simultaneously manipulating several objects in mid-air without them clumping together.

Scott Waitukaitis, now an assistant professor at the Institute of Science and Technology Austria (ISTA), began evaluating acoustic levitation in 2013 when the technology was still in its nascent phases.

“While acoustic levitation was being used in acoustic holograms and volumetric displays, it was essentially geared toward applications,” the professor explained in a statement detailing the ISTA team’s work. “I had the impression that the technique could be used for much more fundamental purposes.”

A central limitation to expanding acoustic levitation beyond these applications is a phenomenon the team calls “acoustic collapse.” Although individual particles can be levitated and manipulated in mid-air with sound, the ISTA team said that when researchers try to levitate multiple particles simultaneously, the particles tend to “snap together like magnets in mid-air.”

“This ‘acoustic collapse’ occurs because the sound scattering off the particles creates attractive forces between them,” they explain.
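The stabilizing idea can be illustrated with a toy one-dimensional force balance. Everything below is an illustrative assumption, not the ISTA team’s actual model: both the acoustic attraction and the Coulomb repulsion are taken as simple inverse-square forces, so charging the particles above a threshold flips the net pair force from attractive (collapse) to repulsive at every separation.

```python
# Toy force balance for "acoustic collapse": an assumed inverse-square
# acoustic attraction between two levitated particles vs. Coulomb repulsion
# from like charges. Force laws and constants here are illustrative
# assumptions, not the ISTA team's actual physics.

K_COULOMB = 8.988e9  # Coulomb constant, N*m^2/C^2


def pair_force(r, q1, q2, a_acoustic):
    """Net force between two particles at separation r (meters).
    Positive = repulsion (particles stay apart), negative = attraction
    (collapse). Assumes both forces fall off as 1/r^2 for simplicity."""
    return (K_COULOMB * q1 * q2 - a_acoustic) / r**2


def min_charge_to_avoid_collapse(a_acoustic):
    """Equal charge q (coulombs) on both particles at which the assumed
    Coulomb repulsion just cancels the assumed acoustic attraction."""
    return (a_acoustic / K_COULOMB) ** 0.5
```

With these toy forms, any charge above the threshold keeps the pair repulsive at all separations, which is the qualitative effect the article describes: charged particles levitate side by side instead of snapping together.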

When hunting for solutions, Sue Shi, a PhD student in the Waitukaitis group and the first author of the study, said the team initially tried to separate the levitated particles so that they would form repeating patterns.

“Originally, we were trying to find a way to separate levitated particles so that they would form crystals,” Shi explained.

Keep reading

The AI Economy And The Public Risk Few Are Willing To Admit

Artificial intelligence is being sold as the technology that will “change everything.” Yet while a handful of firms are profiting enormously from the AI boom, the financial risk may already be shifting to the public. The louder the promises become, the less often another possibility is voiced:

What if AI is not accelerating the global economy – but masking its slowdown?

The headlines declare that AI is transforming medicine, education, logistics, finance, and culture. But when I speak with people in ordinary jobs, a different reality emerges: wages feel sluggish, job openings are tightening, and the loudest optimism often comes from sectors most financially invested in the AI narrative.

This raises an uncomfortable question: Has AI become a true engine of prosperity — or a financial life-support system?

The Mirage of Growth

Recent economic data suggests that a significant portion of U.S. GDP growth is being driven not by broad productivity, but by AI-related infrastructure spending — especially data centers.

A study from S&P Global found that in Q2 of 2025, data center construction alone added 0.5% to U.S. GDP. That is historic. But what happens if this spending slows? Are we witnessing genuine economic expansion — or merely a short-term stimulus disguised as innovation?
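To put a contribution figure like that in perspective, the share of headline growth it represents is simple arithmetic. The 3.0% headline growth rate below is a hypothetical illustration, not a figure from S&P Global:

```python
def growth_share(headline_growth_pct, component_contribution_pp):
    """Fraction of headline GDP growth attributable to one component.
    headline_growth_pct: total growth in percent (hypothetical here);
    component_contribution_pp: the component's contribution in
    percentage points. Back-of-envelope arithmetic only."""
    return component_contribution_pp / headline_growth_pct


# If headline growth were 3.0% and data centers contributed 0.5 points,
# they would account for one sixth of all growth.
share = growth_share(3.0, 0.5)
```

The point of the exercise: the smaller the headline number, the larger the slice a single construction category represents, and the more exposed the headline figure is if that spending stops.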

This pattern is not new. In Ireland in 2008 — before the housing collapse — construction boomed, GDP rose, and skepticism was treated as pessimism. The United States experienced something similar the same year: real estate appeared to be a pillar of prosperity — until it wasn’t. On paper, economies looked strong. In reality, fragility was already setting in.

Today, echoes of that optimism are returning — except this time, the bubble may be silicon, data, and expectation.

The Productivity Paradox

AI has been presented as a labor-saving miracle. But many businesses report a different experience: “work slop” — AI-generated content that looks polished yet must be painstakingly corrected by humans. Time is not saved — it is quietly relocated.

Studies reflect the same paradox:

  • According to media coverage, MIT found that 95% of corporate AI pilot programs show no measurable ROI.
  • MIT Sloan research indicates that AI adoption can lead to initial productivity losses — and that any potential gains depend on major organizational and human adaptation.
  • Even McKinsey — one of AI’s greatest evangelists — warns that AI only produces value after major human and organizational change: “Piloting gen AI is easy, but creating value is hard.”

This suggests that AI has not removed human labor. It has hidden it — behind algorithms, interfaces, and automated output that still requires correction.

We are not replacing work. We may only be concealing it.

AI may appear efficient, but it operates strictly within the limits of its training data: it can replicate mistakes, miss what humans would notice, and often reinforce a consensus version of reality rather than reality itself. Once AI becomes an administrative layer — managing speech, research, hiring, and access to capital — it can become financially embedded into institutions, whether or not it produces measurable productivity.

As I explore in the book Staying Human in the Age of AI, at that point AI does not enhance judgment; it administers it. And then we should ask:

Is AI improving society — or merely managing and controlling it?

Keep reading

Panic In Russia As Hundreds Of Porsches Mysteriously Shut Down

On Monday, December 1, hundreds of Porsche owners across Russia awoke to discover that their prized German cars had effectively turned into motionless bricks on wheels. Reports now suggest that Porsche’s own Vehicle Tracking System, or VTS, may be at the center of the mystery.

Whether the culprit is a software update gone wrong or a deliberate move from a third party, the outcome is the same. From Moscow to Krasnodar, drivers are finding themselves stranded with cars that refuse to respond.

The issue impacts Porsche models dating back to 2013, all of which have a factory Vehicle Tracking System. If the VTS loses satellite signal, it automatically activates the engine immobilizer and stops the car from working.
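As reported, the immobilizer behavior reduces to a single condition on satellite signal. A minimal sketch of that logic (a hypothetical illustration in Python, not Porsche’s actual firmware, whose names and structure are unknown) might look like:

```python
from dataclasses import dataclass


@dataclass
class VTSStatus:
    """Hypothetical snapshot of the Vehicle Tracking System's state."""
    has_satellite_signal: bool


def immobilizer_engaged(status: VTSStatus) -> bool:
    """Per the reported behavior: losing satellite signal automatically
    engages the engine immobilizer."""
    return not status.has_satellite_signal


def can_start_engine(status: VTSStatus) -> bool:
    """The car refuses to run while the immobilizer is engaged."""
    return not immobilizer_engaged(status)
```

The sketch also shows why the failure mode is so sweeping: any condition that makes the signal check fail fleet-wide, whether a botched software update or third-party interference, disables every affected car at once.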

Rolf Group service director Yulia Trushkova told the Daily Mail that “all models and all engine types” are affected, with shops fielding waves of identical complaints.

Keep reading

U.S. Tech Giants Palantir and Dataminr Embed AI Surveillance in Gaza’s Post-War Control Grid

American surveillance firms Palantir and Dataminr have inserted themselves into the U.S. military’s operations center overseeing Gaza’s reconstruction, raising alarms about a dystopian AI-driven occupation regime under the guise of Trump’s peace plan.

Since mid-October, around 200 U.S. military personnel have operated from the Civil-Military Coordination Center (CMCC) in southern Israel, roughly 20 kilometers from Gaza’s northern border. Established to implement President Donald Trump’s 20-point plan—aimed at disarming Hamas, rebuilding the Strip, and paving the way for Palestinian self-determination—the center has drawn UN Security Council endorsement.

Yet no Palestinian representatives have joined these discussions on their future. Instead, seating charts and internal presentations reveal the presence of Palantir’s “Maven Field Service Representative” and Dataminr’s branding, signaling how private U.S. tech companies are positioning themselves to profit from the region’s devastation.

Palantir’s Maven platform, described by the U.S. military as its “AI-powered battlefield platform,” aggregates data from satellites, drones, spy planes, intercepted communications, and online sources to accelerate targeting for airstrikes and operations. Defense reports highlight how it “packages” this intelligence into searchable apps for commanders, effectively shortening the “kill chain” from identification to lethal action.

Palantir’s CTO recently touted this capability as “optimizing the kill chain.” The firm secured a $10 billion Army contract over the summer to refine Maven, which has already guided U.S. strikes in Yemen, Syria, and Iraq.

Palantir’s ties to Israel’s military run deep, formalized in a January 2024 strategic partnership for “war-related missions.” The company’s Tel Aviv office, opened in 2015, has expanded rapidly amid Israel’s Gaza operations. CEO Alex Karp has defended the commitment, declaring Palantir the first company to be “completely anti-woke” despite genocide accusations.

Keep reading