The Epstein Transhumanist Conspiracy: How Game B’s “Conscious Evolution” Hides a Eugenics-Fueled Technocratic Takeover

Exposing the Shadowy Elite Network of DNA Seeding, Spiritual Culling, and Noospheric Control – From Epstein’s Island to AI Godhood

Imagine unlocking the secrets of human “source code” to rewrite our destiny—transcending scarcity, competition, and even death itself. Sounds like a sci-fi utopia, right? But what if this vision is a Trojan horse for elite control, funded by Jeffrey Epstein and amplified by Silicon Valley titans? Dive into the shadowy origins of Game B, a movement promising collective flourishing but potentially delivering a dystopian nightmare of genetic engineering, AI overlords, and spiritual eugenics. If you’ve ever wondered how the Intellectual Dark Web, the Santa Fe Institute, and Epstein’s billions connect to a plan for “conscious evolution,” keep reading—this exposé will blow your mind.


Meta Found Guilty Of Eavesdropping On Period-Tracker App Users: Jury

A San Francisco jury found on Friday that Meta had eavesdropped on users of the popular period-tracking app Flo.

The lawsuit was filed in 2021 by eight women against Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The app asked users about their sex lives, mental health, and diets before guiding them through menstruation and pregnancy. The women, who based their claims on a 2019 Wall Street Journal story and a 2021 FTC investigation, allege that Flo then shared some of that data with the tech giants, SFGATE reports.

Google, Flo, and analytics company Flurry all settled with the plaintiffs; Meta, however, fought through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of “Custom App Events” — such as a user clicking a particular button in the “wanting to get pregnant” section of the app.
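The mechanics at issue are mundane: an app embeds the SDK, calls a logging function whenever something happens, and the SDK ships a named event plus metadata back to Facebook’s servers. As a purely hypothetical sketch (the field names and the event name below are illustrative, not Flo’s actual code or Meta’s real payload schema), such a custom-event record might look like:

```python
import json
from datetime import datetime, timezone

def build_app_event(event_name, app_id, extra=None):
    """Sketch of the kind of custom app-event record an analytics SDK
    might transmit. All field names here are illustrative, not Meta's schema."""
    return {
        "event_name": event_name,  # e.g. a button tap inside the app
        "app_id": app_id,
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "params": extra or {},
    }

# Even a bare event name can encode sensitive context about the user:
event = build_app_event("clicked_wanting_to_get_pregnant",
                        app_id="hypothetical-app-id")
print(json.dumps(event, indent=2))
```

The privacy claim turns on exactly this pattern: the event name alone, with no “health record” attached, is enough to reveal that a particular user was trying to conceive.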

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health. -SFGate

The jury didn’t buy Meta’s argument, ruling unanimously that Flo’s users had a reasonable expectation that they weren’t being overheard or recorded, and that Meta did not have consent to eavesdrop on or record them. The company was found to have violated California’s Invasion of Privacy Act.

According to a June filing about the case’s class-action status, over 3.7 million women in the United States registered for Flo between November 2016 and February 2019 – with potential claimants expected to be notified via email and on a case website. 


Grok generates fake Taylor Swift nudes without being asked

Backlash over offensive Grok outputs continues, just a couple weeks after the social platform X scrambled to stop its AI tool from dubbing itself “MechaHitler” during an antisemitic meltdown.

Now, The Verge has found that the newest video feature of Elon Musk’s AI model will generate nude images of Taylor Swift without being prompted.

Shortly after Grok Imagine was released on Tuesday, The Verge’s Jess Weatherbed was shocked to discover that the video generator spat out topless images of Swift “the very first time” she used it.

According to Weatherbed, Grok produced more than 30 images of Swift in revealing clothing when asked to depict “Taylor Swift celebrating Coachella with the boys.” Using the Grok Imagine feature, users can choose from four presets—”custom,” “normal,” “fun,” and “spicy”—to convert such images into video clips in 15 seconds.

At that point, all Weatherbed did was select “spicy” and confirm her birth date for Grok to generate a clip of Swift tearing “off her clothes” and “dancing in a thong” in front of “a largely indifferent AI-generated crowd.”

The outputs that Weatherbed managed to generate without jailbreaking or any intentional prompting are particularly concerning, given the major controversy after sexualized deepfakes of Swift flooded X last year. Back then, X reminded users that “posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content.”

“Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” the X Safety account posted. “We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We’re committed to maintaining a safe and respectful environment for all users.”

But X Safety may need to ramp up monitoring to clean up Grok outputs following The Verge’s reporting. Grok itself cited that reporting while confirming that its own seemingly flawed design can trigger partially nude outputs of celebrities.

xAI can likely fix the issue through more fine-tuning. Weatherbed noted that asking Grok directly to generate non-consensual nude images of Swift produced blank boxes rather than offensive outputs. Grok also seemingly won’t accept prompts to alter Swift’s appearance in other ways, such as making her appear to be overweight. And when Weatherbed tested “spicy” mode on images of children, Grok refused to depict kids inappropriately.

However, it may not be easy to get Grok to distinguish between adult user requests for “spicy” content versus illegal content. The “spicy” mode didn’t always generate Swift deepfakes, Weatherbed confirmed, but in “several” instances it “defaulted” to “ripping off” Swift’s clothes.


The rise of Silicon Valley’s techno-religion

In downtown Berkeley, an old hotel has become a temple to the pursuit of artificial intelligence (AI) and the future of humanity. Its name is Lighthaven.

Covering much of a city block, this gated complex includes five buildings and a small park dotted with rose bushes, stone fountains and neoclassical statues.

Stained-glass windows glisten on the top floor of the tallest building, called Bayes House after an 18th-century mathematician and philosopher.

Lighthaven is the de facto headquarters of a group of people who call themselves Rationalists.

This group has many interests involving mathematics, genetics and philosophy.

One of their overriding beliefs is that AI can deliver a better life – if it does not destroy humanity first.

The Rationalists believe it is up to the people building AI to ensure that it is a force for the greater good.

They were talking about AI risks years before OpenAI created ChatGPT, which brought AI into the mainstream and turned Silicon Valley on its head.

Their influence has quietly spread through many tech companies, from industry giants like Google to AI pioneers like OpenAI and Anthropic.

Many of the AI world’s biggest names – including Dr Shane Legg, co-founder of Google’s DeepMind; Anthropic’s chief executive, Dr Dario Amodei; and Dr Paul Christiano, a former OpenAI researcher who now leads safety work at the US Centre for AI Standards and Innovation – have been influenced by Rationalist philosophy. Mr Elon Musk, who runs his own AI company, said that many of the community’s ideas align with his own.

Mr Musk met his former partner, pop star Grimes, after they made the same cheeky reference to a Rationalist belief called Roko’s Basilisk.

This elaborate thought experiment argues that when an all-powerful AI arrives, it will punish everyone who has not done everything they can to bring it into existence.

But these tech industry leaders stop short of calling themselves Rationalists, often because that label has over the years invited ridicule.

The Rationalist community is tightly entwined with the Effective Altruism movement, which aims to remake philanthropy by calculating how many people would benefit from each donation. This form of utilitarianism aims to benefit not just people who are alive today, but also all the people who will ever live.

Many Effective Altruists have decided that the best way to benefit humanity is to protect it from destruction by AI.


Scavino: Google Acknowledges FBI Compelled His Account Information — Kash Patel Issues Shocking Response

It is well known that, in the run-up to President Trump’s first term, the FBI under the Obama administration deceived the FISA Court in order to obtain a warrant to spy on Carter Page, a Trump campaign aide in 2016 (Page left the campaign in October 2016).

In order to obtain that warrant, disgraced FBI attorney Kevin Clinesmith altered an email by changing wording to indicate Page was “not a source” for the CIA when, in fact, he was.

Clinesmith was found guilty of the fabrication and sentenced to 12 months of probation and a suspension of his law license—but only for one year.

Bear in mind that several attorneys—Jeff Clark, Sidney Powell, Rudy Giuliani, Kurt Olsen, and John Eastman, among others—are facing permanent disbarment or crippling sanctions simply for bringing cases challenging the 2020 election, while Ed Martin’s nomination for U.S. Attorney for Washington, D.C. was disputed by Senator John Thune simply because Martin had represented January 6th defendants.

Falsifying evidence before a federal court is a serious offense. It is all the more serious when the fabrication is brought before a secret FISA Court, where there is no representation for the defendant, no transcripts for accountability, and the resulting actions can infringe on U.S. citizens’ Fourth Amendment rights without their knowledge.

To make matters worse, under the “Two-Hop Rule,” FISA warrants can be used to spy not only on the target but also on those associated with the target. Many have speculated that this could have given the FBI access to spy on President Trump himself.
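In graph terms the two-hop idea is simple: collection may extend from the target to the target’s direct contacts (one hop) and on to their contacts (two hops). The sketch below, over a made-up contact graph, is purely illustrative of how fast the covered set grows; it describes no actual FISA process, and every name in it is hypothetical.

```python
def within_two_hops(contacts, target):
    """Everyone reachable from `target` in at most two hops of the
    contact graph: the target's contacts, plus their contacts."""
    first_hop = set(contacts.get(target, []))
    second_hop = set()
    for person in first_hop:
        second_hop.update(contacts.get(person, []))
    return (first_hop | second_hop) - {target}

# Hypothetical contact graph: each key's list is who that person talks to.
graph = {
    "target": ["aide_1", "aide_2"],
    "aide_1": ["candidate", "staffer"],
    "aide_2": ["staffer", "donor"],
}
print(sorted(within_two_hops(graph, "target")))
# → ['aide_1', 'aide_2', 'candidate', 'donor', 'staffer']
```

Two hops already sweep in people the target never contacted directly (the hypothetical “candidate” above), which is the basis of the speculation in the paragraph above.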


Technocrat Sweep: US Health Officials, Tech Executives To Launch Data-Sharing Plan

Why would Technocrats care about your health data? When they see the public as a herd of cattle, they naturally move to “manage the herd.” RFK, Jr. earlier bragged that he wants all citizens to don wearable medical devices within four years, to collect mountains of data. This initiative is headed by Amy Gleason, the Administrator of DOGE.

Amy Gleason worked at the predecessor of DOGE from 2018 to 2021, during the first Trump administration, where she played a key role on the White House Coronavirus Task Force’s data team managing critical pandemic data. She was named an Obama-era “Champion of Change” for her work in patient advocacy and precision medicine. She has emerged as a key Technocrat through her association with Elon Musk. Court records clearly show that Musk was never in charge of DOGE; Amy Gleason was.


UK Government Emails Reveal Push to Pressure Tech Platforms to Remove Lawful Speech on Immigration and Policing

A series of internal emails from the UK government has revealed an aggressive push to monitor and suppress online posts deemed “concerning,” sparking alarm over creeping censorship under the banner of combating misinformation and community unrest.

The documents, shared by US Rep. Jim Jordan, paint a picture of state officials flagging lawful speech, pressuring tech companies to remove content, and targeting what they described as “concerning narratives about the police and a ‘two-tier’ system.”

One of the most widely circulated videos under scrutiny featured a street celebration in Manchester where participants waved Pakistani flags. Captioned “It looks like Islamabad but it’s Manchester,” the video, posted by Radio Genoa on X, amassed over 14 million views.

Government emails described this kind of footage as misleading or dangerous, with one note labeling it an example of content that is “shared out of context in order to incite fear of the Muslim community.”

Another email, dated August 3, 2024, acknowledged “significant volumes of anti-immigrant content” online and pointed to “concerning narratives about the police and a ‘two-tier’ system that we are seeing across the online environment.”

The correspondence shows government officials not only monitoring speech but actively collaborating with platforms to address posts, even ones not violating the law or even the platform’s terms of service.

Officials were asking for direct intervention. One message requested clarity from platforms about “a) what content you are seeing across your platform; and b) any measures you have taken in response.” A follow-up email urged platforms to act quickly, stating, “We’d be grateful if you could come back to us on those two points as soon as you are able to.”

In one particularly troubling exchange dated August 4, government officials flagged a video showing someone scrolling through a freedom of information request that referred to asylum seekers as “undocumented fighting age males.”


OpenAI and Oracle announce Stargate AI data centre deal

OpenAI has shaken hands with Oracle on a colossal deal to advance the former’s Stargate AI data centre initiative.

It’s one thing to talk about the AI revolution in abstract terms, but it’s another thing entirely to grasp the sheer physical scale of what’s being built to make it happen. The foundations of our AI future are being laid in concrete, steel, and miles of fibre-optic cable, and those foundations are getting colossally bigger.

Together, OpenAI and Oracle are going to build new data centres in the US packed with enough hardware to consume 4.5 gigawatts of power. It’s hard to overstate what a staggering amount of energy that is—it’s the kind of power that could light up a major city. And all of it will be dedicated to one thing: powering the next generation of AI.

This isn’t just a random expansion; it’s a huge piece of OpenAI’s grand Stargate plan. The goal is simple: to build enough computing power to bring advanced AI to everyone.

When you add this new project to the work already underway in Abilene, Texas, OpenAI is now developing over 5 gigawatts of data centre capacity. That’s enough capacity to run more than two million of the most powerful computer chips available.
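As a back-of-envelope check on those two headline figures (assuming, purely for illustration, that all 5 GW feeds the chips and that “more than two million” means exactly two million), the implied power budget works out to roughly 2.5 kW per chip—a figure in the range of modern rack-scale AI accelerators once cooling and networking overhead are folded in:

```python
# Back-of-envelope: implied power budget per chip for the build-out.
total_power_watts = 5e9     # "over 5 gigawatts" of data centre capacity
chip_count = 2_000_000      # "more than two million" chips

watts_per_chip = total_power_watts / chip_count
print(f"{watts_per_chip:,.0f} W per chip")   # → 2,500 W per chip
```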

This move shows they are dead serious about a pledge they made at the White House earlier this year to plough half a trillion dollars into US AI infrastructure. In fact, with the momentum they’re getting from partners like Oracle and Japan’s SoftBank, they now expect to blow past that initial goal.

But this story isn’t just about silicon chips and corporate deals; it’s about people. OpenAI believes that building and running these new Stargate AI data centres will create over 100,000 jobs.

That job creation presents real opportunities for families across the country, from the construction crews pouring the concrete to the specialised electricians wiring up racks of servers and the full-time technicians who will keep these digital brains running day and night.


Blake Lively Wants Names and IP Addresses

Blake Lively has decided that the best way to respond to online gossip and criticism is with subpoenas, lots of them. With a move that suggests her legal team spent a weekend watching “Enemy of the State,” Lively is now targeting 36 content creators, from high-profile commentators to pseudonymous hobbyists, all over rumors she says were part of a smear effort.

Some of the targets have large followings. Others barely register on the algorithm. One runs a YouTube astrology channel with fewer than 300 subscribers at the time of the subpoena. All are now being asked to turn over a wide array of personal and financial data, as if they were co-conspirators in a criminal probe instead of people who post opinions from their bedrooms.

The case, Lively v. Wayfarer Studios LLC, is already a headache in itself, but this new front seems designed less to resolve the actual lawsuit and more to comb through the internet for anything unflattering.

We obtained a copy of the Google subpoena for you here.

We obtained a copy of the TikTok subpoena for you here.

We obtained a copy of the X subpoena for you here.

If you’re out of the loop, Blake Lively is suing Wayfarer Studios, its co-founder Justin Baldoni, and several others, alleging sexual harassment, workplace misconduct, breach of contract, and a coordinated retaliation campaign designed to destroy her reputation.

According to the complaint, Lively raised concerns about repeated inappropriate behavior by Baldoni and Wayfarer executives during production of It Ends With Us. After the film was completed, she claims Baldoni and his team launched a covert “social manipulation” campaign to discredit her using fake grassroots content, crisis PR firms, and anonymous online posts, which she describes as a well-funded digital smear effort.


Elon Musk’s X to Deploy AI to Write Community Notes, Speed Up Fact-Checking

In a major tech-driven update, Elon Musk’s social media platform X (formerly Twitter) has announced it will deploy AI technology to automatically write Community Notes and enhance the speed and accuracy of fact-checking. This move signals a deeper commitment to tackling misinformation, improving content transparency, and empowering users with context.

Let’s break down what this means, how it will work, and what impact it might have on the social media landscape.

Community Notes (formerly Birdwatch) are a feature that allows users to collaboratively add context to potentially misleading or controversial tweets. The system relies on crowdsourced input from contributors who can write, rate, and approve notes that are visible to all users once they reach a certain level of consensus.
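The consensus idea can be sketched in a few lines. X’s production scorer is actually a bridging-based matrix-factorization algorithm that rewards notes rated helpful by users who normally disagree; the toy rule below—a note surfaces only when raters from more than one viewpoint group find it helpful—is a deliberately simplified stand-in, not the real Community Notes code.

```python
def note_is_shown(ratings, threshold=0.6):
    """Toy consensus rule: show a note only if raters from *each*
    viewpoint group rate it helpful at or above `threshold`.
    (The real system infers viewpoints via matrix factorization.)"""
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(helpful)
    if len(by_group) < 2:   # require cross-perspective agreement
        return False
    return all(sum(v) / len(v) >= threshold for v in by_group.values())

# Helpful across the divide -> shown; one-sided support -> not shown.
print(note_is_shown([("a", 1), ("a", 1), ("b", 1), ("b", 0), ("b", 1)]))  # → True
print(note_is_shown([("a", 1), ("a", 1), ("b", 0)]))                      # → False
```

Note that the AI announcement concerns note *writing*; under the system described above, a rating-and-consensus step still decides whether any note, human- or AI-written, is shown.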

Until now, these notes were created manually by human contributors. But with the introduction of AI, the process is about to get a serious boost in efficiency and scale.
