EU Targets Elon Musk’s X with Potential $1 Billion Fine Under Censorship Law

When the European Commission goes to war, it doesn’t send tanks. It dispatches compliance officers with angry emails and billion-dollar fines.

The European Union’s eurocrats’ next target is Elon Musk’s social media fixer-upper, X.

According to the New York Times, four anonymous whisperers from inside the EU machine say the bloc is loading up a billion-dollar bazooka aimed squarely at X, citing violations of its shiny new Digital Services Act, the latest attempt to regulate speech by committee. And what better way to showcase the importance of online civility than by dragging the world’s loudest billionaire into court?

The DSA, which was sold to the public as a digital hygiene law to make the internet a kinder, gentler place, has become a blunt instrument in the hands of bureaucrats who never met a control lever they didn’t want to pull. They’ve apparently decided that Musk’s flavor of digital chaos — too many unregulated opinions, not enough “fact-checking,” and a stubborn refusal to grovel — is a clear and present danger to the European project.

Among X’s alleged crimes against the algorithmic gods: refusing to hand over data to “independent researchers” (friendly academics who publish pro-censorship PDFs no one reads), hiding the secrets behind those little blue check marks, and failing to spill the tea on who’s advertising to whom.

Naturally, this has prompted Brussels to threaten a fine that could “top $1 billion,” a figure clearly pulled from the same place all government fines originate — an angry dartboard. One idea floating through the regulatory fog? That if X itself can’t pay up, maybe SpaceX can. Because when you’re short on jurisdiction, why not go fishing in another company’s wallet?

Keep reading

Facebook To Shut Down Fact-Checking Program On Monday: ‘Officially Over’

Meta, the parent company of Facebook, announced Friday that its fact-checking program in the United States would be “officially over” on Monday, when it will roll out the community notes feature across all platforms.

The news comes after Meta CEO Mark Zuckerberg announced in January that the company would end fact-checking and move to restore free speech on Facebook, Instagram, and Threads. Starting Monday, fact-checkers will no longer be able to rate new content, and old fact-checks placed on content will no longer appear.

Instead of fact-checks, Meta will adopt an X-style community notes system where users can add context to posts, which are then rated by other users. Anyone will be able to sign up to be a contributor to community notes if they are over 18 and have had a verified account for over six months.

Joel Kaplan, Meta’s chief global affairs officer, announced the changes on Friday.

“By Monday afternoon, our fact-checking program in the US will be officially over. That means no new fact checks and no fact checkers. We announced in January we’d be winding down the program & we haven’t applied penalties to fact-checked posts in the US since then. In place of fact checks, the first Community Notes will start appearing gradually across Facebook, Threads & Instagram, with no penalties attached,” Kaplan posted on social media.

Keep reading

New Info on How the Feds Helped Censor a Bombshell

The US House Judiciary Committee has released internal chat logs that show the FBI moved into cover-up mode the very day the New York Post published the Hunter Biden laptop story, October 14, 2020.

The logs, first reported by journalists Michael Shellenberger and Catherine Herridge, reveal that FBI employees were immediately instructed “not to discuss the Biden matter,” while an intelligence analyst who, during a call with Twitter, inadvertently confirmed that the laptop story was real was placed under a “gag order.”

The analyst, who worked in the FBI’s Criminal Investigative Division, was able to confirm so quickly that the reporting was based on credible information because the FBI had seized and authenticated Hunter Biden’s laptop several months earlier.

Big Tech platforms – notably Twitter and Facebook – then started censoring the article, falsely branding it “Russian disinformation.” By maintaining its “no comment” policy instead of confirming that the laptop was real and under investigation, the FBI in effect tacitly promoted the false narrative of foreign interference.

These moves originated from the Foreign Influence Task Force, which was shut down earlier this year for its activities related to censorship through pressure on social platforms.

The laptop scandal was unfolding during a crucial time in the 2020 campaign and represents one of the most egregious publicly known examples of political censorship of free speech and media orchestrated by government agencies.

The chat logs that have now been published reveal that one of the FBI staff involved in the Hunter Biden laptop story suppression was Bradley Benavides.

Only weeks prior, Benavides had featured in another controversy, that time in what appeared to be a smear campaign against Senators Ron Johnson and Chuck Grassley, who were allegedly “advancing Russian disinformation.”

At the time, the senators just so happened to be investigating Hunter Biden’s financial connections to foreign governments.

A letter the Judiciary Committee sent Benavides in June 2023 shows that he had by that time gone through the Big Tech-Big Government “revolving door” and was a senior risk manager at Amazon.

Keep reading

Democrats, Former Disinfo Board Chief, Defend Government-Big Tech Ties, Dismiss Censorship-Industrial Complex at House Hearing on First Amendment Safeguards

Nina Jankowicz – former head of the disbanded Disinformation Governance Board and CEO of the American Sunlight Project – and the Democrats who spoke this week during a House Foreign Affairs Committee hearing continued to deny and defend the Big Tech-government censorship collusion.

The hearing – “Censorship-Industrial Complex: The Need for First Amendment Safeguards at the State Department” – also saw Jankowicz, who appeared as a witness, and Representative Sydney Kamlager-Dove, a Democrat, attempt to paint the actions taken by the new Trump White House as worse than what was happening during the previous US administration.

Both Jankowicz and Kamlager-Dove referred to the system known as the Censorship-Industrial Complex, and its elements, as “fiction,” “lies,” “tall tales,” and “a conspiracy theory,” with Jankowicz trying to frame the new government’s moves as “an assault on the First Amendment” and “suppressing speech.”

Despite the fact that these are some of the key accusations against the Biden administration – accusations at this point fairly well backed up by batches of internal documents, as well as testimony from Big Tech execs – Jankowicz chose to call it “the imagined actions of the Biden administration.”

As for her own role in this “imagined” system – namely, the brief stint at the helm of the short-lived Disinformation Governance Board (that was part of the Department of Homeland Security) – Jankowicz maintained that it was not meant to be a censorship body.

Instead, Jankowicz would have the Committee and the public believe the Board was true to its mission statement, which was “to protect civil rights, civil liberties, privacy, and the First Amendment.”

Keep reading

Final moments of big tech whistleblower Suchir Balaji hours before his mysterious ‘suicide’

Hours before his death, Suchir Balaji appeared relaxed as he strolled to the door to collect his last meal.

The big tech whistleblower emerged from the elevators of his apartment building in San Francisco hungry and with a spring in his step.

He turned toward the front door and disappeared around a corner, returning 15 seconds later clutching a brown paper bag.

Inside was a box of take-out rice, meat, and vegetables – the scraps of which were found sitting on his desk when his body was found days later.

Balaji, dressed casually in jeans, sneakers, and a zip-up sweater, strolled back to the elevator, pumping tunes through the wireless earbuds planted firmly in his ears.

To passersby he had the appearance of any tired young professional ready to relax on the couch with his dinner.

Yet this newly uncovered surveillance footage obtained by DailyMail.com from 7.30pm on November 22 is believed to be the last time he was seen alive.

Just hours later he died from a single gunshot wound to the head. His body was found four days later on November 26.

The medical examiner concluded his death was a suicide. Police agree. But the young man’s parents are convinced he was the victim of foul play. 

They believe he was punished for daring to speak out against the dangers of the world-changing technology he’d been working on. 

A month earlier, Balaji had revealed OpenAI’s dubious methods of training ChatGPT and warned of its dangers to the internet. 

His mother, Poornima Ramarao, claims the final footage of her son proves he did not kill himself.

‘[The medical examiner] saw this video and still concluded he was depressed and called it suicide,’ she said.

Balaji spoke to his father, Balaji Ramamurthy, from 7.12pm until a few minutes before he arrived home.

He told him he was getting dinner, and they discussed his recent holiday and tentative plans to meet in January.

There were no signs of what was to come. 

Earlier CCTV footage showed Balaji arriving home at 1.33pm from a birthday trip with friends to Catalina Island, off the coast of Los Angeles.

His luggage, a large canvas bag, hung from a strap slung over his left shoulder and he was wearing a black jacket, blue jeans, and black shoes.

Like the final video just six hours later, he was walking towards the elevators in the lobby of his apartment building.

‘Does he look stressed?’ Ramarao asked, rhetorically.

Keep reading

Meta Complies with Brazilian Court Order While Challenging Justice Alexandre de Moraes’s Demand for Journalist’s Instagram Data

Meta has launched a legal challenge against a ruling by controversial Brazilian Supreme Court Justice Alexandre de Moraes, who compelled the tech giant to disclose data tied to the Instagram account of journalist Allan dos Santos. Though raising objections to the judge’s rationale, Meta affirmed it would still comply — at least for now.

The company confirmed it will deliver the requested data in a confidential filing, stating, “In compliance with the order and demonstrating good faith, Meta Platforms will provide the requested data, in a separate confidential procedure, within the period granted.”

Justice Alexandre de Moraes consistently stirs controversy with his heavy-handed censorship tactics, like banning social media accounts and blocking platforms such as Telegram and X when they defy him. Critics slam him for trampling free speech, overreaching his role, and acting like a one-man judge-jury-executioner, especially against Bolsonaro allies, while his clash with Elon Musk over X’s compliance has fueled accusations of authoritarianism.

The demand, issued last week, also targeted platform X, requiring both companies to provide the Federal Police with detailed information on Santos’s accounts within ten days — under threat of a R$100,000 ($17,362) daily fine for delay or refusal. The data request is broad, seeking registration details, IP addresses, and post content from mid-2024 through early 2025.

Keep reading

Google imports ex-Israeli spies who automated Gaza genocide

On Mar. 18, Google bought Israeli cybersecurity company Wiz for $32 billion. The acquisition will mark the single largest transfer of former Israeli spies into an American company, because Wiz is run and staffed by dozens of veterans of Unit 8200, the IDF’s specialist cyber-spying arm.

Unit 8200 wrote the programming and designed the algorithms that automated the genocide of Gaza and was also responsible for the pager attack in Lebanon. Now the men and women who helped design the architecture of apartheid are being swallowed by the US tech-surveillance complex.

The identity of the Wiz founders, all former Unit 8200, is fairly well-documented (by Israeli media at least). One of the founders, Ami Luttwak, boasts on his LinkedIn profile that he led a “mission critical R&D team” for Unit 8200 which won them the “Israel Defence Award 2012.” Less well-documented, however, is the fact that a huge chunk of the Wiz workforce, from office managers to software engineers to product analysts, is also former Unit 8200. Following my investigation earlier this year into former Unit 8200 members working in key AI positions at tech companies, I have identified nearly fifty Wiz employees as ex-Unit 8200 operatives.

Keep reading

Welcome to Skynet: Google Unveils AI Models to Power Physical Robots

Google DeepMind has introduced two new AI models designed to bring artificial intelligence into the physical world by powering robots. Google is not the only company pursuing this goal at top speed: OpenAI and Tesla are also designing robots controlled entirely by AI.

CNBC reports that Google’s DeepMind has unveiled two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning), which are set to revolutionize the way robots interact with the physical world. These models, running on Google’s Gemini 2.0, are designed to adapt to different situations, understand and respond quickly to instructions, and manipulate objects with dexterity. The company’s goal is to have autonomous robots taking orders from AI without human control.

Google has announced a partnership with Apptronik, a Texas-based robotics developer, to build the next generation of humanoid robots powered by Gemini 2.0. Apptronik, known for its work with Nvidia and NASA, recently received funding from Google in a $350 million round.

Demonstration videos released by Google showcase Apptronik robots equipped with the new AI models performing various tasks, such as plugging items into power strips, filling lunchboxes, moving plastic vegetables, and zipping up bags, all in response to spoken commands. While no timeline has been provided for the technology’s market release, the videos offer a glimpse into the potential applications of these AI-powered robots.

Keep reading

Tech Titan Microsoft Partnered Extensively with USAID on Third World Internet Projects

The U.S. Agency for International Development (USAID) became the most visible symbol of government waste and ideological partisanship after DOGE exposed the vast sums it spent around the world promoting a variety of leftist causes including online censorship. What is less well-known is the now-shuttered agency’s work with big tech companies, notably Microsoft.

In 2023, Microsoft partnered with Internews, a USAID-funded global slush fund for journalists, to create the Media Viability Accelerator (MVA). The MVA sought to combine Microsoft’s tech resources with Internews’ global network of ideologically aligned journalists, allowing newsrooms to access market insights, data aggregation, analysis and visualization from Microsoft to support their efforts.

Microsoft and USAID also partnered on the progressive cause of women’s empowerment. A program called the Women’s Digital Inclusion Partnership saw Microsoft working with USAID to increase internet coverage for women in the third world. The program aimed to increase internet connectivity for women in rural areas of Colombia, Ghana, Guatemala, India and Kenya.

It’s unclear if this USAID-backed program was any more successful than the U.S. government’s domestic rural internet program, which according to analysts resulted in $42.5 billion in expenditure while connecting zero citizens.

Another USAID-Microsoft partnership on internet connectivity was the Airband Initiative, which aimed to expand internet access around the world. The partnership brought together local coalitions of government agencies, nonprofits, and private sector companies to build digital infrastructure and provide “digital skills” training. The program aimed to expand internet access to 250 million people by the end of 2025, including 100 million people in Africa.

Keep reading

Are the Tech Bros Insane?

Positivism, mechanistic thinking, and right-hemisphere deficits

I offer an alternative to the theories of Team Woke or Team Musk. Entertain for a moment the idea that these Tech Bros are neither evil villains nor saviors. Instead, consider that they may actually believe AI technocracy would be good, but they are unable to understand the full implications of their goals because they suffer from right hemisphere deficits.

While I am thrilled that the Tech Bros are tearing down the old system, I’m worried they will replace inefficient centralized control and bureaucracy with more efficient AI centralized control and bureaucracy. Collapsing the old system of governance is something the WEF, perhaps not incidentally, has promoted as necessary for the 4th Industrial Revolution, the transhuman revolution.

Elon Musk (at the helm of Neuralink and Starlink), Larry Ellison (with Oracle), and Peter Thiel and Alex Karp (who founded Palantir) have all expressed enthusiasm for merging biology with technology, figuring out how to live forever, micro-managing society using algorithms and AI surveillance, and other stupid things. Each currently has an outsized role in or adjacent to the US federal government.

Should we be concerned?

Oracle is setting up Stargate, a mega nuclear-powered data center for processing Big Data. Data on us?

Palantir has a contract with the U.S. Army to fight alleged terrorism in tandem with Amazon Web Services (which hosts the CIA’s and NSA’s data on citizens). It offers Large Language Model (LLM) technology to the US Department of Defense to deploy AI weaponry.

If Palantir were to turn its eye from the people onto the government, that would be a good thing. If the Stargate project were to be used to track all federal spending and make everything transparent to citizens at all times, I would be pleasantly surprised. But I suspect that Palantir and Stargate will be used to try to manage the decisions of warfare and the welfare of the country.

The problem with this is that LLMs are glorified predictive text engines: matching prompts against patterns in their training data, they output the kind of pattern that usually follows. The system is not designed to be factual, only probable: it is stereotyping on steroids.
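The “output what usually follows” idea can be seen in a minimal bigram sketch (the toy corpus and function names here are my own, purely illustrative; real LLMs use learned neural weights rather than a lookup table, but the principle of favoring the frequent continuation is the same):

```python
from collections import Counter, defaultdict

# Toy corpus: three short "sentences" flattened into one word stream.
corpus = (
    "the committee issued a fine the committee issued a report "
    "the committee issued a fine"
).split()

# Count which word follows which -- the crude analogue of pattern-matching.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation: probable, not factual."""
    return following[word].most_common(1)[0][0]

print(predict_next("issued"))  # "a" is the only word ever seen after "issued"
print(predict_next("a"))       # "fine" (seen twice) beats "report" (seen once)
```

Ask this model what follows “a” and it answers “fine” purely because that pairing occurred more often in its data: frequency, not truth, which is the essay’s point about probability standing in for fact.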

If you think human bureaucracy is often idiotic and frustrating, you haven’t dealt with AI bureaucracy yet.

Keep reading