CBC Brags About Shutting Down Popular Political Clips YouTube Channel

A rising Canadian YouTube channel that had been gaining major traction was suddenly banned following an aggressive report from the Canadian Broadcasting Corporation (CBC), which labeled the channel a “content farm” and reached out to YouTube directly. Not long after, the channel was gone.

“It’s just one example of what experts refer to as the ‘content’ or ‘engagement’ farming phenomenon, in which individuals or organizations tailor their content to tap into the algorithm of the platform and boost their popularity,” the CBC explained in an article, as if this isn’t something that most YouTubers do.

Real Talk Politiks, the creator behind the now-deleted account, took to X on Sunday to reveal what happened, pointing the finger at government-aligned media and tech collusion.

“CBC, Canada’s state-funded media just got YouTube to terminate my channel — not for breaking rules, but for having the wrong political views,” the post read.

Despite operating without strikes, policy violations, or deceptive content flags, the channel was wiped.

What sparked the removal, according to CBC’s own report, was an AI-generated video of Ronald Reagan that allegedly lacked a clear label, something that might typically warrant a correction or warning, not a digital purge.

The CBC leaned into the narrative, bragging about its work in getting the channel shut down, and published a YouTube video titled “How we shut down one of Canada’s biggest news ‘content farms’.”

Keep reading

ADL Regional Director Calls for Government-Regulated Online Censorship

The Anti-Defamation League’s David Goldenberg is demanding a broad overhaul of how speech is governed on the internet, calling for both government intervention and intensified corporate censorship. In a recent appearance, Goldenberg, who heads the ADL’s Midwest operations, expressed frustration over what he sees as declining efforts by tech firms to suppress online content he deems hateful.

Citing Meta’s rollback of its fact-checking team in the United States, he argued that platforms must be forced to take action. “You have a platform like Meta that just gutted its entire fact-checking department… And so what we need to do is we need to apply pressure in a real significant way on tech platforms that they have a responsibility, that they have an absolute responsibility to check and remove hateful speech that is inciteful.”

Goldenberg advocated not just for voluntary moderation, but for legislative and regulatory measures, both at the federal and state level, that would compel platforms to act as speech enforcers. He pointed to efforts in states like California as examples of where local governments are already testing such models.

His concern centers around what he perceives as an ecosystem of radicalization made easily accessible by today’s digital infrastructure. He warned that extremist ideologies no longer require obscure forums or dark web communities to spread. “It used to be you had to fight going into the deep dark web… Now… it’s easier and easier to be exposed in the mainstream,” he said.

Framing the online environment as a catalyst for violence, Goldenberg argued that free access to controversial viewpoints must be curtailed. He called for social media companies to take a stronger stance by excluding users whose views fall outside accepted boundaries, adding that regulation should enforce this responsibility.

He zeroed in on Section 230 of the Communications Decency Act, a critical piece of legislation that shields platforms from legal liability over user-posted content. “Congress needs to amend Section 230, which provides immunity to tech platforms right now for what happens,” Goldenberg said. He dismissed comparisons between modern platforms and telecommunications companies, referencing past remarks by Facebook’s Mark Zuckerberg about how phone providers were not liable for threats made over calls. Goldenberg’s view was blunt: “These tech platforms are not guaranteed under the Constitution. They’re just not.”

From his perspective, private companies should be free to “kick people off, to de-platform,” and if they fail to do so voluntarily, they must be pressured or regulated into compliance. He described accountability as a mechanism for shaping behavior, stating, “Accountability is a tool that can be incredibly effective in changing behavior.”

Keep reading

“Completely Insane”: Wired Posts DIY Video For Mangione’s Ghost Gun

YouTube’s content rules apparently don’t apply to corporate media darlings. 

Case in point: Wired (publisher: Condé Nast) recently published a video walking viewers through the exact process of building a copycat version of the untraceable 9mm “ghost gun” allegedly used in the UnitedHealth CEO shooting by Luigi Mangione.

“So, armed with a shopping list and a credit card, we ordered everything we needed. A 3D printer, plastic filaments, and household products like epoxy were all just a few clicks away on sites like Lowe’s or Amazon. And the more specialized components were available on sites that sell gun parts, just not the guns themselves,” Wired’s Andy Greenberg explained to viewers in the video.

Greenberg continued, “A few days later, every ingredient I needed to make Mangione’s gun arrived in the mail for the grand total of $1,144.67 plus shipping. And that includes the price of the 3D printer. This is like Christmas Day. This looks like a slide, very much like an obvious gun part. Kind of crazy that you can just order this.” 

The video then spent five minutes showing viewers the printing and assembly processes, with Greenberg outsourcing the pistol’s assembly to the YouTuber Print Shoot Repeat.

Meanwhile, YouTube explicitly prohibits content that provides instructions on manufacturing firearms, including ghost guns.

Keep reading

Meta Is Accused of Flouting Privacy Rules With AI Training Data

Meta’s attempt to restart AI training using Europeans’ public social media activity has drawn renewed resistance, as the privacy rights organization noyb threatens fresh legal action. The group has formally challenged Meta’s latest move to mine user data, asserting the tech giant is sidestepping EU privacy obligations and advancing without regulatory clearance.

Following a halt in June 2024 prompted by regulatory concerns, Meta announced in April it would resume training its language models. This time, it intends to use public posts and user interactions, including with Meta AI, from adults across the European Union and European Economic Area.

The initial pause came after mounting pressure from the Irish Data Protection Commission and a wave of complaints submitted to authorities in various member states. According to Meta, a December opinion from the European Data Protection Board signaled that its approach satisfied legal standards.

Keep reading

Joe Rogan Guest Reveals Facebook’s Secret Experiment That Manipulated 700,000 Users Without Warning

Joe Rogan sat down with Harvard professor and mind control expert Rebecca Lemov, and it didn’t take long for the conversation to dive into one of his favorite topics: government interference in our digital lives.

Rogan opened the conversation by saying, “There are so many different kinds of mind control.”

“One of the things we’ve talked about a lot on this podcast is, that an enormous percentage of what you’re seeing on social media in terms of interactions and debate is not real. It’s not organic,” he explained.

“It’s state-run and state-funded, and it’s whether it’s foreign governments or our government or even corporations, you’re getting inorganic discourse that’s designed to form a narrative and which is a form of mind control,” he added.

Lemov picked up on that point and took it further. Even when people know something is fake, she explained, our brains still react as if it were real.

“Yeah. I mean, I think even on a basic level, people, it’s known and studies have shown that we respond as if it were organic and real,” she said.

“Even when somebody likes a post of yours, the response is the same as, like, in-person interaction,” she added.

It’s not just governments pulling the strings, she warned. The platforms themselves are designed to influence how we feel.

“I think at the root, there is a kind of way that, on an emotional level, it’s not just manipulation of ideas,” she said, “but there’s a kind of emotional engineering that’s built into the platforms and doesn’t even demand, you know, at first, government involvement.”

Keep reading

YouTube and Netflix Deploy AI and Behavioral Tracking to Intensify Targeted Advertising

YouTube and Netflix are moving aggressively to expand the ways they track and monetize viewer behavior, leaning further into AI-driven systems and behavioral profiling to fine-tune ad delivery.

YouTube’s latest experiment with intrusive advertising comes in the form of “Peak Points,” a format that leverages Google’s Gemini AI to dissect video content and identify the exact moment a viewer is most emotionally invested.

Ads are then served immediately after these moments. While the idea is to capture attention when it’s most focused, the reality for viewers could mean jarring interruptions right after an emotional payoff or a pivotal scene.
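The placement logic described above can be reduced to a toy sketch. Everything here is an assumption for illustration; Google has not published how Gemini scores engagement, so the function name and the engagement curve are hypothetical:

```python
# Hypothetical "peak point" ad placement: given a per-second engagement
# score for a video (e.g. produced by some ML model), schedule the ad
# break in the second immediately after the moment of highest engagement.
def peak_point_ad_slot(engagement: list[float]) -> int:
    """Return the second at which to insert an ad: just after the peak."""
    peak_second = max(range(len(engagement)), key=engagement.__getitem__)
    return peak_second + 1

scores = [0.2, 0.3, 0.9, 0.5, 0.4]  # made-up engagement curve
print(peak_point_ad_slot(scores))   # the second right after the peak at t=2
```

The viewer-facing complaint follows directly from this design: by construction, the interruption lands exactly where attention (and emotional investment) is highest.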

This development was announced during YouTube’s Upfront event in New York, where the company pitched it as a smarter way to keep audiences engaged with advertisements. But the concept is likely to be unwelcome news for users already frustrated by mid-roll ads. Now, even emotional immersion is being treated as just another metric for ad targeting.

Meanwhile, Netflix is unveiling its own strategy to transform user engagement into a high-resolution marketing blueprint. At its recent advertising presentation, Netflix rolled out a host of new tools that feed off detailed user data, facilitated by what it calls the Netflix Ads Suite. The platform is now operational in North America and will soon be deployed across all countries where the ad-supported model is available.

A key feature of the system is its “Enhanced Data Capabilities,” which allow brands to merge their customer data with Netflix’s audience data. This process, conducted through intermediaries like LiveRamp or directly through Netflix, enables highly targeted ad delivery. To support this, Netflix has granted data access to third-party giants including Experian and Acxiom, firms notorious for building detailed consumer profiles for advertisers.

Netflix is also introducing a “clean room” setup, a controlled data-sharing environment where outside partners can analyze combined datasets without directly accessing raw user information. However, such structures often do little to curb the broader privacy implications of the data they facilitate.
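The matching step such clean rooms typically rely on can be illustrated in simplified form: both parties hash a shared identifier (such as an email address) with an agreed salt, so audiences can be intersected and counted without either side handing over raw records. This is a generic sketch of the technique, not Netflix's or LiveRamp's actual implementation; all names and data below are hypothetical:

```python
import hashlib

def hash_id(email: str, salt: str) -> str:
    """One-way hash of a shared identifier, so raw values never leave each party."""
    return hashlib.sha256((salt + email.strip().lower()).encode()).hexdigest()

SALT = "shared-campaign-salt"  # agreed on by both parties (hypothetical)

# Each side hashes its own records before contributing them.
advertiser_customers = {hash_id(e, SALT) for e in ["alice@example.com", "bob@example.com"]}
platform_viewers = {hash_id(e, SALT) for e in ["bob@example.com", "carol@example.com"]}

# Inside the clean room, only the hashed sets are intersected; the output
# exposed to the partners is an aggregate count, not user-level data.
overlap = advertiser_customers & platform_viewers
print(len(overlap))  # number of matched users
```

Note the privacy caveat the article hints at: hashing a guessable identifier is not strong anonymization, since anyone holding a candidate email and the salt can recompute the hash, which is one reason clean rooms do little to blunt the broader profiling concerns.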

Another part of Netflix’s expanded toolkit includes “brand lift” measurement, essentially tying a user’s viewing habits to how they perceive particular brands. It’s a more aggressive step toward turning personal entertainment choices into commercially valuable behavioral signals.

In tandem with these tools, Netflix has previewed new ad formats powered by generative AI. These include interactive mid-roll and pause-screen ads that can include prompts, overlays, or even buttons to push content to a second screen. These formats are being framed as personalized and responsive, and are slated to be available across all ad-tier markets by 2026.

Keep reading

Google Reaches $1.375 Billion Settlement with Texas Over Privacy Violations Involving Location Tracking and Biometric Data

Google has reached a $1.375 billion settlement with Texas over allegations the tech giant intruded on user privacy by collecting data without proper consent. The resolution, announced by Texas Attorney General Ken Paxton, concludes two lawsuits centered on the company’s handling of sensitive information across several of its products.

The lawsuits focused on practices involving Google’s location tracking, biometric data collection, and its private browsing tool, Incognito mode. According to Paxton, the company engaged in prolonged surveillance of individuals’ movements, online activity, and even biometric identifiers like voiceprints and facial features, activities he claimed were conducted without user knowledge or agreement.

“In Texas, Big Tech is not above the law,” said Paxton. “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

Although the total settlement figure has been made public, specific terms remain undisclosed, and the state has not explained how the funds will be distributed.

Google has denied any wrongdoing and emphasized that the agreement resolves claims based on policies that have already been updated. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said Google spokesperson José Castañeda. He added, “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”

The original lawsuits, filed in 2022, accused Google of circumventing user privacy settings, continuing to track locations despite users believing the feature was off. They also charged that the company’s so-called private browsing mode did not actually provide meaningful privacy and that Google had collected biometric data from Texans without obtaining legally required consent.

Keep reading

X Complies with Over 8,000 Indian Government Censorship Orders, Blocks News and User Accounts

The X Global Government Affairs team has revealed that the social media company has received over 8,000 censorship orders from the Indian government, affecting access in that country to entire accounts.

We obtained a copy of the memo.

Among them are executive orders to block accounts of international news organizations and prominent X users, the post said, adding that the company will comply by “withholding” those accounts only in India.

Other platforms have also received the same orders but have yet to comment on the issue.

According to the post, the decision was not an easy one to make but X views it as necessary in order for the platform to continue its presence in India.

Noncompliance with the orders could have resulted in “significant fines” and even imprisonment of local employees, X announced.

It is further explained that the Indian government’s orders do not state which individual posts were found in violation of the country’s law, opting instead to demand that entire accounts be blocked.

“For a significant number of accounts, we did not receive any evidence or justification to block (them),” Global Affairs writes.

Even though the decision has been made to comply with the orders at this time, X made it clear the company “disagrees” with them. It views blocking entire accounts, rather than individual allegedly offending posts, as not only “unnecessary” but also a form of censorship that affects both existing and future content, thereby violating users’ right to free speech.

Having complied in order to stay in the market, X is now “exploring all possible legal avenues available to the company.”

The Global Affairs post urges affected users located in India to turn to the courts in a bid to have the orders overturned, since they, unlike X, have this option. Those users are also reminded they can contact the government directly.

X lists a number of legal aid organizations that those with blocked accounts can consider contacting, and explains the decision to go public with some details about the censorship orders as “essential for transparency” – while “lack of disclosure discourages accountability and can contribute to arbitrary decision making.”

Keep reading

Google Hit with Historic $1.375 Billion Settlement for Secretly Tracking People’s Movements, Private Searches, Voiceprints, and Facial Data

Texas Attorney General Ken Paxton has delivered a knockout punch to Google, securing a record-shattering $1.375 billion settlement for the Big Tech giant’s covert surveillance of everyday Americans.

This staggering sum is nearly a billion dollars more than what 40 states combined were able to wring from Google for similar offenses — a testament to Paxton’s unrelenting crusade against Big Tech tyranny.

In 2022, Paxton filed a 44-page lawsuit against Google, accusing the multibillion-dollar corporation of “systematically misleading” and “deceiving” Texans for years in order to secretly track their every move, and of raking in obscene profits from it.

The lawsuit lays out a damning case against Google, alleging that the tech behemoth “covertly harvested” users’ precise geolocation data, voiceprints, and even facial geometry — all while leading users to believe they had turned off such invasive tracking.

According to the lawsuit, Google duped its users by creating a maze of confusing and misleading settings, falsely telling Texans they could protect their privacy by turning off features like “Location History.” But in reality, Google was still logging user data using obscure and hard-to-find settings like “Web & App Activity,” storing data in shadowy internal databases with Orwellian names like “Footprints.”

Keep reading

Mozilla’s Google Dependence Threatens Firefox’s Survival

Despite its mission to challenge Big Tech dominance, Mozilla now finds itself tethered to one of its largest rivals in a paradox that could threaten the very survival of its flagship browser, Firefox.

As the Justice Department pushes forward with remedies aimed at curbing Google’s monopoly over online search, Mozilla’s financial dependence on the search giant is surfacing as a glaring vulnerability, one that the organization admits could become existential.

Mozilla’s Chief Financial Officer, Eric Muhlheim, testified in court on Friday, describing the potential fallout of the DOJ’s proposals as dire. “It’s very frightening,” he said of the prospect of Google being barred from paying to remain the default search provider in Firefox.

That payment, ironically, forms the lifeblood of a browser that was created to stand as a counterweight to corporate control of the internet. Firefox generates roughly 90 percent of Mozilla’s revenue, and Muhlheim confirmed that about 85 percent of that comes from its agreement with Google, an arrangement that funds both Mozilla’s for-profit arm and, by extension, the nonprofit foundation behind it.
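Combining the two figures from the testimony gives a rough sense of the overall dependence: about 90 percent of revenue from Firefox, with about 85 percent of that tied to Google, works out to roughly three quarters of Mozilla’s total revenue riding on a single deal. A quick back-of-the-envelope check:

```python
firefox_share = 0.90  # Firefox's share of Mozilla revenue, per Muhlheim's testimony
google_share = 0.85   # share of Firefox revenue tied to the Google default-search deal

total_google_dependence = firefox_share * google_share
print(round(total_google_dependence * 100, 1))  # roughly 76.5 percent of total revenue
```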

While the court has already determined that Google’s use of default search engine contracts amounts to illegal monopolistic behavior, Mozilla’s testimony underscores the tangled consequences of dismantling those deals. Mozilla, positioned as a David to Google’s Goliath in the browser wars, depends on the very dominance the DOJ seeks to unwind.

Muhlheim didn’t mince words about what severing the deal could mean. The immediate loss of that income would require sweeping cutbacks. He spoke of a “downward spiral” in which reduced funding for product development would degrade Firefox, prompt user attrition, and potentially “put Firefox out of business.” The ripple effects, he warned, would hit Mozilla’s other initiatives—such as its work on ethical AI and open web standards.

The contradiction is hard to miss: Firefox, hailed by digital rights advocates as a rare independent in a browser market increasingly shaped by Apple’s WebKit and Google’s Chromium, is only able to survive because of a search contract with Google. Its own browser engine, Gecko, was developed precisely to prevent a single corporation—then Microsoft—from dictating how the internet worked. Now, two decades later, Mozilla’s survival hinges on the largesse of another tech behemoth.

Keep reading