YouTube and Netflix Deploy AI and Behavioral Tracking to Intensify Targeted Advertising

YouTube and Netflix are moving aggressively to expand the ways they track and monetize viewer behavior, leaning further into AI-driven systems and behavioral profiling to fine-tune ad delivery.

YouTube’s latest experiment with intrusive advertising comes in the form of “Peak Points,” a format that leverages Google’s Gemini AI to dissect video content and identify the exact moment a viewer is most emotionally invested.

Ads are then served immediately after these moments. While the idea is to capture attention when it’s most focused, the reality for viewers could mean jarring interruptions right after an emotional payoff or a pivotal scene.

This development was announced during YouTube’s Upfront event in New York, where the company pitched it as a smarter way to keep audiences engaged with advertisements. But the concept is likely to be unwelcome news for users already frustrated by mid-roll ads. Now, even emotional immersion is being treated as just another metric for ad targeting.

Meanwhile, Netflix is unveiling its own strategy to transform user engagement into a high-resolution marketing blueprint. At its recent advertising presentation, Netflix rolled out a host of new tools that feed off detailed user data, facilitated by what it calls the Netflix Ads Suite. The platform is now operational in North America and will soon be deployed across all countries where the ad-supported model is available.

A key feature of the system is its “Enhanced Data Capabilities,” which allow brands to merge their customer data with Netflix’s audience data. This process, conducted through intermediaries like LiveRamp or directly through Netflix, enables highly targeted ad delivery. To support this, Netflix has granted data access to third-party giants including Experian and Acxiom, firms notorious for building detailed consumer profiles for advertisers.

Netflix is also introducing a “clean room” setup, a controlled data-sharing environment where outside partners can analyze combined datasets without directly accessing raw user information. However, such structures often do little to curb the broader privacy implications of the data they facilitate.
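Neither Netflix nor its partners have published the mechanics of these matching systems, but the general clean-room idea can be illustrated with a minimal sketch: each party pseudonymizes its customer identifiers with a shared salt, and only aggregate overlap statistics leave the environment, never raw records. All names and data below are hypothetical.

```python
import hashlib

def pseudonymize(emails, salt):
    """Each party hashes its own identifiers with a shared salt,
    so raw emails never leave either side."""
    return {hashlib.sha256((salt + e.lower()).encode()).hexdigest() for e in emails}

# Hypothetical datasets: a brand's customer list and a platform's viewer list.
brand_customers = {"alice@example.com", "bob@example.com", "carol@example.com"}
platform_viewers = {"bob@example.com", "carol@example.com", "dave@example.com"}

SALT = "shared-secret-salt"  # agreed out of band by the two parties

brand_tokens = pseudonymize(brand_customers, SALT)
platform_tokens = pseudonymize(platform_viewers, SALT)

# The "clean room" computes only an aggregate over the intersection;
# neither party sees the other's raw identifiers.
overlap = len(brand_tokens & platform_tokens)
print(overlap)  # → 2
```

Even in this toy version, the privacy limits critics point to are visible: the aggregate output still reveals which share of one dataset appears in the other, which is exactly the behavioral linkage advertisers are paying for.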

Another part of Netflix’s expanded toolkit includes “brand lift” measurement, essentially tying a user’s viewing habits to how they perceive particular brands. It’s a more aggressive step toward turning personal entertainment choices into commercially valuable behavioral signals.

In tandem with these tools, Netflix has previewed new ad formats powered by generative AI. These include interactive mid-roll and pause-screen ads that can feature prompts, overlays, or even buttons to push content to a second screen. These formats are being framed as personalized and responsive, and are slated to be available across all ad-tier markets by 2026.

Keep reading

NYT and Washington Post Push YouTube To Censor Election “Misinformation,” Lament Podcast Censorship Challenges

The New York Times, Media Matters for America, and The Washington Post are stepping up their pressure on YouTube to demonetize and censor election “misinformation,” particularly statements that the 2020 election was rigged or insecure.

As these organizations push for stricter speech suppression, questions arise about the implications for open discourse on the platform, and about these legacy media and activist attempts to get such discourse shut down.

In recent months, Media Matters undertook an extensive review of content from 30 prominent conservative YouTube channels, identifying 286 videos containing what it classified as election misinformation, which collectively garnered over 47 million views. This report, backed by verification from The New York Times, pointed out that YouTube profited from ads placed on many of these videos.

Highlighted in the Times article were figures such as former New York Mayor Rudy Giuliani, Fox News host Tucker Carlson, and conservative commentator Ben Shapiro, all of whom have voiced skepticism regarding the legitimacy of various aspects of the 2020 election process.

According to The New York Times, “Giuliani, the former New York mayor, posted more false electoral claims to YouTube than any other major commentator in the research group.”

Surprisingly, YouTube’s stance, as relayed by a spokeswoman, stresses the importance of open political discourse: “The ability to openly debate political ideas, even those that are controversial, is an important value — especially in the midst of election season,” she stated, defending the platform’s approach to content management.

However, YouTube did still remove three of the videos that Media Matters flagged.

Keep reading

YouTube May Be Suppressing The Rogan/Trump Podcast And Bombarding It With Kamala Ads

The Joe Rogan interview with Donald Trump is impossible to hide; it has been seen by tens of millions of people. Yet some people believe YouTube is having a damn good go at it.

Several commentators have noted that it appears that YouTube is suppressing the video on its platform.

It seems YouTube is hiding the podcast from the search results.

There might be another explanation, however.

Rogan tweeted Friday: “There is no issue with YouTube censoring the trump episode. It was just supposed to go live on both Spotify and YouTube at the same time and there was a glitch in Spotify’s upload system and so we delisted the YouTube link until it’s fixed. It should be fine now.”

But is it fixed?

Many have said that they still cannot find the video via a Google search or the YouTube search box.

It has also not been featured on YouTube’s trending page, despite having millions of views.

Keep reading

YouTube Tightens Stranglehold On Firearms Content — Blocks All Gun-related Sponsors

When Google and YouTube first announced that they would be demonetizing a host of channels back in 2017 (including firearms-related content), they said it was because advertisers were “complaining” about their ads being featured in videos that were contrary to their messaging. In other words, the excuse was that ads embedded on firearms channels might give their customers the “wrong impression” about those companies and their products, and Google didn’t want to anger its advertising partners.

It’s hard to say how accurate this claim was. The exposure of ESG and Big Tech collusion with government agencies to censor conservative platforms supports the idea that there was probably an organized corporate push to suppress the political opposition on YouTube as much as there was an effort to shut them down on social media.

The majority of conservative content creators understood that this was not about advertisers; it was about narratives. The exploding popularity of gun channels runs contrary to the media assertion that American society is moving increasingly to the left. And, even though gun channels mostly focus on firearms and instruction, they also promote conservative and constitutional values, which represent a thorn in the side of the establishment.

Keep reading

YouTube Removes 35,000 EU Videos for “Misinformation,” Enhances Content Censorship Ahead of 2024 Elections

YouTube has (“voluntarily” or otherwise) assumed the role of a private business entity that “supports elections.”

Google’s video platform detailed in a blog post how this is supposed to play out, in this instance, in the EU.

With the European Parliament (EP) election just around the corner, YouTube set out to present “an overview of our efforts to help people across Europe and beyond find helpful and authoritative election news and information.”

The overview is the usual hodgepodge of reasonable concepts, such as promoting information on how to vote or register for voting, learning about election results, etc., that quickly morph into yet another battle in the “war on disinformation.”

And what better way to “support” an election (and by extension, democracy) – than to engage in another round of mass censorship? /s

But YouTube was happy to share that in 2023 alone, it removed 35,000 videos uploaded in the EU, having decided that this content violated the platform’s policies, including around what the blog post calls “certain types of elections misinformation” (raising the logical question if some types of “election misinformation” might be allowed).

Keep reading

YouTube Says It Has a “Responsibility” To Manipulate Algorithms Leading Up to the 2024 Election

“Responsibility” is a good word. It’s even better as an actual thing. But even just as a word, it’s a positive one. It signals that reliable people/entities are behind some project, or policy.

So no wonder, then, that the thoroughly disgraced Google/YouTube – disgraced, that is, where censorship and political bias are concerned – are trying to use the word “responsibility” as a narrative fig leaf to cover what the giant platform is actually up to – and has been, for a long while.

Enter YouTube’s newest chief product officer, Johanna Voolich. What are the priorities here? They could be summed up as four R’s and one C – namely, YouTube’s “remove, raise, reward, reduce” content approach, as per a blog post published by YouTube itself.

And then, C would be speculative, for “censorship” – which is what these supposedly fair and “uplifting” actions in reality end up achieving.

If you thought any of this could be achieved by YouTube without “boosting authoritative content” – think again. That is still a solid pledge, regurgitated by Voolich.

And if you thought somebody would finally come out and clearly spell out how, and according to whose definition, content gets to be dubbed “authoritative” or otherwise – just don’t hold your breath.

The sum total is that YouTube has a new chief product officer, but that nothing has changed.

Certainly not in this election year.

Keep reading

‘Unconstitutional’: FBI Demanded Identities of Users Who Watched Certain YouTube Videos

The FBI demanded that Google turn over the identities of tens of thousands of users who watched certain YouTube videos.

Federal investigators obtained court-ordered subpoenas for any YouTube viewers who watched tutorials on mapping with drones and augmented reality software.

The subpoena included names, addresses, telephone numbers, and browsing history for Google accounts for at least 30,000 people, tracing traffic to the relevant videos for the first week of January 2023.

The government also wanted the IP addresses of non-Google account owners who viewed the videos.

“There is reason to believe that these records would be relevant and material to an ongoing criminal investigation, including by providing identification information about the perpetrators,” the authorities claimed, according to Forbes.

Google was also told to keep the request secret until it was unsealed earlier this week. It’s unknown if Google complied with the subpoena.

But that wasn’t the only case of the FBI trampling on privacy rights.

Keep reading

Google To Start Running “Prebunk” Ads and Quizzing YouTube Viewers To Fight So-Called “Misinformation”

Prebunking – until relatively recently, it was just one of the fringe concepts in the relentless “war on misinformation” industrial complex.

A short way to describe it is as a dystopian version of debunking false or incorrect information. But here the idea is to steer users away from unwanted content (“help them identify” it) before they can even see it.

A short way to describe what’s wrong with the “war on misinformation” is that it all too easily turns into a smokescreen for plain censorship of lawful and factually correct speech.

And now, prebunking is moving from ideas pushed by murky “fact-checking” and similar outfits to the very top of the mainstream – Google.

The company that in effect controls the search market and some of the largest social platforms in the world (outside China) has announced that its latest anti-misinformation campaign will incorporate prebunking.

No doubt with an eye on the US election later in the year, Google’s attention is now on Europe, specifically the EU ahead of the European Parliament vote in June.

Keep reading

YouTube CEO Vows To Censor “Hate Speech” and Boost “Authoritative Sources” in Recommendations When People Look for Election News in 2024

YouTube CEO Neal Mohan has a post up on the company blog that, well in advance, reiterates the giant platform’s policies regarding the upcoming elections.

Even though Mohan’s “letter” is supposed to deal with the “four big bets for 2024,” the bit concerning the elections is of most interest, given the ramifications of YouTube’s previous and continued restrictive approach and unprecedented levels of censorship.

Just in case anyone worried things might improve, the post reassures them: YouTube will use its massive resources and the way the platform is structured, such as search and recommendations, to wipe out what it chooses to consider “hate speech” and at the same time “boost authoritative sources” even more.

Keep reading

Google Experiments With “Faster and More Adaptable” Censorship of “Harmful” Content Ahead of 2024 US Elections

In the run-up to the 2020 US presidential election, Big Tech engaged in unprecedented levels of election censorship, most notably by censoring the New York Post’s bombshell Hunter Biden laptop story just a few weeks before voters went to the polls.

And with the 2024 US presidential election less than a year away, both Google and its video sharing platform, YouTube, have confirmed that they plan to censor content they deem to be “harmful” in the run-up to the election.

In its announcement, Google noted that it already censors content that it deems to be “manipulated media” or “hate and harassment” — two broad, subjective terms that have been used by tech giants to justify mass censorship.

However, ahead of 2024, the tech giant has started using large language models (LLMs) to experiment with “building faster and more adaptable” censorship systems that will allow it to “take action even more quickly when new threats emerge.”

Google will also be censoring election-related responses in Bard (its generative AI chatbot) and Search Generative Experience (its generative AI search results).

In addition to these censorship measures, Google will be continuing its long-standing practice of artificially boosting content that it deems to be “authoritative” in Google Search and Google News. While this tactic doesn’t result in the removal of content, it can result in disfavored narratives being suppressed and drowned out by these so-called authoritative sources, which are mostly pre-selected legacy media outlets.

Keep reading