Google To Start Running “Prebunk” Ads and Quizzing YouTube Viewers To Fight So-Called “Misinformation”

Prebunking – until relatively recently it was just one of the fringe concepts in the relentless "war on misinformation" industrial complex.

A short way to describe it is as a dystopian version of debunking false or incorrect information. But here the idea is to steer users away from ("help them identify") unwanted content before they have even seen it.

A short way to describe what’s wrong with the “war on misinformation” is that it all too easily turns into a smokescreen for plain censorship of lawful and factually correct speech.

And now, prebunking is moving from an idea pushed by murky "fact-checking" and similar outfits to the very top of the mainstream – Google.

The company that in effect controls the search market and some of the largest social platforms in the world (outside China) has announced that its latest anti-misinformation campaign will incorporate prebunking.

No doubt with an eye on the US election later in the year, Google is now turning its attention to Europe, specifically the EU, ahead of the European Parliament vote in June.

Keep reading

EU Officials Start Crafting Censorship Guidelines for Big Tech Companies Ahead of 2024 Elections

The European Union has announced that it has started putting together what it calls “guidelines for election integrity” – but what critics will describe in plain language as censorship guidelines that Big Tech is supposed to follow.

The process of drafting these instructions, a part of the Digital Services Act (DSA), was initiated with a public consultation that will last until March 7, and the EU said these will be the first guidelines under the DSA.

Social media platforms and services covered by the DSA are referred to as Very Large Online Platforms and Search Engines, and they are the ones expected to implement what the EU considers "best practices and possible measures to mitigate systemic risks" related to elections.

The concept of free and fair elections is long-standing, but the EU has managed to work the term “resilient elections” in there as well, as the ultimate goal of the new guidelines.

The draft also gives examples of what the bloc considers to be good ways to censor unwanted content – where censorship is referred to as "mitigating measures." Particular attention is paid to generative AI, i.e., deepfakes.

The platforms are supposed to stick to the guidelines before, as well as after the voting, and for once, “billions of people all around the world going to the polls this year” are not mentioned as the justification for the “measures.”

At least the EU does not do so while announcing the drafting of the guidelines, although legacy media do while reporting on it. Executive Vice-President for a Europe Fit for the Digital Age Margrethe Vestager is quoted as saying that the concern here is elections at various levels in EU nation-states, as well as those for the European Parliament.

According to Vestager, voters must discuss issues online “in a safe way.”

Keep reading

YouTube CEO Vows To Censor “Hate Speech” and Boost “Authoritative Sources” in Recommendations When People Look for Election News in 2024

YouTube CEO Neal Mohan has published a post on the company blog to reiterate, well in advance, the giant platform's policies regarding the upcoming elections.

Even though Mohan’s “letter” is supposed to deal with the “four big bets for 2024,” the bit concerning the elections is of most interest, given the ramifications of YouTube’s previous and continued restrictive approach and unprecedented levels of censorship.

Just in case anyone worried things might improve, the post reassures them: YouTube will use its massive resources and the way the platform is structured, such as search and recommendations, to wipe out what it chooses to consider “hate speech” and at the same time “boost authoritative sources” even more.

Keep reading

House Weaponization Committee Report Warns of AI-Enabled Censorship Echoing 2020 Social Media Suppression

A Republican report from the House Judiciary Committee on Tuesday raised alarms over possible AI-enabled censorship on a massive scale, reminiscent of Twitter and Facebook’s suppression of the Hunter Biden laptop exposé in 2020.

The accusations focus on a series of AI tools currently under development. Financial backing for these tools has reportedly come from the Biden administration, raising concerns about political influence over freedom of speech.

We obtained a copy of the report for you here.

According to the report, the administration has invested millions of dollars in AI research. The stated objective of this activity is the creation of tools capable of targeting and suppressing "misinformation." Once operational, these tools could ostensibly be handed over to major social media platforms.

Keep reading

Government-Funded Entities Build Network to Flag “Misinformation” In Private Messages

More reports are emerging about the various forms in which the Big Tech/government collusion is taking place in the US.

It's not just direct pressure on – or "communicating with," as current White House officials like to put it – social sites; reports are now emerging about companies being hired to compile massive databases of supposedly unlawful speech, built thanks to users effectively spying and reporting on each other on messaging platforms like WhatsApp.

Mike Benz, a former State Department official and now executive director of the Foundation For Freedom Online, calls this "a snitch network of citizen informants." Information thus obtained is then analyzed using some form of AI, resulting in the identification of "misinformation trends."

One of these companies is the Algorithmic Transparency Institute. The money comes from firms that receive government funds and from congressionally chartered organizations.

The need to resort to “old school” citizen-informant methods arises from the nature of the platforms the government would like to spy on, and get content flagged and eventually censored. It’s the likes of WhatsApp and Telegram, where, due to the nature of (particularly encrypted) private messaging, the now established forms of “monitoring” places like Facebook or YouTube cannot be used.

Keep reading

Google will no longer back up the Internet: Cached webpages are dead

Google will no longer be keeping a backup of the entire Internet. Google Search’s “cached” links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off. Google “Search Liaison” Danny Sullivan confirmed the feature removal in an X post, saying the feature “was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.”

The feature has been appearing and disappearing for some people since December, and currently, we don't see any cache links in Google Search. For now, you can still build your own cache links even without the button, by going to "https://webcache.googleusercontent.com/search?q=cache:" plus a website URL, or by typing "cache:" plus a URL into Google Search. At the time of writing, the cached version of Ars Technica still seems to work. All of Google's support pages about cached sites have been taken down.
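As a minimal sketch, the manual trick described above just means prepending Google's cache endpoint to a page's URL. The helper name `google_cache_url` is ours for illustration, and since Google is retiring the feature, links built this way may stop resolving at any point:

```python
from urllib.parse import quote

# Cache endpoint mentioned in the article; treat it as historical,
# since Google is in the process of retiring cached pages.
CACHE_PREFIX = "https://webcache.googleusercontent.com/search?q=cache:"

def google_cache_url(page_url: str) -> str:
    """Build a manual Google cache link for a page, per the trick above."""
    # Leave ":" and "/" intact so the target URL stays readable.
    return CACHE_PREFIX + quote(page_url, safe=":/")

print(google_cache_url("https://arstechnica.com/"))
```

Typing `cache:` plus the URL directly into Google Search reportedly produced the same result while the feature lasted.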

Cached links used to live under the drop-down menu next to every search result on Google’s page. As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing. That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data. Google is in the era of cost savings now, so assuming Google can just start deleting cache data, it can probably free up a lot of resources.

Keep reading

Senators Obliterate Zuckerberg For ‘Helping’ Pedos Find Child Sex Abuse Content

During a remarkable Senate hearing Wednesday, Republican Senators wiped the floor with Meta owner Mark Zuckerberg, exposing how his company has acted woefully when it comes to child sexual abuse material and other harmful content on its platforms that has directly led to the deaths of children.

By the end of the hearing Zuckerberg was utterly humiliated, forced to stand and face the families of victims who have suffered because of his failures, and told that he should be sued into oblivion for gross dereliction of duty.

During the hearing titled ‘Big Tech and the Online Child Sexual Exploitation Crisis’, Senator Ted Cruz essentially accused Zuckerberg of helping pedophiles gain access to child porn on his platforms.

“Every parent in America is terrified about the garbage that is directed at our kids,” Cruz told Zuckerberg, adding “the phones they have are portals to predators…and each of your companies could do a lot more to prevent it.”

Keep reading

Lawmakers and Tech CEOs Push Online Age and ID Verification Proposals During Hearing on Child Safety

As we reported previously, US lawmakers are intent on pushing online ID and age verification, and on bringing an end to online anonymity – despite constitutional concerns.

And during a hearing today, tech CEOs supported proposals that would greatly expand the requirements for online ID verification and erode the ability to use the internet without connecting your online activity to your identity.

The proposals are being pushed in the name of protecting children online but would impact anyone who doesn’t want to tie all of their online speech and activity to their real ID – over surveillance or censorship concerns.

In response to criticism from lawmakers, Meta CEO Mark Zuckerberg pushed for far-reaching online age verification standards that would impose age verification at the app store level — a proposal that would mean the vast majority of mobile app usage could be tied to a person’s official identity.

Keep reading

Google Update Reveals AI Will Read All Your Private Messages

There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”

But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”

And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.

There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training, and maybe seen by humans – albeit anonymized. This data will be stored for 18 months and will persist for a few days even if you disable the AI, though manual deletion is available.

Keep reading

X CEO Linda Yaccarino Says “Free Speech” Ends at “Hate Speech”

X continues to sit on two chairs and send mixed signals regarding the company’s stance on free speech.

A new blog post penned this week by X Corp CEO Linda Yaccarino goes into this, at once claiming that society must "empower people to express its thoughts" – but also that the line must be drawn at "hate" and "hate speech."

Considering the platform's long and difficult history of suppressing free speech, well documented in the Twitter Files, and the fact that terms like "hate speech" – not to mention "misinformation" – are so often used simply to cover up straight-up censorship, Yaccarino's intent here can be seen as confusing.

All the more so since the blog post is entitled, “Safeguarding Information Independence and Combating Hate Speech” only to be followed by the subtitle, “Building an Indispensable Global Town Square.”

This is particularly interesting since it’s an admission of sorts that X is indeed a (digital) town square. The argument that this is the case with all major social sites has been used for a long time to prove that speech there should be protected under the US Constitution’s First Amendment, regardless of the companies being privately-owned.

The term “modern public square” as it pertains to social networks is found in the 2017 US Supreme Court opinion in Packingham v. North Carolina.

Keep reading