“Free Speech Prevailed” Says Elon Musk as Australia Drops Bid to Censor Internet Globally

Elon Musk has said “freedom of speech is worth fighting for” after Australia’s cyber safety regulator, eSafety, dropped its federal court case over X Corp’s refusal to block footage of a radicalised teenager stabbing a bishop at a church in Sydney not just for Australians, but for users of the platform worldwide.

The case has been portrayed as a battle for control of the internet, and it goes to the heart of a central, as-yet-unresolved question of an increasingly online world: whether government-led attempts to restrict the distribution within a country of what it regards as ‘harmful’ online material should be allowed to impinge on the rights of those beyond its borders to access the same material.

An initial ruling by federal judge Geoffrey Kennett last month overturned orders requiring that videos of the bishop’s stabbing be hidden because they contained what Australian authorities argued was terrorist content that might influence others.

That decision still required ratification by the court, and a case management hearing had been due to take place at a later date. However, the country’s eSafety commissioner, Julie Inman-Grant, said on Wednesday that the watchdog has decided to drop the action following Judge Kennett’s ruling.

“I have decided to discontinue the proceedings in the federal court against X Corp in relation to the matter of extreme violent material depicting the real-life graphic stabbing of a religious leader at Wakeley in Sydney,” she said, adding: “I stand by my investigators and the decisions eSafety made.”

Inman Grant went on to cite the prudent use of public funds as one of the reasons for dropping the case, although critics say it had also become increasingly apparent that the Australian state’s argument for a global ban on the material was legally indefensible.


Google accidentally published internal Search documentation to GitHub

Google apparently accidentally posted a big stash of internal technical documents to GitHub, partially detailing how the search engine ranks webpages. For most of us, the question of search rankings is just “are my web results good or bad,” but the SEO community is both thrilled to get a peek behind the curtain and up in arms since the docs apparently contradict some of what Google has told them in the past. Most of the commentary on the leak is from SEO experts Rand Fishkin and Mike King.

Google confirmed the authenticity of the documents to The Verge, saying, “We would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information. We’ve shared extensive information about how Search works and the types of factors that our systems weigh, while also working to protect the integrity of our results from manipulation.”

The fun thing about accidentally publishing to the GoogleAPI GitHub is that, while these are sensitive internal documents, Google technically released them under an Apache 2.0 license. That means anyone who stumbled across the documents was granted a “perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license” to them, so they are now freely available online.


Elon Musk’s X Urges Supreme Court for Review After Jack Smith Obtained Trump Files

Elon Musk’s X Corp. has asked the U.S. Supreme Court to consider stepping in against a process that lets officials obtain information from social media companies and bars the companies from informing people whose information is handed over.

The process wrongly enables officials to “access and review potentially privileged materials without any opportunity for the user to assert privileges—including constitutional privileges,” lawyers for X said in a filing to the nation’s top court.

Unsealed documents in 2023 showed that X provided data and records from former President Donald Trump’s Twitter account to special counsel Jack Smith after Mr. Smith obtained a search warrant.

X was blocked from informing President Trump by a nondisclosure order that Mr. Smith also obtained.

The order said disclosing the warrant would result in “destruction of or tampering with evidence, intimidation of potential witnesses, and serious jeopardy to the investigation,” and could allow President Trump to “flee from prosecution.”

X challenged the order, arguing it violated its First Amendment rights and noting that President Trump might have reason to claim executive privilege, or presidential privilege. The company wanted to alert the former president so he could assert the privilege, but U.S. District Judge Beryl Howell ruled against it, claiming during a hearing that the only reason X was issuing the challenge was “because the CEO wants to cozy up with the former president.”


News Publishers Try To Sic the Government on Google AI

Google’s pivot to artificial intelligence has news publishers freaking out—and running to the government.

“Agency intervention is necessary to stop the existential threat Google poses to original content creators,” the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority “to stop Google’s latest expansion of AI Overviews,” a search engine innovation that Google has been rolling out recently.

Google’s plain old top-of-page links to news outlets or other informational sites are disappearing in many searches. Now much of this prime search-results real estate is taken up by what Google is calling AI Overviews.

Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. (If you haven’t used Google in a while, try it now and see for yourself.)

The results have been far from perfect (would you like some glue with that pizza?) and leave a lot of room for skepticism and interpretation. (This past weekend, Overviews fed me seemingly contradictory advice about baby fevers within a two-paragraph span.) But that’s also often true of what you would find from an old-school list of links and snippets. And Google has been inching away from link-prominent results for a while now, featuring brief bits of (non-AI-generated) content in response to many search queries and in the form of Q&A text throughout link pages. So the recent appearance of AI-generated text answers wasn’t even immediately noticeable to me.

But newspaper and magazine publishers sure are noticing.


Google Tightens Influence on UK Elections with New “Moderation” Tactics

Google has found itself yet another election to “support.”

After the company made announcements to this effect related to the EU (European Parliament) June ballot, voters in the UK can now also look forward to – or dread, as the case may be – the tech giant’s role in their upcoming general election.

A blog post by Google UK Director of Government Affairs and Public Policy Katie O’Donovan announced even more “moderation” and a flurry of other measures, most of which have become tried-and-tested instruments of Google’s censorship over the past few years.

They fall into three categories: pushing (“surfacing”) content and sources of information that Google deems authoritative and high quality, along with YouTube information panels; investing in what it calls Trust & Safety operations; and “equipping campaigns with the best-in-class security tools and training.”

Another common point is combating “misinformation” – together with what the blog post refers to as “the wider ecosystem.” That concerns Google News Initiative and PA Media, a private news agency, and their Election Check 24, which is supposed to safeguard the UK election from “mis- and dis-information.”

Searches related to voting are “rigged” to return results manipulated to boost what Google considers authoritative sources – notably, the UK government’s site.

As for AI, the promise is that users of Google platforms will receive “help navigating” that type of content.

This includes the obligation for advertisers to reveal that ads “include synthetic content that inauthentically depicts real or realistic-looking people or events” (this definition can easily be stretched to cover parody, memes, and similar).

“Disclosure” here, however, is still distinct from Google’s outright ban on manipulated media that it decides “misleads people.” Such content is labeled, and banned if deemed capable of posing “a serious risk of egregious harm.”


The internet is disappearing, with a quarter of all webpages from 2013 to 2023 going the way of the dodo

Well, so long and thanks for all the fish. A study from the Pew Research Center entitled “When Online Content Disappears” indicates that our beloved internet may well be disappearing beneath our fingers—with a quarter of all webpages that existed between 2013 and 2023 found to be no longer accessible.

Contrary to the popular perception that everything committed to the interwebs is destined to exist forever, the study revealed that 38% of pages that existed in 2013 alone have now been lost (via The Independent). Nor is the loss confined to older pages.

Even newer pages appear to be performing vanishing acts—eight percent of pages that existed in 2023 were found to be unavailable, too.

The study made use of Common Crawl, an open repository of web crawl data that archives billions of webpages and provides archives and datasets for public use. The researchers took random samples of over a million webpages, before checking the links to see which were still active, and which had gone to the great lost information archive in the sky.
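The link-checking step the researchers describe is simple to reproduce in miniature. Below is a minimal sketch in Python; the HEAD-request approach and the dead/alive cutoff (no response, or a 4xx/5xx status) are illustrative assumptions, not Pew’s actual methodology:

```python
from urllib import request, error

def fetch_status(url: str, timeout: float = 10.0):
    """Return the HTTP status code for a URL, or None if it is unreachable."""
    try:
        req = request.Request(url, method="HEAD")  # HEAD skips downloading the body
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except error.HTTPError as e:
        return e.code  # server responded with an error code, e.g. 404 or 410
    except (error.URLError, ValueError, OSError):
        return None    # DNS failure, refused connection, or malformed URL

def link_rot_rate(statuses) -> float:
    """Fraction of sampled pages counted as gone: no response or a 4xx/5xx code."""
    dead = sum(1 for s in statuses if s is None or s >= 400)
    return dead / len(statuses)

# Checking a sample of URLs would look like:
#   statuses = [fetch_status(u) for u in sampled_urls]
#   print(f"{link_rot_rate(statuses):.0%} of sampled pages are gone")
```

A redirect (3xx) is followed automatically by `urlopen`, so a page that moved but still resolves counts as alive here, roughly matching the study's focus on pages that are "no longer accessible" rather than merely relocated.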

The results showed 23% of news pages and 21% of government websites studied were found to include at least one broken link, while a staggering 54% of Wikipedia pages included a reference link that no longer exists. That’s a lot of facts that can no longer be reasonably checked.

Given the internet’s integral role in modern society (for better or worse) in verifying information, these results are troubling. With the increasing proliferation of misleading AI content, losing valuable pre-AI-era sources of information can’t possibly help.

Compounding this slide into a murky world where verifiable information is increasingly harder to find, a recent study found that 46.9% of all internet traffic could be attributed to bots—many of which may be contributing all sorts of made-up information to further muddy the waters.


Google’s Woke AI Is Hilariously But Frighteningly Broken

Google’s hastily rolled-out AI Overview feature is disastrously broken, returning search results claiming that people should spread glue on pizza, that they should eat rocks, and that it’s safe for pregnant women to smoke cigarettes.

The Verge reports that Google is scrambling to manually disable the AI Overview feature for certain searches after users found it giving out some truly bizarre advice, along with information that is just made-up nonsense.

Apparently cockroaches are so named because they live in penis holes.


Facebook Censors Media Who Criticize FBI’s ‘Deadly Force’ Raid Against Trump

Facebook is deploying so-called “fact-checkers” to run interference for the FBI after it was revealed the agency authorized the use of “deadly force” against former President Trump during its 2022 raid on his Mar-a-Lago estate.

On Thursday, the Big Tech platform slapped an “independent fact-check” on The Federalist’s May 21 report detailing the contents of unsealed court documents that revealed the FBI gave agents raiding Trump’s Florida residence the green light to use “deadly force” against the former president “when necessary.” The raid — which took place on Aug. 8, 2022 — was approved by Attorney General Merrick Garland and reportedly aimed at retrieving “any document Trump ever saw, read, or created for the entirety of his four years as commander-in-chief.”

According to the filing by Trump’s legal team, the FBI’s operations order “contained a ‘Policy Statement’ regarding ‘Use Of Deadly Force,’ which stated, for example, ‘Law enforcement officers of the Department of Justice may use deadly force when necessary …’” The document further revealed that these agents “planned to bring ‘Standard Issue Weapon[s],’ ‘Ammo,’ ‘Handcuffs,’ and ‘medium and large sized bolt cutters,’ but they were instructed to wear ‘unmarked polo or collared shirts’ and to keep ‘law enforcement equipment concealed.’”

The FBI also appeared to provide guidance to agents on how to engage Trump and Secret Service personnel if they were encountered during the raid.


Exposing The CIA’s Secret Effort To Seize Control Of Social Media

While the CIA is strictly prohibited from spying on or running clandestine operations against American citizens on US soil, a bombshell new “Twitter Files” report reveals that a member of the Board of Trustees of In-Q-Tel – the CIA’s mission-driven venture capital firm – along with “former” intelligence community (IC) and CIA analysts, was involved in a massive effort in 2021–2022 to take over Twitter’s content management system, as Michael Shellenberger, Matt Taibbi and Alex Gutentag report over at Shellenberger’s Public (subscribers can read the extensive 6,800-word report there).

According to “thousands of pages of Twitter Files and documents,” these efforts were part of a broader strategy to manage how information is disseminated and consumed on social media under the guise of combating ‘misinformation’ and foreign propaganda efforts – as this complex of government-linked individuals and organizations has gone to great lengths to suggest that narrative control is a national security issue.


New X Policy Forces Earners To Verify Their Government ID With Israeli Verification Company

X, formerly Twitter, is now mandating government ID-based account verification for users who earn revenue on the platform – whether from advertising or paid subscriptions.

To implement this system, X has partnered with Au10tix, an Israeli company known for its identity verification solutions. Users who opt to receive payouts on the platform will have to undergo a verification process with the company.

This initiative aims to curb impersonation and fraud and to improve user support, yet it also raises profound questions about privacy and free speech: X markets itself as a free speech platform, and free speech and anonymity often go hand-in-hand. This is especially true in countries where speech can get citizens jailed or worse.

“We’re making changes to our Creator Subscriptions and Ads Revenue Share programs to further promote authenticity and fight fraud on the platform. Starting today, all new creators must verify their ID to receive payouts. All existing creators must do so by July 1, 2024,” the update to X’s verification page now reads.
