Musk Declares War on Apple: Threatens to Ban Devices Over “Creepy Spyware” AI Integration

Elon Musk, the CEO of Tesla, SpaceX, and X (formerly Twitter), has declared war on tech giant Apple.

The tech mogul has threatened to ban Apple devices across his companies unless Apple abandons its plan to integrate OpenAI’s woke ChatGPT technology into its operating systems.

Apple announced on Monday that it would be integrating ChatGPT into iOS, iPadOS, and macOS. This integration would allow users to access ChatGPT’s capabilities, including image and document understanding, without needing to switch between tools. Siri, Apple’s virtual assistant, could also tap into ChatGPT’s intelligence when necessary.

“We’re excited to partner with Apple to bring ChatGPT to their users in a new way. Apple shares our commitment to safety and innovation, and this partnership aligns with OpenAI’s mission to make advanced AI accessible to everyone. Together with Apple, we’re making it easier for people to benefit from what AI can offer,” said Sam Altman, CEO of OpenAI.

“It’s personal, powerful, and private—and it’s integrated into the apps you rely on every day. Introducing Apple Intelligence—our next chapter in AI,” said Tim Cook, Apple’s CEO.

In response to Tim Cook’s announcement, Musk stated, “Don’t want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies.”

Keep reading

Google Introduces App Store Censorship Rules, Bans AI Generating Various Types of “Restricted Content,” Including “Hate Speech”

Developers of apps for Android will have to adhere to a new set of rules if they wish to publish on the Google Play Store.

The “guidance” is seen by critics as yet another wave of sweeping censorship tied to AI, as Google continues to crack down on what it considers to be hate speech, profanity, bullying, harassment, and other content listed as “restricted.”

One of the types of content developers are now banned from generating refers to sensitive events – and Google’s description is another example of what is likely a deliberately vague definition, so it can be left open to arbitrary interpretation.

Namely, developers are banned from producing content items that “capitalize on or are insensitive toward a sensitive event with significant social, cultural, or political impact.”

In its support pages, Google tells developers that the intent behind the new policies is to make sure AI-generated content is “safe for all users.” The giant also wants developers to let users flag what they see as offensive, and to incorporate that “feedback” for the sake of “responsible innovation.”

According to the rules, developers are instructed to utilize user reports “to inform content filtering and moderation in their apps.”

Keep reading

Stop letting governments request social media censorship in secret

Governments around the world routinely ask global social media and search platforms to remove content. This can be a positive thing if the content in question is clearly harmful. But it can be nefarious if the content is simply inconvenient or disagreeable to a government’s viewpoint on a particular current news topic.

Unfortunately, when governments around the world request or demand censorship, they do so with the expectation that their request will remain private and not be publicized. Some online platforms report a summary of these government requests. Other platforms are completely silent.

This has been observed recently in content removal requests from a powerful court justice in Brazil. X has brought the most recent requests into the glare of the public spotlight with its initial refusals to comply, but it is clear that Brazil and other governments around the world intend to silence their opposition online without the embarrassment of making such requests in public.

The massive publicity around Australia’s recent request to remove content has triggered a national discussion in Australia on content moderation and the role of government in making specific requests.  

There are also undoubtedly numerous government requests to remove content that creates a truly imminent threat of harm. But platforms are sometimes slow or unable to remove such content quickly. Transparency for all these government requests, both good and nefarious, will improve both online safety and viewpoint neutrality.

These governments, including the EU, as well as U.S. federal government agencies, have built or are building organizational infrastructure for making content removal requests. These agencies expect that their requests will not be made public and that the social media platforms will quietly comply as directed, to avoid facing regulatory, legal, or financial consequences from these government entities.

However, when these requests are made public, as we have seen with X publicly refusing the recent requests from Brazil and Australia, they can be scrutinized and judged by the public in those countries as well as globally.

Keep reading

“Free Speech Prevailed” Says Elon Musk as Australia Drops Bid to Censor Internet Globally

Elon Musk has said “freedom of speech is worth fighting for” after Australia’s cyber safety regulator, eSafety, dropped its federal court case over X Corp’s refusal to block footage of a radicalised teenager stabbing a bishop at a church in Sydney not just for Australians, but for users of the platform worldwide.

The case has been portrayed as a battle for control of the internet, and it goes to the heart of a central and as yet unresolved issue in an increasingly online world: should government-led attempts to control the distribution within a country of what it regards as ‘harmful’ online material be allowed to impinge on the rights of those beyond its borders to access that same material?

An initial ruling by federal judge Geoffrey Kennett last month overturned orders that videos of the bishop’s stabbing be hidden because they contained what Australian authorities argued was terrorist content that might influence others.

That decision still required ratification by the court, and a case management hearing had been due to take place at a later date. However, the country’s eSafety commissioner, Julie Inman-Grant, said on Wednesday that the watchdog has decided to drop the action following Judge Kennett’s ruling.

“I have decided to discontinue the proceedings in the federal court against X Corp in relation to the matter of extreme violent material depicting the real-life graphic stabbing of a religious leader at Wakeley in Sydney,” she said, adding: “I stand by my investigators and the decisions eSafety made.”

Inman-Grant went on to cite the prudent use of public funds as one of the reasons for dropping the case, although critics say it was also increasingly apparent that the Australian state’s argument in favour of a global ban on the material was legally indefensible.

Keep reading

Google accidentally published internal Search documentation to GitHub

Google apparently accidentally posted a big stash of internal technical documents to GitHub, partially detailing how the search engine ranks webpages. For most of us, the question of search rankings is just “are my web results good or bad,” but the SEO community is both thrilled to get a peek behind the curtain and up in arms since the docs apparently contradict some of what Google has told them in the past. Most of the commentary on the leak is from SEO experts Rand Fishkin and Mike King.

Google confirmed the authenticity of the documents to The Verge, saying, “We would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information. We’ve shared extensive information about how Search works and the types of factors that our systems weigh, while also working to protect the integrity of our results from manipulation.”

The fun thing about accidentally publishing to the GoogleAPI GitHub is that, while these are sensitive internal documents, Google technically released them under an Apache 2.0 license. That means anyone who stumbled across the documents was granted a “perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license” to them, so they are now freely available online.

Keep reading

Elon Musk’s X Urges Supreme Court for Review After Jack Smith Obtained Trump Files

Elon Musk’s X Corp. has asked the U.S. Supreme Court to consider stepping in against a process that lets officials obtain information from social media companies and bars the companies from informing people whose information is handed over.

The process wrongly enables officials to “access and review potentially privileged materials without any opportunity for the user to assert privileges—including constitutional privileges,” lawyers for X said in a filing to the nation’s top court.

Unsealed documents in 2023 showed that X provided data and records from former President Donald Trump’s Twitter account to special counsel Jack Smith after Mr. Smith obtained a search warrant.

X was blocked from informing President Trump by a nondisclosure order that Mr. Smith also obtained.

The order said disclosing the warrant would result in “destruction of or tampering with evidence, intimidation of potential witnesses, and serious jeopardy to the investigation,” and could allow President Trump to “flee from prosecution.”

X challenged the order, arguing it violated its First Amendment rights and noting that President Trump might have reason to claim executive privilege, or presidential privilege. The company wanted to alert the former president so he could assert the privilege, but U.S. District Judge Beryl Howell ruled against it, claiming during a hearing that the only reason X was issuing the challenge was “because the CEO wants to cozy up with the former president.”

Keep reading

News Publishers Try To Sic the Government on Google AI

Google’s pivot to artificial intelligence has news publishers freaking out—and running to the government.

“Agency intervention is necessary to stop the existential threat Google poses to original content creators,” the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority “to stop Google’s latest expansion of AI Overviews,” a search engine innovation that Google has been rolling out recently.

Google’s plain old top-of-page links to news outlets or other informational sites are disappearing in many searches. Now much of this prime search-results real estate is taken up by what Google is calling AI Overviews.

Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. (If you haven’t used Google in a while, try it now and see for yourself.)

The results have been far from perfect (would you like some glue with that pizza?) and leave a lot of room for skepticism and interpretation. (This past weekend, Overviews fed me seemingly contradictory advice about baby fevers within a two-paragraph span.) But that’s also often true of what you would find from an old-school list of links and snippets. And Google has been inching away from link-prominent results for a while now, featuring brief bits of (non-AI-generated) content in response to many search queries and in the form of Q&A text throughout link pages. So the recent appearance of AI-generated text answers wasn’t even immediately noticeable to me.

But newspaper and magazine publishers sure are noticing.

Keep reading

Google Tightens Influence on UK Elections with New “Moderation” Tactics

Google has found itself yet another election to “support.”

After the company made announcements to this effect related to the EU (European Parliament) June ballot, voters in the UK can now also look forward to – or dread, as the case may be – the tech giant’s role in their upcoming general election.

A blog post by Google UK Director of Government Affairs and Public Policy Katie O’Donovan announced even more “moderation” and a flurry of other measures, most of which have become tried-and-tested instruments of Google’s censorship over the past years.

They are divided into three categories: pushing (“surfacing”) content and sources of information picked by Google as authoritative and of high quality, along with YouTube information panels; investing in what it calls Trust & Safety operations; and “equipping campaigns with the best-in-class security tools and training.”

Another common point is combating “misinformation” together with what the blog post refers to as “the wider ecosystem.” That means the Google News Initiative and PA Media, a private news agency, and their Election Check 24 project, which is supposed to safeguard the UK election from “mis- and dis-information.”

Searches related to voting are “rigged” to return results manipulated to boost what Google considers authoritative sources – notably, the UK government’s site.

As for AI, the promise is that users of Google platforms will receive “help navigating” that type of content.

This includes the obligation for advertisers to reveal that ads “include synthetic content that inauthentically depicts real or realistic-looking people or events” (this definition can easily be stretched to cover parody, memes, and similar).

“Disclosure” here, however, is still distinct from Google’s outright ban on manipulated media that it decides “misleads people.” Such content is labeled, and banned outright if deemed capable of posing “a serious risk of egregious harm.”

Keep reading

The internet is disappearing, with a quarter of all webpages from 2013 to 2023 going the way of the dodo

Well, so long and thanks for all the fish. A study from the Pew Research Center entitled “When Online Content Disappears” indicates that our beloved internet may well be disappearing beneath our fingers—with a quarter of all webpages that existed between 2013 and 2023 found to be no longer accessible.

Contrary to the popular perception that everything committed to the interwebs is destined to exist forever, the study revealed that 38% of pages that existed in 2013 alone have now been lost (via The Independent). It doesn’t appear to be an age-related phenomenon, either. 

Even newer pages appear to be performing vanishing acts: 8% of pages that existed in 2023 were found to be unavailable, too.

The study made use of Common Crawl, an open repository of web crawl data that archives billions of webpages and provides archives and datasets for public use. The researchers took random samples of over a million webpages, before checking the links to see which were still active, and which had gone to the great lost information archive in the sky.
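The audit the researchers describe can be approximated in a few lines. Below is a minimal, hypothetical Python sketch of that kind of check: it classifies sampled links as accessible or not from their HTTP status codes and computes the share that have vanished. The function names and the 4xx/5xx cutoff are illustrative assumptions, not the study’s actual code.

```python
# Hypothetical sketch of a link-rot audit in the spirit of the
# methodology described above. The 4xx/5xx cutoff and function
# names are illustrative assumptions, not Pew's actual code.

def is_inaccessible(status_code: int) -> bool:
    """Treat HTTP client and server errors (4xx/5xx) as dead links."""
    return status_code >= 400

def dead_link_share(status_codes: list[int]) -> float:
    """Fraction of sampled links that are no longer accessible."""
    if not status_codes:
        return 0.0
    dead = sum(1 for code in status_codes if is_inaccessible(code))
    return dead / len(status_codes)

# In a real audit the status codes would come from fetching each
# sampled URL (e.g. with urllib.request); here we use a canned
# sample in which one of four links returns 404.
print(dead_link_share([200, 301, 404, 200]))  # → 0.25
```

A real crawl would also need to treat timeouts, DNS failures, and redirect loops as inaccessible, which a status-code check alone does not capture.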

The results showed that 23% of news pages and 21% of government websites studied included at least one broken link, while a staggering 54% of Wikipedia pages contained a reference link that no longer exists. That’s a lot of facts that can no longer be reasonably checked.

Given the internet’s integral role in modern society (for better or worse) in terms of verifying information, these results are troubling. With the increasing proliferation of misleading AI content, losing valuable pre-AI-era sources of information can’t possibly help.

Compounding this slide into a murky world where verifiable information is increasingly harder to find, a recent study found that 46.9% of all internet traffic could be attributed to bots—many of which may be contributing all sorts of made-up information to further muddy the waters.

Keep reading

Google’s Woke AI Is Hilariously But Frighteningly Broken

Google’s hastily rolled-out AI Overview feature is disastrously broken, returning search results claiming that people should spread glue on pizzas, eat rocks, and that it’s safe for pregnant women to smoke cigarettes.

The Verge reports that Google is scrambling to manually disable the AI Overview feature for certain searches after users found it giving out some truly bizarre advice and information that is just made-up nonsense.

According to one AI Overview, cockroaches are so named because they live in penis holes.

Keep reading