New York Passes Online Age Verification Digital ID Law

Lawmakers in New York have passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act and the Child Data Protection Act.

Assembly Bill A8148A and Senate Bill S7694A (which became the SAFE Act) were introduced with the stated aim of preventing social platforms from showing minors “addictive” (i.e., algorithmically manipulated) feeds, among a host of other provisions.

Parental consent is now required for children to access such feeds – which in turn means that the controversial age verification for adults must be introduced into the mix.

The new rules will not prohibit children from searching for particular keywords, but social platforms will not be able to send notifications “regarding addictive feeds” to their phones between midnight and 6 am – again, this will be possible, but only with parental consent.

Some skeptics might wonder whether this is the true impetus behind the two bills – to usher in age verification and digital ID.

“A First Victory Against Big Tech!” – Belgian Lawmaker Awarded €27k From Meta For Unfair Facebook ‘Shadowban’

Meta, the parent company of Facebook, has been ordered to pay damages in the sum of €27,000 to a Belgian right-wing lawmaker for unfairly limiting his reach on the social media platform, otherwise known as “shadowbanning.”

The Antwerp Court of Appeal ruled on Monday in favor of Tom Vandendriessche, an MEP standing for reelection as the lead candidate for the Flemish separatist party, Vlaams Belang, in Belgium.

The court held that Facebook had unfairly censored Vandendriessche’s account, which currently boasts 234,000 followers, back in February 2021, and that it had failed to act “in accordance with the principle of good faith” or to offer “sufficient procedural guarantees” for users subjected to such measures. His account was subsequently blocked in May of the same year.

Meta claimed it had acted in accordance with its community guidelines and accused the Belgian lawmaker of posting inappropriate content on the platform, leading to the shadowban. However, Vandendriessche was informed by the social media giant that the ban had been lifted at the end of 2021, a claim he contested, as his organic reach remained artificially low.

No ruling was made on this claim, as the court held there was insufficient evidence to prove the account remained subject to adverse measures.

The judgment overturned the decision of the court of first instance, which had held that Belgian courts lacked jurisdiction over the matter, prompting Vandendriessche’s appeal to the higher court.

In a statement following the ruling, the Vlaams Belang politician hailed “a first victory against Big Tech,” insisting that “anonymous technocrats should never dictate what can be said and heard.”

“I hope that this ruling makes it clear to Facebook that they can no longer censor me, and many citizens with me, without consequences,” he added.

Stop letting governments request social media censorship in secret

Governments around the world routinely ask global social media and search platforms to remove content. This can be a positive thing when the content in question is clearly harmful. But it can be nefarious when the content is simply inconvenient or disagreeable to a government’s viewpoint on a particular current news topic.

Unfortunately, when governments around the world request or demand censorship, they do so with the expectation that their request will remain private and not be publicized. Some online platforms report a summary of these government requests. Other platforms are completely silent.

This has been observed recently in content removal requests from a powerful court justice in Brazil. X has brought the most recent requests into the glare of publicity with its initial refusals to comply, but it is clear that Brazil and other governments around the world intend to silence their opposition online without the embarrassment of making such requests in public.

The massive publicity around Australia’s recent request to remove content has triggered a national discussion in Australia on content moderation and the role of government in making specific requests.  

There are also undoubtedly numerous government requests to remove content that creates a truly imminent threat of harm. But platforms are sometimes slow to remove, or unable to quickly remove, such content. Transparency for all these government requests, both good and nefarious, will improve both online safety and viewpoint neutrality.

These governments, along with the EU and U.S. federal agencies, have built or are building organizational infrastructure to make content removal requests. These government agencies expect that their requests will not be made public and that the social media platforms will quietly comply as directed, to avoid facing regulatory, legal, or financial consequences from these government entities.

However, when these requests are made public, as we have seen with X publicly refusing the recent requests from Brazil and Australia, the requests can be scrutinized and judged by the public in those countries as well as globally.

New York’s “SAFE” Digital ID Act For Kids Threatens Online Free Speech and Privacy

Legislators in the state of New York are pushing two new bills to regulate the internet, specifically as it pertains to the way minors use social media – Assembly Bill A8148A and Senate Bill S7694A.

If they pass, the law would be the first of its kind in the US and would likely serve as a blueprint for other states.

But the bills, dubbed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, have drawn criticism for raising constitutional issues tied to First Amendment rights.

Meanwhile, Governor Kathy Hochul and state lawmakers are said to be close to agreeing on the text of the bills, which are presented as designed to prohibit tech platforms from serving addictive feeds to minors (replacing them with content shown in chronological order) and from monetizing their data, among other things.

But how would these platforms ascertain whether somebody is a minor? By requiring that their parents go through digital ID age verification before they can provide consent on behalf of their children to use a particular social network in a particular way.

And this is where the legislative intent goes against the First Amendment, critics say, as having all online activity tied to a government-issued ID chills free speech and opens data privacy issues.

Somewhat ironically, given their open disregard of the First Amendment in other scenarios, those critics include some of the biggest tech companies.

Constitution and freedom of expression aside, their bottom lines would suffer if the bills pass, and so they find themselves as (no doubt, for both parties) uneasy bedfellows with those who consistently campaign against age verification, manipulated feeds, and data harvesting.

“Free Speech Prevailed” Says Elon Musk as Australia Drops Bid to Censor Internet Globally

Elon Musk has said “freedom of speech is worth fighting for” after Australia’s cyber safety regulator, eSafety, dropped its federal court case over X Corp’s refusal to block footage of a radicalised teenager stabbing a bishop at a church in Sydney – not just for Australians, but for users of the platform worldwide.

The case has been portrayed as a battle for control of the internet. It goes to the heart of a central and as yet unresolved issue in an increasingly online world: whether government-led attempts to control the distribution within a country of what it regards as ‘harmful’ online material should be allowed to impinge on the rights of those beyond its borders to access that same material.

An initial ruling by federal judge Geoffrey Kennett last month overturned orders that videos of the bishop’s stabbing were to be hidden because they contained what Australian authorities argue is terrorist content that might influence others.

That decision still required ratification by the court, and a case management hearing had been due to take place at a later date. However, the country’s eSafety commissioner, Julie Inman-Grant, said on Wednesday that the watchdog has decided to drop the action following Judge Kennett’s ruling.

“I have decided to discontinue the proceedings in the federal court against X Corp in relation to the matter of extreme violent material depicting the real-life graphic stabbing of a religious leader at Wakeley in Sydney,” she said, adding: “I stand by my investigators and the decisions eSafety made.”

Inman-Grant went on to cite the prudent use of public funds as one of the reasons for dropping the case, although critics say it was also increasingly apparent that the Australian state’s argument in favour of a global ban on the material was legally indefensible.

Elon Musk’s X Urges Supreme Court for Review After Jack Smith Obtained Trump Files

Elon Musk’s X Corp. has asked the U.S. Supreme Court to consider stepping in against a process that lets officials obtain information from social media companies and bars the companies from informing people whose information is handed over.

The process wrongly enables officials to “access and review potentially privileged materials without any opportunity for the user to assert privileges—including constitutional privileges,” lawyers for X said in a filing to the nation’s top court.

Unsealed documents in 2023 showed that X provided data and records from former President Donald Trump’s Twitter account to special counsel Jack Smith after Mr. Smith obtained a search warrant.

X was blocked from informing President Trump by a nondisclosure order that Mr. Smith also obtained.

The order said disclosing the warrant would result in “destruction of or tampering with evidence, intimidation of potential witnesses, and serious jeopardy to the investigation,” and could let President Trump “flee from prosecution.”

X challenged the order, arguing it violated its First Amendment rights and noting that President Trump might have reason to claim executive privilege, or presidential privilege. The company wanted to alert the former president so he could assert the privilege, but U.S. District Judge Beryl Howell ruled against it, claiming during a hearing that the only reason X was issuing the challenge was “because the CEO wants to cozy up with the former president.”

Insane & Gross Turbo Cancer Cases From Covid Shots Flooding Social Media

Extremely aggressive, fast-acting turbo cancers are being widely discussed on the social media site X (formerly Twitter).

In a post on X from Tuesday, ‘They Keep Saying Its Rare‘, a Covid vaccine adverse event reporting profile, documented a gruesome face cancer on a vaccine victim and linked to the related study on the phenomenon.

The case report study detailed the serious Covid shot side effect, which began just four days post-vaccination.

“We report on an aggressive, infiltrating, metastatic, and ultimately lethal basaloid type of carcinoma arising shortly after an mRNA vaccination for COVID-19,” the study said in the ‘Abstract’ section. “We place this within the context of multiple immune impairments potentially related to the mRNA injections that would be expected to potentiate more aggressive presentation and progression of cancer.”

Lawmakers Push for the Censorship of “Harmful Content,” “Disinformation” in Latest Section 230 Reform Push

Section 230 of the Communications Decency Act (CDA), an online liability shield that prevents online apps, websites, and services from being held civilly liable for content posted by their users if they act in “good faith” to moderate content, provided the foundation for most of today’s popular platforms to grow without being sued out of existence. But as these platforms have grown, Section 230 has become a political football that lawmakers have used in an attempt to influence how platforms editorialize and moderate content: pro-censorship factions threaten reforms that would force platforms to censor more aggressively, while pro-free speech factions push reforms that would reduce the power of Big Tech to censor lawful speech.

And during a Communications and Technology Subcommittee hearing yesterday, lawmakers discussed a radical new Section 230 proposal that would sunset the law and create a new solution that “ensures safety and accountability for past and future harm.”

We obtained a copy of the draft bill to sunset Section 230 for you here.

Exposing The CIA’s Secret Effort To Seize Control Of Social Media

While the CIA is strictly prohibited from spying on or running clandestine operations against American citizens on US soil, a bombshell new “Twitter Files” report reveals that a member of the Board of Trustees of In-Q-Tel – the CIA’s mission-driven venture capital firm – along with “former” intelligence community (IC) and CIA analysts, was involved in a massive effort in 2021-2022 to take over Twitter’s content management system, as Michael Shellenberger, Matt Taibbi, and Alex Gutentag report at Shellenberger’s Public (subscribers can check out the extensive 6,800-word report here).

According to “thousands of pages of Twitter Files and documents,” these efforts were part of a broader strategy to manage how information is disseminated and consumed on social media under the guise of combating ‘misinformation’ and foreign propaganda efforts – as this complex of government-linked individuals and organizations has gone to great lengths to suggest that narrative control is a national security issue.

UK “disinformation unit” spied on citizens and flagged online speech for removal during pandemic

New documents reveal that authorities in the UK considered placing government employees inside social media companies to form a type of digital KGB that would control online speech during the pandemic.

This is according to recently released minutes from the governing board of the Counter Disinformation Unit (CDU). They show that, as many people suspected, British authorities were actively involved in monitoring people’s speech online and flagging certain viewpoints for removal.

At one point, they discussed a strategy to “embed” civil servants in various companies that were running social media platforms, and there is nothing in the document to indicate that they did not follow through on this.

The CDU, which has since been rebranded as the National Security Online Information Team in response to heavy scrutiny, insists that it is “countering disinformation and hostile state narratives,” but the agency, along with government-hired private contractors, was put in charge of surveilling British citizens and silencing those who were deemed to be “COVID measures dissenters.”

Instead of going after foreign adversaries who were spreading misinformation, they were targeting British citizens – from journalists and medical professionals to politicians – who were criticizing the government.
