World Economic Forum Pushes For AI Use and Collaboration in Fighting “Misinformation”

The dual approach continues: talking up the benefits of AI, still very much an emerging technology, as a tool to combat “disinformation,” while warning against the perils of AI in creating that same “disinformation.”

The point at which these two approaches converge is censorship: both “disinformation warriors” who want to use AI in their fight and AI doomsayers who claim deepfakes will destroy democracies work towards “monitoring,” “labeling,” and ultimately, controlling content.

And sometimes they are one and the same: informal but powerful groups, government agencies, and legacy media.

In this “installment” of the AI story coming from the World Economic Forum (WEF), authored by the WEF’s Head of AI, Data and Metaverse Cathy Li and Global Coalition for Digital Safety Project Lead Agustina Callegari, we learn that the WEF would like policymakers, tech firms, researchers, and civil rights groups to band together and push for the deployment of advanced AI-driven systems to combat “disinformation and misinformation.”

The technique they would like explored, developed, and used would rely on pattern, language, and context analysis “to aid content moderation.”

The two authors of the post published by WEF are optimists: they think (or say they do) that AI-driven content analysis is at a level where it is capable of “understanding” context almost perfectly – or as they put it, understanding “the nuances between misinformation (unintentional spread of falsehoods) and disinformation (deliberate spread).”

The article speaks favorably about content authenticity and watermarking – such as is done by Adobe, Microsoft, and others through their Coalition for Content Provenance and Authenticity (C2PA) – throwing the obligatory bone in the direction of those worried about privacy and about protecting journalists from persecution “in conflict zones” (but what about journalists in all the other zones?).

Big Tech Coalition Partners With WEF, Pushes “Global Digital Safety” Standards

Big Tech coalition Digital Trust & Safety Partnership (DTSP), the UK’s regulator OFCOM, and the World Economic Forum (WEF) have come together to produce a report.

The three entities, each in their own way, are known for advocating for or carrying out speech restrictions and policies that can undermine privacy and security.

DTSP says it exists to “address harmful content” and to make sure online age verification (“age assurance”) is enforced, while OFCOM states its mission to be establishing “online safety.”

Now they have co-authored a WEF report – a white paper from the WEF Global Coalition for Digital Safety – that puts forward the idea of closer cooperation with law enforcement in order to more effectively “measure” what they consider to be online digital safety and reduce what they identify as risks.

The importance of this is explained by the need to properly allocate funds and ensure compliance with regulations. Yet again, “balancing” this with privacy and transparency concerns is mentioned several times in the report, almost as a throwaway platitude.

The report also proposes co-opting (even more) research institutions for the sake of monitoring data – as the document puts it, a “wide range of data sources.”

More proposals made in the paper would grant other entities access to this data, and there is a drive to develop and implement “targeted interventions.”

The Censorship-Industrial Complex and How It Has the Internet in Its Grip

Since the 1960s, the military-industrial complex has influenced and driven American policy to profit cynically from conflict and war.  But in this decade, a new complex has arrived, one that is far more dangerous to American values.  It is the censorship-industrial complex (CIC), which has gained tremendous control over the internet.

When the internet-backed World Wide Web was created in 1989, it democratized information and connectedness.  Through rapid commercialization, it unleashed unlimited possibilities and economic growth.  Equally, it became a haven of free expression, debate, and creativity.  These ideals crystallized into the five principles of the 2012 Declaration of Internet Freedom: non-censorship; universal access; freedom to connect and create; the right to privacy and control of personal information; and protection for technology and innovation.

But governments and the elites that control them were quick to move in, sensing the threat to their authoritarian instinct.  At work since 2016, the pernicious CIC gained strength during the Covid-19 pandemic, amplifying government-approved narratives that favored the agenda of the elites.  Furthering the advance to the Great Reset, it now works to color content and discourse in the leftist hues that disguise the intent and operations of the global elites.

Mike Benz, a former State Department official who now heads the Foundation for Freedom Online and is a staunch campaigner against the CIC, reveals that the complex is controlled by the State Department, the Defense Department, the CIA, MI6, and Brussels.  The turning points, according to him, were the Brexit referendum, the election of Donald Trump, and elections in the Philippines, in all of which the internet played an important role.  Therefore, it was decided to end free speech on the internet and control the flow of information.  Since the American government was hamstrung by the First Amendment, NGOs and fronts were enlisted for “doing the dirty work.”

The Biden administration continues on that path.  In 2022, days after Elon Musk committed to a pro-free speech vision on acquiring Twitter, the White House issued the Declaration for the Future of the Internet, in direct contradiction with the 2012 Declaration of Internet Freedom.  The language, of course, answers to all the shibboleths of freedom.  But while criticizing the policies of “authoritarian” governments, the declaration calls for curbing “disinformation” and “harassment” in the pursuit of “reclaiming the promise of the internet.”  It expresses concern about online platforms that spread “illegal or harmful content,” threaten safety and foment violence, and undermine “respect for and protection of human rights and democratic institutions.”

The question, obviously, is who decides what amounts to disinformation, harassment, and illegal or harmful content. 

Parents, Not Lax Regulation, To Blame for Tweens’ Excessive Screen Time

Instead of calling on the federal government to regulate tween and teen use of social media, perhaps we should look a little closer to home. A new study suggests parental policies and habits around screens are a significant predictor of problematic use among adolescents.

One major finding: Kids getting too much “screen time” are more likely to have parents who get too much screen time.

“One of the biggest predictors of adolescents’ screen use is their parents’ screen use,” pediatrician and lead study author Jason Nagata told The Washington Post.

This was a massive study looking at the screen habits of more than 10,000 kids ages 12 and 13. Published in the journal Pediatric Research, the study—”Associations between media parenting practices and early adolescent screen use”—looked at how often parents used cellphones or other screens around their kids and family policies surrounding technology, such as whether screens were often employed during meal times (35.6 percent said yes), whether kids had access to screens in their bedrooms (46.2 percent said yes), and whether parents monitored and/or limited screen time during the week (67.4 percent and 76.2 percent said yes). Researchers also examined how often the children of these parents engaged in tech-based activities (including using social media, playing video games, and being on a cell phone generally) and how this affected various aspects of their lives.

The researchers found that “parent screen use, family mealtime screen use, and bedroom screen use were associated with greater adolescent screen time and problematic social media, video game, and mobile phone use.”

In addition, “parental use of screens to control behavior (e.g., as a reward or punishment) was associated with higher screen time and greater problematic video game use.”

Fast-Tracked EU Vote Threatens Online Privacy with New “Chat Control” Law

Bad rules are only made better if they are also opt-in (that is, a user is not automatically included, but has to explicitly consent to them).

But the European Union (EU) looks like it’s “reinventing” the meaning and purpose of an opt-in: when it comes to its child sexual abuse regulation, CSAR, a vote is coming up that would block users who refuse to opt in from sending photos, videos, and links.

According to a leak of minutes just published by the German site Netzpolitik, the vote on what opponents call “chat control” – and lambast as really a set of mass surveillance rules masquerading as a way to improve children’s safety online – is set to take place as soon as June 19.

That is apparently much sooner than those keeping a close eye on the process of adoption of the regulation would have expected.

The EU is habitually a slow-moving, gargantuan bureaucracy, but it seems that when it comes to pushing censorship and mass surveillance, the bloc finds a way to expedite things.

Netzpolitik’s reporting suggests that the EU’s centralized Brussels institutions are succeeding in getting all their ducks in a row, i.e., breaking not only encryption (via “chat control”) – but also resistance from some member countries, like France.

The minutes from the meeting dedicated to the current version of the draft state that France is now “significantly more positive” where “chat-control is concerned.”

Others, like Poland, would still like to see the final regulation “limited to suspicious users only, and expressed concerns about the consent model,” says Netzpolitik.

But it seems the vote on a Belgian proposal, presented as a “compromise,” is now expected to happen much sooner than previously thought.

The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses

The United Nations Ad Hoc Committee is just weeks away from finalizing an overly broad Cybercrime Draft Convention. This draft would normalize unchecked domestic surveillance and rampant government overreach, enabling serious human rights abuses around the world.

The latest draft of the convention—originally spearheaded by Russia but since then the subject of two and a half years of negotiations—still authorizes broad surveillance powers without robust safeguards and fails to spell out data protection principles essential to prevent government abuse of power.

As the August 9 finalization date approaches, Member States have a last chance to address the convention’s lack of safeguards: prior judicial authorization, transparency, user notification, independent oversight, and data protection principles such as data minimization and purpose limitation. If left as is, it can and will be wielded as a tool for systemic rights violations.

Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards or reject the treaty altogether. These domestic surveillance powers are critical, as they underpin international surveillance cooperation.

Indiana and Mississippi Are Sued Over Online Age Verification Digital ID Laws

A group associated with big (and smaller) tech companies has filed a lawsuit claiming First Amendment violations against the state of Mississippi.

This comes after long years of these companies scoffing at First Amendment speech protections, as they censored their users’ speech and/or deplatformed them.

We obtained a copy of the lawsuit for you here.

It might seem hypocritical, but even a broken clock is right twice a day. In this case, it is the industry group NetChoice that has launched the legal battle (NetChoice v. Fitch), at the center of which is state bill HB 1126, which requires age verification to be implemented on social networks.

NetChoice correctly observes that forcing people (for the sake of providing parental consent) to essentially unmask themselves through age verification (“age assurance”) exposes sensitive personal data, undermines their constitutional rights, and poses a threat to the online security of all internet users.

The filing against Mississippi also asserts that it is up to parents – rather than what NetChoice calls “Big Government” – to ensure, in different ways, that their children are using sites and online services in an age-appropriate manner.

Trudeau Pushes Online Censorship Bill To “Protect” People From “Misinformation”

Canadian Prime Minister Justin Trudeau last week complained that governments have allegedly been left without the necessary tools to “protect people from misinformation.”

This “dire” warning came as part of Trudeau’s effort to push the Online Harms Act (Bill C-63) – one of the most controversial pieces of censorship legislation of its kind in Canada of late – across the finish line in the country’s parliament.

C-63 has gained notoriety among civil rights and privacy advocates because of some of its provisions around “hate speech,” “hate propaganda,” and “hate crime.”

Under the first two, people could be punished before they commit any transgression, but also retroactively.

However, in a podcast interview for the New York Times, Trudeau defended C-63 as a solution to the “hate speech” problem, and clearly, a necessary “tool,” since according to this politician, other avenues to battle real or imagined hate speech and crimes resulting from it online have been exhausted.

Not one to balk at speaking out of both sides of his mouth, Trudeau at one point essentially admits that the more control governments have (and the bill is all about control, critics say, regardless of how its sponsors try to sugarcoat it) the more likely they are to abuse it.

He nevertheless goes on to declare that new legislative methods of “protecting people from misinformation” are needed and, in line with this, talk up C-63 as some sort of balanced approach to the problem.

But it’s difficult to see that “balance” in C-63, which is currently being debated in the House of Commons. If it becomes law, it will allow the authorities to keep people under house arrest should they decide these people could somewhere down the line commit “hate crime or hate propaganda” – a chilling application of the concept of “pre-crime.”

New York Passes Online Age Verification Digital ID Law

Lawmakers in New York have passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act and the Child Data Protection Act.

Assembly Bill A8148A and Senate Bill S7694A (that became the SAFE Act) were introduced as aiming to prevent social platforms from showing minors “addictive” (i.e., algorithmically manipulated) feeds, among a host of other provisions.

Parental consent is now required for children to have access to those algorithmic versions of the feeds – which in turn means that controversial age verification for adults must be introduced into the mix.

The new rules will not prohibit children from searching for particular keywords but social platforms will not be able to send notifications to their phones “regarding addictive feeds” from midnight to 6 am – again, this will be possible, but only with parental consent.

Could this be the true impetus behind the two bills – to usher in age verification and digital ID? Some skeptics might wonder.

“A First Victory Against Big Tech!” – Belgian Lawmaker Awarded €27k From Meta For Unfair Facebook ‘Shadowban’

Meta, the parent company of Facebook, has been ordered to pay damages in the sum of €27,000 to a Belgian right-wing lawmaker for unfairly limiting his reach on the social media platform, otherwise known as “shadowbanning.”

The Antwerp Court of Appeal ruled on Monday in favor of Tom Vandendriessche, an MEP standing for reelection as the lead candidate for the Flemish separatist party, Vlaams Belang, in Belgium.

The court held that Facebook had unfairly censored Vandendriessche’s account, which currently boasts 234,000 followers, back in February 2021, failing to act “in accordance with the principle of good faith” and to offer “sufficient procedural guarantees” for users subjected to such measures. His account was subsequently blocked in May of the same year.

Meta claimed it had acted in accordance with its community guidelines and accused the Belgian lawmaker of posting inappropriate content on the platform, leading to the shadowban. However, Vandendriessche was informed by the social media giant the ban had been lifted at the end of 2021, a claim he contested, as his organic reach remained artificially low.

No ruling was made on this claim, as the court held there was insufficient evidence to prove the account remained subject to adverse measures.

The judgment overruled the court of first instance, which had ruled that Belgian courts lacked jurisdiction to decide on the matter, prompting Vandendriessche’s appeal to the higher court.

In a statement following the ruling, the Vlaams Belang politician hailed “a first victory against Big Tech,” insisting that “anonymous technocrats should never dictate what can be said and heard.”

“I hope that this ruling makes it clear to Facebook that they can no longer censor me, and many citizens with me, without consequences,” he added.
