Your Tax Dollars at Work: Military Monitors Social Media for Mean Posts About Generals

The U.S. Army’s Protective Services Battalion (PSB), the Department of Defense’s equivalent of the Secret Service, now monitors social media to see if anyone has posted negative comments about the country’s highest-ranking officers.

Per a report by the Intercept, the PSB’s remit includes protecting officers from “embarrassment,” in addition to more pressing threats like kidnapping and assassination.

An Army procurement document from 2022 obtained by the Intercept reveals that the PSB now monitors social media for “negative sentiment” about the officers under its protection, as well as for “direct, indirect, and veiled” threats.

“This is an ongoing PSIFO/PIB” — Protective Services Field Office/Protective Intelligence Branch — “requirement to provide global protective services for senior Department of Defense (DoD) officials, adequate security in order to mitigate online threats (direct, indirect, and veiled), the identification of fraudulent accounts and positive or negative sentiment relating specifically to our senior high-risk personnel.”

Per the report, the Army intends not just to monitor platforms for “negative sentiment,” but also to pinpoint the location of posters.

Keep reading

The Bizarre Reality of Getting Online in North Korea

For 25 million North Koreans, the internet is an impossibility. Only a few thousand privileged members of the hermit kingdom’s society can access the global internet, while even the country’s heavily censored internal intranet is out of reach for the majority of the population. Getting access to free and open information isn’t an option.

New research from South Korea-based human rights organization People for Successful Corean Reunification (Pscore) details the reality for those who—in very limited circumstances—manage to get online in North Korea. The report reveals a days-long approval process to gain internet access, after which monitors sit next to people while they browse and approve their activities every five minutes. Even then, what can be accessed reveals little about the world outside North Korea’s borders.

Documentation from the NGO is being presented today at the human rights conference RightsCon and sheds light on the regime with the most limited internet freedoms, which fall far below the restrictive and surveilled internet access in China and Iran. For millions of people in North Korea, the internet simply doesn’t exist.

Keep reading

Atlantic Council Takes Up the Censorship Sword

In Costa Rica and Latvia today, the Atlantic Council is hosting its 360/OS Summit at RightsCon Costa Rica and NATO’s Riga StratCom. Among other things, the influential think tank will be previewing its “Task Force for a Trustworthy Future Web” report, which it hopes will “lay the groundwork for stronger cross-sectoral ideation and action” and “facilitate collaboration now between the expanding community dedicated to understanding and protecting trust and safety.”

In human terms, conference attendees are discussing how best to stay on-brand by presenting the Censorship-Industrial Complex as a human rights initiative, and as #TwitterFiles documents show, they have the juice to pull it off.

EngageMedia (which I co-founded and where I served as long-time Executive Director) co-organized RightsCon in Manila in 2015, and I personally oversaw much of the preparation. That now looks like a big mistake. I believe RightsCon represents everything that has gone wrong in the digital rights field. Specifically, it represents the capture of a once-vibrant movement by corporate and government interests, and a broader shift towards anti-liberal and authoritarian solutions to online challenges. I left EngageMedia on good terms, but no longer have any formal relationship with the organization.

In honor of this week’s RightsCon and 360/OS Summit, we dug into the #TwitterFiles to revisit the integration of the Atlantic Council’s anti-disinformation arm, the Digital Forensic Research Labs (DFRLabs), while also highlighting its relationship with weapons manufacturers, Big Oil, Big Tech, and others who fund the NATO-aligned think tank.

The Atlantic Council is unique among “non-governmental” organizations thanks to its lavish support from governments and the energy, finance, and weapons sectors. It’s been a key player in the development of the “anti-disinformation” sector from the beginning. It wasn’t an accident when its DFRLabs was chosen in 2018 to help Facebook “monitor for misinformation and foreign interference,” after the platform came under intense congressional scrutiny as a supposed unwitting participant in a Russian influence campaign. The press uniformly described DFRLabs as an independent actor that would merely “improve security,” and it was left to media watchdog FAIR to point out that the Council was and is “dead center in what former President Obama’s deputy national security advisor Ben Rhodes called ‘the blob.’”

Keep reading

Childproofing the Internet

For the past several years, lawmakers and bureaucrats around the country have been trying to solve a problem. They wanted to regulate the internet, and in particular, they wanted to censor content and undermine a variety of systems that allow for privacy and anonymity online—the systems, in other words, that allow for online individuals to conduct themselves freely and outside of the purview of politicians.

There was something like a bipartisan agreement on the necessity of these rules and regulations. Lawmakers and regulators test-drove a number of potential arguments for online speech rules, including political bias, political extremism, drug crime, or the fact that some tech companies are just really big. But it turned out to be quite difficult to drum up support for wonky causes like antitrust reform or amending the internet liability law Section 230, and even harder to make the case that the sheer size of companies like Amazon was really the problem.

Their efforts tended to falter because they lacked a consensus justification. Those in power knew what they wanted to do. They just didn’t know why, or how.

But in statehouses and in Congress today, that problem appears to have been solved. Politicians looking to censor online content and more tightly regulate digital life have found their reason: child safety.

Keep reading

Congress To Investigate WHO Plans To Use “Listening Surveillance Systems” To Identify “Misinformation”

If you’ve been following our reporting on the issue, you’ll already know that the new World Health Organization (WHO) pandemic prevention initiative, the Preparedness and Resilience for Emerging Threats (PRET), recommends using “social listening surveillance systems” to identify “misinformation.” But as more people are learning about how unelected bodies are being used to suppress speech and potentially override sovereignty, it’s starting to get more pushback.

According to documents from the UN agency, PRET aims to “guide countries in pandemic planning” and work to “incorporate the latest tools and approaches for shared learning and collective action established during the COVID-19 pandemic.”

The PRET document describes misinformation as a “health threat,” and refers to it as an “infodemic.”

“Infodemic is the overabundance of information – accurate or not – which makes it difficult for individuals to adopt behaviors that will protect their health and the health of their families and communities. The infodemic can directly impact health, hamper the implementation of public health countermeasures and undermine trust and social cohesiveness,” the document states.

The document then goes on to recommend invasive methods of countering the spread of misinformation.

“Establish and invest in resources for social listening surveillance systems and capacities to identify concerns as well as rumors and misinformation,” the WHO wrote in the PRET document.

“To build trust, it’s important to be responsive to needs and concerns, to relay timely information, and to train leaders and HCWs in risk communications principles and encourage their application.”

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”
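To make the severity-score mechanism concrete, here is a minimal sketch of how a moderation queue might act on per-category scores. Azure AI Content Safety reports severity on a 0–7 scale per harm category, but the category names, thresholds, and actions below are illustrative assumptions of mine, not Microsoft’s actual policy or SDK.

```python
# Hypothetical sketch: mapping per-category severity scores (0-7, the scale
# Azure AI Content Safety uses) to moderator actions. The thresholds and
# action names are illustrative assumptions, not Microsoft's actual policy.

def triage(scores: dict[str, int]) -> str:
    """Return a moderation action given severity scores per harm category."""
    worst = max(scores.values(), default=0)
    if worst >= 6:
        return "remove"        # severe: take the content down automatically
    if worst >= 4:
        return "human_review"  # flag for a human moderator to decide
    if worst >= 2:
        return "warn"          # allow, but attach a warning label
    return "allow"             # no category reached a concerning level

action = triage({"hate": 1, "violence": 5, "self_harm": 0})
```

In practice, real deployments tune thresholds per category rather than taking a single maximum, since an acceptable score for one harm type may be unacceptable for another.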

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and Copilot, GitHub’s AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading

New Democrat Bill Calls For A Federal Agency To Create “Behavioral Codes,” Introduce “Disinformation Experts”

On May 18, two US senators introduced the Digital Platform Commission Act of 2023, a bill that seeks to give powers to a new federal agency that will set up a council to regulate AI, in the context of social platforms.

More precisely, the new body – the Federal Digital Platform Commission – would “rule” on what are termed enforceable “behavioral codes,” and “disinformation experts” would be among its staff.

The move by the two Democratic senators – Michael Bennet and Peter Welch – appears to have come in concert with congressional testimony delivered by OpenAI CEO Sam Altman: the bill was introduced shortly after he testified, and it backs Altman’s call for a new federal agency of this kind.

Altman had more thoughts on how all this should work – the new agency, according to him, might be given the power to restrict AI development via licenses or credentialing.

The speed with which the two senators picked up on this to announce their bill may owe to the fact that Bennet “only” had to go back and update a bill he had already introduced in 2022. This time around, the proposed legislation has been changed in a number of ways, most notably by redefining what a digital platform is.

The bill would broaden that definition to also cover companies that provide content “primarily” generated by algorithmic processes, and would give the future Commission authority over how personal information is used in decision-making or content generation, a provision thought to refer specifically to tech like ChatGPT.

Keep reading

The Internet Dodges Censorship by the Supreme Court

The Supreme Court today refused to weaken one of the key laws supporting free expression online, and recognized that digital platforms are not usually liable for their users’ illegal acts, ensuring that everyone can continue to use those services to speak and organize.

The decisions in Gonzalez v. Google and Twitter v. Taamneh are great news for a free and vibrant internet, which inevitably depends on services that host our speech. The court in Gonzalez declined to address the scope of 47 U.S.C. § 230 (“Section 230”), which generally protects users and online services from lawsuits based on content created by others. Section 230 is an essential part of the legal architecture that enables everyone to connect, share ideas, and advocate for change without needing immense resources or technical expertise. By avoiding addressing Section 230, the Supreme Court avoided weakening it.

In Taamneh, the Supreme Court rejected a theory that would have made online services liable under the federal Justice Against Sponsors of Terrorism Act simply because members of terrorist organizations or their supporters used these services like we all do: to create and share content. The decision is another win for users’ online speech, as it avoids an outcome where providers censor far more content than they already do, or even prohibit certain topics or users entirely, because they could later be held liable for aiding and abetting their users’ wrongful acts.

Given the potential for both cases to have ended with disastrous consequences for users’ free expression, EFF is pleased that the Supreme Court left existing legal protections for online speech in place.

But we cannot rest easy. There are pressing threats to users’ online speech as Congress considers legislation to weaken Section 230 and otherwise expand intermediary liability. Users must continue to advocate for their ability to have a free and open internet that everyone can use.

Keep reading

These Senators Want the Federal Government To Verify Your Age Online

Despite their many disagreements, Republicans and Democrats have developed a common affinity for social media regulation, largely relying on the disputed assumption that platforms like Instagram and TikTok severely degrade children’s mental health. The latest regulatory proposal in Congress is the Protecting Kids on Social Media Act, sponsored by a bipartisan group of four senators: Sens. Brian Schatz (D–Hawaii), Tom Cotton (R–Ark.), Chris Murphy (D–Conn.), and Katie Britt (R–Ala.).

The bill features several flawed policies, drawing from recent state and federal social media proposals. It would require social media platforms to verify the age of every would-be user. Platforms could allow the unverified to view content, but not to interact with it or with other users. After verifying their age to register an account, teens under 18 would also need proof of parental consent. Those under 13 years old would be completely barred from registering accounts.

The bill does propose one novel—and potentially dangerous—innovation. It would establish a “pilot program” for a federally run verification system. This system would ascertain social media users’ age and, for teen users, confirm parental consent.

Age verification mandates, which invariably entail intrusive data gathering, threaten user data privacy and security. They also violate the individual’s right to speak freely and anonymously online. Although the bill’s authors sought to mitigate the risks their implementation would pose to users, they largely failed. Such risks are inextricable from the process of age verification itself. The bill proposes a legal safe harbor for social media platforms that choose to use the pilot program. To avoid even the appearance of noncompliance, many platforms will do just that.

The proposed pilot program would require would-be social media users to submit documentation to the Department of Commerce in order to verify their age. In return, the pilot program would provide a “credential” to be submitted to social media platforms. Users would verify parental consent by the same process. To administer the program, the government would necessarily obtain and store troves of personal data on American social media users—to prove regulatory compliance, if nothing else.
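The credential handshake described above can be sketched in a few lines. The bill specifies no technical design, so everything here, the signing scheme, the field names, the key handling, is an assumption for illustration only.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of the pilot program's credential flow: the agency
# issues a signed credential after checking a user's documentation, and a
# platform later checks that signature without seeing the documents.
# The HMAC scheme, key, and field names are illustrative assumptions.
AGENCY_KEY = b"demo-secret"  # stand-in for an agency-held signing key

def issue_credential(user_id: str, age_bracket: str) -> dict:
    """Agency side: sign a minimal claim about the verified user."""
    payload = json.dumps({"user": user_id, "age_bracket": age_bracket})
    sig = hmac.new(AGENCY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def platform_verify(cred: dict) -> bool:
    """Platform side: accept the credential only if the signature checks out."""
    expected = hmac.new(AGENCY_KEY, cred["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])
```

Even this toy version shows why the privacy risk is structural: a real design would use asymmetric signatures (so platforms cannot forge credentials), but any scheme still requires the issuing agency to hold verified identity records, which is exactly the trove of personal data the article warns about.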

To protect user privacy, the bill directs Commerce to “keep no records of the social media platforms where users have verified their identity.” It would also forbid the agency from sharing user data with platforms or law enforcement without user consent, a court order, or a program-specific fraud or oversight investigation.

Nonetheless, the bill would require users to register personal information with state authorities simply to speak online. Government agencies, under a legal pretext, could retrieve from social media platforms the records necessary to identify user accounts. Democrats have long been skeptical of the federal government’s data abuses, but both parties, including newly skeptical Republicans, ought to understand these risks.

Keep reading

The EARN IT Act, an attack on encrypted communications, to be reintroduced next week

Those behind the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act must be hoping that the third time’s a charm for this previously widely opposed piece of legislation, which is set to be reintroduced next week.

The previous two attempts to make EARN IT into law failed amid outcry from opponents who said that while designed to protect children, the bill would fail to do that – but would still damage online privacy.

Now comes the third, bipartisan attempt, sponsored by Republican Lindsey Graham and Democrat Richard Blumenthal, to change Section 230 of the Communications Decency Act (CDA).

Critics say that the amendment as envisaged by EARN IT would harm internet users by removing the legal protections Section 230 gives tech companies for third-party content.

The consequence would be those companies protecting themselves by engaging in (even more) censorship, and “working” with the government to this end – even more than we are aware they already do.

At its core, EARN IT targets platforms for violations of the child sexual abuse material (CSAM) rules that exist at the federal and state level.

But these platforms are allegedly reluctant to “moderate,” i.e., censor, content in a heavy-handed manner, and for that reason oppose the legislation.

Keep reading