Robert F. Kennedy Jr. Banned by Major Social Media Site, Campaign Pages Blocked

Twitter owner Elon Musk invited Democratic presidential candidate Robert F. Kennedy Jr. for a discussion on Twitter Spaces after Kennedy said his campaign account was suspended by Meta-owned Instagram.

“Interesting… when we use our TeamKennedy email address to set up @instagram accounts we get an automatic 180-day ban. Can anyone guess why that’s happening?” he wrote on Twitter. An accompanying image shows that Instagram said it “suspended” his “Team Kennedy” account and that there “are 180 days remaining to disagree” with the company’s decision.

In response to his post, Musk wrote: “Would you like to do a Spaces discussion with me next week?” Kennedy agreed, saying he would do it Monday at 2 p.m. ET.

Hours later, Kennedy wrote that Instagram “still hasn’t reinstated my account, which was banned years ago with more than 900k followers.” He argued that “to silence a major political candidate is profoundly undemocratic.”

“Social media is the modern equivalent of the town square,” the candidate, who is the nephew of former President John F. Kennedy, wrote. “How can democracy function if only some candidates have access to it?”


Childproofing the Internet

For the past several years, lawmakers and bureaucrats around the country have been trying to solve a problem. They wanted to regulate the internet, and in particular, they wanted to censor content and undermine a variety of systems that allow for privacy and anonymity online—the systems, in other words, that allow for online individuals to conduct themselves freely and outside of the purview of politicians.

There was something like a bipartisan agreement on the necessity of these rules and regulations. Lawmakers and regulators test-drove a number of potential arguments for online speech rules, including political bias, political extremism, drug crime, or the fact that some tech companies are just really big. But it turned out to be quite difficult to drum up support for wonky causes like antitrust reform or amending the internet liability law Section 230, and even harder to make the case that the sheer size of companies like Amazon was really the problem.

Their efforts tended to falter because they lacked a consensus justification. Those in power knew what they wanted to do. They just didn’t know why, or how.

But in statehouses and in Congress today, that problem appears to have been solved. Politicians looking to censor online content and more tightly regulate digital life have found their reason: child safety.


Congress To Investigate WHO Plans To Use “Listening Surveillance Systems” To Identify “Misinformation”

If you’ve been following our reporting on the issue, you’ll already know that the new World Health Organization (WHO) pandemic prevention initiative, the Preparedness and Resilience for Emerging Threats (PRET), recommends using “social listening surveillance systems” to identify “misinformation.” But as more people are learning about how unelected bodies are being used to suppress speech and potentially override sovereignty, it’s starting to get more pushback.

According to documents from the UN agency, PRET aims to “guide countries in pandemic planning” and work to “incorporate the latest tools and approaches for shared learning and collective action established during the COVID-19 pandemic.”

The PRET document describes misinformation as a “health threat,” and refers to it as an “infodemic.”

“Infodemic is the overabundance of information – accurate or not – which makes it difficult for individuals to adopt behaviors that will protect their health and the health of their families and communities. The infodemic can directly impact health, hamper the implementation of public health countermeasures and undermine trust and social cohesiveness,” the document states.

However, it goes on to recommend invasive methods of countering the spread of what it deems misinformation.

“Establish and invest in resources for social listening surveillance systems and capacities to identify concerns as well as rumors and misinformation,” the WHO wrote in the PRET document.

“To build trust, it’s important to be responsive to needs and concerns, to relay timely information, and to train leaders and HCWs in risk communications principles and encourage their application.”


Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s Bing chatbot and GitHub Copilot, GitHub’s AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.


New Democrat Bill Calls For A Federal Agency To Create “Behavioral Codes,” Introduce “Disinformation Experts”

On May 18, two US senators introduced the Digital Platform Commission Act of 2023, a bill that seeks to give powers to a new federal agency that will set up a council to regulate AI, in the context of social platforms.

More precisely, the new body – the Federal Digital Platform Commission – would “rule” on what are termed enforceable “behavioral codes,” and among those staffing it would be “disinformation experts.”

The move by the two Democratic senators – Michael Bennet and Peter Welch – seems to have come in concert with congressional testimony delivered by OpenAI CEO Sam Altman, since the bill was presented shortly afterwards, and it backs Altman’s idea of forming a new federal agency of this kind.

Altman had more thoughts on how all this should work – the new agency, according to him, might be given the power to restrict AI development via licenses or credentialing.

The speed with which the two senators picked up on this to announce their bill may owe to the fact that Bennet “only” had to go back and update a bill he had already introduced in 2022. This time around, the proposed legislation has been changed in a number of ways, most notably by redefining what a digital platform is.

The bill would expand this definition to also cover companies that provide content “primarily” generated by algorithmic processes. It does so by proposing that the future Commission be given authority over how personal information is used in decision-making or content generation, which is thought to refer specifically to tech like ChatGPT.


The Internet Dodges Censorship by the Supreme Court

The Supreme Court today refused to weaken one of the key laws supporting free expression online, and recognized that digital platforms are not usually liable for their users’ illegal acts, ensuring that everyone can continue to use those services to speak and organize.

The decisions in Gonzalez v. Google and Twitter v. Taamneh are great news for a free and vibrant internet, which inevitably depends on services that host our speech. The court in Gonzalez declined to address the scope of 47 U.S.C. § 230 (“Section 230”), which generally protects users and online services from lawsuits based on content created by others. Section 230 is an essential part of the legal architecture that enables everyone to connect, share ideas, and advocate for change without needing immense resources or technical expertise. By leaving Section 230 untouched, the Supreme Court avoided weakening it.

In Taamneh, the Supreme Court rejected a legal theory that would have made online services liable under the federal Justice Against Sponsors of Terrorism Act merely because members of terrorist organizations or their supporters used these services like we all do: to create and share content. The decision is another win for users’ online speech, as it avoids an outcome where providers censor far more content than they do already, or even prohibit certain topics or users entirely, for fear that they could later be held liable for aiding or abetting their users’ wrongful acts.

Given that both decisions could have had disastrous consequences for users’ free expression, EFF is pleased that the Supreme Court left existing legal protections for online speech in place.

But we cannot rest easy. There are pressing threats to users’ online speech as Congress considers legislation to weaken Section 230 and otherwise expand intermediary liability. Users must continue to advocate for their ability to have a free and open internet that everyone can use.


Kyiv Residents Who Posted Footage of Russian Air Attack on Social Media Threatened With Jail

Six Kyiv residents who posted shocking night-time footage of missiles flying through the air which quickly went viral could face up to eight years in prison if charged with breaking wartime censorship rules.

While in an age of social media saturation, it may seem natural to record and post something extraordinary happening outside your bedroom window, that is presently illegal in Ukraine, as six locals — including a locally famous Instagram model — are finding out.

Ukraine’s domestic intelligence agency, the Security Service of Ukraine (SBU), said the Kyiv City Prosecutor’s Office has launched “comprehensive measures” to “establish all the circumstances of the crime and bring the guilty to justice” after footage showing the night-time sky of Kyiv during Tuesday’s “exceptional” air raid was published online. Russian ‘hypersonic’ missiles and suicide drones were among the 18 incoming projectiles shot down in the raid, according to the Ukrainian government.


CIA Releases Highly Produced Video To Recruit Russian Spies

The Central Intelligence Agency (CIA) has utilized social media platforms to release a video aimed at providing Russians with a secure means of communication. The video assures individuals that their safety will be safeguarded if they choose to share information about the Ukraine war and other relevant details with American intelligence operatives.

A CIA representative stated, “Our objective is to reach out to courageous Russians who are compelled by their government’s unjust war and encourage them to engage with the CIA, ensuring their security throughout the process.”

The CIA shared the video on Telegram, YouTube, Twitter, and Facebook.

The narrator of the Russian-language video emphasizes, “Those around you may be unwilling to hear the truth, but we are here to listen. You possess the power to make a difference. Connect with us securely.”

In response to the video, Dmitry Peskov, spokesperson for Russian President Vladimir Putin, commented, “I am confident that our intelligence services are closely monitoring this platform.” A spokesperson from the Russian Foreign Ministry added it’s “a conveniently traceable resource for applicants.”

The meticulously crafted two-minute video, accompanied by dramatic music, portrays Russians deep in thought as they gaze out of windows or sit on park benches, seemingly contemplating a significant decision. One individual, carrying a briefcase, enters a government building and discreetly displays an identification card. The fictional characters in the video appear lost in contemplation as they examine family portraits, reflecting on the future of their children. Towards the conclusion of the video, the Russians make contact with the CIA via their phones.


LinkedIn Co-Founder Reid Hoffman Spreads Misinformation While Calling For Misinformation Regulation

In a recent conversation with The Washington Post on the implications of the First Amendment and freedom of speech, LinkedIn co-founder Reid Hoffman expressed his perspective on what he thinks is the need for modern restrictions on speech to combat “misinformation.” But even his own call to action contained misinformation.

Hoffman’s argument revolves around two main points: freedom of speech and freedom of reach. He says the amplification and discovery of content, especially AI-generated content, can impact the socio-political landscape.

“We don’t really have the right discourse mechanisms for doing that. And you know, one of them obviously is freedom of speech and freedom of reach. And that’s again, within the AI content is, you know, well what gets amplified and, how is that all discovered is one of the things that will matter within the electoral context.”

Hoffman referenced a commonly misunderstood idea. He mentioned the proverbial concept of “yelling fire in a crowded movie theater,” hinting at the existence of restrictions on free speech.

However, this analogy does not accurately represent current US law and therefore gives an incorrect impression of the nature of free speech. The “fire in a crowded theater” line originated as dicta in Schenck v. United States (1919), and the standard it reflected was superseded by Brandenburg v. Ohio (1969), under which speech may be restricted only when it is directed to inciting imminent lawless action and is likely to produce it. The idea that you can’t yell “fire” in a crowded theater remains one of the most commonly repeated errors about free speech.


Here Are 7 Major Cases The Supreme Court Has Yet To Decide This Term

Among the dozens of opinions yet to be released by the Supreme Court this term are cases on affirmative action, compelled speech and social media companies’ liability for content posted on their platforms.

To date, the Court has released 18 opinions, issuing rulings that enabled those facing complaints from administrative agencies to press constitutional challenges in federal court and allowed a death row inmate’s request for a DNA test to proceed. But opinions in 40 more cases are expected to be released before the end of June, including some of the most consequential cases on this term’s docket.
