ADL Regional Director Calls for Government-Regulated Online Censorship

The Anti-Defamation League’s David Goldenberg is demanding a broad overhaul of how speech is governed on the internet, calling for both government intervention and intensified corporate censorship. In a recent appearance, Goldenberg, who heads the ADL’s Midwest operations, expressed frustration over what he sees as declining efforts by tech firms to suppress online content he deems hateful.

Citing Meta’s rollback of its fact-checking team in the United States, he argued that platforms must be forced to take action. “You have a platform like Meta that just gutted its entire fact-checking department… And so what we need to do is we need to apply pressure in a real significant way on tech platforms that they have a responsibility, that they have an absolute responsibility to check and remove hateful speech that is inciteful.”

Goldenberg advocated not just for voluntary moderation, but for legislative and regulatory measures, both at the federal and state level, that would compel platforms to act as speech enforcers. He pointed to efforts in states like California as examples of where local governments are already testing such models.

His concern centers on what he perceives as an ecosystem of radicalization made easily accessible by today’s digital infrastructure. He warned that extremist ideologies no longer require obscure forums or dark web communities to spread. “It used to be you had to fight going into the deep dark web… Now… it’s easier and easier to be exposed in the mainstream,” he said.

Framing the online environment as a catalyst for violence, Goldenberg argued that free access to controversial viewpoints must be curtailed. He called for social media companies to take a stronger stance by excluding users whose views fall outside accepted boundaries, adding that regulation should enforce this responsibility.

He zeroed in on Section 230 of the Communications Decency Act, a critical piece of legislation that shields platforms from legal liability over user-posted content. “Congress needs to amend Section 230, which provides immunity to tech platforms right now for what happens,” Goldenberg said. He dismissed comparisons between modern platforms and telecommunications companies, referencing past remarks by Facebook’s Mark Zuckerberg about how phone providers were not liable for threats made over calls. Goldenberg’s view was blunt: “These tech platforms are not guaranteed under the Constitution. They’re just not.”

From his perspective, private companies should be free to “kick people off, to de-platform,” and if they fail to do so voluntarily, they must be pressured or regulated into compliance. He described accountability as a mechanism for shaping behavior, stating, “Accountability is a tool that can be incredibly effective in changing behavior.”

The Rise And Fall Of The West’s Propaganda Regime

Propaganda used to control Western democracies is running headlong into searchable facts freely available on the internet. As a result, the powerful cannot maintain their self-sustaining narratives and are behaving erratically and defensively, like bees when their nest is disturbed.

Opposition leaders in France, Romania, and Brazil have been barred from elections. They were accused of spreading “disinformation” and then prosecuted for unrelated crimes. In the United States, the same lawfare was attempted against President Trump, but he won anyway.

Sun Tzu’s first principle of war is “know the enemy,” but knowledge is difficult in an information war. Propaganda plants lies in every soul. In 1928, the father of modern public relations, Edward Bernays (who also happened to be the nephew of Sigmund Freud), wrote in his book Propaganda:

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. …We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society. …In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons…who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.

Propaganda works by contrasting two sides. Empathy for an opponent’s position is systematically removed by control of information. Public policy filtered through propaganda is always a binary choice.

In wartime, “us versus them” in a “kill or be killed” conflict provides the necessary binary, and propaganda writes itself. Domestic propaganda in peacetime is a bit trickier but works the same way: by staking out two sides.

Mass media permits distinct information sources. Elections award one side or the other with temporary political rule, but power is never actually surrendered. Each side simply acts as a fulcrum for the other side to pivot. Wedge issues inject urgency, but they are designed to produce a stalemate.

Teens’ Google Search History Helped Detectives Solve Horrific Denver Arson Murder Case

A reverse keyword search warrant served to Google helped Denver police identify three teens responsible for an arson attack that killed five members of a family in 2020.

Wired reports that in August 2020, a horrific arson attack in Denver, Colorado, claimed the lives of five members of a Senegalese family, including two children. The case initially left investigators stumped, with little evidence pointing to the perpetrators. However, a breakthrough came when Denver Police Department (DPD) detectives Neil Baker and Ernest Sandoval decided to serve a reverse keyword search warrant to Google, requesting information on users who had searched for the address of the victims’ home in the days leading up to the fire.

The warrant, which was met with initial resistance from Google due to privacy concerns, ultimately revealed that three local teenagers—Kevin Bui, Gavin Seymour, and Dillon Siebert—had repeatedly searched for the address on Google in the two weeks prior to the arson. This information, combined with cell phone location data placing the teens near the scene of the crime, provided the key evidence needed to arrest and charge them.

The case highlights the growing use of reverse keyword search warrants by law enforcement. These warrants allow police to request information on all individuals who searched for specific keywords or phrases, potentially exposing innocent people to unwarranted scrutiny.

In this instance, the teens’ defense argued that the warrant violated their Fourth Amendment rights by conducting a broad “digital dragnet” without individualized probable cause. However, the judge ruled in favor of law enforcement, likening the search to looking for a needle in a haystack.

The Colorado Supreme Court later upheld the constitutionality of the warrant in a landmark ruling, potentially paving the way for wider use of this investigative technique. However, the court also acknowledged the lack of individualized probable cause, deeming the warrant “constitutionally defective” despite allowing the evidence to stand.

Congressional Bill Aimed At Protecting Kids Online Could Cause Headaches For Marijuana Businesses

A newly filed bill in Congress aimed at protecting children online could create headaches for advertisers trying to promote legal marijuana and other regulated substances.

Titled the Kids Online Safety Act (KOSA), the bipartisan proposal—from Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) as well as Senate Majority Leader John Thune (R-SD) and Senate Minority Leader Chuck Schumer (D-NY)—would create a “duty of care” for online platforms such as social media and streaming video services, requiring them to take steps to prevent access to potentially sensitive content by minors.

That includes advertisements for cannabis products and certain other drugs and services.

A factsheet from Blackburn’s office says the duty of care “requires social media companies to prevent and mitigate certain harms that they know their platforms and products are causing to young users.”

The sponsors say the legislation is necessary to protect children from pernicious practices that keep “kids glued to their screens” for hours a day, alleging that “Big Tech is trying every method possible to keep them scrolling, clicking ads, and sharing every detail of their life.”

The 63-page bill “targets the harms that online platforms cause through their own product and business decisions,” the factsheet says, “like how they design their products and applications to keep kids online for as long as possible, train their algorithms to exploit vulnerabilities, and target children with advertising.”

Much of the proposal is aimed at limiting content that fuels behavioral health disorders. Platforms would need to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors,” it says, listing eating and drug use disorders, suicidal ideation, violence and harassment, sexual exploitation, financial harm and others.

As for controlled substances, online platforms would be prohibited from facilitating the “advertising of narcotic drugs, cannabis products, tobacco products, gambling, or alcohol to an individual that the covered platform knows is a minor.”

The provision around drug use lists the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol” as risks that platforms would need to actively guard minors against.

Video streaming platforms, meanwhile, would be required “to employ measures that safeguard against serving advertising for narcotic drugs, cannabis products, tobacco products, gambling, or alcohol directly to the account or profile of an individual that the service knows is a minor.”

“Big Tech platforms have shown time and time again they will always prioritize their bottom line over the safety of our children, and I’ve heard too many heartbreaking stories to count from parents who have lost a child because these companies have refused to make their platforms safer by default,” Blackburn said in a press release about the legislation.

Trump Signs Take It Down Act

President Donald Trump has now signed into law the Take It Down Act, a measure designed to address the spread of non-consensual intimate imagery (NCII), including increasingly prevalent AI-generated deepfakes.

While the legislation is being celebrated by both major parties as a victory for online safety, particularly for children and victims of abuse, it has also raised concerns about the potential for overreach, selective enforcement, and the erosion of free speech under the guise of digital protection, concerns driven in large part by the bill’s broad wording.

The law’s most prominent advocate within the administration has been First Lady Melania Trump, who campaigned heavily for its passage and made rare public appearances to promote it. During the Rose Garden signing ceremony, President Trump invited her to add her signature beneath his, an unusual but symbolic gesture that underscored her role in pushing the legislation forward.

“This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,” Mrs. Trump said. In her remarks, she repeated her criticism of AI and social media, calling them “the digital candy for the next generation,” and warned that these technologies “can be weaponized, shaped beliefs, and sadly affect emotions and even be deadly.”

President Trump, for his part, appeared to dismiss constitutional concerns. “People talked about all sorts of First Amendment, Second Amendment. They talked about any amendment they could make up, and we got it through because of some very brave people,” he said.

Earlier in the year, during his March 4 address to Congress, Trump had signaled his intent to sign the bill. “The Senate passed the Take It Down Act…Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.”

While made in jest, the remark pointed to an unresolved issue: how this law will be enforced, and who will benefit most from it.

There is no denying the harm caused by NCII. Victims often struggle to remove intimate images, whether real or AI-generated, while the content continues to spread. The Take It Down Act requires websites to remove flagged content within 48 hours of a complaint. But, just like the Digital Millennium Copyright Act (DMCA), platforms have little way of determining if a complaint is legitimate or being used as a censorship mechanism.

That timeline is designed to offer swift recourse to victims. However, the law’s broad wording leaves its applications open to interpretation.

The bill defines a violation as involving an “identifiable individual” engaged in “sexually explicit conduct,” without offering a clear or narrow definition of what that conduct entails. This vagueness creates a gray area that could easily be used to suppress satire, parody, or even critical political speech.

Virginia’s new social media law targets teen access via parental consent, age checks

Virginia is preparing to enforce one of the nation’s most sweeping new restrictions on social media access for teens, requiring parental consent, time limits and age checks for users under 16.

The law, signed by Gov. Glenn Youngkin in May, takes effect Jan. 1, 2026, and applies to any platform that allows public profiles, messaging and shared content feeds.

Senate Bill 854 requires platforms to present a neutral age screen to determine whether a user is under 16, and then to secure verifiable parental consent before allowing more than one hour of daily use per service.

It also bars platforms from treating a minor’s data as if it belongs to an adult, even when shared through browser plug-ins or common devices.

Virginia joins more than a dozen states that have passed or proposed laws since 2023 to regulate children’s access to social media, including Utah, Texas, Florida, Maryland and California. While the details vary, most require age verification and parental consent for minors to create accounts or use apps beyond a time limit.

Privacy advocates say the law could have unintended consequences. Jason Kelley, associate director of digital strategy at the Electronic Frontier Foundation, said the measure may increase data collection rather than limit it.

“Any time a company is required by law to collect more information, there are data privacy risks,” Kelley said, noting that platforms may turn to tools like geolocation, facial scans or ID verification to meet the law’s requirements.

He also raised legal concerns, pointing out that similar laws in other states have been paused or overturned in court.

“Like similar laws in other states, it’s unlikely Virginia’s law will survive judicial scrutiny,” Kelley said.

Kelley further noted that verification systems may not work for all families.

“These systems don’t necessarily take into account a large number of non-traditional families,” Kelley said. “Regardless, such restrictions are not enforcing parental authority. They are imposing governmental authority, subject only to a parental veto.”

Supporters of the law say it gives parents a stronger role in managing their children’s online habits. In a written response, Sen. Schuyler VanValkenburg’s office said the law is enforced under the existing Consumer Protection Act and handled by the Virginia Attorney General’s Office.

KOSA Reintroduced: Child “Safety” Bill Raises Alarms Over Internet Surveillance, Digital ID, and Free Speech Risks

Senators have once again put forward the Kids Online Safety Act (KOSA), reviving a bill that, if enacted, would radically reshape how Americans experience the internet.

Promoted as a measure to protect children, this latest version now carries the backing of Apple, a tech giant that has publicly endorsed the legislation as a meaningful step toward improving online safety.

But behind the bipartisan sales pitch and industry support lies a framework that risks expanding government control over online content and eroding user privacy through mandated age verification and surveillance infrastructure.

We obtained a copy of the bill for you here.

KOSA is often described as a child protection bill, requiring platforms to limit exposure to content that could contribute to mental health issues such as depression or disordered eating.

What is less emphasized by its sponsors is how the bill empowers the Federal Trade Commission to investigate and sue platforms over speech that’s deemed “harmful” to minors.

Though lawmakers insist the bill does not authorize the censorship of content, it effectively places government pressure on websites to sanitize what users see, or face liability. Such chilling effects rarely need explicit censorship orders to shape outcomes.

EU nations seek mandatory social media age checks

France, Spain, and Greece are advocating for mandatory age verification on social media platforms such as Meta’s Facebook and Elon Musk’s X, Bloomberg reported on Friday.

The proposed rules would require all internet-connected devices to be equipped with age verification technology. Digital services ministers from the three EU member states are coordinating the initiative ahead of a June 6 meeting with their counterparts from the bloc, according to a document cited by Bloomberg.

The three nations reportedly argue that the “lack of proper and widespread age-verification mechanisms” makes it difficult to effectively enforce age limits. They aim to leverage the economic power of the EU’s 450 million consumers to compel tech companies to implement robust verification systems, according to the report.

French President Emmanuel Macron confirmed on Tuesday his support for mandatory age verification for teenagers registering on social media platforms, stating that online networks have contributed to suffering and mental health issues among young people.

“We must protect our children,” he told TF1, adding that age verification on social networks should be imposed.

According to Bloomberg, the European Commission, along with several bloc members, is already developing pilot projects to boost parental controls and age verification. However, their efforts are being hindered by regulatory differences across EU countries and the ease with which users can access social networks from outside the bloc.

Redefining Obscenity: Lawmakers Take Aim at More Online Content

Two Republican lawmakers are advancing a bill that could dramatically expand the federal government’s ability to criminalize certain content online.

Senator Mike Lee of Utah and Representative Mary Miller of Illinois have introduced the Interstate Obscenity Definition Act (IODA), legislation that aims to overhaul the legal definition of obscenity and give prosecutors wide authority to target more online content.

We obtained a copy of the bill for you here.

Supporters of the bill claim it is designed to protect families and children from harmful material, but civil liberties advocates warn that its sweeping language threatens to criminalize large swaths of constitutionally protected expression.

IODA discards key elements of the Supreme Court’s long-standing Miller test, which has served as the nation’s benchmark for identifying obscene content since 1973. Under that framework, courts assess whether material appeals to prurient interest, depicts sexual conduct in a “patently offensive” way by community standards, and lacks “serious literary, artistic, political, or scientific value.”

Lee and Miller’s bill replaces that careful balancing test with a rigid federal definition. According to the proposed language, content is considered obscene if “taken as a whole, [it] appeals to the prurient interest in nudity, sex, or excretion,” if it “depicts, describes or represents actual or simulated sexual acts with the objective intent to arouse, titillate, or gratify the sexual desires of a person,” and if it “taken as a whole, lacks serious literary, artistic, political, or scientific value.”

Promoting the bill, Lee declared, “Obscenity isn’t protected by the First Amendment, but hazy and unenforceable legal definitions have allowed extreme pornography to saturate American society and reach countless children.” He added, “Our bill updates the legal definition of obscenity for the internet age so this content can be taken down and its peddlers prosecuted.”

Virginia passes law to limit time teens spend on social media to one hour a day

Virginia teens under 16 will soon face limits on their social media usage after Governor Glenn Youngkin signed new legislation into law.

The law requires social media companies to set default time limits of one hour per day for users under 16 years old, with parents having the ability to adjust that time up or down.

“It’s a good first start, and it’s a good way for parents to be able to have better control over how much social media their kids are on,” said Sen. Schuyler VanValkenburg (D – Henrico), who co-sponsored the legislation.

VanValkenburg, who teaches in Henrico County schools, has witnessed the impact of excessive social media use firsthand.

“You see how much it hinders their ability to do well in school, and you see how much it hinders their socialization with their friends,” VanValkenburg said.
