TAKE IT DOWN Act Advances in House Despite Major Censorship Concerns

The US House Committee on Energy and Commerce has passed the TAKE IT DOWN (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks) Act in a 49-1 bipartisan vote, and the legislation is now headed for the full House of Representatives.

If the bill clears that hurdle as well, it will be up to President Trump to sign it into law.

Backed by, among others, First Lady Melania Trump, TAKE IT DOWN was introduced as a way to stop the spread of both real and AI-generated non-consensual intimate imagery (NCII). If, as seems likely, TAKE IT DOWN becomes law, it will force platforms to remove flagged content within 48 hours.

But the bill’s critics continue to warn that the text lacks proper safeguards and other requirements that would prevent it from being misused or abused as a tool of censorship, instead of narrowly serving its declared purpose.

These concerns are not addressed in the press release the Committee on Energy and Commerce issued after adopting the proposal, which focused instead on the benefits the legislation would provide to victims of the dissemination of explicit imagery, with an emphasis on imagery that is AI-generated, i.e., on deepfakes.

However, campaigners, among them the Center for Democracy and Technology and the EFF, believe that the bill’s actual wording does not live up to its good intent, specifically around the takedown requirement which “lends itself to abuse.”

While the Federal Trade Commission (FTC) would be tasked with issuing penalties for non-compliance under TAKE IT DOWN, there are no consequences for those making false reports, which could lead to legitimate content quickly disappearing from the internet.

The bill doesn’t lay out how those affected might appeal once their content is falsely flagged and removed, while platforms are under no threat of penalty for removing constitutionally protected speech.


Meta Keeps Big MAGA Accounts on Ban List as Mark Zuckerberg Lobbies Trump

Mark Zuckerberg is reportedly lobbying the Trump administration to drop a pivotal FTC case against the company, in what would amount to a major political favor for Meta. Despite Zuckerberg’s multiple olive branches to the Trump administration, a number of high-profile, pro-MAGA voices are still banned on Meta’s Facebook and Instagram platforms.

The Wall Street Journal recently reported that Mark Zuckerberg has made regular visits to the White House urging the president to lean on the FTC to drop its case against Meta. If the FTC were to prevail in the case, Meta could be forced to divest from WhatsApp and Instagram, breaking up the company.

In January, Zuckerberg made several public overtures to the Trump administration, praising parts of its policy platform in an appearance on the Joe Rogan Experience, and criticizing the outgoing Biden administration for its censorship demands during COVID. Zuckerberg also announced a policy pivot at Meta, promising to “get back to our roots” of supporting free speech.

Despite these pledges, several prominent anti-establishment figures remain banned on Meta platforms:

  • Laura Loomer, investigative journalist and former Republican congressional candidate who was recently credited with influencing a shakeup at the NSC.
  • Tommy Robinson, the prominent British political activist and critic of Islam.
  • Alex Jones
  • Paul Joseph Watson
  • Gavin McInnes
  • Milo Yiannopoulos, self-styled “civil rights icon” and former Breitbart News editor.

It is also unclear if Meta still maintains its “hate agents” list of prominent anti-establishment voices, uncovered by Breitbart News in 2019, which included political candidates, or if the company has taken any steps to remedy the mass censorship of WhatsApp accounts in Brazil, which extended to Flavio Bolsonaro, son of persecuted former president Jair Bolsonaro. In a comment to Breitbart News, the company denied that it has continued to maintain its documented list of hate agents.

As the FTC trial date draws closer, Meta has drawn flak from the conservative commentariat.


US-Funded “Anti-Misinformation” Groups Are Still Quietly Active

Despite the new US administration’s big, open push to end the practice of the government funding third-party groups to effectively act as its censorship proxies, some of these arrangements remain operational.

Most appear to be working to strengthen previously established “preferred” narratives around health issues – as ever, with “combating misinformation” given as the declared, overarching purpose behind the effort.

But critics say that was, and remains, a smokescreen meant to manipulate public opinion.

The Federalist reports that the National Science Foundation (NSF) – one of the US government’s “independent agencies” designed to channel federal funds – ran a number of programs under its “anti-misinformation” umbrella, the Convergence Accelerator.

Among those that continue to this day are Chime In, Analysis and Response Toolkit for Trust (ARTT), and Expert Voices Together (EVT).

Chime In’s original name was Course Correct. It was set up at the University of Wisconsin-Madison – with $5 million coming from NSF in 2022 – to provide “anti-misinformation” resources for journalists.

True to the era, its original “mission” was to persuade Covid vaccine skeptics to take the jab; its “misinformation detecting” advocacy then expanded to persuading people there was no reason to be skeptical about genetically modified (GMO) foods, Covid narratives, and vaccines in general, as well as issues like sunscreen product and raw milk safety.

ARTT, meanwhile, came up with its own “AI” chatbot that focused on political discourse but, according to The Federalist, was once again heavily tied to vaccine hesitancy.

Starting in 2021, ARTT received close to $750,000 from the NSF, and a further $5 million “to develop practical interventions to build trust and address vaccine hesitancy.”


100+ Meta employees, including Head of AI Policy, confirmed as ex-IDF

Meta’s recruitment of vast numbers of former Israeli soldiers raises serious questions about the tech giant’s commitment to free speech – and provides a peek into a biased content moderation process that’s been heavily censoring pro-Palestinian accounts amid the Israeli siege of Gaza.

This article was originally published by ¡Do Not Panic!

More than one hundred former Israeli spies and IDF soldiers work for tech giant Meta, including its head of AI policy, who served in the IDF under an Israeli government scheme that allows non-Israelis to volunteer for the Israeli army.

Shira Anderson, an American international rights lawyer, is Meta’s AI policy chief who voluntarily enlisted for the IDF in 2009 under a program which enables non-Israeli Jews who aren’t eligible for military conscription to join the Israeli army.

Through this program, known as Garin Tzabar, many non-Israelis who have fought for the IDF have been implicated in war crimes and crimes against humanity since Israel’s genocide of Gaza began in October 2023.

Anderson served as a non-commissioned officer in the IDF for over two years, where she worked in the Military Strategic Information Section, writing dossiers and public relations propaganda for the IDF. She was also the liaison between the IDF and foreign military attachés stationed in Israel, and liaison to the Red Cross.

With AI a critical emerging technology for tech giants and militaries, Anderson’s role at Meta is an important one. She develops the legal guidance, policies and public relations talking points concerning AI issues and regulation for all of Meta’s key areas, including its product, public policy and government affairs teams.

At Meta, Anderson, who is based in Meta’s Washington DC office, is in familiar company. More than one hundred former Israeli spies and IDF soldiers are employed by the company, my new investigation shows, many of whom worked for Israel’s spy agency Unit 8200.

These ex-IDF members are based evenly across Meta’s US offices and in its Tel Aviv office, and a significant number of them, like Anderson, have a specialization in AI. Given that Israel has made extensive use of AI not just to conduct its genocide, but to establish its prior system of apartheid, surveillance and occupation, Meta’s recruiting of IDF AI specialists is particularly insidious. Did these former Israeli spies use their Unit 8200 connections to help the tech giant collaborate with the IDF to build kill lists? According to a report last year, Unit 8200 infiltrated WhatsApp groups and marked every name in a group for assassination if just one alleged Hamas member was also in the group, no matter the size or content of the group chat.


How Taxpayer-Funded Censorship Tools Could Manipulate What Americans See About Vaccines, Raw Milk, And More

The U.S. government funded various online censorship tools, some of which went missing after President Donald Trump took office. Others, however, are still operating today — potentially limiting what Americans see online, especially regarding health issues.

The National Science Foundation incubated a host of programs through its Convergence Accelerator to fight online speech deemed “misinformation,” many of which recently dropped off the map, as The Federalist previously reported. But some of these projects continue today — adopted by left-leaning institutions and dedicated to controlling online speech.

The censorship projects still operating include Chime In, hosted by the University of Wisconsin-Madison; the Analysis and Response Toolkit for Trust, housed in leftist nonprofit Discourse Labs; and Expert Voices Together, adopted by the left-wing group Right To Be. 

Chime In

The tool Chime In was previously called “Course Correct.” UW-Madison developed Course Correct as an anti-“misinformation” dashboard for journalists, as The Federalist previously reported. The NSF awarded the project $5 million starting in 2022. According to the grant description, it hoped to “scale Course Correct into local, national, and international newsrooms…” 

The program was renamed to Chime In “more than a year ago,” university spokesman John Lucas previously told The Federalist. Chime In is housed in UW-Madison’s Center for Communication and Civic Renewal, and aims to “counter vaccine hesitancy.”

As The College Fix reported in 2023, the program conducted “pilot testing… on issues including: raw milk, genetically modified foods, vaccine safety, fluoride in water, Covid-19, and sunscreen safety.” The project could also collude with media to manipulate the public narrative.

“Once journalists evaluate the size and reach of these misinformation networks detected by the dashboard, they work with Course Correct staff to develop and rapidly test messages that will reduce the flow of misinformation,” reads the project’s NSF description. “Course Correct will seed the affected misinformation network with sponsored social media posts…”

The Federalist obtained a screen recording of the Chime In software. The program helps users create messaging “experiments,” creating their own target groups, such as “vaccine skeptics.” Users can push their narratives on different platforms such as X, Facebook, Instagram, and YouTube, then request funding for these campaigns. Finally, the tool enables users to draft specific posts to be promoted across social media.


UK Halts “Legal but Harmful” Censorship Rule Amid US Trade Pressure, But Online Safety Act Still Fuels Free Speech and Privacy Fears

Plans to implement sweeping content moderation powers for tech companies have been put on hold by the UK government, as concerns grow that reintroducing speech controls could disrupt sensitive trade discussions with President Donald Trump’s allies.

The British Government had been exploring a return to the abandoned “legal but harmful” proposal, a measure that would have forced online platforms to purge content deemed “harmful” yet not unlawful. But after internal pushback and a wary eye on Washington’s stance, the idea has been quietly dropped.

The original measure, introduced under Conservative leadership in 2022, triggered significant dissent, including from within the party itself. Conservative leader Kemi Badenoch, then serving as business secretary, dismissed the idea, warning it could mean “legislating for hurt feelings.” The proposal was ultimately replaced with tools that give individuals more choice over the material they encounter online rather than imposing top-down restrictions.

According to reports, the recent move to distance the government from any revival of the censorship clause comes amid Labour’s review of the Online Safety Act, launched after riots last summer linked to false claims about a Southport attacker. While that review sparked fresh debate over “misinformation,” officials have opted not to revisit the “legal but harmful” language, choosing instead to emphasize online protections for children.

Labour appears focused on building upon new safety measures coming into force this summer, including mandatory age checks for adult content. Technology Secretary Peter Kyle is working on a package aimed at strengthening youth safeguards, though these proposals stop well short of any return to compelled content takedowns.

“We are really committed to keeping children safe,” a government insider said. “Finally, the Online Safety Act is starting to have an impact, and we will see some enforcement action shortly. Age assurance will also be a massive step forward when it comes in the summer, but we’re actively exploring other ways of protecting children.”

While the UK government’s removal of the “legal but harmful” provision from the Online Safety Act was intended to address concerns over free speech and censorship, significant issues remain. The Act still imposes broad duties on online platforms to assess and mitigate risks associated with user-generated content.


EU Targets Elon Musk’s X with Potential $1 Billion Fine Under Censorship Law

When the European Commission goes to war, it doesn’t send tanks. It dispatches compliance officers with angry emails and billion-dollar fines.

The European Union’s eurocrats’ next target is Elon Musk’s social media fixer-upper, X.

According to the New York Times, four anonymous whisperers from inside the EU machine say the bloc is loading up a billion-dollar bazooka aimed squarely at X, citing violations of their shiny new Digital Services Act, the latest attempt to regulate speech by committee. And what better way to showcase the importance of online civility than by dragging the world’s loudest billionaire into court?

The DSA, which was sold to the public as a digital hygiene law to make the internet a kinder, gentler place, has become a blunt instrument in the hands of bureaucrats who never met a control lever they didn’t want to pull. They’ve apparently decided that Musk’s flavor of digital chaos — too many unregulated opinions, not enough “fact-checking,” and a stubborn refusal to grovel — is a clear and present danger to the European project.

Among X’s alleged crimes against the algorithmic gods: refusing to hand over data to “independent researchers” (friendly academics who publish pro-censorship PDFs no one reads), hiding the secrets behind those little blue check marks, and failing to spill the tea on who’s advertising to whom.

Naturally, this has prompted Brussels to threaten a fine that could “top $1 billion,” a figure clearly pulled from the same place all government fines originate — an angry dartboard. One idea floating through the regulatory fog? That if X itself can’t pay up, maybe SpaceX can. Because when you’re short on jurisdiction, why not go fishing in another company’s wallet?


New Info on How the Feds Helped Censor a Bombshell

The US House Judiciary Committee has released internal chat logs that show the FBI moved into cover-up mode the very day the New York Post published the Hunter Biden laptop story, on October 14, 2020.

The logs, first reported on by journalists Michael Shellenberger and Catherine Herridge, reveal that FBI employees were immediately instructed “not to discuss the Biden matter,” while an intelligence analyst who, during a call with Twitter, accidentally confirmed that the story – that is, the laptop – was real was placed under a “gag order.”

The analyst, who was with the FBI’s Criminal Investigative Division, was able to confirm so quickly that the reporting was based on credible information because the FBI had seized and authenticated Hunter Biden’s laptop several months earlier.

Big Tech platforms – notably Twitter and Facebook – then started censoring the article, falsely branding it “Russian disinformation.” By maintaining the “no comment” policy instead of confirming that the laptop was real and under investigation, the FBI was in effect tacitly promoting the false narrative about foreign interference.

These moves originated from the Foreign Influence Task Force, which was shut down earlier this year for its activities related to censorship through pressure on social platforms.

The laptop scandal was unfolding during a crucial time in the 2020 campaign and represents one of the most egregious publicly known examples of political censorship of free speech and media orchestrated by government agencies.

The chat logs that have now been published reveal that one of the FBI staff involved in the Hunter Biden laptop story suppression was Bradley Benavides.

Only weeks prior, Benavides had featured in another controversy, that time in what appeared to be a smear campaign against Senators Ron Johnson and Chuck Grassley, who were allegedly “advancing Russian disinformation.”

At the time, the senators just so happened to be investigating Hunter Biden’s financial connections to foreign governments.

A letter the Judiciary Committee sent Benavides in June 2023 shows that he had by that time gone through the Big Tech-Big Government “revolving door” – and was a senior risk manager at Amazon.


Democrats, Former Disinfo Board Chief, Defend Government-Big Tech Ties, Dismiss Censorship-Industrial Complex at House Hearing on First Amendment Safeguards

Nina Jankowicz – former head of the disbanded Disinformation Governance Board and CEO of the American Sunlight Project – and the Democrats who spoke this week during a House Foreign Affairs Committee hearing continued to deny, and defend, the Big Tech-government censorship collusion.

The hearing – “Censorship-Industrial Complex: The Need for First Amendment Safeguards at the State Department” – also saw Jankowicz, who appeared as a witness, and Representative Sydney Kamlager-Dove, a Democrat, attempt to paint the actions taken by the new Trump White House as worse than what was happening during the previous US administration.

Both Jankowicz and Kamlager-Dove referred to the system known as the Censorship-Industrial Complex, and its elements, as “fiction,” “lies,” “tall tales,” and “a conspiracy theory,” with Jankowicz trying to frame the new government’s moves as “an assault on the First Amendment” and “suppressing speech.”

Despite the fact these are some of the key accusations against the Biden administration – and at this point fairly well backed up by batches of internal documents, as well as testimony from Big Tech execs – Jankowicz chose to call it “the imagined actions of the Biden administration.”

As for her own role in this “imagined” system – namely, the brief stint at the helm of the short-lived Disinformation Governance Board (that was part of the Department of Homeland Security) – Jankowicz maintained that it was not meant to be a censorship body.

Instead, Jankowicz would have the Committee and the public believe the Board was true to its mission statement, which was “to protect civil rights, civil liberties, privacy, and the First Amendment.”


After Trump’s Election, Several NSF-Funded Censorship Tools Go Missing

The National Science Foundation (NSF) claims to be the place “where discoveries begin” — but it funded programs where free speech ends. When The Federalist asked if it still funds anti-“misinformation” tools, a representative simply pointed to its public grants database, which lists censorship projects funded during the Biden administration.

The public-interest law firm Alliance Defending Freedom sent NSF records requests in February to uncover any coordination between the federal agency and internet communications monopolies such as Google and Facebook.

NSF funds one-quarter of all federally funded academic fundamental research projects at U.S. higher education institutions, and funds technology development at approximately 400 “small businesses” each year. It has an annual budget of $8.5 billion. It also funds programs that exclude recipients based on race, according to an NSF factsheet.

The NSF’s “Convergence Accelerator,” which funds special research projects, launched a “cohort track” in 2021 for “Trust & Authenticity in Communication Systems.” Recipients developed software to control online speech labeled “misinformation” about politically sensitive topics including Covid-19 treatments and election integrity.

Twelve “teams” were selected for “phase 1” of the project, according to the House Judiciary Committee, and six were selected for “phase 2” funding at $5 million each. In total, the NSF awarded $39 million to projects in this “track” of the program.

The NSF-funded projects include the Analysis and Response Toolkit for Trust (ARTT), Co-Designing for Trust, Co:Cast, Co-Insights, CommuniTies, CourseCorrect, Expert Voices Together, Search Lit, TrustFinder, and WiseDex. Projects had ties to infamous pro-censorship organizations including Google, Meta, Snopes, Wikimedia, the World Economic Forum, and the World Health Organization.

The Federalist asked NSF how much total funding each project received from the agency, and if there have been any “cohorts dedicated to developing anti-misinformation projects since ‘track F.’” An NSF spokesman replied that, “in recent years,” Congress asked NSF to “identify and address issues of safety, ethics and adversarial influence online” through funding bills and the IOGAN Act. So the Convergence Accelerator “initiated Track F.”

“This program has not made an award since 2021 and will not be making any awards in the future,” the representative said. “NSF invests in research, innovation, and workforce development that accelerates the development, testing, and understanding of technology. NSF plays no role in content policies nor content regulations.”

The Federalist again asked “how much total NSF funding each project received” and “whether NSF helped develop other anti-misinformation projects aside from Track F.” The spokesman simply pointed to the agency’s “award search page,” and said, “[o]ther than that, I don’t have anything else to add.” A search for active NSF awards regarding “misinformation” yields more than 100 results.

ADF also filed public records requests for documents regarding the NSF Convergence Accelerator, as The Federalist previously reported. Mathew Hoffman, legal counsel for ADF’s Center for Free Speech, said at the time the group was investigating “where our tax dollars are being spent to fund censorship” — and “if anyone’s rights have been violated by the censorship-industrial complex, that litigation will certainly be an option.”
