Latest Government Report Reveals 10 Times Biden Regime Pressured Facebook to Take Down What Ended Up Being Truthful Information on COVID and the Vaccines

On Wednesday, the House Judiciary Committee released an 800-page report on the Biden White House censorship regime.

The report documented numerous instances in which the Biden regime pressured social media companies to censor, silence, and take down information on COVID origins and the COVID vaccines.

Here is the full 800-page report released by the House Judiciary Committee on the Biden Administration’s Censorship Industrial Complex.

On Thursday, investigative reporter Mike Benz revealed “10 flaming examples” in which Facebook, YouTube and Amazon explicitly said they only adopted censorship policies because they were threatened by the Biden government.

Keep reading

Facebook designates Grayzone journalist Kit Klarenberg a ‘dangerous individual’

The notoriously intelligence-friendly social media network appears to have imposed a ban on posting a recent report by Kit Klarenberg, and is automatically restricting users who re-publish his work.

Multiple Facebook users have reported being banned, or having their posts censored, after sharing an investigation by The Grayzone’s Kit Klarenberg into CIA and MI6 involvement in the creation of ISIS. Readers who post links to the piece on the social network find themselves frozen out of their accounts, on the apparent grounds that Facebook has classified Klarenberg as a “dangerous individual.”

“I just shared this article from @Kit Klarenberg on Facebook and the post was immediately deleted,” wrote Ricky Hale, the founder of popular independent left-wing outlet Council Estate Media.

In a Substack article published April 5, Hale wrote that “the page was hit with restrictions and I was told I had shared a post from a dangerous individual or organisation.”

Hale was only able to regain control of his Facebook page, which boasts over 44,000 fans, by removing administrative privileges from the user who shared the article: himself.

Other restrictions imposed due to sharing Klarenberg’s work have not been lifted, and may well never be. Hale says he has been blocked from changing the page’s name, inviting people to join the page, or creating new Facebook groups. “Given Facebook had already reduced my page’s visibility for another absurd violation, I’m assuming my posts are going to be invisible,” Hale lamented. “This means a Facebook page with 44,000 users has been rendered useless because of state censorship that’s been outsourced to big tech. This is not how a free society operates.”

It was not the first time that Facebook censored one of its users for posting Klarenberg’s article. Hours beforehand, another social media user revealed the piece had been removed from her Facebook timeline mere “seconds” after it was posted.

Keep reading

Facebook let Netflix see user DMs to help them tailor content as part of a close collaboration between the two tech giants, new court documents claim

Facebook‘s parent company Meta allegedly allowed Netflix to peer at its user DMs ‘for nearly a decade’ to help the streaming giant better tailor content for its own users, an explosive lawsuit has alleged. 

Court documents filed last April and unsealed on March 23 as part of a major antitrust lawsuit against Meta appear to have exposed the intricate relationship between two of Silicon Valley’s biggest players. 

The class-action lawsuit, filed by two US citizens, Maximilian Klein and Sarah Grabert, alleged Netflix and Facebook ‘enjoyed a special relationship’, with the social media platform giving the streaming site ‘bespoke access’ to user data. 

The two Silicon Valley players also agreed to ‘custom partnerships and integrations that helped supercharge Facebook’s ad targeting and ranking models’ from at least 2011, thanks to the personal relationship between Netflix’s co-founder Reed Hastings and Facebook’s founder Mark Zuckerberg.

Lawyers alleged that ‘within a month’ of Hastings joining Facebook’s board of directors, the two companies signed an ‘Inbox API’ (Application Programming Interface) agreement that ‘allowed Netflix programmatic access to Facebook’s user’s private message inboxes.’

Keep reading

FBI Agent Says He Hassles People ‘Every Day, All Day Long’ Over Facebook Posts

The FBI spends “every day, all day long” interrogating people over their Facebook posts. At least, that’s what agents told Stillwater, Oklahoma, resident Rolla Abdeljawad when they showed up at her house to ask her about her social media activity. 

Three FBI agents came to Abdeljawad’s house and said that they had been given “screenshots” of her posts by Facebook. Her lawyer Hassan Shibly posted a video of the incident online on Wednesday.

Abdeljawad told agents that she didn’t want to talk and asked them to show their badges on camera, which the agents refused to do. She wrote on Facebook that she later confirmed with local police that the FBI agents really were FBI agents.

“Facebook gave us a couple of screenshots of your account,” one agent in a gray shirt said in the video.

“So we no longer live in a free country and we can’t say what we want?” replied Abdeljawad.

“No, we totally do. That’s why we’re not here to arrest you or anything,” a second agent in a red shirt added. “We do this every day, all day long. It’s just an effort to keep everybody safe and make sure nobody has any ill will.”

Keep reading

Meta’s AI Watermarking Plan is Flimsy, At Best

In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, social media companies will not solve the problem of deepfakes on social media this year with this approach. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.

The most obvious weakness is that Meta’s system will only work if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Unsecured “open-source” generative AI tools mostly don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about two seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.
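The fragility described above follows from how these provenance labels travel: as metadata attached to the image file rather than as part of the pixels, so a plain re-encode that does not copy metadata silently discards them. The sketch below illustrates the mechanism using Pillow and an EXIF ImageDescription tag as a simplified stand-in for a real C2PA manifest or IPTC "AI generated" marker (the actual C2PA format is a more complex embedded manifest, but it is stripped by the same kind of operation).

```python
import io
from PIL import Image  # third-party: pip install Pillow

# Build a small JPEG carrying a provenance label in its metadata.
# (ImageDescription here stands in for a real C2PA/IPTC marker.)
img = Image.new("RGB", (64, 64), "white")
exif = img.getexif()
exif[0x010E] = "trainedAlgorithmicMedia"  # IPTC-style digital-source label
tagged_bytes = io.BytesIO()
img.save(tagged_bytes, format="JPEG", exif=exif)

# The label survives a normal round trip of the file...
tagged = Image.open(io.BytesIO(tagged_bytes.getvalue()))
print(tagged.getexif().get(0x010E))  # label is present

# ...but re-encoding the pixels without copying metadata drops it:
# the image looks identical, yet the provenance marker is gone.
clean_bytes = io.BytesIO()
tagged.save(clean_bytes, format="JPEG")
clean = Image.open(io.BytesIO(clean_bytes.getvalue()))
print(clean.getexif().get(0x010E))  # no label
```

Any screenshot, crop, or re-save through a tool that ignores metadata has the same effect, which is why labeling schemes that depend on cooperative metadata survive only against non-adversarial users.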

Keep reading

Meta Launches Real-Time Content Censorship Unit for 2024 Elections

When Facebook (Meta) wants to safeguard its “right to censor,” the company presents itself as basically just another private company out there minding its own business.

But when election campaigns get into full swing, in the US especially but also in the EU, Meta’s reaction, announcing a raft of new policies and new units to deal with election-related information, shows that it could have a massive influence on their outcome.

And while misinformation (mostly arbitrarily “defined”) is repeatedly called the scourge of democracy, there is another scourge, this time with no doubt about it: censorship, sometimes based on excuses as flimsy as somebody’s subjective opinion, for example, “potential threats.”

None of this seems to be important to Meta, which has just announced how it is “preparing” for the elections in the EU this summer.

There’s a slew of news on this front: Meta will have what it calls an Elections Operations Center whose job will be identifying “potential threats.” And then real-time “mitigation” (i.e., censorship) will follow.

Oh happy news: despite all the controversies around “fact-checkers,” Meta has announced it is continuing to rely on them, and even boasts about having “the largest fact-checking network of any platform.”

Keep reading

Senators Obliterate Zuckerberg For ‘Helping’ Pedos Find Child Sex Abuse Content

During a remarkable Senate hearing Wednesday, Republican Senators wiped the floor with Meta owner Mark Zuckerberg, exposing how his company has acted woefully when it comes to child sexual abuse material and other harmful content on its platforms that has directly led to the deaths of children.

By the end of the hearing Zuckerberg was utterly humiliated, forced to stand and face the families of victims who have suffered because of his failures, and told that he should be sued into oblivion for gross dereliction of duty.

During the hearing titled ‘Big Tech and the Online Child Sexual Exploitation Crisis’, Senator Ted Cruz essentially accused Zuckerberg of helping pedophiles gain access to child porn on his platforms.

“Every parent in America is terrified about the garbage that is directed at our kids,” Cruz told Zuckerberg, adding “the phones they have are portals to predators…and each of your companies could do a lot more to prevent it.”

Keep reading

The Covid vaccine gave me side effects that ruined my life, but Facebook keeps censoring me from telling my friends about what happened

A woman who says she suffered chronic health complications after taking the AstraZeneca vaccine claims to have been censored from sharing her story on Facebook.

Caroline Pover, 52, received the jab in March 2021 and, within nine hours, experienced convulsions, shivering, breathing difficulties and low blood pressure.

Ms Pover, of Cirencester, Gloucestershire, says she was hospitalised when her condition escalated to ‘stroke-like’ symptoms, in addition to exhaustion, breathing difficulties, a racing heart and migraines.

Her story was shared in a national newspaper in March last year as she and 800 other victims struggled to claim compensation under the Government’s Vaccine Damage Payment Scheme (VDPS).

But after sharing the link on her Facebook feed at the start of this year, Ms Pover says the website put a warning notice on her account.

Ms Pover, herself a freelance journalist and entrepreneur, said: ‘My posts about what was happening to me started having FB “notes” appearing underneath them about vaccination.

‘A group page I was an admin on was shut down completely by Facebook in the summer of 2021.

‘When I posted the Daily Express article, which did an excellent job of not discussing anything pro or anti… I received a warning and the post was hidden.

‘It’s a ridiculous situation for vaccine-injured people, who have a right to information.

‘If this was an online support group relating to cancer or another type of serious condition, we’d be outraged at the thought of it being censored and we’d be very sensitive to people having to navigate a very complicated health situation.’

Keep reading

Meta: Systemic Censorship of Palestine Content

Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Keep reading

Facebook Approved an Israeli Ad Calling for Assassination of Pro-Palestine Activist

A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Force and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through filtering by Facebook’s automated process, based on machine-learning, that allows its global advertising business to operate at a rapid clip.

Keep reading