The Covid vaccine gave me side effects that ruined my life, but Facebook keeps censoring me from telling my friends about what happened

A woman who says she suffered chronic health complications after taking the AstraZeneca vaccine claims to have been censored from sharing her story on Facebook.

Caroline Pover, 52, received the jab in March 2021 and, within nine hours, experienced convulsions, shivering, breathing difficulties and low blood pressure.

Ms Pover, of Cirencester, Gloucestershire, says she was hospitalised when her condition escalated to ‘stroke-like’ symptoms, in addition to exhaustion, breathing difficulties, a racing heart and migraines.

Her story was shared in a national newspaper in March last year as she and 800 other victims struggled to claim compensation under the Government’s Vaccine Damage Payment Scheme (VDPS).

But after sharing the link on her Facebook feed at the start of this year, Ms Pover says the website put a warning notice on her account.

Ms Pover, herself a freelance journalist and entrepreneur, said: ‘My posts about what was happening to me started having FB “notes” appearing underneath them about vaccination.

‘A group page I was an admin on was shut down completely by Facebook in the summer of 2021.

‘When I posted the Daily Express article, which did an excellent job of not discussing anything pro or anti… I received a warning and the post was hidden.

‘It’s a ridiculous situation for vaccine-injured people, who have a right to information.

‘If this was an online support group relating to cancer or another type of serious condition, we’d be outraged at the thought of it being censored and we’d be very sensitive to people having to navigate a very complicated health situation.’

Keep reading

Hackers Exploit Third-Party Cookies to Access Google Accounts Without Passwords

Security experts at CloudSEK have reportedly identified a new form of malware that exploits third-party cookies, allowing unauthorized access to Google accounts without the need for passwords.

The Independent reports that the alarming security breach, first announced by a hacker on a Telegram channel in October 2023, exploits vulnerabilities in third-party cookies. Specifically, it targets Google authentication cookies, which are normally used to streamline user access without repeated logins.

Hackers have devised a method to extract these cookies, allowing them to bypass password-based security and even two-factor authentication mechanisms to access user accounts.

This exploit is a major risk for all Google accounts as it allows for ongoing access to Google services, even after a user’s password has been changed. An analysis by the cybersecurity firm CloudSEK indicates that several hacking groups are actively experimenting with this technique.
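The persistence described above follows from how session authentication typically works: many services validate each request against a server-side session store keyed by an opaque cookie value, and the password is never consulted at that point. The following minimal sketch (with hypothetical names, not Google’s actual infrastructure) illustrates why a stolen session cookie can outlive a password change unless sessions are explicitly revoked:

```python
# Toy model of cookie-based session auth. Assumption: sessions live in a
# server-side store keyed by the cookie value, independent of the password.
import secrets


class SessionStore:
    """Hypothetical server-side session store keyed by an opaque cookie."""

    def __init__(self):
        self._sessions = {}  # cookie value -> user id

    def login(self, user_id: str) -> str:
        # Issue a random session cookie on successful login.
        cookie = secrets.token_hex(16)
        self._sessions[cookie] = user_id
        return cookie

    def authenticate(self, cookie: str):
        # Note: the password is never checked here -- only the cookie.
        return self._sessions.get(cookie)

    def revoke_all(self, user_id: str):
        # Invalidating sessions is a separate, explicit step.
        self._sessions = {c: u for c, u in self._sessions.items()
                          if u != user_id}


store = SessionStore()
cookie = store.login("alice")  # victim logs in; malware exfiltrates `cookie`

# The victim changes their password. Nothing here touches the session
# store, so the stolen cookie still authenticates:
assert store.authenticate(cookie) == "alice"

# Only explicit session revocation closes the hole:
store.revoke_all("alice")
assert store.authenticate(cookie) is None
```

This is why security guidance after such a compromise is to sign out of all sessions (revoking the cookies), not merely to rotate the password.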

Keep reading

Media Outlets Are Already Calling for Online 2024 Election Censorship

The page has only just been turned on 2023, and already the narrative that heavy policing of online speech will be vital for 2024, an election year, has stirred.

The legacy media outlet The Guardian, in its piece about Kate Starbird, has already complained that there may be less censorship ahead of the 2024 elections, and claimed that Rep. Jim Jordan’s committee’s reports on Big Tech-government censorship collusion are based on “outlandish claims.” This ignores the fact that an injunction was successfully placed on the Biden administration over its censorship pressure on Big Tech, a case the Supreme Court will rule on this year.

In an era where the policing of online speech is increasingly contentious, Kate Starbird’s role in combating what she terms election misinformation has placed her squarely in the midst of a heated debate. As a leading figure at the University of Washington’s Center for an Informed Public, Starbird has actively engaged in documenting what she and her team perceive as misinformation during the 2020 elections, particularly focusing on claims of voter fraud.

However, Starbird’s approach and her team’s actions have not been without controversy. Critics argue that their efforts amount to a form of censorship, infringing upon free speech. This criticism extends beyond Starbird’s team to a broader national trend, where researchers engaged in similar work face accusations of partisanship and censorship, challenging the principles of free expression.

Jim Jordan, chair of the House judiciary committee, has emerged as a key figure in opposing what he views as the overreach of these researchers. He has focused on investigating groups and individuals involved in counteracting misinformation, especially in the context of elections and Covid-19. Central to the controversy is the practice of working with government entities and flagging content to social media platforms, which some argue leads to undue censorship and violates First Amendment rights.

The debate over the role of anti-misinformation efforts has escalated beyond Congress, evidenced by lawsuits from the attorneys general of Missouri and Louisiana and from the state of Texas, along with two rightwing media companies. These legal actions challenge the alleged collaboration between the Biden administration, the Global Engagement Center, and social media companies, framing it as a constitutional breach.

Keep reading

Biden Administration Urges Supreme Court To Overturn Injunction on Federal Agencies Influencing Tech Censorship

The US Court of Appeals for the Fifth Circuit recently affirmed an injunction against federal agencies to stop the current White House from colluding with Big Tech’s social media.

And now, the Biden Administration is going to the US Supreme Court in a last-ditch attempt to reverse this decision.

The big picture effect – or at least, the intended meaning – of the Fifth Circuit ruling was to stop the government from working with Big Tech in censoring online content.

There’s little surprise that this doesn’t sit well with that government, which now hopes that the federal appellate court’s decision can be overturned.

The White House says the ruling bans its “good” work done alongside social media to combat “misinformation”; instead of admitting that its actions amount to collusion with Big Tech – which has been amply documented by now, not least by the Twitter Files – the government insists its actions serve the public and its “ability” to discuss relevant issues.

We obtained a copy of the petition for you here.

US Surgeon General Vivek Murthy is back again, arguing that what those now in power in the US did ahead of the 2020 presidential election (a message amplified by legacy media), as well as subsequently regarding pandemic “misinformation” – now fairly widely accepted to be censorship (“moderation”) – remains, in Murthy’s view, justified.

Keep reading

Substackers Battle Over Banning Nazis

Once again, we’re debating about “platforming Nazis,” following the publication of an article in The Atlantic titled “Substack Has a Nazi Problem” and a campaign by some Substack writers to see some offensive accounts given the boot. And once again, the side calling for more content suppression is short-sighted and wrong. 

This is far from the first time we’ve been here. It seems every big social media platform has been pressured to ban bigoted or otherwise offensive accounts. And Substack—everyone’s favorite platform for pretending like it’s 2005 and we’re all bloggers again—has already come under fire multiple times for its moderation policies (or lack thereof).

Substack differs from blogging systems of yore in some key ways: It’s set up primarily for emailed content (largely newsletters but also podcasts and videos), it has paid some writers directly at times, and it provides an easy way for any creator to monetize content by soliciting fees directly from their audience rather than running ads. But it’s similar to predecessors like WordPress and Blogger in some key ways, also—and more similar to such platforms than to social media sites such as Instagram or X (formerly Twitter). For instance, unlike on algorithm-driven social media platforms, Substack readers opt into receiving posts from specific creators, are guaranteed to get emailed those posts, and will not receive random content to which they didn’t subscribe.

Substack is also similar to old-school blogging platforms in that it’s less heavy-handed with moderation. On the likes of Facebook, X, and other social media platforms, there are tons of rules about what kinds of things you are and aren’t allowed to post and elaborate systems for reporting and moderating possibly verboten content. 

Substack has some rules, but they’re pretty broad—nothing illegal, no inciting violence, no plagiarism, no spam, and no porn (nonpornographic nudity is OK, however).

Substack’s somewhat more laissez-faire attitude toward moderation irks people who think every tech company should be in the business of deciding which viewpoints are worth hearing, which businesses should exist, and which groups should be allowed to speak online. To this censorial crew, tech companies shouldn’t be neutral providers of services like web hosting, newsletter management, or payment processing. Rather, they must evaluate the moral worth of every single customer or user and deny services to those found lacking.

Keep reading

Meta: Systemic Censorship of Palestine Content

Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Keep reading

Google Experiments With “Faster and More Adaptable” Censorship of “Harmful” Content Ahead of 2024 US Elections

In the run-up to the 2020 US presidential election, Big Tech engaged in unprecedented levels of election censorship, most notably by censoring the New York Post’s bombshell Hunter Biden laptop story just a few weeks before voters went to the polls.

And with the 2024 US presidential election less than a year away, both Google and its video sharing platform, YouTube, have confirmed that they plan to censor content they deem to be “harmful” in the run-up to the election.

In its announcement, Google noted that it already censors content that it deems to be “manipulated media” or “hate and harassment” — two broad, subjective terms that have been used by tech giants to justify mass censorship.

However, ahead of 2024, the tech giant has started using large language models (LLMs) to experiment with “building faster and more adaptable” censorship systems that will allow it to “take action even more quickly when new threats emerge.”

Google will also be censoring election-related responses in Bard (its generative AI chatbot) and Search Generative Experience (its generative AI search results).

In addition to these censorship measures, Google will be continuing its long-standing practice of artificially boosting content that it deems to be “authoritative” in Google Search and Google News. While this tactic doesn’t result in the removal of content, it can result in disfavored narratives being suppressed and drowned out by these so-called authoritative sources, which are mostly pre-selected legacy media outlets.

Keep reading

Democrats Demand Social Media Platforms Censor Abortion “Misinformation” With Direct Letters to Musk and Zuckerberg

House Democrats have issued a strong call to Elon Musk and Mark Zuckerberg, urging them to address what they call the widespread issue of abortion “misinformation” on their social media platforms, including Facebook, Instagram, and X.

These concerns were expressed in two letters from the House Oversight Committee.

We obtained a copy of the letter for you here.

The letters, spearheaded by Jamie Raskin (D-Md.), the committee’s Ranking Member, request that Musk’s and Zuckerberg’s platforms urgently tackle the spread of false abortion information and provide Congress with briefings on this matter by December 14.

“Your company’s decision to keep these posts visible seems at odds with your Terms of Service that allow you to remove unlawful conduct on your platform,” the letter states. “Even more concerning is your company’s apparent double-standard when it comes to removing posts that you label ‘abortion advocacy’ or posts that offer legitimate medical and logistical advice for someone considering abortion, while allowing crisis pregnancy centers and anti-abortion advocates to spread false and misleading information regarding abortion.”

The committee, in its letters, highlights the nature of the allegedly misleading medical information and false content about abortion that is proliferating, especially on the platforms managed by Musk and Zuckerberg. This alleged misinformation, according to the committee, can lead to people doubting their healthcare providers and even their own judgment, posing significant health and safety risks.

Keep reading

Democrats Blasted for Claiming “No Evidence” of Big Tech-Government Censorship Collusion

Republicans called out Democrats for continuing to deny that the Biden administration colluded with tech platforms to censor speech during a hearing today, despite lawsuits, subpoenas, and other releases uncovering huge troves of evidence documenting the Biden administration’s relentless censorship demands.

Democrats claimed there’s “no evidence” of censorship collusion, branded the notion that social media companies are colluding with the government to censor conservative voices as “unfounded,” and called it a “conspiracy” theory during a House Judiciary Committee Hearing on the Weaponization of the Federal Government.

But Republicans shot back and called them out for ignoring the huge banks of evidence that showcase Biden admin officials leaning on Big Tech to censor speech they disapprove of.

Three of the witnesses, journalist Matt Taibbi, journalist Michael Shellenberger, and journalist Rupa Subramanya, also challenged Democrat attempts to dismiss evidence of Biden admin-Big Tech censorship collusion during the hearing.

Rep. Stacey Plaskett (D-VI) claimed there’s “no evidence” of tech companies colluding with the government to censor conservatives.

Keep reading

Instagram Videos Sexualizing Children Shown To Adults Who Follow Preteen Influencers

In June, we noted that Meta’s Instagram was caught facilitating a massive pedophile network, by which the service would promote pedo-centric content to other pedophiles using coded emojis, such as a slice of cheese pizza.

According to the Wall Street Journal, Instagram allowed pedophiles to search for content with explicit hashtags such as #pedowhore and #preteensex, which were then used to connect them to accounts that advertise child-sex material for sale from users going under names such as “little slut for you.” And according to the National Center for Missing & Exploited Children, Meta accounted for more than 85% of child pornography reports, the Journal reported.

Companies whose ads appeared next to inappropriate content included Disney, Walmart, Match.com, Hims and the Wall Street Journal itself.

And yet, no mass exodus of advertisers…

They haven’t stopped…

According to a new report by the Journal, Instagram’s ‘reels’ service – which shows users short video clips the company’s algorithm predicts they will find interesting – has been serving up clips of sexualized children to adults who follow preteen influencers, gymnasts, cheerleaders and other categories that child predators are attracted to.

The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts that followed those children had also demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more disturbing content interspersed with ads.

In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl. -WSJ

A separate experiment run by the Canadian Centre for Child Protection produced similar results.

Keep reading