Meta: Systemic Censorship of Palestine Content

Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Keep reading

FACEBOOK APPROVED AN ISRAELI AD CALLING FOR ASSASSINATION OF PRO-PALESTINE ACTIVIST

A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Force and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through filtering by Facebook’s automated process, based on machine-learning, that allows its global advertising business to operate at a rapid clip.

Keep reading

Facebook and Instagram content enabled child sexual abuse, trafficking: New Mexico lawsuit

Facebook and Instagram created “prime locations” for sexual predators that enabled child sexual abuse, solicitation, and trafficking, New Mexico’s attorney general alleged in a civil suit filed Wednesday against Meta and CEO Mark Zuckerberg.

The suit was brought after an “undercover investigation” allegedly revealed myriad instances of sexually explicit content being served to minors, child sexual coercion, or the sale of child sexual abuse material, or CSAM, New Mexico attorney general Raúl Torrez said in a press release.

The suit alleges that “certain child exploitative content” is ten times “more prevalent” on Facebook and Instagram as compared to pornography site PornHub and adult content platform OnlyFans, according to the release.

“Child exploitation is a horrific crime and online predators are determined criminals,” Meta said in a statement to CNBC. A spokesperson said that the company “deploy[s] sophisticated technology, hire[s] child safety experts, report[s] content to the National Center for Missing and Exploited Children, and share[s] information and tools with other companies and law enforcement, including state attorneys general, to help root out predators.”

The New Mexico suit follows coordinated legal actions against Meta by 42 other attorneys general in October. Those actions alleged that Facebook and Instagram directly targeted and were addictive to children and teens.

New Mexico’s suit, by contrast, alleges Meta and Zuckerberg violated the state’s Unfair Practice Act. The four-count suit alleges that the company and Zuckerberg engaged in “unfair trade practices” by facilitating the distribution of CSAM and the trafficking of minors, and undermined the health and safety of New Mexican children.

The lawsuit argues that Meta’s algorithms allegedly promote sex and exploitation content to users and that Facebook and Instagram lack “effective” age verification. The suit also alleges that the company failed to identify child sexual exploitation “networks” and to fully prevent users it had suspended for those violations from rejoining the platform using new accounts.

“In one month alone, we disabled more than half a million accounts for violating our child safety policies,” a Meta spokesperson said in a statement.

Keep reading

Meta sues FTC, hoping to block ban on monetizing kids’ Facebook data

Meta sued the Federal Trade Commission yesterday in a lawsuit that challenges the FTC’s authority to impose new privacy obligations on the social media firm.

The complaint stems from the FTC’s May 2023 allegation that Meta-owned Facebook violated a 2020 privacy settlement and the Children’s Online Privacy Protection Act. The FTC proposed changes to the 2020 privacy order that would, among other things, prohibit Facebook from monetizing data it collects from users under 18.

Meta’s lawsuit against the FTC challenges what it calls “the structurally unconstitutional authority exercised by the FTC through its Commissioners in an administrative reopening proceeding against Meta.” It was filed against the FTC, Chair Lina Khan, and other commissioners in US District Court for the District of Columbia. Meta is seeking a preliminary injunction to stop the FTC proceeding pending resolution of the lawsuit.

Meta argues that in the FTC’s administrative proceedings, “the Commission has a dual role as prosecutor and judge in violation of the Due Process Clause.” Meta asked the court to “declare that certain fundamental aspects of the Commission’s structure violate the US Constitution, and that these violations render unlawful the FTC Proceeding against Meta.”

Meta says it should have a right to a trial by jury and that “Congress unconstitutionally has delegated to the FTC the power to assign disputes to administrative adjudication rather than litigating them before an Article III court.” The FTC should not be allowed to “unilaterally modify the terms” of the 2020 settlement, Meta said.

The FTC action “would dictate how and when Meta can design its products,” the lawsuit said.

Keep reading

Microsoft and Meta Detail Plans To Combat “Election Disinformation” Which Includes Meme Stamp-Style Watermarks and Reliance on “Fact Checkers”

And so it begins. In fact, it hardly ever stops – another election cycle is well on its way in the US. But what has emerged these last few years, and what continues to crop up the closer election day gets, is the role of the most influential social platforms/tech companies.

Pressure on them is sometimes public, but mostly not, as the Twitter Files have taught us; and it is with this in mind that various announcements about combating “election disinformation” coming from Big Tech should be viewed.

Then again, one can never discount the possibility that some – say, Microsoft – are doing it quite voluntarily. That company has now come out with what it calls “new steps to protect elections,” and is framing this concern for election integrity more broadly than just the goings-on in the US.

From the EU to India and many, many places in between, elections will be held over the next year or so, says Microsoft; however, these democratic processes are in peril.

“While voters exercise this right, another force is also at work to influence and possibly interfere with the outcomes of these consequential contests,” said a blog post co-authored by Microsoft Vice Chair and President Brad Smith.

By “another force,” could Smith possibly mean, Big Tech? No. It’s “multiple authoritarian nation states” he’s talking about, and Microsoft’s “Election Protection Commitments” seek to counter that threat in a 5-step plan to be deployed in the US, and elsewhere where “critical” elections are to be held.

Why some elections are more “critical” than others, and what exactly Microsoft is seeking to protect – it’s all very unclear.

Keep reading

Nashville Mayor’s Office, MSM Flips Out After Trans Shooter Manifesto Leaks; Facebook Censors

As the Epoch Times notes:

Metro Nashville Mayor Freddie O’Connell said in a statement on Nov. 6 that he had directed the city’s legal director to initiate an investigation into the leak, but he didn’t address the veracity of the documents. Other agencies were unable to verify the authenticity of the documents when asked to do so by The Epoch Times on Nov. 6.

“I have directed Wally Dietz, Metro’s law director, to initiate an investigation into how these images could have been released,” Mr. O’Connell said in the statement. “That investigation may involve local, state, and federal authorities. I am deeply concerned with the safety, security, and well-being of the Covenant families and all Nashvillians who are grieving.”

A spokeswoman for MNPD said there was “no information” they could provide at this time when reached via phone on Nov. 6. So far, the Tennessee Bureau of Investigation said that they can offer no confirmation of the documents, according to a spokesman of the agency.

. . .

Earlier on Monday, Alex Jones claimed that the Biden DOJ suppressed the document.

Keep reading

Man in his 40s is arrested after ‘dressing up as Manchester Arena bomber Salman Abedi for Halloween and posting it on Facebook’

A man in his 40s has been arrested after allegedly dressing up as Manchester Arena bomber Salman Abedi for Halloween and posting it on Facebook.

Pictures posted by David Wootton show him wearing an Arabic-style headdress, with the slogan ‘I love Ariana Grande’ on his T-shirt, and carrying a rucksack with ‘Boom’ and ‘TNT’ written on the front.

The disturbing Halloween costume, which was captioned ‘bet I get kicked out of the party’, caused fury on social media.

North Yorkshire Police confirmed the man arrested had been released on conditional police bail to allow for further enquiries to be carried out. 

Abedi killed 22 people – some of them children – as well as himself when he detonated his device in the foyer of Manchester Arena at the end of an Ariana Grande concert in May 2017. 

In a statement, the force said: ‘North Yorkshire Police can confirm that a man has been arrested after the force received complaints about a man wearing an offensive costume on social media, depicting murderer, Salman Abedi who killed 22 people at Manchester Arena.

‘The man, who is aged in his 40s, was arrested on 1 November on suspicion of a number of offences including using a public communication network to send offensive messages.’

Keep reading

INSTAGRAM CENSORED IMAGE OF GAZA HOSPITAL BOMBING, CLAIMS IT’S TOO SEXUAL

Instagram and Facebook users attempting to share scenes of devastation from a crowded hospital in Gaza City claim their posts are being suppressed, despite previous company policies protecting the publication of violent, newsworthy scenes of civilian death.

Late Tuesday, amid a 10-day bombing campaign by Israel, the Gaza Strip’s al-Ahli Hospital was rocked by an explosion that left hundreds of civilians killed and wounded. Footage of the flaming exterior of the hospital, as well as dead and wounded civilians, including children, quickly emerged on social media in the aftermath of the attack.

While the Palestinian Ministry of Health in the Hamas-run Gaza Strip blamed the explosion on an Israeli airstrike, the Israeli military later said the blast was caused by an errant rocket misfired by militants from the Gaza-based group Islamic Jihad.

While widespread electrical outages and Israel’s destruction of Gaza’s telecommunications infrastructure have made getting documentation out of the besieged territory difficult, some purported imagery of the hospital attack making its way to the internet appears to be activating the censorship tripwires of Meta, the social media giant that owns Instagram and Facebook.

Keep reading

Court Orders Facebook To Comply With Subpoena For Data On All Users That Broke “Covid-19 Misinformation” Rules

The District of Columbia (DC) Court of Appeals has rejected Meta’s appeal to quash a sweeping subpoena that demanded it hand over “documents sufficient to identify all Facebook groups, pages, and accounts that have violated Facebook’s COVID-19 misinformation policy with respect to content concerning vaccines” to the DC government.

Millions of users, many of whom made truthful statements that challenged the government’s Covid narrative, are likely to be swept up in this government data grab due to the scope of Facebook’s “Covid-19 misinformation” rules and the number of users that were impacted by them.

Facebook’s Covid-19 misinformation rules prohibited many truthful statements during the pandemic. For example, at one point claiming that “vaccines are not effective at preventing the disease they are meant to protect against” was banned — an assertion that health officials have now reluctantly admitted is true.

Even Meta CEO Mark Zuckerberg has acknowledged that Facebook censored truthful information.

And millions of people were impacted by these far-reaching censorship rules. In some quarters of the year, Facebook censored over 100 million posts for violating these rules. Some of the groups Facebook took down under these rules also had hundreds of thousands of users.

Meta had challenged the subpoena on free speech and privacy grounds, arguing that it violated the First Amendment and that a warrant was required to compel disclosure of the requested data.

Specifically, Meta argued that the subpoena violated Meta’s own First Amendment rights by “prob[ing] and penaliz[ing]” its ability to exercise editorial control over content on its platform and also violated Meta users’ First Amendment rights because it would deter them from engaging in future online discussions of controversial topics.

Additionally, Meta cited the warrant requirements in the Stored Communications Act (SCA) — a law that sought to provide Fourth Amendment-like privacy protections by statute to communications held by third party service providers.

However, the DC appeals court rejected Meta’s arguments.

The court stated that Meta had not shown the subpoena will result in its free speech or associational rights being chilled. Additionally, it said Meta users’ First Amendment rights wouldn’t be chilled because “the users who made those posts have already openly associated themselves with their espoused views by publicly posting them to Facebook.”

The court also insisted that the warrant requirement in the SCA does not apply to public posts and that the subpoena “does not require Meta to ‘unmask’ any anonymous Users.”

Furthermore, the court characterized this mass request for user data as “reasonably relevant” to the DC government’s investigation and said the subpoena is “narrowly tailored to the government’s asserted interest.”

We obtained a copy of the opinion for you here.

Keep reading

Meta deletes Al Jazeera presenter’s profile after show criticising Israel

Al Jazeera Arabic presenter Tamer Almisshal has had his Facebook profile deleted by Meta 24 hours after the programme Tip of the Iceberg aired an investigation into Meta’s censorship of Palestinian content titled The Locked Space.

The programme’s investigation, which aired on Friday, included admissions by Eric Barbing, former head of Israel’s cybersecurity apparatus, about his organisation’s effort to track Palestinian content according to criteria that included “liking” a photo of a Palestinian killed by Israeli forces.

Then the agency would approach Facebook and argue that the content should be taken down.

According to Barbing, Facebook usually complies with the requests, and Israel’s security apparatus follows up on cases, including bringing court cases if need be.

The investigation followed up on Barbing’s admissions by interviewing a number of human and digital rights experts who agreed that there was a distinct imbalance in how Palestinian content is restricted.

The programme also interviewed Julie Owono, a member of Facebook’s oversight board, who admitted there is a discrepancy in how rules are interpreted and applied to Palestinian content and added that recommendations had been sent to Facebook to correct this.

Al Jazeera has asked Facebook why Almisshal’s profile was shut down with no prior warning or explanation. It had not received a response by the time of publication.

Keep reading