Meta’s AI Watermarking Plan is Flimsy, At Best

In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, this approach will not solve the problem of deepfakes on social media this year. Indeed, this new effort will do very little to keep AI-generated material from polluting the election environment.

The most obvious weakness is that Meta’s system will only work if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Unsecured “open-source” generative AI tools mostly don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about two seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.
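The fragility is easy to demonstrate. The sketch below is an illustration only, using Pillow: a PNG text chunk stands in for an embedded provenance marker (real C2PA manifests live in JUMBF/XMP containers and IPTC fields, not PNG text chunks), but the failure mode is the same in principle: metadata travels with the file, and simply re-encoding the pixels produces a clean copy with no trace of it.

```python
# Illustration only: a PNG text chunk stands in for an embedded provenance
# marker. The key names and values here are made up for the demo.
import io
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create an image and embed a provenance-style text chunk.
original = Image.new("RGB", (64, 64), color="gray")
meta = PngInfo()
meta.add_text("provenance", "ai-generated; tool=example-model")

marked = io.BytesIO()
original.save(marked, format="PNG", pnginfo=meta)
marked.seek(0)

# 2. The marker survives as long as the file is copied byte-for-byte.
reloaded = Image.open(marked)
print(reloaded.text)  # {'provenance': 'ai-generated; tool=example-model'}

# 3. But re-saving just the pixels (a screenshot, crop, or format
#    conversion has the same effect) drops the metadata entirely.
stripped = io.BytesIO()
reloaded.save(stripped, format="PNG")
stripped.seek(0)
print(Image.open(stripped).text)  # {}
```

This is why provenance schemes that rely on cooperative metadata are only as strong as the weakest step in an image’s journey from generator to feed.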

Keep reading

Meta Launches Real-Time Content Censorship Unit for 2024 Elections

When Facebook (Meta) wants to safeguard its “right to censor,” the company presents itself as basically just another private company out there minding its own business.

But when election campaigns get into full swing, especially in the US but also in the EU, the way Meta reacts, announcing all manner of new policies and new units to deal with election-related information, shows that it could have a massive influence on their outcome.

And while misinformation (mostly arbitrarily “defined”) is repeatedly called the scourge of democracy, there is another scourge about which there is no doubt: censorship, sometimes based on excuses as flimsy as somebody’s subjective opinion of, for example, “potential threats.”

None of this seems to matter to Meta, which has just announced how it is “preparing” for the elections in the EU this summer.

There’s a slew of news on this front: Meta will have what it calls an Elections Operations Center whose job will be identifying “potential threats.” And then real-time “mitigation” (i.e., censorship) will follow.

Oh happy news: despite all the controversies around “fact-checkers,” Meta has announced it will continue to rely on them, and even boasts of having “the largest fact-checking network of any platform.”

Keep reading

Senators Obliterate Zuckerberg For ‘Helping’ Pedos Find Child Sex Abuse Content

During a remarkable Senate hearing Wednesday, Republican Senators wiped the floor with Meta owner Mark Zuckerberg, exposing how his company has acted woefully when it comes to child sexual abuse material and other harmful content on its platforms that has directly led to the deaths of children.

By the end of the hearing Zuckerberg was utterly humiliated, forced to stand and face the families of victims who have suffered because of his failures, and told that he should be sued into oblivion for gross dereliction of duty.

During the hearing titled ‘Big Tech and the Online Child Sexual Exploitation Crisis’, Senator Ted Cruz essentially accused Zuckerberg of helping pedophiles gain access to child porn on his platforms.

“Every parent in America is terrified about the garbage that is directed at our kids,” Cruz told Zuckerberg, adding “the phones they have are portals to predators…and each of your companies could do a lot more to prevent it.”

Keep reading

The Covid vaccine gave me side effects that ruined my life, but Facebook keeps censoring me from telling my friends about what happened

A woman who says she suffered chronic health complications after taking the AstraZeneca vaccine claims to have been censored from sharing her story on Facebook.

Caroline Pover, 52, received the jab in March 2021 and within nine hours, experienced convulsions, shivering, breathing difficulties and low blood pressure.

Ms Pover, of Cirencester, Gloucestershire, says she was hospitalised when her condition escalated to ‘stroke-like’ symptoms, in addition to exhaustion, breathing difficulties, a racing heart and migraines.

Her story was shared in a national newspaper in March last year as she and 800 other victims struggled to claim under the Government’s Vaccine Damage Payment Scheme (VDPS).

But after sharing the link on her Facebook feed at the start of this year, Ms Pover says the website put a warning notice on her account.

Ms Pover, herself a freelance journalist and entrepreneur, said: ‘My posts about what was happening to me started having FB “notes” appearing underneath them about vaccination.

‘A group page I was an admin on was shut down completely by Facebook in the summer of 2021.

‘When I posted the Daily Express article, which did an excellent job of not discussing anything pro or anti… I received a warning and the post was hidden.

‘It’s a ridiculous situation for vaccine-injured people, who have a right to information.

‘If this was an online support group relating to cancer or another type of serious condition, we’d be outraged at the thought of it being censored and we’d be very sensitive to people having to navigate a very complicated health situation.’

Keep reading

Meta: Systemic Censorship of Palestine Content

Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Keep reading

Facebook Approved an Israeli Ad Calling for Assassination of Pro-Palestine Activist

A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Force and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through Facebook’s automated, machine-learning-based filtering process, which allows its global advertising business to operate at a rapid clip.

Keep reading

Facebook and Instagram content enabled child sexual abuse, trafficking: New Mexico lawsuit

Facebook and Instagram created “prime locations” for sexual predators that enabled child sexual abuse, solicitation, and trafficking, New Mexico’s attorney general alleged in a civil suit filed Wednesday against Meta and CEO Mark Zuckerberg.

The suit was brought after an “undercover investigation” allegedly revealed myriad instances of sexually explicit content being served to minors, child sexual coercion, or the sale of child sexual abuse material, or CSAM, New Mexico attorney general Raúl Torrez said in a press release.

The suit alleges that “certain child exploitative content” is ten times “more prevalent” on Facebook and Instagram than on the pornography site PornHub and the adult content platform OnlyFans, according to the release.

“Child exploitation is a horrific crime and online predators are determined criminals,” Meta said in a statement to CNBC. A spokesperson said that the company deploys “sophisticated technology,” hires child safety experts, reports content to the National Center for Missing and Exploited Children, and shares “information and tools with other companies and law enforcement, including state attorneys general, to help root out predators.”

The New Mexico suit follows coordinated legal actions against Meta by 42 other attorneys general in October. Those actions alleged that Facebook and Instagram directly targeted and were addictive to children and teens.

New Mexico’s suit, by contrast, alleges Meta and Zuckerberg violated the state’s Unfair Practice Act. The four-count suit alleges that the company and Zuckerberg engaged in “unfair trade practices” by facilitating the distribution of CSAM and the trafficking of minors, and undermined the health and safety of New Mexican children.

The lawsuit argues that Meta’s algorithms promote sex and exploitation content to users and that Facebook and Instagram lack “effective” age verification. The suit also alleges that the company failed to identify child sexual exploitation “networks” and to fully prevent users it had suspended for those violations from rejoining the platform using new accounts.

“In one month alone, we disabled more than half a million accounts for violating our child safety policies,” a Meta spokesperson said in a statement.

Keep reading

Meta sues FTC, hoping to block ban on monetizing kids’ Facebook data

Meta sued the Federal Trade Commission yesterday in a lawsuit that challenges the FTC’s authority to impose new privacy obligations on the social media firm.

The complaint stems from the FTC’s May 2023 allegation that Meta-owned Facebook violated a 2020 privacy settlement and the Children’s Online Privacy Protection Act. The FTC proposed changes to the 2020 privacy order that would, among other things, prohibit Facebook from monetizing data it collects from users under 18.

Meta’s lawsuit against the FTC challenges what it calls “the structurally unconstitutional authority exercised by the FTC through its Commissioners in an administrative reopening proceeding against Meta.” It was filed against the FTC, Chair Lina Khan, and other commissioners in US District Court for the District of Columbia. Meta is seeking a preliminary injunction to stop the FTC proceeding pending resolution of the lawsuit.

Meta argues that in the FTC’s administrative proceedings, “the Commission has a dual role as prosecutor and judge in violation of the Due Process Clause.” Meta asked the court to “declare that certain fundamental aspects of the Commission’s structure violate the US Constitution, and that these violations render unlawful the FTC Proceeding against Meta.”

Meta says it should have a right to a trial by jury and that “Congress unconstitutionally has delegated to the FTC the power to assign disputes to administrative adjudication rather than litigating them before an Article III court.” The FTC should not be allowed to “unilaterally modify the terms” of the 2020 settlement, Meta said.

The FTC action “would dictate how and when Meta can design its products,” the lawsuit said.

Keep reading

Microsoft and Meta Detail Plans To Combat “Election Disinformation” Which Includes Meme Stamp-Style Watermarks and Reliance on “Fact Checkers”

And so it begins. In fact, it hardly ever stops – another election cycle is well on its way in the US. But what has emerged over these last few years, and what continues to crop up the closer election day gets, is the role of the most influential social platforms and tech companies.

Pressure on them is sometimes public, but mostly not, as the Twitter Files have taught us; and it is with this in mind that various announcements about combating “election disinformation” coming from Big Tech should be viewed.

Although, one can never discount the possibility that some – say, Microsoft – are doing it quite voluntarily. That company has now come out with what it calls “new steps to protect elections,” and is framing this concern for election integrity more broadly than just the goings-on in the US.

From the EU to India and many, many places in between, elections will be held over the next year or so, says Microsoft; however, these democratic processes are in peril.

“While voters exercise this right, another force is also at work to influence and possibly interfere with the outcomes of these consequential contests,” said a blog post co-authored by Microsoft Vice Chair and President Brad Smith.

By “another force,” could Smith possibly mean Big Tech? No. It’s “multiple authoritarian nation states” he’s talking about, and Microsoft’s “Election Protection Commitments” seek to counter that threat with a 5-step plan to be deployed in the US and in other places where “critical” elections are to be held.

Why some elections are more “critical” than others, and what exactly Microsoft is seeking to protect – it’s all very unclear.

Keep reading

Nashville Mayor’s Office, MSM Flips Out After Trans Shooter Manifesto Leaks; Facebook Censors

As the Epoch Times notes:

Metro Nashville Mayor Freddie O’Connell said in a statement on Nov. 6 that he had directed the city’s legal director to initiate an investigation into the leak, but he didn’t address the veracity of the documents. Other agencies were unable to verify the authenticity of the documents when asked to do so by The Epoch Times on Nov. 6.

“I have directed Wally Dietz, Metro’s law director, to initiate an investigation into how these images could have been released,” Mr. O’Connell said in the statement. “That investigation may involve local, state, and federal authorities. I am deeply concerned with the safety, security, and well-being of the Covenant families and all Nashvillians who are grieving.”

A spokeswoman for MNPD, reached by phone on Nov. 6, said there was “no information” the department could provide at this time. So far, the Tennessee Bureau of Investigation has offered no confirmation of the documents, according to a spokesman for the agency.

. . .

Earlier Monday, Alex Jones claimed that the Biden DOJ suppressed the document.

Keep reading