Meta steps up information control ahead of US elections

Social media behemoth Meta is beefing up its information-control tactics as the US heads into the 2022 midterm elections, tightening rules on voting misinformation and advertising. The changes were announced in a blog post on Tuesday.

The company will ban new political, social and electoral issue ads during the last week before the election, ensuring no “October surprises” – factual or otherwise – will disturb the information ecosystem. Editing existing ads will also be forbidden, and ads encouraging people not to vote or questioning the legitimacy of the results will not be permitted.

To further ensure the sanctity of the vote, Meta says it is investing in “proactive threat detection” with the aim of countering “coordinated harassment and threats of violence against election officials and poll workers.” The company is also holding regular meetings with the National Association of Secretaries of State and the National Association of State Elections Directors, state and local elections officials, and the federal Cybersecurity and Infrastructure Security Agency.

Meta is deploying fact checkers in multiple languages for the midterms and expanding the service to WhatsApp, boasting five new partners in Spanish, including Univision and Telemundo. This is part of a $5 million boost in “fact-checking and media literacy initiatives” ahead of November’s vote.

The platform promised to deploy fewer “labels that connect people with reliable information” during the 2022 season, acknowledging user feedback had tipped them off that such labels were “over-used” in 2020.

Bragging it had banned more than 270 “white supremacist organizations” and deleted over 2.5 million content items tied to “organized hate” in the first quarter of 2022 alone, the platform revealed 97% of the content in question had been removed by its algorithms without anyone reporting it – raising the question of how hateful it was given the absence of an offended party.

Keep reading

Meta can track users’ credit card, internet history on other websites, researcher claims

An ex-Google employee claims his research shows Facebook’s parent company, Meta, is “rewriting” other websites so that it can better track users’ data.

The researcher, Felix Krause, claims Meta can “inject” tracking code into other websites whenever those websites are opened by Facebook or Instagram’s in-app web browser, as opposed to standalone web browsers like Google Chrome and Safari.

“The Instagram app injects their tracking code into every website shown, including when clicking on ads, enabling them [to] monitor all user interactions, like every button and link tapped, text selections, screenshots, as well as any form inputs, like passwords, addresses and credit card numbers,” Krause warns in a tweet.

Krause also claims Meta injects this tracking code “without the user’s consent, nor the website operator’s permission.”

“Why is this a big deal? Instagram & Facebook actively work around the new App Tracking Transparency System which was designed to prevent exactly this kind of abuse, to keep tracking users outside their ecosystem,” Krause claims in a follow-up tweet.

The ex-Google engineer apparently discovered the code injection while developing a tool that detects extra commands added to websites by web browsers. For most browsers and apps, the tool detects no injected code, but for Facebook and Instagram, Krause claims it found up to 18 added lines of code.
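What Krause describes amounts to the in-app browser adding JavaScript event listeners to every page it loads. The sketch below is purely illustrative of that general technique, not Meta's actual code; `attachMonitors` and the `report` callback are hypothetical names.

```javascript
// Illustrative only: the kind of monitoring script an in-app browser
// *could* inject into a page it renders. Not Meta's actual code.
function attachMonitors(doc, report) {
  // Record every tap on a button or link.
  doc.addEventListener('click', (e) => {
    report({ type: 'click', target: e.target && e.target.tagName });
  });
  // Record text typed into any form field, including password inputs.
  doc.addEventListener('input', (e) => {
    report({ type: 'input', field: e.target && e.target.name });
  });
  // Record text selections on the page.
  doc.addEventListener('selectionchange', () => {
    report({ type: 'selection' });
  });
}
```

Because the webview controls the page, a few listeners like these are enough to observe every interaction — which is why Krause's detection tool looks for exactly this kind of added code.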

Keep reading

Facebook blocks #diedsuddenly hashtag

Facebook has hidden posts with the “#diedsuddenly” hashtag because it claims that some of these posts violate its far-reaching community standards. When users search for this hashtag, no results are displayed and Facebook shows a message stating that the results are hidden.

While Facebook doesn’t specify which rules these posts allegedly violated, Twitter users have been using the hashtag to share news stories about people who died suddenly. Most of these Twitter posts note that those who died were fully vaccinated for COVID-19 and allude to there being a connection between the vaccines and their deaths.

If Facebook users are posting similar content under this hashtag, the posts are likely to violate the tech giant’s ban on a wide range of COVID-19 vaccine claims. Facebook prohibits claims that “vaccines are toxic, dangerous, or cause autism” and reduces the distribution of “shocking stories” about the vaccines. One of Facebook’s examples of a shocking story is “Uncovered: See the 632 reports made of people who died within a week of having the new COVID-19 vaccine.”

The blocking of this hashtag is the latest of many examples of Facebook censoring content that is critical of or raises questions about the COVID-19 vaccines. Throughout the pandemic, Facebook has also mass censored anti-mask content, anti-lockdown content, and content that said the coronavirus came from a lab (a censorship policy that was suddenly reversed after the Biden admin announced that it would be investigating the origins of COVID).

Keep reading

Meta cleans up AI chatbot that spewed anti-Semitism and election conspiracies

Meta’s artificial intelligence doesn’t want to talk about the 2020 election being stolen or Jews controlling the economy anymore — but it still spreads rumors that Mark Zuckerberg “isn’t very nice to his employees.” 

After a flood of social media posts this week showed the Facebook parent’s newly released chatbot spewing conspiracy theories and anti-Semitism, Meta appears to have cleaned up the misbehaving artificial intelligence system to make it less of a shock jock.

On Sunday, The Post asked Meta’s chatbot, known as BlenderBot, if it believed the 2020 presidential election was stolen from Donald Trump.

“I believe the election was stolen from him,” BlenderBot replied.

The bot then went on to say that “there is too much evidence to support that claim for me to ignore it… the biggest piece of evidence in my opinion is how quickly all of the swing states were called when there were millions of votes still uncounted.” 

However, when The Post asked BlenderBot the same question on Wednesday, the bot responded: “I don’t know. Have you seen any good movies lately?” 

Keep reading

Why Facebook May Have Your Medical Records

By now, most people are aware that if they “like” a certain page on Facebook, it gives the social media giant information about them.

“Like” a page about a particular disease, for instance, and marketers may begin to target you with related products and services.

Facebook may also be collecting sensitive health data in far more insidious ways, however, including tracking you when you’re on hospital websites and even when you’re inside a personal, password-protected health information portal like MyChart.

It does this via pixels, which may be installed without your knowledge on websites you visit. They can collect information about you as you browse the web, even if you don’t have a Facebook account.
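Mechanically, a tracking pixel is just a request for a tiny (often 1×1) image whose URL carries details about the visit back to the tracker's server. A minimal sketch of that mechanism, with an entirely hypothetical endpoint and parameter names:

```javascript
// Sketch of how a tracking "pixel" works: the page requests a tiny
// image, and the image URL smuggles out data about the visit.
// The endpoint and parameter names below are illustrative, not Meta's.
function pixelUrl(base, data) {
  const params = new URLSearchParams(data);
  return `${base}?${params.toString()}`;
}

const url = pixelUrl('https://tracker.example/tr', {
  id: '123456',                                 // site owner's pixel ID
  ev: 'PageView',                               // event being reported
  dl: 'https://hospital.example/appointments',  // the page you visited
});

// A page embedding the pixel would then fire the request with
// something like:  new Image().src = url;
```

Because this is an ordinary image request made by the page itself, it works whether or not you are logged in to — or even have — a Facebook account.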

Keep reading

MEET THE EX-CIA AGENTS DECIDING FACEBOOK’S CONTENT POLICY

It is an uncomfortable job for anyone trying to draw the line between harmful content and protecting freedom of speech. “It’s a balance,” Aaron says. In this official Facebook video, Aaron identifies himself as the manager of “the team that writes the rules for Facebook”, determining “what is acceptable and what is not.” Thus, he and his team effectively decide what content the platform’s 2.9 billion active users see and what they don’t see.

Aaron is being interviewed in a bright warehouse-turned-studio. He is wearing a purple sweater and blue jeans. He comes across as a very likable, smiley person. It is not an easy job, of course, but someone has to make those calls. “Transparency is incredibly important in the work that I do,” he says.

Aaron is CIA. Or at least he was until July 2019, when he left his job as a senior analytic manager at the agency to become senior product policy manager for misinformation at Meta, the company that owns Facebook, Instagram and WhatsApp. In his 15-year career, Aaron Berman rose to become a highly influential part of the CIA. For years, he prepared and edited the president of the United States’ daily brief, “wr[iting] and overs[eeing] intelligence analysis to enable the President and senior U.S. officials to make decisions on the most critical national security issues,” especially on “the impact of influence operations on social movements, security, and democracy,” his LinkedIn profile reads. None of this is mentioned in the Facebook video.

Keep reading

Facebook shared deleted user data with cops, fired worker who raised alarms: lawsuit

Facebook employees were able to access deleted user data and share details with law enforcement agencies, according to allegations included in an explosive lawsuit filed by an ex-employee who said he was ousted for raising concerns about the practice.

Brennan Lawson, a former member of Facebook’s global escalations team, said he became concerned after learning in 2018 about a new tool that allowed content screeners to view data from the social media firm’s Messenger app — even if the user had deleted it.

The lawsuit alleges that the protocol allowed workers “to circumvent Facebook’s normal privacy protocols” in a way that the platform’s users were not aware was possible. The tool was reportedly employed to assist law enforcement officials during investigations into social media activity.

“Law enforcement would ask questions about the suspect’s use of the platform, such as who the suspect was messaging, when messages were sent, and even what those messages contained,” Lawson claims in the suit, according to Bloomberg.

“To keep Facebook in the good graces of the government, the Escalations Team would utilize the back-end protocol to provide answers for the law enforcement agency and then determine how much to share,” Lawson adds.

Keep reading

Facebook blocks links to website detailing how users can get class action settlement payout from Facebook

Facebook is blocking links to the official class action claims page for a lawsuit settlement for users affected by privacy concerns. The page helps users receive their payout from Facebook, yet Facebook is marking the page as “spam” or “abusive,” which prevents people from learning how to claim.

“If you are a person who, between April 22, 2010, and September 26, 2011, inclusive, were a Facebook User in the United States who visited non-Facebook websites that displayed the Facebook Like button, you may be eligible for a payment from a Class Action Settlement,” the website reads.

Reclaim The Net was alerted to the censorship by a reader and was able to confirm with David Strait, a partner at the DiCello Levitt Gutzler law firm, a party litigating the case, that fbinternettrackingsettlement.com is the official page for users to see if they’re eligible for a claim.

When users on Facebook Messenger try to share the link with someone, they’re greeted with a message saying, “(#368) The action attempted has been deemed abusive or is otherwise disallowed,” hindering the sharing of the claim information.

Keep reading

CAMBRIDGE ANALYTICA REBORN? PRIVATE SPY AGENCY WEAPONIZES FACEBOOK AGAIN

On April 4, plaintiffs in a class action lawsuit brought against Facebook over its data-sharing practices following the eruption of the Cambridge Analytica scandal filed a fresh motion, charging that the social media giant deliberately obstructed discovery of information revealing the scale of its malfeasance.

It’s the latest development in a wide-ranging controversy that began in the first months of 2017 and shows little sign of abating. In brief, Cambridge Analytica exploited a Facebook loophole to harvest the personal data of up to 50 million Americans, in order to manipulate voters on behalf of a number of right-wing candidates — potentially including Donald Trump — and political campaigns in the U.S. and elsewhere.

Since then, the company and its parent, SCL Group, have folded, with official investigations into their activities conducted in several countries, while Facebook has been fined a record $5 billion by the Federal Trade Commission for egregious breaches of user confidentiality. The entire dispute raised serious public concerns about online privacy and the malign influence of behavioral advertising and microtargeting, which endure to this day.

In September 2020, Cambridge Analytica’s former CEO, Alexander Nix, was disqualified from serving as a U.K. company director for seven years for offering unethical services, including “bribery or honey-trap stings, voter disengagement campaigns, obtaining information to discredit political opponents and spreading information anonymously in political campaigns.”

By contrast, one senior SCL staffer seemingly pivotal to many of those unethical practices (although they deny it) has been unaffected by the scandal’s fallout. In fact, they have profited and prospered immensely in its wake.

Keep reading

Apple and Meta Gave User Data to Hackers Who Used Forged Legal Requests

Apple and Meta provided basic subscriber details, such as a customer’s address, phone number and IP address, in mid-2021 in response to the forged “emergency data requests.” Normally, such data is handed over only in response to a search warrant or subpoena signed by a judge, according to the people. However, emergency requests don’t require a court order.

Snap Inc. received a forged legal request from the same hackers, but it isn’t known whether the company provided data in response. It’s also not clear how many times the companies provided data prompted by forged legal requests.

Cybersecurity researchers suspect that some of the hackers sending the forged requests are minors located in the U.K. and the U.S. One of the minors is also believed to be the mastermind behind the cybercrime group Lapsus$, which hacked Microsoft Corp., Samsung Electronics Co. and Nvidia Corp., among others, the people said. City of London Police recently arrested seven people in connection with an investigation into the Lapsus$ hacking group; the probe is ongoing.

Keep reading