Former FBI and Twitter Lawyer Jim Baker Joins Election Task Force Advocating for Social Media Censorship

From one presidential election to another, to Covid, and on to yet another election. That is how members of certain, mostly under-the-radar power centers in the US have been moving over recent decades.

From time to time, however, circumstances demand that they show their faces. One is James “Jim” Baker, a former FBI lawyer whose “censorship portfolio” includes the infamous endorsement of the Hunter Biden laptop story’s suppression – while he was on Twitter’s payroll.

And while there, Baker also wanted to know why President Trump was not censored for a post saying, “Don’t fear Covid.”

Well, Baker also seems to be staying true to himself – unfortunately, his “truth” appears to be never missing a chance to back the wrong cause (the “RussiaGate” saga being a case in point). Right now, he has joined something called “the National Task Force on Election Crises.”

It’s a crisis, alright. A crisis of online censorship that can, and does, produce multiple “election” crises and a rapid erosion of trust in legacy media and political institutions.

The group’s parent operation is the Protect Democracy Project.

There’s nothing particularly innovative about the group’s lobbying talking points: remove or downgrade “election misinformation,” and make sure social and news media remove and label content (as false) as quickly as possible (time is clearly of the essence at this point…)

Keep reading

Advertiser Alliance Members Are Called To Testify After Allegations of Efforts To “Demonetize, and Censor Disfavored Viewpoints”

The Global Alliance for Responsible Media (GARM) is back in the headlines big time – what with the recent decision of X to rejoin the group, and now, as anticipated, the US Congress is stepping up its attempts to shed more light on what GARM actually does, censorship-wise.

Once again it is House Judiciary Committee Chairman Jim Jordan who is trying to hold Big Tech – and in this case, “the advertising industrial complex” as it were – accountable.

GARM is a World Economic Forum (WEF)-affiliated initiative launched by the World Federation of Advertisers (WFA); the latter, by its own admission, represents more than 150 of the biggest brands and over 60 advertiser associations around the world.

“Brand safety” is what the group says it is offering to these clients. But Jordan, and many conservatives, media outlets, and businesses – allied or perceived to be allied with them – have strong suspicions that GARM can be, and is being, used as yet another avenue of censorship and suppression – this time via actions that result in the demonetization or boycott of those who hold “disfavored views.”

Concerning GARM, Jordan started fighting what supporters must see as “the good fight” last year (first by requesting information and then by issuing a subpoena once that was ignored).

Then, this March, the Committee sent letters to five members of the GARM Steering Team, including Unilever and GroupM (a media investment group), asking for access to documents and communications that might prove the overall anti-conservative bias exercised by the initiative.

We obtained a copy of the letter for you here.

Keep reading

The Supreme Court Just Opened the Door to a New Orwellian Censorship Regime

The Supreme Court’s decision in a recent case challenging the Biden administration’s censorship efforts unleashed renewed threats to Americans’ ability to speak and listen freely online while effectively putting a legal remedy out of reach ahead of the 2024 election, legal experts told the Daily Caller News Foundation.

Last year on Independence Day, U.S. District Court Judge Terry A. Doughty issued the initial injunction blocking a range of government agencies from communicating with social media companies to suppress speech, calling the government’s actions “Orwellian.” But one year later, with the Fifth Circuit’s narrower injunction now lifted by the Supreme Court in Murthy v. Missouri, officials have free rein to again employ the same tactics.

“It’s basically a roadmap for government actors, not just the federal government, but also state and local government actors, to reach out to social media companies and pressure them into censoring this disfavored speech,” Center for American Liberty associate counsel Eric Sell told the DCNF.

The Supreme Court held that plaintiffs in the case, who included two states and five individuals, did not have standing to seek an injunction against the government.

In her majority opinion, Justice Amy Coney Barrett said the plaintiffs failed “to link their past social-media restrictions to the defendants’ communications with the platforms.” She also noted that platforms had “independent incentives to moderate content,” making it difficult for the plaintiffs to establish they were harmed directly as a result of the government’s requests.

Justice Samuel Alito worried in his dissent that the Supreme Court’s ruling, though it did not reach the merits of the issue, would send the message that coercive government campaigns against certain speech can run unchecked if “carried out with enough sophistication.”

Keep reading

Oxford Nanopore: The Internet of Living Things is closer than you think

The Internet of Living Things (“IoLT”) is a concept that connects living organisms, such as humans, animals and plants, to the Internet, enabling the exchange of data and information. This concept is an extension of the Internet of Things (“IoT”), which focuses on connecting devices and objects.

In short, the IoLT enables real-time monitoring of biological functions, such as vital signs, genetic data and environmental factors.  The collected data is analysed to provide insights into the biological state of the organism, enabling early detection of diseases and personalised healthcare. The data is transmitted to the cloud, where it can be accessed and analysed by healthcare professionals, researchers and other people. The biological state of an organism becomes an extension of the internet, enabling the creation of new intelligence about natural systems.

Examples of how the IoLT will collect data are:

  • Wearable sensors, such as fitness trackers, which can monitor vital signs and transmit data to the cloud for analysis.
  • Smart contact lenses with embedded sensors that can monitor glucose levels and transmit data to the cloud for diabetes management.
  • Portable genomic sequencers that can sequence DNA and transmit data to the cloud for genetic analysis.
  • Internet-enabled biocyber interfaces that can connect living insects to the internet, enabling control of their behaviour and communication with the environment.
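
To make the data flow described above concrete, here is a minimal, purely illustrative sketch of the pattern: a wearable-style sensor reading is packaged and sent to a cloud endpoint for analysis. The endpoint URL, field names, and payload format are assumptions made for illustration only; they do not come from any IoLT vendor.

    # Minimal illustrative sketch. Assumptions: the endpoint URL, field names,
    # and JSON payload format below are hypothetical, not taken from the article.
    import json
    import time
    import urllib.request

    CLOUD_ENDPOINT = "https://example-iolt-cloud.invalid/v1/vitals"  # placeholder URL

    def read_vitals() -> dict:
        # Stand-in for a real sensor driver: returns one sample of vital signs.
        return {"timestamp": time.time(), "heart_rate_bpm": 72, "spo2_percent": 98}

    def upload_sample(sample: dict) -> None:
        # Serialize the sample and POST it to the cloud for storage and analysis.
        request = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps(sample).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(request)  # raises on network or HTTP errors

    if __name__ == "__main__":
        upload_sample(read_vitals())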

The topic of this article is portable genomic sequencers; in particular, Oxford Nanopore Technologies devices.

In 2015, nine years ago, when Clive Brown, Chief Technology Officer of Oxford Nanopore Technologies, was asked what the likelihood was of portable DNA sequencers becoming reality, he answered: “It is already a reality. The technology is now in the optimisation phase and will only get better. If you are asking how long before it reaches a clinic – then I think that is a different question, but it will be in many other non-clinical environments first.”

The DNA sequencer he was referring to was Oxford Nanopore’s MinION.  Any living thing, or system of living things, can be connected to the internet via the MinION or by any similar real-time DNA sensing devices, Brown said.

“Healthcare is just one application [ ]; equally, water sources, food supplies, hospital air and many other systems can be frequently sampled and sequenced – also allowing their state to be trended, tracked and predicted,” Brown said.

A few years later, in 2019, an article published by International Defence, Security and Technology (“IDST”) described MinION as small as a USB stick and easy to use. “Oxford Nanopore has designed an intelligent cloud lab, Metrichor, to be used for genomics data storage in conjunction with smartphone apps that interpret the meaning of DNA sequences. Researchers around the world now use pocket-size genomic sequencers to rapidly detect resistant pathogenic strains in hospitals, and diagnose infectious agents in food supply and aboard spaceships,” IDST wrote.

In early 2020, Oxford Nanopore’s technology was put to use in the surveillance of the coronavirus outbreak.

Keep reading

Antony Blinken Reveals Government’s AI Plan To Censor Free Speech

U.S. Secretary of State Antony Blinken admitted last week that the State Department is preparing to use artificial intelligence to “combat disinformation,” amidst a massive government-wide AI rollout that will involve the cooperation of Big Tech and other private-sector partners.

At a speaking engagement streamed last week with the State Department’s chief data and AI officer, Matthew Graviss, Blinken gushed about the “extraordinary potential” and “extraordinary benefit” AI has on our society, and “how AI could be used to accelerate the Sustainable Development Goals which are, for the most part, stalled.”

He was referring to the United Nations Agenda 2030 Sustainable Development Goals, which represent a globalist blueprint for a one-world totalitarian system. These goals include the Gaia-worshipping climate agenda, along with new restrictions on free speech and freedom of movement, wealth transfers from rich to poor countries, and the digitization of humanity. Now Blinken is saying these goals could be jumpstarted by employing advanced artificial intelligence technologies.

Listen to Blinken, in the video below, openly describe how the government will use AI to clamp down on the free speech of citizens. (Fast-forward to the 3-minute mark and watch through the 7:07 mark.)

Keep reading

Google News Ranks AI-Generated Spam as Top Results Above Actual Stories

Despite recent algorithm changes aimed at improving search quality, Google News continues to face challenges with AI-generated spam content appearing in its top results, frequently outranking original reporting.

Wired reports that as artificial intelligence becomes more commonplace across the internet, Google News finds itself grappling with a persistent issue: the proliferation of AI-generated spam content in its search results. This problem persists even after the search giant implemented significant algorithm updates earlier this year to combat unhelpful and unoriginal content.

The issue came to light when a journalist, while searching for information on Adobe’s AI policies, stumbled upon a plagiarized article at the top of Google News results. The article, published on a website called Syrus #Blog, was a near-exact copy of a piece originally published by Wired, with only minor alterations in phrasing. Further investigation revealed that this was not an isolated incident, as similar plagiarized content was found in multiple languages, affecting articles from various reputable news sources.

Elizabeth Tucker, a director of product management at Google, had announced in April that the company had completed a major rollout of changes to its search engine ranking system. These updates were intended to reduce low-quality, unoriginal content in search results by 45 percent. However, the persistence of AI-generated spam in Google News suggests that the problem is far from solved.

Lily Ray, senior director of search engine optimization at Amsive, describes this as a “rampant problem” on Google. She notes that many clients have reported instances of their articles being rewritten and republished using AI tools. This practice not only undermines the work of original content creators but also poses a significant challenge to Google’s efforts to provide high-quality, reliable information to its users.

The creators of Syrus #Blog, when contacted, claimed that their content creation process involves AI tools that “analyze and synthesize information from various sources while always respecting intellectual property.” They pointed to a single hyperlink at the bottom of their articles as sufficient attribution. However, SEO experts and content creators argue that this practice falls far short of ethical standards and violates Google’s spam policies.

Keep reading

OnlyFans Contains Child Sexual Abuse Material Of Toddlers And Teens, According To Reports

Porn-driven website OnlyFans has been found to contain explicit content of minors despite its claims to build the “safest digital media platform in the world,” according to a new investigative report.

Reuters has exposed OnlyFans’ involvement in child sexual abuse material (CSAM) after documenting 30 complaints of CSAM in U.S. police and court records between December 2019 and June 2024. According to the news agency, these complaints “cited more than 200 explicit videos and images of kids, including some adults having oral sex with toddlers.” In one case, a minor stayed on OnlyFans for over a year.

After a 16-year-old girl from Florida went missing in 2023, her parents discovered nude photos and videos of herself that she had sent to 22-year-old Ethan Diaz, who appeared to have abducted her. Police found that Diaz posted explicit content of the girl on OnlyFans. One video showed the teen penetrating herself and was advertised for $20. The caption read, “Watch me get super wild.” Diaz was charged with human trafficking and other offenses.

In the 30 cases reviewed by Reuters, over half resulted in arrests and criminal convictions. Most of the adults involved in CSAM were accused of preying on and exploiting minors to create pornographic material and profit from it. In other cases, minors somehow bypassed OnlyFans’ vetting to sell their own explicit content. And “in the case involving toddlers, a man used the site to send another man more than 100 files featuring the abuse of children of all ages,” Reuters added.

Despite these findings, OnlyFans asserts that it is strictly for adults, with measures to monitor users, vet content, and remove and report CSAM. “We know the age and identity of everyone on our platform,” said CEO Keily Blair in a 2023 speech. “No children allowed, nobody under 18 on the platform.”

Is it possible that OnlyFans has more CSAM than we think? The site’s paywall model makes it more difficult for police to detect CSAM. In fact, the National Center for Missing & Exploited Children (NCMEC) only received access to OnlyFans in late 2023. And, per Ars Technica, the organization seemingly can’t scan the entire platform at once, telling Reuters that its access was “limited” exclusively “to OnlyFans accounts reported to its CyberTipline or connected to a missing child case.”

In 2023, OnlyFans made 347 CyberTipline reports “out of hundreds of millions of posts,” a “testament to the rigorous safety controls OnlyFans has in place,” their spokesperson said. They explained that most of the suspected material “does not turn out to be CSAM” or “are duplicate images or videos.” Still, specialists in CSAM informed Reuters the actual amount of CSAM is difficult to verify due to the individual paywalls, with OnlyFans currently boasting over 3 million creators. Trey Amick, director of forensic consultants at Magnet Forensics Inc., told Reuters, “It’s not just one paywall. It’s a paywall for each and every contributor.”

Keep reading

NewsGuard Co-Founder Advocates Banning Anonymous Social Media Posts, Enabling Lawsuits Against Tech Firms for “False” Content

NewsGuard co-founder and co-CEO Steve Brill has published a book, “The Death of Truth” – but he’s not taking any responsibility. On the contrary.

Namely, Brill’s “apolitical (misinformation) rating system for news sites,” as NewsGuard is promoted to customers, is often blasted as yet another tool to suppress online speech – and is currently being investigated by Congress for possible First Amendment violations.

But corporate media sing his praises, presenting him as a “media maven.”

A censorship maven more like it, critics would say. And while getting his book promoted, Brill managed to add his name to the steadily growing list of governments, NGOs, and associated figures who are attacking online anonymity.

Keep reading

AI Garbage Pollutes Internet: 42% of Overall Web Traffic Now Generated by Bots

A new report from Akamai Technologies has unveiled alarming statistics about the prevalence and impact of web scraper bots on businesses, particularly in the e-commerce sector. An astounding 42 percent of overall web traffic is now generated by bots, according to the cloud computing giant.

The report reveals that as the internet becomes increasingly dominated by automated traffic, businesses are facing a growing threat from malicious bots. Akamai Technologies, a leading cloud company, has released its latest State of the Internet (SOTI) report, titled “Scraping Away Your Bottom Line: How Web Scrapers Impact e-commerce,” shedding light on the pervasive issue of web scraping bots and their detrimental effects on online businesses.

The report’s findings are striking: bots now account for a staggering 42 percent of overall web traffic, with 65 percent of these bots classified as malicious. This surge in bot activity is particularly concerning for the e-commerce sector, which relies heavily on revenue-generating web applications and is therefore most vulnerable to high-risk bot traffic.

Patrick Sullivan, CTO of Security Strategy at Akamai, emphasized the seriousness of the issue, stating, “Bots continue to present massive challenges resulting in multiple pain points for app and API owners.” He highlighted the various ways in which scraper bots can harm businesses, including data theft and brand impersonation.

The report identifies several key ways in which scraper bots are being weaponized against businesses. These include competitive intelligence gathering, inventory hoarding, and the creation of imposter sites. Such activities not only impact a company’s bottom line but also degrade the customer experience.

One of the most concerning developments highlighted in the report is the rise of AI botnets. These advanced bots have the capability to discover and scrape unstructured data and content, even when it’s presented in inconsistent formats or locations. Moreover, they can leverage business intelligence to enhance their decision-making processes, making them increasingly difficult to detect and mitigate.

Keep reading

Windows 11’s Sneaky OneDrive Sync

Those still using Microsoft Windows (now in version 11) as their operating system in 2024 have a lot of experience being left out of the “decision-making process” concerning their own computer and their own data.

This is what closed-source, proprietary software gets you (in addition to a lack of innovation and overall technical quality); but there are even more ways to avoid transparency and, frankly, disrespect paying customers.

And one is introducing questionable features without even announcing them.

OneDrive – Microsoft’s cloud service – is also available to back up Windows folders like Desktop, Documents, Music, Pictures, Videos… and as it turns out, users don’t even have to agree to this – or even know it’s happening.

Namely, if you are installing Windows 11 (signed into a Microsoft account, as Microsoft prefers), the default is now to upload content from those folders to Microsoft’s cloud. And Microsoft didn’t bother informing its users about this change from the previous installation process, Neowin reported.

There was no “informing” here – not with a press release, and not even with prompts during installation and setup.

The backup, i.e., the syncing of the files, is already underway or complete as soon as a fresh install is finished, and users are reportedly only (slowly) becoming aware of the change because of new visual indicators on their desktop shortcuts and folder icons (showing that the backup is in progress or done).

Windows users can still be grateful there are several ways to deal with the situation. One is to go to the OneDrive settings, and then go through several steps (Sync and Backup > Manage Backup…) and uncheck whatever folders should not sync with the Microsoft cloud service.

(But there are also older versions of OneDrive, where the path is Manage Backup > Stop Backup.)
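
For those who prefer to switch this off machine-wide before it ever starts, here is a minimal, unofficial sketch in Python. It assumes the OneDrive group policy registry path and the “KFMBlockOptIn” value described in Microsoft’s published OneDrive policy documentation for blocking “Known Folder Move” (the folder backup feature), must be run from an elevated prompt on Windows, and is offered as an illustration rather than a supported procedure.

    # Minimal sketch (Windows only, run as Administrator). Assumption: the policy
    # path and the "KFMBlockOptIn" value below follow Microsoft's published
    # OneDrive policy documentation for blocking Known Folder Move (folder backup).
    import winreg

    POLICY_PATH = r"SOFTWARE\Policies\Microsoft\OneDrive"

    def block_known_folder_move() -> None:
        # Create (or open) the OneDrive policy key under HKLM and set the flag.
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "KFMBlockOptIn", 0, winreg.REG_DWORD, 1)

    if __name__ == "__main__":
        block_known_folder_move()
        print("OneDrive folder backup (Known Folder Move) is now blocked by policy.")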

Keep reading