Judge Acquits Backpage Co-Founder Michael Lacey on Most Counts

A federal judge has acquitted Backpage co-founder Michael Lacey of dozens of counts, including a majority of those on which federal prosecutors planned to retry Lacey later this year. U.S. District Judge Diane Humetewa also acquitted former Backpage executives Jed Brunst and Scott Spear on multiple counts of which they were convicted by a jury last fall.

“After viewing the record in the light most favorable to the Government, the Court finds there is insufficient of evidence to support convictions under Counts 19–51 as to Mr. Lacey and Counts 66–99 as to Messrs. Lacey, Brunst, and Spear,” concluded Humetewa.

In November, a jury found Lacey guilty of just one the 86 counts against him and not guilty of one count as well. The jury was hung on the other 84 counts, including all charges that Lacey actively facilitated prostitution or participated in a conspiracy to facilitate prostitution via the online classifieds site he founded with his longtime newspaper partner James Larkin. (Larkin took his own life last summer a few days before the trial was scheduled to begin.)

The feds then decided to retry Lacey on those 84 counts, despite the fact that there had already been two trials on the same charges. (The first, in 2021, was declared a mistrial after prosecutors and their witnesses couldn’t stop talking about sex trafficking despite none of the defendants facing sex trafficking charges.)

Now, Humetewa has acquitted Lacey on 53 of the remaining 84 counts against him. Additionally, Humetewa acquitted Spear, former executive vice president of Backpage, of 10 of the counts on which he was found guilty by the jury and acquitted former Chief Financial Officer Brunst of 18 of the counts on which he was convicted.

Two of the other defendants were acquitted on all charges by the jury.

Keep reading

Judge Challenges Appeals Court Over Computer Monitoring Ban in January 6 Parole

A US federal judge – who imposed draconian surveillance measures against a man charged and later convicted and paroled in connection with the January 6 events – is clearly unimpressed by the ruling of a US Court of Appeals, that recently overturned his decision.

Senior District Judge Reggie Walton now wants the controversy officially revisited, so he scheduled a new hearing date for June 4 in a bid to make his original order for Daniel Goodwyn’s computer to be surveilled for “mis/disinformation” stick.

Early in April, the US Court of Appeals for the District of Columbia announced that the order to monitor and “inspect” Goodwyn’s computer for “mis/disinformation” was the result of the district court having “plainly erred.”

Goodwyn (described in reports as a citizen journalist) was convicted on a single trespassing misdemeanor count based on him spending 36 seconds inside the Capitol on the day.

Goodwyn was subsequently arrested and sentenced by Judge Walton to two months in prison, but that was not all – his computer was to be “monitored and inspected” during his parole.

This last bit of the ruling was too much for the circuit court, which overturned it earlier in the month. The ruling said Walton “plainly erred in imposing the computer-monitoring condition without considering whether it was ‘reasonably related’ to the relevant sentencing factors and involved ‘no greater deprivation of liberty than is reasonably necessary’ to achieve the purposes behind sentencing.”

But now Walton is trying to once again impose surveillance of Goodwyn’s computer, ordering him to “show cause” as to why that should not be happening.

Keep reading

Telegram Founder Reveals US Government’s Alleged Covert Maneuvers to Backdoor The App

What a shocker. Is this really newsworthy? Actually yes – because here, we’re seeing the opposite of clickbait – a subdued, to put it generously, headline in legacy US media, in an attempt to report about some of the things Telegram CEO Pavel Durov said during his interview with Tucker Carlson.

But behind this headline lies a pretty explosive, even if not surprising story – of how countries (in reality, more likely than one, but in this case, one is named) view the backbone of internet safety and integrity, namely – reliable, secure encryption.

Long story short – they view it as the enemy.

Durov, a Russian now in possession of multiple passports, based in Dubai, UAE, and often apparently butting heads with snooping efforts from governments (including Russian) revealed during the interview how the government in Washington one time tried to “break into Telegram,” as he put it.

But really, doing this successfully, given the nature of the encrypted app, would have meant not just breaking “into” – but, breaking Telegram.

Keep reading

Big Brother in Training? How Proposed Legislation Might Pave the Way for Online Age Verification and Digital ID

Bipartisan legislative efforts are underway in the US House of Representatives to adopt new versions of two laws originally drawn up to deal with the safety of youth online.

But the fear is that the bills introduced now – H.R.7891, the Kids Online Safety Act (KOSA), and H.R. 7890, the Children and Teens Online Privacy Protection Act (COPPA) 2.0 – will facilitate implementation of a future sweeping age verification and digital ID push.

These concerns are raised because KOSA is directing the secretary of commerce, together with the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to conduct a study “evaluating the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

At this stage of the proceedings, the study will not be used to mandate that platforms implement “an age gating or age verification functionality” – however, once the authorities have at their disposal the technical solutions to do it, some observers expect it could be used for a more aggressive legislative push at the federal level later on.

The key difference between the existing Senate version of KOSA and the proposed House bill is found under the “care of duty” component, with the House text now defining that to apply to “high impact online companies” with $2.5 billion or more annual revenue, and 150+ million global monthly active users over at least three months of the preceding year.

The Senate version refers to platforms “reasonably likely to be used by a minor” (employing 500 or more people, with gross annual revenue of $50 million or more).

Keep reading

NSA “Just Days Away From Taking Over The Internet” Warns Ed Snowden

The United States National Security Agency (NSA) is only days away from “taking over the internet” with a massive expansion of its surveillance powers, according to NSA whistleblower Edward Snowden.

In an April 16 post to X, Snowden drew attention to a thread originally posted by Elizabeth Goitein — the co-director of the Liberty and National Security Program at the Brennan Center for Justice — that warned of a new bill that could see the U.S. government surveillance powers amplified to new levels.

Source: Edward Snowden

The bill in question reforms and extends a part of the Foreign Intelligence Surveillance Act (FISA) known as Section 702.

Currently, the NSA can force internet service providers such as Google and Verizon to hand over sensitive data concerning NSA targets.

However, Goitein claims that through an “innocuous change” to the definition of “electronic communications surveillance provider” in the FISA 702 bill, the U.S. government could go far beyond its current scope and force nearly every company and individual that provides any internet-related service to assist with NSA surveillance.

“That sweeps in an enormous range of U.S. businesses that provide wifi to their customers and therefore have access to equipment on which communications transit. Barber shops, laundromats, fitness centers, hardware stores, dentist’s offices.”

Additionally, the people forced to hand over data would be unable to discuss the information provided due to hefty gag order penalties and conditions outlined in the bill, added Goitein.

Keep reading

DATA DELIVERY RATES 4.5 MILLION TIMES FASTER THAN AVERAGE BROADBAND ACHIEVED IN MAJOR FIBER OPTIC BREAKTHROUGH

The fastest data delivery rate ever achieved using specific wavelength bands in fiber optic technology has been reported by an international team of researchers, who say they sent information 4.5 million times faster than the average home broadband will allow.

According to researchers at Aston University in Birmingham, England, who participated in the experiment, a series of new technologies they developed allowed them to successfully transfer data at an astonishing rate of 301 terabits (301,000,000 megabits) per second.

Remarkably, the Aston team says the breakthrough that achieved these impressive data transfer rates used only standard optical fiber.

The team says their breakthrough paves the way toward greener solutions by allowing the transfer of larger amounts of information without the use of additional optical fiber.

Keep reading

EU’s Plan To Mass Surveil Private Chats Has Leaked

The latest version of the proposed European Parliament (EP) and EU Council regulation to adopt new rules related to combating child sexual abuse has been made available online.

Despite its declared goal, the proposal, which first saw the light of day in May 2022 and is referred to by opponents as “chat control” is in fact a highly divisive draft of legislation that aims to accomplish the stated objective through mass surveillance of citizens’ private communications.

Now, the French site contexte.com has the full text of the newest version of the proposal – yet another controversial undertaking of the current, 6-month Belgian EU presidency. Judging by the leaked document, the key and most contentious components of “chat control” have not been changed.

German EP member (MEP) Patrick Breyer and long-time vocal critic of the proposal said on his blog that the text would be discussed by a law enforcement working party at the Council on Wednesday, with the target date for adoption being sometime in June.

That will happen once any political differences have been smoothed over at the EU’s Committee of Permanent Representatives (“COREPER”).

Commenting on the development, Breyer remarked that the Council’s legal service has also confirmed that the new version “does not change the nature of detection orders.”

“Limiting bulk chat searches to ‘high-risk services’ is meaningless because every communication service is misused also for sharing illegal images and therefore has an imminently high risk of abuse,” the MEP noted of the latest proposal, adding:

“Informing law enforcement only of repeat hits is also meaningless, as falsely flagged beach pictures or consensual sexting rarely involve just a single photo.”

He went on to explain that the upcoming regulation is set up in a way that will result in the end of the privacy of people’s digital communications, since the subject of content searches will be “millions” of chats and photos, including those belonging to persons who have no links to child sexual abuse.

And because the technology proposed to carry out the mass surveillance is unreliable, there are also risks of this content getting leaked.

Keep reading

Net Neutrality Could Expand Biden’s Social Media Censorship To The Whole Internet

The Supreme Court recently heard oral argument in one of the most important cases this term, one with critical implications for First Amendment free speech rights as society proceeds further into a world reliant upon internet service.

The controversy at the heart of Murthy v. Missouri is the Biden administration’s effort to pressure or “jawbone” social media to censor various opinions and public policy advocacy about the Covid pandemic that it found objectionable. 

The Biden administration naturally claims it was simply engaging in discourse with social media leaders to “inform and persuade,” but discovered correspondence included direct threats against the companies while White House officials openly and publicly threatened new “legal and regulatory measures” if the targeted groups failed to submit to its desires. 

Those revelations only confirm widespread suspicion that the left-leaning administrative state, favored and further empowered by the Biden administration, seeks to exploit its vast authority to suppress the speech of Americans who don’t share its preferred narratives or big-government goals.

The Supreme Court must now determine whether that White House pressure campaign crossed the line into unconstitutional intimidation and censorship, even without formal government prosecution or enforcement. Under applicable Supreme Court precedent, the Biden administration’s form of “informal censorship may sufficiently inhibit the circulation of publications to warrant injunctive relief,” even where the targeted groups are “free” to ignore its threats, because “people do not lightly disregard officers’ thinly veiled threats.” 

Keep reading

The Witch Hunt Begins In Pennsylvania – The Thought Police Are Coming For You

You probably know that the Department of Homeland Security (DHS) has the charge to protect America’s “critical infrastructure”. You probably also understand that to mean they should be stopping terrorists from blowing up things like nuclear power plants, bridges and dams. You are unlikely to understand that DHS believes it is supposed to “protect” America’s “cognitive infrastructure” and that this means censoring your free speech and preventing you from disseminating ideas or opinions with which DHS disagrees.

DHS believes it has the power to silence you, and it has been doing so for some time now.

Within DHS is an entity called the Cybersecurity and Infrastructure Security Agency (CISA). CISA has for some time now been working with private entities to silence anyone who strays from the official government-approved narrative. The Attorney General of Louisiana described it this way in testimony before Congress.

“The First Amendment clearly states that “Congress shall make no law…abridging the freedom of speech.” This means that the government has no power to restrict expression because of its message, ideas, subject matter, or content. The U.S. Supreme Court also firmly established that “the Constitution ‘demands that content-based restrictions on speech be presumed invalid … and that the Government bear the burden of showing their constitutionality.” United States v. Alvarez, 567 U.S. 709, 716-717 (2012) (plurality op.). Yet in our lawsuit we have uncovered a censorship enterprise so vast that it spans over a dozen significant government institutions, including the White House, the Centers for Disease Control and Prevention (CDC), the Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Department of State’s Global Engagement Center (GEC), and the National Institutes of Health (NIH). Publicly, these federal actors have justified their deeds in the name of protecting the public against “misinformation” and “disinformation,” when in fact it is done to suppress disfavored views…”

“To understand the framework of these activities, we must first look at the way censorship has been approached by those actively engaged in it. For example, CISA — an agency within the Department of Homeland Security (DHS) — has classified the thoughts, ideas, and beliefs of the American public as “critical infrastructure.” In an effort to control, manage, and maintain these cognitive assets, CISA serves as a “switchboard” for sending disfavored information from state and local officials to the necessary social media company to ensure content-moderation policies are applied. As a result, America’s “cognitive infrastructure” can be maintained in the same way that the DHS might protect the nation’s physical infrastructure from outside threats; but in this case, CISA aims to protect our collective consciousness from independent thought and inquiry at the individual level…”

Keep reading

Here Come the AI Worms

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

Keep reading