UK woman said she wanted to be tortured and killed on fetish site — her body was found in shallow grave in the US

The boyfriend of a woman who allegedly paid someone to torture and kill her said she had been suffering from mental illness before her death, according to Florida authorities.

Sonia Exelby was reported missing in October before police traced her to an Airbnb in Reddick and found her remains nearby.

Exelby boarded a flight to Florida and arrived on Oct. 10, according to an investigation by the Florida Department of Law Enforcement. The FDLE said that Exelby posted on a fetish website that she was willing to pay someone to torture and kill her.

A week after she arrived in Florida, her remains were found in a shallow grave.

Investigators linked her to a man named Dwain Hall, who had used her bank card and tried to use her credit cards.

When they interviewed him, he gave conflicting accounts of how they met.

Police said they gathered evidence pointing to Hall as Exelby’s alleged killer.

Authorities said Hall purchased rope and gun cleaner, among other items, at a Walmart in Gainesville on Oct. 10. In a separate purchase afterward, he bought a shovel. He then allegedly picked up Exelby at the airport, and the two went to an Airbnb he had rented.

The next day, he charged $1,200 to Exelby’s bank card.

Authorities said he recorded a video of Exelby that showed her with cuts and bruises, in which he asked her to say that she had consented to being stabbed.

Exelby sent a message to a friend via the Discord app expressing fear and regret.

“I’m so, so scared. I’m so broken and in so much pain. … I thought he’d do it quick and not give my mind time to stew,” she wrote.

On Oct. 14, Hall allegedly sent a package to a friend that authorities said contained a knife that had traces of Exelby’s blood. It also had a bracelet with DNA from both Exelby and Hall.

Keep reading

Google and US government battle over the future of internet advertising

Google will confront the U.S. government’s latest attempt to topple its internet empire in federal court on Friday as a judge considers how to remedy the abusive tactics that led to parts of its digital ad network being branded an illegal monopoly.

The courtroom showdown in Alexandria, Virginia, will pit lawyers from Google and the U.S. Department of Justice against each other in closing proceedings focused on the complex technology that distributes millions of digital ads across the internet each day.

After a lengthy trial last year, U.S. District Judge Leonie Brinkema ruled in April that pieces of Google’s ad technology had been rigged in ways that made the company an illegal monopoly. That ruling set up another 11-day trial earlier this fall to help Brinkema determine how to remedy the anti-competitive practices.

Friday’s closing arguments will give both Google and the Justice Department a final chance to sway Brinkema before she issues a ruling that probably won’t come until early next year.

The Justice Department wants Brinkema to force Google to sell some of the ad technology that it has spent nearly 20 years assembling, contending a breakup is the only way to rein in a company that the agency’s lawyers condemned as a “recidivist monopolist” in filings leading up to Friday’s hearing.

The condemnation refers not only to Google’s practices in digital advertising but also to the illegal monopoly it built around its dominant search engine. Federal prosecutors also sought a breakup in the search monopoly case, but the judge handling that issue rejected a proposal that would have required Google to sell its popular Chrome web browser.

Although Google is still being ordered to make reforms that it’s resisting, the outcome in the search monopoly case has been widely seen as a proverbial slap on the wrist. The belief that Google got off easy in the search case is the main reason the market value of its parent company Alphabet has surged by about $950 billion, or 37%, to nearly $3.5 trillion since U.S. District Judge Amit Mehta’s decision came out in early September.

That setback hasn’t discouraged the Justice Department from arguing for a breakup of an ad tech system that handles 55 million requests per second, according to estimates provided by Google in court filings.

The huge volume of digital ads priced and distributed through Google’s technology is one of the main reasons that the company’s lawyers contend it would be too risky to force a dismantling of the intricate system.

“This is technology that absolutely has to keep working for consumers,” Google argues in documents leading up to Friday’s hearing. The company’s lawyers blasted the Justice Department’s proposal as a package of “legally unprecedented and unsupported divestitures.”

Besides arguing that its own proposed changes will bring more price transparency and foster more competition, Google is also citing market upheaval triggered by artificial intelligence as another reason for the judge to proceed cautiously with her decision.

In his decision in the search monopoly case, Mehta reasoned that AI was already creating more competition for Google.

Keep reading

EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM [child sexual abuse material],” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

We obtained a copy of the letter for you here.

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

Keep reading

Texas: ID Will Be Linked to Every Google Search! New Law Requires Age Verification

Texas SB2420, known as the App Store Accountability Act, requires app stores to verify the age of users and obtain parental consent for those under 18. This law aims to enhance protections for minors using mobile applications and is set to take effect on January 1, 2026.

Texas has joined a multi-state crusade to enforce digital identification in America—marketed as a way to “protect children.”

Yet privacy experts say the real goal isn’t child protection—it’s control. 

Roblox insists its new “age estimation” system improves safety, but it relies on biometric and government data—creating the foundation for permanent digital tracking. With Texas now the fifth state to join the campaign, one question remains: how long before “protecting kids” becomes the excuse to monitor everyone?

From Reclaim the Net:

Texas Sues Roblox Over Child Safety Failures, Joining Multi-State Push for Digital ID

Texas has become the latest state to take legal action against Roblox, joining a growing number of attorneys general who accuse the gaming platform of failing to protect children.

The case also renews attention on the broader push for online age verification, a move that would lead to widespread digital ID requirements.

Attorney General Ken Paxton filed the lawsuit on November 6, alleging that Roblox allowed predators to exploit children while misleading families about safety protections.

We obtained a copy of the lawsuit for you here.

Keep reading

UK Tech Secretary Urges Ofcom to Fast-Track Censorship Law Enforcement

UK Technology Secretary Liz Kendall is pressing Ofcom to accelerate the rollout of the controversial censorship law, the Online Safety Act, warning that delays could weaken protections for vulnerable users. In a letter to the communications regulator, she said:

“I remain deeply concerned that delays in implementing duties, such as user empowerment, could hinder our work to protect women and girls from harmful content and protect users from antisemitism.”

Kendall is determined to enforce the controversial law quickly, even as more people have finally realized that the Online Safety Act grants excessive power to regulators over what citizens can say or share online.

Ofcom has confirmed that it expects to publish by July next year a register identifying which companies will face the strictest obligations, including mandatory age verification.

That schedule is roughly a year later than initially promised. The regulator said the delay was due to “factors beyond its control,” citing a legal challenge that raised “complex issues.”

One challenge involves 4chan and Kiwi Farms, platforms often targeted by politicians seeking tighter online speech regulation.

Reclaim The Net recently reported that 4chan’s legal team had rejected Ofcom’s attempt to impose fines under the Act, arguing that the regulator’s enforcement powers overreach.

The law has also drawn criticism abroad.

The US State Department condemned the UK’s online censorship laws, including the Online Safety Act, warning that the powers granted to Ofcom could restrict the open exchange of ideas.

We also covered the growing concern among technology companies that the Act’s broad language and compliance costs could force them to reconsider their presence in the UK.

Keep reading

Brussels Aims at WhatsApp in the Next Round of Speech Control

Meta’s WhatsApp platform is set to come under tighter European oversight as regulators prepare to bring its “channels” feature under the European Union’s far-reaching censorship law, the Digital Services Act (DSA), the same framework that already pressures Facebook and Instagram.

According to Bloomberg, people familiar with the matter say the European Commission has informed Meta that WhatsApp’s channels are being prepared for designation as a “Very Large Online Platform.”

That classification carries extensive responsibilities for content censorship. Although no public date has been announced, the Commission’s notice indicates that WhatsApp will soon face some of the most demanding digital rules in the world.

Channels, which allow public updates from news outlets, public figures, and organizations, function more like social media feeds than private chats.

WhatsApp reported earlier this year that these channels reached around 46.8 million users in Europe by late 2024, slightly above the DSA’s 45 million-user threshold for stricter oversight.

Once a service crosses that line, it must perform regular assessments of how illegal or “harmful” content circulates and develop strategies to limit its spread. Platforms are also required to publish user figures twice a year and risk fines of up to 6 percent of global revenue for failing to comply.

The DSA does not apply to private, encrypted communication, so WhatsApp’s core messaging service will remain unaffected.

Still, the EU’s decision to expand its regulatory reach into new areas of online conversation has caused concern that these rules could burden companies and discourage open dialogue in the name of safety.

The European Commission has remained cautious about providing details, saying only that it “cannot confirm the timeline for a potential future designation.”

For Meta, the move adds another chapter to its ongoing disputes with European regulators.

Keep reading

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that “the proposed regulation raises significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

FBI Seeks To Unmask Anonymous Web Archiving Service Owner

The subpoena, dated last Tuesday and posted publicly on Archive.today’s X account, states it relates to a federal criminal investigation being conducted by the FBI, as The Verge reported. However, the document provides no specific details about what alleged crime is under investigation.

The FBI is requesting comprehensive identifying information from Tucows, including customer or subscriber name, address of service, and billing address associated with Archive.today, per The Verge report.

Beyond basic contact details, the subpoena demands an extensive array of data such as telephone connection records, including incoming and outgoing calls and SMS or MMS records, payment information like credit card or bank account numbers, internet connectivity session times and durations, device identifiers, IP addresses, and details about services used such as email, cloud computing, and gaming services.

The subpoena instructs Tucows to keep its existence secret indefinitely, on the grounds that any disclosure could interfere with an ongoing investigation and enforcement of the law, as recounted by Gizmodo.

That request became moot when Archive.today publicly posted the document. Journalist Max Blumenthal, editor of The Grayzone, drew attention to the subpoena on X, emphasizing that Archive.today is used by journalists and researchers to “document edits to articles, bypass subscription walls and avoid giving traffic to the failing corporate media.”

Keep reading

Hollywood Producer Buys Israeli NSO Spyware Maker, Hires David Friedman to Sell Hacking Tools to U.S.

Hollywood producer Robert Simonds has purchased the Israeli spyware maker NSO Group to bring it under “American” control and hired Trump’s former Ambassador to Israel, David Friedman, to lobby the president to remove sanctions on the firm so it can sell its hacking tools to US law enforcement.

Though the company was sold to a consortium of alleged “Americans” led by Simonds, the NSO Group “said Sunday that it would continue to operate from Israel under the full regulatory authority of the Defense Ministry, as it expands its global footprint and seeks to resume operations in the US,” the Times of Israel reports.

From The Wall Street Journal, “Israeli Spyware Maker NSO Gets New Owners, Leadership and Seeks to Mend Reputation”:

TEL AVIV—NSO Group, the Israeli company behind Pegasus spyware, says a group of investors led by Hollywood producer Robert Simonds has acquired a controlling stake in the firm, which has named a former Trump official to lead an effort to restore its battered reputation.

The company, which has faced lawsuits and U.S. government sanctions since revelations that its technology was used to spy on political dissidents, human-rights advocates, journalists and American officials, declined to disclose the purchase price.

NSO’s new executive chairman, David Friedman, a former U.S. ambassador to Israel and onetime bankruptcy lawyer for President Trump, said he wants to use his ties to the Trump administration to help rebuild the company’s spyware business in the U.S.

“If the administration, as I expect they’ll be, is receptive to considering any opportunity that might keep Americans safer, it will consider us,” said Friedman, who splits his time between Florida and Israel.

This is naked influence peddling.

Keep reading

Wisconsin Lawmakers Propose VPN Ban and ID Checks on Adult Sites

Wisconsin legislators have found a new villain in their quest to save people from themselves: the Virtual Private Network.

The state’s latest moral technology initiative, split into Assembly Bill 105 and Senate Bill 130, would force adult websites to verify user ages and ban anyone connecting through a VPN.

It passed the Assembly in March and now waits in the Senate, where someone will have to pretend this is enforceable.

Supporters are selling the plan as a way to “protect minors from explicit material.”

The bill’s machinery reads like a privacy demolition project written by people who still call tech support to reset passwords.

The law would apply to any site that “knowingly and intentionally publishes or distributes material harmful to minors.” It then defines that material as anything lacking “serious literary, artistic, political, or scientific value for minors.”

The wording is broad enough to rope in half the internet, yet somehow manages to exclude “bona fide news” (as determined by the state) and cloud platforms that don’t create the content themselves.

Whether that covers social media depends on who you ask: lawyers, lobbyists, or whichever intern wrote the definitions section.

The bill instructs websites to delete verification data after access is granted or denied.

That sounds good until you recall how the tech industry handles deletion promises.

Au10tix left user records exposed for a year after pledging to delete them within 30 days. Tea suffered multiple breaches despite assurances of immediate deletion. In the real world, “deleted” often means “archived on an unsecured server until a hacker finds it.”

The headline feature is a rule penalizing anyone who uses a VPN to access restricted material. VPNs encrypt internet traffic and disguise user locations, which lawmakers apparently see as a threat to order.

The logic is that if people can hide their IP addresses, the state can’t check their ID to ensure they’re old enough to view certain content. That’s technically true and philosophically disturbing.

Officials in other places are already cheering this idea. Michigan introduced a proposal requiring internet providers to detect and block VPN traffic.

If Wisconsin adopts the rule, VPN users would become collateral damage. Journalists, activists, and everyday users who rely on encryption for safety would be swept up in the ban.

Keep reading