Cloudflare offers to make AI pay to crawl websites

Cloudflare will block AI bots from crawling websites by default for new customers, and broker pay-per-crawl deals between its customers and bot operators.

Cloudflare will block AI crawlers from accessing new customers’ websites without permission starting July 1 and is testing a way to make AI pay for the data it gathers.

Furthermore, website owners can now decide who crawls their sites, and for what purpose, and AI companies can reveal via Cloudflare whether the data they gather will be used for training, inference, or search, to help owners decide whether to allow the crawl.

The company began enabling its customers to choose to block AI crawlers in July 2024. Since then, it said, over one million customers have opted in.

“For decades, the Internet has operated on a simple exchange: search engines index content and direct users back to original websites, generating traffic and ad revenue for websites of all sizes. This cycle rewards creators that produce quality content with money and a following, while helping users discover new and interesting information,” Cloudflare said in its announcement. “That model is now broken. AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source — depriving content creators of revenue, and the satisfaction of knowing someone is reading their content. If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Keep reading

Social Media Especially Harms Girls’ Sleep & Mental Health

June 30 was World Social Media Day.

In a survey conducted between September and October 2025, 50 percent of 13- to 17-year-old girls said that social media has hurt their sleep, versus 40 percent of boys the same age.

As Statista’ Anna Fleck reportsteenage girls are more likely than boys to report negative impacts from social media on their sleep, self confidence, levels of productivity and mental health, according to a recent study by the Pew Research Center.

You will find more infographics at Statista

A similar gap occurs for the issue of mental health (25 percent of girls, 14 percent of boys).

However, the biggest share of respondents said social media sites neither helped nor hurt their mental health.

Around one in five of both sexes said that social media had negative impacts on school grades.

Teens were more positive when it came to the question of friendships.

Keep reading

Skynet is coming: the malware that attacks Artificial Intelligence!

An unusual example of malicious code has been discovered in a real computing environment, which for the first time recorded an attempt to attack not classical defense mechanisms, but directly artificial intelligence systems. We are talking about the prompt injection technique, i.e. the introduction of hidden instructions capable of compromising the functioning of language models, which are increasingly used for the automatic analysis of suspicious files. This case is the first concrete confirmation that malware authors are starting to perceive neural networks as an additional vulnerable target.

The file was uploaded to the VirusTotal platform in early June 2025. It was sent anonymously by a Dutch user via a standard web interface. Upon examining its contents, researchers discovered that an unusual string of text was encrypted within the program, an attempt to interfere with the operation of artificial intelligence tools used for reverse engineering and automatic code verification.

The authors of the malware called it Skynet, a reference to the well-known botnet based on the Zeus Trojan, which has been actively used since 2012 for DDoS attacks and covert cryptocurrency mining. However, the new Skynet, in its functionality, resembles more an experimental assembly or an empty object than a tool ready for mass use.

Keep reading

North Korean IT workers infiltrated Fortune 500 companies in massive fraud scheme

Federal authorities have unraveled several schemes by the Democratic People’s Republic of North Korea (DPRK) that were used to fund its regime through remote information technology (IT) work for U.S. companies, resulting in two indictments, tech and financial seizures and an arrest.

The Department of Justice (DOJ) said Monday that North Korean actors were helped by individuals in the U.S., China, the United Arab Emirates and Taiwan to obtain employment with over 100 U.S. companies, including Fortune 500 companies.

In one scheme, U.S.-based individuals created front companies and fraudulent websites to promote the legitimacy of remote workers, while hosting laptop farms where remote North Korean IT workers could remotely access company-provided laptop computers.

In another scheme, IT workers in North Korea used false identities to gain employment with a blockchain research and development company in Atlanta, Georgia, and steal virtual currency worth over $900,000.

Keep reading

Denmark Plans Sweeping Ban on Online Deepfakes to Combat “Misinformation”

Denmark is preparing legislation that would outlaw the sharing of deepfake content online, a move that could open the door to unprecedented restrictions on digital expression.

Deepfakes, which can involve photos, videos, or audio recordings manipulated by artificial intelligence, are designed to convincingly fabricate actions or statements that never occurred.

While governments cite misinformation concerns, broad bans risk stifling creativity, political commentary, and legitimate speech.

The Danish Ministry of Culture announced Thursday that lawmakers from many parties are backing the effort to clamp down on the distribution of AI-generated imitations of people’s appearances or voices.

The forthcoming proposal, according to officials, aims to block the spread of deepfakes by making it illegal to share such material. Culture Minister Jakob Engel-Schmidt argued that “it was high time that we now create a safeguard against the spread of misinformation and at the same time send a clear signal to the tech giants.”

But these assurances do little to address the chilling effect such measures could have on free expression.

Authorities describe the planned rules as among the most comprehensive attempts yet to confront deepfakes and their potential to mislead the public.

The United States last year introduced legislation criminalizing the non-consensual sharing of intimate deepfakes, while South Korea has imposed tougher punishments for similar offenses and tightened regulations on social media platforms.

Keep reading

Marsha Blackburn Proposes Bipartisan Bill to Rein In Big Tech as ‘Unaccountable Gatekeepers’ over Apps

Sen. Marsha Blackburn (R-TN) and a group of bipartisan lawmakers this week introduced legislation that would prevent big tech from operating as “unaccountable gatekeepers” for the mobile app economy.

Sens. Blackburn, Richard Blumenthal (D-CT), Mike Lee (R-UT), Amy Klobuchar (D-MN), and Dick Durbin (D-IL) introduced the Open App Markets Act, a bill aimed at setting clear and enforceable rules of consumer protections within the app market.

“Big Tech giants have operated as unaccountable gatekeepers of the mobile app economy, forcing American consumers to use their app stores at the expense of innovative startups that threaten their bottom line,” Blackburn said in a statement.

“Our bipartisan Open App Markets Act would ensure a freer and fairer marketplace for consumers and small businesses by promoting competition in the app marketplace and opening the door to more choices and innovation,” she added.

With the advent of the smartphone, mobile devices have become a central aspect of the American consumers’ economic, social, and civic lives. The bipartisan group of lawmakers asserted that their legislation would break Apple and Google’s predominant “grip on the app economy.”

Blackburn’s press release about the legislation noted that consumers spent $92 billion on the Apple App Store and roughly $35.7 billion on the Google Play Store.

Apple has actively worked to prevent users from using third-party app stores on Apple devices, requiring app users and developers to use their Apple payment system.

The lawmakers stated that startups often face serious challenges because big tech can prioritize their own app to the disadvantage of smaller competitors.

Keep reading

Supreme Court Greenlights Online Digital ID Checks

With a landmark ruling that could shape online content regulation for years to come, the US Supreme Court has upheld Texas’s digital ID age-verification law for adult websites and platforms, asserting that the measure lawfully balances the state’s interest in protecting minors with the free speech rights of adults.

The 6-3 decision, issued on June 27, 2025, affirms the constitutionality of House Bill 1181, a statute that requires adult websites to verify the age of users before granting access to sexually explicit material.

Laws like House Bill 1181, framed as necessary safeguards for children, are quietly eroding the rights of adults to access lawful content or speak freely online without fear of surveillance or exposure.

Under such laws, anyone seeking to view legal adult material online (and eventually even those who want to access social media platforms because may contain content “harmful” to minors) is forced to provide official identification, often a government-issued digital ID or even biometric data, to prove their age.

Supporters claim this is a small price to pay to shield minors from harmful content. Yet these measures create permanent records linking individuals to their browsing choices, exposing them to unprecedented risks.

We obtained a copy of the opinion for you here.

Keep reading

COPPA 2.0: The Age Check Trap That Means Surveillance for Everyone

A new Senate bill designed to strengthen online privacy protections for minors could bring about major changes in how age is verified across the internet, prompting platforms to implement broader surveillance measures in an attempt to comply with ambiguous legal standards.

The Children and Teens’ Online Privacy Protection Act (S.836) (COPPA 2.0), now under review by the Senate Commerce Committee, proposes raising the protected age group from under 13 to under 17. It also introduces a new provision allowing teens aged 13 to 16 to consent to data collection on their own.

The bill has drawn praise from lawmakers across party lines and received backing from several major tech companies.

We obtained a copy of the bill for you here.

Supporters frame the bill as a long-overdue update to existing digital privacy laws. But others argue that a subtle change in how platforms are expected to identify underage users may produce outcomes that are more intrusive and far-reaching than anticipated.

Under the current law, platforms must act when they have “actual knowledge” that a user is a child.

The proposed bill replaces that threshold with a broader and less defined expectation: “knowledge fairly implied on the basis of objective circumstances.” This language introduces uncertainty about what constitutes sufficient awareness, making companies more vulnerable to legal challenges if they fail to identify underage users.

Instead of having to respond only when given explicit information about a user’s age, platforms would be required to interpret behavioral cues, usage patterns, or contextual data. This effectively introduces a negligence standard, compelling platforms to act preemptively to avoid accusations of noncompliance.

As a result, many websites may respond by implementing age verification systems for all users, regardless of whether they cater to minors. These systems would likely require more detailed personal information, including government-issued identification or biometric scans, to confirm users’ ages.

Keep reading

Austria Approves Spyware Law to Infiltrate Encrypted Messaging Platforms

Austria is moving forward with legislation that would authorize law enforcement to infiltrate encrypted communications, marking a pivotal shift in the country’s surveillance powers and stirring a fierce debate over digital privacy.

The federal cabinet’s approval of the plan comes after months of negotiations, with proponents citing national security needs and opponents warning of expansive overreach.

The proposed law targets messaging platforms widely used for private communication, including WhatsApp, Signal, and Telegram.

It introduces the use of spyware, formally known as source TKÜ, which would allow authorities to bypass encryption and monitor conversations directly on suspects’ devices. The change represents a major escalation in surveillance capabilities for a country that has traditionally lagged behind its European counterparts in digital interception laws.

Backers of the measure, such as Social Democrat Jörg Leichtfried, who oversees the Directorate for State Security and Intelligence (DSN), framed the move as a preventative strategy. “The aim is to make people planning terrorist attacks in Austria feel less secure; and increase everyone else’s sense of security.”

Leichtfried called the cabinet’s approval an “important milestone.”

Austria’s domestic intelligence services have until now been dependent on international partners, including the UK and the US, to provide warnings of potential threats.

Keep reading

Obama Wants Filters Not Freedom

Barack Obama’s recent appearance at The Connecticut Forum once again revealed a troubling truth: the political establishment is becoming increasingly comfortable with the idea of government-managed speech.

In an extended conversation with historian Heather Cox Richardson, the former president signaled that his tolerance for open discourse ends where his ideological preferences begin.

Amid warnings about the spread of “propaganda” and falsehoods online, Obama floated the notion of imposing “government regulatory constraints” on digital platforms.

His rationale? To counter business models that, in his opinion, elevate “the most hateful voices or the most polarizing voices or the most dangerous, in the sense of inciting violence.”

But it doesn’t take much reading between the lines to see what’s really being proposed: a top-down mechanism to filter speech based on government-approved standards of truth.

This wasn’t framed as a direct assault on the First Amendment, of course. Obama was careful to qualify that such regulations would remain “consistent with the First Amendment.”

But that’s little comfort when the very premise involves the government determining which voices deserve a platform. Once the state takes a role in deciding what is true or acceptable, the line between moderation and censorship evaporates.

Obama’s remarks included a reference to a saying he alleges is attributed to Russian intelligence and later adopted by Steve Bannon: “You just have to flood the zone with so much poop…that at some point people don’t believe anything.”

This, he argued, is the tactic used by bad actors to disorient the public. What he failed to acknowledge is that the antidote to this isn’t more control, but more speech. Free people, given access to a full spectrum of views, are capable of discerning fact from fiction without government supervision.

The real danger isn’t “too much speech.” It’s the increasing desire to place speech under bureaucratic management.

Obama’s suggestion that some speech is too “hateful” or “dangerous” to be left unchecked invites a future where those in power decide what the public is allowed to hear, a vision completely incompatible with a free society.

And we’ve already seen how that plays out.

Keep reading