EU AI Act Effectively Legalizes Biometric Mass Surveillance

On February 2, the EU AI Act, which came into force in August, reached its first compliance deadline. Among the provisions now in effect are rules that critics say allow the bloc to legalize biometric mass surveillance.

This is happening via Article 5, which on the face of it introduces protections against AI systems that represent “unacceptable risk.” But the act also defines exceptions: the “strictly necessary” cases in which those same systems are allowed to be used.

It is these exceptions that gave rise to the fear that one consequence of the AI Act is in fact to legalize some highly contentious deployments of biometric, data-fueled mass surveillance.

Article 5 prohibits real-time remote biometric ID systems from being used by law enforcement in public spaces – “unless and in so far as such use is strictly necessary.”

The “strictly necessary” instances are described as those in which law enforcement is searching for “specific victims” of crimes such as abduction, trafficking, and sexual exploitation, but also those in which they are looking for missing persons.

The second category is less precise in scope, as it allows AI surveillance systems to be used to prevent crime. This includes a threat to the life or physical safety of individuals that is deemed “specific, substantial, and imminent” – or the threat of a terrorist attack that law enforcement decides is “genuine and present,” or merely “genuine and foreseeable.”

Lastly, the AI Act treats as “strictly necessary,” and thus exempt from the prohibition, the following: “Localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses.”

Keep reading

Bill Gates Says ‘Humans Won’t Be Needed For Most Things’

Bill Gates says that in the near future humans won’t be needed for “most things” thanks to AI, but will still be required for the purposes of entertainment.

Gates made the comments during an appearance on The Tonight Show Starring Jimmy Fallon to promote his new book.

Fallon asked Gates whether AI was going to “take over” and what the downsides and benefits of it were.

Gates explained that “intelligence is rare,” but that artificial intelligence will start to replace great teachers and doctors over the course of the next decade.

The Microsoft founder went on to suggest that many types of jobs will be replaced and that people could end up working only two or three days a week.

“This is a bit scary, it’s completely new territory,” he added.

Keep reading

“Just give me the f***ing links!”—Cursing disables Google’s AI overviews

If you search Google for a way to turn off the company’s AI-powered search results, you may well get an AI Overview telling you that AI Overviews can’t be directly disabled in Google Search. But if you instead ask Google how to turn off “fucking Google AI results,” you’ll get a standard set of useful web suggestions without any AI Overview at the top.

The existence of this “curse to disable Google AI” trick has been making the rounds on social media in recent days, and it holds up in Ars’ own testing. For instance, when searching for “how do you turn off [adjective] Google AI results,” a variety of curse word adjectives reliably disabled the AI Overviews, while adjectives like “dumb” or “lousy” did not. Inserting curse words randomly at any point in the search query seems to have a similar effect.

There’s long been evidence that Google’s Gemini AI system tries to avoid swearing if at all possible, which might help explain why AI Overviews balk at queries that contain curses. Users should also keep in mind, though, that the actual web link results for a query can change significantly when curse words are inserted, especially if SafeSearch is turned off.

Keep reading

The Hidden Dangers Of AI In Finance

Jim Rickards recently published a compelling article on AI risk for Insider Intel subscribers.

In it, Jim discusses a different way in which AI could crash markets, one totally separate from the DeepSeek, China, and NVIDIA angle we’ve been covering for the past week.

Today we’re going to review his key points and explore them in detail.

Here’s Jim:

The ultimate danger arises when a large cohort of asset managers controlling trillions of dollars of assets all employ the same or similar AI algorithms in a risk management role. An individual robot working for a particular asset manager tells the manager to sell stocks in a crashing market. In some cases, the robot may be authorized to initiate a sale without further human intervention.

Taken separately, that may be the best course of action for a single manager. In the aggregate, a selling cascade with no offsetting buy orders from active managers, specialists or speculators takes stock prices straight down. Amplification through feedback loops makes matters worse.

Individual AI systems have various trigger points for selling. Not all will be triggered at once, yet all will be triggered eventually as selling begets more selling, which triggers more automated systems that add to the selling pressure, and so on. There are no contrarians among the robots. Building sentiment into systems is still at a primitive stage.
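The cascade Jim describes is easy to see in a toy model. The sketch below uses hypothetical trigger thresholds and price-impact numbers (not any real firm's algorithm): 50 "robots" get staggered sell triggers, a small initial dip trips the highest ones, and each forced sale pushes the price low enough to trip the next.

```python
# Toy model of a selling cascade: each "robot" risk manager has a
# drawdown trigger; one sale pushes the price down, which can trip
# the next robot's trigger. All numbers are hypothetical.
def cascade(price, thresholds, impact_per_sale=0.02):
    sold = set()
    triggered = True
    while triggered:
        triggered = False
        for i, t in enumerate(thresholds):
            if i not in sold and price <= t:
                sold.add(i)                    # robot i sells
                price *= 1 - impact_per_sale   # sale pushes price lower
                triggered = True
    return price, len(sold)

# 50 managers with sell triggers staggered between 90.0 and 98.8;
# there are no contrarians -- no robot in the model ever buys.
thresholds = [90 + 0.18 * i for i in range(50)]
final_price, sellers = cascade(98.0, thresholds)
```

With these assumed numbers, a dip to 98 ends with all 50 managers selling and the price at roughly a third of its starting level, because nothing in the loop ever absorbs the selling pressure.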

This is a good example of why I read Jim’s work. He always approaches issues from a unique and thoughtful angle.

This risk is clearly real. We are now at the point where trading firms are integrating LLMs (AI models) into their proprietary algorithms.

What happens if a majority of trading firms are using the same AI software to drive their trading? For example, it’s likely that many money managers have integrated OpenAI’s ChatGPT models into their algos.

Now that DeepSeek R1 is the new shiny object, maybe a significant portion of firms are switching to that model.

Perhaps DeepSeek approaches trading in a completely different way. What happens if ChatGPT interprets data bullishly, but DeepSeek sees the same information as bearish?

Keep reading

Investigation: Healthcare Driven by AI Technology Will Lead to More Mass Surveillance of Americans

At a press conference with Silicon Valley luminaries on his second day in office, President Donald Trump threw his political support behind a $500 billion private-sector artificial intelligence (AI) project called Stargate.

The joint venture between OpenAI, Oracle, SoftBank and others will fund infrastructure for AI. Among other things, part of that funding will develop AI for early cancer detection and the rapid creation of mRNA cancer vaccines.

But well before Trump’s announcement, Silicon Valley and the U.S. Department of Defense (DOD) had already teamed up to transform U.S. healthcare into an AI-driven system — a system designed to unleash the power of “predictive medicine” for the early detection and treatment of disease, in an individual or population, sometimes even before an illness manifests, according to a new investigative report by Unlimited Hangout’s Max Jones.

The future of predictive medicine depends on data sharing between the DOD, the U.S. Department of Health and Human Services (HHS), which oversees 13 public health agencies, and the private tech sector, Jones wrote.

Predictive medicine will be used for mass surveillance of Americans, and will inform future approaches to pandemics, Jones reported.

Jones said that at the heart of this new system is the Center for Forecasting and Outbreak Analytics (CFA). Announced last year by the Centers for Disease Control and Prevention (CDC), CFA aims to establish “a National Weather Service, but for infectious diseases” — using mass data collection to predict and control disease outbreaks, The Defender reported.

To launch the initiative, HHS announced an estimated $262 million in grant funding over five years to establish a network of 13 infectious disease forecasting and analytics centers to coordinate this work across the U.S.

Keep reading

AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after he noticed Facebook’s crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers “clobbering” websites that he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.
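The "Markov babble" Nepenthes feeds trapped crawlers is a classic technique: train a word-level Markov chain on some text, then emit statistically plausible nonsense. A minimal sketch of the idea follows; this is an illustration of the general technique, not Nepenthes' actual code.

```python
import random

# Build a word-level Markov chain: map each word to the list of
# words that follow it in the training text.
def build_chain(text):
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

# Emit "babble": a random walk through the chain that looks locally
# like real text but carries no meaning -- poison for a scraper.
def babble(chain, start, n=30, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the crawler eats the maze and the maze eats the crawler"
chain = build_chain(corpus)
text = babble(chain, "the", n=10, seed=1)
```

Served endlessly behind generated links, pages of such text waste a crawler's bandwidth, and if the scrape later feeds a training set, the gibberish degrades the resulting model.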

Keep reading

ERROR: The U.S. Artificial Intelligence Industry Is About to Fall Off a Cliff

The coolest thing about being a PR “hired gun” is that it requires you to learn the ins and outs of all kinds of different industries: sports, tech, financial, entertainment, etc. It’s absolutely mandatory when PR pros onboard new clients. (Which, if you stop and think about it, makes sense: you can’t optimize a company’s brand until you know how its profit model is supposed to work.)

In fact, I always warn people: Beware any marketing “expert” who tells you how to run your company before he bothers to learn your profit model! It’s almost always stupid, off-target advice.

The most common reason businesses fail is that they create products and campaigns for their own amusement instead of being laser-focused on the desires and aspirations of their customers. That’s also why most marketing efforts fail: The marketer is marketing to himself, not to his audience.

This brings us to the multibillion-dollar artificial intelligence (AI) industry.

Until just a few weeks ago, the conventional wisdom was that it was an investment-driven enterprise: the country with the most capital would scale up the fastest. The future, we were told, would belong to the first country that achieves AGI — artificial general intelligence.

And the two frontrunners are the United States of America (hooray!)… and communist China (boo!).

Supposedly, AGI is an inevitable landmark on the roadmap to ASI — artificial superintelligence. When ASI arrives, the reasoning and intelligence of our AI systems won’t just be on par with the world’s smartest humans; they will exponentially surpass them.

And before long, these AIs would become hi-tech gods, capable of calculations and breakthroughs that far exceed a million-trillion Albert Einsteins. The possibilities are near endless: cures for diseases, new solutions to complex social problems — who knows what our AI gods might do!

It’s a cool story. And it still might happen. But most people — and certainly most businesses — aren’t demanding an AI that’s a million-trillion times smarter than Albert Einstein. Instead, they simply want an AI that helps them finish a book report or summarize long, boring business emails or help their company with a handful of specific tasks. 

Keep reading

Destroyer Has Become First U.S. Navy Ship To Deploy Artificial Intelligence System

The Navy destroyer USS Fitzgerald (DDG-62) became the first warship to deploy with a program-of-record artificial intelligence (AI) platform. Its creators say the system will help the fleet predict and tackle maintenance needs in a far less disruptive fashion. The system aims to reduce surprise equipment casualties while ensuring that more of the fleet is available should an all-out war break out, requiring a surge of forces. 

Known as Enterprise Remote Monitoring Version 4 (ERM v4), the system is the shipboard aspect of a Pentagon program called Condition Based Maintenance Plus, which in part aims to leverage machine learning to help ship crews, ashore commands, logistical nodes, and other units keep more assets ready to fight, Zac Staples, a retired Navy officer whose Austin-based company, Fathom5, created the system, told TWZ Wednesday. Staples spoke about his company’s innovation at the annual WEST conference in San Diego this week, which TWZ attended, before chatting with TWZ.

Keep reading

The Nvidia Story Is A Narrative Scam Attack On US Markets

Simple fact: You must have maximum-powered servers, high-speed computing (HSC), and semiconductor chip production to make AI. Where did this “small China lab” get the chips and power?

CNBC reported that perhaps the Chinese lab “found a way to work around the rules, or that the export controls were not the chokehold Washington intended.”

They are referring to ITAR, and with such demand for and control over the GPUs, there is absolutely NO WAY this happened on its own. Nvidia or other such tech chip firms had to have facilitated this Chinese effort to produce this AI. PERIOD.

And that is against FEDERAL LAW.

This was an attack on the US Market and the new Trump Administration.

If I were a betting man, I would expect that the Deep State of the US that was just unseated, assisted the CCP to build this AI. It’s the ONLY way China would have been able to put this perfect storm together.

Remember the Wuhan lab and U.S. cooperation to develop a gain-of-function enhanced coronavirus?

Get ready, the truth is going to come out fast.

Keep reading

Davos: UN Introduces AI As “The Next Existential Threat”

At Davos, Guterres slams backsliding on climate commitments

22 January 2025

The world’s political and business elite present in Davos on Wednesday faced an uncompromising address from UN chief António Guterres as he rounded on a lack of multilateral collaboration in an “increasingly rudderless world” at risk from two existential dangers: climate change and unregulated Artificial Intelligence (AI).

‘Fossil fuel addiction’

Likening fossil fuel addiction to Frankenstein’s monster – “sparing nothing and no one” – the Secretary-General noted the irony that 13 of the world’s biggest ports for oil supertankers are set to be overwhelmed by rising sea levels, a consequence of rising temperatures and sea ice melt, caused overwhelmingly by burning coal, crude oil and natural gas.

….

AI’s untold promise 

The next existential threat, AI, is a double-edged sword, Mr. Guterres continued, as it is already revolutionizing learning, diagnosing illnesses, helping farmers to increase their yields and improving the targeting of aid.

But it comes with profound risks if it is left ungoverned: it can disrupt economies, undermine trust in institutions and deepen inequalities, the Secretary-General warned.

Read more: https://news.un.org/en/story/2025/01/1159271

OK, maybe by “the next existential threat”, the UN just meant “another existential threat”. But it is an interesting turn of phrase.

Keep reading