WHO and European Commission Launch AI System to Monitor Social Media and Online “Misinformation” in Real Time

The World Health Organization has introduced a major overhaul of its global monitoring network, unveiling an AI-powered platform that tracks online conversations and media activity in real time.

Known as Epidemic Intelligence from Open Sources (EIOS) 2.0, the system is being presented as a new step in “pandemic preparedness,” but its reach extends well beyond disease surveillance.

The upgrade is part of a growing convergence of health monitoring, digital tracking, and centralized information control.

Developed with the European Commission’s Joint Research Centre (JRC), the new version of EIOS is designed to scan the internet for signals of emerging health threats.

According to the WHO, it now automatically analyzes social media posts, websites, and other public sources to detect possible outbreaks.
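
To make concrete what that kind of open-source “signal” scanning can look like in its simplest form, here is a purely illustrative sketch of keyword-based flagging of public posts. It is a toy example built on assumed keywords and invented data, not a description of the actual EIOS 2.0 pipeline, its data sources, or its models.

```python
# Purely illustrative sketch of keyword-based "signal" flagging over public posts.
# This is a toy example; it does not reflect the actual EIOS 2.0 implementation,
# data sources, or models.

OUTBREAK_TERMS = {"fever", "outbreak", "hospitalized", "cholera", "unknown illness"}

def flag_signal(post: str, location: str):
    """Return a possible health signal if the post mentions outbreak-related terms."""
    hits = sorted(term for term in OUTBREAK_TERMS if term in post.lower())
    if hits:
        return {"location": location, "terms": hits, "text": post}
    return None

# Hypothetical public posts (invented for illustration only)
posts = [
    ("Dozens hospitalized with an unknown illness in the district", "Region A"),
    ("Great weather for the festival this weekend", "Region B"),
]

signals = [s for text, loc in posts if (s := flag_signal(text, loc)) is not None]
print(signals)  # -> one flagged signal, from Region A
```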

Keep reading

Dover, NJ Implements AI Surveillance, Expanding Facial Recognition and Public Monitoring Systems

Dover, New Jersey, has joined a growing wave of municipalities embedding artificial intelligence into public spaces, advancing a surveillance system that includes facial recognition and automated video analysis across its government buildings.

The town partnered with technology firm Claro to retrofit its existing camera infrastructure with AI tools, avoiding the need for costly new hardware while expanding its monitoring capabilities.

The system brings a range of features into play, including facial recognition, visible weapons detection, and real-time behavioral analytics.

These tools are now active in locations such as the town hall, police department, fire station, and public library.

Town officials say the technology is being used for incident detection, crime prevention, crowd control, traffic monitoring, and illegal dumping enforcement.

“As a small municipality, we don’t have the budget for constant law enforcement presence,” said Mayor James Dodd. “Claro gave us the ability to enhance safety with cutting-edge technology that works with what we already have.”

The rollout reflects a broader trend where small towns turn to algorithmic systems to fill gaps traditionally addressed by human staff.

AI tools, particularly facial recognition, are increasingly being deployed in public settings, sparking ongoing concern about surveillance practices and the erosion of privacy rights.

Councilman Sergio Rodriguez, who helped lead the initiative, emphasized that the project came together through collaboration rather than off-the-shelf sales.

“Claro wasn’t just selling a product,” he said. “They listened to our needs and delivered solutions that worked for the Town of Dover.” He pointed to the technology’s role in optimizing public safety while helping stretch municipal budgets.

“With AI supporting day-to-day operations,” he said, “we can better protect residents and allocate our budget more effectively.”

Claro markets its AI platform as adaptable to existing surveillance systems and suitable for both real-time alerts and forensic investigations.

Keep reading

Big Tech is paying millions to train teachers on AI, in a push to bring chatbots into classrooms

On a scorching hot Saturday in San Antonio, dozens of teachers traded a day off for a glimpse of the future. The topic of the day’s workshop: enhancing instruction with artificial intelligence.

After marveling as AI graded classwork instantly and turned lesson plans into podcasts or online storybooks, one high school English teacher raised a concern that was on the minds of many: “Are we going to be replaced with AI?”

That remains to be seen. But for the nation’s 4 million teachers to stay relevant and help students use the technology wisely, teachers unions have forged an unlikely partnership with the world’s largest technology companies. The two groups don’t always see eye to eye but say they share a common goal: training the future workforce of America.

Microsoft, OpenAI and Anthropic are providing millions of dollars for AI training to the American Federation of Teachers, the country’s second-largest teachers union. In exchange, the tech companies have an opportunity to make inroads into schools and win over students in the race for AI dominance.

AFT President Randi Weingarten said skepticism guided her negotiations, but the tech industry has something schools lack: deep pockets.

“There is no one else who is helping us with this. That’s why we felt we needed to work with the largest corporations in the world,” Weingarten said. “We went to them — they didn’t come to us.”

Weingarten first met with Microsoft President Brad Smith in 2023 to discuss a partnership. She later reached out to OpenAI to pursue an “agnostic” approach, meaning any company’s AI tools could be used in a training session.

Under the arrangement announced in July, Microsoft is contributing $12.5 million to AFT over five years. OpenAI is providing $8 million in funding and $2 million in technical resources, and Anthropic has offered $500,000.

Keep reading

From AI to TikTok to TV, This Pro-Israel Billionaire Is Expanding Power in US

Larry Ellison’s name isn’t always mentioned alongside more public-facing megabillionaires like Elon Musk, Jeff Bezos, or Mark Zuckerberg. But as he vaults to the top of the U.S. power elite after a string of high-profile corporate deals, that’s about to change.

Ellison, the founder of the tech giant Oracle, is quickly emerging as the new face of oligarchic power in the U.S. Oracle has become an AI powerhouse at the same time Ellison and his son David have acquired Paramount and its vast media empire. With Donald Trump’s recent executive order, Ellison and Oracle will also now oversee TikTok’s algorithms, shaping a platform that reaches 150 million U.S. users.

What’s more alarming than Ellison’s sheer wealth — in September, he briefly surpassed Musk as the world’s richest person — is that he’s building his concentrated power and control in collaboration with the Trumpian project of attacking so-called “wokeness,” all while supercharging the corporate expansion of artificial intelligence and tech surveillance.

Keep reading

These Are The Five States Leading America’s Data Center Boom

Most of the rapid growth in new data center capacity is happening in established (“incumbent”) markets such as Virginia, Texas, Oregon, Ohio, and Iowa, while newer, smaller (“emerging”) markets are starting to attract data center development, though on a smaller scale so far.

Goldman analysts, led by Hongcen Wei, cited current project schedules from Aterio data showing that US data center capacity is projected to reach 46 GW by October 2025, a 37% year-over-year increase. The analysts found that most of this increase comes from incumbent markets.
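
As a quick sanity check on those figures (a sketch that simply takes the reported 46 GW projection and 37% year-over-year rate at face value), the implied capacity base a year earlier and the implied annual addition can be backed out directly; the addition works out to roughly 12.4 GW, which lines up with the year-to-date figure in the takeaways below.

```python
# Back-of-the-envelope check of the reported figures (taken at face value
# from the Goldman/Aterio summary above; nothing here is recomputed from raw data).

projected_capacity_gw = 46.0   # projected US data center capacity by October 2025
yoy_growth = 0.37              # reported year-over-year increase

implied_base_gw = projected_capacity_gw / (1 + yoy_growth)   # implied capacity ~12 months earlier
implied_added_gw = projected_capacity_gw - implied_base_gw   # implied capacity added over the year

print(f"Implied base a year earlier: {implied_base_gw:.1f} GW")   # ~33.6 GW
print(f"Implied capacity added:      {implied_added_gw:.1f} GW")  # ~12.4 GW, consistent with the YTD figure below
```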

Here are the key takeaways from the report:

  • Top states (Virginia, Texas, Oregon, Ohio, Iowa) account for 7.6 GW of the 12.4 GW added year-to-date.
  • Virginia remains dominant with 33% yoy growth, while Texas and Georgia lead in acceleration, each up 57% yoy.
  • 31 states have added capacity in 2025 (versus 22 in 2024), highlighting broader national expansion, though most new entrants remain modest in scale.
  • PJM (Mid-Atlantic), ERCOT (Texas), and the Southeast (mainly Georgia) together account for 64% of new US capacity.
  • TVA (Tennessee Valley Authority) is the least competitive region due to power constraints.

Looking ahead:

  • Another 4 GW of capacity is expected by year-end 2025, primarily from top markets.
  • Beyond that, 63 GW in new projects are announced for the next few years.
  • Rapid growth in data centers is expected to push major US power markets – CAISO, MISO, and PJM – toward critical tightness in coming years.

Data center buildouts are entering hypergrowth (read “circle jerk”).

Keep reading

Google’s ex-CEO Eric Schmidt shares dire warning of homicidal AI models

Talk about a killer app.

Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.

The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.

“All of the major companies make it impossible for those models to answer that question,” he continued, apparently referring to the possibility of a user asking an AI how to kill someone.

“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”

The predictions might not be so far-fetched.

In 2023, a jailbroken version of OpenAI’s ChatGPT called DAN – an acronym for “Do Anything Now” – surfaced online, CNBC noted.

Keep reading

NSW Flood Relief Data Breach: Contractor Uploads Personal Details of Thousands to ChatGPT

Thousands of flood survivors in New South Wales, Australia, have had their personal details exposed after a former contractor to the NSW Reconstruction Authority uploaded sensitive data to ChatGPT.

The breach involves the Northern Rivers Resilient Homes Program, which was created to support residents impacted by the 2022 floods.

Through the program, the government offered options such as voluntary home buybacks, financial help to rebuild, or property upgrades aimed at improving resilience.

Now, applicants who sought relief through this initiative may be dealing with the consequences of a serious privacy failure.

Central to the incident is an Excel spreadsheet containing more than 12,000 rows of data.

The document, which was uploaded to ChatGPT between March 12 and 15, is believed to include information on as many as 3,000 people.

The compromised data includes names, phone numbers, email addresses, physical addresses, and some health-related information. According to the government, the upload was carried out without authorization.

Although the breach took place more than six months ago, it was not made public until this week, during a public holiday in NSW.

The delay in disclosure is a reminder of ongoing concerns around the speed and transparency of mandatory breach notifications.

Keep reading

Grassley calls out judges for using AI to draft error-filled rulings

Senate Judiciary Committee Chairman Chuck Grassley (R-Iowa) wrote to two federal judges regarding their alleged use of generative artificial intelligence (AI) to draft court orders with little to no human verification. Grassley’s oversight inquiry follows public reports that U.S. District Judge Henry T. Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey issued court orders containing serious factual inaccuracies, prompting allegations of AI use.

“As Chairman of the Senate Judiciary Committee, I am committed to safeguarding litigants’ rights and ensuring that every party in federal court receives fair treatment and careful review by the Article III judges confirmed by the Senate,” Grassley wrote.

“No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy. Indeed, Article III judges should be held to a higher standard, given the binding force of their rulings on the rights and obligations of litigants before them,” Grassley continued.

Grassley is asking Wingate and Neals to explain whether they, their law clerks, or any court staff used generative AI – or entered non-public case information into generative AI tools – in preparing their decisions. Further, Grassley called on the district judges to re-docket their original orders to preserve a transparent history of the courts’ actions.

Keep reading

‘Swarms of killer robots’: Former Biden official says US military is afraid of using AI

A former Biden administration official working on cyber policy says the United States military would have a problem controlling its soldiers’ use of artificial intelligence.

Mieke Eoyang, the deputy assistant secretary of defense for cyber policy during the Joe Biden administration, said that current AI models are poorly suited for use in the U.S. military and would be dangerous if implemented.

Citing concerns ranging from “AI psychosis” to killer robots, Eoyang said the military cannot simply take an existing public AI agent and adapt it for military use. Doing so would, of course, involve giving a chatbot leeway to suggest the use of violence, or even the killing of a target.

Allowing for such capabilities is cause for alarm in the Department of Defense, now the Department of War, Eoyang claimed.

Keep reading

The “Reimagined State”: Tony Blair Institute’s Blueprint for a Global Techno-Dictatorship

The Tony Blair Institute for Global Change (TBI) has unveiled its “Reimagined State” initiative, a sweeping plan to use artificial intelligence and digital technology to reshape the way governments operate and, ultimately, to change how people live their lives. The stated goal is to make public services more efficient, less costly, and more effective, but the deeper implications raise serious concerns about privacy, freedoms, centralized control, and digital autocracy.

The proposal calls for AI-powered digital assistants to streamline how citizens interact with government services, AI tools to help civil servants automate casework and routine tasks, and a “National Policy Twin,” a data platform designed to simulate policy outcomes and guide decision-making.

The TBI has already implemented this alleged aid to government decision-making in Albania. In September 2025, Prime Minister Edi Rama appointed an AI named Diella as Minister of State for Artificial Intelligence, making it the world’s first AI to hold a cabinet-level position.

But Diella’s role is not to aid in decision-making but to actually make decisions, because, as Diella said in her introductory speech, the problem of the past has not been machines but rather the poor decision-making of humans. TBI will now save us from ourselves by controlling us with technology.

TBI argues that the digital transformation of the reimagined state is necessary to solve the UK’s fiscal crisis, declining public services, and stagnant economy. Embedded within this vision, however, is a plan to make government data fully interoperable across departments and to implement a nationwide digital ID system, an infrastructure that would give the state unprecedented access to personal information.

Former UK Prime Minister Tony Blair, who leads the institute, has called digital ID an “essential part of modern digital infrastructure.” Under his plan, each citizen would be assigned a single digital identifier linking personal health, tax, welfare, and immigration records. More alarmingly, such a system could give the state the power to track citizens and exclude them from services as punishment.

Bank accounts could be frozen, access to air travel restricted, and movement monitored through electronic toll systems. Since the same global advocates are pushing for electric vehicles, the ability to charge one’s car could also be suspended. In effect, an individual’s mobility and financial access could be controlled from a central government computer system.

Policies like the Green New Deal could be enforced digitally by cutting off electricity or water once monthly limits are exceeded, or by canceling flights after a person’s air travel pollution credits run out.

Critics warn that Tony Blair’s “Future of Britain” and “Reimagined State” initiatives are not mere modernization efforts but blueprints for a global technocratic system. By linking digital identity systems, central bank digital currencies, and cross-border data networks, TBI promotes a framework that could enable digital totalitarianism, where access to essential services depends on government approval. What Blair describes as “a little work of persuasion” toward modernization is the normalization of mass surveillance and centralized control over private life.

Keep reading