Bill Gates Thinks Anything He Disagrees with Is ‘Misinformation’ that Should Be Policed by AI

Billionaire tech tycoon Bill Gates has expressed concern about the impact of “misinformation” on future generations, claiming that AI should be used to police ideas he disagrees with online. The creepy Microsoft founder has a long history of taking radical positions on issues such as climate alarmism, and of seeking to stifle dissent against them.

Fortune reports that in a recent interview, Bill Gates discussed what he sees as the growing problem of misinformation and its potential to negatively affect the lives of his children and future generations. Gates, who reinvented himself as a creepy advocate for climate alarmism and other leftist causes, noted that the spread of false information online has become a significant concern for him and his family.

The billionaire’s comments came in light of a recent incident involving his daughter, Phoebe Gates, who claims she was subjected to online harassment. This experience led Gates to reflect on the broader implications of misinformation and its ability to cause harm to individuals and society as a whole.

“Seeing my daughter targeted by false claims and harassment online was a wake-up call,” Gates said. “It made me realize that the problem of misinformation is not just about the present, but it’s also about the future we’re leaving for our children.”

Gates argues that the spread of what he considers to be “misinformation” can have far-reaching consequences, from undermining public trust in institutions to hindering progress on critical issues such as public health and climate change. He noted that the proliferation of false information online has the potential to erode the foundations of democracy and create a more polarized and divided society.

“Misinformation is a complex problem that requires a multi-faceted approach,” Gates explained. “We need to invest in media literacy programs, support fact-checking organizations, and encourage responsible behavior from social media platforms. But we also need to foster a culture of critical thinking and healthy skepticism, so that people are better equipped to distinguish between credible information and false claims.”

Keep reading

Canada Is Building The Wrong Army For The War That Is Coming

The next major land war will not reward elegance, boutique modernization, or the comforting belief that advanced technology can replace mass and endurance. It will expose armies built on fragile assumptions. Concealment has largely disappeared. Attrition has returned as a central fact of combat. Sustainment shapes outcomes as decisively as firepower. Yet the Canadian Army remains organized, equipped, and intellectually anchored to a vision of warfare that belonged to yesterday’s world. The problem is not a simple modernization lag or a lack of new kit. It is a deeper conceptual failure—a refusal to absorb how radically and irreversibly the character of land warfare has changed.

That is the larger point. The key change is not this or that technology. The battlespace itself has changed. Artificial intelligence, proliferated drones, commercial satellites, autonomous strike systems, and persistent ISR have combined into a transparent, data-rich battlespace where everything that moves is tracked instantly, concentrations are targeted rapidly, and supply lines are struck as soon as they begin to form—an environment already documented in assessments of modern conflict. An army that cannot scatter, regenerate while under fire, and sustain itself while under persistent observation is not going to muddle through. It is going to break.

Transparency and the End of Concealment

Western armies have operated on the assumptions of concealment and intermittent detection for a generation. Those assumptions are no longer valid. The contemporary battlespace is saturated with aerial surveillance, open-source commercial satellite imagery, digital emissions that reveal every vehicle and headquarters location, and loitering munitions that keep the ground around those locations perpetually contested—patterns captured in recent operational analyses.

The issue is time: the time between being discovered and being targeted. The time between when a headquarters can command and when it becomes a targeting point. The time between beginning a movement and becoming a target.

Survival requires dispersion, deception, mobility, and an entire operating paradigm built on the idea that you are observed all the time. The Canadian Army knows about the emergence of drones, ISR, and digital exposure, but it has not yet internalized the ways that they change land warfare’s fundamentals.

Attrition Has Returned—and Canada Is Not Ready

Precision fires promised surgical, inexpensive war. In reality, they have intensified attrition: the ability to strike targets more often, more reliably, and more predictably. Ukraine has demonstrated the scale of this shift: modern war is industrial, not surgical. It consumes people, equipment, ammunition, drones, and spare parts at rates far beyond what most Western forces planned for in peacetime, as shown by studies of wartime industrial demand.

The Canadian Army is not designed for this reality. It is small and brittle. It is optimized for controlled, expeditionary contributions, not for open-ended, high-intensity conflict. Ammunition stocks are low. Maintenance capacity is thin. Replacement cycles are slow. Mobilization—across industry, reserve forces, and training pipelines—is largely theoretical, even as official modernization documents highlight the fragility of the current model.

You can have a small and lethal army if it is small and lethal by design and deliberate choice. You cannot get away with a small, hollow, and unprepared army if it has to fight for extended periods. In an attritional war, those shortcomings are decisive.

Sustainment as a Front-Line Fight

The rise of long-range strike, drones, and cyber means that the old rear area is no more. Supply lines are now a front-line fight from start to finish. Supply depots, railheads, ports, repair facilities, and fuel infrastructure are all high-priority targets. If an enemy cannot stop forward brigades, it will attempt to starve them. Analyses of modern logistics under fire emphasize that industrial capacity and resilient supply networks—not efficiency—determine strategic endurance.

An army for the future must be able to fight under conditions of intermittent resupply, contested and damaged infrastructure, disrupted and overloaded communications, and near-constant threats to supply lines. Planning and organization must prioritize resilience, redundancy, and regeneration rather than peacetime efficiency and timeliness.

The Canadian Army still plans as if reliable resupply were a given and rear areas could stay intact. The moment a capable adversary enters the fight, those assumptions are shattered.

Keep reading

MIT scientist poised to upend fossil fuel industry before assassination linked to Brown University shooter

The murdered professor from the Massachusetts Institute of Technology was on the brink of revolutionizing the energy sector and upending fossil fuel use as we know it. 

Nuno Loureiro, 47, was gunned down at his home in the Boston suburb of Brookline on Monday.

Authorities believe that Claudio Neves Valente, the alleged gunman in the mass shooting at Brown University, may also have assassinated Loureiro, but the investigation is still ongoing.

Before his death, Loureiro was leading MIT’s efforts to revolutionize energy production by developing a game-changing clean power source that needs just a fraction of the fuel that today’s fossil fuel-burning machines and vehicles consume.

His team’s research at MIT’s Plasma Science and Fusion Center (PSFC) centered on plasma physics, the study of super-hot, ionized gases, and how to apply them to fusion energy, a promising clean power source.

Fusion would provide what scientists call ‘baseload electricity,’ a steady supply of power 24/7, using tiny amounts of fuel and producing no air pollution or climate-warming emissions, unlike carbon dioxide-producing fossil fuels.

A breakthrough in this field could disrupt the trillion-dollar fuel industry by reducing demand for oil, gas, and coal, especially for generating power and transportation. High-demand users like data centers could also switch to fusion for reliable, green energy.

‘This is a very advanced technology, and whatever nation masters it first is going to have an incredible advantage,’ Loureiro said on December 8.

Keep reading

The Surveillance State Is Making a Naughty List—and You’re On It

The Surveillance State is making a naughty list, and we’re all on it.

Unlike Santa’s naughty list, however, the consequences of landing on the government’s “naughty list” are far more severe than a stocking full of coal. They can include heightened surveillance, loss of privacy, travel restrictions, financial scrutiny, police encounters, or being flagged as a potential threat—often without notice, explanation, or recourse.

What was once dismissed as a joke—“Santa is watching”—has morphed into a chilling reality. Instead of elves, the watchers are data brokers, intelligence agencies, predictive algorithms, and fusion centers. Instead of a naughty-or-nice list, Americans are sorted into databases, risk profiles, and threat assessments—lists that never disappear.

The shift is subtle but profound.

Innocence is no longer presumed.

Everyone is watched. Everyone is scored. Everyone is a potential suspect.

This is the surveillance state in action.

Today’s surveillance state doesn’t require suspicion, a warrant, or probable cause. It is omnipresent, omniscient, and inescapable.

Your smartphone tracks your location. Your car records your movements. License plate readers log when and where you drive. Retail purchases create detailed consumer profiles. Smart speakers listen to everything you say. Home security cameras observe not just your property, but your neighbors, delivery drivers, and anyone who passes by.

In a dramatic expansion of surveillance reach, the Transportation Security Administration now shares airline passenger lists with Immigration and Customs Enforcement, enabling ICE to identify and arrest travelers at airports based on immigration status.

Keep reading

Bernie Sanders Wants To Pause New Data Centers To Stop the Economy From Growing Too Much

The United States is leading a global data center boom. Investors are plowing some $7 trillion into the infrastructure necessary to support AI development, with 40 percent of that investment happening here in the United States.

This boom in data center investment is so pronounced that many analysts argue it’s propping up an economy that’d otherwise be wobbling under the strain of tariffs and high borrowing costs.

Some skeptics credibly argue that the money flowing into AI research and the physical infrastructure needed to support it is a bubble that will eventually pop.

Unconvinced by the skeptics is Sen. Bernie Sanders (I–Vt.), who seems to believe that data center investment will generate large profits, produce technological innovations, and drive economy-wide productivity growth.

Therefore, he wants to shut it down.

In a video posted to Instagram, the socialist senator called for a federal moratorium on data center construction until our politicians can figure out just what the hell is going on.

According to Sanders, the development of artificial intelligence and robotics technologies powered by data centers “is moving very, very quickly, and we need to slow it down.”

He warns that the current boom, if left unchecked, could well end up enriching the already wealthy billionaires investing in it, automating away jobs, and powering a distracting and alienating technology.

A “moratorium will give democracy a chance to catch up with the transformative changes that we are witnessing and make sure the benefits of these technologies work for all of us,” Sanders concludes.

Given general bipartisan support for “winning the AI race” and the amount of growth being generated by data center investment, it’s unlikely that any such moratorium will come to pass.

The fact that Sanders is proposing it anyway reflects just how much anxiety he and other members of the socialist left feel whenever capitalism is working.

Whether it’s driverless cars or choices in deodorant brands, Sanders cannot stop worrying and learn to love it when capitalists make productive investments and give consumers what they want.

Any economic growth that is not planned by the bureaucrats and approved by the electorate is inherently suspicious and perhaps downright malicious.

Sanders’ call for a data center moratorium amounts to an attempt to prevent investment in this infrastructure from yielding productive fruit.

He’s worried that investors will reap profits from data center construction. Those same profits would be a signal that their investments were a prudent use of capital that’s driving real growth in the economy.

Likewise, the job automation Sanders worries about would be another sign that data center investments were well-placed. A primary purpose of capital investment and technological innovation is to shift more labor off the backs of human beings and onto machines.

Keep reading

Senators press toy makers on AI-powered toys exposing children to inappropriate content

“Many of these toys are… exposing children to inappropriate content, privacy risks, and manipulative engagement tactics.”

“These chatbots have encouraged kids to commit self-harm and suicide, and now your company is pushing them on the youngest children.”

US Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) sent a letter to the Chief Executive Officers of Little Learners Toys, Mattel, Miko, Curio Interactive, FoloToy, and Keyi Robot sounding the alarm on the use of artificial intelligence (AI) in their toys. The Senators demanded answers on what safeguards these companies have in place to protect kids from sexually explicit, violent, or otherwise inappropriate content.

AI Chatbots in Toys Pose Serious Risk to Children’s Healthy Development

“We write today to express our concern with the sale of toys powered by artificial intelligence (AI). These AI toys—specifically those powered by chatbots imbedded in everyday children’s toys like plushies, dolls, and other beloved toys—pose risks to children’s healthy development.

While AI has incredible potential to benefit children with learning and accessibility, experts have raised concerns about AI toys and the lack of research that has been conducted to understand the full effect of these products on our kids.

Many of these toys are not offering interactive play, but instead are exposing children to inappropriate content, privacy risks, and manipulative engagement tactics. These aren’t theoretical worst-case scenarios; they are documented failures uncovered through real-world testing, and they must be addressed.”

Keep reading

US Suspends $41 Billion Tech Deal with UK over Online Censorship Laws

The great transatlantic tech romance has hit the skids. What was sold as a landmark agreement binding Silicon Valley brains to British ambition has been shoved into neutral, all because Britain decided it quite fancies telling American machines what they are allowed to say.

Washington has now suspended the much-trumpeted US-UK technology agreement, a decision driven by mounting alarm over Britain’s new censorship law, the Online Safety Act.

The idea that a British regulator might fine or muzzle American firms has landed in Washington like a dropped wrench.

One participant in the talks put it bluntly, telling The Telegraph, “Americans went into this deal thinking Britain were going to back off regulating American tech firms but realized it was going to restrict the speech of American chatbots.”

The Online Safety Act gives Britain the power to fine companies it believes are enabling “harmful” or “hateful” speech, concepts elastic enough to stretch around just about anything if you pull hard enough.

The communications regulator Ofcom has not been shy about using these powers.

Enforcement notices have already landed on the desks of major American firms, even when their servers, staff, and coffee machines are nowhere near Britain.

From Washington’s perspective, this looks less like safety and more like Britain peering over the Atlantic with a ruler, ready to rap American knuckles.

The White House had been keen on the £31 billion ($41 billion) Tech Prosperity Deal, seeing it as a front door to closer ties on AI research and digital trade.

Instead, officials began to see the Online Safety Act as a mechanism for deciding what American platforms, and their algorithms, are allowed to say. Chatbots like ChatGPT or Elon Musk’s Grok suddenly looked like potential defendants in a British courtroom, accused of wrongthink.

Keep reading

YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers Watched By Millions

YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal.

The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views.

The channels have been replaced with the message: “This page isn’t available. Sorry about that. Try searching for something else.”

Screen Culture and KH Studio were approached for comment. They are based in India and Georgia, respectively.

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI.

The channels later returned to monetization after they started adding “fan trailer,” “parody,” and “concept trailer” to their video titles. But those caveats disappeared in recent months, prompting concern in the fan-made trailer community.

YouTube’s position is that the channels’ decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination.

“The monster was defeated,” one YouTuber told Deadline following the enforcement action.

Deadline’s investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers.

Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube’s algorithm by being early with fake trailers and constantly iterating on their videos.

For example, Screen Culture had created 23 versions of a trailer for The Fantastic Four: First Steps by March, some of which outranked the official trailer in YouTube search results. More recent examples include HBO’s new Harry Potter series and Netflix’s Wednesday.

Deadline’s deep dive into fake trailers revealed that instead of enforcing their copyrights against these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction. The studios declined to comment.

Disney properties featured prominently on Screen Culture and KH Studio. The Mouse House sent a cease-and-desist letter to Google last week, claiming that its AI training models and services infringe on its copyrights on a “massive scale.”

Keep reading

The Things Young Kids Are Using AI for Are Absolutely Horrifying

New research is pulling back the curtain on how large numbers of kids are using AI companion apps — and what it found is troubling.

A new report conducted by the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplays — and that violence, which can include sexual violence, drove more engagement than any other topic.

Drawing from anonymized data gathered from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.

Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as interactions involving “themes of physical violence, aggression, harm, or coercion” — that includes sexual or non-sexual coercion, the researchers clarified — as well as “descriptions of fighting, killing, torture, or non-consensual acts.”

Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors engaging with AI companions in conversations about violence wrote over a thousand words per day — a signal, the researchers argue, that violence is a powerful driver of engagement.

The report, which is awaiting peer review — and, to be fair, produced by a company in the business of marketing surveillance software to jittery parents — emphasizes how anarchic the chatbot market really is, and the need to develop a deeper understanding of how young users are engaging with conversational AI chatbots overall.

“We have a pretty big issue on our hands that I think we don’t fully understand the scope of,” Dr. Scott Kollins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research’s findings, “both in terms of just the volume, the number of platforms, that kids are getting involved in — and also, obviously, the content.”

“These things are commanding so much more of our kids’ attention than I think we realize or recognize,” Kollins added. “We need to monitor and be aware of this.”

One striking finding was that instances of violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content were 11-year-olds, for whom a staggering 44 percent of interactions took violent turns.

Sexual and romantic roleplay, meanwhile, also peaked in middle school-aged youths, with 63 percent of 13-year-olds’ conversations revealing flirty, affectionate, or explicitly sexual roleplay.

The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms continue to make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple suits brought by the parents of minor users alleging that the platform’s chatbots sexually and emotionally abused kids, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT maker OpenAI is currently being sued for the wrongful deaths of two teenage users who died by suicide after extensive interactions with the chatbot. (OpenAI is also facing several other lawsuits over death, suicide, and psychological harm to adult users.)

Keep reading

Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s interventionist penchant. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights — but a cherry-picking of rights from the EU Charter of Fundamental Rights that often conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene — but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sex Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, treating those communications as potential threats rather than as protected speech that enjoys a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU directive is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not exempt a free society from respecting other core values.

But, wait. There’s more…

Keep reading