China’s Military AI and Biotechnology Directed at the United States

Soldiers in a brigade attached to the 83rd Group Army of the Chinese People’s Liberation Army conduct virtual reality exercises. Photo: Screenshot from China Central Television

According to a congressional report, the People’s Liberation Army continues to exploit military-civil fusion to integrate commercial and academic research into military systems. Military-civil fusion is China’s national strategy to merge civilian technology, research institutions, and industry with the defense sector in order to build a world-class military.

The strategy aligns commercial innovation with military requirements across fields ranging from artificial intelligence to semiconductors, pooling state and private resources to accelerate military development. Chinese authorities describe military-civil fusion as a core component of comprehensive national power and a central driver of long-term military modernization.

Through state laboratories, funding programs, conferences, and industrial parks, China has ensured sustained private-sector participation in this effort. As a result, it has made significant advances in artificial intelligence and large language models that underpin many emerging PLA technologies. AI reasoning systems support cyber operations, command decision-making, and influence campaigns, while also enabling autonomous and unmanned platforms, drone swarms, and loyal wingman UAVs.

These capabilities increasingly intersect with developments in quantum computing, quantum sensing, and quantum communications, which China has identified as priorities for national security and future warfare.

Chinese leader Xi Jinping has described quantum technologies as drivers of industrial transformation, and Beijing is investing in post-quantum cryptography, military applications of quantum sensing, and ground- and space-based infrastructure for a global quantum communications network with both civilian and military uses.

Quantum communications support nuclear command, control, and communications by enabling hardened and interception-resistant links, while quantum sensing has potential applications in anti-submarine warfare by enabling detection methods that do not rely on active sonar.

Semiconductor self-sufficiency remains a parallel strategic objective. In 2024, firms including Semiconductor Manufacturing International Corp and Huawei Technologies received substantial local government funding to accelerate chip indigenization. Although China continues to lag the West in the most advanced GPUs, it is pursuing alternative pathways through nontraditional microchip technologies, including photonic components developed by state research institutes.

Domestic chip production underpins military resilience by enabling continued weapons manufacturing under sanctions and securing supply chains for missiles, drones, and radar systems. Alternative chip architectures support AI processing and reduce reliance on advanced Western GPUs, sustaining production capacity during conflict.

Keep reading

Is The Gloom And Doom About An AI Dystopia Justified?

“Artificial intelligence does the work of many minds at once. Will human creativity flourish or fail?”

“Artificial intelligence will make us useless slaves. We must stop these abuses before they start!”

“AI will create killer robots! We’re doomed!”

I have heard various versions of the above concerns regarding the rise of robots, the growth of artificial intelligence, and the broader moral and ethical dilemmas facing humanity as technological innovation advances—and then accelerates.

The gloom around AI is understandable but incorrect.

Technological innovation has always served as a winnowing process: old jobs fall away and new ones take their place. Some career paths disappear, but new opportunities emerge.

No matter how sophisticated, artificial intelligence cannot replace human intelligence, wisdom, ingenuity, and entrepreneurship. AI can help hone specialized skills for those who want to preserve particular fields of craftsmanship, but craftsmen are not going away.

As Canadian commentator JJ McCullough has observed, AI makes it easier to generate templates and ideas, but the quality and taste of the resulting pictures, objects, and ideas remain, on the whole, cringeworthy. A machine cannot inspire, nor can it replicate the inspiration of the human spirit. Whatever stories, poems, or other art may come out of a ChatGPT prompt, their style and substance will never satisfy the human mind. Nor can the canned creations of Grok or Meta’s AI programs reflect the inner tensions of man’s search for place and meaning in his world, including the scenes he depicts. While AI can generate pictures or render ideas as pictures, it cannot create or enhance the contrasts, shades, and shadows that transform the ordinary into the extraordinary.

Of course, a growing number of artists and intellectuals are staunchly opposed to AI. They believe that all creative works deserve assessment and praise based on the amount of effort put into their creation. Here lies the left’s fundamental misunderstanding of the marketplace: value is determined not by labor but by the interest and valuation of the consumer. No creation, good, or service has value in and of itself; its value rests on the satisfaction it provides. That is the subjective theory of value of Austrian economics.

Animators fear that AI will take their jobs away. AI might make it easier to produce films, but the fundamental characters, the templates for individual performances, and the ideas will still have to come from people. The stunning beauty of Walt Disney’s “Snow White” still rivals the computer animation of DreamWorks.

Instead of limiting or bankrupting artists, AI will push the general public to discern quality, and artists will strive to meet that standard. The public will gain a greater appreciation for the abstract and the avant-garde. That’s a win-win for artists.

We cannot predict how broad and commanding man’s ingenuity will prove to be. AI gives man a means to be more creative more quickly, but it cannot predict or anticipate the future wants and needs of the public.

Man and his search for competence, recognition, and meaning will not disappear, and our lives will improve in the search for answers.

Another fear about rapid mechanization and the advancement of AI is that unemployment will become so widespread that governments and peoples will have to invest in universal basic incomes.

History suggests otherwise. Industries that promoted the upkeep and well-being of horses fell into decline with the arrival of the automobile, yet animal enthusiasts, performers, and general-interest equestrians still own horses, ride them, and enjoy their company. The horse industry shrank, but it became more specialized.

But specialized careers employ fewer people. Where’s the comfort, then? Consider the moment when banks switched to automation in the 1970s. There was widespread fear that automated teller machines, ATMs, would put thousands of people out of work. The opposite happened. Banks shifted their tellers toward customer service, and with the savings, these financial firms opened more branches and ended up hiring more people! Free enterprise does entail creative destruction, but creative proliferation always follows.

Furthermore, it’s rather arrogant for labor leaders and liberal pundits to claim that “there will be no jobs left.” Human wants and needs are constantly changing. Steve Jobs created the Internet-connected portable phone before there was a thought, let alone a want, for such a device. Once he unveiled the nifty gadget, everyone had to have one. Innovations often create the need because of the convenience and agility they provide to the consumer.

Even now, reports are listing the jobs that AI cannot replace. Human beings will always have employment opportunities.

Keep reading

The AI Arms Race Is Cracking Open The Nuclear Fuel Cycle

  • The abstract “cloud” of artificial intelligence possesses a massive, structural demand for 24/7 “baseload” power that is equivalent to adding Germany’s entire power grid by 2026, a need intermittent renewables cannot meet.
  • Decades of underinvestment have resulted in a widening uranium supply deficit, with mined uranium expected to meet less than 75% of future reactor needs and an incentive price of $135/lb required to restart mothballed mines.
  • Big Tech hyperscalers are privatizing energy security by locking in clean baseload nuclear power via long-term agreements, effectively making the public grid’s “service” secondary to the “compute-ready” requirements of major platforms.

We are seeing a violent collision between two worlds: the high-speed, iterative world of artificial intelligence and the slow, grinding, capital-intensive world of nuclear physics. 

Data from a survey of over 600 global investors reveals that 63% now view AI electricity demand as a “structural” shift in nuclear planning. This isn’t a temporary spike or a speculative bubble. It is the physical footprint of every Large Language Model (LLM) query finally showing up on the global balance sheet.

For years, the energy narrative was dominated by “efficiency.” We were told that better chips would offset higher usage. That era is over. Generative AI doesn’t just use data; it incinerates energy to create it.

Why the “Efficiency” Narrative Failed

The inverted reality of AI is that the more efficient we make the chips, the more chips we deploy, and the more complex the models become. This is Jevons Paradox playing out in real time across the data centers of Northern Virginia and Singapore.
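To make the paradox concrete, here is a toy back-of-the-envelope sketch in Python; every number is invented for illustration, and the only assumption is that demand rebounds faster than efficiency improves:

```python
# Toy illustration of Jevons Paradox (all numbers invented):
# chips get 2x more energy-efficient, but cheaper compute
# triggers 3x more usage, so total energy consumption rises.
energy_per_query_old = 1.0   # arbitrary energy units per query
energy_per_query_new = 0.5   # after a 2x efficiency gain

queries_old = 100            # baseline workload
queries_new = 300            # workload after the demand rebound

print(energy_per_query_old * queries_old)  # 100.0 (before)
print(energy_per_query_new * queries_new)  # 150.0 (after: up 50%)
```

Whenever the rebound in demand outpaces the efficiency gain, total consumption climbs; that is the pattern the data centers are tracing.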

When you look at the energy density required for an AI hyperscale center, you aren’t looking at a traditional office building. You are looking at a facility that pulls as much power as a mid-sized city, but does so with a 99.999% uptime requirement.

Traditional demand models simply didn’t account for a single industry deciding to double its power footprint in less than five years. S&P Global Energy recently highlighted that data center electricity consumption could hit 2,200 terawatt-hours (TWh). 
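For a sense of scale, a rough comparison helps; the figure for Germany below is a commonly cited ballpark assumed here for illustration, not a number from the S&P report:

```python
# Rough scale check (assumed figure: Germany consumes ~500 TWh/year).
projected_datacenter_twh = 2200   # S&P Global Energy projection cited above
germany_twh_per_year = 500        # assumption for illustration

print(projected_datacenter_twh / germany_twh_per_year)  # ~4.4 "Germanys"
```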

Intermittent renewables, the darlings of the corporate ESG report, cannot provide the 24/7 “baseload” these machines require.

The hyperscalers have realized that if they want to dominate AI, they need to secure physical atoms before the other guy does.

Keep reading

Artificial Intelligence In The Classroom Destroys Actual Intelligence In Students

I always assumed that before AI destroyed our humanity, we’d at least put up a fight. No, I’m not talking about AI gaining consciousness and physically enslaving us (though I’m certainly not ruling that out as a possibility) — but it is alarming how quick many are to accept AI usage not just in their daily lives, but in education.

As an educator, I’ve heard high school teachers and college professors alike defend teaching AI usage in the classroom: “The technology isn’t going away, so kids have to learn how to use it responsibly.” “AI can be a useful tool.” “Learning how to write the right prompts is a marketable skill.” They say we should not only allow but encourage students to use AI to brainstorm ideas, write outlines, and provide feedback on their work.

On the surface, these suggestions can seem benign. Our society is pushing the idea that AI usage is not only inevitable but good. “You’re a writer,” a silky voice in an advertisement for AI software sings, “even if you are the kind who relies on AI.” Okay, so that’s not the exact verbiage, but that’s the idea we’re being sold. We’re reassured that AI can simply be a legitimate “tool.” You are a writer even if you use an AI generator. You are an artist just by typing prompts. You are a creator, although it’s the algorithms doing the creating.

If the goal is simply to produce outcomes, one could argue that AI usage should not just be tolerated but encouraged. But education shouldn’t be about producing outcomes – whether a sparkling essay or a gripping short story – but about shaping souls. The purpose of writing isn’t to instruct a prompt or even to produce a quality paper. The purpose is to become a strong thinker, someone who enriches the lives of others, no matter their profession.

Each and every step of the struggle it takes to write is essential. Yes, it can all be arduous and time-consuming. As a writer, I get how hard it is and how tempting it might be to take shortcuts. But doing so cheats oneself out of growth and intellectual payoff. Outsourcing parts of the process to algorithms and machines is outsourcing the rewards of doing one’s own thinking. Organizing ideas, refining word choices, and thinking about tone are all skills that many citizens in this nation lack, and it’s often apparent in our chaotic, senseless public discourse. These are not steps to be skipped over with a “tool,” but things people benefit from learning if they value reason. Strong writing is strong thinking.

But these aren’t just my own opinions. A recent MIT study shows that AI usage decreases cognitive functions like critical thinking. It seems rather odd to insist that something shown to weaken our brains should be introduced into institutions of learning, doesn’t it?

Many argue that in order to thrive in today’s job market, young people need to master the skill of “writing prompts.” The assumption is that it’s a great skill to learn how to tell a robot to do a job for you; a skill so great, in fact, that we need to send kids to school for it. For decades, educators have argued that kids need screen time to prepare them for today’s job market, acting as if using the internet were a skill that requires years of training when in reality three-year-olds naturally become experts. Let us first focus on developing the minds of the youth — something best done without AI assistance — and then let them apply those skills in the workplace as needed. Students should aspire to be more than mere “prompt writers”: minds capable of thinking, reasoning, and persevering.

Keep reading

‘Revenge porn’ betrayal of thousands of women: 40,000 cases identified but only three per cent of suspects charged as new AI apps help abusers humiliate victims

Women are being ‘failed’ by revenge porn laws because most perpetrators are avoiding justice, official data suggests.

Just 3 per cent of suspects were charged in 40,000 cases probed by police over the past five years.

The problem – in which intimate private photos or video footage are shared without consent by a former partner – is likely to grow, as experts warn that AI apps capable of generating lifelike fake sexual images will make it easier for abusers to humiliate victims.

Thousands of distressing cases were left unsolved or closed over the five-year period due to a lack of crucial evidence, the Home Office statistics revealed.

The data showed that 40,110 offences were reported to the police – roughly one every hour. Yet just 3.2 per cent of cases ended with a criminal charge.

Shadow Home Secretary Chris Philp said last night: ‘The statistics are absolutely staggering. We must do more to ensure perpetrators are held accountable and victims are properly supported so the system doesn’t fail women.

‘We were unapologetic in government about standing up for women and protecting victims of domestic abuse. 

‘We toughened up sentences for rapists and stalkers, outlawed upskirting and revenge porn, and made violence against women and girls a national policing priority – but it is clear more needs to be done.’

In 2023, Love Island star Georgia Harrison took her ex-boyfriend to court for uploading sexual footage of her online.

Ms Harrison, 31, gave evidence against Stephen Bear, 35, at Chelmsford Crown Court, where he was jailed for 21 months for voyeurism and two counts of disclosing private sexual images without consent.

Experts fear the number of cases could soar as culprits use so-called ‘nudification’ apps, which can edit an ordinary photograph of a person to make it appear that they are naked. 

Other powerful AI software can be used to insert a victim’s face into sexually explicit pictures or videos – known as deepfakes – such as the high-profile clips of pop star Taylor Swift that caused outrage last year.

Keep reading

China Deploys Humanoid AI Robots to Guard Border

China is dispatching a new kind of recruit to its bustling border with Vietnam: humanoid robots. The UBTECH Walker S2 machines will patrol the Fangchenggang crossing in a high-stakes trial of AI technology.

A video released by UBTECH shows hundreds of these bipedal bots marching in formation, “staring” out at the world with two eye-sized cameras mounted over digital displays. Sophisticated sensors and software help them balance and navigate crowded spaces, and the droids also autonomously swap their own batteries to minimize downtime. At the end of the video, the bots file into multiple Chinese shipping containers and give a salute.

Border officials claim the robots’ roles will be diverse, from guiding passenger lines to checking cargo IDs and seals. The deployment is part of China’s national strategy to lead the global robotics race; footage released earlier this month shows a different model of android soldier deployed near the country’s border with India.

Keep reading

Bill Gates Thinks Anything He Disagrees with Is ‘Misinformation’ that Should Be Policed by AI

Billionaire tech tycoon Bill Gates has expressed concern about the impact of “misinformation” on future generations, claiming that AI should be used to police ideas he disagrees with online. The creepy Microsoft founder has a long history of taking radical positions on issues such as climate alarmism, and he would like to stifle dissent against them.

Fortune reports that in a recent interview, Bill Gates discussed what he sees as the growing problem of misinformation and its potential to negatively affect the lives of his children and future generations. Gates, who reinvented himself as a creepy advocate for climate alarmism and other leftist causes, noted that the spread of false information online has become a significant concern for him and his family.

The billionaire’s comments came in light of a recent incident involving his daughter, Phoebe Gates, who claims she was subjected to online harassment. This experience led Gates to reflect on the broader implications of misinformation and its ability to cause harm to individuals and society as a whole.

“Seeing my daughter targeted by false claims and harassment online was a wake-up call,” Gates said. “It made me realize that the problem of misinformation is not just about the present, but it’s also about the future we’re leaving for our children.”

Gates argues that the spread of what he considers to be “misinformation” can have far-reaching consequences, from undermining public trust in institutions to hindering progress on critical issues such as public health and climate change. He noted that the proliferation of false information online has the potential to erode the foundations of democracy and create a more polarized and divided society.

“Misinformation is a complex problem that requires a multi-faceted approach,” Gates explained. “We need to invest in media literacy programs, support fact-checking organizations, and encourage responsible behavior from social media platforms. But we also need to foster a culture of critical thinking and healthy skepticism, so that people are better equipped to distinguish between credible information and false claims.”

Keep reading

UK Police Pilot AI System to Track “Suspicious” Driver Journeys

Police forces across Britain are experimenting with artificial intelligence that can automatically monitor and categorize drivers’ movements using the country’s extensive number plate recognition network.

Internal records obtained by Liberty Investigates and The Telegraph reveal that three of England and Wales’s nine regional organized crime units are piloting a Faculty AI-built program designed to learn from vehicle movement data and detect journeys that algorithms label “suspicious.”

For years, the automatic number plate recognition (ANPR) system has logged more than 100 million vehicle sightings each day, mostly for confirming whether a specific registration has appeared in a certain area.

The new initiative changes that logic entirely. Instead of checking isolated plates, it teaches software to trace entire routes, looking for patterns of behavior that resemble the travel of criminal networks known for “county lines” drug trafficking.

The project, called Operation Ignition, represents a change in scale and ambition.

Unlike traditional alerts that depend on officers manually flagging “vehicles of interest,” the machine learning model learns from past data to generate its own list of potential targets.
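Nothing about Faculty AI’s actual model or features has been made public, but the broad approach described (learning what routine journeys look like and flagging outliers) resembles standard unsupervised anomaly detection. A minimal sketch, with every feature name, model choice, and number assumed purely for illustration:

```python
# Hypothetical sketch of journey-level anomaly scoring; the real
# system's features, model, and thresholds are not public.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed per-journey features: distance (km), duration (hours),
# police-force areas crossed, share of sightings between 00:00-05:00.
routine_journeys = np.column_stack([
    rng.normal(30, 10, 500),       # mostly short local trips
    rng.normal(0.8, 0.3, 500),
    rng.integers(1, 3, 500),
    rng.uniform(0.0, 0.1, 500),
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(routine_journeys)

# A long, cross-regional overnight run stands out from the baseline.
candidate = np.array([[420.0, 5.5, 4, 0.9]])
print(model.decision_function(candidate))  # negative score => flagged
```

An unsupervised detector like this needs no labeled “criminal journeys”; it only learns what ordinary traffic looks like, which also means its “suspicious” label is only as good as the data it was trained on.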

Official papers admit that the process could involve “millions of [vehicle registrations],” and that the information gathered may guide future decisions about the ethical and operational use of such technologies.

What began as a Home Office-funded trial in the North West covering Merseyside, Greater Manchester, Cheshire, Cumbria, Lancashire, and North Wales has now expanded into three regional crime units.

Keep reading

Bernie Sanders Wants To Pause New Data Centers To Stop the Economy From Growing Too Much

The United States is leading a global data center boom. Investors are plowing some $7 trillion into the infrastructure necessary to support AI development, with 40 percent of that investment happening here in the United States.

This boom in data center investment is so pronounced that many analysts argue it’s propping up an economy that’d otherwise be wobbling under the strain of tariffs and high borrowing costs.

Some skeptics credibly argue that the money flowing into AI research and the physical infrastructure needed to support it is a bubble that will eventually pop.

Unconvinced by the skeptics is Sen. Bernie Sanders (I–Vt.), who seems to believe that data center investment will generate large profits, produce technological innovations, and drive economy-wide productivity growth.

Therefore, he wants to shut it down.

In a video posted to Instagram, the socialist senator called for a federal moratorium on data center construction until our politicians can figure out just what the hell is going on.

According to Sanders, the development of artificial intelligence and robotics technologies powered by data centers “is moving very, very quickly, and we need to slow it down.”

He warns that the current boom, if left unchecked, could well end up enriching the already wealthy billionaires investing in the technology, automating away jobs, and powering a distracting and alienating technology.

A “moratorium will give democracy a chance to catch up with the transformative changes that we are witnessing and make sure the benefits of these technologies work for all of us,” Sanders concludes.

Given general bipartisan support for “winning the AI race” and the amount of growth being generated by data center investment, it’s unlikely that any such moratorium will come to pass.

The fact that Sanders is proposing it anyway reflects just how much anxiety he and other members of the socialist left feel whenever capitalism is working.

Whether it’s driverless cars or choices in deodorant brands, Sanders cannot stop worrying and learn to love it when capitalists make productive investments and give consumers what they want.

Any economic growth that is not planned by the bureaucrats and approved by the electorate is inherently suspicious and perhaps downright malicious.

In effect, Sanders’ call for a data center moratorium would prevent investment in this infrastructure from yielding productive fruit.

He’s worried that investors will reap profits from data center construction. Those same profits would be a signal that their investments were a prudent use of capital that’s driving real growth in the economy.

Likewise, the job automation Sanders worries about would be another sign that data center investments were well-placed. A primary purpose of capital investment and technological innovation is to shift more labor off the backs of human beings and onto machines.

Keep reading

Senators press toy makers on AI-powered toys exposing children to inappropriate content

“Many of these toys are… exposing children to inappropriate content, privacy risks, and manipulative engagement tactics.”

“These chatbots have encouraged kids to commit self-harm and suicide, and now your company is pushing them on the youngest children.”

US Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) sent a letter to the chief executive officers of Little Learners Toys, Mattel, Miko, Curio Interactive, FoloToy, and Keyi Robot sounding the alarm on the use of artificial intelligence (AI) in their toys. The senators demanded answers on what safeguards these companies have in place to protect kids from sexually explicit, violent, or otherwise inappropriate content.

AI Chatbots in Toys Pose Serious Risk to Children’s Healthy Development

“We write today to express our concern with the sale of toys powered by artificial intelligence (AI). These AI toys—specifically those powered by chatbots embedded in everyday children’s toys like plushies, dolls, and other beloved toys—pose risks to children’s healthy development.

While AI has incredible potential to benefit children with learning and accessibility, experts have raised concerns about AI toys and the lack of research that has been conducted to understand the full effect of these products on our kids.

Many of these toys are not offering interactive play, but instead are exposing children to inappropriate content, privacy risks, and manipulative engagement tactics. These aren’t theoretical worst-case scenarios; they are documented failures uncovered through real-world testing, and they must be addressed.”

Keep reading