Artificial Intelligence In The Classroom Destroys Actual Intelligence In Students

I always assumed that before AI destroyed our humanity, we’d at least put up a fight. No, I’m not talking about AI gaining consciousness and physically enslaving us (though I’m certainly not ruling that out as a possibility), but it is alarming how quick many are to accept AI usage not just in their daily lives, but in education.

As an educator, I’ve heard high school teachers and college professors alike defend teaching AI usage in the classroom: “The technology isn’t going away, so kids have to learn how to use it responsibly.” “AI can be a useful tool.” “Learning how to write the right prompts is a marketable skill.” They say we should not only allow but encourage students to use AI to brainstorm ideas, write outlines, and provide feedback on their work.

On the surface, these suggestions can seem benign. Our society is pushing the idea that AI usage is not only inevitable but good. “You’re a writer,” a silky voice in an advertisement for AI software sings, “even if you are the kind who relies on AI.” Okay, so that’s not the exact verbiage, but that’s the idea we’re being sold. We’re reassured that AI can simply be a legitimate “tool.” You are a writer even if you use an AI generator. You are an artist just by typing prompts. You are a creator, although it’s the algorithms doing the creating.

If the goal is simply to produce outcomes, one could argue that AI usage should not just be tolerated but encouraged. But education shouldn’t be about producing outcomes – whether a sparkling essay or a gripping short story – but about shaping souls. The purpose of writing isn’t to type a prompt or even to produce a quality paper. The purpose is to become a strong thinker, someone who enriches the lives of others no matter his or her profession.

Each and every step of the struggle it takes to write is essential. Yes, it can all be arduous and time-consuming. As a writer, I get how hard it is and how tempting it might be to take shortcuts. But doing so is cheating oneself out of growth and intellectual payoff. Outsourcing parts of the process to algorithms and machines is outsourcing the rewards of doing one’s own thinking. Organizing ideas, refining word choices, and thinking about tone are all skills that many citizens in this nation lack, and it’s often apparent in our chaotic, senseless public discourse. These are not steps to be skipped over with a “tool,” but rather things people benefit from learning if they value reason. Strong writing is strong thinking.

But these thoughts aren’t just my own opinions. A recent MIT study shows that AI usage decreases cognitive functions like critical thinking. It seems rather odd to insist that something proven to weaken our brains should be introduced into institutions of learning, doesn’t it?

Many argue that in order to thrive in today’s job market, young people need to master the skill of “writing prompts.” The assumption is that it’s a great skill to learn how to tell a robot to do a job for you; a skill so great, in fact, that we need to send kids to school for it. For decades, educators have argued that kids need screen time to prepare them for today’s job market. They acted as if using the internet were a skill that needs years of training, when in reality three-year-olds naturally become experts. Let us first focus on developing the minds of the youth — something best done without AI assistance — and then let them use those skills in the workplace as needed. Students should aspire to be more than mere “prompt writers”; they should aspire to be minds capable of thinking, reasoning, and persevering.

Keep reading

Israel’s ‘Game-Changing’ Iron Beam Laser System Enters Combat, Offering U.S. Blueprint for Future Missile Defense

Israel has begun field deployment of its “game-changing” Iron Beam laser air defense system, according to the country’s Defense Ministry, which confirmed that the technology is now operating as part of Israel’s layered missile defense network and that its combat performance is expected to inform U.S. missile defense planning.

The announcement, made Sunday, follows months of phased rollouts and testing by Israel’s defense establishment and domestic industry partners. Defense officials said the system is already being positioned at multiple sites nationwide.

Senior defense officials have described the high-energy laser as a system that will “fundamentally change the rules of engagement” as Israel confronts sustained rocket and drone threats from Iran and its regional terror proxies.

Keep reading

‘Revenge porn’ betrayal of thousands of women: 40,000 cases identified but only three per cent of suspects charged as new AI apps help abusers humiliate victims

Women are being ‘failed’ by revenge porn laws because most perpetrators are avoiding justice, official data suggests.

Just 3 per cent of suspects were charged in 40,000 cases probed by police over the past five years.

The problem – in which intimate private photos or video footage are shared without consent by a former partner – is likely to grow, as experts warn that AI apps capable of generating lifelike fake sexual images will make it easier for abusers to humiliate victims.

Thousands of distressing cases were left unsolved or closed over the five-year period due to a lack of crucial evidence, the Home Office statistics revealed.

The data showed that 40,110 offences were reported to the police – roughly one every hour. Yet just 3.2 per cent of cases ended with a criminal charge.

Shadow Home Secretary Chris Philp said last night: ‘The statistics are absolutely staggering. We must do more to ensure perpetrators are held accountable and victims are properly supported so the system doesn’t fail women.

‘We were unapologetic in government about standing up for women and protecting victims of domestic abuse. 

‘We toughened up sentences for rapists and stalkers, outlawed upskirting and revenge porn, and made violence against women and girls a national policing priority – but it is clear more needs to be done.’

Love Island star Georgia Harrison took her ex-boyfriend to court in 2023 for uploading sexual footage of her online.

Ms Harrison, 31, gave evidence against Stephen Bear, 35, at Chelmsford Crown Court, where he was jailed for 21 months for voyeurism and two counts of disclosing private sexual images without consent.

Experts fear the number of cases could soar as culprits use so-called ‘nudification’ apps, which can edit an ordinary photograph of a person to make it appear that they are naked. 

Other powerful AI software can be used to insert a victim’s face into sexually explicit pictures or videos – known as deepfakes – such as the high-profile clips of pop star Taylor Swift that caused outrage last year.

Keep reading

China Deploys Humanoid AI Robots to Guard Border

China is dispatching a new kind of recruit to its bustling border with Vietnam: humanoid robots. The UBTECH Walker S2 machines will patrol the Fangchenggang crossing in a high-stakes trial of AI technology.

The above video from UBTECH shows hundreds of these bipedal bots marching in formation, “staring” out at the world with two eye-sized cameras mounted over digital displays. Sophisticated sensors and software help them balance and navigate crowded spaces, and the droids also autonomously swap their own batteries to minimize downtime. At the end of the video, the bots file into multiple Chinese shipping containers and give a salute.

Border officials claim the robots’ roles will be diverse, from guiding passenger lines to checking cargo IDs and seals. The deployment is part of China’s national strategy to lead the global robotics race; footage released earlier this month shows a different model of android soldier deployed near the country’s border with India.

Keep reading

Bill Gates Thinks Anything He Disagrees with Is ‘Misinformation’ that Should Be Policed by AI

Billionaire tech tycoon Bill Gates has expressed concern about the impact of “misinformation” on future generations, claiming that AI should be used to police ideas he disagrees with online. The creepy Microsoft founder has a long history of taking radical positions on issues including climate alarmism that he’d like to stifle dissent against.

Fortune reports that in a recent interview, Bill Gates discussed what he sees as the growing problem of misinformation and its potential to negatively affect the lives of his children and future generations. Gates, who reinvented himself as a creepy advocate for climate alarmism and other leftist causes, noted that the spread of false information online has become a significant concern for him and his family.

The billionaire’s comments came in light of a recent incident involving his daughter, Phoebe Gates, who claims she was subjected to online harassment. This experience led Gates to reflect on the broader implications of misinformation and its ability to cause harm to individuals and society as a whole.

“Seeing my daughter targeted by false claims and harassment online was a wake-up call,” Gates said. “It made me realize that the problem of misinformation is not just about the present, but it’s also about the future we’re leaving for our children.”

Gates argues that the spread of what he considers to be “misinformation” can have far-reaching consequences, from undermining public trust in institutions to hindering progress on critical issues such as public health and climate change. He noted that the proliferation of false information online has the potential to erode the foundations of democracy and create a more polarized and divided society.

“Misinformation is a complex problem that requires a multi-faceted approach,” Gates explained. “We need to invest in media literacy programs, support fact-checking organizations, and encourage responsible behavior from social media platforms. But we also need to foster a culture of critical thinking and healthy skepticism, so that people are better equipped to distinguish between credible information and false claims.”

Keep reading

Canada Is Building The Wrong Army For The War That Is Coming

The next major land war will not reward elegance, boutique modernization, or the comforting belief that advanced technology can replace mass and endurance. It will expose armies built on fragile assumptions. Concealment has largely disappeared. Attrition has returned as a central fact of combat. Sustainment shapes outcomes as decisively as firepower. Yet the Canadian Army remains organized, equipped, and intellectually anchored to a vision of warfare that belonged to yesterday’s world. The problem is not a simple modernization lag or a lack of new kit. It is a deeper conceptual failure—a refusal to absorb how radically and irreversibly the character of land warfare has changed.

That is the larger point. The key change is not this or that technology. The battlespace itself has changed. Artificial intelligence, proliferated drones, commercial satellites, autonomous strike systems, and persistent ISR have combined into a transparent, data-rich battlespace where everyone is on the move, movement is tracked instantly, concentrations are targeted rapidly, and supply lines are targeted as soon as they begin to form—an environment already documented in assessments of modern conflict. An army that cannot scatter, regenerate while under fire, and sustain itself while under persistent observation is not going to muddle through. It is going to break.

Transparency and the End of Concealment

Western armies have operated on the assumptions of concealment and intermittent detection for a generation. Those assumptions are no longer valid. The contemporary battlespace is full of aerial surveillance, open-source commercial satellite imagery, digital emissions that reveal every vehicle and headquarters location, and loitering munitions that make ground above those locations perpetually contested—patterns captured in recent operational analyses.

The issue is time: the time between being discovered and being targeted. The time between when a headquarters can command and when it becomes a targeting point. The time between declaring a movement and becoming a target.

Survival requires dispersion, deception, mobility, and an entire operating paradigm built on the idea that you are observed all the time. The Canadian Army knows about the emergence of drones, ISR, and digital exposure, but it has not yet internalized the ways that they change land warfare’s fundamentals.

Attrition Has Returned—and Canada Is Not Ready

Precision fires promised surgical, inexpensive war. In reality, the ability to strike targets more often, more reliably, and more predictably has intensified attrition. Ukraine has demonstrated the scale of this shift: modern war is industrial, not surgical. It consumes people, equipment, ammunition, drones, and spare parts at rates far beyond what most Western forces planned for in peacetime, as shown by studies of wartime industrial demand.

The Canadian Army is not designed for this reality. It is small and brittle. It is optimized for controlled, expeditionary contributions, not for open-ended, high-intensity conflict. Ammunition stocks are low. Maintenance capacity is thin. Replacement cycles are slow. Mobilization—across industry, reserve forces, and training pipelines—is largely theoretical, even as official modernization documents highlight the fragility of the current model.

You can have a small and lethal army if it is small and lethal by design and deliberate choice. You cannot get away with a small, hollow, and unprepared army if it has to fight for extended periods. In an attritional war, those deficiencies are decisive.

Sustainment as a Front-Line Fight

The rise of long-range strike, drones, and cyber means that the old rear area is no more. Supply lines are now a front-line fight from start to finish. Supply depots, railheads, ports, repair facilities, and fuel infrastructure are all high-priority targets. If an enemy cannot stop forward brigades, it will attempt to starve them. Analyses of modern logistics under fire emphasize that industrial capacity and resilient supply networks—not efficiency—determine strategic endurance.

An army for the future must be able to fight under conditions of intermittent resupply, contested and damaged infrastructure, disrupted and overloaded communications, and near-constant threats to supply lines. Planning and organization must prioritize resilience, redundancy, and regeneration rather than peacetime efficiency and timeliness.

The Canadian Army still plans as if reliable resupply were a given and rear areas could stay intact. The moment a capable adversary enters the fight, those assumptions are shattered.

Keep reading

MIT scientist poised to upend fossil fuel industry before assassination linked to Brown University shooter

The murdered professor from the Massachusetts Institute of Technology was on the brink of revolutionizing the energy sector and upending fossil fuel use as we know it. 

Nuno Loureiro, 47, was gunned down at his home in the Boston suburb of Brookline on Monday.

Authorities believe that Claudio Neves Valente, the alleged gunman who carried out the mass shooting at Brown University, may also have assassinated Loureiro, but the investigation is still ongoing.

Before his death, Loureiro was leading MIT’s efforts to revolutionize energy production by developing a game-changing clean power source that needs just a fraction of the fuel that current machines and vehicles use today.

His team’s research at MIT’s Plasma Science and Fusion Center (PSFC) centered on plasma physics, the study of super-hot, ionized gases, and how to apply them to fusion energy, a promising clean power source.

Fusion provides what scientists call ‘baseload electricity,’ a steady supply of power 24/7, using tiny amounts of fuel with no air pollution or climate-warming emissions, unlike carbon dioxide-producing fossil fuels.

A breakthrough in this field could disrupt the trillion-dollar fuel industry by reducing demand for oil, gas, and coal, especially for generating power and transportation. High-demand users like data centers could also switch to fusion for reliable, green energy.

‘This is a very advanced technology, and whatever nation masters it first is going to have an incredible advantage,’ Loureiro said on December 8.

Keep reading

The Surveillance State Is Making a Naughty List—and You’re On It

The Surveillance State is making a naughty list, and we’re all on it.

Unlike Santa’s naughty list, however, the consequences of landing on the government’s “naughty list” are far more severe than a stocking full of coal. They can include heightened surveillance, loss of privacy, travel restrictions, financial scrutiny, police encounters, or being flagged as a potential threat—often without notice, explanation, or recourse.

What was once dismissed as a joke—“Santa is watching”—has morphed into a chilling reality. Instead of elves, the watchers are data brokers, intelligence agencies, predictive algorithms, and fusion centers. Instead of a naughty-or-nice list, Americans are sorted into databases, risk profiles, and threat assessments—lists that never disappear.

The shift is subtle but profound.

Innocence is no longer presumed.

Everyone is watched. Everyone is scored. Everyone is a potential suspect.

This is the surveillance state in action.

Today’s surveillance state doesn’t require suspicion, a warrant, or probable cause. It is omnipresent, omniscient, and inescapable.

Your smartphone tracks your location. Your car records your movements. License plate readers log when and where you drive. Retail purchases create detailed consumer profiles. Smart speakers listen to everything you say. Home security cameras observe not just your property, but your neighbors, delivery drivers, and anyone who passes by.

In a dramatic expansion of surveillance reach, the Transportation Security Administration now shares airline passenger lists with Immigration and Customs Enforcement, enabling ICE to identify and arrest travelers at airports based on immigration status.

Keep reading

Bernie Sanders Wants To Pause New Data Centers To Stop the Economy From Growing Too Much

The United States is leading a global data center boom. Investors are plowing some $7 trillion into the infrastructure necessary to support AI development, with 40 percent of that investment happening here in the United States.

This boom in data center investment is so pronounced that many analysts argue it’s propping up an economy that’d otherwise be wobbling under the strain of tariffs and high borrowing costs.

Some skeptics credibly argue that the money flowing into AI research and the physical infrastructure needed to support it is a bubble that will eventually pop.

Unconvinced by the skeptics is Sen. Bernie Sanders (I–Vt.), who seems to believe that data center investment will generate large profits, produce technological innovations, and drive economy-wide productivity growth.

Therefore, he wants to shut it down.

In a video posted to Instagram, the socialist senator called for a federal moratorium on data center construction until our politicians can figure out just what the hell is going on.

According to Sanders, the development of artificial intelligence and robotics technologies powered by data centers “is moving very, very quickly, and we need to slow it down.”

He warns that the current boom, if left unchecked, could well end up enriching the already wealthy billionaires investing in the technology, automating away jobs, and powering a distracting and alienating technology.

A “moratorium will give democracy a chance to catch up with the transformative changes that we are witnessing and make sure the benefits of these technologies work for all of us,” Sanders concludes.

Given general bipartisan support for “winning the AI race” and the amount of growth being generated by data center investment, it’s unlikely that any such moratorium will come to pass.

The fact that Sanders is proposing it anyway reflects just how much anxiety he and other members of the socialist left feel whenever capitalism is working.

Whether it’s driverless cars or choices in deodorant brands, Sanders cannot stop worrying and learn to love it when capitalists make productive investments and give consumers what they want.

Any economic growth that is not planned by the bureaucrats and approved by the electorate is inherently suspicious and perhaps downright malicious.

Sanders’ call for a data center moratorium is an attempt to prevent investment in this infrastructure from yielding productive fruit.

He’s worried that investors will reap profits from data center construction. Those same profits would be a signal that their investments were a prudent use of capital that’s driving real growth in the economy.

Likewise, the job automation Sanders worries about would be another sign that data center investments were well-placed. A primary purpose of capital investment and technological innovation is to shift more labor off the backs of human beings and onto machines.

Keep reading

Senators press toy makers on AI-powered toys exposing children to inappropriate content

“Many of these toys are… exposing children to inappropriate content, privacy risks, and manipulative engagement tactics.”

“These chatbots have encouraged kids to commit self-harm and suicide, and now your company is pushing them on the youngest children.”

US Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) sent a letter to the Chief Executive Officers of Little Learners Toys, Mattel, Miko, Curio Interactive, FoloToy, and Keyi Robot sounding the alarm on the use of artificial intelligence (AI) in their toys. The Senators demanded answers on what safeguards these companies have in place to protect kids from sexually explicit, violent, or otherwise inappropriate content.

AI Chatbots in Toys Pose Serious Risk to Children’s Healthy Development

“We write today to express our concern with the sale of toys powered by artificial intelligence (AI). These AI toys—specifically those powered by chatbots embedded in everyday children’s toys like plushies, dolls, and other beloved toys—pose risks to children’s healthy development.

While AI has incredible potential to benefit children with learning and accessibility, experts have raised concerns about AI toys and the lack of research that has been conducted to understand the full effect of these products on our kids.

Many of these toys are not offering interactive play, but instead are exposing children to inappropriate content, privacy risks, and manipulative engagement tactics. These aren’t theoretical worst-case scenarios; they are documented failures uncovered through real-world testing, and they must be addressed.”

Keep reading