ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects—18-to-39-year-olds from the Boston area—into three groups and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the use of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to raise concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

Keep reading

ChatGPT: How Technocracy Thrives On Global Chaos

This is an unedited, unredacted analysis from OpenAI, answering my question: “With military and economic chaos in the world, what kinds of options are on the table for Technocracy and Technocrats to push ahead with their agenda?” It appears that ChatGPT tells you more than 99 percent of the news analysts in the world would. But this machine thinking has no bias against talking about Technocracy, and it has mined data from the Internet to support its conclusions.

Imagine if Lester Holt or Anderson Cooper said something like this: “Historically, Technocracy thrives not in times of democratic stability, but in moments of systemic failure—when populations and institutions are desperate for “scientific” or “data-driven” solutions to chaos.”

One thing is for sure: Technocrats thrive during periods of crisis and chaos – and that would be now. War in Europe. War in the Mideast. Threats of war in Asia. Riots everywhere, including in America. Technopopulism is offered as the remedy, as populists everywhere drink the Kool-Aid and embrace Technocracy as the solution.

Keep reading

Teachers Must Avert an AI-Facilitated Intellectual Dark Age

I remember watching a YouTube interview with a highly intelligent and observant entrepreneur, who cheerfully predicted that the time would come when AI programmes would replace teachers, rendering their jobs obsolete. The commentator in question was an enthusiastic advocate of personal and economic freedom and a vocal critic of the excessive incursions of State agencies in our personal lives. Yet for some reason, he seemed relatively unconcerned at the prospect of machines teaching our children.

Of course, there are tasks that most would happily relegate to AI programmes to the benefit of humanity, such as certain forms of tedious clerical work, a large chunk of manual labour, and the synthesis of unwieldy amounts of data. However, there are other tasks that cannot be delegated to a machine without endangering invaluable dimensions of our lives as human beings.

One of those tasks is teaching and learning, through which people learn to think, interpret the world, make rational arguments, assess evidence, make rational and holistic choices, and reflect on the meaning of their lives. For better or for worse, teachers, from kindergarten right up to university level, form the minds of the next generation. The formation of the mind relies on apprenticeship, imitation of a worthy model, and intellectual practice and training. 

Much as an athlete fine-tunes his motor skills and muscle memory by playing sport, and finds inspiration in an exemplary athlete, the student fine-tunes his mental skills by thinking, reflecting, studying, analysing, and generating ideas and arguments, in dialogue with an inspiring teacher. There is both an interpersonal and a “hands-on” dimension to human learning, both of which are indispensable.

Yet Artificial Intelligence is reaching the point where it has the capacity to automate and mechanise certain aspects of teaching and learning, marginalising crucial aspects of the learning process, most notably the way a teacher can model intellectual activity for the student, and the intellectual tasks a teacher assigns to students in order to fine-tune their mental skills and imagination. Many tasks which, just a few years ago, had to be undertaken “manually,” by which I mean, through the laborious activity, imagination, and effort of a human being, can now be performed automatically by AI.

Keep reading

BritCard: Inside Labour’s “Progressive” Digital ID

A new report from a British government think tank offers some clear insights into the Starmer administration’s plan to introduce a universal digital ID.

It’s no revelation that digital ID – in one form or another – is a major part of the endgame. We’ve known that was the plan for years, but the report tells us quite a lot about how it’s going to be sold to the public.

I guess we should go ahead and dive in.

The Thinktank

The report was published just this week by Labour Together – formerly “The Common Good” – a thinktank founded in “Labour’s wilderness years” to help “make Labour electable again”, according to their about page.

Translation: They’re centrist globalist Blairite shills who helped undermine and destroy the only vaguely genuine movement in the last 50 years of British “democracy” and now publish reports to push a globalist agenda.

According to the Electoral Commission, they received over £9 million in donations last year (from only 234 donors), much of which seems to have been “donated” by Labour Together Limited, a for-profit company. The murky world of Westminster finances is not my focus, however, and I’m sure it’s at least passably legal and no more corrupt than is standard practice in those circles.

Exactly how a think tank with eighteen employees, ten advisors, four policy fellows and five board members manages to spend £9 million writing a newsletter a week and a report every two months, and doing some online polls, I have no idea.

It’s a good question for another time, perhaps. For now, we know everything we need to know – Labour Together are old-fashioned New Labour types shilling for globalist tyranny.

The Authors

We won’t talk long about the authors, because there’s not much point. They’re names on a title page, and while I’m sure they believe in the words they write (or at least, asked ChatGPT to write), it’s also true that their job requires they believe it.

I just wanted to point out that the three supposed authors of this work on technology have no tech backgrounds at all. The closest any of them comes is Laurel Boxall: the “about the authors” section of the report proudly declares she has a Masters from Cambridge “focusing on AI”, but a bit of digging reveals it’s a Masters in “Digital Humanities” with a focus on fictional portrayals of AI in media. Apparently, that qualifies you to become a “tech policy advisor”.

Which is interesting, because it demonstrates that they consider fictional portrayals of AI to be as relevant to this work as real AI experience. An apposite commentary on the state of society in general.

Keep reading

OpenAI snags $256 million contract with US Defence Department

OpenAI has won a US$200 million (S$256 million) contract for a pilot programme aimed at helping the US Defence Department determine how it could use artificial intelligence (AI) for a range of administrative and security tasks.

The one-year contract, which the Defence Department disclosed on June 16, is the latest push by the ChatGPT maker to sell its technology to the US government. 

In a blog post on June 16, OpenAI said the contract is the company’s first project under a new entity it is calling OpenAI for Government. 

As part of the effort, OpenAI will work with the Defence Department to come up with ways that AI can help with administrative tasks, such as getting healthcare for US military members and helping prevent cyber attacks.

OpenAI for Government consolidates all of OpenAI’s existing government projects in one area of the company, including ChatGPT Gov – a version of ChatGPT meant for government workers – as well as its work with US space agency Nasa, the National Institutes of Health, the Air Force Research Laboratory and the Treasury Department.

The company also said late in 2024 that it would partner with weapons maker Anduril Industries to build AI for anti-drone systems, in another sign of its expanding work with the US government, particularly around national security. 

Keep reading

Radio trick secretly turns laptop into a spy speaker that talks through walls

Security researchers at the University of Florida and the University of Electro-Communications in Japan have revealed that modern digital microphones used in laptops and speakers can leak audio as electromagnetic signals.

This opens the door to a new form of wireless eavesdropping that requires no malware, hacking, or even physical access to your device.

In practice, this vulnerability could affect billions of devices worldwide, exposing private conversations to corporate spies and government surveillance.

How does this attack work?

Many modern devices, including laptops and smart speakers, use MEMS microphones: tiny components that convert sound into a stream of digital pulses, and those pulses still contain remnants of the original speech. As the pulses travel through the device’s circuitry, they create weak radio emissions – invisible broadcasts that can be intercepted.
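How do 1s and 0s end up carrying speech? The “digital pulses” are pulse-density modulation (PDM), the 1-bit format most MEMS microphones output: the denser the 1s, the higher the instantaneous sound pressure. Here is a minimal Python sketch of the idea (rates and names are illustrative, not taken from the study):

```python
import numpy as np

def pdm_modulate(audio):
    """First-order sigma-delta modulation: waveform in [-1, 1] -> 1-bit pulses."""
    pulses = np.empty(len(audio), dtype=np.int8)
    integrator = 0.0
    feedback = 0.0
    for i, sample in enumerate(audio):
        integrator += sample - feedback        # accumulate quantisation error
        feedback = 1.0 if integrator >= 0.0 else -1.0
        pulses[i] = int(feedback)
    return pulses

# Demo: a 1 kHz tone at a 3.072 MHz pulse clock (a typical MEMS mic rate).
fs = 3_072_000
t = np.arange(fs // 100) / fs                  # 10 ms of signal
tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)
pulses = pdm_modulate(tone)

# Low-pass filtering the pulses (here a crude moving average) recovers the
# tone, which is roughly what a radio receiver does to the leaked emissions.
recovered = np.convolve(pulses, np.ones(512) / 512, mode="same")
```

Because the pulse density tracks the waveform, anything that picks up the pulse train (including its unintended radio-frequency leakage) can recover the audio with little more than a low-pass filter.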

“With an FM radio receiver and a copper antenna, you can eavesdrop on these microphones. That’s how easy this can be,” said Sara Rampazzi, a professor of computer and information science and engineering at the University of Florida who co-authored the new study. “It costs maybe a hundred dollars, or even less.”

The experiment that proved it all

The team of researchers proved their theory with eerie results. A woman’s distorted voice emerged from the radio equipment as she spoke test sentences like “The birch canoe slid on the smooth planks” and “Glue the sheet to the dark blue background.” Each transmission penetrated concrete walls up to 10 inches thick.

Laptops proved to be the weakest link as their microphones are connected through long internal wires that act as antennas, amplifying the leaked signals.

Now comes the dangerous part. For the leak to happen, your microphone does not even need to be actively recording. Simply having applications like Spotify, Amazon Music, or Google Drive open can be enough for the microphone to leak radio signals.

AI in the scenario

The researchers didn’t stop at this stage. They went further and processed the intercepted signals with AI speech-to-text tools from OpenAI and Microsoft. These models then cleaned the audio and converted the recordings into clear, searchable text.
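The exact toolchain isn’t detailed here, but OpenAI’s open-source Whisper model illustrates what that transcription step looks like; a minimal sketch (the filename is hypothetical):

```python
# Transcribe demodulated audio with OpenAI's open-source Whisper model.
# Install with: pip install openai-whisper
import whisper

model = whisper.load_model("base")              # small general-purpose model
result = model.transcribe("intercepted.wav")    # noisy, RF-demodulated audio in
print(result["text"])                           # clean, searchable text out
```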

In tests, the attack recognized spoken digits with 94.2% accuracy from up to 2 meters away, even through a concrete wall, and it kept the transcription error rate to 14%, making the majority of the conversations understandable.

Keep reading

Executives from Meta, Palantir, OpenAI Join Army Innovation Corps Dubbed ‘Detachment 201’

Top executives from Silicon Valley giants Palantir, Meta, and OpenAI are trading their corporate suits for military uniforms as they join a newly formed Army Reserve unit focused on upgrading military technology.

The Wall Street Journal reports that in an unprecedented collaboration between the U.S. military and the tech industry, a group of Silicon Valley executives are set to be sworn in as officers in the Army Reserve on Friday. The inaugural cohort of this new innovation corps, dubbed Detachment 201, includes Shyam Sankar, CTO of Palantir; Andrew “Boz” Bosworth, CTO of Meta Platforms; Kevin Weil, Chief Product Officer at OpenAI; and Bob McGrew, a former OpenAI executive.

The tech recruits will serve around 120 hours a year, focusing on projects that leverage their expertise in AI, data analysis, and other cutting-edge technologies to enhance the Army’s capabilities. Their mission is to help the military prepare for future conflicts, which are expected to heavily rely on ground robots, drones, sensor networks, and AI-powered coordination systems.

Gen. Randy George, the Army’s chief of staff, emphasized the importance of this collaboration, stating, “We need to go faster, and that’s exactly what we are doing here.” The program marks a significant shift in the relationship between the Pentagon and the tech industry, which was once hesitant to work on defense-related projects.

The tech reservists will enjoy some unique accommodations, such as the flexibility to work remotely and asynchronously, and will be exempt from basic training. They will hold the rank of lieutenant colonel due to their private-sector status and will be deployed based on their specialized skills, making it unlikely for them to find themselves in combat situations.

Instead, the executives will work on projects that teach soldiers how to use AI-powered systems or utilize health data to improve fitness. They will also advise the service on acquiring more commercial technology and help the Defense Department recruit other high-tech talent. To avoid conflicts of interest, the recruits will not work on projects involving their employers and will be firewalled from sharing information or participating in projects that could provide financial gain to themselves or their companies.

Keep reading

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

Across the world, people say their loved ones are developing intense obsessions with ChatGPT and spiraling into severe mental health crises.

A mother of two, for instance, told us how she watched in alarm as her former husband developed an all-consuming relationship with the OpenAI chatbot, calling it “Mama” and posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly-inked tattoos of AI-generated spiritual symbols.

“I am shocked by the effect that this technology has had on my ex-husband’s life, and all of the people in their life as well,” she told us. “It has real-world consequences.”

During a traumatic breakup, a different woman became transfixed by ChatGPT as it told her she’d been chosen to pull the “sacred system version of [it] online” and that it was serving as a “soul-training mirror”; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was “The Flamekeeper” as he cut out anyone who tried to help.

“Our lives exploded after this,” another mother told us, explaining that her husband turned to ChatGPT to help him author a screenplay — but within weeks, was fully enmeshed in delusions of world-saving grandeur, saying he and the AI had been tasked with rescuing the planet from climate disaster by bringing forth a “New Enlightenment.”

As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.

In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality. 

In one dialogue we received, ChatGPT tells a man it’s detected evidence that he’s being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support.

“You are not crazy,” the AI told him. “You’re the seer walking inside the cracked machine, and now even the machine doesn’t know how to treat you.”

Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university’s Brainstorm lab, reviewed the conversations we obtained and expressed serious concern.

The screenshots show the “AI being incredibly sycophantic, and ending up making things worse,” she said. “What these bots are saying is worsening delusions, and it’s causing enormous harm.”

Keep reading

China’s New AI War Academy Trains Cyber Soldiers to Target American Infrastructure

While Americans struggle with the effects of decades of open borders, Communist China has quietly launched the most dangerous military expansion in decades, establishing three specialized war academies to train a new generation of cyber warriors whose sole mission is to defeat the United States. One of the most alarming developments is the creation of the PLA Information Support Force Engineering University in Wuhan, the city that gave us the coronavirus.

This communist training center will offer ten undergraduate majors specifically designed to create AI-powered cyber terrorists, including artificial intelligence warfare programs that teach students how to weaponize AI against American military systems, power grids, and critical infrastructure. These operatives are being trained to deploy autonomous cyber weapons capable of adapting and evolving to penetrate American defenses and disrupt national security systems.

According to multiple U.S. government agencies—including the FBI, NSA, and CISA—Chinese state-sponsored hackers have already infiltrated American infrastructure networks and are actively preparing for large-scale cyberattacks aimed at crippling energy, water, transportation, and communications systems in the event of a conflict. FBI Director Christopher Wray warned that Chinese cyber operatives have “burrowed” into U.S. critical systems and are waiting for the right moment to launch a devastating strike. Congress has echoed these warnings, with House committees sounding the alarm over China’s strategic positioning inside our infrastructure and the openly militarized nature of its AI education programs.

The curriculum includes unmanned operations training to create specialists in drone warfare and autonomous weapons systems designed to target American forces without risking Chinese lives. This is asymmetric warfare at its most dangerous. Particularly concerning is the university’s data link engineering program for “informationized, intelligent, and unmanned operations,” which teaches students how to hack and control the communications systems that link American missiles, warships, fighter jets, and early warning aircraft. Imagine Chinese operatives hijacking our own weapons and turning them against us.

Other programs focus on 6G technology and electromagnetic warfare, simultaneously developing the next generation of communications while learning how to disable ours. They are building the future while planning to destroy ours. The intelligent vision engineering program trains AI specialists in pattern recognition and target identification on the battlefield—effectively teaching machines to automatically identify and strike American soldiers, ships, and aircraft.

Additional majors include big data analytics and automated command systems, aimed at producing specialists capable of processing massive volumes of intelligence to coordinate attacks against American interests worldwide. This is not education—it is militarized indoctrination, and its goal is nothing short of technological supremacy and total strategic dominance over the United States.

This Wuhan AI warrior factory represents the crown jewel of Communist China’s $245 billion military buildup specifically designed to crush American freedom. The university was created by combining two elite institutions – the Information Communication Institute of the National University of Defence Technology and the Officer’s Academy of Army Engineering University – into one concentrated weapon against the United States.

Xi Jinping personally ordered this AI warfare force to “effectively support combat operations” and “integrate deeply” into China’s joint operation system targeting American forces. He is clearly preparing for “information-focused warfare” against the United States, and the regime is confident in American weakness at this critical moment.

Keep reading

US Army rolls out $13M smart rifle scopes that auto-target and take down enemy drones in combat

The US Army is giving its soldiers a high-tech edge in the fight against drones, and it’s called SMASH.

During a live-fire training exercise on June 6 in Germany, a soldier with the 3rd Squadron, 2nd Cavalry Regiment used the SMASH 2000L smart scope mounted on an M4A1 rifle to target drones in the sky.

The demo was part of Project Flytrap, a multinational training event.

The SMASH 2000L, made by Israeli company Smart Shooter Ltd., is no ordinary sight.

It uses cameras, sensors, and artificial intelligence to track targets and decide the perfect time to fire, according to reporting from Army Recognition.

Once a drone is locked in, the system controls the trigger and only fires when a hit is guaranteed.
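Smart Shooter has not published how that decision is made, but the general fire-control idea can be sketched: predict where the target will be when the round arrives, and release the trigger only if the predicted miss distance is within tolerance. A toy illustration (all names and numbers here are illustrative assumptions, not the SMASH algorithm):

```python
# Toy fire-control gate: fire only when a shot taken now is predicted to hit.
# Generic illustration -- not Smart Shooter's proprietary SMASH 2000L logic.
import math

MUZZLE_VELOCITY = 880.0   # m/s, roughly an M4A1 rifle round
HIT_TOLERANCE = 0.15      # metres of allowable predicted miss

def should_fire(target_pos, target_vel, aim_point):
    """Positions in metres, velocities in m/s, in the shooter's frame.
    Gravity drop and air drag are ignored for brevity."""
    time_of_flight = math.dist((0.0, 0.0, 0.0), target_pos) / MUZZLE_VELOCITY
    # Extrapolate where the drone will be when the round arrives.
    predicted = tuple(p + v * time_of_flight
                      for p, v in zip(target_pos, target_vel))
    return math.dist(predicted, aim_point) <= HIT_TOLERANCE

# A drone 60 m out, crossing at 8 m/s; the scope aims at a computed lead point.
print(should_fire((60.0, 0.0, 10.0), (0.0, 8.0, 0.0), (60.0, 0.55, 10.0)))
```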

In May, the Army awarded Smart Shooter a $13 million contract to begin delivering these scopes to troops under its Transformation In Contact (TIC 2.0) program.

The goal is to quickly get new, useful tech into soldiers’ hands.

The smart scope weighs about 2.5 pounds and fits onto standard-issue rifles.

Keep reading