Is AI Turning Us Into Dummies?

That AI is turning those who use it into dummies is not only self-evident, it’s irrefutable. From “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study”:

“Of the three groups, ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’ Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

“The task was executed, and you could say that it was efficient and convenient,” Kosmyna says. “But as we show in the paper, you basically didn’t integrate any of it into your memory networks.”

AI breaks the connection between learning and completing an academic task. With AI, students can check the box–task completed, paper written and submitted–without learning anything.

And by learning we don’t mean remembering a factoid; we mean learning how to learn and learning how to think. As Substack writer maalvika explains in her viral essay “compression culture is making you stupid and uninteresting,” digital technologies have compressed our attention spans via what I would term “rewarding distraction,” so we can no longer read anything longer than a few sentences without wanting a summary, highlights video or sound-bite.

In other words, very few people will actually read the MIT paper: TL;DR. Here’s the précis: Your Brain on ChatGPT (mit.edu).

Here’s the full paper:

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.

To understand the context–and indeed, the ultimate point of the research–we must start by understanding the structure of learning and thinking, which is a complex set of processes. Cognitive Load Theory (CLT) is a framework that parses out some of these processes.

Cognitive Load Theory (CLT), developed by John Sweller, provides a framework for understanding the mental effort required during learning and problem-solving. It identifies three categories of cognitive load: intrinsic cognitive load (ICL), which is tied to the complexity of the material being learned and the learner’s prior knowledge; extraneous cognitive load (ECL), which refers to the mental effort imposed by the way information is presented; and germane cognitive load (GCL), which is the mental effort dedicated to constructing and automating schemas that support learning.
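These three loads are often treated as additive against a fixed working-memory budget. A rough sketch of that standard framing (my paraphrase of the usual textbook treatment, not a formula from the MIT paper):

```latex
% CLT's additive-load assumption (paraphrase, not from the cited paper):
% learning stalls when the three loads together exceed working-memory capacity C.
\[
  \underbrace{L_{\mathrm{ICL}} + L_{\mathrm{ECL}} + L_{\mathrm{GCL}}}_{\text{total cognitive load}}
  \;\le\; C_{\text{working memory}}
\]
```

On that reading, good tools reduce extraneous load so more capacity is left for the germane work of building schemas; what the MIT study suggests is that handing the germane work itself to ChatGPT leaves nothing to consolidate into memory.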

Checking the box “task completed” teaches us nothing. Actual learning and thinking require doing all the cognitive work that AI claims to do for us: reading the source materials, following the links between these sources, finding wormholes between various universes of knowledge, and thinking through claims and assumptions as an independent critical thinker.

When AI slaps together a bunch of claims and assumptions as authoritative, we don’t even gain superficial knowledge–we learn nothing. AI summarizes, but without any ability to weed out questionable claims and assumptions because it has no tacit knowledge of contexts.

So AI spews out material without any actual cognitive value and the student slaps this into a paper without learning any actual cognitive skills. This cognitive debt can never be “paid back,” for the cognitive deficit lasts a lifetime.

Even AI’s vaunted ability to summarize robs us of the need to develop core cognitive abilities. As this researcher explains, “drudgery” is how we learn and how we learn to think deeply, as opposed to gaining a superficial grasp of material to pass an exam.

In Defense of Drudgery: AI is making good on its promise to liberate people from drudgery. But sometimes, exorcising drudgery can stifle innovation.

“Unfortunately, this innovation stifles innovation. When humans do the drudgery of literature search, citation validation, and due research diligence — the things OpenAI claims for Deep Research — they serendipitously see things they weren’t looking for. They build on the ideas of others that they hadn’t considered before and are inspired to form altogether new ideas. They also learn cognitive skills including the ability to filter information efficiently and recognize discrepancies in meaning.

“I have seen in my field of systems analysis where decades of researchers have cited information that was incorrect — and expanded it into its own self-perpetuating world view. Critical thinking leads the researcher to not accept the work that others took as foundational and to spot the error. Tools such as Deep Research are incapable of spotting the core truth and so will perpetuate misdirection in research. That’s the opposite of good innovation.”

In summary: given that AI is fundamentally incapable of performing the tasks required for authentic innovation, we’re de-learning how to innovate. What we’re “learning” is to substitute a superficially clever simulation of innovation for authentic innovation, and in doing so, we’re losing the core cognitive skills needed to innovate.

Keep reading

AI reveals unexpected new physics in dusty plasma

Physicists have used a machine-learning method to identify surprising new twists on the non-reciprocal forces governing a many-body system.

The journal Proceedings of the National Academy of Sciences published the findings by experimental and theoretical physicists at Emory University, based on a neural network model and data from laboratory experiments on dusty plasma—ionized gas containing suspended dust particles.

The work is one of the relatively few instances of using AI not as a data processing or predictive tool, but to discover new physical laws governing the natural world.

“We showed that we can use AI to discover new physics,” says Justin Burton, an Emory professor of experimental physics and senior co-author of the paper. “Our AI method is not a black box: we understand how and why it works. The framework it provides is also universal. It could potentially be applied to other many-body systems to open new routes to discovery.”

The PNAS paper provides the most detailed description yet for the physics of a dusty plasma, yielding precise approximations for non-reciprocal forces.

“We can describe these forces with an accuracy of more than 99%,” says Ilya Nemenman, an Emory professor of theoretical physics and co-senior author of the paper.

“What’s even more interesting is that we show that some common theoretical assumptions about these forces are not quite accurate. We’re able to correct these inaccuracies because we can now see what’s occurring in such exquisite detail.”
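For readers who want a feel for the approach, here is a minimal sketch of the general idea: fit a small neural network to a pairwise force law inferred from particle trajectories, without baking in Newton’s third law, so any non-reciprocity shows up directly in what the model learns. This is a hedged illustration of the technique in general; the architecture, the names (PairForce, total_forces), and the synthetic stand-in data are my own assumptions, not the Emory team’s model.

```python
# Minimal sketch (not the Emory/PNAS model): learn an unconstrained pairwise
# force law from many-body trajectory data. Because the network maps the
# separation vector r_j - r_i to the force on particle i without enforcing
# F_ij = -F_ji, any non-reciprocal component is visible in the fitted model.
import torch
import torch.nn as nn

class PairForce(nn.Module):
    """Maps the separation vector (r_j - r_i) to the force exerted on i by j."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, r_ij: torch.Tensor) -> torch.Tensor:
        return self.net(r_ij)

def total_forces(model: PairForce, positions: torch.Tensor) -> torch.Tensor:
    """Sum the learned pairwise forces acting on each particle. positions: [N, 3]."""
    n = positions.shape[0]
    per_particle = []
    for i in range(n):
        r_ij = positions - positions[i]        # vectors from particle i to every j
        mask = torch.ones(n, dtype=torch.bool)
        mask[i] = False                        # exclude self-interaction
        per_particle.append(model(r_ij[mask]).sum(dim=0))
    return torch.stack(per_particle)

# Hypothetical training loop: in the real experiments the targets would be
# accelerations estimated from tracked dust-particle positions; here they are
# random stand-ins just to make the sketch runnable.
model = PairForce()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
positions = torch.randn(20, 3)       # one snapshot of 20 particles
target_accel = torch.randn(20, 3)    # placeholder for measured accelerations

for _ in range(200):
    optimizer.zero_grad()
    loss = ((total_forces(model, positions) - target_accel) ** 2).mean()
    loss.backward()
    optimizer.step()
```

Once trained, evaluating the network at (r_j − r_i) and at (r_i − r_j) and comparing the two outputs gives a direct readout of how far the learned interaction departs from action-reaction symmetry.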

Keep reading

In search of riches, hackers plant 4G-enabled Raspberry Pi in bank network

Hackers planted a Raspberry Pi equipped with a 4G modem in the network of an unnamed bank in an attempt to siphon money out of the financial institution’s ATM system, researchers reported Wednesday.

The researchers with security firm Group-IB said the “unprecedented tactic allowed the attackers to bypass perimeter defenses entirely.” The hackers combined the physical intrusion with remote access malware that used another novel technique to conceal itself, even from sophisticated forensic tools. The technique, known as a Linux bind mount, is common in IT administration but had never before been observed in use by threat actors. The trick allowed the malware to operate similarly to a rootkit, which uses advanced techniques to hide itself from the operating system it runs on.
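The bind-mount primitive itself is ordinary Linux plumbing; what was novel here was attackers using it for concealment. A minimal sketch of the primitive, for defensive understanding only (generic behaviour with hypothetical paths, not Group-IB’s reconstruction of the malware):

```python
# A minimal sketch of the bind-mount primitive. This is standard Linux
# functionality, not the attackers' code; the paths are hypothetical, it needs
# root, and it should only be tried in a disposable VM. Bind-mounting an empty
# directory over another directory hides the original contents from anything
# that simply walks the filesystem, which is the rootkit-like effect described
# above.
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.mount.argtypes = (ctypes.c_char_p, ctypes.c_char_p, ctypes.c_char_p,
                       ctypes.c_ulong, ctypes.c_void_p)
libc.mount.restype = ctypes.c_int
MS_BIND = 4096  # MS_BIND flag from <sys/mount.h>

def bind_mount(src: str, dst: str) -> None:
    """Thin wrapper around mount(2) with the MS_BIND flag."""
    if libc.mount(src.encode(), dst.encode(), None, MS_BIND, None) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

os.makedirs("/tmp/empty", exist_ok=True)
bind_mount("/tmp/empty", "/opt/agent/logs")   # hypothetical target directory
# "ls /opt/agent/logs" now shows nothing until the bind mount is removed with
# umount(2), even though the original files still sit underneath the mount point.
```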

End goal: Backdooring the ATM switching network

The Raspberry Pi was connected to the same network switch used by the bank’s ATM system, a position that effectively put it inside the bank’s internal network. The goal was to compromise the ATM switching server and use that control to manipulate the bank’s hardware security module, a tamper-resistant physical device used to store secrets such as credentials and digital signatures and run encryption and decryption functions.

The group behind the attack is tracked in the industry under the name UNC2891. The financially motivated threat group has been active since at least 2017 in targeting the infrastructures of banks. It has earned a well-deserved reputation for proficiency in its use of custom malware in attacks targeting Linux, Unix, and Oracle Solaris systems.

In 2022, Google’s Mandiant division said it had observed UNC2891 spending years inside a targeted network, during which time the intrusion went largely unnoticed. Mandiant researchers went on to identify CakeTap, a custom rootkit for Solaris systems. Among other things, CakeTap manipulated messages passing through an infected ATM switching network, most likely for use in unauthorized cash withdrawals using fraudulent bank cards. Mandiant documented two other custom pieces of malware, which the company named SlapStick and TinyShell.

Group-IB’s report on Wednesday shows that UNC2891 is still active and finding new and advanced ways to burrow into bank networks without detection.

“One of the most unusual elements of this case was the attacker’s use of physical access to install a Raspberry Pi device,” Group-IB Senior Digital Forensics and Incident Response Specialist Nam Le Phuong wrote. “This device was connected directly to the same network switch as the ATM, effectively placing it inside the bank’s internal network. The Raspberry Pi was equipped with a 4G modem, allowing remote access over mobile data.”

To maintain persistence, UNC2891 also compromised a mail server because it had constant Internet connectivity. The Raspberry Pi and the mail server backdoor would then communicate by using the bank’s monitoring server as an intermediary. The monitoring server was chosen because it had access to almost every server within the data center.

Keep reading

China Probes Nvidia Over Alleged H20 AI Chip’s “Backdoor” Security Risks

China’s Cyberspace Administration (CAC) summoned Nvidia officials to address alleged “backdoor” security vulnerabilities with the H20 AI chip, just weeks after Nvidia CEO Jensen Huang’s diplomatic visit to Beijing, and days after US and Chinese officials met in Stockholm to extend their tariff truce.

Nvidia was asked “to clarify and submit relevant supporting documentation regarding security risks, including potential vulnerabilities and backdoors, associated with its H20 computing chips sold to China,” according to a statement from CAC. 

China’s top internet watchdog claimed Nvidia’s AI chips pose major security threats due to security vulnerabilities…

“US lawmakers have previously called for advanced chips exported from the US to be equipped with location-tracking features,” the regulator pointed out. 

“The location-tracking and remote shutdown capabilities on Nvidia computing chips are ready, according to US AI experts,” CAC said. 

CAC’s concerns follow a legislative push in Washington, where Republican Senator Tom Cotton and a bipartisan group of eight Representatives recently introduced the U.S. Chip Security Act (H.R. 3447). The bill would require companies like Nvidia to embed security mechanisms, such as location verification or tracking, into advanced chips.

Nvidia reps were asked to provide technical documents about the H20 AI chip to ensure there are no “backdoors.” Huang has previously denied any security vulnerabilities with his chips. 

Keep reading

Lockheed Has Something ‘Magical,’ Costly as Hell, and Totally Secret Up its Sleeve

Defense giant Lockheed Martin just reported a rare — and yuge — quarterly loss of $1.6 billion, but CEO James D. Taiclet sounded unfazed, thanks to a “magical” classified aeronautics program he claims will create a “game-changing capability for our joint U.S. and international customers.”

Is it a bird? A plane? Superman?

Before we get to the speculation — and there is some juicy stuff — a quick look at how the company lost so much money on something that Taiclet said Lockheed “probably won’t be able to talk about ... for many years to come.”

Lockheed launched Project X with the Pentagon in 2018 during the Trump 45 administration on a fixed-price basis. That strikes me as a bit odd (albeit awesome for taxpayers) because exotic weapons systems that require developing bleeding-edge technologies are usually done on a cost-plus basis. That’s just because you can’t price something when half the parts haven’t even been invented yet.

So Lockheed signed on to a fixed-price contract just a couple of years before Bidenflation knocked 25% off the value of the dollar. “But I can assure you,” Taiclet said of Project X, “that it’s going to be in high demand for a very long time, well beyond the fixed price commitments.”

What might generate so much revenue, not just from the Pentagon, but from our allies around the world? I did a little poking around on Reddit and other forums where nerds like to geek out and found some fascinating possibilities.

Lockheed lost the contest to produce the Air Force’s Next Generation Air Dominance (NGAD) stealth fighter — now known as the F-47 — to rival Boeing. So there’s been some speculation that Project X is a carrier-based version of Lockheed’s NGAD for the Navy. But Lockheed denies this.

There’s also the long-rumored Hypersonic Reconnaissance Aircraft to replace the long-retired SR-71 spy plane. But those are top-secret, highly specialized aircraft that would be unlikely to generate foreign sales, even if Congress decided to allow it. (Congress refused permission to sell the F-22 Raptor stealth fighter overseas to help keep its secrets.) I seriously doubt Project X is an SR-72.

Here’s where the possibilities get weirder — or should I say, “magical”?

Keep reading

EU plans $30 billion investment in gigawatt AI data centers — multiple sites to host 100,000 AI GPUs each as bloc plays catch-up to US and China

The European Union is the world’s second-largest economy in terms of GDP, but its position in the AI market is far weaker. To catch up with the U.S. and China, the bloc is launching a $30 billion initiative to build a network of high-capacity data centers that can host millions of AI GPUs, reports CNBC. If successful, the EU will have gigawatt-class data centers with performance comparable to those operated by leading U.S. companies.

To date, the European Union has allocated €10 billion (approximately $11.8 billion) to establish 13 AI data centers, alongside an additional €20 billion earmarked as initial funding for a network of gigawatt-class AI facilities. So far, the project has attracted 76 expressions of interest from 16 member states, covering a total of 60 potential locations, according to CNBC. Initial launches are underway, with the first AI factory expected to go live in the coming weeks and a large-scale project in Munich planned for early September.

Each gigawatt data center is expected to require €3 billion to €5 billion and deliver a level of computational power far greater than existing AI data centers, potentially supporting over 100,000 advanced AI GPUs per site, according to estimates by UBS cited by CNBC. xAI’s Colossus supercluster consumes about 150 MW of power when equipped with 100,000 H100 GPUs, so a gigawatt facility will probably be able to host far more GPUs, perhaps 300,000 Blackwell Ultra processors.
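A back-of-envelope version of that extrapolation, using my own rough per-GPU power assumptions rather than UBS’s methodology:

```python
# Rough extrapolation from the Colossus figure quoted above (~150 MW for
# 100,000 H100-class GPUs). The 2x per-GPU power factor for Blackwell Ultra
# is an assumption for illustration, not a published specification.
colossus_mw, colossus_gpus = 150, 100_000
kw_per_h100_all_in = colossus_mw * 1_000 / colossus_gpus   # ~1.5 kW per GPU, overhead included
kw_per_blackwell_ultra = 2 * kw_per_h100_all_in            # assumed ~3 kW per GPU
site_mw = 1_000                                            # one gigawatt-class site
print(int(site_mw * 1_000 / kw_per_blackwell_ultra))       # ~333,000 GPUs, in line with ~300,000
```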

The EU’s effort, if realized, would be one of the world’s largest publicly funded initiatives in artificial intelligence: likely well below what Chinese authorities (both central and local) have invested in AI data centers, but well ahead of what other big economies spend on their AI efforts.

Henna Virkkunen, European Commission executive vice president for technology policy, told CNBC that while Europe has a strong talent base — reportedly 30% more AI researchers per capita than the U.S. — their limited access to computing has held back development. Building massive AI data centers is designed to solve this problem and kick-start the AI sector across the EU.

Despite strong public interest, the scale and sustainability of the project remain in question. Bertin Martens of Bruegel noted that while the EU has committed taxpayer funding, it is unclear how much the public sector will invest in the project. Also, the specifications of the upcoming data centers are unclear. While the EU has access to Nvidia GPUs and other advanced AI accelerators developed in America through a trade agreement with the U.S., Martens pointed out that acquiring hardware is only the beginning.

Keep reading

Porn Studios File Copyright Lawsuit Against Meta Claiming Mass Download of XXX Movies to Train AI

Two major porn production companies have filed a copyright lawsuit against Mark Zuckerberg’s Meta, alleging unauthorized use of their videos to train AI models.

TorrentFreak reports that the adult film studios Strike 3 Holdings and Counterlife Media are taking aim at Meta with a copyright lawsuit. The companies, which produce popular adult brands like Vixen, Tushy, Blacked, and Deeper, claim that Meta illicitly downloaded at least 2,396 of their movies via BitTorrent since 2018 for the purpose of training its AI systems, including Meta Movie Gen and its LLaMA large language models.

Filed in a California federal court, the complaint alleges that Meta’s unauthorized use of the copyrighted adult films could ultimately result in AI models capable of creating similar “high-quality” porn content at a lower cost, potentially threatening the studios’ business. The plaintiffs argue that by training specifically on their works, “Meta’s AI Movie Gen may very well soon produce full length films with Plaintiffs’ identical style and quality, which other real world adult studios cannot replicate.”

The lawsuit also accuses Meta of not only downloading the copyrighted works without permission but also uploading them to third parties participating in the same BitTorrent swarms. This allegation is reportedly backed by data from the studios’ proprietary tracking software, VXN Scan. BitTorrent’s “tit for tat” algorithm rewards users for sharing content with others to increase download speeds, and the plaintiffs claim that Meta deliberately chose to continue sharing the pirated files to capitalize on faster downloads and infringe more content at a quicker pace.
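For context on how that incentive works, here is a toy sketch of tit-for-tat choking (heavily simplified: real clients also use rolling rate windows and optimistic unchoking, and the peer names below are made up):

```python
# Toy illustration of BitTorrent's tit-for-tat incentive: a peer keeps uploading
# ("unchoked") only to the handful of peers that have recently uploaded the most
# to it, so sharing more tends to earn faster downloads in return.
def pick_unchoked(recent_download_rates: dict[str, float], slots: int = 4) -> list[str]:
    """recent_download_rates maps peer id -> bytes/s received from that peer."""
    ranked = sorted(recent_download_rates, key=recent_download_rates.get, reverse=True)
    return ranked[:slots]

rates = {"peer-a": 2_500_000, "peer-b": 900_000, "peer-c": 40_000, "peer-d": 0}
print(pick_unchoked(rates, slots=2))   # ['peer-a', 'peer-b']: the best reciprocators get served
```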

Strike 3 and Counterlife Media discovered the alleged infringements after Meta’s BitTorrent activity was revealed in a separate lawsuit filed by book authors. In that case, Meta admitted to obtaining content from pirate sources. This revelation prompted the adult studios to search their archive of collected BitTorrent data for Meta-linked IP addresses, uncovering 47 addresses owned by the company that allegedly infringed their copyrights. The complaint provides a list of thousands of alleged infringements from these addresses as evidence. Strike 3 has filed many lawsuits in the past related to videos allegedly downloaded by BitTorrent pirates, leading one judge to label them as a “copyright troll.”

Keep reading

OpenAI and Oracle announce Stargate AI data centre deal

OpenAI has shaken hands with Oracle on a colossal deal to advance the former’s Stargate AI data centre initiative.

It’s one thing to talk about the AI revolution in abstract terms, but it’s another thing entirely to grasp the sheer physical scale of what’s being built to make it happen. The foundations of our AI future are being laid in concrete, steel, and miles of fibre-optic cable, and those foundations are getting colossally bigger.

Together, OpenAI and Oracle are going to build new data centres in the US packed with enough hardware to consume 4.5 gigawatts of power. It’s hard to overstate what a staggering amount of energy that is—it’s the kind of power that could light up a major city. And all of it will be dedicated to one thing: powering the next generation of AI.

This isn’t just a random expansion; it’s a huge piece of OpenAI’s grand Stargate plan. The goal is simple: to build enough computing power to bring advanced AI to everyone.

When you add this new project to the work already underway in Abilene, Texas, OpenAI is now developing over 5 gigawatts of data centre capacity. That’s enough space to run more than two million of the most powerful computer chips available.

This move shows they are dead serious about a pledge they made at the White House earlier this year to plough half a trillion dollars into US AI infrastructure. In fact, with the momentum they’re getting from partners like Oracle and Japan’s SoftBank, they now expect to blow past that initial goal.

But this story isn’t just about silicon chips and corporate deals; it’s about people. OpenAI believes that building and running these new Stargate AI data centres will create over 100,000 jobs.

That job creation presents real opportunities for families across the country, from the construction crews pouring the concrete to the specialised electricians wiring up racks of servers to the full-time technicians who will keep these digital brains running day and night.

Keep reading

Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking on when control over your image and representation will be completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens the legal ground to go after someone who uses their image without their consent.

This specifically covers deepfakes, those videos of a person’s face or body that have been digitally altered so they appear to be someone else.

The Scandinavian nation has put forth a proposal to amend its copyright laws so that everyone owns the rights to their own face, their own voice, and their body. Current laws aren’t quite up to snuff when it comes to protecting people from having their likenesses twisted and contorted.

Keep reading

AI in Wyoming may soon use more electricity than state’s human residents

On Monday, Mayor Patrick Collins of Cheyenne, Wyoming, announced plans for an AI data center that would consume more electricity than all homes in the state combined, according to The Associated Press. The facility, a joint venture between energy infrastructure company Tallgrass and AI data center developer Crusoe, would start at 1.8 gigawatts and scale up to 10 gigawatts of power use.

The project’s energy demands are difficult to overstate for Wyoming, the least populous US state. The initial 1.8-gigawatt phase, consuming 15.8 terawatt-hours (TWh) annually, is more than five times the electricity used by every household in the state combined. That figure represents 91 percent of the 17.3 TWh currently consumed by all of Wyoming’s residential, commercial, and industrial sectors combined. At its full 10-gigawatt capacity, the proposed data center would consume 87.6 TWh of electricity annually—double the 43.2 TWh the entire state currently generates.
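Those energy figures follow directly from running the stated power draw around the clock; a quick sanity check of the arithmetic:

```python
# Converting continuous power draw (GW) to annual energy consumption (TWh).
hours_per_year = 24 * 365
print(1.8 * hours_per_year / 1_000)   # ~15.8 TWh for the initial 1.8 GW phase
print(10 * hours_per_year / 1_000)    # 87.6 TWh at the full 10 GW build-out
```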

Because drawing this much power from the public grid is untenable, the project will rely on its own dedicated gas generation and renewable energy sources, according to Collins and company officials. However, this massive local demand for electricity—even if self-generated—represents a fundamental shift for a state that currently sends nearly 60 percent of its generated power to other states.

Wyoming Governor Mark Gordon praised the project’s potential benefits for the state’s natural gas industry in a company statement. “This is exciting news for Wyoming and for Wyoming natural gas producers,” Gordon said.

The proposed site for the new data center sits several miles south of Cheyenne near the Colorado border off US Route 85. While state and local regulators still need to approve the project, Collins expressed optimism about a quick start. “I believe their plans are to go sooner rather than later,” he said.

Keep reading