DARPA Wants to Crack the Code of Human Behavior—And They’re Betting on “MAGICS” for Bold New Ideas

The U.S. Defense Advanced Research Projects Agency (DARPA) has launched a new program to solicit paradigm-shifting research ideas to revolutionize how scientists predict collective human behavior. 

The program, known as Methodological Advancements for Generalizable Insights into Complex Systems (MAGICS), aims to address the problem that despite the rise of big data and machine learning, we’re still surprisingly bad at forecasting how large, dynamic human systems respond to change.

“For the past decade or more, there has been an assumption and hope that the explosion of digital data streams (e.g., social media, purchase patterns, traffic dynamics, etc.) combined with powerful machine learning tools would usher in a new era of research in complex, dynamic, evolving systems,” the DARPA solicitation notice states. “[However,] despite many attempts, results have failed to meet expectations.”

The MAGICS opportunity, announced through DARPA’s Defense Sciences Office, invites individual researchers to propose innovative concepts that could form the foundation for a new science of social prediction. 

As DARPA notes, today’s best statistical tools often falter when applied to real-world, evolving systems—whether it’s understanding how economies adapt to disruption, how populations shift under demographic pressure, or how societies react to technological upheaval.

At the heart of the MAGICS effort is answering the question: Can we develop new ways to model collective human behavior that outperform current statistical approaches and capture the dynamics of complex, evolving systems? 

The Pentagon brain trust is looking for fresh frameworks beyond what’s possible with today’s machine learning models: systems that can handle the messy, recursive, and often unpredictable nature of human behavior.

The stakes are high for national security. From forecasting the spread of misinformation to anticipating societal responses to crises, the ability to model human behavior accurately could offer profound advantages. 

Yet DARPA acknowledges that researchers must overcome foundational challenges that large datasets and artificial intelligence have failed to address before these benefits can be realized.

Keep reading

Florida AG Subpoenas Medical Firms Over ‘Backdoor’ on China-Made Devices

Florida Attorney General James Uthmeier has subpoenaed two medical companies selling Chinese-made patient monitors over concerns that the devices could send patient data to China.

Uthmeier’s office stated in a press release that they had taken legal action against Contec Medical Systems, a China-based company known for making patient monitors, and Epsimed, a Miami-based company that resells Contec-made monitors under its own brand name.

The office alleged that Contec “concealed serious security problems” in its products, including a built-in “backdoor” that could “allow bad actors to manipulate data” on the devices without knowledge of either the patient or the provider, and programming that automatically sends patient information to an IP address that belongs to a university in China.

“Some of the most private, personal information” is going to China “without the consent, and in most cases, the awareness of the patient,” Uthmeier told The Epoch Times. “I think there’s a major consumer protection issue for Floridians, for Americans as a whole, and we’re not going to stand for it.”

Uthmeier’s office alleged that Contec and Epsimed may have violated a state law, the Deceptive and Unfair Trade Practices Act, in their assurances on product quality when the products appear to fall far short of standards given their security vulnerabilities. He threatened to pursue damages, civil penalties, and injunctive relief to protect consumers.

Contec Medical Systems is headquartered in Qinhuangdao, a port city located in northern China’s Hebei Province. It has an affiliate called Contec Medical Systems USA Inc. in Illinois to handle the U.S. market.

Keep reading

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects—18-to-39-year-olds from the Boston area—into three groups and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the use of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer-reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings now, raising the concern that as society increasingly relies on LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

Keep reading

Tehran dismantles Mossad sabotage network behind suicide drone attacks

Iranian security forces have announced the thwarting of Mossad-operated drone production plots aimed at undermining Iran’s air defenses and military capabilities as part of Israel’s war against the country.

According to Iranian media reports, authorities raided a three-story building on the outskirts of Tehran on 15 June, uncovering a facility for assembling drones and explosives.

Iranian police released footage showing a large cache of small drones and explosives seized by authorities at the site on Sunday. 

Footage from Sunday also showed an Iranian police officer chasing a truck filled with drones, which was eventually intercepted and seized. 

Keep reading

ChatGPT: How Technocracy Thrives On Global Chaos

This is an unedited, un-redacted analysis from OpenAI, answering my question: “With military and economic chaos in the world, what kinds of options are on the table for Technocracy and Technocrats to push ahead with their agenda?” It appears that ChatGPT tells you more than 99 percent of the news analysts in the world would. But this machine thinking has no bias against talking about Technocracy, and it has mined data from the Internet to support its conclusions.

Imagine if Lester Holt or Anderson Cooper said something like this: “Historically, Technocracy thrives not in times of democratic stability, but in moments of systemic failure—when populations and institutions are desperate for ‘scientific’ or ‘data-driven’ solutions to chaos.”

One thing is for sure: Technocrats thrive during periods of crisis and chaos – that would be now. War in Europe. War in the Mideast. Threats of war in Asia. Riots everywhere, including in America. Technopopulism is offered as the answer, as populists everywhere embrace Technocracy as the solution while drinking the Kool-Aid.

Keep reading

Teachers Must Avert an AI-Facilitated Intellectual Dark Age

I remember watching a YouTube interview with a highly intelligent and observant entrepreneur, who cheerfully predicted that the time would come when AI programmes would replace teachers, rendering their jobs obsolete. The commentator in question was an enthusiastic advocate of personal and economic freedom and a vocal critic of the excessive incursions of State agencies in our personal lives. Yet for some reason, he seemed relatively unconcerned at the prospect of machines teaching our children.

Of course, there are tasks that most would happily relegate to AI programmes to the benefit of humanity, such as certain forms of tedious clerical work, a large chunk of manual labour, and the synthesis of unwieldy amounts of data. However, there are other tasks that cannot be delegated to a machine without endangering invaluable dimensions of our lives as human beings.

One of those tasks is teaching and learning, through which people learn to think, interpret the world, make rational arguments, assess evidence, make rational and holistic choices, and reflect on the meaning of their lives. For better or for worse, teachers, from kindergarten right up to university level, form the minds of the next generation. The formation of the mind relies on apprenticeship, imitation of a worthy model, and intellectual practice and training. 

Much as an athlete fine-tunes his motor skills and muscle memory playing sport, and finds inspiration in an exemplary athlete, the student fine-tunes his mental skills thinking, reflecting, studying, analysing, and generating ideas and arguments, in dialogue with an inspiring teacher. There is both an interpersonal and “hands-on” dimension to human learning, both of which are indispensable. 

Yet Artificial Intelligence is reaching the point where it has the capacity to automate and mechanise certain aspects of teaching and learning, marginalising crucial aspects of the learning process, most notably the way a teacher can model intellectual activity for the student, and the intellectual tasks a teacher assigns to students in order to fine-tune their mental skills and imagination. Many tasks which, just a few years ago, had to be undertaken “manually,” by which I mean, through the laborious activity, imagination, and effort of a human being, can now be performed automatically by AI.

Keep reading

Guilt by Algorithm: Woman Wrongly Accused of Shoplifting Due to Facial Recognition Error

A woman was left “fuming” after being erroneously accused of stealing toilet paper and ejected from two Home Bargains stores in Greater Manchester, UK, due to an apparent mix-up with a facial recognition system designed to prevent shoplifting.

BBC News reports that Danielle Horan, a makeup business owner, found herself in a distressing situation when she was escorted out of Home Bargains branches in Salford and Manchester, without initially being given any explanation for her removal. It was later discovered that Horan had been falsely accused of stealing approximately £10 worth of items after her profile was added to a facial recognition watchlist used by the stores.

The incident unfolded on May 24, when Horan visited the Home Bargains store on Regent Road in Salford. As she was shopping, the store manager approached her and asked her to leave, causing Horan to feel embarrassed and confused in front of other customers. Despite her protestations, the manager advised her to contact Facewatch, the retail security firm that provides the facial recognition technology, directly.

Horan’s attempts to reach out to both Facewatch and Home Bargains initially proved futile. However, when she visited another Home Bargains store in Fallowfield, Manchester, with her 81-year-old mother on June 4, she was once again surrounded by staff and told to leave the premises as soon as she entered the store. This time, Horan stood her ground and demanded an explanation for her treatment.

After persistent emails to Facewatch and Home Bargains, Horan finally learned that there had been an allegation of theft involving approximately £10 worth of toilet rolls in early May. Somehow, her picture had been circulated to local stores, alerting them not to allow her entry. Horan checked her bank account and confirmed that she had, in fact, paid for the items in question.

Eventually, Facewatch responded to Horan, stating that a review of the incident showed she had not stolen anything. The firm acknowledged the distressing nature of Horan’s experience and noted that the retailer had since undertaken additional staff training. However, Horan’s ordeal had already taken a toll on her mental well-being, causing anxiety and stress as she questioned her actions and felt sick to her stomach for a week.

Keep reading

BritCard: Inside Labour’s “Progressive” Digital ID

A new report from a British government think tank offers some clear insights into the Starmer administration’s plan to introduce a universal digital ID.

That digital ID – in one form or another – is a major part of the endgame is not any kind of revelation. We’ve known that was the plan for years, but the report tells us quite a lot about how it’s going to be sold to the public.

I guess we should go ahead and dive in.

The Thinktank

The report was published just this week by Labour Together – formerly “The Common Good” – a thinktank founded in “Labour’s wilderness years” to help “make Labour electable again”, according to their about page.

Translation: They’re centrist globalist Blairite shills who helped undermine and destroy the only vaguely genuine movement in the last 50 years of British “democracy” and now publish reports to push a globalist agenda.

According to the Electoral Commission, they received over £9 million in donations last year (from only 234 donors), much of which seems to have been “donated” by Labour Together Limited, a for-profit company. The murky world of Westminster finances is not my focus, however, and I’m sure it’s at least passably legal and no more corrupt than is standard practice in those circles.

Exactly how a think tank with eighteen employees, ten advisors, four policy fellows, and five board members manages to spend £9 million writing a weekly newsletter, publishing a report every two months, and running some online polls, I have no idea.

It’s a good question for another time, perhaps. For now, we know everything we need to know – Labour Together are old-fashioned New Labour types shilling for globalist tyranny.

The Authors

We won’t talk long about the authors, because there’s not much point. They’re names on a title page, and while I’m sure they believe the words they write (or at least, the words they asked ChatGPT to write), it’s also true that their job requires them to believe it.

I just wanted to point out that the three supposed authors of this work on technology have no tech backgrounds at all. The closest any of them comes is Laurel Boxall: the “about the authors” section of the report proudly declares she has a Masters from Cambridge “focusing on AI”, but a bit of digging reveals it’s a Masters in “Digital Humanities” with a focus on fictional portrayals of AI in media. Apparently, that qualifies you to become a “tech policy advisor”.

Which is interesting, because it demonstrates that they consider fictional portrayals of AI to be as relevant to this work as real AI experience. An apposite commentary on the state of society in general.

Keep reading

OpenAI snags $256 million contract with US Defence Department

OpenAI has won a US$200 million (S$256 million) contract for a pilot programme aimed at helping the US Defence Department determine how it could use artificial intelligence (AI) for a range of administrative and security tasks.

The one-year contract, which the Defence Department disclosed on June 16, is the latest push by the ChatGPT maker to sell its technology to the US government. 

In a blog post on June 16, OpenAI said the contract is the company’s first project under a new entity it is calling OpenAI for Government. 

As part of the effort, OpenAI will work with the Defence Department to come up with ways that AI can help with administrative tasks, such as getting healthcare for US military members and helping prevent cyber attacks.

OpenAI for Government consolidates all of OpenAI’s existing government projects in one area of the company, including ChatGPT Gov – a version of ChatGPT meant for government workers – as well as its work with US space agency Nasa, the National Institutes of Health, the Air Force Research Laboratory and the Treasury Department.

The company also said late in 2024 that it would partner with weapons maker Anduril Industries to build AI for anti-drone systems, in another sign of its expanding work with the US government, particularly around national security. 

Keep reading

Spy Satellite Uncovers Massive Stealth Flying Wing At Secretive Chinese Base

China is well aware that Western spy satellites, including those operated by the U.S., maintain constant overhead surveillance of high-value military assets, such as bases and research facilities. 

The deliberate exposure of a previously unseen, large, low-observable flying-wing HALE (High-Altitude Long-Endurance) unmanned aerial vehicle at the Malan test facility may not have been an accident.

Instead, it appears to be a deliberate act of signaling by Beijing to the Trump administration, highlighting the rapid acceleration of China’s next-generation air combat capabilities at a time when the global security environment is rapidly deteriorating.

With the war in Ukraine ongoing and tensions in the Middle East escalating into a hot crisis, Beijing’s timing suggests an intent to assert technological parity and deterrence against the U.S. Broadly speaking, the world is entering a more dangerous and unstable era — a shift from a unipolar world with the U.S. in control to a bipolar geopolitical order, where volatility is expected to intensify throughout the 2030s.

The War Zone’s Tyler Rogoway cited new spy satellite images via Planet Labs that show the previously unseen HALE drone at a secretive test base near Malan in Xinjiang province.

“Specifically, the craft was parked outside of a sprawling new facility that was built very recently to the east of the base, connected to it by a very long taxiway leading to a security gate,” Rogoway said.

Keep reading