Mind-altering ‘brain weapons’ no longer only science fiction, say researchers

Sophisticated and deadly “brain weapons” that can attack or alter human consciousness, perception, memory or behaviour are no longer the stuff of science fiction, two British academics argue.

Michael Crowley and Malcolm Dando, of Bradford University, are about to publish a book that they believe should be a wake-up call to the world.

This weekend they are travelling to The Hague for a key meeting of states, where they will argue that the human mind is a new frontier in warfare and that urgent global action is needed to prevent the weaponisation of neuroscience.

“It does sound like science fiction,” said Crowley. “The danger is that it becomes science fact.”

The book, published by the Royal Society of Chemistry, explores how advances in neuroscience, pharmacology and artificial intelligence are coming together to create a new threat.

“We are entering an era where the brain itself could become a battlefield,” said Crowley. “The tools to manipulate the central nervous system – to sedate, confuse or even coerce – are becoming more precise, more accessible and more attractive to states.”

The book traces the fascinating, if appalling, history of state-sponsored research into central nervous system (CNS)-acting chemicals.

During the cold war and after, the US, Soviet Union and China all “actively sought” to develop CNS-acting weapons, said Crowley. Their purpose was to cause prolonged incapacitation of people, including “loss of consciousness or sedation or hallucination or incoherence or paralysis and disorientation”.

The only time a CNS-acting weapon has been used at scale was by the Russian Federation in 2002, when security forces deployed fentanyl derivatives to end the Moscow theatre siege, in which armed Chechen militants had taken 900 theatregoers hostage.

Most of the hostages were freed, but more than 120 died from the effects of the chemical agents and an undetermined number suffered long-term damage or died prematurely.

Since then, research has made significant advances. The academics argue that the ability exists to create much more “sophisticated and targeted” weapons that would once have been unimaginable.

Dando said: “The same knowledge that helps us treat neurological disorders could be used to disrupt cognition, induce compliance, or even in the future turn people into unwitting agents.”

The threat is “real and growing”, they say, but gaps in international arms control treaties prevent it from being tackled effectively.

Keep reading

Quantum teleportation between photons from two distant light sources achieved

Everyday life on the internet is insecure. Hackers can break into bank accounts or steal digital identities. Driven by AI, attacks are becoming increasingly sophisticated. Quantum cryptography promises more effective protection. It makes communication secure against eavesdropping by relying on the laws of quantum physics. However, the path toward a quantum internet is still fraught with technical hurdles.

Researchers at the Institute of Semiconductor Optics and Functional Interfaces (IHFG) at the University of Stuttgart have now made a decisive breakthrough in one of the most technically challenging components, the quantum repeater. They report their results in Nature Communications.

Nanometer-sized semiconductor islands for information transfer

“For the first time worldwide, we have succeeded in transferring quantum information among photons originating from two different quantum dots,” says Prof. Peter Michler, head of the IHFG and deputy spokesperson for the Quantenrepeater.Net (QR.N) research project.

What is the background? Whether WhatsApp or a video stream, every digital message consists of zeros and ones. The same applies to quantum communication, in which individual light particles serve as the carriers of information.

Zero or one is then encoded in two different directions of polarization of the photons, i.e., their orientation in the horizontal or vertical direction, or in a superposition of both states. Because photons follow the laws of quantum mechanics, their polarization cannot, in general, be read out without leaving traces. Any attempt to intercept the transmission would inevitably be detected.
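
To make the idea concrete, here is a minimal sketch of why measuring in the wrong polarization basis leaves traces. It is a generic textbook illustration, not the Stuttgart group’s setup; the state vectors and function names are chosen for this example only:

```python
import numpy as np

# Polarization states as 2-component vectors:
# |H> (horizontal) encodes 0, |V> (vertical) encodes 1,
# and a diagonal state is an equal superposition of both.
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
D = (H + V) / np.sqrt(2)

rng = np.random.default_rng(seed=1)

def measure_hv(state):
    """Projective measurement in the H/V basis: returns the observed
    bit and the collapsed post-measurement state (Born rule)."""
    p_h = abs(np.dot(H, state)) ** 2
    return (0, H) if rng.random() < p_h else (1, V)

# An eavesdropper measuring a diagonal photon in the H/V basis gets a
# random bit and irreversibly destroys the superposition -- this is the
# disturbance that makes interception detectable.
outcomes = [measure_hv(D)[0] for _ in range(10)]
print(outcomes)  # roughly half 0s and half 1s
```

In a real link, sender and receiver can compare a sample of bits to spot this kind of disturbance.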

Making the quantum internet ready for the fiber-optic infrastructure

Another challenge: an affordable quantum internet would use optical fibers, just like today’s internet. However, light signals attenuate as they travel, so they have only a limited range. Conventional light signals therefore need to be renewed approximately every 50 kilometers using an optical amplifier.
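
The scale of the problem can be estimated with textbook numbers: standard telecom fiber attenuates light by roughly 0.2 dB per kilometre at 1550 nm (a commonly cited figure, not one given in the article), so after 50 km only about a tenth of the light survives:

```python
# Back-of-the-envelope fiber attenuation, assuming the textbook figure
# of ~0.2 dB/km for standard telecom fiber at 1550 nm.
loss_db_per_km = 0.2
distance_km = 50
total_loss_db = loss_db_per_km * distance_km       # 10 dB over 50 km
surviving_fraction = 10 ** (-total_loss_db / 10)   # dB -> linear scale
print(f"{surviving_fraction:.0%} of the light survives {distance_km} km")
```

Quantum signals suffer the same attenuation, but as the next paragraph explains, they cannot be regenerated the same way.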

Because quantum information cannot simply be amplified, or copied and forwarded, this approach does not work for the quantum internet. However, quantum physics does allow information to be transferred from one photon to another, provided the information itself is never read out. This process is referred to as quantum teleportation.
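
The protocol can be followed in a small state-vector simulation. The sketch below implements textbook single-qubit teleportation (a generic illustration, not the IHFG photonic experiment): Alice and Bob share an entangled pair, Alice makes a joint measurement on her two qubits, and Bob recovers the unknown state after applying two classical correction bits:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Single-qubit gates and the two-qubit CNOT (control = first qubit)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Unknown state to teleport (any normalized amplitudes work)
psi = np.array([0.6, 0.8])

# Qubits 1 and 2 share the Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)              # 3-qubit state, 8 amplitudes

# Alice: CNOT(qubit 0 -> qubit 1), then Hadamard on qubit 0
state = np.kron(CNOT, I2) @ state
state = np.kron(H, np.eye(4)) @ state

# Alice measures qubits 0 and 1 (Born-rule sampling)
amps = state.reshape(2, 2, 2)           # indices (q0, q1, q2)
probs = np.sum(np.abs(amps) ** 2, axis=2)
m0, m1 = divmod(rng.choice(4, p=probs.ravel()), 2)

# Bob's collapsed qubit, plus classical corrections X^m1, then Z^m0
bob = amps[m0, m1, :]
bob = bob / np.linalg.norm(bob)
if m1: bob = X @ bob
if m0: bob = Z @ bob

print("measurement bits:", m0, m1)
print("Bob's qubit:", bob)              # [0.6, 0.8] -- psi has been transferred
```

Alice’s measurement destroys her copy of the state, so nothing is cloned; only the two classical bits need to travel to Bob.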

Keep reading

Elon Musk Plans to End Prisons by Having His Robots Follow Crooks and Stop Them from ‘Doing Crime’

Tesla CEO and notorious hypemaster Elon Musk has suggested a new use for the company’s Optimus humanoid robots — having them track and surveil former criminals to prevent them from reoffending. Musk plans to end prisons in favor of a “more humane form of containment of future crime.”

Futurism reports that at a recent Tesla shareholder meeting, Elon Musk unveiled his latest unconventional idea: using the company’s Optimus robots to monitor criminals and intervene if they attempt to commit new crimes. Musk framed this as a “more humane form of containment of future crime” compared to the current prison system.

“You don’t have to put people in prisons and stuff,” Musk said. “If somebody’s committed crime, you now get a free Optimus and it’s just gonna follow you around and stop you from doing crime. But other than that you get to do anything. It’s just gonna stop you from committing crime, that’s really it.”

The proposal immediately generated controversy and raised numerous ethical and practical questions. Musk provided few details on how exactly the robots, which are still in early development, would be able to identify and prevent criminal acts. There are also significant concerns around privacy, surveillance, and the societal implications of having for-profit robots integrated into the criminal justice system.

“This shows a complete lack of understanding of criminology, the justice system, and frankly, basic human rights,” said Michael Johnson, a legal expert at the Brookings Institution. “The idea that a robot can somehow rehabilitate offenders better than comprehensive support programs is absurd. It comes across as a dystopian surveillance scheme more than anything else.”

Others noted the current limitations of Tesla’s Optimus robots, which have so far only been shown performing basic tasks like waving and carrying boxes. AI ethicists highlighted the immense challenges in programming robots to make nuanced judgements in dynamic real-world situations.

Keep reading

Mark Zuckerberg’s Meta Notches Legal Win Against Authors in AI Copyright Case

Mark Zuckerberg’s Meta has prevailed in a copyright infringement lawsuit brought by authors who claimed the company violated their rights by using millions of copyrighted books to train its AI language model, Llama. Although the decision is a win for Meta and other AI giants, the judge stated the decision was more about the plaintiffs’ poor case than about Meta’s approach to AI training.

Bloomberg Law reports that a San Francisco federal court has ruled in favor of Mark Zuckerberg’s Meta in a lawsuit brought by a group of authors. The plaintiffs alleged that Meta had violated their copyrights by using millions of books to train its generative AI model, Llama, without obtaining permission.

Judge Vince Chhabria determined that Meta’s use of the copyrighted books for AI training falls under the fair use defense in copyright law. However, the judge cautioned that his opinion should not be interpreted as a blanket endorsement of Meta’s practices, stating that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

The judge’s decision appears to hinge on the authors’ failure to effectively argue their case, rather than a definitive legal interpretation of the fair use doctrine in the context of AI training. This suggests that future cases involving similar issues may yield different outcomes, depending on the strength of the arguments presented.

The lawsuit, which was closely watched by the tech industry and legal experts, is believed to be the first of its kind to challenge the use of copyrighted material for training AI models. As generative AI technologies continue to advance and become more prevalent, the question of how copyright law applies to the use of protected works in AI training is likely to remain a contentious issue.

A Meta spokesperson told Breitbart News, “We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”

Keep reading

The AI Epidemic On College Campuses Exposes How Broken Our Higher Education System Is

It’s no secret that higher education has been a mess for some time now. From DEI initiatives to seemingly never-ending protests to the skyrocketing college debt crisis to the ridiculous ideological imbalance amongst professors, our once hallowed institutions of higher learning are ripe for root-and-stem reform.

But just when it seemed that the American college experience couldn’t become any worse, artificial intelligence came roaring onto the scene. Now, instead of popping Adderall and Ritalin to power through finals like the good old days, college kids are pawning their assignments off on AI.

This all comes as college professors (many at supposedly prestigious institutions) bemoan that their students either can’t be bothered to read normal college-level assignments or simply can’t because of their limited vocabulary and critical thinking skills. Even professors at notoriously leftist schools have had enough, venting their frustration at any left-of-center outlet that will listen.

These students are supposedly going to college for a particular area of study, meaning they theoretically want to learn about it. Yet they just pass their assignments on to ChatGPT. How are they supposed to have jobs in STEM, or even in subjects like literature, if they can’t even comprehend the material without AI?

If AI is producing all their work in college, isn’t it reasonable to assume that it will continue to be a crutch for them when they become chemists, lawyers, or even teachers themselves? Then the question becomes whether or not the student, given the (non)education he’s received, is worth hiring at all. AI can do it better.

Crib sheets, CliffsNotes, and stimulants are one thing, but relying on a machine to complete even simple assignments, and therefore forgoing any attempt to engage with the material, presents a looming competency crisis. Not only does it pose an existential danger to how our society and economy function, it poses a threat to the broken diploma pipeline model embodied by today’s higher education system.

The solution to this epidemic seems rather obvious. Students may use AI as a research assistant, no different from Google Scholar, but submission of any assignment or essay that has a single sentence crafted by anything other than the student’s own mind should receive an immediate failing grade as well as a referral for expulsion. Some smaller schools, like my alma mater Washington and Lee University, already have an honor system in place that has the same expectations and penalties.

An even stricter move would be to heavily weight course grades toward in-class tests and essays done with pencil and paper with no devices allowed (besides maybe calculators for STEM classes).

Keep reading

How Much Online Content Will Be Replaced by Artificial Intelligence?

In 2017, Stephen Hawking warned Wired magazine that artificial intelligence could one day outperform humans. “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.” AI has already been incorporated into many aspects of our lives, from medical tests and procedures to gaming and predictive text – but it is now becoming more common to see AI-generated content on the internet, including news sites.

Will Artificial Intelligence Replace Human Writing?

ChatGPT’s most advanced version, GPT-4, can generate documents up to 25,000 words long and comprehend more than 26 languages. It and other artificial intelligence tools continue to learn and to be incorporated into online content, and many worry that this automated text will soon entirely replace human writing. In fact, a well-known Washington newspaper bragged about using AI to generate more than 850 articles about the 2016 Rio Olympics.

AI-generated content has caused considerable mistrust because of deepfakes, misleading information, and scams. As Liberty Nation reported, the Center for Countering Digital Hate explained that artificial intelligence has a lot of influence and that AI image generators created “election disinformation in 41% of cases, including images that could support false claims about candidates or election fraud.” The Fourth Estate was once a trusted and valuable resource for news, but that public trust has been falling recently. How much worse will it be if outlets start depending on AI to broadcast information?

Keep reading

Navy to Test Microwave Anti-Drone Weapon at Sea in 2026 

The U.S. Navy plans to mount a high-powered microwave prototype system on one of its vessels as early as 2026, according to the Navy’s Fiscal Year 2025 budget documents.

The system will come from the Navy’s Project METEOR, which is developing a directed energy weapon system prototype that the service plans to integrate on ships in 2026.

METEOR will “provide capability with low cost-per-shot, deep magazine, tactically significant range, short time engagement for multi-target approach, dual deception and defeat capability,” according to the budget documents.

This system will be the Navy’s first high-powered microwave, a type of directed energy weapon system that the Army, Navy and Air Force are exploring to counter cheap unmanned aerial systems.

Unlike the directed energy systems the Navy already uses, which disable targets with a focused beam of light, the METEOR prototype will employ a different kill mechanism: HPM systems use microwave energy to inflict damage on the electronics inside targets.

The Navy believes that this mechanism, which is unique to HPM systems, will be useful in defeating anti-ship ballistic missiles like the ones fielded by China’s People’s Liberation Army Rocket Force.

“Currently, the Joint Force suffers from a lack of redundant, resilient hard kill/soft kill options against stressing stream raid threats of Anti-Ship Ballistic Missiles (ASBM),” read the FY 2025 budget documents.

“The issue is particularly acute in the [U.S. Indo-Pacific Command area of responsibility] due to the vast geographic distances involved, ship magazine size and adversary actions.”

Keep reading

Israeli Spyware Firm NSO Demands “Urgent” Meeting with Blinken amid Gaza War Lobbying Effort

On November 7, NSO Group, the Israeli spyware company infamous for its Pegasus phone-tapping technology, sent an urgent email and letter by UPS to request a meeting with Secretary of State Antony Blinken and officials at the U.S. State Department.

“I am writing on behalf of NSO Group to urgently request an opportunity to engage with Secretary Blinken and the officials at the State Department regarding the importance of cyber intelligence technology in the wake of the grave security threats posed by the recent Hamas terrorist attacks in Israel and their aftermath,” wrote Timothy Dickinson, a partner at the Los Angeles-based law firm Paul Hastings, on behalf of NSO.

In the last two years NSO’s reputation has taken a beating amid revelations about its spyware’s role in human rights abuses. 

As controversy erupted over its spyware’s role in authoritarian governments’ surveillance, NSO Group was blacklisted by the U.S. Department of Commerce in November 2021, “to put human rights at the center of US foreign policy,” the agency said at the time. A month after the blacklisting, it was revealed that Pegasus had been used to spy on American diplomats.

NSO’s letter to Blinken — publicly filed as part of Paul Hastings’s obligation under the Foreign Agents Registration Act — is part of the company’s latest attempt to reinvent its image and, most importantly, a bid to reverse the blacklisting. (Neither the State Department nor Paul Hastings responded to requests for comment.)

Keep reading

Biometrics Look Increasingly Likely for Schools Afraid of Gun Attacks

Schools in the U.S. state of New York got the OK last week to use biometric systems short of facial recognition, a move that gave more momentum to policies arming teachers with guns.

One of the first reactions came from the Security Industry Association, which criticized the fact that the state’s department of education continued to bar face biometrics collection of any kind on school grounds.

The board announced that other biometric systems can be used by schools so long as boards examine how doing so would affect privacy and civil rights, how effective the systems would be, and how parents feel about the idea.

SIA members had opposed a temporary three-year blanket ban on biometrics investment imposed while campus deployment of facial recognition systems, specifically, was studied. The lifting of the ban largely followed the recommendations of the resulting report.

The group largely blames the continuing prohibition on face scans on “intense pressure” from the American Civil Liberties Union, which had filed a lawsuit to prevent a school district from putting AI behind CCTV networks.

Keep reading

26-Year-Old Tech CEO Pava LaPere Found Dead With Blunt-Force Trauma Inside Her Apartment

Tech CEO Pava LaPere, 26, has been found dead in her apartment with “blunt force trauma.”

LaPere had been on the Forbes “30 under 30” list for her success in the tech world.

LaPere was reported missing on Monday and was found a short time later in her Mount Vernon luxury apartment.

The EcoMap tech company founder was found with “blunt force trauma,” according to local police.

Few details are available at this time. The New York Post reported that her social media accounts indicated “she was single” and that it “is unknown whether LaPere had any guests over prior to her death.”

Keep reading