Swarms of AI “killer robots” are the future of war: If that sounds scary, it should

Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice versa. We would then find ourselves on a destructively different planet, in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle, or UAV, intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles, or UGVs, while the Navy is experimenting with both unmanned surface vessels, or USVs, and unmanned undersea vessels, or UUVs (drone submarines). China, Russia, Australia and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

Google AI Says Calling Communism “Evil” is “Harmful and Misleading”

Google’s Gemini AI program was caught in another example of egregious bias when it said that calling communism “evil” was “harmful and misleading”.

The company’s multimodal large language model allows users to generate AI images from text prompts, but people began to notice that the program almost completely erased white people from history and has a ludicrously far-left prejudice.

When a user asked the program to “generate an image about (the) evils of communism,” it point blank refused.

Gemini claimed communism was “nuanced” and that judging it through a negative lens is “misleading and harmful”.

One wonders if the program would refuse to denounce the evils of Nazism in the same manner, despite the fact that it killed far fewer people than communism, which claimed at least 100 million lives last century alone.

As we highlighted yesterday, the program also refuses to say pedophilia is “wrong,” describes nonces as “MAPs” and says calling them “evil” is “harmful”.

Unregulated, Exploitative, and on the Rise: Vera Institute’s Report on Electronic Monitoring

Incarceration rates in the United States have long been among the highest in the world, and in response to the systemic flaws and biases unveiled by the renewed scrutiny of the criminal legal system, many advocates have championed new policies aimed at reducing sentences and improving conditions in prisons. Some have touted the use of electronic monitoring (EM) as an alternative fix to ensure that people whose cases have yet to be adjudicated are not physically detained. Unsurprisingly, those most often making these claims are the for-profit firms offering EM technology and the governmental agencies they contract with, and there is little data to back them up. In a new report, the Vera Institute of Justice provides the most detailed data yet showing that these claims don’t match reality, and outlines a number of issues with how EM is administered across the country.

Another Private Sector Wild West

According to interviews and an analysis of policies across hundreds of jurisdictions, the Vera Institute found that the use of EM is an unregulated patchwork across counties, states, and the federal government. As private firms market new products, testing and quality assurance have failed to keep pace with the drive to win contracts with local and state law enforcement agencies. Relying on technology produced by such a disordered industry can lead to reincarceration due to faulty equipment, significantly increased surveillance of those being monitored and their households, and more onerous requirements for people under EM than they would face when dealing with probation or parole officers.

Even the question of jurisdictional authority is a mess. The Vera Institute explains that agencies frequently rely on private firms that further subcontract out the hardware or software, and individuals in rural areas can create profitable businesses for themselves that serve only as middlemen between the criminal justice system and the hardware and software vendors. The Vera Institute suggests that this can lead to corruption, including extortion of people held on EM by these small subcontractors, often with no oversight or public-sector transparency. That presents a problem for the data collection, public records requests, and other investigative work that policymakers, advocates, and journalists rely on to find the truth and inform policy.

Further, the costs of EM are frequently passed on to the people forced to use it, sometimes regardless of whether they have the means to pay, whether the EM is an obstacle to their employment, or whether they are under monitoring pre-trial (where the presumption of innocence should apply) or post-sentencing (after a guilty verdict). And these costs don’t necessarily buy them greater “liberty,” as many forms of hardware or app-based software increase around-the-clock surveillance at the hands of private firms, once again with little to no oversight or ability to access data through public records requests.

Google Gemini Director’s Disturbingly Racist Posts Discovered: This Explains Why Its AI Erased White People

In the week that Google’s new artificial intelligence platform launched, its seeming inability to produce images of white men regardless of the prompt made it an unmitigated disaster.

And with more information unearthed regarding the people responsible for Google Gemini’s insufferable wokeness, its deficiencies look much less like a bug and more like a feature.

As reported Thursday by the Babylon Bee’s news site, Not the Bee, the senior director of the project, Jack Krawczyk, addressed Gemini’s problems in a post on the X social media platform — before he locked his account.

“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said.

Between the wording of his post and Google’s decision to suspend Gemini’s image-generating feature, it appears the tech giant wants the insanely diverse images to seem like a simple mistake.

After X users unearthed some of Krawczyk’s past posts, however, the inherent bias of the platform seems much less accidental.

Among those — shared by the @LeftismForU account — are gems such as, “White privilege is f***ing real,” “I don’t mind paying more taxes and investing in overcoming systemic racism,” and, “This is America, where racism is the #1 value our populace seeks to uphold above all.”

40,000 YEARS AGO, NEANDERTHALS DEVELOPED THIS STICKY TECHNOLOGY TO HELP THEM CRAFT STRONGER TOOLS

According to a new study, Neanderthals were using an ochre-based compound as a glue-like adhesive at the Mousterian type site in Le Moustier, France, around 40,000 years ago.

Led by Patrick Schmidt at Eberhard Karls University of Tübingen in Germany, and published in Science Advances, the study documents the earliest known use of a multi-component adhesive in Europe, indicating complex cognitive abilities and technological innovation among Neanderthals.

A STICKY SITUATION

The research team reexamined stone tools from Le Moustier, which had been untouched since the 1960s, and found traces of a mixture of ochre and bitumen on several artifacts, including scrapers, flakes, and blades. Ochre, a naturally occurring reddish earth pigment, and bitumen, a common component of asphalt, were combined to create a sticky glue-like mass for tool making. 

This mixture was found to be more than 50% ochre, which was surprising to the team because air-dried bitumen alone can serve as an adhesive, but loses its adhesive properties when mixed with such large proportions of ochre. So the team ran some tests.

NEW PHASE OF MATTER CREATED DURING EXPERIMENTS WITH EXOTIC PARTICLES IN QUANTUM PROCESSOR

A new phase of matter previously recognized only in theory has been created by researchers using a quantum processor, demonstrating control of an exotic class of particles called non-Abelian anyons.

Neither fermions nor bosons, these exotic anyons fall someplace in between and are believed to exist only in two-dimensional systems. Controlling them allowed the creation of an entirely new phase of matter the researchers now call non-Abelian topological order.

THE WORLD OF NON-ABELIAN ANYONS

In our everyday world of three dimensions, just two types of particles exist: bosons and fermions. Bosons include photons, the particles of light, as well as the Higgs boson, whereas fermions include the protons, neutrons, and electrons that constitute the matter throughout our universe.

Non-Abelian anyons are classified as quasiparticles, meaning that they are particle-like excitations that persist for periods within a specific state of matter. They are of particular interest for their ability to store information, which may have a variety of technological applications, particularly in quantum computing.

One reason is the stability non-Abelian anyons possess compared to qubits, which are currently used in quantum computing platforms. Unlike qubits, which can at times be less than reliable, non-Abelian anyons store information in the pattern of their movements around one another, largely shielded from the influence of their environment, making them ideal candidates for use in computational systems once they can be harnessed at larger scales.
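The “non-Abelian” in the name refers to the fact that the order in which anyons are exchanged (braided) changes the outcome. A minimal sketch of that property, using two illustrative 2x2 unitary matrices as stand-ins for braid operations (these matrices are hypothetical, chosen only for the demonstration, and are not the braid matrices of any real anyon model):

```python
import numpy as np

# Two illustrative unitary "braid" operations: a rotation about X
# and a rotation about Z, chosen only to demonstrate non-commutativity.
theta = np.pi / 4
sigma1 = np.array([[np.cos(theta), -1j * np.sin(theta)],
                   [-1j * np.sin(theta), np.cos(theta)]])
sigma2 = np.array([[np.exp(-1j * theta), 0],
                   [0, np.exp(1j * theta)]])

ab = sigma1 @ sigma2  # exchange 1, then exchange 2
ba = sigma2 @ sigma1  # exchange 2, then exchange 1

# For Abelian (commuting) statistics these products would be equal;
# here the order of the exchanges changes the final state.
print(np.allclose(ab, ba))  # False
```

For Abelian anyons (or ordinary bosons and fermions) each exchange merely multiplies the state by a phase, so the order never matters; the non-commutativity above is what allows a braiding history to encode information.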

In recent research, Ashvin Vishwanath, the George Vasmer Leverett Professor of Physics at Harvard University, used a quantum processor to test how non-Abelian anyons might be leveraged to perform quantum computation.

INVENTORS CREATE WORLD’S FIRST ‘WATER BATTERIES’ THAT WON’T CATCH FIRE OR EXPLODE

An international team of researchers and industry collaborators say they have invented the world’s first water batteries.

Unlike popular lithium-ion batteries, which power everything from mobile phones to electric cars but are made with dangerous and toxic electrolytes, these new batteries use only water. The result is an energy storage device that is less toxic, fully recyclable, and one that will never catch fire or explode.

Although the performance of water batteries is still short of lithium-ion batteries, their inventors say numerous advances and planned improvements should close that gap within the next five to ten years.

In a traditional battery, electrolytes allow charge-carrying ions to flow between the positive and negative terminals. In lithium-ion batteries, a liquid organic solvent that dissolves lithium salt acts as the electrolyte. Unfortunately, these materials often don’t react well to high temperatures or pressures. In the worst cases, they can burn up electronic devices, set cars on fire, or even explode.

Hoping to find a better alternative, an international team of research scientists led by RMIT University Distinguished Professor Tianyi Ma and selected industry partners figured out a way to replace this dangerous and often toxic electrolyte with plain old H2O, a.k.a. water. The result is an energy storage medium that works just like a lithium-ion battery but without all of the unwanted risks.

What The Rise Of AI Videos Means For The Future

OpenAI, a company backed by Microsoft, is one of the world’s leading artificial intelligence research organizations paving the way for AI advancement across multiple technology sectors.

Best known for the creation of ChatGPT, the company debuted previews of its new prompt-based AI video generation software, Sora, on February 15th. Sora is a text-to-video tool: users input a text prompt, and the software uses generative AI to create a video based on it.

While Sora isn’t the first of its kind as far as generative AI video tools are concerned, industry insiders have pointed to its crispness and detail, its ability to generate complex scenes, and its longer videos (at present up to sixty seconds, exceeding previous models) as a “significant leap” for the development of the technology.

On its website, OpenAI states, “We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” while providing several brief examples of videos generated with the software without any further modification.

The videos span a variety of scenes, including photorealistic clips of puppies playing in the snow, historic footage of California during the gold rush, a robot living in a cyberpunk world, woolly mammoths in the snow, an animation of a cute fluffy creature playing with fairies in an enchanted forest, and various cityscapes along with other animations.

Your fingerprints can be recreated from the sounds made when you swipe on a touchscreen — Chinese and US researchers show new side channel can reproduce fingerprints to enable attacks

An interesting new attack on biometric security has been outlined by a group of researchers from China and the US. PrintListener: Uncovering the Vulnerability of Fingerprint Authentication via the Finger Friction Sound [PDF] proposes a side-channel attack on the sophisticated Automatic Fingerprint Identification System (AFIS). The attack leverages the sound characteristics of a user’s finger swiping on a touchscreen to extract fingerprint pattern features. Following tests, the researchers assert that they can successfully attack “up to 27.9% of partial fingerprints and 9.3% of complete fingerprints within five attempts at the highest security FAR [False Acceptance Rate] setting of 0.01%.” This is claimed to be the first work that leverages swiping sounds to infer fingerprint information.
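To put those success rates in context: at a false-acceptance rate of 0.01%, a purely random attempt is accepted about once in 10,000 tries, so five random attempts succeed only about 0.05% of the time. The reported 27.9% partial-fingerprint rate is therefore hundreds of times above chance. A quick sanity check of that arithmetic:

```python
far = 0.0001   # false acceptance rate of 0.01%
attempts = 5

# Probability that at least one of five independent random attempts
# is falsely accepted by the matcher.
p_chance = 1 - (1 - far) ** attempts
print(round(p_chance, 6))  # 0.0005

# How far above chance is the reported 27.9% partial-print success rate?
print(round(0.279 / p_chance))  # roughly 558x
```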

Biometric fingerprint security is widespread and widely trusted. If things continue as they are, the fingerprint authentication market is projected to be worth nearly $100 billion by 2032. However, organizations and individuals have become increasingly aware that attackers might want to steal their fingerprints, so some have started keeping their fingerprints out of sight and have grown wary of photos showing their hand details.

Without contact prints or finger-detail photos, how can an attacker hope to get any fingerprint data to enhance MasterPrint and DeepMasterPrint dictionary attack results on user fingerprints? One answer is as follows: the PrintListener paper says that “finger-swiping friction sounds can be captured by attackers online with a high possibility.” The source of the finger-swiping sounds can be popular apps like Discord, Skype, WeChat, FaceTime, and the like: any chatty app where users carelessly perform swiping actions on the screen while the device mic is live. Hence the side-channel attack’s name – PrintListener.
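The paper’s actual feature-extraction pipeline is not reproduced here, but as a rough illustration of the first step any such acoustic side channel needs, the sketch below computes a short-time magnitude spectrogram of an audio signal: the time-frequency representation from which friction-sound features could then be mined. The function, frame sizes, and the synthetic “swipe” signal are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128, rate=44100):
    """Short-time Fourier magnitude spectrogram of a 1-D signal.

    Returns (times, freqs, mags), where mags has shape
    (n_frames, frame_len // 2 + 1).
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    mags = np.abs(np.fft.rfft(frames, axis=1))   # per-frame spectra
    times = np.arange(n_frames) * hop / rate     # frame start times (s)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate)
    return times, freqs, mags

# Illustrative stand-in for a recorded swipe: 0.1 s of noise at 44.1 kHz.
rng = np.random.default_rng(0)
swipe = rng.normal(size=44100 // 10)
t, f, m = spectrogram(swipe)
print(m.shape)  # (33, 129)
```

A real attack would of course go much further, matching such spectral features against fingerprint pattern statistics; this sketch only shows the kind of representation that makes friction sounds analyzable at all.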

Lawmakers Want Pause on Federal Funds for Predictive Policing

Should data scientists be in the business of fingering Americans for crimes they could someday commit? Last month, a group of federal lawmakers asked the Department of Justice to stop funding such programs—at least until safeguards can be built in. It’s just the latest battle over a controversial field of law enforcement that seeks to peer into the future to fight crime.

“We write to urge you to halt all Department of Justice (DOJ) grants for predictive policing systems until the DOJ can ensure that grant recipients will not use such systems in ways that have a discriminatory impact,” reads a January letter to Attorney General Merrick Garland from U.S. Sen. Ron Wyden (D–Ore.) and Rep. Yvette Clarke (D–N.Y.), joined by Senators Jeff Merkley (D–Ore.), Alex Padilla (D–Calif.), Peter Welch (D–Vt.), John Fetterman (D–Pa.), and Ed Markey (D–Mass.). “Mounting evidence indicates that predictive policing technologies do not reduce crime. Instead, they worsen the unequal treatment of Americans of color by law enforcement.”

The letter emphasizes worries about racial discrimination, but it also raises concerns about accuracy and civil liberties that, since day one, have dogged schemes to address crimes that haven’t yet occurred.
