Fears Pentagon was ‘building killer robots in the basement’ sparked stricter AI rules, DoD official claims

Fears the Pentagon has been ‘building killer robots in the basement’ may have led to stricter AI rules mandating that all systems be approved before deployment.

The Department of Defense (DoD) recently updated its AI rules amid ‘a lot of confusion about’ how it plans to use self-decision-making machines on the battlefield, according to the deputy assistant defense secretary.

Michael Horowitz explained at an event this month that the ‘directive does not prohibit the development of any systems,’ but will ‘make clear what is and isn’t allowed’ and uphold a ‘commitment to responsible behavior,’ as it develops lethal autonomous systems.

While the Pentagon believes the changes should put the public’s mind at ease, some have said they are not ‘convinced’ by the efforts.

News of the update to the Pentagon’s 2012 directive ‘Autonomy in Weapon Systems’ has sparked a debate online, with many people saying, ‘If the Pentagon says they’re not doing it, they’re doing it.’

DailyMail.com has reached out to the DoD for comment.

The DoD has been aggressively pushing to modernize its arsenal with autonomous drones, tanks, and other weapons that select and attack a target without human intervention.

Mark Brakel, director of the advocacy organization Future of Life Institute (FLI), told DailyMail.com: ‘These weapons carry a massive risk of unintended escalation.’

He explained that AI-powered weapons could misinterpret something, like a ray of sunlight, and perceive it as a threat, thus attacking foreign powers without cause.

Brakel said the result could be devastating because ‘without meaningful human control, AI-powered weapons are like the Norwegian rocket incident [a near nuclear armageddon] on steroids and they could increase the risk of accidents in hotspots such as the Taiwan Strait.’


Facial recognition used after Sunglass Hut robbery led to man’s wrongful jailing, says suit

A 61-year-old man is suing Macy’s and the parent company of Sunglass Hut over the stores’ alleged use of a facial recognition system that misidentified him as the culprit behind an armed robbery and led to his wrongful arrest. While in jail, he was beaten and raped, according to his suit.

Harvey Eugene Murphy Jr was accused of robbing a Houston-area Sunglass Hut of thousands of dollars of merchandise in January 2022, though his attorneys say he was living in California at the time of the robbery. He was arrested on 20 October 2023, according to his lawyers.

According to Murphy’s lawsuit, an employee of EssilorLuxottica, Sunglass Hut’s parent company, worked with its retail partner Macy’s and used facial recognition software to identify Murphy as the robber. The image run through the facial recognition system came from low-quality cameras, according to the lawsuit.

While the Houston police department was investigating the armed robbery, the EssilorLuxottica employee called police to say they could stop the investigation because the employee had identified one of the two robbers with the technology. The employee also said the system had pointed to Murphy as the culprit in two other robberies, according to the lawsuit.

When Murphy returned to Texas from California, he went to the department of motor vehicles (DMV) to renew his license. Within minutes of identifying himself to a DMV clerk, Murphy told the Guardian he was approached by a police officer who notified him there was a warrant out for his arrest for an aggravated robbery. Murphy said he was not told any details about his supposed crime except for the date the robbery occurred. He realized he was in Sacramento, California, at the time of the robbery – more than a thousand miles away.

“I almost thought it was a joke,” Murphy said.

Still, he was arrested and taken to the local county jail, where he was held for 10 days before being transferred to and processed in Harris county jail.

After a few days at Harris county, his alibi was confirmed by both his court-appointed defense attorney and the prosecutor, and the charges against him were ultimately dropped, according to the lawsuit.

Murphy was never convicted of a crime. Nonetheless, he says his detainment left him with deep scars. He was brutally beaten and gang-raped by three other men in the jail hours before he was released, he alleges. They threatened to kill him if he tried to report them to the jail staff, according to Murphy. After the alleged attack, Murphy remained in the same cell as them until he was released.

“That was kind of terrifying,” Murphy said. “Your anxiety is up so high, you’re still shaking the entire time. And I just got up on my bunk and just faced the wall and was just praying that something would come through and get me out of that tank.”

“The attack left him with permanent injuries that he has to live with every day of his life,” the lawsuit reads. “All of this happened to Murphy because the Defendants relied on facial recognition technology that is known to be error prone and faulty.”

Murphy did not realize facial recognition technology may have been used as evidence against him until two weeks ago, when he began working with his attorney, Daniel Dutko.

Dutko said he discovered from police documents that the Sunglass Hut worker shared camera footage with Macy’s, which employees from the department store chain used to identify Murphy. After that, Macy’s and Sunglass Hut contacted the police together, according to Dutko. Though Macy’s has retail partnerships with the eyewear brand in several locations, Macy’s had no connection to this robbery as the Sunglass Hut in question is a standalone location, he said.

“We feel very comfortable saying facial recognition software is the only possible explanation, and it’s the only reason why [Sunglass Hut] would go to Macy’s to try to identify him,” Dutko said.


SCIENTISTS SUCCESSFULLY TRAP INDIVIDUAL KRYPTON ATOMS TO CREATE THE FIRST-EVER ONE-DIMENSIONAL GAS

Scientists from the University of Nottingham’s School of Chemistry say they have successfully trapped individual krypton atoms to create the world’s first-ever one-dimensional gas. Atoms of krypton (Kr), a noble gas, were trapped inside a carbon nanotube using an advanced version of transmission electron microscopy (TEM).

“As far as we know, this is the first time that chains of noble gas atoms have been imaged directly, leading to the creation of a one-dimensional gas in a solid material,” said Professor Paul Brown, director of the Nanoscale and Microscale Research Centre (nmRC), University of Nottingham.

In the future, the team says they are planning to employ electron microscopy to directly image temperature-controlled phase transitions and chemical reactions in these types of one-dimensional systems, which may “unlock the secrets” of such unusual states of matter.


RAYTHEON IS BUILDING TWO ULTRA-HIGH POWER DIRECTED ENERGY WEAPONS FOR THE U.S. NAVY AND AIR FORCE

Defense contractor Raytheon has been tasked with designing, building, and testing a pair of directed energy weapons for the U.S. Navy and U.S. Air Force. Unlike cutting-edge laser systems that use the power of light to down airborne threats, the new weapons will use ultra-powerful microwave emitters to fry the electronics of attacking drones, missiles, and other electronically guided ordnance.

According to RTX Corporation, which owns Raytheon, the two prototypes are expected to be delivered in 2024 and 2026, respectively.

DIRECTED ENERGY WEAPONS INCREASINGLY VALUABLE IN ASYMMETRICAL WARFARE

Traditional munitions like bullets use kinetic force to damage or destroy attacking vehicles and personnel. Unfortunately, the cost of more accurate systems like guided missiles is often wildly disproportionate to the cost of the attacking vehicle.

For example, the Israeli Iron Dome system that uses missiles can cost as much as $100,000 per shot to bring down a drone or mortar that costs a mere fraction of that amount. Further complicating the issue is the fact that adversaries are increasingly using drones to mount attacks, including the October 7th, 2023, attack on Israel.

To counter what officials have termed asymmetrical threats, the Israeli government has installed a battery of laser cannons known as Iron Beam. According to an April 2022 tweet by Israeli Prime Minister Naftali Bennett, Iron Beam can down incoming mortars and drones at a cost of only $3.50 per shot.
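The cost asymmetry described above can be made concrete with a quick back-of-the-envelope calculation. The $100,000 and $3.50 per-shot figures come from the article; the attacking-drone cost is an illustrative assumption, not a reported figure:

```python
# Back-of-the-envelope cost-per-intercept comparison.
# Per-shot figures are from the article; the drone cost is an
# illustrative assumption for a cheap commercial attack drone.
IRON_DOME_COST_PER_SHOT = 100_000   # USD, interceptor missile (per the article)
IRON_BEAM_COST_PER_SHOT = 3.50      # USD, laser shot (per PM Bennett's 2022 tweet)
ASSUMED_DRONE_COST = 2_000          # USD, hypothetical attacking drone

def cost_ratio(defender_cost: float, attacker_cost: float) -> float:
    """Dollars the defender spends per dollar the attacker spends."""
    return defender_cost / attacker_cost

print(f"Iron Dome: {cost_ratio(IRON_DOME_COST_PER_SHOT, ASSUMED_DRONE_COST):.2f}x the attacker's cost")
print(f"Iron Beam: {cost_ratio(IRON_BEAM_COST_PER_SHOT, ASSUMED_DRONE_COST):.5f}x the attacker's cost")
```

Under these assumed numbers, a missile defense spends fifty dollars for every dollar the attacker spends, while the laser spends a fraction of a cent — which is the asymmetry the term describes.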


AlphaFold found thousands of possible psychedelics. Will its predictions help drug discovery?

Researchers have used the protein-structure-prediction tool AlphaFold to identify hundreds of thousands of potential new psychedelic molecules, which could help to develop new kinds of antidepressants. The research shows, for the first time, that AlphaFold predictions — available at the touch of a button — can be just as useful for drug discovery as experimentally derived protein structures, which can take months, or even years, to determine.

The development is a boost for AlphaFold, the artificial-intelligence (AI) tool developed by DeepMind in London that has been a game changer in biology. The public AlphaFold database holds structure predictions for nearly every known protein. Protein structures of molecules implicated in disease are used in the pharmaceutical industry to identify and improve promising medicines. But some scientists had been starting to doubt whether AlphaFold’s predictions could stand in for gold-standard experimental models in the hunt for new drugs.

“AlphaFold is an absolute revolution. If we have a good structure, we should be able to use it for drug design,” says Jens Carlsson, a computational chemist at the University of Uppsala in Sweden.


New British laser weapon in successful high power firing

During a trial at the MOD’s Hebrides Range, the DragonFire laser directed energy weapon (LDEW) system achieved the UK’s first high-power firing of a laser weapon against aerial targets.

The range of DragonFire is classified, but it is a line-of-sight weapon and can engage any visible target.

  • First high-power firing of a laser weapon against aerial targets
  • Laser boasts pinpoint accuracy and low long-term costs

“DragonFire exploits UK technology to be able to deliver a high power laser over long ranges. The precision required is equivalent to hitting a £1 coin from a kilometre away. Laser-directed energy weapons can engage targets at the speed of light, and use an intense beam of light to cut through the target, leading to structural failure or more impactful results if the warhead is targeted.

Firing it for 10 seconds is the cost equivalent of using a regular heater for just an hour. Therefore, it has the potential to be a long-term low-cost alternative to certain tasks missiles currently carry out. The cost of operating the laser is typically less than £10 per shot.”
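The cost comparison in the statement above can be sanity-checked with a rough sketch. The heater power and electricity price below are illustrative assumptions, not figures from the article; only the "10 seconds of firing ≈ one hour of a regular heater" equivalence and the under-£10-per-shot claim come from the MOD statement:

```python
# Rough energy-cost sketch of the MOD's "10 seconds of firing is the
# cost equivalent of running a regular heater for an hour" comparison.
# Both constants below are illustrative assumptions.
HEATER_POWER_KW = 2.0            # assumed typical domestic heater
ELECTRICITY_GBP_PER_KWH = 0.30   # assumed UK unit price

# One hour of the heater, which the MOD equates to one 10-second shot:
energy_kwh = HEATER_POWER_KW * 1.0
cost_gbp = energy_kwh * ELECTRICITY_GBP_PER_KWH

print(f"Energy per 10 s shot: ~{energy_kwh:.1f} kWh")
print(f"Electricity cost per shot: ~£{cost_gbp:.2f}")
```

Under these assumptions the electricity bill per shot is well under a pound, comfortably consistent with the article's "typically less than £10 per shot" figure (which presumably also covers consumables and maintenance).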


Miami Police Used Clearview AI Facial Recognition in Arrest of Homeless Man

Facial recognition technology is increasingly being deployed by police officers across the country, but the scope of its use has been hard to pin down.

In Miami, it’s used for cases big and exceedingly small, as one case Reason recently reviewed showed: Miami police used facial recognition technology to identify a homeless man who refused to give his name to an officer. That man was arrested, but prosecutors quickly dropped the case after determining the officer lacked probable cause for the arrest. 

The case was barely a blip in the daily churn of Miami’s criminal justice system, but it shows the spread of facial recognition technology and the use of retaliatory charges against those who annoy the police.

Lisa Femia, a staff attorney at the Electronic Frontier Foundation (EFF), which advocates for digital privacy rights, calls the case “a particularly egregious example of mission creep with facial recognition technology.”

“It’s often advertised as a way for law enforcement to solve the worst of the worst crimes,” Femia says. “And instead we have law enforcement here using it to harass the homeless.”

According to a police incident report, a man, who Reason is not identifying because he was ultimately not prosecuted, was sleeping on a bench in a parking garage at Miami International Airport on the morning of November 13, 2023, when he was approached by a Miami-Dade County police officer.

“While on routine patrol at the Miami International Airport I observed defendant sleeping on a bench in the Dolphin garage, covered with a blanket and unbagged personal items on airport luggage cart,” the officer wrote in his report. “The bench is provided for passengers waiting for vehicles to and from the airport. It is not designated for housing.”

The report notes that Miami-Dade police have been directed to address homelessness at the airport and that the officer initiated contact to see if the man had been previously issued a trespass warning.

The man didn’t have an ID, and he gave the officer a fake name and 2010 date of birth.

“Defendant was obviously not a 13-year-old juvenile,” the report says. “I provided defendant several opportunities to provide correct information and he refused.”

Under Florida law, police can demand identification from a pedestrian only when there is reasonable suspicion that they have committed a crime. For example, two Florida sheriff’s deputies were disciplined in 2022 after they arrested a legally blind man for refusing to show his ID.

This officer had other means at his disposal, though. “I identified defendant via facial recognition from Clearview, with assistance from C. Perez, analyst at the MDPD real time crime center,” the report says.


Oops: Failed U.S. Moon Lander Now Hurtling Back TOWARDS Earth

The NASA-backed Peregrine One moon lander is now hurtling back towards Earth after last week failing on its historic journey.

Astrobotic, which had been hoping to land the first American-made spacecraft on the Moon in more than 50 years, predicts its spacecraft will likely burn up in the atmosphere in the next few days, the BBC reports.

Experts had been working with NASA and other space companies to find the safest and most responsible way of ending Peregrine’s mission.

“The team is currently assessing options and we will update as soon as we are able,” the company noted in a social media post.


Bill Gates Hopes AI Can Reduce “Polarization,” Save “Democracy,” Ignores Censorship Implications

The notion that whoever controls and shapes AI could wield significant influence over large swathes of society may prove to be one of the most alarming and prominent concerns of the next few years.

In a recent episode of “Unconfuse Me with Bill Gates,” Sam Altman, the CEO of OpenAI, and tech billionaire Bill Gates controversially delved into the potential of artificial intelligence (AI) as a tool for maintaining democracy and promoting world peace.

The discussion was aired on January 11, 2024.

Read the transcript for the episode here.

The conversation explored the idea of using artificial intelligence as an instrument to foster unity in society, enhance global amity, and help overcome geopolitical polarization.

Microsoft, founded by Gates, and OpenAI, whose CEO Altman is working closely with Microsoft, both promote using AI to solve global issues.

Gates spoke excitedly on the topic: “I do think AI, in the best case, can help us with some hard problems…Including ‘polarization’ because potentially that breaks democracy and that would be a super bad thing.”

In addition to resolving polarization, the two heavyweights also discussed the notion of AI potentially acting as a peacemaking tool.


Elizabeth Warren’s Terrible Model for Tech Regulation

The Interstate Commerce Commission (ICC), which existed for about a century before being mercifully put out to pasture in 1995, is one of the best historical examples of how governmental attempts at regulating the economy can backfire.

Created with the stated goal of protecting consumers from the competitive interests of Gilded Age railroad barons, the ICC was quickly captured by the very special interests it sought to control, then helped entrench a railroad cartel. At the height of its powers, the ICC tried to limit the use of trucks for hauling freight (an effort that thankfully failed) and used its influence to have a critic of the railroad monopoly committed to an asylum.

Naturally, some senators see the ICC as the ideal model for a new agency aimed at regulating Big Tech. Bad ideas never seem to truly die in Washington.

While promoting their bipartisan bill to ramp up federal regulation of successful tech companies in The New York Times, Sens. Lindsey Graham (R–S.C.) and Elizabeth Warren (D–Mass.) pointed to the ICC as one model for what they aim to do. “It’s time to rein in Big Tech,” they argued, “and we can’t do it with a law that only nibbles around the edges of the problem.” Warren has also invoked the ICC in posts on X (formerly known as Twitter) and in public comments calling for tighter federal control over companies like Amazon and Facebook.

Indeed, their bill wouldn’t nibble. It would create a new federal commission to regulate online platforms. The Digital Consumer Protection Commission would have concurrent jurisdiction (which really means overlapping and duplicative mandates) with the Federal Communications Commission (FCC) and the Department of Justice. In the senators’ telling, this newfangled ICC would aim to “preserve innovation while minimizing harm presented by emerging industries.”

That’s far from the whole story of the original ICC.
