Israeli authorities are using facial recognition technology to entrench apartheid

The Israeli authorities are using an experimental facial recognition system known as Red Wolf to track Palestinians and automate harsh restrictions on their freedom of movement, Amnesty International said today.  In a new report, Automated Apartheid, the organization documents how Red Wolf is part of an ever-growing surveillance network which is entrenching the Israeli government’s control over Palestinians, and which helps to maintain Israel’s system of apartheid. Red Wolf is deployed at military checkpoints in the city of Hebron in the occupied West Bank, where it scans Palestinians’ faces and adds them to vast surveillance databases without their consent.

Amnesty International also documented how Israel’s use of facial recognition technology against Palestinians in occupied East Jerusalem has increased, especially in the wake of protests and in the areas around illegal settlements. In both Hebron and occupied East Jerusalem, facial recognition technology supports a dense network of Closed-Circuit Television (CCTV) cameras to keep Palestinians under near-constant observation. Automated Apartheid shows how this surveillance is part of a deliberate attempt by Israeli authorities to create a hostile and coercive environment for Palestinians, with the aim of minimizing their presence in strategic areas.

Keep reading

The Dangers of Biometrics: Beyond Fingerprints and Facial Recognition

Biometrics, the science of identifying individuals based on their unique physical and behavioral characteristics, has a long history, but it was not until the late 19th century that Sir Francis Galton established the scientific basis for fingerprint identification.

Over the years, biometrics has evolved from manual methods to sophisticated electronic systems. In the 1960s, the FBI began using computers to store and match fingerprints. The 1970s saw the development of voice recognition systems, and the 1980s brought iris recognition technology. The advent of digital cameras in the 1990s paved the way for facial recognition systems.

Biometrics has become integral to various applications, from securing smartphones to controlling access to high-security facilities. Fingerprint scanners, for instance, are now standard on most smartphones, allowing users to unlock their devices with just a touch. Airports and border control increasingly adopt facial recognition technology to verify travelers’ identities. In other areas, such as India’s Aadhaar program, iris scanners are used for national identification. Meanwhile, wearables and smart home devices continuously collect data from their users’ daily activities. In some cases, individuals willingly hand over their sensitive data, as seen with 23andMe, a company facing financial difficulties and considering selling the DNA data of its 15 million users.

However, the widespread use of biometrics also raises significant privacy concerns. Unlike passwords or other credentials, biometric data such as DNA is immutable—you can’t change it once it’s compromised. This permanence fuels growing fears about the security of biometric databases, which present attractive targets for threat actors seeking access to sensitive personal data.

Keep reading

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars

At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.

Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America. One photo, taken in November 2023, shows a partially torn bumper sticker supporting the Obama-Biden ticket.

These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations. Newly obtained data reviewed by WIRED shows how a tool originally intended for traffic enforcement has evolved into a system capable of monitoring speech protected by the US Constitution.

Keep reading

Klaus Schwab Announces ‘Collaboration for the Intelligent Age’ will be Theme of Next WEF Meeting in Davos

World Economic Forum (WEF) founder Klaus Schwab has announced that the theme for next year’s Annual Meeting in Davos will be “Collaboration for the Intelligent Age.”

Schwab made the announcement on the WEF Agenda blog on September 24, where he also declared, “We have already crossed the threshold into the Intelligent Age. It is up to us to determine whether it will lead to a future of greater equality, sustainability and collaboration — or if it will deepen divides that already exist.”

Schwab may declare that we are now in the Intelligent Age, but what type of intelligence is he actually talking about?

As we shall see, this Intelligent Age is more about the dumbing-down of humanity and the rise of smart technologies for mass surveillance and censorship that limit our decision-making capabilities.

“The Intelligent Age is also transforming how we live. Cities are becoming smarter, with sensors and AI managing everything from traffic flow to energy usage. These smart cities, and the smart homes within them, are not just more efficient, they are designed to be more sustainable, reducing carbon emissions and improving quality of life”

Klaus Schwab, “The Intelligent Age: A time for cooperation,” September 2024

In his latest post on the WEF Agenda blog, Schwab lists several examples of how AI and automation are outperforming human capabilities.

Keep reading

AI’s Ominous Split Away From Human Thinking

AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

Remember DeepMind’s AlphaGo? It represented a fundamental breakthrough in AI development, because it was one of the first game-playing AIs built not on human instruction and hand-coded rules, but on learning for itself.

Instead, it used a technique called self-play reinforcement learning (RL) to build up its own understanding of the game. Pure trial and error across millions, even billions of virtual games, starting out more or less randomly pulling whatever levers were available, and attempting to learn from the results.
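The trial-and-error loop described above can be sketched in a few lines. The example below is a toy illustration only, not DeepMind’s actual system: it applies tabular self-play learning to the simple game of Nim (each player removes 1–3 stones from a heap; whoever takes the last stone wins), with a learning rate and exploration rate chosen arbitrarily.

```python
import random
from collections import defaultdict

# Toy sketch of self-play reinforcement learning (illustrative only).
# Two copies of the same agent play Nim against each other; no strategy
# is programmed in beyond the list of legal moves.

HEAP, MOVES = 10, (1, 2, 3)
ALPHA, EPSILON = 0.1, 0.1      # learning rate / exploration rate (arbitrary)
Q = defaultdict(float)         # Q[(stones_left, move)] -> estimated value

def choose(stones, explore=True):
    """Pick a move: mostly greedy, occasionally random (trial and error)."""
    legal = [m for m in MOVES if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

def play_episode():
    """One full self-play game; learn from the final result only."""
    stones, history = HEAP, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone wins (+1). Walk the game
    # backwards, flipping the sign each ply to credit the right player.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

random.seed(0)
for _ in range(50_000):
    play_episode()

# With no rules or human guidance, the agent tends to rediscover the
# known winning strategy: leave your opponent a multiple of four stones.
print(choose(10, explore=False))
```

AlphaGo’s networks and search are vastly more sophisticated, but the core idea is the same: the value of a move is estimated purely from the outcomes of games the system plays against itself.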

Within two years of the start of the project in 2014, AlphaGo had beaten the European Go champion 5-0 – and by 2017 it had defeated the world’s #1 ranked human player.

At this point, DeepMind unleashed a similar AlphaZero model on the chess world, where engines like Deep Blue, built on human thinking, knowledge and hand-crafted rule sets, had been beating human grandmasters since the 90s. AlphaZero played 100 matches against the reigning AI champion, Stockfish, winning 28 and drawing the rest.

Keep reading

Call Of Duty Comes To Life: Armed Robo-Dogs, Hypersonic Missiles, & Kamikaze Drones Deployed On Modern Battlefields

The Middle East is on the brink of a regional war as the world awaits Israel’s retaliation strike against Iran. President Biden, on Thursday morning, told reporters he was in talks with Israel about possibly striking Iran’s oil facilities. He said, “We’re discussing that.” 

It’s really not hard to imagine that if the conflict broadens into a regional shitstorm, the modern battlefield would look like the Call of Duty: Modern Warfare video game. Just this week, Iran launched waves of ballistic missiles, including hypersonic ones.

Iran-backed terror organizations around Israel have recently launched countless loitering-munition, or “kamikaze drone,” attacks on the country and on commercial shipping in the maritime chokepoint of the Southern Red Sea.

The newest tech entering the battlefield, already present in Eastern Europe but now being trialed in the Middle East, is armed robot dogs equipped with artificial intelligence, high-tech sensors, and rifles.

Keep reading

Minnesota ‘Acting as a Ministry of Truth’ With Anti-Deep Fake Law, Says Lawsuit

A new lawsuit takes aim at a Minnesota law banning the “use of deep fake technology to influence an election.” The measure—enacted in 2023 and amended this year—makes it a crime to share AI-generated content if a person “knows or acts with reckless disregard about whether the item being disseminated is a deep fake” and the sharing meets three conditions: it is done without the depicted individual’s consent; it is intended to “injure a candidate or influence the result of an election”; and it occurs either within 90 days before a political party nominating convention or after the start of the absentee voting period prior to a presidential nomination primary, any state or local primary, or a general election.

Christopher Kohls, a content creator who goes by Mr. Reagan, and Minnesota state Rep. Mary Franson (R–District 12B) argue that the law is an “impermissible and unreasonable restriction of protected speech.”

Violating Minnesota’s deep fake law is punishable by up to 90 days imprisonment and/or a fine of up to $1,000, with penalties increasing if the offender has a prior conviction within the past five years for the same thing or the deep fake is determined to have been shared with an “intent to cause violence or bodily harm.” The law also allows for the Minnesota attorney general, county or city attorneys, individuals depicted in the deep fake, or any candidate “who is injured or likely to be injured by dissemination” to sue for injunctive relief “against any person who is reasonably believed to be about to violate or who is in the course of violating” the law.

If a candidate for office is found guilty of violating this law, they must forfeit the nomination or office and are henceforth disqualified “from being appointed to that office or any other office for which the legislature may establish qualifications.”

There are obviously a host of constitutional problems with this measure, which defines “deep fake” very broadly: “any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof” that is realistic enough for a reasonable person to believe it depicts speech or conduct that did not occur and developed through “technical means” rather than “the ability of another individual to physically or verbally impersonate such individual.”

Keep reading

Judge blocks California deepfakes law that sparked Musk-Newsom row

A federal judge on Wednesday blocked a California measure restricting the use of digitally altered political “deepfakes” just two weeks after Gov. Gavin Newsom signed the bill into law.

The ruling is a blow to a push by the state’s leading Democrats to rein in misleading content on social media ahead of Election Day.

Chris Kohls, known as “Mr Reagan” on X, sued to prevent the state from enforcing the law after posting an AI-generated video of a Harris campaign ad on the social media site. He claimed the video was protected by the First Amendment because it was a parody.

The judge agreed.

“Most of [the law] acts as a hammer instead of a scalpel,” Senior U.S. District Judge John A. Mendez wrote, calling it “a blunt tool hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.” He carved out an exception for a “not unduly burdensome” portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.

Theodore Frank, an attorney for Kohls, said in a statement they were “gratified that the district court agreed with our analysis.”

Keep reading

This fungus grows more vigorously when it feels good vibes

Blasting your favorite playlist can energize your workout. The same is true of fungus—although most people might find its tastes in tunes a bit strange. Fungal soil microbes may get a boost of energy from white noise, according to new research that found the microbes exposed to a particular sound frequency in the lab grew faster. Scientists say they hope the findings, out today in Biology Letters, could lead to sonic techniques that spur the growth of microbes that play critical supportive roles in plant microbiomes, helping rejuvenate stressed ecosystems.

“As humans, we think of sound as an airborne stimulus that we hear,” says Richard Hofstetter, a forest entomologist at Northern Arizona University who was not involved with the study. Other animals respond to sound, too. But even plants and single-celled organisms that can’t “hear” can feel the vibrations. “They don’t have ears or nerves,” he says, but they seem to respond to the mechanical energy that constitutes sound. “It’s an energy similar to light,” he says.

Hofstetter’s research has shown a mold called Botrytis cinerea, which grows on fruit including strawberries, gets a growth boost from the acoustic vibrations of refrigerators. Sound has also been shown to boost the growth of Escherichia coli. Both these studies used frequencies of a few thousand hertz (Hz), a high-pitched humming sound the microbes seemed to dig. Other work has shown leaf-dwelling microbes that produce desirable flavor compounds in wine made from Syrah grapes respond to music from the Baroque and early Classical eras.

Keep reading

NATO takes the plunge into the world of venture capital

The NATO Innovation Fund, the “world’s first multi-sovereign venture capital fund,” made its first investments earlier this summer in deep tech companies including British aerospace manufacturing company Space Forge and AI companies ARX Robotics and Fractile.

Modeled on the U.S. intelligence community’s venture capital arm IQT (In-Q-Tel), the fund intends to focus on spurring innovation in areas including biotechnology, AI, space tech, and advanced communications.

As NATO Innovation Fund Board Chairs Klaus Hommels and Fiona Murray described the project’s purview in Fortune in July: “By investing in and adopting emerging dual-use technologies, NATO can leverage the private sector’s innovation power and its transatlantic talent pool, while countering our strategic competitors’ influence and ambitions.”

Keep reading