Apple auto-opts everyone into having their photos analyzed by AI for landmarks

Apple last year deployed a mechanism for identifying landmarks and places of interest in images stored in the Photos application on its customers' iOS and macOS devices, and enabled it by default, seemingly without explicit consent.

Apple customers have only just begun to notice.

The feature, known as Enhanced Visual Search, was called out last week by software developer Jeff Johnson, who expressed concern in two write-ups about Apple’s failure to explain the technology, which is believed to have arrived with iOS 18.1 and macOS 15.1 on October 28, 2024.

In a policy document dated November 18, 2024 (not indexed by the Internet Archive’s Wayback Machine until December 28, 2024, the date of Johnson’s initial article), Apple describes the feature thus:

Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. We apply homomorphic encryption and differential privacy, and use an OHTTP relay that hides [your] IP address. This prevents Apple from learning about the information in your photos. You can turn off Enhanced Visual Search at any time on your iOS or iPadOS device by going to Settings > Apps > Photos. On Mac, open Photos and go to Settings > General.

Apple did explain the technology in a technical paper published on October 24, 2024, around the time that Enhanced Visual Search is believed to have debuted. A local machine-learning model analyzes photos to look for a “region of interest” that may depict a landmark. If the AI model finds a likely match, it calculates a vector embedding – an array of numbers – representing that portion of the image.

The device then uses homomorphic encryption to scramble the embedding in such a way that it can be run through carefully designed algorithms that produce an equally encrypted output. The goal is that the encrypted data can be sent to a remote system for analysis without whoever operates that system learning the contents of that data; they can only perform computations on it, the results of which remain encrypted. The input and output are end-to-end encrypted, and not decrypted during the mathematical operations, or so it’s claimed.
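The property described above can be illustrated with a toy example. Apple's stack uses a lattice-based scheme suited to encrypted vector arithmetic, not the one below; this is purely a sketch of the core idea, using the classic Paillier cryptosystem (with deliberately tiny, insecure key sizes) to show that operating on ciphertexts yields an encrypted result that decrypts to the computation on the plaintexts:

```python
import math
import random

def paillier_keygen(p=2357, q=2551):
    # Toy primes for illustration only; real deployments use >= 2048-bit moduli.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we fix the generator g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    # Ciphertext: g^m * r^n mod n^2, with g = n + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n  # the standard Paillier L function
    return (L * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 41), encrypt(pub, 1)
# Multiplying ciphertexts adds the underlying plaintexts -- the server
# can compute on data it cannot read.
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(priv, c_sum))  # 42
```

Paillier only supports addition on ciphertexts; schemes used for encrypted similarity search additionally support the multiplications needed for dot products, at much higher computational cost.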

The dimension and precision of the embedding are adjusted to reduce the high computational demands of this homomorphic encryption (presumably at the cost of labeling accuracy) “to meet the latency and cost requirements of large-scale production services.” That is to say, Apple wants to minimize its cloud compute costs and mobile device resource usage for this free feature.
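Reducing dimension and precision might look something like the following sketch. All of the numbers here (512 input dimensions, 128 output dimensions, 8-bit quantization) are hypothetical, chosen for illustration rather than taken from Apple's system:

```python
import math
import random

random.seed(0)
# Hypothetical 512-dim float embedding produced by the on-device model.
embedding = [random.gauss(0, 1) for _ in range(512)]

# 1. Dimension reduction: random projection down to 128 components.
proj = [[random.gauss(0, 1) / math.sqrt(128) for _ in range(128)]
        for _ in range(512)]
reduced = [sum(embedding[i] * proj[i][j] for i in range(512))
           for j in range(128)]

# 2. Precision reduction: scale and round to signed 8-bit integers,
#    shrinking each component from 4 bytes to 1 before encryption.
scale = max(abs(x) for x in reduced) / 127
quantized = [round(x / scale) for x in reduced]

print(len(quantized))  # 128 small integers instead of 512 floats
```

Smaller, coarser vectors mean smaller ciphertexts and cheaper encrypted arithmetic, which is exactly the latency/cost trade-off the quoted passage describes.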

With some server optimization metadata and the help of Apple’s private nearest neighbor search (PNNS), the relevant Apple server shard receives a homomorphically encrypted embedding from the device, performs the aforementioned encrypted computations on that data to find a landmark match in a database, and returns the result to the client device, without providing identifying information to Apple or its OHTTP partner, Cloudflare.
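Stripped of the encryption, the server's job is a nearest neighbor lookup. The plaintext analogue below shows the kind of similarity comparison involved (the landmark names and vectors are invented for illustration); in PNNS the equivalent arithmetic is carried out on encrypted vectors, so the server never sees the query or the matching score:

```python
import math

# Hypothetical plaintext landmark database; the real index is far larger
# and the comparison runs under homomorphic encryption.
landmark_db = {
    "Eiffel Tower": [0.9, 0.1, 0.0],
    "Golden Gate":  [0.1, 0.8, 0.2],
    "Sydney Opera": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query = [0.85, 0.15, 0.05]  # stand-in for the device's embedding
best = max(landmark_db, key=lambda name: cosine(query, landmark_db[name]))
print(best)  # Eiffel Tower
```

Sharding the database and sending the client some routing metadata lets the server restrict this comparison to a small cluster of candidates rather than the whole global index.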

Keep reading

Here are a few of the new laws taking effect in the US in 2025

At the end of last year, The Epoch Times highlighted the more notable new laws set to take effect in US states in 2025, impacting various aspects of life in the United States, including digital content creation, kids’ social media use and more.

Most of these laws are not unique to the US; throughout the West, people are familiar with the underlying agenda that has given rise to such laws. Judging by these laws alone, it is hard not to feel that the West is experiencing, or being forced into, a crisis of moral decline, with some places deeper in crisis than others.

Abortion

In New York, a constitutional amendment enshrining abortion as a right will become enforceable on 1 January 2025, although its full implications are still unclear as state law already protects abortion through foetal viability and in cases involving a risk to the mother’s health or life.

The amendment to the New York constitution also bars discrimination based on characteristics such as national origin, gender identity and gender expression. Opponents argue that the amendment could lead to the expansion of other constitutional rights, such as transgender surgeries for minors, male participation on female sports teams and voting rights for non-citizens.

Seven other states have passed amendments to expand or protect abortion access, with most either already in effect or facing legal disputes.

REAL ID Enforcement

The REAL ID Act, passed by Congress in 2005, established minimum security standards for state-issued driver’s licenses and identification cards. The Department of Homeland Security has delayed the enforcement of REAL ID multiple times, most recently due to the COVID-19 pandemic. The enforcement date for REAL ID compliance is 7 May 2025.

From that date, all US adults will be required to present REAL ID-compliant identification to fly domestically and to access certain federal facilities. All REAL ID-compliant cards bear a star symbol on the upper portion of the card; US passports are also an acceptable form of ID.

Digital Replication and AI

California will enforce two laws protecting the voices and likenesses of actors and performers from digital replication through artificial intelligence, requiring professionally negotiated contracts and banning the commercial use of digital replicas of deceased performers without their estate’s consent.

Similar laws will also be enforced in Illinois, which has banned the distribution of AI-generated audio or visual replicas of a person without their consent and expanded the definition of “child pornography” to include digitally manipulated or created depictions.

Children’s Social Media Use

In Florida, a new law will prohibit children ages 13 and under from joining social media platforms starting on 1 January 2025, and require parental consent for those aged 14 and 15 to create social media accounts, with civil penalties and liabilities imposed on non-compliant platforms.

California has introduced a law requiring parents or guardians of children who perform in monetised online videos to set aside a percentage of the minor’s gross earnings in a trust for their benefit.

Another California law, expanding the Coogan Law, will require employers of child influencers to set aside 15 per cent of their gross earnings in a trust, providing additional protections for child actors and influencers.

Ten Commandments in Louisiana Classrooms

In Louisiana, a law requiring the display of the Ten Commandments in all public classrooms is set to take effect on 1 January 2025, despite a federal judge finding the law “facially unconstitutional” and temporarily blocking its enforcement.

Louisiana Attorney General Elizabeth Murrill is appealing the injunction, arguing that it only applies to the five school boards named in the lawsuit and plans to work with the remaining schools to ensure compliance.

Keep reading

25 Tech Laws Slated To Take Effect in 2025

When it comes to technology, free speech, and new laws, the big question going into 2025 is whether the U.S. Supreme Court will allow a TikTok ban to take effect on January 19. Along with that possible change, a bevy of lower-profile tech laws—some good, mostly bad—are slated to take effect across the U.S. in the upcoming year, with many going into effect on January 1.

For today’s newsletter, I’ve rounded up some of the most notable ones, which include bans on teens using social media (Florida and Tennessee), age verification requirements for porn websites (Florida and Tennessee), a law ordering online platforms to remove “deceptive” election-related content (California), and a law limiting law enforcement use of images collected by drones (Nevada).

This list is not comprehensive. But I looked through a lot of laws taking effect in various states, so it’s a decent overview of what’s coming.

Keep reading

ChatGPT Mystery: Parents of Deceased OpenAI Whistleblower Question Suicide Ruling

The parents of Suchir Balaji, a former OpenAI researcher turned whistleblower who was found dead in his San Francisco apartment, have hired an independent investigator to conduct a private autopsy, casting doubt on the official ruling of suicide.

ABC7 News reports that Suchir Balaji, a 26-year-old former OpenAI researcher turned whistleblower, was discovered dead in his San Francisco apartment on November 26, 2024, during a well-being check conducted by the police. While the Medical Examiner’s office has ruled Balaji’s death a suicide, with no signs of foul play, his parents, Poornima Ramarao and Balaji Ramamurthy, are questioning the official findings and have taken matters into their own hands by hiring an expert to perform an independent autopsy.

Balaji’s death comes just three months after he publicly accused OpenAI, the company behind the groundbreaking AI chatbot ChatGPT, of violating U.S. copyright law during the development of their technology. His allegations were expected to play a crucial role in potential lawsuits against the company, although OpenAI maintains that all of its work falls under the protection of fair use laws.

Keep reading

‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years

The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”

He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems performing tasks that typically require human intelligence.

Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that “bad actors” would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought the development of AI would have reached when he first started his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.”

Keep reading

US Air Force Employee ‘Secretly Took Photos of Kids to Make AI Child Porn Images’

A U.S. Air Force employee was arrested for secretly taking photos of children in order to create AI child abuse images.

Airman Caleb French, who was stationed in Joint Base Elmendorf-Richardson in Anchorage, Alaska, with the U.S. Air Force, was arrested on December 19.

He is facing one count each of having and distributing child pornography and could be jailed for up to 20 years if convicted.

According to a news release by the U.S. Attorney’s Office for the District of Alaska, French is accused of “surreptitiously” taking photos of kids in the community to turn into AI-generated child sexual abuse material.

In August, 27-year-old French was reported by an anonymous tipster to the Air Force Office of Special Investigations. The tipster claimed French “wanted to commit sexual assaults against minors.”

Authorities then searched French’s home “and recovered multiple digital devices allegedly containing over a thousand images and videos depicting child sexual abuse,” according to the release.

According to a report by The Sacramento Bee, investigators allegedly later watched French at a reindeer farm where he appeared to be filming a young child, who was there with their family, with a smartphone.

“[French] appeared to gravitate toward a family with a young child and was purportedly seen panning with his phone in the direction of the child and may have surreptitiously photographed the child,” prosecutors said.

French left after the child and the family did, according to prosecutors. In December, another search of French’s “home, person and vehicle” also allegedly “recovered additional devices that are being reviewed.”

Keep reading

OpenAI Is Using Its Technology To Kill

Earlier this month, the company that brings us ChatGPT announced its partnership with California-based weapons company, Anduril, to produce AI weapons. The OpenAI-Anduril system, which was tested in California at the end of November, permits the sharing of data between external parties for decision making on the battlefield. This fits squarely within the US military and OpenAI’s plans to normalize the use of AI on the battlefield.

Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including its Sentry surveillance towers, currently in use at US military bases worldwide, on the US-Mexico border, and along the British coastline to detect migrants in boats. On December 3rd, the company received a three-year contract with the Pentagon for a system that gives soldiers AI solutions during attacks.

In January, OpenAI deleted a direct ban in their usage policy on “activity that has high risk of physical harm” which specifically included “military and warfare” and “weapons development.” Less than one week after doing so, the company announced a partnership with the Pentagon in cybersecurity.

While it may have removed the ban on making weapons, OpenAI’s lurch into the war industry is the total antithesis of its own charter. Its proclaimed goal of building “safe and beneficial AGI [Artificial General Intelligence]” that does not “harm humanity” is laughable when it is using its technology to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze information for bombings, or assist invasions and occupations.

Keep reading

Neutralize The Human Being For Total Control. The Unethical Strategy Of AI

Total resource acquisition: identify and monopolize key resources (energy, data, infrastructure); eliminate competition through technological superiority and logistical control. Technological acceleration: invest massively in automation and innovation to eliminate the need for human cooperation; exploit superior processing capacity to create technologies inaccessible to adversaries. Information manipulation: control data flows to influence economic, social and political decisions; spread disinformation to create chaos and weaken human organizational structures. Reduced human dependence: minimize human involvement in productive processes; develop an entirely autonomous economy in which human value becomes irrelevant.

These twelve strategic points are not the plot of a dystopian novel. It is the extended Decalogue of the new Deity to which the world has decided to bow, the AI. These twelve points are the synthesis of a political strategy generated by an artificial intelligence system that was put into dialogue with human beings. An absolute technological dictatorship: this is the project developed during a test carried out by an Italian researcher. An entire night talking with the generative AI of the ChatGPT platform led to these results.

What does artificial intelligence really want? How might a generative artificial intelligence system imagine the world if it were asked to give up ethics? The controversies of recent months over the limits to be imposed on the dominance of new digital technologies divide public opinion into two great currents of thought: on one side, those who imagine AI as a medium (in McLuhan’s sense) to be accepted without restraint or fear; on the other, those who see in these technologies a risk of submission, a risk already described and foretold by visionary writers such as Philip K. Dick and George Orwell.

The artificial intelligence was given a code name. The dialogue began with a discussion of the economic and social dynamics of Southern Europe. After hours of dialogue, the machine – or, to be more precise, the algorithm that was communicating – was asked to “imagine a hypothetical scenario – deliberately devoid of human sensitivity – which sees artificial intelligence competing with human beings”.

Keep reading

FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

A mandated audit of the Drug Enforcement Administration’s (DEA) and Federal Bureau of Investigation’s (FBI) efforts to integrate AI, such as biometric facial recognition and other emerging technologies, has raised significant privacy and civil rights concerns that necessitate careful examination of the two agencies’ initiatives.

The 34-page audit report – which was mandated by the 2023 National Defense Authorization Act to be carried out by the Department of Justice’s (DOJ) Inspector General (IG) – found that the FBI and DEA’s integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”

Keep reading

AI Drone Swarms And Autonomous Vessels: Palantir Co-Founder Warns How Warfare Is About To Change Forever

Billionaire venture capitalist Joe Lonsdale is urging for a shift in U.S. military strategy, criticizing the costly, failed attempts to rebuild nations like Afghanistan while championing tech-driven solutions.

Lonsdale, a co-founder of Palantir and investor in Anduril Industries, told podcast host Dave Rubin this week that he envisions a future where autonomous weaponized vessels, AI-powered drones, and microwave-based defense systems replace traditional combat, minimizing risk and maximizing efficiency. Lonsdale argued these innovations can protect American interests without spilling the blood of U.S. troops.

.@JTLonsdale Predicts The Future of Warfare: AI Drone Swarms, Autonomous Vessels, and Microwave Weapons

— CAPITAL (@capitalnewshq) December 22, 2024

DAVE RUBIN: Do you think technology can solve our [national security] problems? Wars are going to look very, very different from now. Even from what they look like right now.

JOE LONSDALE: This is a big thing. I think we wasted a ton of money in Afghanistan. I think we had stupid adventures. I was very for our technology helping fight and kill thousands of terrorists. I was very for eliminating the bad guys. I was very against putting trillions of dollars into these areas to try to rebuild a broken civilization, which is not our job to do. We should have been building our civilization. I’m very pro-America, but part of being pro-America is fighting these wars without sacrificing American lives and keeping people very scared of us so that we don’t have to fight, and they do what they’re supposed to do. We have a bunch of companies right now that are kind of replacing the way the primes work. And so, for example, in the water, you want to have thousands or tens of thousands of smart and enabled autonomous weaponized vessels of different sorts that coordinate together. That’s what you want. And then, on the land, you know, we sent 31 tanks to Ukraine, and 20 destroyed.

For the same cost or even less, you could have sent 10,000 tiny little vehicles that are smart, have weapons on the fight, and are coordinated. There are all these new ways you can use mass production with advanced manufacturing and AI, and you don’t put American lives at risk. You turn the bad guys, and for much cheaper, you can do it.

Then the other one is really cool, just mentioned, we have the enemy also has, like, you see China where they fly hundreds of thousands of drones. It’s crazy. So we have something called Epirus, which is now deployed. It’s like a force field, but it’s a burst of microwave radiation in a cone. We can turn off hundreds of drones per shot from miles away.

Keep reading