‘Over 800 Biases Uncovered’ As Pentagon Ends AI Chatbot Pilot Program For Military Medicine

The US Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) has concluded a pilot program focused on using AI chatbots in military medical services.  

In a Jan. 2 announcement, the DoD said the Crowdsourced AI Red-Teaming (CAIRT) Assurance Program pilot focused on using large language models (LLMs) for clinical note summarization and as medical advisers in the military.

The announcement comes as more AI firms have begun offering their products to the US military and defense contractors to explore their usefulness in military applications.

CoinTelegraph’s Stephen Katte reports that, according to the DoD, the pilot was a red-teaming effort conducted by technology nonprofit Humane Intelligence.

It attracted over 200 independent external participants, including clinical providers and healthcare analysts, who compared three prominent chatbot models.

Analysts from the Defense Health Agency and the Uniformed Services University of the Health Sciences also collaborated with the other participants, testing for potential system weaknesses and flaws while the chatbots were used.

According to the DoD, the pilot surfaced hundreds of potential issues with using chatbots in military medical applications:

“The exercise uncovered over 800 findings of potential vulnerabilities and biases related to employing these capabilities in these prospective use cases.”

“This exercise will result in repeatable and scalable output via the development of benchmark data sets, which can be used to evaluate future vendors and tools for alignment with performance expectations,” the DoD said.

Matthew Johnson, the Chief Digital and Artificial Intelligence Office’s lead for the initiative, said the results will also be used to shape future DoD research and development of Generative AI (GenAI) systems that may eventually be deployed.

Keep reading

Apple Reaches $95M Settlement Over Lawsuit Accusing ‘Siri’ Of Eavesdropping On Consumers

Apple has agreed to pay $95 million to settle a lawsuit accusing the company of violating its users’ privacy by using “Siri,” Apple’s artificial intelligence (AI) assistant, to eavesdrop on people with Apple devices.

The agreed-upon settlement, filed on December 13, 2024, in Oakland, California, is currently awaiting approval by a U.S. district judge.

The five-year-old lawsuit alleged that Apple had activated Siri without users’ knowledge “for over a decade.” The suit further claimed that Apple recorded conversations unbeknownst to the phone’s owner and shared them, along with certain keywords, with advertisers in order to push products and services.

Apple has long marketed itself as a “pioneer” in protecting its consumers’ privacy. However, users have also long suspected that their devices are listening to them, as ads for specific products or services have appeared in social media apps after they merely discussed related topics or figures out loud.

Two plaintiffs in the suit recall that after merely mentioning Air Jordan shoes, their iPhones began showing them advertising for the shoes more often. Another noted that after discussing a specific surgical treatment with his doctor, he began receiving medical ads related to that treatment.

The claims fly in the face of Apple CEO Tim Cook’s claim that the right to privacy is a “fundamental human right.”

If the district judge approves the settlement, tens of millions of Apple consumers who owned devices from September 17, 2014, onward would be able to file claims and receive up to $20 per device, depending on the volume of claims, according to court documents.

Keep reading

“TIME TO WAKE UP!”: ‘Cybertruck Bomber’ Had Several Manifestos, Warned Of Drones Using ‘Gravatic Propulsion’

The details surrounding Cybertruck bomber Matthew Livelsberger continue to drip out, with two ‘manifesto’ letters found in the Cybertruck itself and another that he emailed to former Navy SEAL Sam Shoemate.

In the letters found in the Cybertruck, Livelsberger described the USA as “terminally ill,” and said his actions were meant as a “wake-up call,” and not a terrorist attack.

The most intriguing, however, is the email he sent to Shoemate – in which he warns that the “drones” seen around the United States over the last month are using “gravatic propulsion systems,” which only China and the United States possess.

Shoemate shared the email on Shawn Ryan’s show, writing on X, “I knew taking this public would insert me into the ‘glowy boi’ conspiracy cycle, especially since I’m an intelligence officer,” adding, “I had no choice. Dude dumped it in my inbox. When I saw his name in the news, I had little choice but to hand it over to the feds. I knew the FBI wouldn’t release it, or at least without an agenda attached, so I took it to Shawn Ryan because he has the platform to handle the magnitude of this information and will do so as objectively as possible.”

According to Livelsberger’s email:

“China has been launching them from the Atlantic from submarines for years, but this activity recently has picked up. As of now, it is just a show of force and they are using it similar to how they used the blloon for a sigint and isr, which are also part of the integrated comms system,” he writes.

“…they are the most dangerous threat to national security that has ever existed. They basically have an unlimited payload capacity and can park over the WH if they wanted. It’s checkmate.”

Keep reading

Apple auto-opts everyone into having their photos analyzed by AI for landmarks

Apple last year deployed a mechanism for identifying landmarks and places of interest in images stored in the Photos application on its customers’ iOS and macOS devices, and enabled it by default, seemingly without explicit consent.

Apple customers have only just begun to notice.

The feature, known as Enhanced Visual Search, was called out last week by software developer Jeff Johnson, who expressed concern in two write-ups about Apple’s failure to explain the technology, which is believed to have arrived with iOS 18.1 and macOS 15.1 on October 28, 2024.

In a policy document dated November 18, 2024 (not indexed by the Internet Archive’s Wayback Machine until December 28, 2024, the date of Johnson’s initial article), Apple describes the feature thus:

Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. We apply homomorphic encryption and differential privacy, and use an OHTTP relay that hides [your] IP address. This prevents Apple from learning about the information in your photos. You can turn off Enhanced Visual Search at any time on your iOS or iPadOS device by going to Settings > Apps > Photos. On Mac, open Photos and go to Settings > General.

Apple did explain the technology in a technical paper published on October 24, 2024, around the time that Enhanced Visual Search is believed to have debuted. A local machine-learning model analyzes photos to look for a “region of interest” that may depict a landmark. If the AI model finds a likely match, it calculates a vector embedding – an array of numbers – representing that portion of the image.

The device then uses homomorphic encryption to scramble the embedding in such a way that it can be run through carefully designed algorithms that produce an equally encrypted output. The goal is that the encrypted data can be sent to a remote system for analysis without whoever operates that system learning the contents of that data; the operator can only perform computations on it, the results of which remain encrypted. The input and output are end-to-end encrypted, and not decrypted during the mathematical operations, or so it’s claimed.
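
To make that idea concrete, below is a minimal, purely illustrative Python sketch of an additively homomorphic (Paillier-style) scheme: a party holding only the ciphertexts can combine them, and the result decrypts to the sum of the hidden values. This is a toy with a hopelessly insecure key size, and it is not the scheme Apple describes; the function names and parameters are invented for illustration only.

```python
# Toy additively homomorphic (Paillier-style) encryption: the "server" can
# add numbers it cannot read. Illustrative only -- insecure key size, and
# not the scheme Apple actually uses.
import math
import random

def keygen(p: int = 10007, q: int = 10009):
    """Generate a toy Paillier keypair from two small primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael's function for n = p*q
    mu = pow(lam, -1, n)           # valid because we fix g = n + 1
    return (n,), (lam, mu, n)      # (public key), (private key)

def encrypt(pub, m: int) -> int:
    (n,) = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = (n+1)^m * r^n  mod n^2
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# Whoever holds only c1 and c2 can multiply them; underneath, the plaintexts add.
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == 42  # 12 + 30, computed without ever decrypting
```

The same principle, in a far more elaborate form, is what lets a server compute similarity scores over an embedding it never sees in the clear.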

The dimension and precision of the embedding are adjusted to reduce the high computational demands of this homomorphic encryption (presumably at the cost of labeling accuracy) “to meet the latency and cost requirements of large-scale production services.” That is to say, Apple wants to minimize its cloud compute cost and mobile device resource usage for this free feature.

With some server optimization metadata and the help of Apple’s private nearest neighbor search (PNNS), the relevant Apple server shard receives a homomorphically encrypted embedding from the device and performs the aforementioned encrypted computations on that data to find a landmark match from a database, returning the result to the client device without providing identifying information to Apple or its OHTTP partner, Cloudflare.
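
Setting the encryption aside, the matching step itself amounts to a nearest-neighbor search over embedding vectors. The sketch below, using made-up landmark names and random vectors, shows the plaintext equivalent of what the server computes; in Apple's design the query embedding arrives encrypted and the comparison is done homomorphically, so the server never sees the query or the winning match.

```python
# Plaintext analogue of the landmark-matching step: nearest-neighbor search
# over embedding vectors. Names and vectors are invented for illustration;
# in the real pipeline the query embedding stays encrypted end to end.
import numpy as np

rng = np.random.default_rng(0)
landmark_names = ["Golden Gate Bridge", "Eiffel Tower", "Sydney Opera House"]
landmark_db = rng.normal(size=(3, 128))                    # toy server-side index
landmark_db /= np.linalg.norm(landmark_db, axis=1, keepdims=True)

# The device's local model would produce this from a detected "region of interest".
query = landmark_db[1] + 0.05 * rng.normal(size=128)       # a noisy view of one landmark
query /= np.linalg.norm(query)

scores = landmark_db @ query                               # cosine similarities
best = int(np.argmax(scores))
print(landmark_names[best], round(float(scores[best]), 3)) # expected: "Eiffel Tower"
```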

Keep reading

Here are a few of the new laws taking effect in the US in 2025

At the end of last year, The Epoch Times highlighted the more notable new laws set to take effect in US states in 2025, impacting various aspects of life in the United States, including digital content creation, kids’ social media use and more.

Most of these laws are not unique to the US; throughout the West, people are familiar with the underlying agenda that has given rise to them. Judging by these laws alone, it is hard not to feel that the West is experiencing, or being forced into, a crisis of moral decline, with some places more affected than others.

Abortion

In New York, a constitutional amendment enshrining abortion as a right will become enforceable on 1 January 2025, although its full implications are still unclear as state law already protects abortion through foetal viability and in cases involving a risk to the mother’s health or life.

The amendment to the New York constitution also bars discrimination based on characteristics such as national origin, gender identity and gender expression.  Opponents argue that the amendment could lead to the expansion of other constitutional rights such as transgender surgeries for minors, male participation on female sports teams and voting rights for non-citizens.

Seven other states have passed amendments to expand or protect abortion access, with most either already in effect or facing legal disputes.

REAL ID Enforcement

The REAL ID Act, passed by Congress in 2005, established minimum security standards for state-issued driver’s licenses and identification cards. The Department of Homeland Security has delayed the enforcement of REAL ID multiple times due to the covid-19 pandemic. The enforcement date for REAL ID compliance is 7 May 2025.

From that date, all US adults will be required to present REAL ID-compliant identification to fly domestically and access certain federal facilities.  All REAL ID-compliant cards will have a star symbol on the upper portion of the card, with US passports also being an acceptable form of ID.

Digital Replication and AI

California will enforce two laws protecting the voices and likenesses of actors and performers from digital replication through artificial intelligence, requiring professionally negotiated contracts and banning the commercial use of digital replicas of deceased performers without their estate’s consent.

Similar laws will also be enforced in Illinois, which has banned the distribution of AI-generated audio or visual replicas of a person without their consent and expanded the definition of “child pornography” to include digitally manipulated or created depictions.

Children’s Social Media Use

In Florida, a new law will prohibit children ages 13 and under from joining social media platforms starting on 1 January 2025, and require parental consent for those aged 14 and 15 to create social media accounts, with civil penalties and liabilities imposed on non-compliant platforms.

California has introduced a law requiring parents or guardians of children who perform in monetised online videos to set aside a percentage of the minor’s gross earnings in a trust for their benefit.

Another California law, expanding the Coogan Law, will require employers of child influencers to set aside 15 per cent of their gross earnings in a trust, providing additional protections for child actors and influencers.

Ten Commandments in Louisiana Classrooms

In Louisiana, a law requiring the display of the Ten Commandments in all public classrooms is set to take effect on 1 January 2025, despite a federal judge finding the law “facially unconstitutional” and temporarily blocking its enforcement.

Louisiana Attorney General Elizabeth Murrill is appealing the injunction, arguing that it only applies to the five school boards named in the lawsuit and plans to work with the remaining schools to ensure compliance.

Keep reading

25 Tech Laws Slated To Take Effect in 2025

When it comes to technology, free speech, and new laws, the big question going into 2025 is whether the U.S. Supreme Court will allow a TikTok ban to take effect on January 19. Along with that possible change, a bevy of lower-profile tech laws—some good, mostly bad—are slated to take effect across the U.S. in the upcoming year, with many going into effect on January 1.

For today’s newsletter, I’ve rounded up some of the most notable ones, which include bans on teens using social media (Florida and Tennessee), age verification requirements for porn websites (Florida and Tennessee), a law ordering online platforms to remove “deceptive” election-related content (California), and a law limiting law enforcement use of images collected by drones (Nevada).

This list is not comprehensive. But I looked through a lot of laws taking effect in various states, so it’s a decent overview of what’s coming.

Keep reading

Drones Run Amok

Drones, drones, everywhere drones. For a few weeks recently, clusters of drones of unknown provenance were seen flying in the skies above New Jersey. Local, state, and federal authorities claimed that they did not know whose drones they were. The expression “baseless conspiracy theory” saw an uptick in usage once again as some in the media scoffed at the proliferating hypotheses about what was going on. Incoming U.S. President Donald Trump opined that the machines should be shot down, but the Joe Biden administration did not agree, lending significant support to the simplest hypothesis of them all: that the drones have been used by the government itself for whatever its purposes may be. Having once recognized this very real possibility, Trump cryptically intoned, “Something strange is going on. For some reason, they don’t want to tell the people.”

State officials, spurred by their constituents, got to work attempting to find ways to halt the drones overhead, suggesting that, if the executive branch would not itself prevent certain unnameable rogue government departments from violating the Posse Comitatus Act, then at least by asserting the sub-federal authority enjoyed by states, it would be possible to stop whoever was behind whatever the operation may have been. New Jersey is not Nord Stream, which U.S. citizens were quite willing to forget about and pretend never happened, despite in all likelihood having paid for the terrorist act of sabotage.

On December 19, 2024, the FAA (Federal Aviation Administration) issued a one-month ban on the flying of UAVs over swaths of New Jersey, declaring the areas to be “national defense airspace,” and oddly claiming that “deadly force” could be deployed in response to violations of the ban. It is unclear what the use of “deadly force” against inanimate machines might mean, but it ominously suggests that the persons behind the drones might be subject to summary execution. Or perhaps the reference to “deadly force” was just part of a cover story composed in order to dispel the most plausible available hypothesis, undoubtedly made even more popular by Trump’s pronouncement that “The government knows what is happening.” Having himself been the object of attempted assassinations by figures with rather bizarre back stories and curious connections, Trump understandably canceled a planned trip to New Jersey. The terrifying truth is that, with the advent of clusters of weaponized drones the size of insects, there really is nowhere and no way to hide from a determined killer with access to the latest and greatest lethal technologies developed by DARPA (the Defense Advanced Research Projects Agency—fully funded by U.S. taxpayers).

Regardless of who may have launched the mysterious drones, the implication for U.S. citizens is that at long last they have been subjected to the specter of insecurity and danger posed by the hovering overhead of unmanned aerial vehicles (UAVs), which may or may not bear lethal payloads, and which may or may not be monitoring the movements of specific targets, whose names may or may not be found on government “kill don’t capture” lists.

Keep reading

“It’s A Cover-Up And It’s Terrifying”: CIA Whistleblower Says Foreign Adversary ‘Likely’ Responsible For Havana Syndrome

A former intelligence officer is accusing the CIA of gaslighting the victims of a debilitating health phenomenon known as Havana syndrome.

Investigative journalist Catherine Herridge sat down with the CIA whistleblower to hear about her “career-ending injuries” in an exclusive interview posted on X.

The House Intelligence Committee recently determined in an interim report that “the Intelligence Community has attempted to thwart congressional efforts to uncover the truth at every turn.”

“It’s a cover-up and it’s terrifying,” said the former intel officer, who spoke to Herridge using the alias “Alice.” “It should be terrifying for all Americans.”

Dubbed “Anomalous Health Incidents (AHIs),” the syndrome was first experienced by U.S. government personnel stationed in Havana, Cuba, in 2016.  Those affected reported hearing piercing noises and then suffering head pressure followed by various symptoms, including headache, nausea, fatigue, dizziness, vertigo, ear pain, and cognitive dysfunction.

Hundreds of cases of Havana syndrome have since been reported, chiefly among US spies, diplomats and soldiers posted abroad, leading many to believe they were targeted by a hostile country with a high-powered microwave weapon system.

However, the Intelligence Community has consistently concluded that the mysterious illnesses were unlikely to be the work of a foreign actor and were more likely “attributable to naturally occurring medical conditions, environmental exposures, or psycho-social factors.”

The Democrat-controlled U.S. Senate came to the same conclusion in a new report released on Friday.

The House Intelligence Committee however determined in its Dec. 5 report that it is “increasingly likely a foreign adversary is responsible for some portion of reported AHIs.”

Both committees concluded that the CIA’s handling of the Havana Syndrome cases has been seriously flawed and inconsistent.

“The conclusions published by the DNI in the unclassified Intelligence Community Assessment on AHI are dubious at best, misleading at worst,” said House Permanent Select Committee on Intelligence CIA Subcommittee Chairman Rick Crawford (R-Ark.)

Keep reading

ChatGPT Mystery: Parents of Deceased OpenAI Whistleblower Question Suicide Ruling

The parents of Suchir Balaji, a former OpenAI researcher turned whistleblower who was found dead in his San Francisco apartment, have hired an independent investigator to conduct a private autopsy, casting doubt on the official ruling of suicide.

ABC7 News reports that Suchir Balaji, a 26-year-old former OpenAI researcher turned whistleblower, was discovered dead in his San Francisco apartment on November 26, 2024, during a well-being check conducted by the police. While the Medical Examiner’s office has ruled Balaji’s death a suicide, with no signs of foul play, his parents, Poornima Ramarao and Balaji Ramamurthy, are questioning the official findings and have taken matters into their own hands by hiring an expert to perform an independent autopsy.

Balaji’s death comes just three months after he publicly accused OpenAI, the company behind the groundbreaking AI chatbot ChatGPT, of violating U.S. copyright law during the development of their technology. His allegations were expected to play a crucial role in potential lawsuits against the company, although OpenAI maintains that all of its work falls under the protection of fair use laws.

Keep reading

‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years

The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”

He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems performing tasks that typically require human intelligence.

Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that “bad actors” would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought the development of AI would have reached when he first started his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.”

Keep reading