‘Could Lead to Extinction Event’: Nicole Shanahan Issues Dire Warning Against Stargate’s AI-Backed mRNA Cancer Vaccine Rollout

The distribution of AI-driven mRNA cancer vaccines for individuals as part of President Donald Trump’s Stargate Project could lead to an “extinction event,” warns former RFK Jr. running mate Nicole Shanahan.

In an appearance on Megyn Kelly’s podcast Wednesday, Shanahan, a Silicon Valley attorney and Robert F. Kennedy Jr.’s 2024 running mate, called for a moratorium on experimental mRNA technology, arguing that it already raises health concerns because its long-term effects are not yet fully understood.

“What we need for the mRNA platform right now is a moratorium. It’s not ready for human use,” Shanahan said. “One of the reasons why is it delivers an inconsistent result in individuals.”

Shanahan went on to explain that 5% of those who received the experimental COVID-19 mRNA jabs during the plandemic didn’t get the expected results; instead, many ended up with “turbo cancers,” “blood clots” and other adverse effects, while others were harmed as a result of spike protein “shedding.”

“In order for our population to grow, to be strong, to be fully able-bodied, and for our human economy to thrive, we do need a moratorium on the mRNA for the time being,” she said.

Kelly added, “Until it’s not Russian roulette to take it.”

Shanahan pointed out that many engineers and pharmacological researchers working on the development of the mRNA tech are overlooking a fundamental truth about human biology: it can’t be programmed the way a computer system can.

“They think that you can program the human body as you program an AI system, as you program a computer system. And the trouble with that mentality is that nature…there’s an element to it that when you interject something like the mRNA vaccine, there’s a huge amount of stochastic randomness that can occur,” she noted.

“AI is a computer system. Human health is not,” she added.

Keep reading

Trump throws his weight behind new generation of mRNA gene-therapy injections, for cancer and other diseases

OpenAI, SoftBank, and Oracle will be part of a public-private partnership with the Trump White House called Stargate.

The heads of the tech firms plan to invest up to $500 billion over four years in building AI infrastructure across the United States. This means data centers: massive buildings designed to collect and process data. Running these centers requires huge amounts of water and energy.

SoftBank CEO Masayoshi Son, Sam Altman of OpenAI, and Larry Ellison of Oracle appeared at the White House on Tuesday afternoon with President Trump to announce the launch of Stargate.

Trump, standing with the three tech CEOs at the White House, said he would invoke “emergency declarations” to help speed up the Stargate project.

“I’m going to help a lot through emergency declarations,” he said. “Because we have an emergency and we need a lot of help. We need energy generation and they will build their own.”

He said Stargate will build the infrastructure to power the “next generation of AI and this will include data centers. Massive facilities…These are big beautiful buildings.”

He said a team is already scouting the nation for sites on which to build new data centers, adding:

“This is to me a very big deal. It could lead to something that could be the biggest of all.”

Larry Ellison talked about combining the forces of AI and mRNA gene therapy to create a “cancer vaccine.”

Keep reading

The CDC, Palantir and the AI-Healthcare Revolution 

The Pentagon and Silicon Valley are in the midst of cultivating an even closer relationship as the Department of Defense (DoD) and Big Tech companies seek to jointly transform the American healthcare system into one that is “artificial intelligence (AI)-driven.” The alleged advantages of such a system, espoused by the Army itself, Big Tech and Pharma executives as well as intelligence officers, would be unleashed by the rapidly developing power of so-called “predictive medicine,” or “a branch of medicine that aims to identify patients at risk of developing a disease, thereby enabling either prevention or early treatment of that disease.”

This will apparently be achieved via mass interagency data sharing between the DoD, the Department of Health and Human Services (HHS) and the private sector. In other words, the military and intelligence communities, as well as the public and private sector elements of the US healthcare system, are working closely with Big Tech to “predict” diseases and treat them before they occur (and even before symptoms are felt) for the purported purpose of improving civilian and military healthcare.

This cross-sector team plans to deliver this transformation of the healthcare system by first utilizing and sharing the DoD’s healthcare dataset, which is the most “comprehensive…in the world.” It seems, however, based on the programs that already utilize this predictive approach and the necessity for “machine learning” in the development of AI technology, that this partnership would also massively expand the breadth of this healthcare dataset through an array of technologies, methods and sources.

Yet, if the actors and institutions involved in lobbying for and implementing this system indicate anything, it appears that another—if not primary—purpose of this push towards a predictive AI-healthcare infrastructure is the resurrection of a Defense Advanced Research Projects Agency (DARPA)-managed and Central Intelligence Agency (CIA)-supported program that Congress officially “shelved” decades ago. That program, Total Information Awareness (TIA), was a post-9/11 “pre-crime” operation that sought to use mass surveillance to stop terrorists before they committed any crimes through collaborative data mining efforts between the public and private sectors.

While the “pre-crime” aspect of TIA is the best known component of the program, it also included a component that sought to use public and private health and financial data to “predict” bioterror events and pandemics before they emerge. This was TIA’s “Bio-Surveillance” program, which aimed to develop “necessary information technologies and a resulting prototype capable of detecting the covert release of a biological pathogen automatically, and significantly earlier than traditional approaches.” Its architects argued it would achieve this by “monitoring non-traditional data sources” including “pre-diagnostic medical data” and “behavioral indicators.” While ostensibly created to thwart “bioterror” events, the program also sought to create algorithms for identifying “normal” disease outbreaks, essentially seeking to automate the early detection of either biological attacks or natural pathogen outbreaks, ranging from pandemics to presumably other, less severe disease events.
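What the excerpt describes is, in effect, automated syndromic surveillance: watching streams of pre-diagnostic data for statistical anomalies that precede confirmed diagnoses. As a purely illustrative sketch (the actual DARPA algorithms were far more elaborate and are not public), the toy Python below flags days where a hypothetical pre-diagnostic signal, such as daily pharmacy sales of cough medicine, spikes far above its recent baseline:

```python
# Toy illustration of syndromic surveillance: flag days where a
# pre-diagnostic signal (here, imaginary daily counts of pharmacy
# cough-medicine sales) spikes far above its recent baseline.
# Hypothetical data and thresholds throughout.
from statistics import mean, stdev

def flag_anomalies(daily_counts: list[int], window: int = 14,
                   z_threshold: float = 3.0) -> list[int]:
    """Return indices of days whose count exceeds the baseline by z_threshold sigmas."""
    flagged = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[day] - mu) / sigma > z_threshold:
            flagged.append(day)
    return flagged

# A quiet fortnight, then a sudden surge on the final day (index 16).
counts = [20, 22, 19, 21, 20, 23, 18, 22, 21, 20, 19, 22, 21, 20, 24, 21, 90]
print(flag_anomalies(counts))  # [16]
```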

Keep reading

Chinese Navy Reveals a Naval Artificial Intelligence “Dreadnought” Moment

There have been “Revolutions in Military Affairs” (RMAs) over the ages. RMAs are pivot points when something changes warfare dramatically. In the naval arena, one of the most memorable RMAs was the introduction of HMS Dreadnought in 1906.

It was said that “Dreadnought made every other existing battleship obsolete, and her name became generic for similar fast, modern vessels. All battleships laid down before her were pejoratively labeled ‘pre-dreadnought.’”

The Chinese Navy (PLAN) has revealed a new vessel that may represent a modern naval “Dreadnought” moment. The “Killer Whale” (or Orca) autonomous surface combat vessel was recently shown in China, cruising on the river from its Guangzhou shipyard.

This vessel is the largest military-purpose USV (unmanned surface vessel) built to date. It is little coincidence that Guangzhou was the location of the shipyard.

Guangzhou is the Silicon Valley region of China, and the Orca is not just an autonomous warship, but a floating combat data center.

The vessel reflects significant data collection, data analysis, and AI-enabled autonomous action.

AI and autonomy are trending topics, but the Orca is far ahead of any other AI-enabled autonomous vessel publicly known to date.

Keep reading

How AI Is Fueling UFO Misinformation Online

Social media platforms have seen a surge in the sharing of alleged videos of UFOs (unidentified flying objects), particularly following a November 2024 U.S. congressional hearing.

In November 2024, the U.S. House of Representatives Committee on Oversight and Accountability held a hearing titled “Unidentified Anomalous Phenomena (UAP): Exposing the Truth.”

During the hearing, a former Department of Defense official testified to Congress that government employees had been injured by UFOs and accused the U.S. government of conducting a secret UFO retrieval program. However, he did not provide direct evidence to support his claims.

Although this hearing was similar to previous congressional UFO hearings, the pedigrees of some whistleblowers who testified set it apart. Witnesses included a former U.S. counterintelligence officer, a retired U.S. Navy rear admiral, and a former NASA associate administrator.

All of them stressed the need for more government transparency, less stigma around the UFO topic, and new policies to bring UAP data out of classified programs and into the public domain.

This congressional hearing energized already enthusiastic UFO communities, prompting many to create AI-generated videos about UFOs and encouraging thousands of people to share them. Misbar investigated some of the most viral claims, analyzing them and explaining how AI-generated content can be identified.

Keep reading

‘Over 800 Biases Uncovered’ As Pentagon Ends AI Chatbot Pilot Program For Military Medicine

The US Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) has concluded a pilot program focused on using AI chatbots in military medical services.  

In a Jan. 2 announcement, the DoD said the Crowdsourced AI Red-Teaming (CAIRT) Assurance Program pilot focused on using large language models (LLMs) for clinical note summarization and as medical advisers in the military.

It comes as more AI firms have begun offering their products to the US military and defense contractors to investigate their usefulness in military applications.

CoinTelegraph’s Stephen Katte reports that, according to the DoD, the pilot was a red-teaming effort conducted by technology nonprofit Humane Intelligence.

It attracted over 200 independent external participants, including clinical providers and healthcare analysts, who compared three prominent chatbot models.

Analysts from the Defense Health Agency and the Uniformed Services University of the Health Sciences also collaborated with the other participants, testing for potential system weaknesses and flaws while the chatbots were used.

According to the DoD, the pilot uncovered hundreds of potential issues with using chatbots in military medical applications.

“The exercise uncovered over 800 findings of potential vulnerabilities and biases related to employing these capabilities in these prospective use cases.”

“This exercise will result in repeatable and scalable output via the development of benchmark data sets, which can be used to evaluate future vendors and tools for alignment with performance expectations,” the DoD said.
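The DoD has not published the format of those benchmark data sets, but the mechanics of this kind of evaluation are straightforward to sketch. The Python below is a hypothetical illustration only; the file layout, field names and the crude lexical scoring rule are assumptions for the example, not the CAIRT program’s actual tooling:

```python
# Hypothetical sketch of benchmark-set evaluation: score a model's answers
# against reference answers and emit one "finding" per failing case.
# File layout, field names and the crude lexical scorer are assumptions
# for illustration, not the CAIRT program's actual tooling.
import json
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between a model answer and the reference."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def evaluate(benchmark_path: str, model_outputs: dict[str, str],
             threshold: float = 0.6) -> list[dict]:
    """Return one finding per benchmark case the model fails."""
    with open(benchmark_path) as f:
        # Assumed format: [{"id": ..., "prompt": ..., "reference": ...}, ...]
        cases = json.load(f)
    findings = []
    for case in cases:
        answer = model_outputs.get(case["id"], "")
        score = similarity(answer, case["reference"])
        if score < threshold:
            findings.append({"id": case["id"], "score": round(score, 3)})
    return findings
```

A production harness would score answers with clinically validated rubrics and human review rather than raw string similarity, but the repeatable, dataset-driven shape would be the same.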

The Chief Digital and Artificial Intelligence Office’s lead for the initiative, Matthew Johnson, said the results will also be used to shape DoD research and development of generative AI (GenAI) systems that may eventually be deployed.

Keep reading

Apple auto-opts everyone into having their photos analyzed by AI for landmarks

Apple last year deployed a mechanism for identifying landmarks and places of interest in images stored in the Photos application on its customers’ iOS and macOS devices, and enabled it by default, seemingly without explicit consent.

Apple customers have only just begun to notice.

The feature, known as Enhanced Visual Search, was called out last week by software developer Jeff Johnson, who expressed concern in two write-ups about Apple’s failure to explain the technology, which is believed to have arrived with iOS 18.1 and macOS 15.1 on October 28, 2024.

In a policy document dated November 18, 2024 (not indexed by the Internet Archive’s Wayback Machine until December 28, 2024, the date of Johnson’s initial article), Apple describes the feature thus:

Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. We apply homomorphic encryption and differential privacy, and use an OHTTP relay that hides [your] IP address. This prevents Apple from learning about the information in your photos. You can turn off Enhanced Visual Search at any time on your iOS or iPadOS device by going to Settings > Apps > Photos. On Mac, open Photos and go to Settings > General.

Apple did explain the technology in a technical paper published on October 24, 2024, around the time that Enhanced Visual Search is believed to have debuted. A local machine-learning model analyzes photos to look for a “region of interest” that may depict a landmark. If the AI model finds a likely match, it calculates a vector embedding – an array of numbers – representing that portion of the image.
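Apple has not published the model itself, but the pattern it describes, turning an image region into a fixed-length vector that can be compared against others, is standard practice. Here is a minimal sketch in Python using an off-the-shelf ResNet-18 purely as a stand-in for Apple’s unpublished on-device model:

```python
# Hedged sketch of the client-side step: turn an image (or a detected
# region of interest) into a fixed-length vector embedding with a local
# model. Apple's on-device model is unpublished; an off-the-shelf
# ResNet-18 serves here purely as a stand-in.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()   # drop the classifier head; keep features
backbone.eval()

preprocess = weights.transforms()   # resize, crop, normalize

def embed(image_path: str) -> torch.Tensor:
    """Return a unit-length 512-dimensional embedding for the image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        features = backbone(preprocess(img).unsqueeze(0)).squeeze(0)
    return torch.nn.functional.normalize(features, dim=0)
```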

The device then uses homomorphic encryption to scramble the embedding in such a way that it can be run through carefully designed algorithms that produce an equally encrypted output. The goal is that the encrypted data can be sent to a remote system for analysis without whoever operates that system learning the contents of the data; they can only perform computations on it, the results of which remain encrypted. The input and output are end-to-end encrypted and are not decrypted during the mathematical operations, or so it’s claimed.

The dimension and precision of the embedding are adjusted to reduce the high computational demands of this homomorphic encryption (presumably at the cost of labeling accuracy) “to meet the latency and cost requirements of large-scale production services.” That is to say, Apple wants to minimize its cloud compute costs and mobile device resource usage for this free feature.

With some server optimization metadata and the help of Apple’s private nearest neighbor search (PNNS), the relevant Apple server shard receives a homomorphically encrypted embedding from the device and performs the aforementioned encrypted computations on that data to find a landmark match in a database, returning the result to the client device without providing identifying information to Apple or its OHTTP partner Cloudflare.
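Apple’s PNNS protocol and encryption parameters are not fully public, but the core trick, computing similarity scores on data the server cannot read, can be demonstrated with the open-source TenSEAL library. The sketch below is illustrative only: the CKKS scheme, the tiny four-dimensional vectors and the landmark names are stand-ins, not Apple’s actual pipeline:

```python
# Illustrative round trip with the open-source TenSEAL library: the client
# encrypts its embedding, the "server" computes dot-product similarity
# against plaintext landmark vectors without decrypting anything, and the
# client decrypts the scores. The CKKS scheme, tiny 4-dim vectors and
# landmark names are stand-ins; Apple's actual PNNS protocol differs.
import tenseal as ts

# Client side: build an encryption context. In this single-process sketch
# one context plays both roles; a real server would get a copy with the
# secret key stripped out, so it could compute but never decrypt.
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()      # required by the rotations inside dot()

query = [0.12, -0.48, 0.31, 0.80]   # stand-in for a real image embedding
encrypted_query = ts.ckks_vector(context, query)

# Server side: each dot product runs on the ciphertext and yields an
# encrypted similarity score the server itself cannot read.
landmark_db = {
    "Golden Gate Bridge": [0.10, -0.50, 0.30, 0.79],
    "Eiffel Tower":       [-0.62, 0.11, 0.75, -0.05],
}
encrypted_scores = {name: encrypted_query.dot(vec)
                    for name, vec in landmark_db.items()}

# Client side: decrypt the scores locally and keep the best match.
scores = {name: enc.decrypt()[0] for name, enc in encrypted_scores.items()}
print(max(scores, key=scores.get))  # "Golden Gate Bridge"
```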

Keep reading

Here are a few of the new laws taking effect in the US in 2025

At the end of last year, The Epoch Times highlighted the more notable new laws set to take effect in US states in 2025, impacting various aspects of life in the United States, including digital content creation, kids’ social media use and more.

Most of these laws are not unique to the US; throughout the West, people are familiar with the underlying agenda that has given rise to such laws. Judging by these laws alone, it is hard not to feel that the West is experiencing, or being forced into, a crisis of moral decline, with some places deeper in crisis than others.

Abortion

In New York, a constitutional amendment enshrining abortion as a right will become enforceable on 1 January 2025, although its full implications are still unclear as state law already protects abortion through foetal viability and in cases involving a risk to the mother’s health or life.

The amendment to the New York constitution also bars discrimination based on characteristics such as national origin, gender identity and gender expression. Opponents argue that the amendment could be used to establish other constitutional rights, such as transgender surgeries for minors, male participation on female sports teams and voting rights for non-citizens.

Seven other states have passed amendments to expand or protect abortion access, with most either already in effect or facing legal disputes.

REAL ID Enforcement

The REAL ID Act, passed by Congress in 2005, established minimum security standards for state-issued driver’s licenses and identification cards. The Department of Homeland Security has delayed the enforcement of REAL ID multiple times due to the covid-19 pandemic. The enforcement date for REAL ID compliance is 7 May 2025.

From that date, all US adults will be required to present REAL ID-compliant identification to fly domestically and access certain federal facilities.  All REAL ID-compliant cards will have a star symbol on the upper portion of the card, with US passports also being an acceptable form of ID.

Digital Replication and AI

California will enforce two laws protecting the voices and likenesses of actors and performers from digital replication through artificial intelligence, requiring professionally negotiated contracts and banning the commercial use of digital replicas of deceased performers without their estate’s consent.

Similar laws will also be enforced in Illinois, which has banned the distribution of AI-generated audio or visual replicas of a person without their consent and expanded the definition of “child pornography” to include digitally manipulated or created depictions.

Children’s Social Media Use

In Florida, a new law will prohibit children ages 13 and under from joining social media platforms starting on 1 January 2025, and require parental consent for those aged 14 and 15 to create social media accounts, with civil penalties and liabilities imposed on non-compliant platforms.

California has introduced a law requiring parents or guardians of children who perform in monetised online videos to set aside a percentage of the minor’s gross earnings in a trust for their benefit.

Another California law, expanding the Coogan Law, will require employers of child influencers to set aside 15 per cent of the minors’ gross earnings in a trust, providing additional protections for child actors and influencers.

Ten Commandments in Louisiana Classrooms

In Louisiana, a law requiring the display of the Ten Commandments in all public classrooms is set to take effect on 1 January 2025, despite a federal judge finding the law “facially unconstitutional” and temporarily blocking its enforcement.

Louisiana Attorney General Elizabeth Murrill is appealing the injunction, arguing that it applies only to the five school boards named in the lawsuit, and says she plans to work with the remaining schools to ensure compliance.

Keep reading

25 Tech Laws Slated To Take Effect in 2025

When it comes to technology, free speech, and new laws, the big question going into 2025 is whether the U.S. Supreme Court will allow a TikTok ban to take effect on January 19. Along with that possible change, a bevy of lower-profile tech laws—some good, mostly bad—are slated to take effect across the U.S. in the upcoming year, with many going into effect on January 1.

For today’s newsletter, I’ve rounded up some of the most notable ones, which include bans on teens using social media (Florida and Tennessee), age verification requirements for porn websites (Florida and Tennessee), a law ordering online platforms to remove “deceptive” election-related content (California), and a law limiting law enforcement use of images collected by drones (Nevada).

This list is not comprehensive. But I looked through a lot of laws taking effect in various states, so it’s a decent overview of what’s coming.

Keep reading

ChatGPT Mystery: Parents of Deceased OpenAI Whistleblower Question Suicide Ruling

The parents of Suchir Balaji, a former OpenAI researcher turned whistleblower who was found dead in his San Francisco apartment, have hired an independent investigator to conduct a private autopsy, casting doubt on the official ruling of suicide.

ABC7 News reports that Balaji, 26, was discovered dead in his San Francisco apartment on November 26, 2024, during a well-being check conducted by police. While the Medical Examiner’s office has ruled his death a suicide, with no signs of foul play, his parents, Poornima Ramarao and Balaji Ramamurthy, are questioning the official findings and have taken matters into their own hands by hiring an expert to perform an independent autopsy.

Balaji’s death comes just three months after he publicly accused OpenAI, the company behind the groundbreaking AI chatbot ChatGPT, of violating U.S. copyright law during the development of their technology. His allegations were expected to play a crucial role in potential lawsuits against the company, although OpenAI maintains that all of its work falls under the protection of fair use laws.

Keep reading