FDA Launches New AI Tool

The Food and Drug Administration on June 2 launched a new artificial intelligence (AI) tool.

FDA officials said Elsa, the tool, will help employees “work more efficiently.”

The agency is utilizing Elsa to speed up clinical protocol reviews and scientific evaluations, as well as to identify targets for inspections.

FDA officials described Elsa as a “large language model–powered AI tool designed to assist with reading, writing, and summarizing.” They said it can summarize adverse events to help with safety profile assessments, compare labels faster than humans, and generate code to help develop databases.

“Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee,” FDA Chief AI Officer Jeremy Walsh said in a statement.

AI refers to computer systems that perform complex tasks typically performed by humans.

Dr. Marty Makary, the FDA’s commissioner, said in May that the FDA would immediately start using AI and fully integrate it by the end of June.

“Following a very successful pilot program with FDA’s scientific reviewers, I set an aggressive timeline to scale AI agency-wide by June 30,” Makary said on Monday.

“Today’s rollout of Elsa is ahead of schedule and under budget, thanks to the collaboration of our in-house experts across the centers.”

Health Secretary Robert F. Kennedy Jr., who leads the Department of Health and Human Services (HHS), the FDA’s parent agency, cheered the development, calling it “a revolution in public health” in a post on social media platform X.

The FDA recently fired thousands of employees. HHS officials had said they would cut about 3,500 full-time workers but ended up terminating about 2,500 workers, according to a Senate Democrat report.

Makary told a congressional panel during a recent appearance that no scientific reviewers were fired, although some research scientists were among those terminated.

President Donald Trump has promoted the use of AI, saying in one order that “with the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans.”

The first report from Trump’s Make America Healthy Again (MAHA) Commission, headed by Kennedy, contained markers of AI, including nonexistent studies and multiple instances of “oaicite,” which developers say is inserted by OpenAI’s ChatGPT AI tool.

“The pattern is consistent with other cases we’ve seen of using generative AI to create citations,” Dr. Ivan Oransky, co-founder of Retraction Watch and executive director of The Center for Scientific Integrity, told The Epoch Times in an email.

“The fact that these fake citations made it into the report are evidence that no one checked the report for rigor. That should concern anyone who reads it or considers relying on it.”

Keep reading

Palantir Co-Founder Joe Lonsdale & Former Exec Refute NYT Report Warning Over Surveillance ‘Master List’

Palantir co-founder Joe Lonsdale and former executive Wendy Anderson have hit back against a New York Times report warning that the company is laying the groundwork for government surveillance on steroids through a massive database that would coordinate the private information of US citizens across federal agencies.

“Palantir’s not a ‘database’; it’s a platform created by 1000s of the most talented and patriotic Americans to partner with our DoD to stop attacks and defeat bad guys, while protecting liberty & privacy,” Lonsdale posted on X in response to the account “Retard Finder,” which had said, “The Palantir database idea is retarded.”

“There are hundreds of similar types of software and efforts in the USA throughout the west; what’s unique about Palantir is that it’s BY FAR the best at stopping bad guys,” Lonsdale continued.

When asked by a self-described Palantir shareholder whether he’d “personally be comfortable with your personal data being stored in this database if AOC or Ilhan Omar were President,” Lonsdale replied: 

“given the government does operate on sensitive data: I 100% prefer PLTR to be there if sketchy people are in charge, as it has full access rules and audit trails; others don’t.”

Keep reading

AI recruiting is all the rage — as employers hand the screening of new hires over to robots: ‘Seemed insane’

It’s the rise of the robo-recruiters.

Employers are turning to artificial intelligence to screen potential new human hires.

AI recruiting software is increasingly subbing in for actual people during preliminary interviews — with a fake person quizzing candidates and inquiring about their skills, before delivering their findings to managers.

“A year ago this idea seemed insane,” Arsham Ghahramani, co-founder and chief executive officer of Toronto-based AI recruiting startup Ribbon, told Bloomberg. “Now it’s quite normalized.”

Companies say the goal is to ultimately make the interview process more efficient and accessible for candidates — without needing human recruiters to be online all day.

For employers, particularly those hiring at high volume, the switch can save hundreds of hours of manpower per week.

Keep reading

NAACP Accuses Musk Of Endangering Black Communities With Supercomputer Fumes

The NAACP is demanding Memphis officials shut down Elon Musk’s xAI supercomputer facility, claiming the world’s largest AI training center violates clean air laws and threatens the health of nearby black residents.

The civil rights organization sent a letter Thursday to the Shelby County Health Department and Memphis Light, Gas and Water officials, alleging xAI has operated up to 35 gas turbines without proper permits for over a year at its Colossus facility in South Memphis. The turbines power the supercomputer that trains Musk’s Grok AI chatbot, which they claim emits excessive hazardous pollutants.

“We are urging you again to ensure that xAI stops operating its unpermitted turbines in violations of clean air and open meeting act laws and to order xAI to pay penalties for operating in violation of the law,” the letter states. “The message that [Shelby County Health Department] and [Memphis Light, Gas and Water] have sent to the community is that billionaires matter more than the tax payers and residents who live there.”

Musk has said the data center — called “Colossus” — will be the first gigawatt-scale AI training facility in the world. It powers Grok, the chatbot xAI is positioning to compete with OpenAI’s ChatGPT and others.

But the NAACP says the operation is “illegal,” citing emissions estimates of up to 2,000 tons of nitrogen oxide per year from the turbines and invoking the Clean Air Act’s “New Source Review” rule. The group argues that xAI’s decision to split the turbines into smaller groups is a deliberate strategy to “sidestep the law.”

Keep reading

Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly committed suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text including an AI-generated nude photo depicting himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public a nude or sexual images of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed, reporting that the organization received more than 500,000 reports of sextortion against minors in the last year alone.

Since 2021, at least 20 young people have committed suicide as a result of becoming victims of sextortion scams, according to the FBI.

Keep reading

No Mark Zuckerberg, AI ‘Friends’ are not Good for Mental Health

Think you could use a few more friends? Meta CEO Mark Zuckerberg says AI will do the trick. In a recent interview with podcaster Dwarkesh Patel, the Silicon Valley titan said the average American has fewer than three friends but a desire to have “something like fifteen.”

Zuckerberg thinks computer code will fill the gap: “The average person wants more connectivity, connection than they have,” he observed. “As the personalization loop kicks in, and the AI just starts to get to know you better, I think that will be just really compelling.” 

It’s interesting advice from a guy who heads up two of the largest platforms on the planet for bringing people together.

It’s also an admission from Zuckerberg that chatting with real people isn’t cutting it anymore.

His solution? More technology, not less. Meta has made billions of dollars monetizing our attention. Why not monetize our loneliness, too?  

Turns out it’s a bad time to tell us to make AI friends when we’re already struggling to navigate our digital lives. In 2023, US Surgeon General Vivek Murthy warned of an epidemic of loneliness and isolation.

“One in two adults in America is living with measurable levels of loneliness,” Murthy reported, “but the numbers are even higher among young people.” He pointed to social media and online interactions as a driving factor. 

And we’re not just lonely. Rates of depression and anxiety are on the rise, too, again particularly in our youth.

According to Centers for Disease Control and Prevention data published last month, the prevalence of depression in people age 12 and older has nearly doubled in a decade, jumping from 8.2% between 2013 and 2014 to 13.1% between 2021 and 2023.

Of course, Zuckerberg knew his products were negatively impacting young people years ago.

In 2021, The Wall Street Journal revealed that Facebook, which owns Instagram, had internal evidence showing Instagram use is linked with poorer mental health, particularly among young women.

Facebook buried its findings and failed to address the problem.

Zuckerberg doesn’t seem to understand that the struggle is real for millions of Americans who are finding it anything but easy to manage their well-being around constant online stimulation: “People are smart. They know what’s valuable in their lives,” Zuckerberg told Patel. “I think people have a good sense of what they want.”

Keep reading

Trump’s $795M Data Power Play Sends Palantir Soaring 140%, but Here’s the Hidden Risk

Palantir (NASDAQ:PLTR) is riding a wave of government contracts as the Trump administration ramps up efforts to centralize and analyze federal data. Since Trump signed an executive order in March calling for more interagency data sharing, Palantir has quietly become the go-to vendor for building that digital infrastructure. The company has landed more than $113 million in new and extended federal contracts since Trump took office, including a blockbuster $795 million deal with the Pentagon last week. Palantir’s Foundry platform is already in use at Homeland Security and Health and Human Services, and engineers were recently embedded at the IRS to begin building a unified, searchable database for taxpayer records. Talks are also underway with the Social Security Administration and Department of Education, suggesting more agencies could follow.

Investor enthusiasm hasn’t lagged. Since Trump’s re-election, Palantir shares have surged more than 140%, fueled by the prospect that the company may now become the digital backbone of the U.S. federal government. The Department of Government Efficiency (DOGE), a Musk-led initiative, has been instrumental in Palantir’s rise, with several DOGE members having ties to Palantir or Peter Thiel-backed ventures. The company’s tools are now being used to connect data points ranging from immigration status and bank accounts to student loans and disability claims. In April, Immigration and Customs Enforcement (ICE) awarded Palantir a $30 million contract to track migrant movements in real time, another sign of how fast the government is scaling its use of Foundry.

Keep reading

Centralized AI Threatens A Democratic Digital Future

Major corporations dominate the AI industry, leaving decentralized AI (DeAI) companies in the dust. To build a more decentralized world, the sector must execute on a focused DeAI strategy, with shared standards between projects and without compromise.

In April, a UN report warned that AI’s $4.8-trillion market is dominated by a mere 100 companies, most of which are based in the US and China. Centralized AI incumbents have the money and the connections to control this massive new industry, which means significant implications for society. 

These companies, all employing centralized AI technology, have run into their fair share of headaches. For example, Microsoft’s Copilot garnered attention for creating explicit, inappropriate images, such as children in compromising scenarios. This sparked a public and regulatory backlash.

Although Microsoft created stricter moderation, it had already demonstrated that centralized AI can harbor problems in part due to its closed-source code. 

Citadel was wrapped up in an AI trading scandal in the financial sector, as algorithms allegedly manipulated stock prices via artificial volume creation.

Google’s Project Maven, a Pentagon pilot program used in military tech, has raised ethical questions.

“We believe that Google should not be in the business of war,” reads a letter penned by Google employees and addressed to Sundar Pichai, the company’s CEO. The employees requested that Google leave Project Maven.

“We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter states.

So much for “Don’t be evil” — the company’s old slogan.

These situations give us clear examples of the potential failures of centralized AI, including ethical lapses, opaque decision-making and monopolistic control. DeAI’s open-source ethos, community governance, audit trails and distributed compute could give it an edge over a handful of massive corporations in the future of AI.

Keep reading

Who’s Programming The AI, Mr. President?

President Trump’s new legislative centerpiece – the One Big Beautiful Bill – has a promising name and some compelling goals: reduce government bloat, streamline agencies, and modernize operations with cutting-edge technology.

But there’s a problem. A big one.

No one will tell us who’s programming the AI.

This sweeping bill includes a 10-year moratorium on any state or local government regulating artificial intelligence. According to The Washington Post and AP, more than 60 existing state-level laws will be overridden if this provision passes. All regulatory authority over AI—including systems that will be used in law enforcement, healthcare, defense, and finance—will be centralized in the federal government for a decade.

Even worse? The bill empowers the Department of Commerce to deploy “commercial AI” across virtually every federal agency—from the IRS to Homeland Security—according to Indian Express and The Verge.

And yet, no one in the White House or Congress has revealed who is writing the AI code, what datasets it’s trained on, whether it can be independently audited, or whether it’s bound by the U.S. Constitution.

This isn’t just a transparency issue. This is a constitutional crisis in the making.

To be clear, President Trump’s instincts here may be sound. We’ve long needed to shrink the federal leviathan and replace unconstitutional bureaucracies with systems that serve the people—not special interests.

But good intentions won’t protect us from unseen programmers, black-box algorithms, and unaccountable automation.

This bill mandates AI integration across government “to improve efficiency and security.” But efficiency isn’t liberty. Security isn’t sovereignty. And no AI—no matter how “smart”—should be allowed to rewrite, ignore, or reinterpret constitutional rights.

According to Business Insider, the AI moratorium’s stated goal is to “foster innovation” and avoid a “fragmented regulatory landscape.” In reality, it strips states of their authority to protect their citizens from deepfakes, algorithmic bias, digital censorship, and mass surveillance.

This is not governance. This is outsourced tyranny, hidden under the guise of modernization.

So let’s ask the question about what happens when AI is weaponized. If the systems being implemented were open source, transparent, built entirely on constitutional jurisprudence, and auditable by the public, we’d be having a very different conversation.

Instead, we’re facing a future where an algorithm may determine whether you’re eligible for services, a machine learning system may flag you as a “threat” based on your social media posts, and a black-box model may deny you a loan, reject your legal challenge, or freeze your bank account.

Keep reading

Trump’s chief of staff is caught in sinister impersonation plot exposing America’s most powerful players… until fatal slip reveals truth

Donald Trump’s top advisor has fallen victim to a sinister scheme by hackers who infiltrated her phone and used artificial intelligence to impersonate her voice.

The nefarious plot involved stolen data from the personal cellphone of White House chief of staff Susie Wiles that was then used to call some of America’s most powerful people.

Over the course of several weeks, high-profile senators, governors and American business executives have received voicemails or messages from an unknown number claiming to be Wiles, the Wall Street Journal reported.

The hackers came undone when they made the fatal mistake of asking questions that the president’s closest aide would already know the answer to. 

Wiles – who has been nicknamed ‘Ice Maiden’ – has been contacting associates urging them to disregard any unusual messages or calls from unknown numbers purporting to be her.

In a terrifying twist, some of the phone calls used a voice that mimicked Wiles. Authorities suspect the impersonator used artificial intelligence to replicate it.

The FBI is now investigating the unusual situation, but investigators do not believe that a foreign nation is behind it, sources told the publication.

‘The FBI takes all threats against the president, his staff, and our cybersecurity with the utmost seriousness,’ FBI Director Kash Patel said in a statement. 

‘Safeguarding our administration officials’ ability to securely communicate to accomplish the president’s mission is a top priority.’ 

Authorities are reportedly scrambling to figure out what the goal of the plot is.

In at least one case the impersonator asked for a cash transfer, while in another they asked a lawmaker for a list of people who the president could consider pardoning.

But many of the texts that came through appeared to be legitimate and fooled some of Wiles’ contacts. There is reportedly no indication that any sensitive information was handed over.

Some of her associates became suspicious when the impersonator began asking questions – sometimes about Trump – that Wiles as his closest advisor would have known the answers to.

Others said some messages were sent with poor grammar, and in a more formal manner than Wiles generally communicates. 

These messages were not sent from Wiles’ phone number.

Her personal cellphone contacts list was reportedly hacked, giving the impersonator access to a long list of numbers she has amassed over the years as one of Washington’s most formidable political players.

The breach comes after a series of embarrassing data leaks during Trump’s second term.

In one instance, the editor-in-chief of The Atlantic, Jeffrey Goldberg, was accidentally included in a Signal group chat where Secretary of Defense Pete Hegseth discussed planned military strikes in Yemen. 

Trump’s former national security adviser, Mike Waltz, took responsibility for the error and was reassigned to a different role. 

Keep reading