FDA Launches New AI Tool

The Food and Drug Administration on June 2 launched a new artificial intelligence (AI) tool.

FDA officials said Elsa, the tool, will help employees “work more efficiently.”

The agency is using Elsa to speed up clinical protocol reviews and scientific evaluations, and to identify targets for inspections.

FDA officials described Elsa as a “large language model–powered AI tool designed to assist with reading, writing, and summarizing.” They said it can summarize adverse events to help with safety profile assessments, compare labels faster than humans, and generate code to help develop databases.

“Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee,” FDA Chief AI Officer Jeremy Walsh said in a statement.

AI refers to computer systems that perform complex tasks that typically require human intelligence.

Dr. Marty Makary, the FDA’s commissioner, said in May that the FDA would immediately start using AI and fully integrate it by the end of June.

“Following a very successful pilot program with FDA’s scientific reviewers, I set an aggressive timeline to scale AI agency-wide by June 30,” Makary said on Monday.

“Today’s rollout of Elsa is ahead of schedule and under budget, thanks to the collaboration of our in-house experts across the centers.”

Health Secretary Robert F. Kennedy Jr., who leads the Department of Health and Human Services (HHS), the FDA’s parent agency, cheered the development, calling it “a revolution in public health” in a post on social media platform X.

The FDA recently fired thousands of employees. HHS officials had said they would cut about 3,500 full-time workers but ended up terminating about 2,500 workers, according to a Senate Democrat report.

Makary told a congressional panel during a recent appearance that no scientific reviewers were fired, although some research scientists were among those terminated.

President Donald Trump has promoted the use of AI, saying in one order that “with the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans.”

The first report from Trump’s Make America Healthy Again (MAHA) Commission, headed by Kennedy, contained markers of AI, including nonexistent studies and multiple instances of “oaicite,” which developers say is inserted by OpenAI’s ChatGPT AI tool.

“The pattern is consistent with other cases we’ve seen of using generative AI to create citations,” Dr. Ivan Oransky, co-founder of Retraction Watch and executive director of The Center for Scientific Integrity, told The Epoch Times in an email.

“The fact that these fake citations made it into the report is evidence that no one checked the report for rigor. That should concern anyone who reads it or considers relying on it.”


Palantir Co-Founder Joe Lonsdale & Former Exec Refute NYT Report Warning Over Surveillance ‘Master List’

Palantir co-founder Joe Lonsdale and former executive Wendy Anderson have hit back against a New York Times report warning that the company is laying the groundwork for government surveillance on steroids through a massive database that would coordinate the private information of US citizens across federal agencies.

“Palantir’s not a ‘database’; it’s a platform created by 1000s of the most talented and patriotic Americans to partner with our DoD to stop attacks and defeat bad guys, while protecting liberty & privacy,” Lonsdale posted on X in response to the account “Retard Finder,” which said, “The Palantir database idea is retarded.”

“There are hundreds of similar types of software and efforts in the USA throughout the west; what’s unique about Palantir is that it’s BY FAR the best at stopping bad guys,” Lonsdale continued.

When asked by a self-described Palantir shareholder whether he’d “personally be comfortable with your personal data being stored in this database if AOC or Ilhan Omar were President,” Lonsdale replied: 

“given the government does operate on sensitive data: I 100% prefer PLTR to be there if sketchy people are in charge, as it has full access rules and audit trails; others don’t.”


Biometric Surveillance Expands: American Airlines Rolls Out Facial Recognition at Four Major Airports

American Airlines has begun using facial recognition to verify passenger identities at airport security, further embedding biometric technology into the air travel experience. The airline’s new Touchless ID program, now live at several major airports, allows select travelers to move through TSA PreCheck without showing ID or boarding passes.

As of May 29, travelers passing through Ronald Reagan Washington National, LaGuardia, Hartsfield-Jackson Atlanta, and Salt Lake City International can now confirm who they are simply by standing in front of a camera. That image is instantly compared against official federal photo databases such as passports or Global Entry records. If there’s a match, the traveler proceeds; no physical documents required.

This identity-verification option is available only to American Airlines AAdvantage members who are 18 or older, have a valid passport, and have an active TSA PreCheck membership with a Known Traveler Number. Users can enroll through the airline’s website or app, and participation lasts for a year, with the freedom to opt out and revert to standard ID screening at any time.

The integration of facial recognition at TSA checkpoints may seem like a convenience upgrade, but it introduces concrete privacy risks that go far beyond the airport.


AI recruiting is all the rage — as employers hand the screening of new hires over to robots: ‘Seemed insane’

It’s the rise of the robo-recruiters.

Employers are turning to artificial intelligence to screen potential new human hires.

AI recruiting software is increasingly subbing in for actual people during preliminary interviews — with a virtual interviewer quizzing candidates about their skills before delivering its findings to managers.

“A year ago this idea seemed insane,” Arsham Ghahramani, co-founder and chief executive officer of Toronto-based AI recruiting startup Ribbon, told Bloomberg. “Now it’s quite normalized.”

Companies say the goal is to ultimately make the interview process more efficient and accessible for candidates — without needing human recruiters to be online all day.

For employers, particularly those hiring at high volume, the switch can save hundreds of hours of manpower per week.


Japanese Scientists Develop Artificial Blood Compatible With All Blood Types

A critical component of healthcare, blood transfusions play a vital role in saving lives around the globe every day. Maintaining an adequate blood supply, though, is no easy task, particularly in low- and middle-income countries (LMICs). The demand for O-negative blood — the universal donor type — often exceeds supply, and donations have a limited shelf life. Looking to address the issue is a group of Japanese scientists led by Hiromi Sakai at Nara Medical University. They’ve developed a new type of artificial blood that can be used in patients of any blood type.

The artificial blood is created by extracting hemoglobin — a protein containing iron that facilitates the transportation of oxygen in red blood cells — from expired donor blood. It is then encased in a protective shell to create stable, virus-free artificial red blood cells. As these artificial cells have no blood type, there is no need for compatibility testing. The synthetic blood can reportedly be stored for up to two years at room temperature and five years under refrigeration. That is a significant improvement over donated red blood cells, which can only be stored under refrigeration for a maximum of 42 days.  


NAACP Accuses Musk Of Endangering Black Communities With Supercomputer Fumes

The NAACP is demanding Memphis officials shut down Elon Musk’s xAI supercomputer facility, claiming the world’s largest AI training center violates clean air laws and threatens the health of nearby black residents.

The civil rights organization sent a letter Thursday to the Shelby County Health Department and Memphis Light, Gas and Water officials, alleging xAI has operated up to 35 gas turbines without proper permits for over a year at its Colossus facility in South Memphis. The turbines power the supercomputer that trains Musk’s Grok AI chatbot, which they claim emits excessive hazardous pollutants.

“We are urging you again to ensure that xAI stops operating its unpermitted turbines in violations of clean air and open meeting act laws and to order xAI to pay penalties for operating in violation of the law,” the letter states. “The message that [Shelby County Health Department] and [Memphis Light, Gas and Water] have sent to the community is that billionaires matter more than the tax payers and residents who live there.”

Musk has said the data center — called “Colossus” — will be the first gigawatt-scale AI training facility in the world. It powers Grok, the chatbot xAI is positioning to compete with OpenAI’s ChatGPT and others.

But the NAACP says the operation is “illegal,” citing emissions estimates of up to 2,000 tons of nitrogen oxides per year from the turbines and invoking the Clean Air Act’s “New Source Review” rule. They argue xAI’s decision to split the turbines into smaller groups is a deliberate strategy to “sidestep the law.”


Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly committed suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text including an AI-generated nude photo depicting himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public nude or sexual images of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed, revealing the organization has received more than 500,000 reports of sextortion against minors in just the last year.

Since 2021, at least 20 young people have committed suicide as a result of becoming victims of sextortion scams, according to the FBI.


Kiev attacked military airfields in five Russian regions – MOD

Military airfields across Russia have been attacked in a series of kamikaze drone strikes, the country’s Defense Ministry reported on Sunday, blaming the incidents on Kiev. Most of the strikes were successfully repelled, with some resulting in material damage, it added.

Airfields were targeted in the Murmansk Region in the country’s north, in Ivanovo and Ryazan regions in central Russia, as well as in Irkutsk Region in Siberia and Amur Region in the Far East, the ministry said. All the attacks employed first-person view (FPV) kamikaze drones, with some of them being launched from territories in close proximity to the airfields, it stated.

Some of the culprits behind the attacks have been detained, the ministry said, without revealing the number of those arrested or their identities. The Russian military also said that the “Kiev regime” was ultimately responsible for the strikes, which it described as “terrorist attacks.”

In Ivanovo, Ryazan, and Amur regions, the attacks were repelled and resulted in no damage or casualties, according to the ministry. In Murmansk and Irkutsk regions, the strikes led to some aircraft catching fire, the military said. No casualties have been reported in any of the incidents, according to the ministry’s data.


No Mark Zuckerberg, AI ‘Friends’ are not Good for Mental Health

Think you could use a few more friends? Meta CEO Mark Zuckerberg says AI will do the trick. In a recent interview with podcaster Dwarkesh Patel, the Silicon Valley titan said the average American has fewer than three friends but a desire to have “something like fifteen.”

Zuckerberg thinks computer code will fill the gap: “The average person wants more connectivity, connection than they have,” he observed. “As the personalization loop kicks in, and the AI just starts to get to know you better, I think that will be just really compelling.” 

It’s interesting advice from a guy who heads up two of the largest platforms on the planet for bringing people together.

It’s also an admission from Zuckerberg that chatting with real people isn’t cutting it anymore.

His solution? More technology, not less. Meta has made billions of dollars monetizing our attention. Why not monetize our loneliness, too?  

Turns out it’s a bad time to tell us to make AI friends when we’re already struggling to navigate our digital lives. In 2023, US Surgeon General Vivek Murthy warned of an epidemic of loneliness and isolation.

“One in two adults in America is living with measurable levels of loneliness,” Murthy reported, “but the numbers are even higher among young people.” He pointed to social media and online interactions as a driving factor. 

And we’re not just lonely. Rates of depression and anxiety are on the rise, too, again particularly in our youth.

According to Centers for Disease Control and Prevention data published last month, the prevalence of depression in people age 12 and older has risen by roughly 60% in a decade, jumping from 8.2% between 2013 and 2014 to 13.1% between 2021 and 2023.

Of course, Zuckerberg knew his products were negatively impacting young people years ago.

In 2021, The Wall Street Journal revealed that Facebook, which owns Instagram, had internal evidence showing Instagram use is linked with poorer mental health, particularly among young women.

Facebook buried its findings and failed to address the problem.

Zuckerberg doesn’t seem to understand that the struggle is real for millions of Americans who are finding it anything but easy to manage their well-being around constant online stimulation: “People are smart. They know what’s valuable in their lives,” Zuckerberg told Patel. “I think people have a good sense of what they want.”


T-Mobile Faces Backlash for Auto-Enabled Screen Recording in T-Life App Without User Consent

T-Mobile is facing renewed backlash over its T-Life app, this time for quietly introducing a screen recording feature that is automatically activated on some devices. The tool, labeled “Screen recording tool,” has been discovered by users in the app’s settings, prompting immediate concerns about transparency and user consent.

Described as a means of gathering behavioral data to help enhance the app’s functionality, the tool is being deployed without upfront notification in many cases.

Though T-Mobile insists it does not collect personal data and only monitors activity within the app itself, the feature’s default-on status has unsettled many customers. The company told CNET, “This tool records activities within the app only and does not see or access any personal information,” and noted that users can deactivate it under the Preferences section.

While this type of telemetry is not uncommon in the tech world, the method of deployment here has caught attention. Unlike the app’s pre-existing Screen Share function, which lets support reps view a user’s screen during troubleshooting only with explicit approval, the newly introduced screen recorder operates passively in the background.
