‘Trump Flipped On Us’: MAGA Reacts to Potential National Citizen Database

Supporters of President Donald Trump expressed anger and disbelief online following reports that his administration had advanced plans to create a national citizen database with technology firm Palantir.

Newsweek reached out to Palantir for comment.

Why It Matters

The White House has contracted Palantir, a Colorado-based analytics company co-founded by Trump supporter Peter Thiel, to assist in compiling a database of personal information on American citizens, according to unnamed government officials and Palantir employees who spoke with The New York Times. The purported deal follows Palantir's project talks with the Social Security Administration (SSA) and the Department of Education.

The reaction from Trump’s supporters reflected growing unease within conservative circles, indicating a rare rupture between the president and key segments of his constituency. The controversy underscores nationwide anxieties around privacy, civil liberties, and the growing influence of technology firms over personal information management.

What To Know

The Palantir deal marks a significant development in government data collection, drawing sharp concern from privacy advocates and Trump’s own core base, otherwise known as “MAGA.” Detractors compared the centralized database effort to surveillance initiatives in authoritarian regimes.

Numerous pro-Trump voices expressed dismay and feelings of betrayal across social media platforms like X.

“People are so quick to suggest that I flipped on Trump…No, no, no…I didn’t flip on Trump. TRUMP FLIPPED ON US. I’m just not willing to continue living in a LIE, and I will tell you the unfortunate TRUTH about it,” The Patriot Voice wrote on X to his 158,000 followers.

Keep reading

Trump’s ‘Big Beautiful Bill’ Locks Down AI for a Decade – Welcome to the Golden Age of AI Tyranny

President Donald J. Trump’s One Big Beautiful Bill Act, specifically Section 43201, imposes a 10-year moratorium on state-level AI regulations.

This move, part of H.R.1, raises concerns about a future dominated by unchecked AI power.

The bill allocates $500 million to modernize federal IT systems with AI, but it also bars states from enforcing AI laws. Now, critics warn of an “AI tyranny” era.

Section 43201 of the One Big Beautiful Bill Act outlines a significant shift in AI governance. Trump actively pushed for the bill, which the House has passed. The section states:

“Except as provided in paragraph (2), no State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.”

This moratorium aims to prevent state interference in AI development. However, it allows exceptions for laws that facilitate AI deployment or impose federal requirements.

This section therefore prioritizes federal control over AI. The bill also funds AI modernization within the Department of Commerce, allocating $500 million that remains available through 2034. This dual approach has sparked debate over innovation versus regulation.

Keep reading

Will Human Error Hand AI The Key To Our Destruction?

By now, the apocalyptic whispers that once belonged solely to science fiction are starting to sound more like realistic forecasts. Artificial intelligence, once hailed as the great liberator of human productivity and ingenuity, is now moonlighting as a con artist, data thief, and spy.

The machines are rising, yes—but they’re not doing it alone. As we embrace AI with reckless abandon, it’s not the code that’s dooming us. It’s the carbon-based lifeforms behind the keyboard making forehead-slapping mistakes. If civilization does collapse under the weight of digital warfare, it’ll be a joint project between rogue AI and good old-fashioned human idiocy.

Let’s talk about the Rise of the Machines, 2025 edition—not in the form of Terminators with glowing eyes, but as lines of sophisticated code hell-bent on manipulation, infiltration, and destruction. Whether we are willing to accept it or not, AI-powered cyberattacks are becoming disturbingly common and alarmingly sophisticated.

We’re seeing the proliferation of deepfake scams, hyper-personalized phishing attacks, and AI-assisted password cracking that make traditional defenses look as flimsy as a paper umbrella in a hurricane.

Take the case of deepfake fraud, where criminals now impersonate CEOs and executives with astonishing accuracy. These aren’t your cousin’s sloppy Photoshop jobs. These are full-motion, pitch-perfect, AI-generated replicas of real people, used in schemes to authorize fraudulent wire transfers, manipulate employees, or simply throw entire organizations into chaos. It’s not just unsettling. It’s an outright weaponization of trust—an erosion of reality itself.

And don’t forget AI-generated phishing emails. These aren’t the hilariously broken English scams from 2006. AI now writes flawless prose, mirroring the tone and style of your boss, your bank, or your kid’s school, tricking you into clicking that one wrong link that detonates ransomware across your organization like a digital IED. The machines aren’t playing chess anymore—they’re playing you.

But even as AI’s capabilities soar into dystopian territory, the greatest cybersecurity threat isn’t machine intelligence. It’s human incompetence. You could hand someone the most secure system in the world, and they’ll still manage to set it on fire with a reused password or a click on an “urgent invoice” from a Nigerian prince.

A report by NinjaOne drives this point home with a sledgehammer: nearly 95% of cybersecurity breaches are caused by human error. Think about that. Not Skynet, not Chinese cyber commandos or North Korean hackers in basements—but Steve in Accounting, who uses “123456” as his password and clicks on pop-ups promising free iPhones.

The attack vectors are depressingly mundane: downloading unsafe software, failing to update systems, weak passwords, falling for phishing scams, and misconfigured security settings.

Keep reading

FDA Launches New AI Tool

The Food and Drug Administration on June 2 launched a new artificial intelligence (AI) tool.

FDA officials said Elsa, the tool, will help employees “work more efficiently.”

The agency is utilizing Elsa to speed up clinical protocol reviews and scientific evaluations, as well as to identify targets for inspections.

FDA officials described Elsa as a “large language model–powered AI tool designed to assist with reading, writing, and summarizing.” They said it can summarize adverse events to help with safety profile assessments, compare labels faster than humans, and generate code to help develop databases.

“Today marks the dawn of the AI era at the FDA with the release of Elsa, AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee,” FDA Chief AI Officer Jeremy Walsh said in a statement.

AI refers to computer systems that perform complex tasks typically handled by humans.

Dr. Marty Makary, the FDA’s commissioner, said in May that the FDA would immediately start using AI and fully integrate it by the end of June.

“Following a very successful pilot program with FDA’s scientific reviewers, I set an aggressive timeline to scale AI agency-wide by June 30,” Makary said on Monday.

“Today’s rollout of Elsa is ahead of schedule and under budget, thanks to the collaboration of our in-house experts across the centers.”

Health Secretary Robert F. Kennedy Jr., who leads the Department of Health and Human Services (HHS), the FDA’s parent agency, cheered the development, calling it “a revolution in public health” in a post on social media platform X.

The FDA recently fired thousands of employees. HHS officials had said they would cut about 3,500 full-time workers but ended up terminating about 2,500 workers, according to a Senate Democrat report.

Makary told a congressional panel during a recent appearance that no scientific reviewers were fired, although some research scientists were among those terminated.

President Donald Trump has promoted the use of AI, saying in one order that “with the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans.”

The first report from Trump’s Make America Healthy Again (MAHA) Commission, headed by Kennedy, contained markers of AI, including nonexistent studies and multiple instances of “oaicite,” which developers say is inserted by OpenAI’s ChatGPT AI tool.

“The pattern is consistent with other cases we’ve seen of using generative AI to create citations,” Dr. Ivan Oransky, co-founder of Retraction Watch and executive director of The Center for Scientific Integrity, told The Epoch Times in an email.

“The fact that these fake citations made it into the report are evidence that no one checked the report for rigor. That should concern anyone who reads it or considers relying on it.”
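Because “oaicite” strings are plain-text artifacts, they are trivial to surface with a simple search. Below is a minimal, illustrative sketch in Python; the file name and the exact pattern are assumptions for demonstration, not a description of any reviewer’s actual workflow:

```python
import re

# Hypothetical example: scan a saved text copy of a report for "oaicite"
# markers, the artifact developers say ChatGPT inserts around fabricated
# citations. The file name "maha_report.txt" is a placeholder.
PATTERN = re.compile(r"oaicite\S*")

with open("maha_report.txt", encoding="utf-8") as f:
    for line_number, line in enumerate(f, start=1):
        for match in PATTERN.finditer(line):
            print(f"line {line_number}: {match.group()!r}")
```

Any hit warrants checking the surrounding citation against the cited study itself, since the marker signals machine-generated sourcing rather than proof that a particular reference is fake.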

Keep reading

Palantir Co-Founder Joe Lonsdale & Former Exec Refute NYT Report Warning Over Surveillance ‘Master List’

Palantir co-founder Joe Lonsdale and former executive Wendy Anderson have hit back against a NY Times report warning that the company is laying the groundwork for government surveillance on steroids through a massive database that would coordinate the private information of US citizens across federal agencies. 

“Palantir’s not a ‘database’; it’s a platform created by 1000s of the most talented and patriotic Americans to partner with our DoD to stop attacks and defeat bad guys, while protecting liberty & privacy,” Lonsdale posted on X in response to the account “Retard Finder,” which said “The Palantir database idea is retarded.”

“There are hundreds of similar types of software and efforts in the USA throughout the west; what’s unique about Palantir is that it’s BY FAR the best at stopping bad guys,” Lonsdale continued.

When asked by a self-described Palantir shareholder whether he’d “personally be comfortable with your personal data being stored in this database if AOC or Ilhan Omar were President,” Lonsdale replied: 

“given the government does operate on sensitive data: I 100% prefer PLTR to be there if sketchy people are in charge, as it has full access rules and audit trails; others don’t.”

Keep reading

AI recruiting is all the rage — as employers hand the screening of new hires over to robots: ‘Seemed insane’

It’s the rise of the robo-recruiters.

Employers are turning to artificial intelligence to screen potential new human hires.

AI recruiting software is increasingly subbing in for actual people during preliminary interviews — with a fake person quizzing candidates about their skills before delivering its findings to managers.

“A year ago this idea seemed insane,” Arsham Ghahramani, co-founder and chief executive officer of Toronto-based AI recruiting startup Ribbon, told Bloomberg. “Now it’s quite normalized.”

Companies say the goal is to ultimately make the interview process more efficient and accessible for candidates — without needing human recruiters to be online all day.

For employers, particularly those hiring at high volume, the switch can save hundreds of hours of manpower per week.

Keep reading

NAACP Accuses Musk Of Endangering Black Communities With Supercomputer Fumes

The NAACP is demanding Memphis officials shut down Elon Musk’s xAI supercomputer facility, claiming the world’s largest AI training center violates clean air laws and threatens the health of nearby Black residents.

The civil rights organization sent a letter Thursday to the Shelby County Health Department and Memphis Light, Gas and Water officials, alleging xAI has operated up to 35 gas turbines without proper permits for over a year at its Colossus facility in South Memphis. The turbines power the supercomputer that trains Musk’s Grok AI chatbot, which they claim emits excessive hazardous pollutants.

“We are urging you again to ensure that xAI stops operating its unpermitted turbines in violations of clean air and open meeting act laws and to order xAI to pay penalties for operating in violation of the law,” the letter states. “The message that [Shelby County Health Department] and [Memphis Light, Gas and Water] have sent to the community is that billionaires matter more than the tax payers and residents who live there.”

Musk has said the data center — called “Colossus” — will be the first gigawatt-scale AI training facility in the world. It powers Grok, the chatbot xAI is positioning to compete with OpenAI’s ChatGPT and others.

But the NAACP says the operation is “illegal,” citing emissions estimates of up to 2,000 tons of nitrogen oxide per year from the turbines and invoking the Clean Air Act’s “New Source Review” rule. They argue xAI’s decision to split the turbines into smaller groups is a deliberate strategy to “sidestep the law.”

Keep reading

Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly died by suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text including an AI-generated nude photo depicting himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public a nude or sexual images of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed, with the organization receiving more than 500,000 reports of sextortion against minors in the last year alone.

Since 2021, at least 20 young people have died by suicide after becoming victims of sextortion scams, according to the FBI.

Keep reading

No Mark Zuckerberg, AI ‘Friends’ are not Good for Mental Health

Think you could use a few more friends? Meta CEO Mark Zuckerberg says AI will do the trick. In a recent interview with podcaster Dwarkesh Patel, the Silicon Valley titan said the average American has fewer than three friends but a desire to have “something like fifteen.”

Zuckerberg thinks computer code will fill the gap: “The average person wants more connectivity, connection than they have,” he observed. “As the personalization loop kicks in, and the AI just starts to get to know you better, I think that will be just really compelling.” 

It’s interesting advice from a guy who heads up two of the largest platforms on the planet for bringing people together.

It’s also an admission from Zuckerberg that chatting with real people isn’t cutting it anymore.

His solution? More technology, not less. Meta has made billions of dollars monetizing our attention. Why not monetize our loneliness, too?  

Turns out it’s a bad time to tell us to make AI friends when we’re already struggling to navigate our digital lives. In 2023, US Surgeon General Vivek Murthy warned of an epidemic of loneliness and isolation.

“One in two adults in America is living with measurable levels of loneliness,” Murthy reported, “but the numbers are even higher among young people.” He pointed to social media and online interactions as a driving factor. 

And we’re not just lonely. Rates of depression and anxiety are on the rise, too, again particularly in our youth.

According to Centers for Disease Control and Prevention data published last month, the prevalence of depression among people aged 12 and older has jumped by roughly 60% in a decade, from 8.2% in 2013–2014 to 13.1% in 2021–2023.

Of course, Zuckerberg knew his products were negatively impacting young people years ago.

In 2021, The Wall Street Journal revealed that Facebook, which owns Instagram, had internal evidence showing Instagram use is linked with poorer mental health, particularly among young women.

Facebook buried its findings and failed to address the problem.

Zuckerberg doesn’t seem to understand that the struggle is real for millions of Americans who are finding it anything but easy to manage their well-being around constant online stimulation: “People are smart. They know what’s valuable in their lives,” Zuckerberg told Patel. “I think people have a good sense of what they want.”

Keep reading

Trump’s $795M Data Power Play Sends Palantir Soaring 140% – But Here’s the Hidden Risk

Palantir (NASDAQ:PLTR) is riding a wave of government contracts as the Trump administration ramps up efforts to centralize and analyze federal data. Since Trump signed an executive order in March calling for more interagency data sharing, Palantir has quietly become the go-to vendor for building that digital infrastructure. The company has landed more than $113 million in new and extended federal contracts since Trump took office, in addition to a blockbuster $795 million deal with the Pentagon last week. Palantir’s Foundry platform is already in use at Homeland Security and Health and Human Services, and engineers were recently embedded at the IRS to begin building a unified, searchable database for taxpayer records. Talks are also underway with the Social Security Administration and the Department of Education, suggesting more agencies could follow.

Investor enthusiasm hasn’t lagged. Since Trump’s re-election, Palantir shares have surged more than 140%, fueled by the prospect that the company may now become the digital backbone of the U.S. federal government. The Department of Government Efficiency (DOGE), a Musk-led initiative, has been instrumental in Palantir’s rise, with several DOGE members having ties to Palantir or Peter Thiel-backed ventures. The company’s tools are now being used to connect data points ranging from immigration status and bank accounts to student loans and disability claims. In April, Immigration and Customs Enforcement (ICE) awarded Palantir a $30 million contract to track migrant movements in real time, another sign of how fast the government is scaling its use of Foundry.

Keep reading