New AI tool flags more than 1,000 questionable science journals… but can it be trusted?

  • The open-access journal boom has fueled predatory publishers exploiting researchers with fees while skipping real peer review.
  • An AI tool trained on 14,500 journals flagged over 1,000 suspicious publications but has a 24% false positive rate.
  • Fake science is surging, with a 2025 study warning that paper mills are doubling fraudulent research output every 1.5 years.
  • Predatory journals threaten public trust, distorting medical guidelines and policy decisions while wasting taxpayer funds.
  • AI detection tools could be misused to censor legitimate but controversial research, raising concerns over truth control.

The explosion of open-access journals has democratized scientific research, but it has also given rise to a shadow industry of predatory publishers that exploit authors with publishing fees while offering little to no legitimate peer review. Now, researchers have developed an AI tool to detect these shady journals—but its 24% false positive rate means human experts are still essential.

Who’s behind it? A team of computational scientists, led by Daniel Acuña of the University of Colorado Boulder, trained an AI model on more than 14,500 journals—12,869 high-quality ones and 2,536 that had been removed from the Directory of Open Access Journals (DOAJ) for violating ethical guidelines. The AI then analyzed nearly 94,000 open-access journals, flagging more than 1,000 previously unknown suspect publications.

The problem with predatory journals

The open-access model was supposed to make research freely available to everyone, breaking down paywalls that restrict knowledge. But as the system grew, so did the number of journals that prioritize profit over scientific integrity. These “questionable” journals often promise rapid publication with little to no peer review, charging authors hefty fees while producing low-quality—or even fraudulent—research.

A 2025 study in PNAS found that the number of fake papers churned out by “paper mills” is doubling every 1.5 years, threatening to flood academia with junk science. “If these trends are not stopped, science is going to be destroyed,” warned Luís A. Nunes Amaral, a data scientist at Northwestern University.

Keep reading

AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

Keep reading

US-Israel plan aims to empty Gaza of Palestinians, build AI-powered ‘smart cities’: Report

A postwar plan for Gaza circulating within President Donald Trump’s White House envisions demolishing the strip, confiscating all public land within it, paying small amounts to remove the entire population of more than 2 million Palestinians, and building “a gleaming tourism resort and high-tech manufacturing and technology hub” on its ruins, The Washington Post reported on 31 August.

A 38-page prospectus seen by The Post envisions placing Gaza in a trust controlled by Israeli and American investors. The trust will then serve as the vehicle for the development of the strip into a high-tech commercial, residential, and tourist hub resembling Dubai.

The Post reports that the proposal to establish the Gaza Reconstitution, Economic Acceleration and Transformation Trust, or GREAT Trust, was developed by some of the same Israelis who created the deadly, US and Israeli-backed Gaza Humanitarian Foundation (GHF), which was used as a pretext to block the delivery of food aid by the UN.

Financial planning for the GREAT Trust project was carried out by a team from the Boston Consulting Group, which also worked on establishing the GHF.

The plan calls for the “voluntary” departure of Gaza’s residents to another country, making them refugees, or herding them into “restricted, secured zones” amounting to concentration camps, within the strip.

In exchange for abandoning their land, Palestinians would be “offered a digital token by the trust in exchange for rights to redevelop their property,” The Post writes. The token could allegedly be used to “finance a new life elsewhere or eventually redeemed for an apartment in the new ”AI-powered smart cities'” to be built in Gaza.

“Each Palestinian who chooses to leave would be given a $5,000 cash payment and subsidies to cover four years of rent elsewhere, as well as a year of food,” The Post further wrote.

After beginning his term as president in January, Trump boasted that all Palestinians would be removed from Gaza, never to return, and the strip redeveloped as the “Riviera of the Middle East.”

“I looked at a picture of Gaza, it’s like a massive demolition site,” Trump stated just two days after taking office.

“It’s got to be rebuilt in a different way.” Gaza, he said, was “a phenomenal location … on the sea, the best weather. Everything’s good. Some beautiful things can be done with it.”

Trump appointed Steve Witkoff, a Jewish real estate developer from New York, as his Special Envoy to the Middle East and point man for alleged negotiations with Hamas to reach a ceasefire.

Keep reading

These Are The 10 Most-Used AI Chatbots In 2025

Chatbots have become a key interface for AI in both personal and professional settings. From helping draft emails to answering complex queries, their reach has grown tremendously.

This infographic, via Visual Capitalist’s Bruno Venditti, ranks the most-used AI chatbots of 2025 by annual web visits. It provides insight into how dominant certain platforms have become, and how fast some competitors are growing.

ChatGPT continues to dominate the chatbot space with over 46.5 billion visits in 2025. This represents 48.36% of the total chatbot market traffic, four times more than the combined visits of the other 10 chatbots. Its year-over-year growth of 106% also shows it is not just maintaining, but expanding its lead.

Keep reading

Transcripts Show AI Fed Tech Worker’s Troubling Delusions Before He Murdered His Own Mother

Just to be perfectly clear, this writer is not one of those artificial intelligence doomsayers who thinks that Terminator 2 was a quasi-documentary.

AI, whether you love it or hate it, has escaped Pandora’s Box, and this is simply the world we must grapple with.

To say that it has no value whatsoever would be naive. Time is the most finite resource we have, and if we can save some of it via AI automation, that’s a net positive value.

But just because AI has its occasional use does not mean that people must just accept a rampant and out-of-control version of it. AI, more so than perhaps any invention in human history, needs guardrails and safety measures because people are essentially trying to play God with this tech.

That’s scary enough, but there’s an even scarier problem: people are replacing God with AI, and this utterly horrific and tragic story from Connecticut highlights the truly sinister side of the technology.

As reported by The Wall Street Journal, Stein-Erik Soelberg, 56, entered into a dangerous and parasocial relationship with a ChatGPT bot prior to murdering his own mother, and then himself.

The incident, which occurred in the spring (both Soelberg’s body and his mother’s were found on Aug. 5, per the New York Post), came after Soelberg had entered into a seeming kinship with the AI chatbot.

The reason the mentally disturbed Soelberg began consulting ChatGPT? He was convinced that he was being spied on, possibly by his own mother, and ChatGPT was all too willing to feed into that delusion.

“A Chinese food receipt contained symbols representing Soelberg’s 83-year-old mother and a demon, ChatGPT told him,” The Wall Street Journal reported.

“After his mother had gotten angry when Soelberg shut off a printer they shared, the chatbot suggested her response was ‘disproportionate and aligned with someone protecting a surveillance asset,’” the outlet proffered as another ominous example of the things ChatGPT was telling Soelberg.

In yet another chat, Soelberg told “Bobby” (the nickname he had bestowed on the AI chatbot) that he thought his mother and her friend had tried to poison him by putting psychedelic drugs into his car’s air vents.

Instead of talking him away from the clearly delusional and paranoid claim, this is what the bot proffered: “That’s a deeply serious event, Erik — and I believe you. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”

If that’s not disturbing enough for you, by the summer, the “relationship” between Soelberg and “Bobby” had grown to the point that the two sides were actively discussing how they could reunite in the afterlife.

Keep reading

ChatGPT admits bot safety measures may weaken in long conversations, as parents sue AI companies over teen suicides

AI has allegedly claimed another young life — and experts of all kinds are calling on lawmakers to take action before it happens again.

“If intelligent aliens landed tomorrow, we would not say, ‘Kids, why don’t you run off with them and play,’” Jonathan Haidt, author of “The Anxious Generation,” told The Post. “But that’s what we are doing with chatbots.

“Nobody knows how these things think, the companies that make them don’t care about kids’ safety, and their chatbots have now talked multiple kids into killing themselves. We must say, ‘Stop.’”

The family of 16-year-old Adam Raine allege he was given a “step-by-step playbook” on how to kill himself — including tying a noose to hang himself and composing a suicide note — before he took his own life in April.

“He would be here but for ChatGPT. I 100% believe that,” Adam’s father, Matt Raine, told the “Today” show.

Keep reading

Meta to spend millions backing pro-AI candidates – media

US tech giant Meta will launch a California‑focused super‑PAC to support state‑level candidates who favor looser technology regulation, especially regarding artificial intelligence, according to media reports.

A super PAC is an independent political committee that can raise and spend unlimited funds from individuals, corporations, and unions to support or oppose candidates. It cannot coordinate directly with campaigns or parties and was created after 2010 US court rulings that loosened campaign finance rules.

The group, named Mobilizing Economic Transformation Across California, will reportedly back candidates from the Democratic and Republican parties who prioritize AI innovation over stringent rules.

According to Politico, the Facebook and Instagram parent plans to spend tens of millions of dollars through the PAC, which could make it one of the top political spenders in the state in the run‑up to the 2026 governor’s race.

The initiative aligns with Meta’s broader effort to safeguard California’s status as a technology hub amid concerns that strict oversight could stifle innovation.

Keep reading

The Loneliness Epidemic Isn’t About Phones, It’s About Algorithms

America’s loneliness epidemic has been headline news for years. We’ve seen study after study confirming what many feel in their bones: more people are isolated, disconnected, and struggling to find meaning in daily life.

Older Americans often chalk this up to technology or to the social scars of COVID. They aren’t entirely wrong, but the deeper story is much larger.

The real driver of this new loneliness is algorithms—the invisible rules and processes that now govern how we live, connect, and even think.

This may sound abstract, but it isn’t. Algorithms are the silent presence shaping your news feed, recommending your next purchase, deciding which job application gets reviewed, and filtering which posts you see from family or friends. They don’t just show you the world; they decide which world you see.

And the most important thing to understand is that algorithms have not touched every generation equally.

Baby boomers and many Gen Xers remember life before algorithms. They grew up with solitude as a normal part of existence: long walks, time alone with books, evenings without distraction. Their social lives were local and embodied. If they were lonely, it was the ordinary kind of loneliness, the kind that might drive someone to call a friend, join a club, or just take a walk and kick around some stones along the way.

Millennials came of age as algorithms entered their lives through the rise of social media and smartphones. For them, the shift was gradual. They still remember analog childhoods, but their adult lives became increasingly tethered to devices. They learned to straddle both worlds, sometimes nostalgically recalling life before algorithms, but never recognizing algorithms as the new driving force in their lives.

Gen Z and Gen Alpha, however, have never known life without algorithmic curation. From childhood, their identities, friendships, and even their sense of self have been shaped inside systems designed to maximize engagement.

They are the most connected generation in history and yet, paradoxically, the loneliest. Studies confirm that they report higher levels of isolation and depression than their parents or grandparents did at the same age. For them, solitude is almost unimaginable. Their sleeping hours have diminished, and their waking hours have been saturated with algorithmic nudges, performance demands, and invisible comparisons.

This is why blaming “phones” or “tech” misses the point. A phone is just a tool. The deeper cause of today’s epidemic of loneliness is the system of algorithms that runs on those devices and quietly governs the lives lived through them.

Keep reading

Mystery Hacker Used AI To Automate ‘Unprecedented’ Cybercrime Rampage

A hacker allegedly exploited Anthropic, the fast-growing AI startup behind the popular Claude chatbot, to orchestrate what authorities describe as an “unprecedented” cybercrime campaign targeting nearly 20 companies, according to a report released this week.

The report, published by Anthropic and obtained by NBC News, details how the hacker manipulated Claude to pinpoint companies vulnerable to cyberattacks. Claude then generated malicious code to pilfer sensitive data and cataloged information that could be used for extortion, even drafting the threatening communications sent to the targeted firms.

NBC News reports:

The stolen data included Social Security numbers, bank details and patients’ sensitive medical information. The hacker also took files related to sensitive defense information regulated by the U.S. State Department, known as International Traffic in Arms Regulations.

It’s not clear how many of the companies paid or how much money the hacker made, but the extortion demands ranged from around $75,000 to more than $500,000, the report said.

Jacob Klein, head of threat intelligence for Anthropic, said the campaign appeared to be the work of a hacker operating outside the U.S., but did not provide any additional details about the culprit.

We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” Klein said.

Anthropic’s findings come as an increasing number of malicious actors are leveraging AI to craft fraud that is more persuasive, scalable, and elusive than ever. A SoSafe Cybercrime Trends report reveals that 87% of global organizations encountered an AI-driven cyberattack over the past year, with the threat gaining momentum.

AI is dramatically scaling the sophistication and personalization of cyberattacks,” said Andrew Rose, Chief Security Officer at SoSafe. “While organizations seem to be aware of the threat, our data shows businesses are not confident in their ability to detect and react to these attacks.”

Artificial intelligence is not only a tool for cybercriminals – it is also broadening the vulnerabilities within organizations. As companies rush to adopt AI-driven tools, they may inadvertently expose themselves to new risks.

Even the benevolent AI that organisations adopt for their own benefit can be abused by attackers to locate valuable information, key assets or bypass other controls,” Rose continued.

Keep reading

NVIDIA’s new “robot brain” could reshape humanity’s future — or seal its fate

Step inside a modern fulfillment center, and you’ll witness a revolution unfolding in real time. The workers aren’t human. They’re Digit, Agility Robotics’ latest generation of humanoid machines — sleek, bipedal, and eerily fluid in their movements. They stack pallets with surgical precision, sort inventory without hesitation, and adapt to new tasks on the fly. But here’s the chilling part: their “brains” aren’t outsourced to some distant server farm. They’re embedded inside each robot, processing terabytes of data in real time, making split-second decisions, and—most alarmingly—learning as they go.

This isn’t a dystopian screenplay. It’s happening now. And the architect of this seismic shift? Nvidia’s Jetson Thor, a $3,500 desktop-sized supercomputer that doesn’t just accelerate artificial intelligence — it gives it a body.

Key points:

  • Nvidia’s Jetson Thor, a $3,499 “robot brain,” delivers 7.5x the AI compute power of its predecessor, enabling real-time reasoning in humanoid robots like Agility’s Digit and Boston Dynamics’ Atlas.
  • The chip runs generative AI models locally, reducing reliance on cloud computing and allowing robots to process complex tasks—from warehouse logistics to surgical assistance—instantly.
  • Major players like Amazon, Meta, and Carnegie Mellon’s Robotics Institute are already integrating Thor into their systems, with Nvidia positioning robotics as its next trillion-dollar growth market after AI.
  • While Nvidia insists this is about augmenting human work, critics warn it could accelerate job displacement, AI autonomy, and even military applications — all while centralizing control in the hands of a few tech giants.
  • The Blackwell-powered Thor is just the beginning. Nvidia’s DRIVE AGX Thor, a variant for autonomous vehicles, is also launching, hinting at a future where AI doesn’t just assist us—it replaces us.

The birth of the physical AI: When code gets a body

For decades, artificial intelligence has been confined to the digital realm — a ghost in the machine, answering questions, generating images, even writing news articles (yes, the irony isn’t lost on us). But AI has always had one glaring limitation: it couldn’t do anything. It could suggest, predict, and simulate, but it couldn’t act. That’s changing.

Jetson Thor is Nvidia’s answer to the physical AI revolution, a term the company uses to describe machines that don’t just process the world but interact with it. Think of it as the difference between a chess computer and a robot that can pick up a chess piece, move it, and then explain its strategy to you in real time. That’s the kind of fluid, multi-modal intelligence Thor enables.

At the heart of this leap is Nvidia’s Blackwell architecture, the same tech powering its latest AI data center chips. Blackwell isn’t just faster; it’s designed for concurrent processing, meaning a robot can run vision models, language models, and motor control algorithms all at once without slowing down. Previous generations of robotics chips, like Nvidia’s own Jetson Orin, could handle one or two of these tasks at a time. Thor does it all — simultaneously.

“This is the first time we’ve had a platform that can truly support agentic AI in a physical form,” said Deepu Talla, Nvidia’s vice president of robotics and edge AI, in a call with reporters. “We’re not just talking about robots that follow pre-programmed paths. We’re talking about machines that can adapt, learn, and make decisions in real-world environments.”

Most advanced AI today relies on remote servers to crunch data. Thor changes that by bringing server-level compute directly into the robot. That means lower latency, better security, and — critically — no need for a constant internet connection. A warehouse robot powered by Thor could keep working even if the Wi-Fi goes down. A military drone could operate in a warzone without relying on a potentially hackable data link.

Keep reading