Ukraine to help Taiwan build combat drones

A Ukrainian tech hub has signed a deal with a Taiwanese defense company to design and build combat drones, according to an announcement on Wednesday. Poland will also provide expertise.

The move was announced at the International Defense Industry Exhibition (MSPO) in Kielce, Poland, and comes amid heightened tensions between Washington and Beijing in the South China Sea.

According to Focus Taiwan, Ukraine will contribute research and development, Poland will provide expertise, and Taichung-based firm Thunder Tiger will supply technology and components. Earlier this year, Thunder Tiger unveiled an FPV drone capable of carrying an 81mm mortar as well as a new naval kamikaze drone.


Trump gives green light for $2m ICE deal with notorious Israeli spyware company

The Trump administration appears to have unfrozen a stalled $2 million Biden-era contract with Paragon Solutions (US) Inc., a spyware company founded in Israel whose products have been accused of facilitating the surveillance of journalists and activists.

On Saturday, a public procurement database showed that a stop work order on the September 2024 deal with U.S. Immigration and Customs Enforcement had been lifted, technology journalist Jack Poulson reported on his All-Source Intelligence Substack.

The contract does not specify what ICE will receive, beyond describing an agreement for a “fully configured proprietary solution including license, hardware, warranty, maintenance, and training.”

An individual who answered a phone number listed for Paragon on the contract declined to comment.


Comprehensive data privacy laws go into effect in 8 more states this year

This year, comprehensive privacy laws are going into effect in eight states to regulate how businesses handle digital information and to give consumers more protections over their personal data.

The laws in Delaware, Iowa, Minnesota, Nebraska, New Hampshire, New Jersey and Tennessee have taken effect already this year, according to a database from the International Association of Privacy Professionals’ Westin Research Center. Maryland’s privacy law, signed by Democratic Gov. Wes Moore last year, will go into effect Oct. 1.

Privacy laws enacted in Indiana, Kentucky and Rhode Island will go into effect next year.

Several other states are considering comprehensive privacy bills during this year’s legislative sessions. They include Massachusetts, Michigan, North Carolina, Pennsylvania and Wisconsin.

When a person visits a website, applies for a job or logs into an online portal, they may be sharing their personal information. Comprehensive privacy laws can apply to a wide range of companies that participate in this kind of data collection.

These laws generally include two types of provisions — those related to consumer rights and those that establish business obligations, according to the association.

Under each of the new laws, consumers have the right to control when and how their data is collected and shared. Some of those provisions include the right to delete data from a company’s database, the ability to opt out of sharing sensitive information and the right to prohibit a company from selling their data.

The new measures also require businesses to ask consumers if they want to opt in to data collection. In some states, businesses are required to complete consumer data risk assessments and identify ways in which discrimination could take place. Some companies also may be required to limit how consumer data is processed.


Electromagnetic Weapon Destroys Drone Swarm In Seconds: ‘Singularity Event’

Drones have quickly become all the rage among military leaders and Silicon Valley investors, but new weaponry could threaten the nascent technology’s swift rise.

Last Tuesday, defense contractor Epirus quietly tested its latest electromagnetic weapon, Leonidas, against a swarm of 49 quadcopters, neutralizing them in seconds at Camp Atterbury, Indiana, according to Axios, the only news outlet invited to the groundbreaking test. Numerous U.S. military services and foreign allies, including Indo-Pacific partners, witnessed the event. In an interview with Axios, Epirus CEO Andy Lowery hailed the “forcefield system” as a “singularity event.”

The test by Epirus comes as the U.S. military is aggressively advancing its drone capabilities to maintain air superiority in an era of rapidly evolving unmanned systems, spurred by lessons from conflicts like Ukraine’s use of commercial drones against Russia. The Pentagon’s recent policy shift, announced in July by Defense Secretary Pete Hegseth, reclassifies small drones (Group 1 and 2, under 55 pounds) as consumables akin to ammunition, empowering lower-level commanders to procure and deploy them swiftly, bypassing cumbersome bureaucratic processes. The move, part of Hegseth’s “Unleashing U.S. Military Drone Dominance” directive, mandates that every U.S. military squad integrate U.S.-made drones by 2026, with Indo-Pacific units prioritized.


Belarus Tests Largest Unmanned Helicopter in CIS — Sky-Truck Takes Flight

Belarus has begun flight tests of the Sky-Truck, the largest unmanned helicopter in the CIS, capable of carrying up to 600 kg of cargo over 480 kilometers.

The Minsk-based design bureau Unmanned Helicopters has launched flight trials of the Sky-Truck, a multi-purpose UAV that represents the most advanced unmanned helicopter project in the post-Soviet space. With a flight range of up to 480 km and endurance of four hours, the Sky-Truck can deliver between 500 and 600 kg of cargo to remote or hard-to-reach areas.

The drone is intended for a wide range of uses, from the urgent delivery of medical supplies and food to transportation of military equipment and other critical cargo. Its maximum speed of 180 km/h and take-off weight of over two tons make it the most powerful UAV of its kind in the region.

The Sky-Truck’s massive 12.8-meter rotor is powered by a turboshaft gas-turbine engine running on aviation kerosene. Initially designed around the Russian VK-650 engine, the UAV has also been adapted to accommodate various Rolls-Royce powerplants, giving it flexibility for both domestic and international applications.

Unmanned helicopters of this size have rarely been showcased in Russia or the wider CIS. The Sky-Truck’s development signals a new level of ambition in Belarus’s aerospace sector, combining heavy-lift capacity with unmanned versatility.


Dystopian Rollout Of Digital IDs & CBDCs Is Happening

This isn’t conspiracy; it’s all in their own documentation.

They are building a full-spectrum digital cage, and its two locked doors are Digital Identity and Central Bank Digital Currencies (CBDCs). You cannot have one without the other.

The plan is to replace your government-issued ID with a Digital ID, but it’s not just a card in your phone. It is fundamentally built upon your immutable biometrics: your fingerprints, the precise structure of your face, the unique pattern of your iris.

This biometric data is the key.

It is the hard link that ties your physical body directly to your digital identity credential.

Your very body becomes your password. The reason this is so critical for them is the financial system. UN & Bank for International Settlements docs overtly state that Digital ID and CBDCs are designed to be integrated.

The system cannot exist without this biometric digital ID.

Why?

Know Your Customer (KYC) protocols.

For this new digital financial system to function, they must absolutely “know” every single participant. Your digital wallet will be tied to your digital ID, which is mapped to your biometrics. Total financial-biological linkage.

We see the prototypes being rolled out now:

  • Sam Altman’s WorldCoin lures people to scan their irises for a “unique identifier” and a digital wallet. This is the exact model.
  • The UN’s “Building Blocks” program forces refugees to scan their iris at checkout to receive food rations. The value is deducted from a wallet tied to that biometric ID (the sketch after this list illustrates the generic pattern).
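A minimal sketch of that generic pattern, assuming a toy exact-hash scheme. Real deployments use fuzzy biometric matching, liveness checks and secure enclaves, none of which are modeled here, and every name below is invented for illustration rather than taken from Worldcoin or the UN program:

```python
# Toy sketch of a biometric-keyed wallet: an identifier derived from a
# biometric template gates access to a balance. Exact hashing stands in
# for real fuzzy biometric matching; this reflects the generic pattern
# described above, not any actual system's design.
import hashlib

wallets: dict[str, float] = {}

def enroll(biometric_template: bytes, initial_balance: float) -> str:
    """Derive a stable ID from the biometric and open a wallet under it."""
    uid = hashlib.sha256(biometric_template).hexdigest()
    wallets[uid] = initial_balance
    return uid

def debit(biometric_template: bytes, amount: float) -> bool:
    """Spend only if a fresh scan reproduces the enrolled identifier."""
    uid = hashlib.sha256(biometric_template).hexdigest()
    if wallets.get(uid, 0.0) < amount:
        return False
    wallets[uid] -= amount
    return True

uid = enroll(b"iris-template-placeholder", 100.0)
print(debit(b"iris-template-placeholder", 30.0))  # True; balance now 70.0
```

What the sketch makes concrete: there is no account number or password to change; the body itself is the credential, which is exactly the “hard link” described above.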

They justify this total surveillance under the guise of closing the “identity gap,” claiming the world’s poor need digital IDs to access essential services like banking and healthcare.

The reality?

This is the ultimate onboarding mechanism into a system of programmable control, where your access to society and your own money is permissioned and revocable based on your compliance.

This is the bedrock of the new global financial system.

It is not about convenience. It is about control.


AI & The New Frontier Of Torts: ChatGPT Faces Claims Of Suicide, Defamation, & Even Murder

“I know what you’re asking, and I won’t look away from it.”

Those final words to a California teenager about to commit suicide were not from some manipulative friend in high school or sadistic voyeur on the Internet.  Adam Raine, 16, was speaking to ChatGPT, an AI system that has replaced human contacts in fields ranging from academia to business to media.

The exchange between Raine and the AI is part of the court record in a potentially groundbreaking case against OpenAI, the company that operates ChatGPT.

It is only the latest lawsuit against the corporate giant run by billionaire Sam Altman.

In 2017, Michelle Carter was convicted of involuntary manslaughter after she urged her friend, Conrad Roy, to go through with his planned suicide:

“You need to do it, Conrad… All you have to do is turn the generator on and you will be free and happy.”

The question is whether, if Michelle were instead an AI system like Grok, there would also be some form of liability.

OpenAI stands accused of an arguably more serious act in supplying a virtual companion who effectively enabled a suicidal teen — with lethal consequences.

At issue is the liability of companies that use such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for its employee’s negligence. As AI replaces humans, these companies should be held accountable for their virtual agents.

In a response to the lawsuit, OpenAI insists that “ChatGPT is trained to direct people to seek professional help” but “there have been moments where our systems did not behave as intended in sensitive situations.” Of course, when the company “trains” an AI agent poorly and that agent does “not behave as intended,” it sounds like a conventional tort that should be subject to liability.

OpenAI is facing other potential litigation over these “poorly trained” AI agents. Writer Laura Reiley wrote an essay about how her daughter, Sophie, confided in ChatGPT before taking her own life. It sounded strikingly familiar to the Raines case: “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”

While OpenAI maintains that it is not running a suicide assistance line, victims claim that it is far worse than that: Its AI systems seem to actively assist in suicides.

In the Raine case, the family claims that the system advised the teen on how to hide from his parents the bruises left by prior attempts, and even told him whether it could spot any telltale marks.

The company is also accused of fueling the mental illness of a disturbed former Yahoo executive, Stein-Erik Soelberg, 56, who expressed paranoid obsessions about his mother. He befriended ChatGPT, which he called “Bobby,” a virtual companion who is accused of fueling his paranoia for months until he killed his mother and then himself. ChatGPT is even accused of coaching Soelberg on how to deceive his 83-year-old mother before he killed her.

In one message, ChatGPT allegedly told Soelberg, “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.” After his mother became angry over his turning off a printer, ChatGPT took his side and told him her response was “disproportionate and aligned with someone protecting a surveillance asset.” At one point, ChatGPT even helped Soelberg analyze a Chinese food receipt and claimed it contained “symbols” representing his mother and a demon.

As a company, OpenAI can show little more empathy than its AI creations. When confronted with mistakes, it can sound as responsive as HAL 9000 in “2001: A Space Odyssey,” simply saying “I’m sorry, Dave. I’m afraid I can’t do that.”  

When the system is not allegedly fueling suicides, it seems to be spreading defamation.


Musicians Vs. Machines: First Ever Record Contract to AI Music Creator Prompts Panic, Protest

When pop groups and rock bands practice or perform, they rely on their guitars, keyboards and drumsticks to make music. Oliver McCann, a British AI music creator who goes by the stage name imoliver, fires up his chatbot.

McCann’s songs span a range of genres, from indie-pop to electro-soul to country-rap. There’s just one crucial difference between McCann and traditional musicians.

“I have no musical talent at all,” he said. “I can’t sing, I can’t play instruments, and I have no musical background at all.”

McCann, 37, who has a background as a visual designer, started experimenting with AI to see if it could boost his creativity and “bring some of my lyrics to life.” Last month, he signed with independent record label Hallwood Media after one of his tracks racked up 3 million streams, in what’s billed as the first time a music label has inked a contract with an AI music creator.

McCann is an example of how ChatGPT-style AI song generation tools like Suno and Udio have spawned a wave of synthetic music, a movement most notably highlighted by a fictitious group, Velvet Sundown, that went viral even though all its songs, lyrics and album art were created by AI.

It fueled debate about AI’s role in music while raising fears about “AI slop,” automatically generated, low-quality, mass-produced content. It also cast a spotlight on AI song generators that are democratizing song-making but threaten to disrupt the music industry.

Experts say generative AI is set to transform the music world. However, there are scant details, so far, on how it’s impacting the $29.6 billion global recorded music market, which includes about $20 billion from streaming.

The most reliable figures come from music streaming service Deezer, which estimates that 18% of songs uploaded to its platform every day are purely AI generated, though they only account for a tiny amount of total streams, hinting that few people are actually listening. Other, bigger streaming platforms like Spotify haven’t released any figures on AI music.


Ukraine’s battlefield data is being used as LEVERAGE to train the future of military AI

Imagine a drone, no larger than a dinner plate, humming through the skeletal remains of a bombed-out village. It doesn’t hesitate. It doesn’t feel. It simply knows — its artificial brain trained on millions of hours of combat footage, every pixel of destruction meticulously logged, every human movement analyzed like a chessboard. This isn’t science fiction. It’s the future Ukraine is quietly shopping to the highest bidder. Data obtained from the Ukraine-Russia war will soon be used to train military AI to make future wartime missions more efficient, colder and more calculated.

For over three and a half years, Ukraine has been more than a battleground — it’s been a lab. A brutal, real-world experiment in how machines learn to kill. Now, as the war grinds on, Kyiv isn’t just fighting for survival. It’s negotiating with its Western allies, dangling something far more valuable than territory or political loyalty: data. Terabytes of it. Footage from first-person-view drones that have stalked Russian tanks like predators. Reconnaissance feeds that map every explosion, every ambush, every death in excruciating detail. And Ukraine’s digital minister, Mykhailo Fedorov, has made one thing clear — this isn’t charity. It’s a transaction. “I think this is one of the ‘cards,’ as our colleagues and partners say, to build win-win relations,” he told Reuters, his words carrying the cold precision of a man who understands leverage. The question isn’t whether this data will be sold. It’s who will wield it — and what happens when they do.

Key points:

  • Ukraine has amassed an unprecedented trove of battlefield data, including drone footage and combat statistics, which is now being positioned as a negotiating tool with Western allies.
  • The data is critical for training military AI, particularly for autonomous drone swarms and target recognition systems, making it a prized asset for defense contractors and governments.
  • Ukraine’s “points system” for confirmed kills has gamified war, incentivizing troops to destroy more Russian targets in exchange for drones and weapons — further feeding the data machine.
  • Experts warn that AI-trained weapons systems could soon operate with full autonomy, raising ethical and existential questions about machine-driven warfare and the risk of uncontrollable kill chains.
  • Historical patterns suggest that warfare technology often escapes its original intent, with civilian casualties rising as automation increases — yet global powers are racing to deploy it.
  • The long-term implications extend beyond Ukraine: this data could accelerate a new arms race, where AI-driven weapons decide who lives and who dies — without human oversight.

The black box of modern war

Fedorov didn’t mince words when he called the data “priceless.” And he’s right. In the hands of defense firms like Palantir — which already works with Ukraine to analyze Russian strikes and disinformation — this isn’t just intelligence. It’s the raw material for the next generation of war. Imagine an AI that doesn’t just assist pilots but replaces them. Drones that don’t just follow orders but make them. Systems that can identify, track, and eliminate targets faster than a human can blink.

Ukraine has already dipped its toes into this future. Fedorov admitted that Kyiv uses AI to scan reconnaissance imagery for targets that would take humans “dozens of hours” to find. They’re testing fully autonomous drones — machines that could soon hunt in swarms, coordinating attacks without a single soldier pulling the trigger. And they’re not alone. The U.S., China, and Russia are all pouring billions into AI-driven warfare, each racing to outpace the others. But Ukraine’s data is different. It’s not simulated. It’s not theoretical. It’s real death, digitized and weaponized.

The problem? We’ve seen this movie before. Every major leap in military technology — from machine guns to atomic bombs — has been sold as a way to end war faster. Instead, it’s made war more efficient, more distant, and more devastating. When the first autonomous drone swarm is unleashed, will it distinguish between a soldier and a civilian? Will it care? Or will it simply follow the patterns it’s been trained on — patterns built on Ukraine’s kill zones, where the line between combatant and bystander has already blurred?


New AI tool flags more than 1,000 questionable science journals… but can it be trusted?

  • The open-access journal boom has fueled predatory publishers exploiting researchers with fees while skipping real peer review.
  • An AI tool trained on more than 15,000 journals flagged over 1,000 suspicious publications but has a 24% false positive rate.
  • Fake science is surging, with a 2025 study warning that paper mills are doubling fraudulent research output every 1.5 years.
  • Predatory journals threaten public trust, distorting medical guidelines and policy decisions while wasting taxpayer funds.
  • AI detection tools could be misused to censor legitimate but controversial research, raising concerns over truth control.

The explosion of open-access journals has democratized scientific research, but it has also given rise to a shadow industry of predatory publishers that exploit authors with publishing fees while offering little to no legitimate peer review. Now, researchers have developed an AI tool to detect these shady journals—but its 24% false positive rate means human experts are still essential.

Who’s behind it? A team of computational scientists, led by Daniel Acuña of the University of Colorado Boulder, trained an AI model on more than 15,000 journals: 12,869 high-quality ones and 2,536 that had been removed from the Directory of Open Access Journals (DOAJ) for violating ethical guidelines. The AI then analyzed nearly 94,000 open-access journals, flagging more than 1,000 previously unknown suspect publications.
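The excerpt doesn’t describe the team’s actual features or model, but the basic recipe is standard supervised classification: label journals as legitimate or delisted, extract signals, train, then screen the unlabeled pool. A minimal sketch under those assumptions, with synthetic stand-in features (real inputs might be review turnaround, fee levels, editorial-board data, and so on):

```python
# Minimal sketch of the screening approach described above. Features,
# model choice and data here are synthetic stand-ins, not the Acuna
# team's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 12,869 journals labeled legitimate (0) and 2,536 delisted from the
# DOAJ (1), each reduced here to three numeric signals (hypothetical:
# turnaround time, fee level, board size).
n_good, n_bad = 12_869, 2_536
X = np.vstack([rng.normal(0.0, 1.0, (n_good, 3)),
               rng.normal(1.0, 1.0, (n_bad, 3))])
y = np.array([0] * n_good + [1] * n_bad)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# False positive rate: the share of legitimate journals wrongly flagged.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"false positive rate: {fp / (fp + tn):.1%}")

# Final step in the real study: score the ~94,000 unlabeled open-access
# journals and hand the highest-scoring suspects to human reviewers.
```

Whatever lies behind the reported 24% figure, the error rate is high enough that flagged journals serve as leads for human review rather than verdicts, which is why the researchers stress that experts remain essential.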

The problem with predatory journals

The open-access model was supposed to make research freely available to everyone, breaking down paywalls that restrict knowledge. But as the system grew, so did the number of journals that prioritize profit over scientific integrity. These “questionable” journals often promise rapid publication with little to no peer review, charging authors hefty fees while producing low-quality—or even fraudulent—research.

A 2025 study in PNAS found that the number of fake papers churned out by “paper mills” is doubling every 1.5 years, threatening to flood academia with junk science. “If these trends are not stopped, science is going to be destroyed,” warned Luís A. Nunes Amaral, a data scientist at Northwestern University. The arithmetic behind that warning is sketched below.
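A back-of-the-envelope projection of the doubling claim, using a hypothetical baseline of 10,000 paper-mill papers per year (only the 1.5-year doubling time comes from the study; the baseline is invented for illustration):

```python
# Project paper-mill output under the "doubling every 1.5 years" trend.
# The 10,000 papers/year baseline is invented for illustration; only
# the doubling period comes from the 2025 PNAS study cited above.
BASELINE = 10_000       # hypothetical annual output at year 0
DOUBLING_YEARS = 1.5    # doubling time reported by the study

for years in (0, 3, 6, 9):
    output = BASELINE * 2 ** (years / DOUBLING_YEARS)
    print(f"year {years}: ~{output:,.0f} papers/year")
# year 0: ~10,000 ... year 9: ~640,000
```

At that pace, output grows 64-fold in nine years, whatever the true starting count.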
