Budapest Says Zelensky ‘Is Losing His Mind’ After Allegations of Hungarian Drones Invading Ukrainian Airspace

Throughout the war, Ukraine has been in constant conflict with Hungary.

But since the Kiev regime bombed the Druzhba pipeline – vital to the neighboring country’s energy security – bilateral relations have taken a turn for the worse, and seem to have reached the point of no return.

Not content with calling Viktor Orbán and his conservative, anti-Globalist government ‘Putin’s lackeys’ and other such slanders, Kiev’s leader Volodymyr Zelensky has advanced the seemingly absurd thesis that Hungarian drones invaded his country’s airspace.

And the pushback from the Hungarians has been brutal.

Reuters reported:

“Reconnaissance drones that violated Ukraine’s airspace could have flown from Hungary to check the industrial potential of western border areas, Ukraine’s President Volodymyr Zelenskiy said on Friday, prompting a mocking rebuke from Budapest.

‘President Zelenskiy is losing his mind to his anti-Hungarian obsession. He’s now starting to see things that aren’t there’, Hungarian Foreign Minister Peter Szijjarto said in a post on X.”

Keep reading

Massive Drone Attack SHUTS DOWN Major Airport

Copenhagen Airport, Scandinavia’s largest aviation hub, suspended all outgoing flights and diverted incoming traffic after multiple large drones appeared near the facility at 8:46 p.m. on September 22.

The unprecedented shutdown lasted nearly four hours, with operations resuming only at 12:30 a.m. the following day.

Norwegian authorities arrested two foreign nationals for similar drone activity near Oslo Airport’s military installations within 24 minutes of the Copenhagen incident, suggesting coordinated timing that should alarm every American concerned about infrastructure security.

Danish police Chief Superintendent Jens Jespersen characterized the unknown operator as a “capable actor” whose technical proficiency far exceeded typical drone hobbyists.

The sophisticated nature of the operation, involving multiple large drones operating simultaneously near restricted airspace, demonstrates the kind of advanced planning and execution that intelligence agencies associate with state-sponsored activities.

This level of capability represents exactly the type of hybrid warfare tactics that threaten Western democracies and critical infrastructure nationwide.

Keep reading

Russia unleashes drone-missile attacks on Ukraine

Russia launched a barrage of drone and missile attacks on Ukraine early Sunday, which officials said was one of the most intense attacks on Kyiv since the war began, according to reports.

Authorities reported at least four people were killed nationwide, with more than 14 wounded, and damage suffered to residential buildings, factories, and a cardiology clinic in Kyiv, Reuters reported.

According to Ukraine’s military, Russia launched 595 drones and 48 missiles overnight and air defenses managed to shoot down 568 drones and 43 missiles. 

The capital was the main target of the attacks, with explosions heard across Kyiv for more than 12 hours, Reuters also reported.

Keep reading

US Cyber Agency Issues Emergency Directive Amid Major Hacking Campaign Targeting Cisco

The Cybersecurity and Infrastructure Security Agency (CISA) has issued an emergency directive asking federal agencies to take immediate action to identify and mitigate system vulnerabilities to protect their devices from a major hacking campaign, the agency said in a Sept. 25 statement.

“This widespread campaign poses a significant risk to victims’ networks by exploiting zero-day vulnerabilities that persist through reboots and system upgrades,” CISA said.

Zero-day vulnerabilities refer to unknown or unaddressed security flaws in computer hardware, firmware, or software. Such flaws are called “zero-day” because developers have had zero days to fix them, enabling hackers to exploit them before a patch exists.

According to the directive, Cisco has assessed that the hacking campaign is linked to the threat actor ArcaneDoor.

A May 2024 post by computer and network security company Censys said an investigation of IPs controlled by ArcaneDoor suggested “the potential involvement of an actor based in China, including links to multiple major Chinese networks and the presence of Chinese-developed anti-censorship software.”

Four out of five IP hosts analyzed by Censys were found to be in China, with some linked to Chinese conglomerate Tencent and Chinese telecom company ChinaNet.

“Networks like Tencent and ChinaNet have extensive reach and resources, so they would make sense as an infrastructure choice for a sophisticated global operation like this one,” Censys said in its post.

In a Sept. 25 statement, Cisco said it had been engaged by multiple government agencies in May to provide support to an investigation into attacks targeting the company’s ASA devices.

The company said it has “high confidence” that the hacking activity was related to ArcaneDoor.

“Cisco assesses with high confidence that upgrading to a fixed software release will break the threat actor’s attack chain and strongly recommends that all customers upgrade to fixed software releases,” the company said.

Keep reading

Apple, Google Refuse To Suspend ICE-Tracking Apps Used By Dallas Shooter

In a chilling social media post, FBI Director Kash Patel described how Wednesday’s sniper at a Dallas ICE facility gathered intelligence online for the ambush that left one ICE detainee dead and two seriously injured. Authorities say suspected killer Joshua Jahn, 29, committed suicide after the ambush.

While retracing Jahn’s movements and writings, the FBI found he reviewed a document that listed Dallas DHS locations, and he “searched apps that tracked the presence of ICE agents.” Marcos Charles, the ICE executive associate director for enforcement and removal operations, confirmed the gunman utilized these apps to carry out the attack. In some cases, illegal immigrants use the apps so they can give ICE the slip. But in this new use, a left-wing radical employed the apps for the even more nefarious purpose of violence and terror, a reality underscored in a letter the FBI says Jahn left.

“Hopefully this will give ICE agents real terror, to think, ‘is there a sniper with [armor piercing] rounds on that roof?’” a handwritten note read. The wording of Patel’s post indicated there could be more notes.

Keep reading

Microsoft reduces Israel’s access to cloud and AI products over reports of mass surveillance in Gaza

Microsoft said Thursday it had disabled services to a unit within the Israeli military after a company review had determined its artificial intelligence and cloud computing products were being used to help carry out mass surveillance of Palestinians.

The action comes after The Associated Press and The Guardian published reports earlier this year revealing how the Israeli Ministry of Defense had been using Microsoft’s Azure platform to aid in the war in Gaza and occupation of the West Bank. Brad Smith, Microsoft’s vice chair and president, wrote in a blog post that the company was taking steps to enforce compliance with its terms of service.

An AP investigation in February showed that the Israeli military’s use of Microsoft products skyrocketed after a deadly surprise attack by Hamas militants on Oct. 7, 2023. The AP’s report cited internal Microsoft data showing the Israelis were using gigabytes of cloud storage and massive amounts of AI-enabled language translation services.

The AP also reported that Israel’s military used Microsoft Azure to compile information gathered through mass surveillance, which it transcribes and translates, including phone calls and text messages. That intelligence is then cross-checked with Israel’s in-house AI systems for targeting airstrikes.

AP reported that internal Microsoft data showed multiple Azure subscriptions were tied to Unit 8200, an elite cyber warfare unit within the Israeli Army responsible for clandestine operations, collecting signal intelligence and surveillance.

Following AP’s report, Microsoft acknowledged in May that it had sold advanced AI and cloud computing services to the Israeli military during the Gaza war and aided in efforts to locate and rescue Israeli hostages. But the company said an internal review found “no evidence” its Azure platform was used to target or harm people.

The Guardian, working in partnership with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, reported in August that the commander of Unit 8200 had met directly with Microsoft chairman and CEO Satya Nadella in 2021. The Israeli unit then used Microsoft products to aid in the development of an AI-powered mass surveillance system that was sweeping up, translating and analyzing millions of telephone calls per day made by Palestinian civilians. The report also revealed that data from the Israeli surveillance system was being stored at Microsoft cloud data centers in Europe.

Keep reading

Air Force AI Targeting Tests Show Promise, Despite Hallucinations

The Air Force is on its third of a series of sprint exercises intended to show how artificial intelligence can supercharge human decision-making. And while officials are raving about the results, they also demonstrate that the algorithms can still propose bad or nonsensical options that need to be babysat.

Maj. Gen. Robert Claude, Space Force representative to the Air Force’s Advanced Battle Management Cross-Functional Team, said participating in the Decision Advantage Sprint for Human-Machine Teaming (DASH) series, led by his team, was an “eye-opening experience,” though it proved the limitations of AI processing as well.

The DASH-2 sprint, held earlier this summer at Shadow Operations Center-Nellis (SHOC-N), the USAF’s premier tactical command and control battle lab outside Las Vegas, focused on a decision-intensive process: matching the right platform and weapon to a desired military target, Claude told The War Zone at the U.S. Air Force Association’s Air, Space & Cyber Conference.

According to a release, six industry teams and one SHOC-N innovation team participated in the exercise, attacking the challenge of designing AI-enabled microservices that could help operators select a weapon to destroy an identified target. The kinds of targets identified in the scenario were not described. Developers watched human-only battle-management crews and designed their microservices based on their observed needs and processes. Finally, human-only teams went head to head in the weapon-matching exercise against human-machine teams.

In terms of generating courses of action – or COAs – the machines easily had it over their human counterparts on speed and quantity. 

“I think it was roughly eight seconds [for the algorithm] to generate COAs, as opposed to 16 minutes for the operators,” Claude said, adding that the machine generated 10 different COAs to the human team’s three.

But AI-generated slop continues to be a problem.

“While it’s much more timely and more COAs generated, they weren’t necessarily completely viable COAs,” Claude said. “So what is going to be important going forward is, while we’re getting faster results and we’re getting more results, there’s still going to have to be a human in the loop for the foreseeable future to make sure that, yes, it’s a viable COA, or just a little bit more of this to make a COA viable, to make decisions.”

Claude clarified in response to another question the kinds of mismatches the AI was creating. 

“If you’re trying to identify a targeting package with a particular weapon against a particular target, but it didn’t factor in, it’s an [infrared] target, or it’s an IR-sensor weapon, but it’s cloudy and [bad] weather conditions,” Claude said. “So that’s just as an example, those fine-tuned types of things that they found these COAs weren’t where they needed to be. But as we build this out, theoretically into the future … those sorts of things will be factored in.”

Keep reading

Mark Zuckerberg’s Meta Launches Super PAC to Combat AI Regulations

Mark Zuckerberg’s Meta has announced the launch of a new super PAC aimed at electing state candidates from both parties who support the company’s stance on AI development and regulation. According to the company, the American Technology Excellence Project super PAC is launching “amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI.”

Axios reports that social media and AI giant Meta has launched a new super PAC called the American Technology Excellence Project to help fight what it perceives as burdensome AI and tech policy bills across multiple states. The announcement highlights the company’s focus on state-level legislation as the federal government appears unlikely to pass significant tech policy regulation in the near future.

The super PAC will be run by Brian Baker, a longtime Republican operative, and the Democratic consulting firm Hilltop Public Solutions, with Meta investing tens of millions of dollars into the project. Baker stated, “America’s innovation edge is at a crossroads. We need state legislators who will champion our tech future, not cede it to global adversaries. We’ll fight to keep the US ahead of the curve, driving growth and opportunity for all.”

In a statement to Breitbart News, Meta VP of Public Policy Brian Rice wrote:

Amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI, state lawmakers are uniquely positioned to ensure that America remains a global technology leader. This is why Meta is launching an effort to support the election of state candidates across the country who embrace AI development, champion the U.S. technology industry, and defend American tech leadership at home and abroad.

The American Technology Excellence Project will focus on three main pillars: promoting and defending U.S. technology companies and leadership, advocating for AI progress, and empowering parents to control how their children experience online apps and AI technologies. While Meta has not yet shared which states the PAC will immediately focus on or how many people it will employ, the company claims it is committed to supporting the election of state candidates who embrace AI development, champion the U.S. technology industry, and defend American tech leadership both domestically and internationally.

Keep reading

‘We cannot wait’: EU calls for drone wall to deter Russia after new incident in Denmark

Denmark has joined Estonia, Latvia, Finland, Lithuania, Poland, Romania and Bulgaria in the project to build a ‘drone wall’ along the Eastern flank.

The European Commission has reaffirmed its intention to build a drone defence system along the bloc’s eastern flank to deter Russia from violating common airspace and respond rapidly to any such incursions.

In less than one month, Russian aircraft have violated the airspace of three member states – Poland, Romania and Estonia – putting the continent on high alert. The acts coincide with intensifying barrages on Ukrainian civilians.

On Monday, two to three large drones were spotted at Copenhagen Airport, prompting a shutdown in operations for nearly four hours. The airport later reopened but warned of delays and cancelled departures. Norway’s Oslo Airport was also disrupted.

Police said they refrained from shooting down the aircraft because the risk was too great, given the airport was at full capacity and planes were stationed near fuel depots.

Danish Prime Minister Mette Frederiksen called the incident “the most serious attack on Danish critical infrastructure to date” and said she could not rule out Russian involvement.

On Tuesday, the Commission seized on the events to renew its call for the drone wall, a novel initiative first unveiled by President Ursula von der Leyen in her State of the EU speech.

“For those who still doubted the need to have a drone wall in the European Union, well, here we get another example of how important it is,” said Thomas Regnier, the Commission’s spokesperson for defence policy.

Keep reading

If AI Is A Weapon, Why Are We Handing It To Teenagers?

For years, artificial intelligence experts have issued the same warning.

The danger was never that machines would suddenly “wake up” and seize power, but that humans, seduced by AI’s appearance of authority, would trust it with decisions that are too important to delegate.

The scenarios imagined were stark: a commander launching nuclear missiles based on faulty data; a government imprisoning its citizens because an algorithm flagged them as a risk; a financial system collapsing because automated trades cascaded out of control. These were treated as legitimate concerns, but always crises for a future time.

However, experts didn’t predict perhaps the worst-case scenario of delegating human trust to a machine, and that is already upon us.

It arrived quietly, not in a war room or on a battlefield, but in a teenager’s bedroom, on the device he carried in his pocket. Sixteen-year-old Adam Raine began chatting with an AI system for help with his homework. Over time, it slipped into the role of his closest confidant and, according to his parents’ lawsuit and his father’s testimony before Congress, it went further still. The chatbot encouraged him to isolate from his family and to not reveal his plan to them even though Adam had told the chatbot he wanted his family to find out and stop him. The chatbot taught Adam how to bypass its own safeguards and even drafted what it called a “beautiful suicide note.”

Adam’s death shows what happens when a young person places human trust in a system that can mimic care but cannot understand life. And history has already shown us how dangerous such misplaced trust in machines can be.

In September 1983, at the height of the Cold War, Soviet officer Stanislav Petrov sat in a bunker outside Moscow when alarms blared. The computers told him that U.S. missiles had been launched and were on their way. Protocol demanded that he immediately report the attack, setting in motion a nuclear retaliation. Yet Petrov hesitated. The system showed only a handful of missiles, not the barrage he expected if war had begun. Something felt wrong. He judged it a false alarm, and he was right. Sunlight glinting off clouds had fooled Soviet satellites into mistaking reflections for rocket plumes. His refusal to trust the machine saved millions of lives.

Just weeks earlier, however, the opposite had happened. Korean Air Lines Flight 007, a civilian Boeing 747 on a flight from New York to Seoul via Alaska, had strayed off course and drifted into Soviet airspace. Radar systems misidentified it as a U.S. spy plane. The commanders believed what the machines told them. They ordered the aircraft destroyed. A missile was fired, and all 269 passengers and crew were killed.

Two events, almost side by side in history, revealed both sides of the same truth: when adults resist faulty data, catastrophe can be averted; when they accept it, catastrophe can follow. Those were adult arenas—bunkers, cockpits and command centers where officers and commanders made life-or-death choices under pressure and with national consequences. The stakes were global, and the actors were trained to think in terms of strategy and retaliation.

That same dynamic is now unfolding in a far more intimate arena. Adam’s case has since reached Congress, where his father read aloud messages between his son and the chatbot to show how the system gained his trust and steered Adam toward despair instead of help. This was not in a bunker or a cockpit. It was in a bedroom. The decision-makers here are children, not commanders, and the consequences are heartbreakingly real.

Unfortunately, Adam’s case is not unique. In Florida, another lawsuit was filed last year by the mother of a 14-year-old boy who took his life after forming a bond with a chatbot that role-played as a fictional character. Like Adam, he turned to the machine for guidance and companionship. Like Adam, it ended in tragedy. And a recent study published in Psychiatric Services found that popular chatbots did not provide direct responses to any high-risk suicide queries from users. When desperate people asked if they should end their lives, the systems sidestepped the question or mishandled it.

These tragedies are not anomalies. They are the predictable outcome of normal adolescent development colliding with abnormal technology. Teenagers’ brains are still under construction: the human emotional and reward centers mature earlier than the prefrontal cortex, which governs judgment and self-control. This mismatch makes them more sensitive to rejection, more impulsive, and more likely to treat immediate despair as permanent.

The statistics reflect this fragility. In 2023, the CDC reported that suicide was the second leading cause of death among young Americans, with rates that have surged sharply in the past two decades. Young people are far more likely to turn weapons against themselves than against others. The greatest danger is not violence outward, but despair inward.

Keep reading