NO-FLY ZONE: China is Dominant in Today’s Drone Industry – But It’s Tightening Penalties for Civilian Operators To Include Prison Time

Flying a drone over a Chinese city may now result in a jail sentence.

China now dominates the global drone industry, but new regulations have made it one of the toughest places in the world to fly an FPV drone.

The New York Times reported:

“New regulations are sharply tightening rules for recreational and civilian operators. Since January, officials have ramped up the penalties for unauthorized flying of drones to include possible jail time. Starting in May, all drones must be registered with the owners’ real names, requiring operators to link their flight equipment to their official identification or cellphone number.

Permits will be required at least a day in advance in restricted zones, which cover most cities. The rules make an exception for small drones flying below 400 feet in some open areas, but those areas are very limited.”

Flight data will be transmitted to the government in real time, and in cities like Beijing, a near-total drone ban is now in force.

Keep reading

Microsoft says Copilot is for entertainment purposes only, not serious use — firm pushing AI hard to consumers and businesses tells users not to rely on it for important advice

Microsoft has pushed its AI services hard at its user base, especially with the launch of the Copilot+ PC, but it seems that even the company itself does not trust its creation. According to the Microsoft Copilot Terms of Use, updated in October last year, the AI large language model (LLM) is intended for entertainment use only, and users should not rely on it for important advice. While this may be a boilerplate disclaimer, it is quite ironic given how hard the company pushes Copilot for business use and how deeply it has integrated the tool into Windows 11.

“Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended,” the document said. “Don’t rely on Copilot for important advice. Use Copilot at your own risk.” Nor is this limited to Copilot; other AI LLMs carry similar disclaimers. For example, xAI says “Artificial intelligence is rapidly evolving and is probabilistic in nature; therefore, it may sometimes: a) result in Output that contains “hallucinations,” b) be offensive, c) not accurately reflect real people, places or facts, or d) be objectionable, inappropriate, or otherwise not suitable for your intended purpose.”

This may sound like common sense to people familiar with how LLMs work, but, unfortunately, some people treat AI output as gospel, even those who are supposed to know better. We’ve seen this with Amazon’s services: some AWS outages were reportedly caused by an AI coding bot after engineers let it solve an issue without oversight. The Amazon website itself has also been hit with a few “high blast radius” incidents linked to “Gen-AI assisted changes,” resulting in senior engineers being called into a meeting to resolve the matter.

Keep reading

Why Is Everyone Suddenly Talking About Putting Data Centers in Space?

Data centers present sprawling engineering and political problems, with ravenous appetites for land and resources. Building them on Earth has proven problematic enough — so why is everyone suddenly talking about launching them into space?

Data centers are giant warehouses for computer chips that run continuously, with up to hundreds of thousands of processors packed closely together taking up a mammoth footprint: An Indiana data center complex run by Amazon, for example, takes up more real estate than seven football stadiums. To operate nonstop, they consume immense amounts of electricity, which in turn is converted to intense heat, requiring constant cooling with fans and pumped-in water.

Fueled by the ongoing boom in artificial intelligence, Big Tech is so desperate to power its data centers that Microsoft successfully convinced the Trump administration to restart operations at the benighted Three Mile Island nuclear plant in Pennsylvania.

The data center surge has spawned a backlash, as communities grow skeptical about the environmental toll of the facilities and the ultimate utility of the machine learning systems they serve.

It’s in this climate that technologists, investors, and the world’s richest humans are now talking about bypassing Earth and its logistical hurdles by putting data centers in space. And if you take at face value the words of tech barons whose wealth in no small part relies on overstating what their companies may someday achieve, orbital data centers are not just novel but inevitable. The Wall Street Journal reported last month that Jeff Bezos’s space launch firm Blue Origin has been working on an orbital data center project for over a year. Elon Musk, not known for accurate predictions, has publicly committed SpaceX to putting AI data centers in orbit. “There’s no doubt to me that a decade or so away we’ll be viewing it as a more normal way to build data centers,” Google CEO Sundar Pichai recently told Fox News.

The prospect of taking a trillion-dollar industry that is already experiencing a historic boom and literally shooting it toward the moon has understandably created a frenzy within a frenzy.

But large questions remain: Is it even possible? And if it is, why bother?

Keep reading

AI Influencing Elections: Anthropic Forms PAC Leading into Midterms as It Fights Trump Administration

AI company Anthropic, currently locked in a legal war with the Trump Administration, has filed paperwork to create a new corporate political action committee, claiming that “AnthroPAC” will make bipartisan donations to candidates. This was met with skepticism from conservatives who point out that 99 percent of the company’s past donations have gone to leftists.

The Hill reports that Anthropic submitted a statement of organization on Friday to form AnthroPAC, marking the AI company’s first employee-funded political action committee. The PAC will be financed exclusively through voluntary contributions from Anthropic employees, following a model commonly used by technology companies to participate in electoral politics.

According to information obtained by the Hill, the PAC is designed to operate on a bipartisan basis, with plans to distribute contributions to candidates from both major political parties. A supposedly bipartisan board of directors will oversee the committee’s activities and donation decisions.

Despite the stated bipartisan intent, several figures aligned with President Trump expressed doubt on Friday about whether the PAC would genuinely support candidates from both parties. Their skepticism stems from Anthropic’s contentious relationship with the Trump administration and the company’s previous political donations, which have been essentially all to Democratic candidates.

Keep reading

Downed U.S. Drone Appears On TikTok Live As Iraqi Children Try To Sell It

As the U.S.-Iran conflict enters its second month, America’s Low-Cost Unmanned Combat Attack System, or LUCAS, is becoming increasingly visible across the Middle East theater, a sign that the Department of War has learned one critical lesson from both the Iranian drone playbook and the Ukraine-Russia war: cheap drones are the future of warfare.

The latest news on LUCAS comes from an unverifiable TikTok video, amplified on X, which appears to show a downed drone seized by Iraqi children who are reportedly trying to sell it.

If authentic, the footage is another reminder that low-cost drones are proliferating so widely across the region that they will likely spread to other parts of the world.

In a separate video reposted on X, Iranian forces appear to have recovered a LUCAS drone in the Persian Gulf area.

Keep reading

US federal judges increasingly turn to AI – study

Over half of US federal judges surveyed (60%) are using at least one AI tool in their judicial work, a recent Northwestern University study suggests. The research is based on responses from 112 federal judges, drawn from a random sample of 502 federal bankruptcy, magistrate, district court, and appellate judges.

The use of AI in courtrooms has recently drawn attention for fabricated citations and other errors that have undermined confidence in some filings. The survey published earlier this week shows that these tools are now being adopted not just by lawyers, but also by federal judges.

The survey found that 60% of judges use AI at least occasionally for tasks such as reviewing documents, conducting legal research, and drafting or editing documents. Around 22% use it daily or weekly. Legal research was the most common use (30%), followed by document review (16%).

Around one in three judges said they permit or encourage AI in their chambers, while 20% formally prohibit it. More than 45% reported that they have not received AI training from the court administration.

While judges acknowledge the risks of AI, experts warn that its unreliability could undermine judicial authority.

Keep reading

Landlords are using ‘extremely unreliable’ AI to settle disputes with tenants

Renters and landlords who find themselves at odds with each other over issues with maintenance, repairs, and rental increases have several options when it comes to mediation. 

Most would agree that legal intervention should be a last resort, but according to a new survey by Avail, independent landlords are turning to another resource to help with renter disputes: artificial intelligence.

Along with tapping platforms like ChatGPT for general tasks, AI has become a sounding board for landlords to ask for advice on everything from conflict resolution to local-law research and lease language clarification. 

But is it safe for landlords—and renters—if this becomes a widespread practice?

Keep reading

Technate, Ohio: How Leslie Wexner and Jeffrey Epstein Built The Silicon Heartland

Early last year, shortly after Donald Trump took office for his second term, former presidential candidate Vivek Ramaswamy announced he was departing the recently-formed Department of Government Efficiency (D.O.G.E.) after reports of conflict with the department’s co-head, Elon Musk. Ramaswamy joined Fox News to clarify these rumors, and to tease his next endeavor –– holding public office. Ramaswamy noted Musk’s approach was “a technology approach,” whereas his was “focused more on a constitutional law, legislative-based approach.” He added, “when you’re talking about a constitutional revival, it’s not just done through the federal government, it’s done through federalism, where states also lead the way.” Despite their differences, Ramaswamy notably remarked that they were both “on the same page” and that their attempts “in saving the country” required them to “divide and conquer.”

Towards the end of the interview, Ramaswamy mentioned he was flying back to Ohio that week, with an announcement regarding his expressed intention of pushing for reform at the state level coming shortly. The former presidential hopeful explained that when “we look at the country over the last 20 years, Silicon Valley was at the bleeding edge of the American economy. I think the Ohio River Valley can be at the bleeding edge of the American economy for the next 20 years.” A few weeks later, Ramaswamy’s gubernatorial campaign for Ohio was announced and the former D.O.G.E. co-head was promptly endorsed by President Trump. Over the course of that campaign, Vivek’s fortunes have quite literally soared. Since launching his campaign, he has not only come to command a massive campaign war chest filled by deep-pocketed donors, but his own net worth has doubled.

While many once labeled this campaign as a clear demotion for Ramaswamy, the reality of an emerging Ohio –– specifically as it relates to the technocratic goals of the Trump administration and its donors –– paints a vastly different picture. As noted in Iain Davis’ book The Technocratic Dark State, D.O.G.E. –– the agency Ramaswamy co-led –– is part of a larger effort led largely by a small group of the ultra-wealthy to completely privatize the public sector in the name of greater “efficiency” and have it ruled by “techno-kings” or dictator “CEOs.” Davis frames this as a modern iteration of technocracy, bolstered by tech billionaires with close ties to the Trump administration, such as Elon Musk and Peter Thiel. Thiel is the long-time benefactor of former Ohio Congressman and current Vice President J.D. Vance. Notably, Ohio’s richest man Leslie Wexner, along with help from the infamous Jeffrey Epstein (a Thiel associate), has spent decades creating “partnerships” where private interests, including those he directly controls, dominate Ohio’s state and local governments. In some cases, such as the Columbus suburb of New Albany, they have completely replaced them.

Quietly over the last decade, Ohio has become a state of incredible national importance, as it continues to attract data centers from American “royalty” and Big Tech stalwarts into its friendly regulatory borders. But long before Amazon, Meta, Anduril, Microsoft and others took their power-hungry –– literally and figuratively –– refuge in the Buckeye State, the most well-known financier of Jeffrey Epstein, Leslie Wexner, and his extensive crime-linked network were laying the foundation for the new Silicon Valley, now known as the Silicon Heartland, along the Ohio River.

Wexner’s own statements last year underscore Ohio’s coming importance in the age of ascendant, AI-powered technocracy. Last May, he stated that Columbus in particular would soon become an international AI destination. He also asserted that “probably the largest AI investment in the world will happen in Columbus.” Wexner would know, as he’s personally responsible for Ohio’s –– specifically Columbus’ –– rise as one of the most important AI hubs in the country.

Yet, Wexner, with Epstein’s help, has done much more than attract massive AI data centers to the state. As this investigation will show, Wexner and his closest allies, Epstein among them, worked to create a model for the takeover of local governments via public-private partnership, starting first in New Albany beginning in the late 1980s. It has since spread to cover the entire state of Ohio via a network of public-private partnerships Wexner helped create. This system has allowed Wexner to use billions of dollars of Ohio taxpayer money, with little to no public scrutiny, to finance what can only be described as a massive welfare system for corporations. Among that system’s current biggest beneficiaries are Wexner’s New Albany Company as well as massive Big Tech corporations with important ties to Jeffrey Epstein (e.g. Amazon and Google). Meanwhile, regular Ohioans are seeing their power bills jump, provoking an affordability crisis in the state, while funding for public schools, libraries and healthcare is cut dramatically –– all to keep the corporate welfare engine designed by Wexner running full tilt.

Keep reading

Anthropic Leaks Source Code for AI Coding Tool in Major Security Breach

AI company Anthropic has accidentally exposed the source code for its widely used coding assistant Claude Code, marking the second significant data leak to affect the company in less than a week.

Fortune reports that the latest incident comes mere days after the outlet revealed that Anthropic had inadvertently made nearly 3,000 internal files publicly accessible, including a draft blog post describing an upcoming AI model called “Mythos” or “Capybara” that the company warned presents serious cybersecurity risks.

This second leak exposed approximately 500,000 lines of code contained within roughly 1,900 files. When contacted for comment, Anthropic acknowledged that “some internal source code” had been leaked as part of a “Claude Code release.” A company spokesperson stated: “No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

Cybersecurity experts suggest this latest leak could prove more consequential than the earlier exposure of the draft blog post. While the source code leak did not reveal the actual model weights of Claude itself, it enabled technically knowledgeable individuals to extract additional internal information from Anthropic’s codebase, according to a cybersecurity professional who reviewed the leaked materials for Fortune.

Claude Code represents one of Anthropic’s most successful products, with adoption rates climbing rapidly among large enterprise customers. The tool’s functionality derives partly from the underlying large language model and partly from what developers call an “agentic harness” — the software framework that surrounds the core AI model, directing how it interacts with other software tools and establishing crucial behavioral guardrails and operational instructions. It is precisely this agentic harness source code that has now been leaked online.
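The agentic harness the article describes, a loop around the core model that dispatches its tool requests and enforces guardrails, can be sketched in a few lines of Python. Everything below (function names, the message format, the allowlist guardrail, the stubbed model) is illustrative only and is not taken from Anthropic's leaked code.

```python
# Minimal sketch of an "agentic harness": the framework around an LLM that
# routes tool calls to real software and enforces behavioral guardrails.
import json

def run_tool(name, args, allowed_tools):
    # Guardrail: only dispatch tools on an explicit allowlist.
    if name not in allowed_tools:
        return {"error": f"tool '{name}' is not permitted"}
    return {"result": allowed_tools[name](**args)}

def harness(model, user_prompt, allowed_tools, max_steps=5):
    """Drive the model in a loop: pass it the conversation, execute any
    tool call it requests, append the result, and repeat until it answers."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)           # in reality, an API call to the LLM
        if reply.get("tool"):             # model requested a tool
            outcome = run_tool(reply["tool"], reply.get("args", {}), allowed_tools)
            messages.append({"role": "tool", "content": json.dumps(outcome)})
        else:                             # plain text means a final answer
            return reply["content"]
    return "step limit reached"

# Stub model for demonstration: asks for a file listing once, then answers.
def stub_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_files", "args": {"path": "."}}
    return {"content": "done"}

tools = {"list_files": lambda path: ["README.md"]}
print(harness(stub_model, "What files are here?", tools))  # prints: done
```

The point of the sketch is that the harness, not the model weights, encodes the operational instructions and guardrails, which is why its exposure matters even though the model itself was not leaked.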

The exposure creates several competitive and security concerns. Rival companies could potentially reverse-engineer the workings of Claude Code’s agentic harness to enhance their own offerings. Additionally, some developers might attempt to build open-source alternatives based directly on the leaked code.

Keep reading

School librarians told to remove art books with ‘historic paintings of nudes’ in latest censorship row

School librarians are being told to remove art books with ‘historic paintings of nudes’ in the latest censorship controversy revealed today.

The ‘insane’ trend was revealed by a delegate at the annual conference of the National Education Union (NEU), who said she had heard ‘many accounts’ of art books being cut.

It comes after a school librarian at Lowry Academy in Salford, Greater Manchester, revealed last week she had been forced to remove books deemed ‘inappropriate’ by management.

Bosses used artificial intelligence to earmark almost 200 books for removal, including George Orwell’s 1984 and Stephenie Meyer’s Twilight.

The school later admitted it had removed ‘a small number of books’ but said it had put most of them back, into ‘age-appropriate categories’.

The Lowry Academy case prompted the NEU to pass an urgent motion yesterday to ‘fight censorship and defend librarians’.

The union said that although the woman in the original controversy is not part of the union, it wanted to protect its own librarian members from suffering a similar fate.

Proposing the motion, Kristabelle Williams, a member from Lewisham, said: ‘We cannot ignore the issues that this case has brought up.

‘We can take action as a union now to try to make sure it doesn’t happen again.’

She said the support of the union would give librarians the ‘confidence to not self-censor and resist the chilling effect that this case will cultivate’.

She added members fear there is now an ‘increased risk of external complaints’ and ‘hate campaigns’ about books in their libraries.

Also speaking during the debate was Laura Butterworth, a member from Tameside, Greater Manchester, which is near Lowry Academy.

Keep reading