Musk: AI Satellites Would “Adjust” Sunlight to “Prevent Global Warming”

With Bill Gates retreating from his high-profile climate crusade, the stage has opened for more unconventional actors to step into the planetary arena. Enter Elon Musk, the chief executive of SpaceX and self-styled architect of humanity’s future in space.

This week, Musk floated an audacious vision: a vast swarm of orbiting satellites, not merely to beam internet or data, but to harvest solar energy and regulate how much sunlight reaches Earth. On Monday, he wrote on his platform X:

A large solar-powered AI satellite constellation would be able to prevent global warming by making tiny adjustments in how much solar energy reached Earth.

It is not an isolated musing. Musk already commands more than 8,000 satellites in orbit, making SpaceX the single largest operator in low Earth orbit. His company is also deeply integrated with the U.S. defense and intelligence establishment, providing secure communications and reconnaissance support. And as one of Donald Trump’s biggest donors and technology contractors, Musk stands at the intersection of private ambition and state power.

The announcement reignited debate over solar geoengineering — also known as solar radiation modification (SRM) — a highly controversial approach to cooling the planet by deflecting sunlight. Many observers, weary of climate-doomsday narratives and wary of billionaire “saviors,” have urged Musk to refrain from “playing God.”

The Technical Blueprint

Musk’s posts were brief, but behind them lie two vast engineering ambitions — one focused on solar power, the other on climate control. To most readers, it may sound like science fiction, yet the ideas are grounded in real, if speculative, physics.

Satellites to Capture the Sun

The first part of Musk’s plan involves satellites that would collect solar energy directly in space. He spoke of adding roughly 100 gigawatts of orbital solar capacity per year through an array of satellites launched by SpaceX’s upcoming Starship rocket. For perspective, one gigawatt is roughly the output of a large nuclear power plant.

Space-based solar power isn’t new, but it has never advanced beyond early experiments. The principle is simple: Sunlight in space is stronger because it’s unfiltered by Earth’s atmosphere. In orbit, solar panels could generate power 24 hours a day, unaffected by clouds or night.

The challenge is transmitting that energy back to Earth. Musk’s vision likely involves converting solar power into microwave or laser beams, then directing them to ground-based receivers. In theory, it could supply clean electricity to power grids or floating data centers. In practice, it would require precise targeting and vast safety controls to prevent energy loss or harm.
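
To get a feel for why targeting is hard, consider a rough diffraction-limit sketch of a microwave power beam. The frequency, orbital altitude, and transmitter size below are illustrative assumptions chosen only to make the arithmetic concrete, not details Musk has given.

```python
# Back-of-envelope: how wide does a microwave power beam get by the time it
# reaches the ground? All inputs are illustrative assumptions.
SPEED_OF_LIGHT = 3.0e8            # m/s
frequency_hz = 5.8e9              # assumed beam frequency (common in power-beaming studies)
wavelength_m = SPEED_OF_LIGHT / frequency_hz   # ~5.2 cm

distance_m = 550e3                # assumed low-Earth-orbit altitude
transmitter_diameter_m = 100.0    # assumed transmitting aperture on the satellite

# Diffraction-limited beam divergence (Rayleigh criterion) and ground footprint.
divergence_rad = 1.22 * wavelength_m / transmitter_diameter_m
spot_diameter_m = 2 * divergence_rad * distance_m

print(f"wavelength: {wavelength_m * 100:.1f} cm")
print(f"beam divergence: {divergence_rad * 1e6:.0f} microradians")
print(f"ground spot diameter: {spot_diameter_m / 1000:.2f} km")
# -> roughly a 0.7 km-wide footprint, even for a physically perfect beam
```

Even under these idealized assumptions, the beam spreads to a footprint hundreds of meters across on the ground, which is why space-based solar proposals generally pair very large receiving stations with strict beam-tracking and shutoff safeguards.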

Musk also hinted at an even grander future — moon-based factories building AI satellites directly on the lunar surface. At that scale, he suggested, new satellites could generate hundreds of terawatts of power. That would surpass humanity’s current total energy use of about 17-20 terawatts.
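
For a sense of the scale behind those numbers, here is an equally rough sketch of the collecting area implied by 100 gigawatts, and of how “hundreds of terawatts” compares with the 17-20 terawatts humanity uses today. The efficiencies and per-satellite panel size are assumptions chosen purely for illustration.

```python
# Back-of-envelope: collecting area and satellite count behind 100 GW of
# delivered power, plus a scale check on the "hundreds of terawatts" claim.
# All efficiencies and per-satellite sizes are illustrative assumptions.
SOLAR_CONSTANT_W_PER_M2 = 1361.0   # sunlight intensity above the atmosphere
panel_efficiency = 0.20            # assumed photovoltaic efficiency
delivery_efficiency = 0.50         # assumed conversion and beaming losses to the ground

target_power_w = 100e9             # the 100 GW figure from Musk's posts

usable_w_per_m2 = SOLAR_CONSTANT_W_PER_M2 * panel_efficiency * delivery_efficiency
total_area_m2 = target_power_w / usable_w_per_m2

per_satellite_area_m2 = 1000.0     # assumed ~1,000 m^2 of panel per satellite
satellites_needed = total_area_m2 / per_satellite_area_m2

print(f"usable power density: {usable_w_per_m2:.0f} W/m^2")
print(f"total collecting area: {total_area_m2 / 1e6:.0f} km^2")
print(f"satellites at 1,000 m^2 each: {satellites_needed:,.0f}")

# Scale check on the lunar-factory claim versus today's global energy use.
human_energy_use_tw = 18.0         # rough midpoint of the 17-20 TW range cited above
lunar_claim_tw = 300.0             # illustrative stand-in for "hundreds of terawatts"
print(f"claimed output: ~{lunar_claim_tw / human_energy_use_tw:.0f}x current global use")
```

Under even these generous assumptions, 100 gigawatts of delivered power implies several hundred square kilometers of panels spread across hundreds of thousands of satellites, which helps explain why Musk ties the idea to Starship’s launch cadence and, further out, to lunar manufacturing.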

Keep reading

Elon Musk predicts phones and apps will be obsolete in five years, says AI will curate everything

Elon Musk appeared on The Joe Rogan Experience this week, where he predicted that artificial intelligence (AI) will be so transformative that it will replace traditional phones and apps.

Musk told Rogan that within a few years, AI will be so integrated into daily life that people will no longer open individual apps or platforms. Instead, he said, AI will anticipate what users want and curate everything directly for them through their devices.

“Well, I can tell you where I think things are gonna go, which is that it’s, we’re not gonna have a phone in the traditional sense,” Musk said. “What we call a phone will really be an edge node for AI inference, for AI video inference with, you know, with some radios to obviously connect to. But, essentially, you’ll have AI on the server side, communicating to an AI on your device, you know, formerly known as a phone, and generating real-time video of anything that you could possibly want.” 

Musk explained that this shift would eliminate the need for operating systems or apps. “There won’t be operating systems or apps. It’ll just be, you’ve got a device that is there for the screen and audio, and to put as much AI on the device as possible,” he said.

Rogan asked Musk whether platforms like X or email services would still exist if apps disappeared. Musk replied, “You’ll get everything through AI.”

He explained that AI will learn to anticipate users’ preferences and deliver content automatically.

“Whatever you can think of. Or really, whatever the AI can anticipate you might want, it’ll show you,” Musk explained. “That’s my prediction for where things end up.”

When asked how soon this could happen, Musk estimated, “I don’t know. It’s probably, well it’s probably five or six years, something like that.”

“So five or six years, apps are like Blockbuster Video,” Rogan said, to which Musk responded, “Pretty much.”

“Most of what people consume in five or six years, maybe sooner than that, will be just AI-generated content,” Musk added.

Keep reading

AI drones used in Gaza now surveilling American cities

AI-powered quadcopter drones used by the IDF to commit genocide in Gaza are flying over American cities, surveilling protestors and automatically uploading millions of images to an evidence database.

The drones are made by a company called Skydio, which in the last few years has gone from relative obscurity to quietly becoming a multi-billion-dollar company and the largest drone manufacturer in the US.

The extent of Skydio drone usage across the US, and the extent to which their usage has grown in just a few years, is extraordinary. The company has contracts with more than 800 law enforcement and security agencies across the country, up from 320 in March last year, and their drones are being launched hundreds of times a day to monitor people in towns and cities across the country.

Skydio has extensive links with Israel. In the first weeks of the genocide, the California-based company sent more than one hundred drones to the IDF with promises of more to come. How many more have been delivered since that admission is unknown. Skydio has an office in Israel and partners with DefenceSync, a local military drone contractor that acts as the middleman between drone manufacturers and the IDF. Skydio has also raised hundreds of millions of dollars from Israeli-American venture capitalists and from venture capital funds with extensive investments in Israel, including from Marc Andreessen’s firm Andreessen Horowitz, or a16z.

And now these drones, tested in genocide and refined on Palestinians, are swarming American cities.

According to my research, almost every large American city has signed a contract with Skydio in the last 18 months, including Boston, Chicago, Philadelphia, San Diego, Cleveland and Jacksonville. Skydio drones were recently used by city police departments to gather information at the ‘No Kings’ protests and were also used by Yale to spy on the anti-genocide protest camp set up by students at the university last year.

In Miami, Skydio drones are being used to spy on spring breakers, and in Atlanta the company has partnered with the Atlanta Police Foundation to install a permanent drone station within the massive new Atlanta Public Safety Training Center. Detroit recently spent nearly $300,000 on fourteen Skydio drones, according to a city procurement report. Last month ICE bought a Skydio X10D drone, which automatically tracks and pursues a target. US Customs and Border Protection has bought thirty-three of the same drones since July.

The AI system behind Skydio drones is powered by Nvidia chips and enables their operation without a human user. The drones have thermal imaging cameras and can operate in places where GPS doesn’t work, so-called ‘GPS-denied environments.’ They also reconstruct buildings and other infrastructure in 3D and can fly at more than 30 miles per hour.

The New York police were early adopters of Skydio drones and are particularly enthusiastic users. A spokesman recently told a drone news website that the NYPD launched more than 20,000 drone flights in less than a year, which works out to drones being launched around the city roughly 55 times per day. A city report last year said the NYPD was operating 41 Skydio drones at the time. A recent Federal Aviation Administration rule change, however, means that number has almost certainly grown since, and the same change underpins the massive expansion in the use of Skydio drones more generally.

Prior to March this year, FAA rules meant that drones could only be used by US security forces if the operator kept the drone in sight. They also couldn’t be used over crowded city streets. An FAA waiver issued that month opened the floodgates, allowing police and security agencies to operate drones beyond a visual line of sight and over large crowds of people. Skydio called the waiver ground-breaking. It was. The change has ushered in a Skydio drone buying spree by US police and security forces, with many now employing what is called a ‘Drone As First Responder’ program. Without the need to see the drone, and with drones free to cruise over city streets, the police are increasingly sending drones ahead of humans to call-outs and for broader investigative purposes. Cincinnati, for example, says that by the end of this year 90% of all call-outs will be serviced first by a Skydio drone.

Keep reading

Republicans Are Walking Into A Trap On Section 230 Repeal

Among political conservatives, there is no hotter potato at the moment than the civil liability protections afforded by Section 230 to online operators. Unless Republicans learn to love it again and reject the censorship lawfare complex favored by Democrats, they risk dooming our tech leaders and everyone who uses their products to the sharks circling our legal system.

The twenty-six words tucked into the Communications Decency Act of 1996 shielded online platforms from liability for user-posted content so they could host and moderate material and still allow a wide range of speech without fear of lawsuits. Since then, Section 230 has evolved into one of the most powerful shields against civil litigation in U.S. courts. This gave the early digital economy the guardrails it needed to thrive by incentivizing creatives and disruptors to bring their big ideas to life.

Nothing ices a good idea like the fear of a lawsuit.

Yet, to be a rising star in the Republican Party today conveys some kind of fealty to the idea that Section 230 is antiquated – a relic of the early Internet that has outlasted its usefulness.

Last month, Sen. Josh Hawley (R-MO) called on his colleagues to “fully repeal Section 230” to kneecap AI companies and hobble the training of their large language models. “Open the courtroom doors. Allow people to sue who have their rights taken from them, including suing companies and actors and individuals who use AI,” said Hawley.

He’s joined in these efforts by fellow Republican Sens. Lindsey Graham and Marsha Blackburn, not to mention Democratic Sens. Dick Durbin and Amy Klobuchar.

According to the Section 230 Legislation Tracker maintained by Lawfare and the Center on Technology Policy at UNC-Chapel Hill, Democrats and Republicans have already introduced 41 separate bills aimed at curbing some aspect of the law in the last two sessions.

The principal motivation for Democrats, including former presidential candidate Hillary Clinton, has always been to force censorship of social media platforms to stop “disinformation,” a pretext for muting opposing views. The coordination of Democratic officials pressuring platforms to censor, as revealed in the Twitter Files, proves this beyond dispute.

To highlight the irony, we should remember that President Donald Trump is not only the chief executive of the United States, but also the owner of a social media platform that currently enjoys broad Section 230 protections afforded to any online publisher.

A wish to cripple Section 230 means making Truth Social a target as much as YouTube or Instagram. We should harbor no illusions that right-leaning media publications, podcasters, and websites would be the first to be kneecapped in a post-Section 230 world. Can MAGA and the GOP swallow that pill?

In that scenario, it will be the millions of Americans who currently enjoy freedom of speech online that will lose out. It’s the tens of millions of Americans turning to AI tools to become more productive, create value, and build the next great economic engines of our time who will be harmed by dismantling Section 230.

If Republicans want to cement American dominance in technological innovation, they will have to abandon this devil’s dance on gutting Section 230 liability protections. This is a censorship trap laid by Democrats to benefit them once they return to power.

The premise of broad civil liability protection for platforms is a core principle that has been and should continue to be applied to producers across America’s innovative stack, whether it’s oil and gas firms fending off dubious climate cases or artificial intelligence firms building the tools that are the key to America’s present economic dominance.

Keep reading

The Data Center Proliferation Must Be About Much More Than Data

With Amazon, it was never about the books. No doubt Amazon began as an online bookseller, but what made its stock attractive through years of losses is what books represented.

If Amazon could modernize buying habits with an online bookstore, it could eventually be what it became: an everything store. Markets are a look ahead, and book sales didn’t appeal to patient investors as much as what online book sales signaled about Amazon’s future potential as something much greater than an online bookstore.

It’s important to remember this with the rise of data centers around the country. Meta recently completed another one in El Paso, TX. The $1.5 billion project will, once operational, employ 100 people. Its construction employed as many as 1,800 workers.

It’s worth adding that El Paso is Meta’s third data center in Texas alone. Meta put $10 billion into the construction of all three.  

If asked, most would understandably say that data centers are being created “to store, process, and distribute” vast amounts of data. Translated, the data centers will rapidly bring down the already short wait times for AI-authored searches, paintings, papers, and all manner of other things that AI-adaptive users request.

It all sounds amazing on its face, but the bet here is that broad perception of data center capabilities in no way measures up to the towering reality of their potential. Just as Amazon was much more than a bookstore, it’s no reach to suggest that data centers are about much more than greatly enhanced, low latency searches.

Some will ask what they’re for if not just for searches, and the quick answer to the question is that the future would already be here if it were obvious what it was. Which means there’s no way to foretell the future, but it’s easy to say with confidence that it won’t much look like the present.

Evidence supporting the above claim can be found in the enormous investments being made by Amazon, Meta, OpenAI, X and others in the creation of the data centers. The sizable capital commitments signal confidence on the part of the biggest names in AI technology that the growth potential from the data centers well exceeds the enormous amounts of money required to create them. Since capital is expensive, there’s no room in its allocation for merely breaking even, or anything close to it.

Which is why the future can’t arrive soon enough. As the scale of capital being allocated to data centers indicates, their meaning to how we live, work, play, and get healthy so that we can live, work and play some more will be substantial.

Just as Amazon.com as a source of books in no way resembles what Amazon has become, the cost of data centers signals that their perception in 2025 will in no way resemble how they’re perceived in 2035. Call it a generational thing, but “data center” will mean something different depending on when you were born.

Keep reading

Lawmakers Want Proof of ID Before You Talk to AI

It was only a matter of time before someone in Congress decided that the cure for the internet’s ills was to make everyone show their papers.

The “Guidelines for User Age-verification and Responsible Dialogue Act of 2025,” or GUARD Act, has arrived to do just that.

We obtained a copy of the bill for you here.

Introduced by Senators Josh Hawley and Richard Blumenthal, the bill promises to “protect kids” from AI chatbots that allegedly whisper bad ideas into young ears.

The idea: force every chatbot developer in the country to check users’ ages with verified identification.

The senators call it “reasonable age verification.”

That means scanning your driver’s license or passport before you can talk to a digital assistant.

Keeping in mind that AI is being added to pretty much everything these days, the implications of this could be far-reaching.

Keep reading

How NDAs keep AI data center details hidden from Americans

On a March afternoon in Mason County, Kentucky, Dr. Timothy Grosser and his son Andy sat across the table from three men who came with an offer: $10 million for the 250-acre farm where they’d lived and worked for nearly four decades.

That’s 35 times what Grosser bought his land for in 1988 and significantly more than what others in the area had sold their land for recently. But there was a catch — it wasn’t clear who was funding the offer. One of the men said he represented a “Fortune 100 company” that wanted the property for an industrial development, but he refused to say what kind, which company or even his own name.

Instead, he pulled out a non-disclosure agreement.

Grosser said the contract would prevent him from discussing the project’s details with any third parties in exchange for limited information about its purpose, timeline and size. It didn’t disclose the company’s name, which could be discussed only after the company publicly announced its participation in the project.

“We refused to sign it,” Grosser said. “I’m not selling my farm for any amount of money.”

Keep reading

U.S. government allowed and even helped U.S. firms sell tech used for surveillance in China: AP

U.S. lawmakers have tried four times since September last year to close what they called a glaring loophole: China is getting around export bans on the sale of powerful American AI chips by renting them through U.S. cloud services instead.

But the proposals prompted a flurry of activity from more than 100 lobbyists from tech companies and their trade associations trying to weigh in, according to disclosure reports.

The result: All four times, the proposal failed, including just last month.

As leaders Donald Trump and Xi Jinping prepare for a long-heralded meeting Thursday, the sale of U.S. technology to China is among the thorniest issues the U.S. faces, with billions of dollars and the future of tech dominance at stake. But the tough talk about China obscures a deeper story: Even while warning about national security and human rights abuse, the U.S. government across five Republican and Democratic administrations has repeatedly allowed and even actively helped American firms to sell technology to Chinese police, government agencies and surveillance companies, an Associated Press investigation has found.

And time after time, despite bipartisan attempts, Congress has turned a blind eye to loopholes that allow China to work around America’s own rules, such as cloud services, third-party resellers, and gaps in sanctions passed after the Tiananmen massacre. For example, despite U.S. export rules around advanced chips, China bought $20.7 billion worth of chipmaking equipment from U.S. companies in 2024 to bolster its homegrown industry, a report from a congressional committee this month warned.

This reluctance to act reflects the tremendous wealth and power of the tech industry, which is more visible than ever under the Trump administration. And in recent months, the president himself has struck grand deals with Silicon Valley firms that even more closely tie the U.S. economy to tech exports to China, giving taxpayers a direct stake in the profits for the first time.

In August, Trump announced a deal with chipmakers Nvidia and AMD to lift export controls on sales of advanced chips to China in exchange for a 15% cut of the revenue, despite concerns from national security experts that such chips will end up in the hands of Chinese military and intelligence services. That same month, Trump announced that the U.S. government had taken a 10 percent stake in Intel worth around $11 billion.

Longtime Chinese activist Zhou Fengsuo said the U.S. government is letting American companies set the agenda and ignoring how they help Beijing surveil and censor its own people. In 1989, Zhou was a student leader during the Tiananmen protests, where hundreds and possibly thousands were shot and killed by the Chinese government. Zhou was arrested and imprisoned.

Now a U.S. citizen, Zhou testified before Congress in 2024, calling on Washington to investigate the involvement of American tech companies in Chinese surveillance. An AP investigation in September found that American companies to a large degree designed and built China’s surveillance state, playing a far greater role in enabling human rights abuses than previously known.

“It’s driven by profit, and that’s why these strategic discussions have been silenced or delayed,” Zhou said. “I’m extremely disappointed. … this is a strategic failure by the United States.”

Keep reading

Millions Of America’s Teens Are Being Seduced By AI Chatbots

Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?

A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me:

A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.

We aren’t just talking about a few isolated cases anymore.

At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.

Unfortunately, there are many examples where these relationships are leading to tragic consequences.

After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life:

“What if I could come home to you right now?” “Please do, my sweet king.”

Those were the last messages exchanged by 14-year-old Sewell Setzer and the chatbot he developed a romantic relationship with on the platform Character.AI. Minutes later, Sewell took his own life.

His mother, Megan Garcia, held him for 14 minutes until the paramedics arrived, but it was too late.

If you allow them to do so, these AI chatbots will really mess with your head.

We are talking about ultra-intelligent entities that have been specifically designed to manipulate emotions.

Keep reading

Australia’s eSafety Chief Pressures Big Tech and AI Firms on Verification, Age Checks

Australia’s top online regulator, eSafety Commissioner Julie Inman Grant, is intensifying her push to reshape speech in the digital world.

Her office has formally warned major social platforms and several AI chatbot companies that they could soon be forced to comply with far-reaching new age verification and “online safety” requirements that many see as expanding government control over online communication.

The warnings are part of the government’s effort to enforce the Online Safety Amendment (Social Media Minimum Age) Act 2024, which will bar Australians under 16 from creating social media accounts.

Letters sent to Meta, TikTok, Snapchat, X, and YouTube make it clear that each company is expected to fall under the scope of the new law.

The Commissioner’s preliminary assessment is that these services exist mainly for “online social interaction,” which brings them within the definition of social media platforms and subjects them to strict age verification and child protection obligations.

Not all of the companies accept that classification. Snapchat claims to be primarily a messaging platform similar to WhatsApp, while YouTube has opposed losing its original exemption.

At this stage, only services with a clear focus on messaging or education, such as WhatsApp, Messenger, YouTube Kids, and Google Classroom, remain excluded from the Commissioner’s oversight.

Keep reading