Possible X account of missing general William McCasland claimed fellow general was murdered over nuclear material

Online sleuths think they have uncovered missing retired Air Force general William Neil McCasland’s anonymous social media account — which claimed another general was murdered for his dealings with nuclear material.

McCasland, 68, went missing from his Albuquerque, NM, home on Feb. 27 — which is the same day that the person behind a conspicuously credentialed X account centered on spacecraft and advanced science made their last post.

The account @tmbspaceships claims to be run by a “retired 38-year active duty” United States Air Force officer with a PhD in engineering — listing the Air Force Institute of Technology (AFIT), the Air Education and Training Command (AETC), and Air Force Materiel Command (AFMC) as places they’ve worked.

Both the AFIT and AFMC are located at Wright-Patterson Air Force Base, which McCasland ran from 2011 to 2013. During his 34-year career he attended the Air War College, which is subordinate to the AETC. McCasland earned a PhD in Astronautical Engineering from MIT in 1988.

The account shockingly claimed, just months before McCasland’s disappearance, that Maj. Gen. John Rossi — who allegedly committed suicide in 2016 — was actually murdered for refusing to hand over nuclear material to private contractors.

The 55-year-old two-star general ended his life just two days before receiving a third star and taking the reins at US Army Space and Missile Defense Command, Army Times reported.

Army investigators ruled his hanging death was due to a severe lack of sleep and job anxiety, according to the outlet.

“Gen. Rossi was a good friend and it is my opinion he did not commit suicide,” the account wrote in a reply posted on Sept. 2, 2025.

“I believe Gen Rossi was killed because of a [sic] incident, reported to the pentagon IG [inspector general], that he would not transfer nuclear weapons to private hands, just months prior in an attempted Nuclear Weapons theft from Ft. Sill,” the post claimed.

Keep reading

Stripper reveals US troops are blabbing to her about being deployed — and blowing operational security

Loose nips sink ships.

A stripper revealed on TikTok that young US troops are apparently leaking news of their deployments to her while blowing their cash at jiggle joints.

San Diego-based dancer Charm Daze — who has 900K followers online — shared an emotional video late Sunday describing a wave of “depressed” servicemen from nearby military bases lamenting a deployment scheduled for next week.

“Something I’ve noticed lately is all the military guys are coming in and they’re spending all of their money,” Daze said. “They’re kind of depressed … They’re like, ‘Oh yeah, we’re gonna have fun,’ but you can tell something’s off. And then they’re like, ‘We deploy next week.’”

Daze performs in clubs around the country, but her Facebook page says she is based in San Diego, home to the largest naval base on the West Coast.

As is custom with military towns, there are also plenty of strip joints.

Major units of the US Navy — including the Navy SEALs — as well as a Marine Expeditionary Force are stationed at Naval Base San Diego, Naval Base Coronado and Camp Pendleton in the region.

The dancer described the men as strikingly young — so young she called them “fetuses.”

Daze said many of the troops are polite and soft-spoken, which only made the experience more emotional for her.

Keep reading

The Age Verification Con

Politicians on both sides of the Atlantic are competing to look tough on Silicon Valley. They hold hearings, write bills, and pose for photographs with parents who say their kids’ lives were ruined by social media algorithms they somehow couldn’t pull their children away from.

The cause is protecting children from social media, and it supposedly polls so well that it has achieved something almost unheard of in modern politics: genuine bipartisan consensus. Republicans and Democrats in Washington. Labour and Conservatives in Westminster. The Australian parliament voted the whole thing through with barely a whisper of dissent.

There is just one problem with the narrative. The tech giants these politicians claim to be fighting are spending record sums to help them do it. And the tool they have all converged on, age verification, is not really about checking whether someone is 15 or 16. It is the architecture for a verified internet, one where anonymous access is replaced by identity checkpoints, and where using a social media account, downloading an app, or browsing a website requires you to show your papers first.

The campaign is presented as protecting children. The infrastructure being built will apply to everyone.

Keep reading

White House Posts Cryptic Messages to X

The White House on Thursday posted a series of cryptic photos on X after posting and deleting a cryptic video.

Late Wednesday night the White House posted a mysterious video – then quickly deleted it, sparking a buzz.

X users pointed out that it sounded like White House Press Secretary Karoline Leavitt saying, “It’s launching soon, right?”

A male voice responds, “Yes.”

The White House posted another video later Wednesday night with a phone ping notification sound.

On Thursday afternoon, the White House posted a pixelated photo.

Later Thursday, the White House posted two more pixelated photos.

Keep reading

The Verdict Against Meta and Google That Could End the Anonymous Internet

A Los Angeles jury has found Meta and YouTube negligent in the design of their platforms and awarded $3 million to a plaintiff identified as K.G.M., a young woman who testified that years of near-constant social media use contributed to depression, anxiety, and body dysmorphia. The jury assigned 70% of the responsibility to Meta and 30% to YouTube. Punitive damages came to another $6 million.

The verdict is being reported as a landmark for child safety. It also represents a significant legal mechanism for dismantling anonymous internet access, built in plain sight, with bipartisan enthusiasm and a CEO’s enthusiastic assistance.

K.G.M.’s attorneys built their claim not around what users posted, which Section 230 of the Communications Decency Act largely shields platforms from liability for, but around how the platforms were designed.

Infinite scroll, algorithmically amplified notifications, engagement loops engineered to maximize time on site. The argument treats social media architecture the way product liability law treats a car without brakes. A defective product that the public needs to be protected from.

Keep reading

Settlement Stops Government From Using Social Media As ‘Speech Police’

The government censorship machine took a huge hit Tuesday in a historic win for First Amendment rights. 

What is being billed as an “unprecedented” agreement will bar the three government agencies central to killing speech the Biden administration didn’t like from pressuring social media platforms to do so in the future.

“This case began with a suspicion, that blossomed into fact, that led to Congressional hearings and an Executive Order that government censorship of Americans’ social media posts should end,” said John Vecchione, Senior Litigation Counsel for the New Civil Liberties Alliance (NCLA), the nonprofit civil rights group that has battled in courts for years to bring justice to victims of government-led speech suppression. 

Also celebrating was Sen. Eric Schmitt, who, as Missouri’s attorney general, sued the Biden administration for “brazenly colluding with Big Tech to silence Missourians.”

“This is a massive win for the First Amendment and for every American who believes in free speech,” the Missouri Republican said in a press release, adding that President Biden’s tenure in office brought “the most aggressively liberal and antiliberty excesses of government that America has ever seen.”

Keep reading

Government Agencies BANNED From Pressuring Big Tech to Censor Americans for 10 Years

In a historic win for free speech, the U.S. Surgeon General, the Centers for Disease Control and Prevention (CDC), and the Cybersecurity and Infrastructure Security Agency (CISA) have been legally restricted from pressuring social media companies to silence Americans for the next decade. This comes from a formal Consent Decree in Missouri v. Biden, one of the most consequential First Amendment cases in modern history.

The agreement itself is striking. It acknowledges that, in recent years, federal officials “exerted substantial coercive pressure” on social media companies to suppress speech they did not approve of. This case began after physicians, journalists, and everyday Americans—especially those dissenting on COVID and elections—were systematically censored online. This was confirmed through discovery: a coordinated, government-backed effort to pressure Big Tech into silencing alternative viewpoints.

Now, under this decree, these entities are prohibited from threatening, coercing, or directing platforms like Facebook, X, YouTube, and others to remove or suppress lawful speech—including through algorithmic means. These restrictions will remain in place for 10 years.

Perhaps most important, the agreement explicitly states that labeling speech as “misinformation,” “disinformation,” or “malinformation” does not strip it of First Amendment protection.

This is one of the most significant blows yet to the censorship regime.

Keep reading

Jury in Los Angeles finds Meta, YouTube negligent in social media addiction trial

A jury in Los Angeles determined on Wednesday that Meta and Google’s YouTube were negligent and failed to warn users of the dangers associated with using their platforms, in a case that could have repercussions across the social media industry and the broader technology market.

The personal injury trial commenced in late January in Los Angeles Superior Court. A young woman identified as K.G.M., or Kaley, alleged that she became addicted to apps like Instagram and YouTube as a child. Deliberations began Friday, March 13.

Jurors ultimately ruled in favor of the plaintiff, who claimed that Meta and YouTube’s negligence played a “substantial factor” in causing mental health-related harms. Compensatory damages were assessed at $3 million, with Meta on the hook for 70% and YouTube the remaining 30%. The next phase of the trial will determine punitive damages.

“We respectfully disagree with the verdict and are evaluating our legal options,” a Meta spokesperson said in a statement.

It’s one of several trials taking place this year that experts have characterized as the social media industry’s “Big Tobacco” moment, comparing it to the 1990s, when tobacco companies were forced to pay billions of dollars for lying to the public about the safety and potential harms of their products.

On Tuesday, jurors in Santa Fe, New Mexico found that Meta willfully violated the state’s unfair practices law after Attorney General Raúl Torrez alleged that the company failed to properly safeguard its apps from online predators targeting children. Meta was ordered to pay $375 million in damages based on the number of violations. The company said that it would appeal the case.

The New Mexico case is separate from other social media lawsuits that state attorneys general have brought against companies including Meta and TikTok.

During the six-week trial in L.A., jury members were tasked with determining whether Meta and YouTube implemented certain design features in their apps, like recommendation algorithms and auto-play, that contributed to K.G.M.’s crippling mental distress. The 20-year-old woman alleged that she suffered from severe body dysmorphia, depression and suicidal thoughts due to her near-constant use of the apps and the constant notifications that made it difficult for her to stop.

Keep reading

Meta ordered to pay $375 MILLION for not protecting minors from predators online

A jury has found that Meta failed to protect children from sexual predators as well as misled users, and the tech giant has been ordered to pay $375 million in civil penalties.

A New Mexico jury found in the landmark case that Meta misled users about its platforms’ safety and failed to protect children from exploitation, thereby violating the state’s laws. The jury reached its decision after six weeks of witness testimony, including from former Meta executives, teachers and online safety experts, per the New York Post.

The state’s prosecutors argued that Meta had hidden the extent to which its platforms endangered children, with sexual predators using them to target minors. Facebook and Instagram failed to enforce their policies barring users under 13 from having profiles, and their algorithms allegedly made it easier to target minors for sex trafficking and harassment.

“The safety issues that you’ve heard about in this case weren’t mistakes,” New Mexico attorney Linda Singer said on Monday. “They were a product of a corporate philosophy that chose growth and engagement over children’s safety. And young people in this state and around the country have borne the cost.”

Meta has vowed to appeal the ruling in the case. “We respectfully disagree with the verdict and will appeal. We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online,” a spokesman said in a statement in response to the verdict.

The attorneys for New Mexico had sought $2 billion in penalties against Meta, significantly more than the jury ultimately awarded. The case was brought by New Mexico Attorney General Raúl Torrez. In closing arguments, Meta attorney Kevin Huff said of the case, “Meta has built innovative, automated tools to protect people. Meta has 40,000 people working to make its apps as safe as possible.”

Keep reading

Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.

The TRUMP AMERICA AI Act, officially titled the “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.

All of it is wrapped in a national AI framework that the bill claims is tied to President Trump’s December Executive Order. The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.

What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.

We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued for the content that users post. Without Section 230, platforms could become legally responsible for what their users post, which could mean anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend.

Blackburn’s bill repeals it entirely, after a two-year transition period.

Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”

AI platforms would be incentivized to heavily monitor users.

Keep reading