Trump Slammed Biden’s $52 Billion CHIPS Act. Then He Used It To Buy a Federal Stake in Intel.

In March, President Donald Trump blasted the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act of 2022. He called it “a horrible, horrible thing.” Passed under President Joe Biden, the CHIPS Act was essentially a $52 billion industrial policy slush fund intended primarily to bolster domestic production of computer chips.

When the law passed in 2022, the Biden administration said it was a “smart investment” that would “strengthen American manufacturing, supply chains, and national security, and invest in research and development, science and technology” while bringing thousands of “good-paying manufacturing jobs back home.”

There was never much reason to believe in the previous administration’s industrial policy boosterism. Early grants largely went either to factories that were already in development and would have been built anyway or to facilities of questionable economic value that might not be completed even with the additional taxpayer funding.

So Trump was on solid ground when he told Congress, “You should get rid of the CHIPS Act, and whatever’s left over…you should use it to reduce debt, or any other reason you want to.” Yet in the months since, Trump has made use of CHIPS funding not to reduce the debt, but to pursue his own questionable industrial policy. His version is even less accountable and may well be even worse for taxpayers.

Among the recipients of CHIPS funding was computer chipmaker Intel, which was set to receive $11 billion to help fund the construction of semiconductor fabs in several states. By late summer, the company said it had already received more than $5 billion of the funds. But Intel struggled to fulfill those commitments, falling behind on factory construction in some places and laying off workers as it suffered from ongoing financial and managerial problems. By the middle of 2025, Intel looked very much like a failing business.

In theory, the CHIPS Act provided a mechanism for the federal government to retract the grant and get all or part of its money back should Intel fail to meet its obligations. It’s not clear whether the federal government would have exercised its option to take the money back, but it was an option—until Trump stepped in.

As the company flailed, Trump met with its CEO, Lip-Bu Tan. Trump first called for him to resign. Then in August, the Trump administration announced that the federal government would just take partial ownership of Intel. Essentially, the U.S. government would purchase a roughly 10 percent stake in the chipmaker, partially nationalizing the company. And funds from CHIPS would be used to do it.

Trump bragged about the deal, saying he planned to “do more of them.” The company’s stock price rose on the news, suggesting that investors liked it. But that’s probably because it was a good deal for the company, at taxpayer expense.

According to public financial filings, the federal government would disburse the remaining funds, about $6 billion, while clearing any obligations for the company to actually complete work on new domestic semiconductor fabs.

In exchange, the federal government would gain partial ownership—as well as all the financial risks stockholders usually have when they invest in companies. Those risks will now be borne by taxpayers. As Carnegie Endowment fellow Peter Harrell pointed out in a social media post, the move came with “a lot of downside risk.”

Fundamentally, Trump gave Intel a federal bailout, removing the company’s public obligations and accountability while loading more financial risk onto the public.

Keep reading

Armed Robber Targets Sam Altman’s Ex-Boyfriend’s House, Forces Transfer Of $11 Million In Crypto

A thief barged into a house owned by Lachy Groom – a wealthy tech investor who once dated OpenAI CEO Sam Altman, tied up a victim, and made off with $11 million in Crypto Saturday evening in San Francisco, the NY Post reports.

Dressed as a delivery worker, the armed robber rang the door at Groom’s $4.4 million home on Dorland Street while carrying a white box, asks for Joshua – who lives with Groom – while claiming to be a UPS driver. The victim answers the door and identifies himself as Joshua. 

The thief then asked for him to sign for the package – asking if he can borrow a pen. The suspect then followed Joshua inside when a loud bang can be heard

According to the report, the suspect pulled a gun, tied up the victim with duct tape, and then stole $11 million worth of Ethereum and Bitcoin (exact method unknown), in what is believed to have been a hit by an organized crime group that the suspect was part of.

The suspect then tortured the victim, beating him while he held a phone up on loudspeaker as foreign voices on the line repeated his personal information that they had obtained. The thief then poured liquid on the victim before the crypto wallets were emptied.

The whole thing took around 90 minutes. 

Homeowner Lachy Groom, 31, is a venture capitalist and the ex-boyfriend of Open AI’s Altman, 40, who dated the billionaire sometime before he got married in 2024, sources with knowledge of their relationship said. Groom bought the property from Altman’s brother in 2021 for $1.8 million, property records show. Details of their relationship have not previously been reported. Attempts to reach Groom were not returned.

The Post has learned Joshua is a fellow tech investor who lives with Groom at the 4-bedroom Dorland Street home. 

Altman and Groom have invested together in various companies. Groom, a native Australian, has founded four startups and sold three before he turned 18. 

Keep reading

Federal Uniformity Sounds Good – Until Big Tech Writes the Rules

Big Tech is jamming preemption of state AI laws into the National Defense Authorization Act (NDAA) at the last minute, but it is unclear if they are fully aware of what it would actually mean. If Congress prohibits states from implementing AI policies that would protect their citizens, it would have far-reaching consequences. True conservatives in Congress must uphold their pro-American values and refuse to support any preemption effort that would ultimately be a coronation of Big Tech as our country’s new rulers.

The United States is the dominant leader in AI on the global stage, and we are in a high-stakes race with adversaries – especially China – to maintain that advantage. We cannot afford to cut corners on oversight and safety while Big Tech develops AI systems at a rapid pace. States are best positioned to test thoughtful safeguards that address the most pressing concerns – from public safety to protecting children. The federal government, by contrast, is lagging behind.

States have been laboratories of democracy on every pressing issue of our time. The issue of AI should not be any different. The federal government is behind the states in terms of simply thinking about the ramifications of AI, and Congress should allow the states to try to find effective policy solutions that cater to our most imminent concerns.

Preemption is a clear violation of the principle of federalism inherent in the 10th Amendment to the Constitution.

Additionally, this provision is a blatant cover for Big Tech. It allows Big Tech to continue to exploit kids, creators, and conservatives. This provision will not empower small businesses and entrepreneurs in AI because they simply don’t have $40 billion in funding to put toward artificial general intelligence (AGI) development and $100 million bonuses to hand out to potential employees.

They are already shut out of the industry by people like OpenAI CEO Sam Altman, who popularized the “patchwork” characterization of state policies that is now being used in smaller circles in support of preemption.

If we intend to outpace China on AI, we must abandon misguided proposals that undermine federalism. The federal government should focus on enacting strong, strategic measures that protect our national security and prevent U.S. technologies and advanced chips from ending up in the wrong hands.

Keep reading

GrapheneOS Quits France, Citing Unsafe Climate for Open Source Tech

GrapheneOS, the privacy-focused Android operating system, has ended all operations in France, saying the country is no longer a safe place for open source privacy projects.

Although French users will still be able to install and use the software, the project is moving every related service, including its website, forums, and discussion servers, outside French territory.

Until now, GrapheneOS used OVH Bearharnois, a hosting provider based in France, for some of its infrastructure. That setup is being dismantled.

The Mastodon, Discourse, and Matrix servers will operate from Toronto on a mix of local and shared systems. These changes are designed to remove any dependency on French service providers.

The developers said their systems do not collect or retain confidential user data and that no critical security infrastructure was ever stored in France. Because of that, the migration will not affect features such as update verification, digital signature checks, or downgrade protection.

The decision also applies to travel and work policies. Team members have been told not to enter France, citing both personal safety concerns and the government’s endorsement of the European Union’s Chat Control proposal.

That measure would allow authorities to scan private communications for illegal material, something privacy developers see as incompatible with secure digital design.

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading

Google and US government battle over the future of internet advertising

Google will confront the U.S. government’s latest attempt to topple its internet empire in federal court on Friday as a judge considers how to prevent the abusive tactics that culminated in parts of its digital ad network being branded as an illegal monopoly.

The courtroom showdown in Alexandria, Virginia, will pit lawyers from Google and the U.S. Department of Justice against each other in closing proceedings focused on the complex technology that distributes millions of digital ads across the internet each day.

After a lengthy trial last year, U.S. District Judge Leonie Brinkema ruled in April that pieces of Google’s ad technology had been rigged in a way that made it an illegal monopoly. That set up another 11-day trial earlier this fall to help Brinkema determine how to remedy its anti-competitive practices.

Friday’s closing arguments will give both Google and the Justice Department a final chance to sway Brinkema before she issues a ruling that probably won’t come until early next year.

The Justice Department wants Brinkema to force Google to sell some of the ad technology that it has spent nearly 20 years assembling, contending a breakup is the only way to rein in a company that the agency’s lawyers condemned as a “recidivist monopolist” in filings leading up to Friday’s hearing.

The condemnation refers not only to Google’s practices in digital advertising but also to the illegal monopoly that it unleashed through its dominant search engine. Federal prosecutors also sought a breakup in the search monopoly case, but the judge handling that issue rejected a proposal that would have required Google to sell its popular Chrome web browser.

Although Google is still being ordered to make reforms that it’s resisting, the outcome in the search monopoly case has been widely seen as a proverbial slap on the wrist. The belief that Google got off easy in the search case is the main reason the market value of its parent company Alphabet surged by about $950 billion, or 37%, to nearly $3.5 trillion since U.S. District Judge Amit Mehta’s decision came out in early September.

That setback hasn’t discouraged the Justice Department from arguing for a breakup of an ad tech system that handles 55 million requests per second, according to estimates provided by Google in court filings.

The huge volume of digital ads priced and distributed through Google’s technology is one of the main reasons that the company’s lawyers contend it would be too risky to force a dismantling of the intricate system.

“This is technology that absolutely has to keep working for consumers,” Google argues in documents leading up to Friday’s hearing. The company’s lawyers blasted the Justice Department’s proposal as a package of “legally unprecedented and unsupported divestitures.”

Besides arguing that its own proposed changes will bring more price transparency and foster more competition, Google is also citing market upheaval triggered by artificial intelligence as another reason for the judge to proceed cautiously with her decision.

In his decision in the search monopoly case, Mehta reasoned that AI was already posing more competition to Google.

Keep reading

Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading

Google Censored Vaccine Info Long Before COVID — Could It Have Anything to Do With Parent Company Alphabet’s Deep Pharma Ties?

Throughout the COVID-19 pandemic, Big Tech companies colluded with the government to silence dissent and criticisms of the lockdowns and a coercive mass vaccination campaign by censoring truthful information that did not align with the political agenda.

The Biden administration’s role in the censorship regime was the subject of a May 2024 congressional report titled “The Censorship-Industrial Complex.”

Three months after that report was released, Meta CEO Mark Zuckerberg admitted in a letter to Congress that Facebook had censored factual information under pressure from the White House.

In September, Google’s parent company, Alphabet, responded to a congressional subpoena with a letter similarly disclosing how the Biden administration had pressured YouTube, owned by Google, to remove videos that didn’t even violate its content policies.

Alphabet called the practice “unacceptable and wrong” while insisting that it withstood the pressure and enforced only its own policies against “misinformation.”

That defense, however, sidesteps the fact that those content guidelines were created in collusion with the same “health authorities” advancing the authoritarian governance — like the Centers for Disease Control and Prevention (CDC) and World Health Organization (WHO).

The result was that true information was censored while government-sanctioned disinformation was allowed to proliferate unchallenged.

As a United Nations official admitted at a September 2022 World Economic Forum (WEF) meeting, Google was helping government authorities to “own the science” in its internet search results.

In its letter to Congress, Alphabet noted that YouTube’s content policies had since evolved. Tacitly admitting how creators had been silenced for telling the truth, Alphabet promised to restore YouTube channels suspended for content no longer deemed misinformative.

So, Alphabet acknowledged the censorship but tried to absolve itself by blaming the White House and public health authorities.

The truth is that Google’s censorship of health-related content, including inconvenient facts about vaccines, predated COVID-19 and continues to this day. Could that be because Alphabet has its own deep financial ties to the pharmaceutical and biotech industries?

Alphabet’s ties to Big Pharma exist through numerous of its subsidiaries, including CalicoDeepMindIsomorphic Labs and Verily Life Sciences. In this article, we will focus on the latter of Google’s sister companies.

Keep reading

Google boss says trillion-dollar AI investment boom has ‘elements of irrationality’

Every company would be affected if the AI bubble were to burst, the head of Google’s parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

“I think no company is going to be immune, including us,” he said.

In a wide-ranging exclusive interview at Google’s California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant’s ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet’s development of specialised superchips for AI that compete with Nvidia, run by Jensen Huang, which recently reached a world first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one thousandth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people’s savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of “irrational exuberance” in the market well ahead of the dotcom crash, Mr Pichai said the industry can “overshoot” in investment cycles like this.

“We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound,” he said.

“I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would “probably be lost”.

But Mr Pichai said Google’s unique model of owning its own “full stack” of technologies – from chips to YouTube data, to models and frontier science – meant it was in a better position to ride out any AI market turbulence.

Keep reading

Italian Court Orders Google to Restore Banned Catholic Blog

Google has been compelled by the Tribunale di Imperia to restore Messainlatino.it, a major Italian Catholic website that, as you may remember, the company had abruptly taken down from its Blogger platform in July.

The ruling, issued against Google Ireland Limited, the firm’s European branch, also requires payment of approximately €7,000 (about $8,100) in court costs.

The blog’s editor, Luigi Casalini, filed legal action after Google deleted the site without warning, claiming a violation of its “hate speech” rules.

The company’s notification consisted of a short, generic email and provided no explanation or chance to appeal.

For Casalini, whose publication had accumulated over 22,000 articles since 2008 and reached around one million monthly readers, the removal appeared to be less a matter of policy enforcement and more an attempt to silence dissenting religious opinion.

Messainlatino.it was well known for covering issues surrounding traditional Catholic liturgy and had been cited by major outlets.

Following Google’s action, questions were raised in both the European Parliament and Italy’s Chamber of Deputies.

Legislators noted that the deletion “raises serious questions about the respect for freedom of expression, speech and religion” as guaranteed by Article 11 of the EU Charter of Fundamental Rights and Article 10 of the European Convention on Human Rights.

They also pointed to the Digital Services Act (DSA), which, despite being a censorship law, obliges platforms to apply their moderation policies with “due regard” for fundamental rights.

Casalini’s legal case focused on that provision. He argued that Google’s decision breached Article 14 of the DSA, which calls for a balance between policy enforcement and the user’s right to free expression.

As Casalini stated to LifeSiteNews, “Google acted in this way in violation of the Digital Services Act.”

Google responded through five lawyers based in Milan. The company claimed that an interview with Bishop Joseph Strickland, who opposed the ordination of women as deacons, violated its hate speech policy.

When the defense team countered that the post merely reported the bishop’s words and contained no discriminatory content, Google’s attorneys maintained in court documents that “it does not matter the source, more or less authoritative (bishop, Pontiff) of the post, if it violates the Policy.”

Keep reading