Tokenization: Trump Administration Moves To Create Digital ID To Facilitate Digital Dollar And Tokenized Assets, At A Cost To Financial Freedom

Following the creation of a digital dollar framework in July, the Trump administration is now building the tools needed to facilitate those digital dollars, also referred to as stablecoins and tokenized asset deposits. It is seeking to create a nationally approved digital ID system for the U.S. that can safely store Americans’ tokenized ‘money’ and digital assets.

Digital ID is paramount, according to globalist institutions. In 2023, the United Nations Development Programme (UNDP) published [1] a framework for member nations to pattern their digital ID systems around. According to the UNDP’s blog post [2], the plans are “an integral part of Agenda 2030 and the Sustainable Development Goals (SDGs),” adding that “SDG Target 16.9, which aims to ‘provide legal identity for all, including birth registration,’ underscores the widespread significance of civil registration in societies globally.”

This framework builds on a report published by the UN in May of that year, called “Our Common Agenda” [3], which discussed “the vision for the future,” including linking digital IDs to banking. The UN says implementing digital IDs will also help fulfil the broader goal of SDG 1, No Poverty.

“Digital IDs linked with bank or mobile money accounts can improve the delivery of social protection coverage and serve to better reach eligible beneficiaries. Digital technologies may help to reduce leakage, errors and costs in the design of social protection programmes.”

Keep reading

Australia Orders Tech Giants to Enforce Age Verification Digital ID by December 10

Australia is preparing to enforce one of the most invasive online measures in its history under the guise of child safety.

With the introduction of mandatory age verification across social media platforms, privacy advocates are warning that the policy, set to begin December 10, 2025, risks eroding fundamental digital rights for every user, not just those under 16.

eSafety Commissioner Julie Inman Grant has told tech giants like Google, Meta, TikTok, and Snap that they must be ready to detect and shut down accounts held by Australians under the age threshold.

She has made it clear that platforms are expected to implement broad “age assurance” systems across their services, and that “self-declaration of age will not, on its own, be enough to constitute reasonable steps.”

The new rules stem from the Online Safety Amendment (Social Media Minimum Age) Act 2024, which gives the government sweeping new authority to dictate how users verify their age before accessing digital services. Any platform that doesn’t comply could be fined up to US$31 million.

While the government claims the law isn’t a ban on social media for children under 16, in practice it forces platforms to block these users unless they can pass age checks, which in practice means presenting a digital ID.

There will be no penalties for children or their parents, but platforms face immense legal and financial pressure to enforce restrictions, pressure that almost inevitably leads to surveillance-based systems.

The Commissioner said companies must “detect and de-activate these accounts from 10 December, and provide account holders with appropriate information and support before then.”

These expectations extend to providing “clear, age-appropriate communications” and making sure users can download their data and find emotional or mental health resources when their accounts are terminated.

She further stated that “efficacy will require layered safety measures, sometimes known as a ‘waterfall approach’,” a term often associated with collecting increasing amounts of personal data at multiple steps of user interaction.

Such layered systems often rely on facial scanning, government ID uploads, biometric estimation, or AI-powered surveillance tools to estimate age.
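As a concrete illustration, such a waterfall might look like the following sketch. Every stage name, signal, and threshold here is a hypothetical assumption, not any platform’s documented implementation:

```python
# Illustrative sketch of a layered ("waterfall") age-assurance pipeline.
# Later stages demand progressively more personal data from the user.

def estimate_from_signals(user):
    # Stage 2: behavioural / AI-based age estimation (placeholder).
    return user.get("estimated_age")

def verify_government_id(user):
    # Stage 3: hard ID-document check -- the most data-hungry step.
    return user.get("id_document_age")

def assure_age(user, minimum=16):
    """Return (allowed, stage_used). Self-declaration can deny access
    but never grant it on its own; passing claims are escalated through
    increasingly invasive corroborating checks."""
    declared = user.get("declared_age")
    if declared is not None and declared < minimum:
        return False, "self_declared"
    for stage in (estimate_from_signals, verify_government_id):
        age = stage(user)
        if age is not None:
            return age >= minimum, stage.__name__
    # No stage produced a confident answer: block by default.
    return False, "undetermined"
```

The point of the structure is visible in the fallthrough: a user who truthfully declares an acceptable age still gets pushed into estimation or ID upload, because self-declaration alone does not count as “reasonable steps.”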

Keep reading

Age Verification Company Exposes User Data, Reinforcing Privacy Fears Over Digital ID Systems

A company tasked with confirming users’ ages before they access adult content may be compromising their privacy by leaking detailed browsing data, according to a report by the nonprofit AI Forensics.

The group’s investigation highlights serious flaws in how some sites are complying with growing online age-check requirements, raising new concerns about surveillance and data exposure under the guise of protecting children.

France’s law requires that users’ identities remain concealed, not just from adult websites, but from the age verification services themselves.

Known as “double anonymity,” this standard is meant to ensure that those performing the verification process have no knowledge of which websites users are visiting or what content they attempt to access.
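A minimal sketch of what double anonymity is supposed to guarantee, using a plain HMAC bearer token as a stand-in for the blind-signature schemes real deployments would use (all function names and the token format are assumptions for illustration):

```python
import hmac, hashlib, secrets

# "Double anonymity" in miniature: the verifier learns the user's age
# but never the destination site; the site learns only that a valid
# over-18 token exists, never who the user is.

VERIFIER_KEY = secrets.token_bytes(32)   # held only by the age verifier

def issue_token(user_is_adult: bool):
    """Verifier side: checks age, returns an unlinkable bearer token.
    Note: it is never told which website requested the check."""
    if not user_is_adult:
        return None
    nonce = secrets.token_hex(16)        # random, so the token can't be tied to an identity
    tag = hmac.new(VERIFIER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return nonce, tag

def verifier_check(token):
    # Verifier validates its own token; it still sees no URL or site name.
    nonce, tag = token
    expected = hmac.new(VERIFIER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

def site_accepts(token, check_with_verifier):
    """Site side: forwards only the token -- no URL, no user data."""
    return token is not None and check_with_verifier(token)
```

Against this model, AgeGO’s reported behaviour is a straightforward violation: transmitting the video URL and site name alongside the check is exactly the information this design exists to withhold.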

But AI Forensics found that AgeGO, one of the verification systems in active use, doesn’t meet those expectations.

Instead, AgeGO’s system reportedly transmits precise details about the user’s activity, including the URL of the video being viewed and the name of the website.

Keep reading

Josh Hawley Proposes AI Regulations, Section 230 Repeal, and Digital ID Checks for Chatbots

Senator Josh Hawley (R-Mo.) is pushing for broad new regulations on artificial intelligence, including age verification for chatbot access, data ownership rights, and the full repeal of Section 230 of the Communications Decency Act.

While the proposals are framed as efforts to curb corporate overreach in the tech industry, they are igniting concern among digital rights advocates, who warn that such measures could undermine online privacy and freedom of expression.

At the National Conservatism Conference, Hawley accused AI developers of building their systems by collecting and using copyrighted material without permission. “The AI large language models [LLMs] have already trained on enough copyrighted works to fill the Library of Congress 22 times over,” he said.

“Let me just put a finer point on that — AI’s LLMs have ingested every published work in every language known to man already.” He claimed that creators were neither consulted nor compensated.

In July, Hawley introduced the AI Accountability and Personal Data Protection Act, which would allow individuals to sue companies that use personal data without consent and would establish property rights over certain categories of digital information.

However, two key components of Hawley’s platform are raising some alarm. His call to repeal Section 230 has been criticized for potentially damaging the open internet.

Section 230 currently shields online platforms from legal liability for content created by users. Without it, many sites could be forced to preemptively remove user content out of legal risk, resulting in widespread over-moderation and silencing of lawful speech.

Keep reading

The Quiet Rebranding Of CBDCs As “Digital-ID”

Let’s call them what they are: Social Credit systems.

We know that “CBDC” stands for Central Bank Digital Currencies – and we have long held our hypothesis on what those entail (the TL;DR is that they will either launch as, or morph into, China-style social credit systems).

We’ve seen an Executive Order expressly ruling out CBDCs in the US, but as I keep warning readers: we’re seeing components we’d expect under a CBDC system appearing – only they aren’t originating at The Fed (which has never really expressed much interest in them anyway).

Now the US Treasury Department is seeking comments on Digital ID as it relates to DeFi:

“The Department of the Treasury has filed a request for public comments to provide input on the use of ‘innovative or novel methods to detect and mitigate illicit finance risks involving digital assets’ in accordance with the GENIUS Act, as well as in accordance with Donald Trump’s policy to support ‘the responsible growth and use of digital assets,’ as outlined in the President’s Executive Order to strengthen US leadership in digital financial technology.”

— TheRage.co

The areas covered include:

“the use of APIs ‘to help enforce strict access controls, monitor transactions and activities, and bolster security and integrity of financial institutions providing digital asset services’; the use of Artificial Intelligence to ‘make predictions, recommendations or decisions’ to ‘effectively identify illicit finance patterns, risks, trends, and typologies’; and blockchain monitoring to ‘evaluate high-risk counterparties and activities, analyze transactions across multiple blockchains, trace or monitor transaction activities, and identify patterns that indicate potential illicit transactions.’”

As well as Digital ID, which I think is the catchphrase we’re going to see a lot of in the future, and which will capture many of the objectives of CBDCs:

“[T]he Treasury is also seeking comments on the introduction of “portable digital identity credentials designed to support various elements of AML/CFT and sanctions compliance, maximize user privacy, and reduce compliance burden on financial institutions” to potentially be used “by decentralized finance (DeFi) services’ smart contracts to automatically check for a credential before executing a user’s transaction.”
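The gate described in that quote can be sketched as follows: a hypothetical, simplified model in which a transfer simply refuses to execute without a valid credential. The issuer list, credential fields, and revocation set are all invented for illustration, not drawn from the Treasury filing:

```python
# Toy model of a credential-gated DeFi transaction: the "smart contract"
# checks for a valid identity credential before executing a transfer.

REVOKED = set()  # credential IDs pulled by the issuer

def credential_valid(cred: dict, now: int) -> bool:
    # Hypothetical checks: allow-listed issuer, unexpired, not revoked.
    return (cred.get("issuer") in {"trusted_issuer_1"}
            and cred.get("expires", 0) > now
            and cred.get("id") not in REVOKED)

def execute_transfer(sender_cred, amount, balances, sender, receiver, now):
    """Smart-contract-style guard: no valid credential, no execution."""
    if not credential_valid(sender_cred, now):
        raise PermissionError("transaction blocked: no valid identity credential")
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return balances
```

The control implication is visible in the code itself: whoever maintains the issuer allow-list and the revocation set decides whether any given transfer runs at all.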

Sounds similar to what the Bank for International Settlements (BIS) wants to do in terms of rating individual crypto wallets for AML compliance.

In a white paper titled “An approach to anti-money laundering compliance for cryptoassets,” they propose to:

“leverag[e] the provenance and history of any particular unit or balance of a cryptoasset, including stablecoins”

In order to assign an “AML compliance score”.
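A toy version of such a provenance-based score, with an invented taint rule and flag list (nothing below comes from the BIS paper itself):

```python
# Sketch of a provenance-based "AML compliance score": walk a coin's
# transaction history and score it by how much of its value traces
# back to flagged addresses. Scoring rule and flag list are invented.

FLAGGED = {"mixer_addr", "sanctioned_addr"}   # hypothetical taint sources

def compliance_score(history):
    """history: list of (from_addr, fraction_of_value) provenance hops.
    Returns 0.0 (fully tainted) .. 1.0 (no flagged provenance)."""
    tainted = sum(frac for addr, frac in history if addr in FLAGGED)
    return max(0.0, 1.0 - tainted)
```

Even this toy shows the consequence of scoring “any particular unit or balance” by its history: identical coins stop being fungible, because your balance inherits the reputation of every wallet it ever passed through.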

Keep reading

Dystopian Rollout Of Digital IDs & CBDCs Is Happening

This isn’t conspiracy; it’s all in their own documentation.

They are building a full-spectrum digital cage, and its two locked doors are Digital Identity and Central Bank Digital Currencies (CBDCs). You cannot have one without the other.

The plan is to replace your government-issued ID with a Digital ID, but it’s not just a card in your phone. It is fundamentally built upon your immutable biometrics: your fingerprints, the precise structure of your face, the unique pattern of your iris.

This biometric data is the key.

It is the hard link that ties your physical body directly to your digital identity credential.

Your very body becomes your password. The reason this is so critical for them is the financial system. UN & Bank for International Settlements docs overtly state that Digital ID and CBDCs are designed to be integrated.

The system cannot exist without this biometric digital ID.

Why?

Know Your Customer (KYC) protocols.

For this new digital financial system to function, they must absolutely “know” every single participant. Your digital wallet will be tied to your digital ID, which is mapped to your biometrics. Total financial-biological linkage.

We see the prototypes being rolled out now:

  • Sam Altman’s WorldCoin lures people to scan their irises for a “unique identifier” and a digital wallet. This is the exact model.
  • The UN’s “Building Blocks” program forces refugees to scan their iris at checkout to receive food rations. The value is deducted from a wallet tied to that biometric ID.

They justify this total surveillance under the guise of closing the “identity gap,” claiming the world’s poor need digital IDs to access essential services like banking and healthcare.

The reality?

This is the ultimate onboarding mechanism into a system of programmable control, where your access to society and your own money is permissioned and revocable based on your compliance.

This is the bedrock of the new global financial system.

It is not about convenience. It is about control.

Keep reading

Mississippi’s Digital ID Law Hits a Wall with Mastodon

Mississippi’s privacy-ruining online digital ID law is putting pressure on decentralized platforms, and Mastodon says it simply cannot comply.

The organization behind the software states that it lacks the technical ability to verify users’ ages and refuses to implement IP-based restrictions, which it argues would wrongly affect travelers and those temporarily located in the state.

The law, known as the Walker Montgomery Protecting Children Online Act (HB 1126), has already led to Bluesky withdrawing its service from Mississippi.

Mastodon is not following that path. Instead, it points to the design of its platform, where individual server administrators are responsible for their own compliance with local laws. Mastodon itself neither collects user data nor maintains centralized control over the network.

Although Mastodon’s nonprofit arm initially declined to comment, it later provided a statement to TechCrunch.

The organization explained that while its own servers require users to be at least 16, it does not “have the means to apply age verification” and that the software does not retain any data collected during sign-up.

A feature added in the July 2025 release of Mastodon 4.4 allows server administrators to set age minimums and manage legal terms, but does not support storing verification data.

Each server in the network operates independently. It is up to those server owners to decide whether to integrate third-party systems to check user ages.

Mastodon confirmed it cannot offer “direct or operational assistance” to these operators and instead points them to resources such as the IFTAS library, which provides guidance on trust and safety practices for federated platforms.

The nonprofit reiterated that it does not track user behavior or enforce policy across the wider ecosystem. Responsibility for legal compliance, it says, belongs to those who host and manage the servers in their own jurisdictions.

Keep reading

Google to Require ID Verification for All Android Developers

Google is preparing to implement a wide-reaching identity verification system for Android app developers, one that could significantly alter how apps are installed and shared across the platform.

This new policy will apply not only to those who publish apps through the Play Store but also to developers distributing their software independently, expanding Google’s role as a gatekeeper over what apps are allowed on Android, even when they aren’t downloaded through Google’s app store.

Without passing Google’s verification process, apps will be blocked from running on the vast majority of Android devices.

Android was once known for its hands-off approach, especially when compared to Apple’s tightly managed ecosystem. That distinction is now fading.

Google says the new system is designed to address security concerns, citing internal data that apps obtained outside the Play Store are 50 times more likely to include malicious software. But this goes far beyond policing its own storefront. Instead, it lays the groundwork for universal control over app distribution from other sources.

Developers will be required to submit their identity information, register their app’s package names, and upload signing keys through a revamped Android Developer Console.

Oddly, Google says it will not review the apps themselves, but the identity requirement will serve as a gateway that blocks installation on certified devices unless it is satisfied. This contrasts with Google’s statement that this is all about security. If security is the main concern, then one would expect Google to actually review the app and its safety, rather than simply demanding a developer ID.
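The gatekeeping model described above can be illustrated with a toy registry check. The registry shape and fingerprints are assumptions, and note that nothing in it examines what the app actually does:

```python
# Toy model of install gating by developer identity: a certified device
# refuses to install a package unless its name and signing-key
# fingerprint appear in a central registry. No app review happens here,
# which is precisely the criticism raised above.

REGISTRY = {  # package name -> verified developer's signing-key fingerprint
    "com.example.app": "ab:cd:ef",
}

def install_allowed(package_name: str, signing_fingerprint: str) -> bool:
    return REGISTRY.get(package_name) == signing_fingerprint
```

A sideloaded app from an unregistered developer, or one signed with an unregistered key, fails the lookup and never installs, regardless of whether the app itself is safe or malicious.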

Nearly all Android phones outside China rely on Google’s services, meaning this policy will reach almost every user.

Phones running customized versions of Android that lack Google services will not be affected. However, those devices make up only a small portion of the global Android landscape. For nearly everyone else, unverified apps will simply not work.

This move expands on a policy introduced in 2023, when Google began requiring developer verification for apps on the Play Store.

The company says that the effort led to a sharp decline in scams and malware. It argues that forcing developers outside the Play Store to verify themselves will make Android more secure overall by limiting the ability of anonymous actors to spread harmful software.

Even so, this approach could shrink the space for independent app development. Android users and developers have long relied on the ability to sideload apps or use third-party marketplaces without corporate oversight. Now, even those channels may depend on Google’s approval before users can install anything.

Rollout will happen in phases. Google plans to open early access to the new system in October 2025. Developers worldwide will gain access by March 2026.

The policy will go into effect first in Brazil, Indonesia, Singapore, and Thailand in September 2026.

Keep reading

Gates Foundation: “Digital IDs are an effective tool against poverty”

It looks like we can add another to the growing mountain of problems that digital IDs are going to solve for all of us.

We already know that digital IDs will help counter populism, and illegal immigration, and crime, and benefit fraud, and terrorism, and pandemics.

But they’ll help tackle poverty now as well. That’s according to a report published a few days ago by the Bill & Melinda Gates Foundation:

Digital IDs are an effective tool against poverty. A global solution is making them available to millions.

I do love “making them available” in that sentence. As if the world is full of people desperately crying out for digital ID that the powers that be have been unable to supply, when the truth is literally the exact opposite of that.

The aim of the report is to promote something they call the Modular Open Source Identity Platform (MOSIP), an “adaptable, modular architecture […] that any country could customize to meet its specific needs.”

For the last few years, MOSIP has been trialed in nine countries across Africa and Asia, with over 90 million people signing up. The report calls it a powerful example of how low- and middle-income countries can harness open-source technology to improve lives and accelerate development.

The developers of MOSIP are so worried about poor people in the third world not having any proper identification that they – and their backers at the Gates Foundation and the Omidyar Network – are giving it away to any country that wants it for free.

Isn’t that nice of them?

But wait, there’s more good news! The developers of MOSIP believe very strongly that interoperability is…

Keep reading

Brazil Uses Child Safety as Cover for Online Digital ID Surge

Brazil’s Chamber of Deputies has advanced a bill marketed as a child protection measure, drawing sharp condemnation from lawmakers who say the process ignored legislative rules and opens the door to broad censorship of online content.

Bill PL 2628/2022, which outlines mandatory rules for digital platforms operating in Brazil, moved forward at an unusually fast pace after Chamber President Hugo Motta approved an urgency request on August 19.

That decision cut off critical steps in the legislative process, including committee review and broader debate, allowing the proposal to reach the full floor for a vote just one day later.

The urgency motion, Requerimento de Urgência REQ 1785/2025, passed without a roll-call vote. Instead, Motta used a symbolic vote, a method that records no individual positions and relies on the presiding officer’s perception of consensus. Requests for a formal, recorded vote were rejected outright.

Congressman Marcel van Hattem (NOVO-RS) accused the Chamber’s leadership of bypassing democratic norms. He said Motta approved the urgency request to expand the “censorship” of the Lula government.

Other deputies joined the protest, calling the process arbitrary and abusive.

Under the bill, digital platforms must verify users’ ages, take down material labeled offensive to minors, and comply with orders from a newly created federal oversight authority.

That body would hold sweeping powers to enforce regulations, issue sanctions, and even suspend platforms for up to 30 days in some circumstances, potentially without a full court decision.

Although the urgent request had been filed back in May, it gained renewed traction after social media influencer Felca released a series of videos exposing what he called the “adultization” of children online. His content prompted widespread media coverage and pushed the topic of online child safety to the forefront. In response, Motta committed to fast-tracking related legislation.

Keep reading