Digital identity is the infrastructure crisis no one admits
Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.
In the early days of the internet, you didn’t need a password to browse, and online communities ran on good faith and shared curiosity. But as the web evolved into the infrastructure of modern life, carrying our money, our politics, and our information flows, digital identity never caught up.
Summary
- Identity is the missing layer of the internet — while we’ve digitized commerce and communication, online trust still rests on fragile, centralized logins and surveillance systems.
- Verification ≠ identity — proving you hold a key or match a photo isn’t enough; true digital identity must be portable, composable, and tied to both humans and AI agents.
- AI platforms are becoming dangerous gatekeepers — without trustworthy identity, we risk a future where bots, corporations, and governments control access, incentives, and even speech.
- Current fixes fall short — fragmented age-verification tools and surveillance-heavy systems raise more privacy questions than they solve.
- The solution: self-owned, privacy-preserving identity — cryptographic passports and zero-knowledge proofs can enable scalable trust without sacrificing freedom, creating a post-platform internet built on authenticity.
We’ve digitized commerce, communication, and computation, but identity is still a patchwork of logins and surveillance. The very thing that enables trustworthy relationships in the physical world, knowing who you’re interacting with, is nonexistent online.
Digital identity is the missing layer of the internet. Without it, everything we build rests on sand.
Verification isn’t enough
We often confuse identity with verification. Proving that you hold the private keys to a wallet, or that your face matches a passport photo, is only part of the story.
But identity must do more. It must be portable and composable across systems, supporting not just access but trust. And it must work not just for people, but also for the bots and agents we increasingly rely on.
Trust infrastructure is the fundamental problem we have to solve to fix digital identity.
The perfect storm
AI is currently being built the way platforms were built: with a single point of failure. We’ve seen this movie before on the web, where Twitter and Facebook centralized the discovery layer of the internet, concentrating control over what we see, share, and believe. AI is heading in the same direction, with a handful of companies owning the gateways to intelligence itself. If this trajectory continues, the future of AI will be defined not by open innovation, but by gatekeepers who control the inputs, outputs, and incentives of the entire ecosystem.
AI platforms are fast becoming the new gatekeepers of human activity. They train on our conversations and increasingly act on our behalf. But they lack accountability.
AI agents can generate content, apply for jobs, purchase products, and even negotiate contracts. But how do you know if that agent is operating on behalf of a real, unique human? Or a farm of coordinated bots? If you can’t tell the difference, you can’t trust the output.
The question becomes: how do we prove personhood and tie it to real accountability, without giving up privacy or control?
The current system is failing us
Last week, the EU launched a prototype age-verification app across five countries, claiming to use zero-knowledge proofs to confirm that someone is over 18 without exposing their identity. The move is part of the EU’s broader Digital Services Act enforcement and a signal that lawmakers are finally starting to treat identity as infrastructure.
In the UK, where age verification has already been mandated under the Online Safety Act, platforms are relying on everything from facial recognition to credit card checks to behavioral data, often powered by opaque third-party providers.
These fragmented approaches raise more questions than they answer. Who stores the data? Who decides who gets access? And what happens when AI systems start using this data to infer, manipulate, or impersonate our identities?
You need only look at the privacy policy of AI startups like Friend, which states it can use data from “everything you say, hear, and see,” to realize how far we’ve already drifted toward the normalization of surveillance.
Scaling trust
To establish and scale trust, we need ways to prove uniqueness and accountability. But to protect freedom, we must do it without exposing personal data, linking everything on-chain, or submitting to government-run surveillance regimes.

Today, identity is centralized and owned by platforms and governments, along with all the data tied to it, leaving individuals with no real control over who sees it, how it’s used, or when it can be taken away. Owning your identity means holding it yourself, not renting it from a provider. This starts with a secure one-to-one mapping between a biological human and a digital representation, encrypted and held locally: a kind of cryptographic passport that is verifiable, portable, and private.
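To make that concrete, here is a minimal sketch, not the specification of any particular project, of what a locally held cryptographic passport could look like: a key pair generated on the holder’s device, used to sign claims that anyone can check against the public identifier alone. It uses the open-source Python `cryptography` package; the class and claim names are illustrative assumptions.

```python
# A hedged sketch of a self-held "cryptographic passport": the signing key
# is generated and kept on the user's device, never registered with a
# platform. All names here are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class LocalPassport:
    def __init__(self):
        # The private key stays on the holder's device.
        self._signing_key = Ed25519PrivateKey.generate()
        self.public_id = self._signing_key.public_key()  # shareable identifier

    def attest(self, claim: bytes) -> bytes:
        # Sign a claim locally, e.g. "controls agent X" or "is over 18".
        return self._signing_key.sign(claim)

# Verification needs only the public identifier, not a platform account.
passport = LocalPassport()
claim = b"holder-is-a-unique-human"
signature = passport.attest(claim)
passport.public_id.verify(signature, claim)  # raises if the signature is invalid
```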
From there, we can use zero-knowledge proofs to let users prove traits like age, location, and credentials without disclosing the underlying information. Combined with social graph validation, this would allow us to create identity networks that grow virally, not through centralized registration but through real human connections.
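The proof systems needed for statements like “over 18” (zk-SNARK circuits, BBS+ credentials, and similar) are too heavy to show here, but the core idea, proving you know something without revealing it, fits in a few lines. Below is a toy, non-interactive Schnorr proof of knowledge: the prover convinces a verifier that they know the secret behind a public value y = g^x mod p without ever sending the secret. The tiny group parameters are for illustration only.

```python
import hashlib
import secrets

# Toy group: p is a safe prime (p = 2q + 1) and g generates the order-q
# subgroup. Far too small for real security; chosen so the math is easy to follow.
p, q, g = 467, 233, 4

def challenge(*values: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the public transcript.
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(secret_x: int) -> tuple[int, int, int]:
    # Prove knowledge of x with y = g^x mod p, revealing nothing about x.
    y = pow(g, secret_x, p)
    r = secrets.randbelow(q - 1) + 1   # one-time random nonce
    t = pow(g, r, p)                   # commitment
    c = challenge(g, y, t)             # challenge
    s = (r + c * secret_x) % q         # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = secrets.randbelow(q - 1) + 1  # the holder's secret, never shared
print(verify(*prove(secret)))          # True: verifier learns only that the secret is known
```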
This system covers humans and AI agents alike, ensuring that every autonomous actor on the network can be tied back to a real, accountable individual without ever needing to reveal who they are.
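One plausible shape for that accountability, again a sketch under assumed names rather than a defined standard, is a delegation credential: the human’s passport key signs the agent’s public key, so a counterparty can check that some accountable passport stands behind the agent without learning who its holder is.

```python
# Hedged sketch: a human passport key vouches for an AI agent's key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

human_key = Ed25519PrivateKey.generate()  # held locally by the human
agent_key = Ed25519PrivateKey.generate()  # held by the agent process

# The human signs the agent's public key, producing a delegation credential.
agent_pub = agent_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
delegation = human_key.sign(b"delegates-to:" + agent_pub)

# A counterparty checks the chain: the agent proves control of its key, and
# the delegation shows an accountable passport stands behind it.
human_key.public_key().verify(delegation, b"delegates-to:" + agent_pub)
```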
A post-platform internet
Just as property rights enabled the Industrial Revolution and Bitcoin (BTC) enabled permissionless finance, we need to unlock the next evolution of digital coordination: authenticity at scale.
Every human should have a portable, self-owned identity that can be used across platforms. We also need to ensure that bots and agents can be audited and held accountable, and that DAOs and marketplaces can make decisions based on real, unique participants, not Sybil attacks or fake accounts.
The world we’re sleepwalking toward
Let’s be honest about where this is heading if we do nothing. Over 50 countries are developing CBDCs, AI platforms are cooperating with governments, and wearable devices record our speech, location, heart rate, and more. The most sensitive data about our behavior, thoughts, and preferences will sit in private systems waiting to be breached or weaponized.
If we don’t act now, centralized identity, CBDCs, and AI platforms will converge into a system where a government can cut you off entirely for something you say in public, much as the USSR once did, only a hundred times more efficient, more permanent, and harder to escape.
What we need is a proactive identity layer for the entire internet. Not just for web3, but for every digital interaction, whether it’s social, financial, creative, or autonomous. One that’s not owned by governments or corporations and verifies human uniqueness without surveillance. One that prioritizes privacy, dignity, and individual freedom at the protocol level.
The future of the internet demands more than patches; it demands new primitives.