28/06/2023 15 min read

Nomad ID: The first steps


Introduction

In a previous post I alluded to this construct of a nomadic identity which we can utilise in ActivityPub to power a bunch of different functionalities:

  1. Single-sign-on solutions powered by cryptographic signatures
  2. Follower moves without involving the old server
  3. Linking together multiple accounts verifiably without the need of any kind of web presence

And maybe more. Be creative! It’s basically a set of signing keys.

While talking about it, Erlend came up with a name I became quite fond of: Nomad ID.

Note about the name

If you are a little confused because I alluded to an ActivityPub solution and now it is just named like it’s a more general thing, don’t be!

This project can still provide the features talked about in my previous post, the technologies at hand just make this a lot more generic and even usable outside of ActivityPub.

Therefore the name makes sense: you can use this as an overarching identity provider everywhere!

Yay, yet another “general identity provider”… Mandatory XKCD.

Building blocks

Building protocols/standards often involves taking existing things and sticking them together like Lego bricks.

For our identity platform I propose the following fixed building blocks:

  1. DIDs (Decentralised Identifiers) as the addressing scheme
  2. Ed25519 as the signature scheme
  3. Key delegation via chains-of-trust

And some more, let’s say, “experimental” blocks:

  1. UCAN for delegation and revocation tokens
  2. A Kademlia-based DHT (for example via libp2p) for decentralised distribution

Why DIDs?

You might wonder why I propose DIDs. They are mainly used by cryptocurrency bullshit and cryptocurrencies are ideologically opposed to pretty much everything I believe in.

Yes, it’s true that they are used a lot by cryptocurrency projects, but so is cryptography in general (and I still really like it).

See, DIDs are not really tied to any fixed usage; they are here to make addressing in decentralised networks a little more coherent and easier to manage for both humans and computers.

The basic anatomy of a DID is as follows:

did:[method]:[some payload]

There is nothing more to it. The DID is parsed, and the method is then used to determine how to interpret the payload. It’s similar to what Multibase is trying to do for all the different Base[whatever] encodings.
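To make that anatomy a little more tangible, here is a minimal Rust sketch of splitting a DID into its method and payload; it ignores the full W3C DID syntax and only handles the happy path:

// Minimal sketch: split a DID into its method and method-specific payload.
fn parse_did(did: &str) -> Option<(&str, &str)> {
    // Strip the fixed "did:" scheme prefix
    let rest = did.strip_prefix("did:")?;
    // Everything up to the next ':' is the method, the remainder is the payload
    rest.split_once(':')
}

fn main() {
    // Placeholder payload, just to show the shape
    if let Some((method, payload)) = parse_did("did:nomad-plc:z6Mk...") {
        println!("method = {method}, payload = {payload}");
    }
}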

Instead of using did:cryptocurrency-bs:[some long string of characters] we use our in every way perfect did:nomad-plc:[another long string of characters], and instead of delegating resolving the DID to some blockchain, we just use our central testnet authority.

To get back to the original question at hand: Why DIDs?
The concept of DIDs allows us to iterate quickly over different approaches without breaking existing identities.

An example

In a previous post I gushed about FEP-c390 and how great it is, but also pointed out the flaws around revocability of the default method chosen throughout the definition of the FEP.

The FEP does not force the usage of did:key.

I elaborated a bit on how this “Nomad ID” idea could work together with FEP-c390 in this section

Well, let’s say we discover the holy grail of decentralised identity management: The possibility of properly revoking a root keypair. What now?

We should upgrade immediately, right?

But how can we do it in a graceful way that doesn’t either:

  1. Force us to remodel a bunch of our past proposals to not interfere with each other
  2. Break backwards compatibility

Well, with the power of DIDs!

Since the old DIDs are using the key method, we already know what they are and how to use them.

And our new DIDs, let’s call them did:nomad-plus, also have a unique method name which tells all new software: “Hey, instead of using the legacy method, use our new fancy magic to resolve the identity”.

Old software will just refuse to work with them since it doesn’t know how to handle that particular method

The DID method is essentially our version discriminator.

This gives us a clean upgrade path which is why I compared it to Multibase earlier.

While Multibase gives us an easy upgrade path from, for example, Base58 to Bech32, DIDs give us an easy upgrade path from did:key to something better, something more permanent.
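To illustrate the “version discriminator” idea in code: resolution can simply dispatch on the method string and reject everything it does not know. This sketch reuses the parse_did helper from earlier, and did:nomad-plus is of course just the hypothetical future method from above:

// Reuses the parse_did helper from the earlier sketch.
#[derive(Debug)]
enum Resolver {
    DidKey(String),    // legacy identities: the payload *is* the encoded public key
    NomadPlus(String), // hypothetical future method with proper root key rotation
}

fn pick_resolver(did: &str) -> Result<Resolver, String> {
    let (method, payload) = parse_did(did).ok_or_else(|| "malformed DID".to_string())?;
    match method {
        "key" => Ok(Resolver::DidKey(payload.to_string())),
        "nomad-plus" => Ok(Resolver::NomadPlus(payload.to_string())),
        // Old software simply refuses methods it doesn't know how to handle
        other => Err(format!("unknown DID method: {other}")),
    }
}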

Key delegation? What?

I should probably start by explaining the idea of key delegation. The idea is that you aren’t always working with your “root” key, i.e. the key that directly represents you.

Instead you generate a bunch of sub-keys along with delegation proofs for these sub-keys. These proofs are signed by your root key and then published.

The proofs are there for telling others that these sub-keys are authorised to do certain things in your name (such as logging in or creating more sub-keys).

Sub-keys can then have sub-keys of their own (with either equal permissions or a subset of them), which creates a chain-of-trust.

After creating the first pair of sub-keys you can take your root key, move it onto a USB stick, and just lock it in a vault, never to be seen again, because now you can do pretty much everything with the sub-keys.
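To make the delegation idea a bit more concrete, here is a small sketch using the ed25519-dalek, rand, and hex crates; the proof format (what exactly gets signed and how scopes are encoded) is entirely made up for illustration, a real design would use a proper canonical encoding (or UCAN, which comes up later):

use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

// Completely made-up delegation proof format, purely for illustration:
// "this sub-key is allowed to perform these actions on my behalf".
struct DelegationProof {
    sub_key: VerifyingKey,
    scopes: Vec<String>,
    signature: Signature, // signed by the parent (here: the root) key
}

// In reality you would want a proper canonical encoding here
fn proof_payload(sub_key: &VerifyingKey, scopes: &[String]) -> Vec<u8> {
    format!("{}:{}", hex::encode(sub_key.as_bytes()), scopes.join(",")).into_bytes()
}

fn delegate(parent: &SigningKey, sub_key: VerifyingKey, scopes: Vec<String>) -> DelegationProof {
    let signature = parent.sign(&proof_payload(&sub_key, &scopes));
    DelegationProof { sub_key, scopes, signature }
}

fn verify(parent: &VerifyingKey, proof: &DelegationProof) -> bool {
    let payload = proof_payload(&proof.sub_key, &proof.scopes);
    parent.verify(&payload, &proof.signature).is_ok()
}

fn main() {
    let root = SigningKey::generate(&mut OsRng);
    let login_key = SigningKey::generate(&mut OsRng);

    // The root key signs a proof that authorises the sub-key for logins,
    // then it can go back into the vault.
    let proof = delegate(&root, login_key.verifying_key(), vec!["login".into()]);
    assert!(verify(&root.verifying_key(), &proof));
}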

But what if one of these sub-keys gets compromised?

Luckily there is a good answer to that: you revoke it with a key higher up in the hierarchy. Even if a key close to the top of your key tree gets compromised, you can still revoke it with your root key.

The real issue arises when the root key gets compromised, because the root key IS you.

If your root key is gone, your identity is gone (that’s why I said you should lock your root key in a vault).

I’m still brainstorming ideas on how you could recover from a root key leak.

The folks at Noosphere are hypothesising about threshold signatures of trusted parties, where a threshold of parties has to be reached to successfully sign a root key update, but that’s very much in the realm of theory for now.

Ed25519? Won’t that give you headaches down the line?

If you are into cryptography, you might know that verifying Ed25519 can be a pain, especially across multiple libraries/language ecosystems.

You might have come across the excellent blog post (give it a read, it’s really interesting!) by Henry de Valence who is, among other things, a co-author of the Dalek cryptography crates and one of the people behind Ristretto, the Decaf adaptation extended to cofactor-8 groups.

A meme showing a crudely drawn stick figure and a weird 3D looking humanoid face having a conversation.

Face: 'how many isogenies are you on'
Figure, with the headline of the Decaf paper over it: 'like,, maybe 1 or 2, right now. my dude'
Face: 'you are like a little baby'
Face: 'watch this'
Face: *looks like an ascended god-like figure with a diagram of the SIDH key exchange in the background*

I was looking for a use of this meme for so damn long and it’s still not 100% perfect. DAMN IT!

The blog post highlights incompatibilities between Ed25519 implementations due to underspecified validation criteria or criteria that changed years after the original RFC was published.

This is a real issue, especially for such a widely used signature scheme.

Despite these problems I’m still in favour of using Ed25519, and here’s my reasoning:

  1. Ed25519 is everywhere; it feels like for every Turing-complete language there’s a pretty good chance that there’s an Ed25519 implementation.

  2. There is a somewhat widely adopted solution to this problem in the form of the unofficial ZIP-215 validation criteria (as mentioned in Henry’s post).

    These criteria aim to make validation consistent across all implementations and are fully backwards compatible with pretty much all implementations out there.

Some notes on why I don’t want to use RSA, a Weierstrass curve, or a Koblitz curve

So, TL;DR: Ed25519 is everywhere, and very competent people are working on untangling the mess that is the validation criteria (8 years after the original RFC was published).

UCAN

Onto another choice in my “experimental” technology stack: UCAN.

UCAN is a rather young standard for cryptographically verified tokens with the capability to delegate and to revoke “sub tokens”.

The standard itself is pretty interesting; it reuses the JWT standard to form the tokens themselves.

UCAN explicitly got rid of the none algorithm, which is great, and it is (by default) not susceptible to key confusion attacks since signatures are verified against DIDs, where it defaults to did:key.

Every time someone uses the phrase “military-grade encryption”, I get a bit sceptical about the entirety of the project.
(especially when the project isn’t even about encryption, but instead about signature schemes)

Please just tell us the actual names or link us to a page with the supported schemes.

(this is from the UCAN website)

Screenshot from the UCAN website of one of the following bulletpoints: 'Secure: military-grade encryption'

Now as to the “why”: The standard looks pretty good, is based on a format for signed tokens that has been around for a while now (JWT), and is somewhat well understood by now.

It also has all the capabilities I wanted that weren’t covered by the pure did:key method used by the identity proof FEP that sparked this whole journey:

  1. Delegation of capabilities to sub-keys
  2. Revocation of previously delegated capabilities

Starting out with our own placeholder method

Everything has to start somewhere, and I believe that to be able to iterate quickly we should start off with something simple.

That simple thing is a centralised authority server which keeps track of all the updates and stores them in a single centralised database.

Yes, this isn’t decentralised but we will get there, hear me out on this one!

In the testing stages we first want to get our protocol running. By protocol I mean the key submission, delegation, revocation, etc.

We want to iterate over ideas quickly and easily, and worrying about a P2P delivery system would probably just get in the way.

After the initial simpler implementation has a solid foundation to build off of, we’re gonna add the decentralisation part.

It is important to not forget that we want to go decentralised with this system at some point.

During this stage we need to keep in mind that we can’t rely on the simplifications a central server might give us.

Initial functionality

The initial functionality should be a small yet powerful feature set, so I propose the following:

  1. Initial submission of an identity (duh.)
  2. Key delegation
  3. Sub-key revocation (a key higher up in the chain can revoke “lower” keys)

This feature set is relatively small but gives us enough to build an authentication system on (see the sketch below)!

Remember: We just need keys that can sign data, everything else is extra functionality
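To make the scope of that feature set a bit more concrete, the placeholder authority could expose something as small as the following HTTP API. Every route and type here is hypothetical and purely illustrative (assuming axum, serde, and tokio as dependencies):

use axum::{routing::post, Json, Router};
use serde::Deserialize;

// All of these types and routes are hypothetical, just to sketch the scope of
// the initial, centralised authority server.

#[derive(Deserialize)]
struct SubmitIdentity {
    root_key: String, // e.g. a multibase-encoded Ed25519 public key
}

#[derive(Deserialize)]
struct Delegation {
    identity: String, // identity the new sub-key belongs to
    proof: String,    // delegation proof, signed by a key already in the chain
}

#[derive(Deserialize)]
struct Revocation {
    identity: String,
    revoked_key: String, // key being revoked
    proof: String,       // signed by a key *higher up* in the chain
}

async fn submit_identity(Json(_req): Json<SubmitIdentity>) { /* store in the database */ }
async fn delegate_key(Json(_req): Json<Delegation>) { /* verify + append to the key chain */ }
async fn revoke_key(Json(_req): Json<Revocation>) { /* verify + mark the key as revoked */ }

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/identities", post(submit_identity))
        .route("/identities/delegations", post(delegate_key))
        .route("/identities/revocations", post(revoke_key));

    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}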

The journey towards decentralisation: DHTs

So, enough about centralisation and quick prototyping. Let’s get into the fun ideas for decentralisation!

I have mentioned DHTs a lot, but what exactly are those anyway?

Well, DHT stands for “Distributed Hash Table”. It’s basically just a really large KV storage distributed across a bunch of nodes where every node usually only keeps a subset of the entire key space (i.e. partitioning).

What this means is that node 1 maybe just has the keys beginning with “A-P” and node 2 has the keys beginning with “Q-Z”.

If node 1 then wants to fetch identity “x[something]” it goes to node 2 and asks them “can you give me the data associated with x[something]?”. Node 1 then receives the value and can work with it.
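As a toy sketch of that partitioning idea: hash both the node IDs and the key into the same ID space and let the “closest” node be responsible for the key. Real DHTs like Kademlia (which comes up below) use XOR distance over much larger IDs plus routing tables, but the core idea of “closeness decides responsibility” is the same:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a string into a small, shared ID space (a real DHT would use
// 160/256-bit IDs from a cryptographic hash).
fn id_of(value: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

// The node whose ID is closest to the key (by XOR distance) is responsible for it.
fn responsible_node<'a>(nodes: &[&'a str], key: &str) -> &'a str {
    let key_id = id_of(key);
    nodes
        .iter()
        .copied()
        .min_by_key(|node| id_of(node) ^ key_id)
        .expect("at least one node")
}

fn main() {
    let nodes = ["node-1", "node-2", "node-3"];
    // Ask whichever node is "closest" to the identity for its data
    println!("{}", responsible_node(&nodes, "did:nomad-plc:alice"));
}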

To join the network in the first place, nodes need so-called “bootstrap nodes”. These are really just regular network participants through which new nodes can discover the rest of the network. They can be run by sponsors or by volunteers; it doesn’t matter who runs them as long as there is at least one running.

Well, this sounds a little complicated, right? Where do we even start? The most popular DHT design is Kademlia (here’s the original paper if you wanna dive deeper into it).

It’s used by popular networks such as IPFS, I2P, and many others. So let’s just build on that as well!

Implementing something like this from scratch however is a little complex. Kademlia gives us the basics, the abstract RPC messages, underlying algorithms, etc. but no concrete design. We would need to design it ourselves.

That sounds annoying. Especially since we are already designing a system of our own! Isn’t there an implementation we can use with relative ease?

Luckily there is! The folks at libp2p have published their code in the form of a, quote, “modular peer-to-peer networking framework”. And this framework includes an implementation of Kademlia.

If someone wanted to switch from libp2p’s Kademlia flavour to something homegrown, they could just utilise the DID method mechanism.

To reference prior art, we can take a look at Noosphere’s implementation of similar ideas. In fact, a lot of this design is inspired by how Noosphere handles their identity management.

Building together: Noosphere

A lot of people might not be familiar with what Noosphere is.

Noosphere is a project by the Subconscious network. They want to build a, quote, “protocol for thought”.

Basically their idea is to build a decentralised network for notes, blog posts, and really anything you could write down in text form. All of it interlinkable, permanent, tied to a cryptographically verifiable identity, and with a versioning history.

Super cool concept! It combines the storage system of IPFS with a custom-made name system based on a Kademlia DHT, and verifies identities via chains-of-trust.

With that last sentence you might see where I’m going with this. Noosphere provides most of the things that we want to achieve with our system (namely the cryptographically verifiable identities).

Now the question is: Instead of building our own network, should we attempt to build on top of Noosphere?

Building a system of this kind ourselves is technically complex and requires reimplementations in a bunch of different languages.

The goal for this is to provide, among other things, a somewhat nomadic identity for the Fediverse

The fediverse is a diverse space with a lot of implementations in a lot of languages. We would need implementations in Rust, Ruby, Elixir, Golang, Java, etc.

Note that using Noosphere would not magically solve these issues. But we would have a combined force of developers to drive these implementations.
Not only that, but Noosphere is implemented in Rust and has bindings to C and Swift.

On top of the implementation effort, building our own network also means providing bootstrap servers for the DHT (once we decide to go decentralised), which means we also have hosting costs.

If we were to build on Noosphere, we would offload all of the complexity onto the Subconscious network.

This decision is up to both the protocol designers and, of course, the communities that would be making use of this (in this case, mainly the fediverse community).

The upsides of this approach are obvious: less technical and administrative complexity, since all of this would be managed by Noosphere.

Potential downsides are that we would have to:

  1. Keep up with a work-in-progress protocol
  2. Accept overhead since we essentially have to run full Noosphere nodes + our own logic

Can we do third-party logins?

Well, after talking about all of those technical details, let’s talk about something else: Third-party logins.

It makes sense for most ActivityPub implementations to ship with a node implementation, i.e. a piece of software that directly connects and integrates with the network.

But what about, let’s say, a comment section on a small blog? Do we want them to host an entire piece of somewhat complex infrastructure?

Most people just looking to add a log-in wouldn’t want to do that and making this a requirement would hinder adoption of this technology.

So, how can we fix this?

Gateways

We can start hosting services that let others make use of an already running node.

That way they can just make an HTTP call to some external service and get the full Nomad ID experience without the complex setup.
(This can be compared to IPFS with their ipfs.io gateway).
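For a blog comment system that could boil down to a single HTTP call; the gateway URL, route, and response shape here are entirely made up for illustration (assuming reqwest with the json feature, serde, serde_json, and tokio):

use serde::Deserialize;

// Hypothetical response shape of a hypothetical gateway, only for illustration.
#[derive(Deserialize)]
struct VerificationResponse {
    valid: bool,
    identity: String, // DID the signed challenge resolves to
}

// Instead of running a full node, just ask a gateway operated by someone else.
async fn verify_login(signed_challenge: &str) -> Result<VerificationResponse, reqwest::Error> {
    reqwest::Client::new()
        .post("https://gateway.example.com/verify") // made-up endpoint
        .json(&serde_json::json!({ "challenge": signed_challenge }))
        .send()
        .await?
        .json::<VerificationResponse>()
        .await
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let result = verify_login("<signed challenge from the user>").await?;
    println!("valid: {}, identity: {}", result.valid, result.identity);
    Ok(())
}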

Still, the possibility of gateways should not distract from making the software simple to host and encouraging people to start up nodes.

And how will this work with ActivityPub?

This whole proposal stemmed from the idea of having a low barrier-of-entry nomadic identity solution for the fediverse, so how exactly are we going to combine this idea with ActivityPub?

Well, the basic idea is that we are making use of FEP-c390.

This FEP defines identity proofs via DIDs. However, it doesn’t define what kind of DID methods we should use to produce these proofs.

Based on this, we could sign our identity proof with one of the keys in the chain-of-trust and attach the DID pointing to the user’s root identity in the subject field.

Something like this:

{
    "type": "VerifiableIdentityStatement",
    "subject": "did:nomad-plc:[identifier]",
    "alsoKnownAs": "https://example.com/users/alice",
    "proof": {
        "type": "JcsEd25519Signature2022",
        "created": "2023-06-28T00:00:00Z",
        "verificationMethod": "did:key:[some sub-key that has the required scopes delegated to it]",
        "proofPurpose": "assertionMethod",
        "proofValue": "<proof-value>"
    }
}

The implementation then has to ensure that the did:key set as the verificationMethod is somewhere in the chain-of-trust of the identity set in the subject field.
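As a rough sketch of that check: walk the delegation chain from the verificationMethod key back towards the root identity named in the subject field. The record layout here is made up, and a real implementation would of course also have to verify the signature on every delegation proof along the way:

// Made-up delegation record layout, only to sketch the shape of the check.
struct DelegationRecord {
    parent: String, // key doing the delegating
    child: String,  // sub-key being delegated to
    revoked: bool,
}

// The verificationMethod key has to be reachable from the root identity in the
// subject field via valid, unrevoked delegations.
fn key_is_in_chain(root: &str, verification_method: &str, chain: &[DelegationRecord]) -> bool {
    // The root key may of course sign proofs itself
    if root == verification_method {
        return true;
    }

    // Walk upwards from the verification method towards the root, bounded by the
    // number of records so a malformed (cyclic) chain can't loop forever
    let mut current = verification_method;
    for _ in 0..chain.len() {
        match chain.iter().find(|r| r.child == current && !r.revoked) {
            Some(record) if record.parent == root => return true,
            Some(record) => current = record.parent.as_str(),
            None => return false,
        }
    }

    false
}

fn main() {
    let chain = vec![DelegationRecord {
        parent: "did:nomad-plc:root".into(),
        child: "did:key:login".into(),
        revoked: false,
    }];
    assert!(key_is_in_chain("did:nomad-plc:root", "did:key:login", &chain));
}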

What DID methods in particular are used in the end is still open for consideration.

Closing thoughts

I think this is a very interesting direction for decentralised identity management independent from existing hyper-capitalistic blockchain-based solutions.

Blockchains have some other very real issues apart from ideological and environmental ones, namely ever-growing storage requirements, which we want to avoid as much as possible here.

There is still the issue of revocability of the root key. Aggressive delegation might alleviate some of the issues, but it unfortunately doesn’t fix the very real underlying issue that you are one leak away from losing your identity.

I hope this encourages some discussion and will get people to chime in and give their take.