Some Thoughts on Wallets (Part 1)
Summary
- Charles Hoskinson discusses the Cardano wallet architecture, focusing on an ecosystem-wide view and plans for a dedicated presentation website.
- The wallet architecture is broken down into layers: user interface, wallet logic, network, data storage, security and cryptography, indexing, and additional services.
- Alternative clients mentioned include a Go client by Blink Labs, a Rust client by TxPipe, and a TypeScript client by Harmonic Labs, alongside the reference Haskell node.
- The aim is a Cardano test suite similar to Ethereum's, which would allow certification of the various full node implementations.
- The goal is to enable users to choose their backend and wallet provider, with options for remote or self-hosted nodes.
- Upcoming hardware discussed includes Nvidia's Project Digits for zero-knowledge proofs and Iagon devices for decentralized data storage.
- Lace will introduce a desktop mode and a mobile mode in 2025, with a focus on improving user experience and security.
- Wallet scripts will allow advanced users to automate transactions and manage shared wallets with multisig capabilities.
- The DApp store is still in development, aiming to enhance the Cardano ecosystem and user experience.
- Hoskinson emphasizes the importance of standardization and collaboration within the Cardano ecosystem to improve usability and decentralization.
Full Transcript
Hi, this is Charles Hoskinson broadcasting live from warm, sunny Colorado. Always warm, always sunny, sometimes Colorado. Today is January 10th, 2025. I'm making a quick whiteboard video to talk a little bit about wallets. I'll make a far more extensive video in a bit, but I'm trying new modalities of explanation.
You guys know I do the whiteboards, and now I'm starting to write some pretty complicated HTML and CSS code to do a kind of low-tech presentation. I'll show you what I've put together. Let me go ahead and share my screen. What I want to talk about is the Cardano wallet architecture and look at this from an ecosystem-wide view. Eventually, I plan to create a website specifically for my presentations, and as I do these presentations, I'll upload them so you can access the HTML files.
I've made this a bit interactive, and I thought about breaking it down into different logical layers of the system: the user interface layer, the wallet logic layer, the network layer, the data storage layer, security and cryptography, indexing, and additional services. Where I made it interactive is down here; you can click and get a bunch of different text. For example, the user interface layer of a wallet exists because the Cardano network's cryptographic protocol is complex under the hood. The UI simplifies this complexity, focusing on core user needs like viewing balances and sending tokens in an accessible manner. By abstracting away lower-level logic, users can confidently operate wallets without needing to be blockchain experts.
This layer also fosters user trust through clear design patterns and consistent messaging. Then we have the wallet logic layer, the network layer, the data storage layer, the security and cryptography layer, the indexing layer, and finally, some additional services that can be connected to the wallet. I'll go into far more detail when I clean all of this up and put it together. I've also been working on a better diagram. What I'd like to foreshadow in this video is a little bit about alternative clients, which is something people talk a lot about.
We have a Go client that Blink Labs is working on, TxPipe is working on a Rust client, and Harmonic Labs is working on a TypeScript client; they're all in various states of maturity. Then, obviously, there's the reference Haskell node, which connects over IPC to the Cardano wallet layer, and that talks to your wallet UI. This part is conventional, so you can think of it in terms of REST, JSON-RPC, or GraphQL; there are all kinds of ways you can imagine connecting a wallet UI to the Cardano wallet. The node itself uses the Ouroboros mini-protocols that we're famous for, and those are kind of exotic; people love them or hate them.
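To make the UI-to-backend connection concrete, here is a minimal sketch of what a JSON-RPC style request from a wallet UI might look like. This is a hypothetical illustration: the method name `wallet_getBalance` and the request shape are assumptions for the example, not part of any real Cardano backend API.

```typescript
// Hypothetical sketch: a wallet UI building a JSON-RPC 2.0 request for a
// backend. The method name "wallet_getBalance" is illustrative only; real
// backends each define their own APIs.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: unknown[];
}

function buildBalanceRequest(address: string, id: number = 1): JsonRpcRequest {
  return { jsonrpc: "2.0", id, method: "wallet_getBalance", params: [address] };
}

// In a real UI, this object would be POSTed with fetch() to whichever backend
// the user selected (a remote service or a self-hosted node).
const req = buildBalanceRequest("addr1_example");
```

The point of the sketch is only that the UI-to-backend hop is ordinary request/response plumbing, in contrast to the typed Ouroboros mini-protocols the node itself speaks.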
Those typed channels are a mainstay of the design. All these other clients take different approaches that cover different parts of the stack. Sebastien from dcSpark even made a great video about the different ways to think about these nodes. The point is that we really want to get to what the Ethereum ecosystem has done. If you look at the common tests shared by all Ethereum implementations, they actually have a real test suite that they've developed.
Look at all these different implementations of the Ethereum Virtual Machine, the full client. There's Mantis, which we created in Scala years ago during the old Ethereum Classic days; so technically, I was an Ethereum core developer, because we got it working with the Ethereum network. There's Besu, Mana, Geth, Parity, and so on. These tests create a surface where, regardless of which backend you happen to be using, you can ensure that the client works as intended. Where we want to go is to have a Cardano test suite plus blueprints.
Those blueprints are the formal specs, which are incomplete right now. The idea is that some combination of these allows you to validate your full client. We would like to get to a point where, no matter what your experience happens to be, if a user wants to run a full node instead of just downloading Daedalus, they would pick their preferred wallet, like Vespr, Lace, or Eternl. Ideally, they would choose a backend, which could be remote, and then connect to the backend of that particular wallet provider. That could run on Blockfrost or Maestro, or it could be self-hosted.
If it's self-hosted, the idea is that you would have a menu of options (the Haskell node, the Go node, and so on), and all of those options would be certified against a standard. That standard is how you validate your full client. I met with the TxPipe guys representing Pragma, and we discussed a working group between Intersect and Pragma. The idea is that this working group would discuss what a test suite could entail, and anyone building a full node, like Blink Labs and Harmonic Labs, is welcome to participate. The goal is to reach a point where we have a certification concept, and then people can self-certify against that standard, just as we once did with Ethereum.
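The self-certification idea can be sketched as a harness that runs one shared set of test vectors against any client implementation. Everything here is a hypothetical illustration (the vector format, the `NodeImpl` shape, the toy client), since the real Cardano test suite and blueprints are still being designed.

```typescript
// Illustrative certification harness: one shared vector set, many clients.
interface TestVector {
  name: string;
  input: string;
  expected: string;
}

interface NodeImpl {
  name: string;
  evaluate: (input: string) => string;
}

// Runs every vector against an implementation and reports a pass count.
function certify(impl: NodeImpl, vectors: TestVector[]): { passed: number; total: number } {
  let passed = 0;
  for (const v of vectors) {
    if (impl.evaluate(v.input) === v.expected) passed += 1;
  }
  return { passed, total: vectors.length };
}

// A toy "client" that upper-cases its input, plus vectors that expect that.
const toyClient: NodeImpl = { name: "toy", evaluate: (s) => s.toUpperCase() };
const vectors: TestVector[] = [
  { name: "basic", input: "abc", expected: "ABC" },
  { name: "empty", input: "", expected: "" },
];
const report = certify(toyClient, vectors);
```

The design point is that the vectors live outside any one implementation, so a Haskell, Go, Rust, or TypeScript node can each be scored against the same surface.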
The Ethereum ecosystem doesn't exactly like us, but they were gracious enough to include us because we certified against their standard. Analogously, you'd have a self-certification standard. When a person installs the wallet, they can choose their own adventure. They can decide whether to run on a remote backend or a self-hosted full node, and if that's the case, they can choose the flavor they want. You'd have your menu, and it would download the option accordingly.
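That "choose your own adventure" installer might boil down to a configuration type like the following. The variant and provider names mirror the talk (remote via Blockfrost or Maestro, self-hosted Haskell/Go/Rust/TypeScript nodes), but the types themselves are hypothetical, not a real Lace or Daedalus configuration format.

```typescript
// Hypothetical backend selection for a wallet installer: the user either
// points at a remote provider or picks a self-hosted node flavor.
type Backend =
  | { kind: "remote"; provider: "Blockfrost" | "Maestro"; apiKey: string }
  | { kind: "selfHosted"; node: "haskell" | "go" | "rust" | "typescript"; socketPath: string };

function describeBackend(b: Backend): string {
  return b.kind === "remote"
    ? `Remote backend via ${b.provider}`
    : `Self-hosted ${b.node} node at ${b.socketPath}`;
}

const choice: Backend = { kind: "selfHosted", node: "go", socketPath: "/var/run/node.socket" };
```

A discriminated union like this is one natural way to encode "remote or self-hosted, and if self-hosted, which certified node": the compiler forces every code path to handle both cases.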
Now, the goal of all this is to improve a few things. There's also a parallel workstream, which is the concept of Mithril fast sync, and there's also the concept of network contribution. Network contribution currently exists on the peer-to-peer side, so you relay transactions and blocks, but you're not doing the validation; you're just passing these things along, and there's a whole specification for how that works. However, there can be more sophisticated contributions.
These are kind of the full clients of Cardano, but then you also have this concept of a super node, where you go beyond Cardano and actually run the infrastructure of Cardano, plus the partner chains network, and potentially other layer ones as well. There's also an idea of providing service modules. Those service modules could include a Hydra node, a proof server, providing a trusted execution environment, etc. Let me show you something cool that's on the horizon. Nvidia released something really nifty, and Iagon has something really cool as well.
I want to show you what both of these have. First off, where is the future of this computing going? Guess I didn't stop sharing that, did I? Oh, okay, I'm tired. So we have Project Digits, which is basically a supercomputer in a very tiny box.
You can actually stack these, and it has a massive amount of memory—128 gigabytes. It can run models up to 200 billion parameters. It has the new Nvidia Grace Blackwell super chip, four terabytes of storage, and a petaflop of compute power, all for about $3,000. Now, why that's relevant is that it turns out those super chips for AI are also excellent at doing zero-knowledge proofs if we optimize the code. When we think about Midnight, it's going to provide a proof layer inside the system, and we're building that out.
There's a world where people buy these little boxes, and at the same time, they can use that box to host a full node. The goal would be that when you install your wallet, you can just install this on some hardware, and then you can connect your cell phone app or your desktop application to that backend, either remotely or on the same system. As long as you leave it running, you can use it in a completely trustless environment because you're self-hosted. Now, the other one is Iagon. I always talk about the home team, and these guys are more of a data storage node.
When you look at the details here, for $1,600, you're getting a Ryzen 7 with eight cores, 64 gigs of RAM, and 36 terabytes of storage. You could run it as an Iagon storage node, but you can also use it as your full wallet node. The reason I want to get there is that I'd like to reach a point where people in the Cardano ecosystem are hosting their own infrastructure. When you think about the resilience of a network, the quantity of full nodes is a significant component of it. So far, the only way to do that has been Daedalus, and we're moving beyond the age of Daedalus.
What's likely going to happen to Daedalus over the next six to nine months is that it will be deprecated from the IO perspective and transferred over to a community-led open-source project. Several people have been interested in taking that over. On the Lace side, we will work on building an open standard for how one would connect a node to wallets, and try to work with Vespr, Eternl, and Shiro, as well as all the other people in the Cardano ecosystem, to see if we can get them to adopt it as well. My dream is to make it really simple for a person to download a Mithril-enabled full node, run it in their system tray, and have it there, kind of like BitTorrent. At the very least, it would provide network resources, and over time, we want to make it easy for people to build and plug in modules or extensions to their full node to help with indexing, running Hydra, or whatever they choose.
This is going to be one of the big roadmap items for Lace in 2025, but we're taking an ecosystem-wide, standards-driven approach. That ecosystem approach means that once we solve it, we solve it for everybody, creating a practical path to a multi-node world. We want the user to be in charge of what backend they want to run, what hardware they want to run that backend on, and what services they want that hardware to provide. The Digits device I showed you is a low-cost way, relatively speaking, of having a proof super server that can do tons of zero-knowledge operations, like recursive SNARKs, if they're optimized well. The Iagon would be an example of a data super server that can store a lot of information and be used to create a decentralized storage layer inside the system.
There are other things that could be optimized for high-bandwidth applications or for running Hydra heads and spinning lots of those things up. It's kind of a choose-your-own-adventure, but the idea is that if we make it simple, it just becomes part of your stack. You stop using remote services, but you're actually self-hosting and completely in control of your entire world. It's going to take some time for the ecosystem to move toward this because we've lived in a monolithic culture for a while, but I believe there is definitely a path to do it. Now that we have Intersect and Pragma in this working group, that's one of the things that can be discussed between those two sides.
When we talk about the budget process, when alternative nodes get funding, part of that funding can be for certification against the standard. Everyone in the Cardano ecosystem has become accustomed to formal methods, peer review, and high-quality, high-assurance software. We should never lose that, but we have to understand that it's not free. It's important that when alternative nodes are constructed, there's a path to ensure they have an equivalent level of quality to what the Haskell node has been providing. Then, the user gets to decide which one makes more sense.
Some of them will have different features, some may work better for certain modules you want to plug in, and some might sync faster. Some might work better in the browser; for example, a TypeScript node can pull in a lot more stuff into the browser environment than you could with the Haskell side. However, you don't want to divorce yourself from that; you want the user to be in the driver's seat. The goal is to eventually enable the capacity to run a super node with that. It's going to take a while for all these things to percolate this year, but Lace will have a desktop mode, and that desktop mode will provide a superior experience compared to the full node that Daedalus has.
To give you some hard numbers, when we tested a Mithril node, the benchmarks showed a full sync in under an hour, whereas Daedalus takes about three days, give or take, with the same security level, relatively speaking. It's an amazing improvement, but that Mithril standard has to be integrated into the full node world, and then those interfaces have to be discussed. There's a lot of discussion about the standardization of cryptography, how interfaces plug in, and inter-wallet interoperability. Joining D was one component, because we got to talk to wallet makers and other ecosystems, and now we get to do the same within the entire Cardano ecosystem. Those conversations are ongoing and fruitful, and there's a lot to do.
It's nice because we massively improve the user experience for everybody while preserving the same level of security. We also gain a lot more decentralization and resilience. My hope is that tens of thousands to hundreds of thousands of full nodes will be installed, and people will just run them in the background without even thinking about it. This will massively improve the resilience of the system. If we have these test suites, things like the hard fork combinator will still work, the security levels will be relatively equivalent, and we'll have a lot of standards that people can use.
Overall, this will create a much more vibrant ecosystem. Furthermore, by allowing specialized hardware to come in, people can start building customized nodes with modules that enable Pub/Sub, customized modules that enable Hydra, proof servers, and other such things. When we look at the data diffusion layer of Cardano, there's a lot going on that goes beyond what just the protocols have, and typically this has been handled in a fragmented and ad hoc way, which needs to be cleaned up quite a bit. This is the first of probably a series of videos. As I mentioned, I cleaned up the presentation a little bit, and I'm trying new ways to explain things to people beyond just handwriting it.
I thought it was really cool to introduce that nifty little animation I came in with. The Lace team is also going to get a bit more vocal about these things, especially Brandon Wolf. He's a great leader, and that team is quickly building up. In addition to a desktop mode, we will also have a mobile mode. In fact, we're going to set up a dedicated team in Argentina to create the mobile clients for Lace.
We'll have an Android and iOS client. It will take some time to get there, but it is on the roadmap for 2025. We had to figure out the right format, function, and team, and centralizing all of them in one location and incentivizing them to produce is something we care a lot about. The DApp store is still on the roadmap. A lot of cool stuff is there; it was in development hell for two years, but we've managed to pull it out.
Lace has been one of the most frustrating projects because we had a huge and beautiful roadmap that got gummed up for a variety of reasons. We got past that, and now we're moving quickly. It's exciting to see what we can do. What's nice is we're not just moving as one entity; we're thinking about an ecosystem-wide approach to how these wallets live in a broader ecosystem. Lace is always going to live in the browser because that's where your money is, your transactions are, your information is—that's where your lived experiences are.
It needs to live on the cell phone as well. Just because it lives in the browser doesn't mean it should have a substandard experience or security; it should be a first-class citizen as a desktop application. By allowing you to connect remotely to a full node and process using that as your backend, that's a very powerful step forward. The integration of identity into this is also another powerful step forward. It makes no sense to solve this just for Lace; it makes sense to create standards across the ecosystem so that other wallet providers can benefit from that.
They don't have to spend a lot of money or effort, and suddenly they can offer their user base a self-hosting option. If you're already a user of Vespr, Eternl, or Nami, it doesn't make sense to say, "Well, you have to migrate to get better security." You should be able to get that security without migrating, because our goal is to ensure everybody keeps using the experiences they like. In fact, we saw that with Nami; we bought Nami and kept the Nami UX while migrating people over into Lace. Lace will always have a Nami mode.
We didn't copy any code; we had to completely rebuild Nami and put it within Lace inside that secure interface because the original code of Nami was very beta and needed to be cleaned up. So that migration is ongoing, and a lot of people are moving over without even realizing they're moving to a totally different infrastructure, a different backend, and a different frontend, even though it has the exact same user experience. There's a lot to do, but the rubber is hitting the road. Once you have this paradigm, you also have a lot of exciting new things. For example, you can have wallet scripts.
Wallet scripts enable you to run almost a shell script or a batch script against a wallet, allowing you to chain a bunch of commands together. Think of an advanced user mode; imagine if you're a company doing payroll. You can have a payroll script that runs hundreds of transactions at the same time, just like any type of command-line process automation. You can even create templates and libraries of wallet scripts to manage things, and also shared wallets. Shared ownership of wallets and having multisig is super important and powerful.
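The payroll example above can be sketched as a wallet script that groups payments into batches, one batch per transaction. Everything here is illustrative: the types, the batch size, and the amounts (quoted in lovelace, where 1 ADA = 1,000,000 lovelace) are assumptions for the example, not a real wallet-script API.

```typescript
// Illustrative payroll wallet script: group payments into per-transaction batches.
interface Payment {
  address: string;
  lovelace: number; // 1 ADA = 1,000,000 lovelace
}

// Splits a payroll run into batches so each transaction stays under an
// assumed maximum number of outputs.
function batchPayroll(payroll: Payment[], maxOutputsPerTx: number): Payment[][] {
  const batches: Payment[][] = [];
  for (let i = 0; i < payroll.length; i += maxOutputsPerTx) {
    batches.push(payroll.slice(i, i + maxOutputsPerTx));
  }
  return batches;
}

function totalLovelace(payroll: Payment[]): number {
  return payroll.reduce((sum, p) => sum + p.lovelace, 0);
}

// Five employees, at most two outputs per transaction -> three transactions.
const payroll: Payment[] = [
  { address: "addr1_alice", lovelace: 2_000_000 },
  { address: "addr1_bob", lovelace: 3_000_000 },
  { address: "addr1_carol", lovelace: 1_500_000 },
  { address: "addr1_dave", lovelace: 2_500_000 },
  { address: "addr1_erin", lovelace: 1_000_000 },
];
const batches = batchPayroll(payroll, 2);
```

A real wallet script would hand each batch to the wallet logic layer to build, sign, and submit; the sketch only shows the automation shape, chaining many payments through one command.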
Hardware-enabled multisig is the most secure way of using cryptocurrency in the entire industry. It is ridiculously difficult to compromise a wallet if you mix multiple hardware wallets together in a three-of-five or five-of-seven multisig setup. In fact, I don't think it's ever been done in the history of the cryptocurrency space without social engineering; it's incredibly difficult. The user experience is straightforward, especially if you live in a world of Pub/Sub. There are a lot of CIPs coming to improve that, and there will be a wave of CIPs that come from this effort, a wave of standardization.
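A three-of-five arrangement like the one described can be expressed as a Cardano native ("simple") script. The sketch below mirrors the `atLeast`/`sig` JSON shape used by cardano-cli, though the key hashes are placeholders and the satisfaction check is a simplified model of membership, not real signature validation.

```typescript
// Simplified model of a Cardano native multisig script ("atLeast" of "sig").
interface SigScript {
  type: "sig";
  keyHash: string;
}

interface AtLeastScript {
  type: "atLeast";
  required: number;
  scripts: SigScript[];
}

function threeOfFive(keyHashes: string[]): AtLeastScript {
  if (keyHashes.length !== 5) throw new Error("expected exactly five key hashes");
  return {
    type: "atLeast",
    required: 3,
    scripts: keyHashes.map((keyHash) => ({ type: "sig", keyHash })),
  };
}

// Would a given set of signers satisfy the script? (Simplified: real ledger
// validation checks actual signatures, not just key-hash membership.)
function satisfied(script: AtLeastScript, signedBy: Set<string>): boolean {
  const count = script.scripts.filter((s) => signedBy.has(s.keyHash)).length;
  return count >= script.required;
}

const script = threeOfFive(["k1", "k2", "k3", "k4", "k5"]);
```

With each of the five key hashes held by a separate hardware wallet, any three devices can move funds, which is what makes compromising the setup so impractical.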
We're thinking a lot about this now, and it's time to really improve usability. The final component, the DApp store component, is pivotal to the Bitcoin DeFi experience and will be a massive competitive differentiator. So, I've been in the office since about 7 AM and am just about to go home, but I figured I'd make this video quickly to end the day. I ended up spending way too much time doing CSS and HTML, going back to my youth with that image, but it's pretty cool. It's amazing what you can do these days.
Take a look at this one more time; I'm kind of proud of it. Look at that spinning effect! "Directly scanning the entire blockchain is computationally intensive. By indexing relevant data like UTXO pools and tokens, wallets can instantly retrieve transaction history." Those are some examples of the different things we have to worry about. Hmm, the Q&A isn't rendering bold correctly.
See that? That's the markdown for bolding it. Oh well, anyway, you can see that I'm still a nerd at heart.
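That indexing point from the slide, that scanning the whole chain per query is expensive while an index makes lookups instant, can be sketched with a minimal in-memory UTXO index. The types and class here are illustrative assumptions, not any real wallet's data model.

```typescript
// Minimal illustrative UTXO index: balance lookups without rescanning the chain.
interface Utxo {
  txId: string;
  index: number;
  address: string;
  lovelace: number;
}

class UtxoIndex {
  private byAddress: Map<string, Utxo[]> = new Map();

  add(u: Utxo): void {
    const list = this.byAddress.get(u.address);
    if (list) {
      list.push(u);
    } else {
      this.byAddress.set(u.address, [u]);
    }
  }

  // Remove a spent output wherever it lives in the index.
  spend(txId: string, index: number): void {
    this.byAddress.forEach((list, addr) => {
      this.byAddress.set(addr, list.filter((u) => !(u.txId === txId && u.index === index)));
    });
  }

  // Cost proportional to the outputs at one address, not the entire chain.
  balance(address: string): number {
    const list = this.byAddress.get(address);
    return list ? list.reduce((sum, u) => sum + u.lovelace, 0) : 0;
  }
}

const idx = new UtxoIndex();
idx.add({ txId: "tx1", index: 0, address: "addr1_a", lovelace: 5_000_000 });
idx.add({ txId: "tx2", index: 1, address: "addr1_a", lovelace: 2_000_000 });
idx.spend("tx1", 0);
```

Real indexers persist this to disk and also track tokens, metadata, and history, but the core trade is the same: pay the scanning cost once, up front, so every later query is cheap.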