Summary

  • Cardano is experiencing high network activity with 668 total transfers, including NFT drops, leading to full blocks.
  • Cardano's design allows it to handle full blocks without failing or slowing down, thanks to built-in mechanisms for reacting to periods of high and low load.
  • Scalability strategies include increasing block size, decreasing block time, and improving transaction efficiency, with the current block size at 88 kilobytes.
  • Roughly a 10x reduction in transaction size has been achieved between Plutus V1 and Aiken, indicating room for further expansion and optimization.
  • Layer 2 solutions like Mithril and Hydra are being developed to enhance scalability, with Hydra showing significant progress and community involvement.
  • The upcoming "Peras" paper aims to introduce fast finality capabilities to Cardano, enhancing transaction settlement speed.
  • Special networks for services, such as partner chains, are being explored to improve transaction privacy and efficiency for specific use cases.
  • The concept of parallel chains is under development, allowing for asynchronous transactions and improved throughput without compromising security.
  • Inclusive accountability and universal access for resource providers are emphasized as key principles in Cardano's design to prevent centralization.
  • Continuous optimization and upgrades to existing protocols are ongoing, with a focus on maintaining decentralization and security while improving user experience.

Full Transcript

I've been going for a while. Sometimes there's a little bit of a memory leak, and here it comes: extended UTXO. Look at all that stuff: 668 total transfers, 66 intra-wallet transfers, and 62 foreign account transfers. There's a nice little NFT drop going on there. The blocks are saturated.

You guys over on Twitter noticed that and said, "Damn, the blocks are full." So, let's talk a little bit about scalability and what this means for us. First off, blocks being full: Cardano is designed for that. Certain systems are like an engine you run at redline for a long period of time; they will eventually break. However, Cardano's design is a little different.

When you have a blockchain and all these nice little blocks are full, Cardano just carries over to the next block. It doesn’t throttle or slow down the network from the perspective of worrying that things are going to overflow and collapse. Whether your block is at zero kilobytes or at 88 kilobytes, Cardano reacts much the same way. It has a ton of throttling mechanisms to react to periods of high load and low load. So, how do we achieve scale inside this system?

Scale is a multi-parameter model, and I'm going to give you some rough points on it. First off, there's increasing block size and decreasing block time. Block size goes up, and the time it takes to make blocks goes down. There are also more efficient transactions, allowing you to accomplish the same amount with smaller transactions. When we look at the network right now, our block size is running at 88 kilobytes.

For example, with Aiken we've seen roughly a 10x reduction in transaction size compared with Plutus V1, and the same goes for Plutus V2 with more efficient utilization. The question is, do we have more room to expand this block size, and can we improve block propagation so blocks come more frequently? The answer is yes, we do. We actually have the ability to do this within reason. There's a committee made up of community members and engineers looking at both of these things: what optimizations are on the horizon, where we can increase the block size, decrease the block time, and also improve language and DApp efficiency to do more with what we have in the existing system.
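To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. The 88-kilobyte block size matches the figure above and the roughly 20-second average block interval is mainnet's current pacing; the average transaction sizes are purely illustrative assumptions.

```haskell
-- Rough throughput arithmetic: transactions per block and per second
-- for a given block size, block time, and average transaction size.
txPerBlock :: Double -> Double -> Double
txPerBlock blockBytes avgTxBytes = blockBytes / avgTxBytes

tps :: Double -> Double -> Double -> Double
tps blockBytes avgTxBytes blockSeconds =
  txPerBlock blockBytes avgTxBytes / blockSeconds

main :: IO ()
main = do
  let blockBytes = 88 * 1024   -- current max block size, ~88 KB
      blockTime  = 20          -- average seconds between blocks
  -- A large Plutus V1-era script transaction vs. a ~10x smaller rewrite.
  print (tps blockBytes 4000 blockTime)   -- roughly 1 transaction per second
  print (tps blockBytes 400  blockTime)   -- roughly 11 transactions per second
```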

The second category of scalability involves layer 2 solutions and batching. There’s a litany of other things involved here. When we look at things like Mithril and Hydra, we see that Hydra is growing at a very rapid rate. If you go to Twitter, some people might say it’s a failed project because it’s been talked about for a long time. However, here’s a live look at the roadmap, and it’s actually a very well-developed open-source project with a lot going on.

Hydra has a ton of commits, and it’s making progress. There have been releases, and a whole open-source community is forming around Hydra. We’ve worked our way down the feature set, from head-to-head protocol to single head per WebSocket configuration and terminal use, and now we’re working on incremental commit and decommit, which is a major extension protocol. This modularizes things, and there’s more on the roadmap. What does this translate to?

It means instead of doing everything on-chain, we start moving into a hybrid paradigm where there’s an off-chain part and an on-chain part. Hydra is kind of that linking tissue between the two. There are flavors of the protocol, such as the very high TPS flavor when people talk about thousands of TPS per head. This is relevant for things like tipping and micropayments. What Hydra does is open up a whole bunch of flavors, each with different capabilities, using the same infrastructure and architecture.
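To make the head lifecycle a bit more concrete, here is a minimal sketch of the idea as a state machine. This is not the actual hydra-node implementation or API; the states and transitions are simplified illustrations of what opening a head, incrementally committing and decommitting, and closing it mean.

```haskell
-- A simplified sketch of a Hydra head's lifecycle as a state machine.
data HeadState
  = Idle                            -- no head yet
  | Initialising [String] Integer   -- parties yet to commit, value so far
  | Open Integer                    -- off-chain ledger total (in lovelace)
  | Closed Integer                  -- contestation period before fan-out
  deriving Show

-- Each party commits funds; the last commit opens the head.
commit :: String -> Integer -> HeadState -> HeadState
commit party value (Initialising pending total)
  | null remaining = Open (total + value)
  | otherwise      = Initialising remaining (total + value)
  where remaining = filter (/= party) pending
commit _ _ s = s

-- Incremental commit/decommit: move value in or out of an open head
-- without closing it (the extension discussed above).
incrementalCommit, incrementalDecommit :: Integer -> HeadState -> HeadState
incrementalCommit   v (Open total) = Open (total + v)
incrementalCommit   _ s            = s
incrementalDecommit v (Open total) = Open (total - v)
incrementalDecommit _ s            = s

main :: IO ()
main = do
  let opened = commit "bob" 40 (commit "alice" 60 (Initialising ["alice", "bob"] 0))
  print (incrementalDecommit 5 (incrementalCommit 10 opened))   -- Open 105
```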

The way the heads work together and the complexity of the back parts of the protocol, as well as the primitives those protocols use, give you more capabilities. The same could be accomplished in rollups or recursive SNARKs in the ZK world. You’ll see many projects going down that road, but generally speaking, these frameworks are our first entry point. Mithril is one of our first entry points, and this bridge between on-chain and off-chain is Hydra. Both are in deep R&D and community development.

Hydra is on the Cardano mainnet; people can use it, and they’re starting to do things. As we get deeper into the Hydra roadmap, we start getting much more utility. For example, when you get incremental decommit and commit, you can start talking about doing poker, blackjack, and other interactive games, especially when you automate rollbacks. A significant amount of progress has been made. The hardest parts of Hydra are over from the perspective of deploying it, getting it to work on the mainnet, and building an ecosystem and community around it.

There are already 39 contributors who show up to meetings every week, talking to each other and doing interesting things. It’s a very vibrant project. Going back to DApps, you can also see all the different cool and interesting things on Cardano. SingularityNET, for example, is a $1.5 billion project on Cardano.

Wrapped Ergo is also looking really good. There are just a lot of cool and interesting things happening in the Cardano ecosystem, showing that we’re in a very different world than we were just 12 months ago. The big question, though, is can we do better than the base protocols we have? "Better" has a bunch of different meanings, and I look at this in three categories. The first category is better in terms of finality.

It's not just about throughput; it's about how quickly it settles. For many use cases, like ATMs or buying something at the grocery store, you want fast finality. We have a paper almost ready to publish called Peras. Our R&D team, led by Arnaud Bailly, who also works on Hydra, has been doing remarkable work. This functionally adds a fast finality capability to Cardano.

The first use case will be with partner chains, such as Cardano SL (the main network) and Cardano CL (the Cardano asset and service layer). When you want to move stuff back and forth, you want fast finality in those transactions. There's a lot of work going into Peras, and it's something we can conceive of being in the roadmap this year or next year. The same team that implemented Genesis is likely going to be the implementation team for Peras, depending on negotiations. Fast finality is one component; no matter how much throughput you have, you want your settlement times to always be going down.
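For a rough intuition of what fast finality adds, here is a toy sketch of vote-boosted chain weight, the general flavor of this family of designs: committee certificates add extra weight to blocks, so a certified chain beats a longer uncertified fork quickly. The boost value and mechanics are assumptions for illustration, not the actual Peras specification.

```haskell
-- A toy sketch of vote-boosted chain weight: rounds of committee votes
-- certify blocks, each certificate adds extra weight, and chain selection
-- prefers the heavier chain.
data Block = Block { slotNo :: Int, certified :: Bool } deriving Show

boostPerCertificate :: Double
boostPerCertificate = 15.0   -- illustrative; a real value would be a protocol parameter

chainWeight :: [Block] -> Double
chainWeight blocks =
  fromIntegral (length blocks)
    + boostPerCertificate * fromIntegral (length (filter certified blocks))

preferChain :: [Block] -> [Block] -> [Block]
preferChain a b = if chainWeight a >= chainWeight b then a else b

main :: IO ()
main = do
  let honest  = [Block 1 True, Block 2 False, Block 3 True]   -- short but certified
      private = [Block s False | s <- [1 .. 20]]               -- longer but uncertified
  print (chainWeight honest, chainWeight private)              -- (33.0, 20.0)
  print (length (preferChain honest private))                  -- 3: the certified chain wins
```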

The second concept is having special networks that provide services, which is the whole partner chains model. When you look at Extended UTXO, these blocks aren't just transactions; they have nuances. For example, WingRiders is doing some swaps. These transactions would be better suited to a network with a service layer. Let's say you have Cardano ASL (the main net) and a partner chain running the next-generation SingularityNET with an LLM similar to ChatGPT.

You would send a query as a transaction, it would process, and the brain would respond with an answer. These systems could potentially process millions to billions of queries per day. Now, the question is: should every single time you ask a question and send funding to answer that question be preserved forever? Should those queries have the same level of publicity? For instance, if you send a picture and ask, "What is this rash?", do you want everyone in the Cardano ecosystem to know about it? Maybe you don't want the whole world to know about your rash. If you sent that through a public network instead of a service network, it would have the same level of transparency as Cardano and be preserved as a query. However, as a partner chain, that service network would allow you to pay in ADA or whatever native currency you want. You could send that query in, but maybe you just want the answer, which comes in as an oracle.

When you think about it, every one of those blocks is filled with a lot of traffic. For example, World Mobile will have millions of customers connecting their phones and making purchases of bandwidth daily. Those are millions to billions of transactions, even for a modest-scale network on a monthly basis. Iagon is another example of decentralized storage. You have all these DApps that live here, and they have elastic consumption of resources.

As they gain users, they're constantly buying temporary space for account profiles. As long as those profiles are funded, they'll stay, but then they go away. You can imagine all these things. Special networks for services are another big component of scalability. You have to segregate what you want to keep forever and in public from what you only want to keep for a while (user-defined, like rent) and potentially want to keep private.

That’s what Midnight is doing for people. Midnight can also ask queries to "Ask Ben." Partner chains can talk to partner chains, and maybe "Ask Ben" stores all its training data on a decentralized storage network. Partner chains can do partner chain-to-partner chain transactions and can also be query points for your DApps. This is a very important part of achieving scalability because it’s not just about running billions of transactions.

If you run them under one model, they’re forever public, and everyone knows about your rash, with no ability to have nuance in your storage. The final part, which is what most people are interested in, is the sexiest part. It has many names: parallel chains, input endorsers, etc. This is under heavy review and construction, and this paper is almost out. I’ve been saying it for a while, but we’re actually pretty happy with the direction it’s going.

What this allows us to do is move away from a traditional blockchain model, where everything is sequential and linked to each other, to a model where you have a whole ecosystem of things happening asynchronously. This ecosystem eventually gets stitched back together. You have concepts of input blocks and key blocks. This allows you to massively improve your throughput, and your throughput is asynchronous. Your TPS rate, or in our case, TPT (transactions per transaction), goes way up.

There’s a lot of magic in the security world that goes into how that reconciliation happens, but you’re never really throughput-limited as long as things don’t contradict each other. Right now, we’re under load because there’s an NFT drop going on, which typically creates a high degree of load. It’s not really a problem; at some point, it stitches itself back together. There are special rules within this blob to do this in a way that prevents contradictions. It’s easy to achieve synchronous high throughput, but when conflicts arise, the whole thing can fall apart.
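Here is a toy sketch of that stitching idea: input blocks are produced asynchronously, a later key block references them, and a deterministic rule drops any transaction that conflicts with one already included. This is an illustration of the concept only, not the actual protocol.

```haskell
import qualified Data.Set as Set

data Tx = Tx { txId :: String, spends :: [String] } deriving Show

newtype InputBlock = InputBlock [Tx]          -- produced asynchronously by many nodes
newtype KeyBlock   = KeyBlock [InputBlock]    -- orders and references input blocks

-- Keep the first transaction to spend each UTxO; drop later conflicts.
stitch :: KeyBlock -> [Tx]
stitch (KeyBlock ibs) = go Set.empty [tx | InputBlock txs <- ibs, tx <- txs]
  where
    go _ [] = []
    go spent (tx : rest)
      | any (`Set.member` spent) (spends tx) = go spent rest   -- conflict: drop it
      | otherwise = tx : go (foldr Set.insert spent (spends tx)) rest

main :: IO ()
main =
  mapM_ (putStrLn . txId) $
    stitch (KeyBlock [ InputBlock [Tx "a" ["utxo1"], Tx "b" ["utxo2"]]
                     , InputBlock [Tx "c" ["utxo1"]] ])   -- "c" conflicts with "a" and is dropped
```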

Optimistically, it's awesome, but pessimistically, the network can actually be slower than the sequential case. You'll see DAG networks and graph-based ledgers claiming to be fast, like Red Belly, which is an example of a BFT protocol that is super fast, achieving several hundred TPS in optimistic cases. However, if a malicious pattern comes in, the network can revert to a slower state. Leios is a major step forward because we've thought very carefully about that. This careful thinking took time, but we're at the finish line.

We now have a model for achieving very high transactions per transaction while being able to recover nicely from various attacks. You’ll see a sprinkling of papers we’ve written along the way, like Ledger Redux and others that focus on recovery from attacks. We also wrote the Parallel Chains paper and a litany of other papers throughout the years that built a conceptual framework, allowing us to reach this optimal high throughput protocol. So, why don’t you just start here? Let me show you a picture that epitomizes why you start over here first and then build extensions to get to high throughput protocols.

If you go back and watch the very first Mario Brothers game on the Nintendo, you'll see the user experience was pretty remarkable for its time. The original Nintendo Entertainment System had a resolution of 256 by 240 pixels. The CPU was the Ricoh 2A03, a remarkably simple chip clocked at 1.79 MHz, with access to just 2 kilobytes of RAM. The hardware was incredibly simple, with many design flaws and problems with the lockout chip. It's pretty crazy when you think about it. It's like there's just nothing there.

Okay, this is as bare bones as it gets, which is why it's not surprising that when you see Mario Brothers, it looks like this. But here's an interesting thing: same hardware generation, later in its life, no changes in the underlying hardware. How about that? Look at that: a radically different game. You have overworld maps, you can fly in the air, you're doing all this cool stuff, a much better soundtrack, much more interactive.

Very same hardware, same cartridge, but somehow, some way, they were able to get to that experience. Isn't that pretty remarkable? So that's why you start on the left side and you talk about optimization: more efficient transactions. You talk about optimizations to the existing protocols, and then you can talk about extensions. Then you say, "Okay, now that we have the most optimized model, let's actually start segregating things." We break it down into stuff that you need now and quickly. Also, there's transaction prioritization; that's kind of hidden. When you talk about fast finality, there are parts A1 and A2. Here, you're also talking about things like tiered pricing. Why?

Because when you submit a transaction, do you care if it settles in one second? Do you care if it settles in ten seconds? Do you care if it settles in a hundred seconds? Do you care if it settles in a thousand seconds? What's your time horizon on that transaction?

Well, let's say that you're at the cash register; you're probably living in that one to five-second range. You want it to settle quickly. Let's say that you are parking a car and you're paying for that receipt; maybe the ten to twenty-second range is not too bad. Okay, let's say that you're at a restaurant, and you're eating, and you give your little credit card; maybe the hundred seconds is not necessarily a problem because you're sitting there drinking your coffee. Okay, let's say that you're doing shipping, and your boat has just gotten to port.

It's checking in all the inventory at night, and the inventory is not going to be checked until the morning. Any of these is fine because you have hours—maybe eight hours—before you even talk about any desire to settle. As long as the guys show up in the morning and it's good, you're good. So pricing should work differently with a finite resource based upon the settlement desire. Having tools to accelerate your settlement time and tools to price it is very important because you get sculpting of traffic, just like packet prioritization in networks.
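As a sketch of what settlement-desire pricing could look like, here is a toy fee function that scales a base fee by an urgency tier and by current load. Nothing like this exists on mainnet today; the tiers and multipliers are made up purely to illustrate the idea.

```haskell
data Urgency = WithinSeconds | WithinMinutes | WithinHours deriving Show

urgencyMultiplier :: Urgency -> Double
urgencyMultiplier WithinSeconds = 3.0   -- cash register: pay for priority
urgencyMultiplier WithinMinutes = 1.5   -- parking meter, restaurant bill
urgencyMultiplier WithinHours   = 1.0   -- overnight inventory settlement

-- load is the fraction of recent block space used (0.0 empty, 1.0 saturated)
fee :: Double -> Double -> Urgency -> Double
fee baseFee load urgency = baseFee * urgencyMultiplier urgency * (1 + load)

main :: IO ()
main = mapM_ (print . fee 0.17 0.9) [WithinSeconds, WithinMinutes, WithinHours]
```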

You have something called quality of service. Okay, software-defined networking typically has this concept. Not all packets are equal, and some of the traffic and transactions you're doing live in a broader ecosystem, but they don't necessarily need to settle on mainnet or they need to be batched. So it's a very close relationship to these special-purpose networks. I love it when Wacom does this; it goes way far away.

There we go. And Hydra—why? Because you can connect these two concepts together. Hydra also is a special ephemeral network that you're setting up for your specific case. For example, remember the Mike Tyson fight that's going on?

Mike Tyson is going to fight Jake Paul. Now, that's going to require a surge of resources. Let's say there's a live stream going on, and you can tip Mike Tyson or Jake Paul in real time as they're fighting each other. That'd be a pretty cool thing for Netflix to put in. All right, well, we think there are going to be somewhere between 20 and 50 million people, according to Netflix, who are going to watch that fight in real time.

Let's say all of them are users. Okay, so you have a gargantuan surge of consumption for a three to four-hour window. A massive amount of bandwidth is required. Okay, so that World Mobile chain right there is going to be working real hard to do all that live streaming and broadcasting for those 20 to 50 million people. There's an entry point; they have to pay—let's say it was pay-per-view, but it's actually Netflix.

They have to pay to get access to it, and then while they're watching, they're going to be tipping. So you could set up a special-purpose Hydra infrastructure to basically tip these two accounts in real time. You can tip, and there's a batched settlement at the end of the fight for both Mike Tyson and Jake Paul. So on the main chain, the only thing that happens is that funds get batched together, and a giant transaction occurs at the end of the fight to settle to Mike Tyson and to Jake Paul for whatever they've gotten tipped. Some stuff happens with World Mobile and some stuff happens with that Hydra head, but relatively diminutive.
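The batching itself is simple to picture: however many tips happen off-chain during the event, the main chain only needs one output per recipient at the end. A minimal sketch, with made-up names and amounts:

```haskell
import qualified Data.Map.Strict as Map

type Tip = (String, Integer)   -- (recipient, lovelace)

-- Collapse every off-chain tip into a single on-chain output per recipient.
settlementOutputs :: [Tip] -> Map.Map String Integer
settlementOutputs = Map.fromListWith (+)

main :: IO ()
main = do
  let tips = concat (replicate 100000 [("tyson", 2000000), ("paul", 1500000)])
  -- 200,000 off-chain tips collapse into two on-chain outputs.
  print (settlementOutputs tips)
```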

So even though 20 to 50 million people are watching, the main network doesn’t deal with that type of load. You need these cases for surge heterogeneity. You need the ability to surge for these types of use cases. Well, these NFT drops are basically smaller versions of that event. It's declared; we know it ahead of time; it's been broadcasted.

Having the ability to reconcile off-chain and on-chain infrastructure means you can load balance it, and you don't use up that finite resource. While this is going on, if the network is surging, you could pay extra to basically get tiered pricing, so you get a settlement within a window. But now it's connected to the load of the network, just like you have surge pricing for highways, for parking, any of these things where you have a finite resource and there are times when it's open and times when it's saturated. Now, after you have all these mechanisms, and you've really thought about it, which we have as an ecosystem, there are dedicated teams on all of these things, community-controlled and coming online with CIP-1694. Also, on the institutional side, there's already a vibrant open-source project building out the Hydra ecosystem, and it's growing quickly.

Peras is in deep R&D, and it's getting into the pipeline to be implemented. Like Genesis, tiered pricing is in R&D; people are thinking about it, and obviously, a huge effort is going into partner chains. Then, and only then, do you really start thinking about this, because you've already optimized; you already have all your capabilities sorted out. This is where you get to global scale as a network. Why?

Because you have all these advanced proof structures. I didn't even mention the zero-knowledge stuff that kind of fits in the second category here, kind of 2B, the ZK world. You have rollups; Plutus V3 is bringing this in. That's coming along with CIP-1694, and we have a huge host of things Midnight is constructing that could be backported into Cardano. But only after you've kind of optimized all of this do you really start thinking about the global scale side, which is why Leios was a parallel effort, but a longer-arc effort.

Because once you get here, then you are talking about a big-blob main chain. Here, in terms of TPS, you have an enormous amount of stuff. Your data requirements go way up, your data storage and network requirements go way up, and your CPU requirements for validation go way up. Leios makes this a parallel process where basically people don't have to be gigantic super nodes to calculate all this; they can actually do that within that big blob. There's just a higher overhead there, on the network side of things and the data storage side of things.

No matter what you do, you're going to have to assume a higher bandwidth in the network, and you're going to have to assume higher overhead for a full node. Well, hang on a second here. Doesn't Mithril give you something to say about that, where you don't necessarily have to have a full node, and yet you can still act as if you had one from a security perspective? Yeah, okay, so we built that. And then also, there are plenty of ways to reconcile and handle the network; they're kind of built into the network stack we designed for Cardano.

But it took a long time to build out and develop that network stack. Now we're coming to the other side of it. So these protocols, if you implement them naively, lead to mass centralization, because what ends up happening is only a small set of nodes can actually process the high TPS that you're dealing with. They have to have things like 128 gigabytes of RAM or 256 gigabytes of RAM or even a terabyte of RAM for the mempool. They have to have huge CPU clusters to process all the transactions, and you talk about petabytes of storage.
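The arithmetic behind that concern is straightforward: sustained throughput times average transaction size gives the bandwidth and storage a full node has to keep up with. A quick sketch, with the 500-byte average transaction size as an assumption:

```haskell
-- Back-of-the-envelope arithmetic for why naive high throughput pushes
-- toward supernodes.
storagePerYearTB :: Double -> Double -> Double
storagePerYearTB tpsRate avgTxBytes =
  tpsRate * avgTxBytes * 86400 * 365 / 1e12   -- terabytes of history per year

main :: IO ()
main = do
  print (storagePerYearTB 10     500)   -- ~0.16 TB/year: hobbyist hardware copes
  print (storagePerYearTB 100000 500)   -- ~1,577 TB/year: data-centre territory
```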

Now, are you running that on your desktop computer? No. So do we have a homogeneous network? No. You have a network that looks like server and client, and isn't that Web 2?

Whoa. That's the fine print on a lot of these current high-performance protocols. Under the hood, the set of people that run the system is much, much, much smaller than the set of people that use the system, and you start converging to server-client. So the first thing that you want to do is just get your story sorted out. You want to go from Mario Brothers one to two to three.

This is Mario one, Mario two, Mario three. Mario three is as optimized as it's going to get. You can't make Super Mario World with the Nintendo; it wasn't possible to do that. What we've done through several generations of the technology is get from Mario one to three. People are starting to get very clever about all of this stuff.

Then what you want to do is be able to have extensions and accessories and new capabilities that are pluggable because then you've opened up the ability to extend the protocol. Then you get to the final part of the story, where you allow this large bubble to occur. There are three principles that matter a lot to me. One principle is this idea of inclusive accountability. You'll hear me say it again and again and again: inclusive accountability.

All that really is, is if you're Bob and I tell you something, you hear something, some knowledge K, you can verify it yourself. You can determine the truth value. So if I tell Bob, "Hey Bob, it's raining outside," okay, let's draw how Bob would verify that. Bob can look, and Bob says, "I have eyes; I can see the rain. I can hear the rain; I can verify it's raining." If he can't see the rain, can't hear the rain, can't smell the rain, can't get wet, he might say, "I don't know. I don't think that's true." So that's inclusive accountability. What ends up happening in modern naive high-performance cryptocurrencies is that you lose this property as the system gets larger, because fewer and fewer people have the machinery necessary to verify for themselves that the history they're looking at is okay.

So you go from "I checked it myself" to "trust me, bro." Now, if you don't care about this, if you don't think this is important, okay. But this is the crux of Satoshi's vision. If you read the Satoshi white paper, his assumption was all Bobs are equal. Every Bob is running a full node, and they're all transmitting and communicating with each other.

It's a homogeneous network; you can always check what you get. You have enough information, and so it's trustless instead of "trust me, bro." Okay, so maybe you don't care about Satoshi's vision; if you don't care about inclusive accountability, fine, and it doesn't matter if you're a maxi or not. The second property that's really important is related to universal access for resource providers. I do not like the idea that only a special small class of people gets to manage this bubble.

I think it's a bad idea. Okay, and Bitcoin mining is moving in this direction. It's really expensive to Bitcoin mine at scale. You got to buy ASICs; you got to have access to cheap power. You actually have to have a location to put racks of ASICs in.

If you're living in a small apartment, it's going to get super hot in the apartment, and nobody wants to do that. They think you're a drug dealer because your energy consumption is way high; they think you're growing pot in your apartment. Does it make a lot of sense? Okay, so universal access for resource providers: if you have some sort of computer, no matter how powerful that computer is, you can participate in consensus. Turns out that the current model over here is not as optimized as it needs to be.

So a big design requirement of Leios was to improve accessibility. So what does this mean? It means the number of SPOs actually goes up. We go from several thousand; the hope would be to tens of thousands. So there's a lot of protocol thought going into this as well.

So those are the capabilities we're very interested in: inclusive accountability, universal access for resource providers, and three, preserving the security assumptions and model. So, Ouroboros: semi-synchronous, 51%, 50% plus one Byzantine resistance; that's the story. You don't want to go to a protocol like this where that one half drops to a third, drops to a fourth. Also, you don't want a situation where, as adversarial behavior increases, your performance goes way down. You don't want adversarial behavior to map to performance like this.

You don't want that to occur; you want a situation where you have graceful failure. So we have graceful failure, self-healing, 50% Byzantine resistance, a litany of these assumptions built into Praos and Genesis. As we go to the new realm, we want to preserve and protect those kinds of things. It turns out that no one has really solved this, the blockchain dilemma, and we're ahead of the pack in terms of the theory and science for how to resolve these types of things. But it's a bit of a problem, and we have certain superpowers.

Like, for example, extended UTXO is a superpower; Mithril is a superpower, and there are other things that I think are going to really help the story of inclusive accountability. They're going to help the story of universal access and also preserve and protect the security assumptions and model that we have. Now here's the thing: while all of this is going on, you can continue optimizing and improving. You can continue moving stuff into Hydra channels; you can continue writing more efficient programs; you can optimize the code; all of that can happen. Okay, so that's the current, last-gen work: just like you can write games for the NES while you're waiting for the Super Nintendo, while you're building that up.

This is a major protocol re-architecture and redesign, but it learns from and is inspired by these capabilities. This paper is coming soon. I promised Aala I wouldn't give people the date, but it's sooner than people think, and Peras is actually coming soon as well. There you go. Now, what does this mean for Cardano today?

Well, what it means for Cardano today is you live here, where blocks can be full. What does this mean for your usability of Cardano? Does it fail? No. Does Cardano become absolutely and utterly unusable for months at a time? No, it doesn't become unusable. Does it mean that your security goes down? No, your security doesn't go down. All it translates to is that there's high load, and the user experience is not as good as it needs to be during those high-load windows.

Well, off-chain, on-chain, what you do is you do the same thing the Mario 1 to Mario 3 people did. You find clever hacks and optimizations in your protocol architecture, your application architecture. So maybe, for example, if somebody's submitting a transaction, they sign it. What you can do is just put it into an off-chain batcher, and it'll relay that when the markets open up a little bit. But from the customer's perspective, they've already signed and transmitted that.

Maybe they get access; for example, say you're issuing NFT books. As long as it's in the batcher, they get access to the intellectual property, and they can see it, read it, interface with it. From the customer's perspective, the transaction's already settled, but the on-chain transaction hasn't quite settled yet; it still needs to be included in the blockchain. But the funds have been sent. There are all kinds of things you can come up with that basically allow people to work around this, especially if you have custodial wallets or these types of things.
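Here is a minimal sketch of that batcher pattern, purely illustrative and not any particular product's API: a signed transaction is queued immediately, the application treats the queued purchase as enough to unlock access, and the queue is relayed once load drops.

```haskell
data SignedTx = SignedTx { buyer :: String, item :: String } deriving (Eq, Show)

newtype Batcher = Batcher [SignedTx] deriving Show

accept :: SignedTx -> Batcher -> Batcher
accept tx (Batcher queue) = Batcher (queue ++ [tx])

-- The application treats a queued purchase as enough to unlock the content.
hasAccess :: String -> String -> Batcher -> Bool
hasAccess who what (Batcher queue) = SignedTx who what `elem` queue

-- Once recent load drops below a threshold, flush the queue on-chain.
flush :: Double -> Batcher -> (Maybe [SignedTx], Batcher)
flush load b@(Batcher queue)
  | load < 0.8 && not (null queue) = (Just queue, Batcher [])
  | otherwise                      = (Nothing, b)

main :: IO ()
main = do
  let b = accept (SignedTx "alice" "nft-book-42") (Batcher [])
  print (hasAccess "alice" "nft-book-42" b)   -- True before on-chain settlement
  print (fst (flush 0.3 b))                   -- relayed once load eases
```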

So this is the challenge. Just like any good game developer, you always want a faster GPU; you want the RTX 4090. But what do you do when your customers have integrated graphics? Not all your customers are going to be able to afford a $1,600 supercomputer of a GPU. Sometimes they have integrated graphics.

So what are you going to do? Tell your customers, "Well, you can't play the game"? No, you find clever ways to make it usable and playable, but you degrade in ways the user doesn't necessarily care as much about. That falls upon the application developer. There are still a lot of dApps on Cardano using Plutus V1 whose transactions are ten times larger than they would be in Plutus V2.

So just by rewriting that, they could save an enormous amount of block space. You go from using 88 kilobytes to 8 kilobytes. Tell me, should we 10x the block or just push those dApps to upgrade their code so that they use it more efficiently? They're basically saying, "The only way I can get to Mario 3 is give me a Super Nintendo." But wait a minute—did Nintendo create Mario 3 with a Nintendo?

They did. So that's a big component of it. These new structures and capabilities are trickling out hard fork by hard fork, and they open up completely new worlds that people can use. That's important. Some things, like Hydra, are on-chain but have a strong off-chain component, and they can be accelerated over time on our part.

A huge amount of R&D is going on, and it's easy to get high TPS if you don't care about inclusive accountability, if you don't care about universal access for resource providers (having lots of SPOs and decentralization), and if you're willing to compromise the security model and rapidly centralize the system. In other words, if you want to be Amazon Web Services, you get Amazon Web Services-level performance. But if you want to be a cryptocurrency that follows Satoshi's vision, where you keep the ability to verify, where lots of people provide the resources so it's very decentralized (that's measured by the EDI), and you want to have strong security assumptions...

Well, if that's what you want, you have to have a new protocol because there aren't any on the market that have these capabilities. The good news is that a lot of scientists have spent quite a bit of time looking at this whole thing to figure that part out. So you hear a lot of stuff over the internet; you hear a lot of stuff over Twitter. You see a lot of FUD. If you take a step back and you ask yourself, what are people really complaining about?
