Basho, Input Endorsers, and the Future of Scalability
Summary
- Charles Hoskinson discusses the Basho era of Cardano, emphasizing its misunderstood aspects and the importance of scalability.
- Highlights a presentation by Ekman on "Driving Continued Technology Advancements Through Input Endorsers" from ScotFest.
- Scalability involves predictability in cost, latency, and reliability, with a focus on resource growth proportional to user growth.
- Key projects under development include Hydra (middleware for off-chain processing), Mithril (data availability and proof systems), side chains, and various optimizations.
- Hydra aims for fast settlement and low transaction fees, while Mithril enhances data availability and wallet performance.
- Side chains are designed to reduce main-chain load and allow for high-volume applications, while optimizations improve resource utilization on-chain.
- Input endorsers are discussed as a long-term goal to enhance scalability and decentralization, allowing wider participation in the network.
- Hoskinson addresses competition with Algorand, noting collaborative relationships and differences in governance and participation models.
- Emphasizes the importance of decentralized governance in decision-making for protocol development and trade-offs between performance and decentralization.
- Concludes with a focus on Cardano's resilience, having operated continuously for over 2,100 days, contrasting with the operational challenges faced by competitors like Solana.
Full Transcript
Hi everyone, this is Charles Hoskinson broadcasting live from warm, sunny Colorado. Today is July 30th, 2023, and I'm making a Blackboard video to talk a little bit about Basho, the era that is the most misunderstood. So, let’s go ahead and present my screen. First off, many of you last year watched the video streams that came from ScotFest. There’s actually a lovely 15-minute presentation here called "Driving Continued Technology Advancements Through Input Endorsers," done by Ekman, who is the chief architect at Input Output.
They talk about all the cool things that have been done for simulations and a lot of the concepts that exist behind input endorsers, because a lot of people ask about this. Now, let’s talk a little bit about Basho. So, basically, you have this guy Bob, and Bob really wants to send a transaction to do something with Cardano. That transaction could be a lot of things; it could be like, "Hey, I want to move money," or it could involve an asset, a smart contract, voting, or registering some certificate, etc. There are many different use cases for Bob.
When you talk about scalability, you’re really discussing a bunch of different properties of scalability. You look at things like predictability. Predictability means that when Bob wants to do these types of things, can he predict the cost, the latency, and the reliability? For reliability, if you do it a hundred times, how many times does it succeed? If it succeeds 100 out of 100 times, that’s perfect reliability. If it succeeds 99 times, it’s pretty good. If it succeeds only 20 times, that means it fails most of the time: 80% of the time. So, you have different metrics of predictability: cost, latency, and reliability. Latency is how long it takes. Is that predictable?
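The reliability and latency metrics just described can be computed in a few lines. This is only an illustrative sketch, not anything from Cardano's codebase; the function names and sample numbers are invented for the example.

```python
import statistics

def reliability(outcomes):
    """Fraction of attempts that succeeded: 1.0 is perfect reliability."""
    return sum(outcomes) / len(outcomes)

def latency_profile(latencies_s):
    """A tight spread around the mean means settlement time is predictable."""
    mean = statistics.mean(latencies_s)
    stdev = statistics.pstdev(latencies_s)
    return {"mean_s": mean, "stdev_s": stdev}

# 99 successes out of 100 attempts: pretty good reliability (0.99).
print(reliability([1] * 99 + [0]))

# Settlement times clustered around 20 seconds are predictable; a system that
# sometimes settles in five seconds and sometimes in an hour would show a
# huge standard deviation relative to its mean.
print(latency_profile([19.0, 20.0, 21.0, 20.0]))
```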
Do we say it usually settles within this range of time? Or is it completely unpredictable, where sometimes it settles in an hour and sometimes in five seconds? Cost predictability is the single biggest driver for enterprises. Then, you have other properties of scalability, such as resource growth. Under what circumstances do the resources of the system grow or contract?
Is it a finite pool system? So, you have this little pie, and the more Bobs there are, the less pie you get for everybody. It gets divided and divided, leading to fewer resources. Or is it the case that the more Bobs you get, the pie gets bigger, and you have a much larger resource pool for people? Truly scalable protocols generally think about resource growth that is proportional to user growth.
As you gain users, you gain resources in the system. Resources are an ambiguous term, but generally speaking, they are consumed from activities such as issuing assets, using smart contracts, voting, and putting stuff on the system. You can usually break it into three broad categories: network, data, and computation. You can have a network that has a lot of computational transaction processing capability but very small blocks, which makes it very constrained. Or you can have a network with a lot of data storage capacity and computational capacity, but it’s very hard to move things around because you’re network constrained.
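The "finite pie" versus "growing pie" distinction above can be made concrete with a toy model. Everything here, including the numbers and function names, is invented purely for illustration.

```python
def finite_pool_share(users, total_capacity):
    """Fixed pie: every extra Bob shrinks everyone's slice."""
    return total_capacity / users

def proportional_share(users, capacity_added_per_user):
    """Scalable model: each new user brings capacity, so the share holds."""
    return (users * capacity_added_per_user) / users

for n in (10, 100, 1000):
    print(n, finite_pool_share(n, 1000.0), proportional_share(n, 10.0))
# The finite-pool share collapses (100 -> 10 -> 1) as users grow, while the
# proportional share stays flat at 10: resource growth tracks user growth.
```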
When you talk about resource growth, you’re thinking about something along these three axes, looking for a sweet spot where they’re all being maximized. You want them to be maximized as users are there, and you’d like that to be constrained within the lens of predictability. Then, you also look at concepts like trade-offs. As you embrace the growth of resources from users and try to establish predictability, you usually look at your decentralization index. This is why we created the EDI and worked with the University of Edinburgh to get it standardized and create an independent metric.
You want to be able to say, "Okay, say between zero and one that your system is X." The question is, am I moving in this direction, or that direction, or am I staying the same? There are all kinds of things that could increase the resources of the system but make you more centralized. True research and development increase the resources of the system while keeping you at the same level of decentralization or improving decentralization. Trade-offs also have a notion of security.
Almost all of the high scalability protocols that we see have Byzantine resistance that goes from one-half dishonesty to about a third, and in some extreme cases, a fourth. That might be okay for certain operating models, but generally, you have less security and less decentralization, but a lot more resources for everybody. You also think about egalitarian participation. The more resources you demand in a replicated sense—network, data, and computation—the fewer people can provide those. You go from a Raspberry Pi on one side to a supercomputer on the other.
This is on Wi-Fi, and this is on fiber optic with a dedicated channel. If we pick this side, we get a lot more resources, but the problem is there are only a few providers. On the other side, many people can participate; you can leave your cell phone on and the charger, and it could potentially be doing something for the network. When we talk about scalability, the Basho agenda was really started as a research project to better understand these types of things: how to achieve predictability, how to grow your resource pool, and what trade-offs are sacred and cannot be violated. Ultimately, the end goal is to get more network data and computation because then Bob gets predictability with these things.
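One concrete trade-off from a couple of paragraphs back, the Byzantine-resistance bound, can be checked mechanically: safety holds only while the dishonest fraction stays strictly below the bound. The grouping labels below are illustrative, not claims about any specific protocol.

```python
from fractions import Fraction

# Common Byzantine-resistance bounds: one-half, one-third, one-fourth.
BOUNDS = {
    "one-half": Fraction(1, 2),
    "one-third": Fraction(1, 3),
    "one-fourth": Fraction(1, 4),
}

def tolerates(dishonest: Fraction, bound: Fraction) -> bool:
    """True while the dishonest share is strictly under the safety bound."""
    return dishonest < bound

attacker = Fraction(3, 10)  # 30% dishonest stake
for name, bound in BOUNDS.items():
    print(name, tolerates(attacker, bound))
# A 30% adversary sits inside the one-half and one-third bounds, but already
# breaks the one-fourth assumption: more throughput, thinner safety margin.
```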
Ideally, he wants fees to lower over time; he wants to pay less in fees and have faster settlement. Bob doesn’t want to wait hours for things to settle; he wants faster settlement and 24/7 utility. When it works, it’s fast, keeps getting cheaper, and provides access to the system around the clock. Alongside the established principles of Cardano, I’ll just start using that acronym, CEP, for all the things we know and love, like resilience, censorship resistance, and decentralization. It turns out that when we started this agenda, there was no protocol.
You have all these big marketing people—the Algorand people, the Avalanche people, and all these other ecosystems—and they say, "Oh, where’s the Gallup?" No, there’s no protocol that is a clear winner. When I say winner, I mean you get all this stuff that always increases as people join, you get predictability, your trade-offs are awesome, and it’s very egalitarian, meaning everybody participates. Nothing in 2015 or today completely satisfies all these things. Now, 2015 wasn’t even a consideration.
Today, there’s actually a lot of amazing research being done by many different protocols and projects that are chipping away at these concepts. We’ve made probably more progress than most, but it’s still an ongoing research concern. So, here’s what we’ve been doing today for Cardano. There are really four major items: Hydra, Mithril, side chains, and optimizations. Hydra is middleware; it’s on mainnet and growing very rapidly.
The release cycle is not like, "Oh, this big thing requires a hard fork." No, it’s a straight-up smart contract. The idea is that a dApp takes Hydra and starts putting some stuff off-chain. You still have the same trust and security guarantees that you care about, and you start getting properties like predictability. Because these resources are local to the user base, you usually get things like very fast settlement and very low transaction fees.
They can be built in a way with high reliability, so you get a lot of good uptime without violating any principles. What’s so cool about this is that it’s a fast, continuous innovation. It’s a big open-source project; lots of people are joining in. In fact, if you want to participate, I’d recommend you go to the Hydra family website. You can see there’s an enormous amount of stuff going on.
If you go to Discord, there are currently about 1,200 members online, with a big chunk for the Hydra community. You can also check GitHub to see the project roadmap; things are moving really quickly, and you can see all the stuff that’s already been released. Mithril is really the beginning of data availability and proof systems. This also includes a concept of roll-ups, and we’ve just released the first version of Mithril. Mithril is going to follow a very similar evolution to Hydra, but there is some science here.
There’s a great video from DC Spark that provides an overview of data availability solutions, including Chia, Polygon, Algorand, Celestia, IPFS, and Ethereum. He does a really good job discussing data availability in these ecosystems. There’s a big research agenda here, and we’re setting up a kind of Manhattan Project-style initiative to catch up on the roll-up side because the time has come to invest in some good technology. Several different firms have come together, and we’ll have a lot to say about this at the Cardano Summit regarding some of the amazing things that have been achieved and will be achieved throughout next year to augment and enhance this area. There’s still a very big ongoing concern because there’s a network consideration on-chain and off-chain, and there’s a data storage consideration on-chain and off-chain.
You don’t want the blockchain to blow up to 100 terabytes all of a sudden. There’s also proof generation; for example, BLS is a big component of this, but there are other constructions needed, and that’s coming in Plutus 3.0, for example, the very next version of Plutus with BLS support. Mithril is on mainnet, and there’s a whole research team working on a much more efficient Mithril 2.0.
Mithril is already amazing, and it’s going to be an essential component of input endorsers. There are a lot of cool things we’re pushing into regarding the more complicated SNARK systems, the recursion, and roll-up side of the world. It’ll be really cool in November to showcase all of that. This is also connected to the side chains agenda. Side chains provide fast finality and roll-ups, creating a connection point.
They also offer different models of transactions and resources. A lot of traffic gets driven to the side chain, which means there’s less load on the main chain. There are several things you can do to get stuff off the chain. For example, Hydra gets stuff off the chain, roll-ups get stuff off the chain, and side chains get stuff off the chain. Optimizations focus on better resource utilization while on-chain.
For example, Plutus V2, which shipped with the Vasil hard fork, has led to a lot of dApp developers reporting a 10x improvement in resource utilization. A script that was a kilobyte is now a hundred bytes. That’s a 10x improvement, meaning much more efficient utilization of the resources we have. There are lots of iterative optimizations being done on the network layer, the block structure, the authenticated data structures used, Plutus optimizations, and other things that gain meaningful percentages over time.
People get the same outcome but use fewer on-chain resources. It turns out that with roll-ups, these types of state and payment channels, and side chains to take the very high-load things, I believe very firmly that for about a two to five-year window, as long as you’re doing some optimizations, you could meet a very high rate of growth and still have a fairly comfortable network. We need to implement Babel fees so that side chains have first-class citizenship with ADA. We also need TX prioritization, so you need something like a fee market or tiered pricing to help Bob get some predictability and consistency in his transaction structure. These are not hypotheticals; this is deployed technology.
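To see how tiered pricing could give Bob that predictability, here is a hypothetical sketch. Nothing below is Cardano's actual fee mechanism; the tier names, multipliers, and settlement targets are all made up for illustration.

```python
# Hypothetical tiers: pay a known multiple of the base fee for a known
# settlement target, instead of bidding blindly in a volatile fee auction.
TIERS = {
    "economy":  {"multiplier": 1.0, "target_settlement_s": 300},
    "standard": {"multiplier": 1.5, "target_settlement_s": 60},
    "priority": {"multiplier": 2.5, "target_settlement_s": 20},
}

def quote(base_fee_lovelace: int, tier: str) -> dict:
    """Return a fixed, up-front quote; that is what makes the cost predictable."""
    t = TIERS[tier]
    return {
        "fee_lovelace": int(base_fee_lovelace * t["multiplier"]),
        "target_settlement_s": t["target_settlement_s"],
    }

print(quote(170_000, "priority"))
```

The design point is that Bob trades a slightly higher fee for a firm quote and a firm latency target, rather than an unpredictable auction outcome.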
Heavy work on side chains is underway, and come November, there will be a lot to discuss and tell you about at the Cardano Summit. I think people will be very happy. There is a beautiful leapfrog effect: you already saw, with diffusion pipelining and a collection of other things done last year and this year, that a lot of great optimizations have happened inside the system. These are continuous processes, and some of them require hard forks, like adding new versions of Plutus to support proof structures. Fast finality comes from something called Ouroboros Peras. That’s the next version of Ouroboros after Genesis, which basically gives you finality gadgets in the system, allowing for fast settlement on both sides: the main chain and the side chain.
Babel fees will also require a fork, and fee markets will require a fork. There are some things here for the 2024 and beyond agenda, but this is a very easy-to-conceive system. Hydra will continue its rapid evolution and will work its way into a lot of dApps that require scale and consistency. Mithril is going to be a huge win for wallets because you’ll be able to get full nodes that have fast sync; you won’t have to wait three days for a node to sync, and your light clients will have full node security. This also works its way into the whole dApp consideration, and there are tons of amazing things that can be done with data availability.
A lot of work has to be done, but now is the time to invest in that. There’s a multi-million dollar collection of teams working on this. On the sidechain side, it allows very high volume apps that need very specific logic, like World Mobile or Midnight, to actually run, which takes load off the main chain but continues to use ADA as the underlying asset and creates network value for Cardano. The optimizations continue. So, really, what Basho is, is one part of this agenda and the other part is something big.
Now, let’s talk about input endorsers. If you watch the video "Driving Continued Technology Advancement Through Input Endorsers," it explains how one would go from a single-view environment to a multi-view or sharded view of state. There are many different ways to cut this pie up, but the idea is that a lot of energy and resources are contributed towards creating a system where everybody has, at any given time, the exact same version of history. At the tip of the system, there’s a lot of work that’s a little chaotic, but eventually, you reach some sort of consistency, and that gets added to the tail of the system. This chain, extended ad nauseam, will be consistent.
If you’re following along, once it’s settled, it’s settled, and it’s there. That’s kind of your single view. The advantage of this is that you have very strong security guarantees, and people can build applications against those guarantees. It simplifies the implementation of DEXes and other things. Occasionally, you have rollbacks based on the nature of the algorithm, like Algorand versus Ouroboros, for example.
They have different views of these things. Some allow fast finality, and once it’s settled, there are no rollbacks. Others allow a probabilistic rollback, which is why people usually have to wait some number of blocks before something is really settled. "Settled" depends on the security tolerance you use when evaluating risk. When you move to this multi-view reality, you have a main chain, and then you have all this stuff happening off-chain.
Eventually, you can reconcile that stuff. The advantage here is that the outside stuff can be done asynchronously and in parallel. The downside is that you no longer have this consistent, elegant view of history in the local form, but eventually, you do in the global view. You’ll see different terminology, like input blocks and key blocks. Why do we want to do this?
Because it would be really cool if you have your set of stake pool operators (SPOs) working on different things. This group, this group, and this group know their domain, and eventually, the threads all come together. As you increase the SPOs, you actually increase the throughput. A lot of thought had to go into this. For example, it turns out that if you use technology like Mithril and also extended UTXO, this helps you figure out how to do this in a way that resembles this type of system, especially as you zoom out.
You get the throughput of this kind of system. In other words, your trade-off profile—remember we always talk about trade-offs when discussing scalability—looks more like what you’re used to, but you get the advantage of a truly scalable system where you actually have an increase in resources. The problem is that there’s considerable engineering on the network side. There’s a lot of work that has to do with rebuilding the consensus layer of the system, and there’s also an incentives issue. If we go from a thousand to thirty thousand stake pool operators, for example, you have to pay them differently because there’s all this stuff over here and stuff over there.
In 2022, we were able to de-risk a design and a path, which is what you generally do in research. You have a series of proofs of concept. For example, when they were building the atomic bomb, they came up with the idea of the plutonium implosion shell. They had this structure, a plutonium core, and a little beryllium fuse inside it. They had a hypothesis that they would be able to implode the sphere, but they had to develop clever lenses to make the explosion uniform because it all had to be synchronized.
The first thing you would do is build the shell and demonstrate that you can have equal force on all sides with a synchronized explosion. Effectively, we did the same thing conceptually. We asked, "What’s kind of the implosion shell that we need to do?" There were many simulations and deep thoughts about how to organize consensus and how these different colored threads would work and run in parallel. There was also discussion of what a new incentive scheme would look like.
This has to be made interoperable with where we’d like to go with fee markets or something like that and fast finality in Genesis, along with all the other things we currently take advantage of in that single-sharded environment. Is this necessary to solve in 2024? Not really, because the stuff over here is going to make Cardano more competitive, better, faster, cheaper, add more programming models, and give many different ways to scale. These things are under the control of individual dApp builders, community open-source projects, and they provide enormous throughput potential. You want to get to the system over here because ultimately, what this does is change your trade-offs from the constraint of the consensus algorithm itself to whatever the network is able to process and whatever the data layer of your system is able to process.
In other words, it creates a system that enables scalability for Cardano for the next decade or so. There’s an ongoing, very aggressive research thread called Ouroboros Leios, and they’re making really good progress. Peras is the high priority right now because fast finality is required as one of the last pieces to get side chains to work really well. That same team is also working hard on Leios, and Well-Typed, Duncan’s firm, did a lot of the historical modeling. The Vasil hard fork includes all the Plutus improvements that have happened, and a lot of people have strong opinions one way or the other. That's why decentralized governance is so important; those opinions need to be included and discussed.
I hope this gives some clarity. When you see arguments on the internet, they often become very simplistic, and I don't think people understand how these decisions are made. They need to understand the properties of scalability: what do you want to be predictable, what resources are you talking about, and what is your scalability model? What type of trade-offs are you willing to accept? What principles do you care about?
Do you want egalitarian participation? One thing I care a lot about is input endorsers. I’d like all the developments happening outside the system to eventually work their way into it. I want this to be something that a Raspberry Pi or a cell phone could potentially participate in. This adds a lot more complexity to the protocol, but it means we leave fewer people behind.
More people can contribute, resulting in a more resilient system. Input endorsers represent a huge long-term win, and there are many very good ideas. Critics may watch this video and say, "Charles, input endorsers are years away," and that’s unfortunate, but that's just where we are. Recently, I did an AMA, and I was asked what I think of Algorand. I'm an adult, and a lot of people at Algorand are adults too.
I mentioned that there are two principal entities: the corporation that Silvio runs and the foundation. Our exposure and work have been primarily on the corporate side. We have a great relationship with them, know the leadership, and have calls with them from time to time. We always find ways to collaborate and work together because we’re adults, and there’s cool stuff that benefits both ecosystems. My experience with the foundation is that they poached our John Woods.
We talk to him from time to time because we’re on good and friendly terms. However, I’ve heard firsthand that some leadership there has a very adversarial view of Cardano. I’m not sure why, because we’ve never really interacted with them, and we don’t view them as competitors; we view them as friends, at least on my side. To illustrate, if you go to the official Algorand Reddit, you’ll see it compresses down to this: "Charles Hoskinson calls out the Algorand Foundation as prickly, hyper-aggressive, and adversarial." In a recent AMA, I also called Algorand brittle on the incentive set of staking.
It is brittle when you have less than 10% participation. My understanding is that a lot of tokens are being created for a group of people to take over and create an incentive system when you have 10% participation versus 74%. There are trade-offs and differences. At least they provided a quote, but if you look at the commentary, it’s filled with claims like, "You’ve got to wait three to five years for the roadmap with layer twos." People say that, and then they resort to deep personal attacks.
They claim I don’t like Stacy, but I’ve never even met her. They call me a lying narcissist and a great salesman, while also labeling me as an argumentative con man. I’ve never seen that kind of rhetoric before. They say I’m a snake oil salesman and accuse me of lying about my education. The comments are mostly noise.
They claim I’ve praised Algorand several times in the past, suggesting a change in my demeanor toward the foundation. However, when the foundation gives grants and tells recipients they can’t do anything on Cardano, that’s not hypothetical; it actually happened. This brings up a broader point about fiction and reality. We are adults, and I talk to you like adults. These are very complicated systems that require people with PhDs to understand them.
What we’re trying to do is break down the complexity and simplify it. As we move toward decentralized governance, the community will be in the driver’s seat regarding which protocols to select. Engineers in the community will build those protocols, accepting that they won’t always have the best trade-off profiles. You can always make things better, faster, and cheaper, but that often leads to more brittleness. It’s much harder to preserve and protect decentralization and resilience while improving system performance.
There have been eight years of research, and probably about 30 of the 180 papers in our portfolio are strictly about the Basho era. There have been enormous wins. Hydra didn’t exist as code three years ago; now it’s on mainnet, and people are using it. An open-source ecosystem is growing around it, evolving every week. Two years ago, Mithril existed only as a paper; now it’s on mainnet, working its way into wallets throughout the year.
The sidechain work is now at the code stage and will be on mainnet sooner than people think. It will add many dimensions of complexity to the system. The same concept applies to roll-ups and data availability. They work their way in, and yes, you have to implement things like tiered pricing and Babel fees and discuss the trade-off profile. However, every now and then, you create revolutionary new protocols.
We are among the most cited and respected research groups in the world. When you look at the raw citation count, there are thousands of citations for things like parallel chains, Ouroboros, and all the different papers, including Ledger Redux, that discuss how to put a protocol together that moves the trade-off window. You keep what you want while gaining something new and leaving fewer people behind in the consensus process. Along the way, you can’t achieve perfection. You have two choices: wait a little longer for science to catch up and give you something more, or make a trade-off and unfortunately lose something you used to have.
It’s above my pay grade to make that decision, which is why Voltaire is essential. It puts people in place to make that decision. The MBO structure we’re proposing for open-source development becomes part of the product backlog. A bunch of engineers from all over the world come together to build and deploy that. Another thing to notice is that Solana goes down a lot.
Why? Because they chose a different trade-off window. They built a model that’s very fast but very brittle. It also has a huge toxic waste problem that proof of history creates, which burdens the chain. You can’t throw that data away because it’s part of your consensus.
They hide that from the user because the set of people who can participate is very small. There are no Raspberry Pis making blocks; that’s their decision. They did this because they wanted to be the fastest in the market, especially during a time when DeFi was exploding, and people didn’t care about decentralization or operational resiliency. They cared about different things that were local short-term concerns. In the long term, did that benefit them?
We don’t know; it’s for the markets and people to decide. In five, ten, or fifteen years, we’ll start getting answers to that question. The point is, you don’t know today. So when you try to make decisions about decentralization and scalability, you have to ask what is easy for everyone to participate in and what is the really hard stuff that requires a lot of foresight, understanding that you’re changing something very valuable. Cardano, which our competitors never mention, has been up for over 2,100 days, 100% of the time, 24/7.