On Hydra Scaling
Summary
- Charles Hoskinson discusses Hydra, addressing misinformation about its throughput and the project's progress.
- Aggelos Kiayias's 2020 blog post outlines Hydra's potential, stating a single Hydra head can achieve up to 1,000 TPS, with hypothetical scaling to 1 million TPS.
- Simulation data from engineer Philipp Kant confirms concurrent TPS of about 1,000.
- Comparisons are made with other networks, such as the Lightning Network achieving 10,000 TPS and the Chop Chop protocol reaching 43 million messages per second.
- Cardano's transaction model differs from traditional TPS systems, focusing on complex transactions with multiple outputs rather than raw transaction counts.
- Hydra is evolving to include middleware for developers, enhancing off-chain capabilities and reducing main-network load.
- Hoskinson emphasizes the importance of accurate representation of Hydra's capabilities and the impact of misinformation on public perception.
- The extended UTXO model offers advantages for scalability and efficiency, with ongoing improvements in proof languages and transaction sizes.
- Future developments include advancements in side chains, finality reduction, and ongoing collaboration within the Cardano ecosystem.
- Hoskinson calls for community vigilance against misinformation and highlights the extensive research and development backing Cardano's technology.
Full Transcript
Hi everyone, this is Charles Hoskinson broadcasting live from warm, sunny Colorado. Today is October 5th, 2023, and I wanted to make a video to talk a little bit about Hydra, prior comments, and some unpleasant commentary that people have been sharing. There are a few people on the internet who are claiming that there’s a great degree of dishonesty from myself, in particular, and others in our organization about the throughput of Hydra and prior historical statements. Facts are tricky things, and I know people on the internet don’t like them, but let’s go ahead and look at some source material and discuss Hydra as a project and where it’s going. I’m going to share my screen here.
First things first, here is a blog post that Aggelos Kiayias wrote back in 2020, talking about Hydra: Experimental Validation of the Hydra Head Protocol. As a first step towards experimentally validating the performance of the Hydra Head protocol, we implemented a simulation. The simulation is parameterized by the time required by individual actions, such as validating transactions and verifying signatures, and carries out a realistic and time-correct simulation of a cluster of distributed nodes forming a head. This results in realistic transaction confirmation time and throughput calculations. We see that a single Hydra head achieves up to roughly 1,000 TPS.
By running 1,000 heads in parallel, one for each stake pool of the Shelley release, for example, we should achieve a million TPS. This is basically hypothetical. So, can we just reach any TPS? In theory, the answer is yes, but there’s more to the story, and we discuss that in detail. This was written three years ago by the chief scientist of the organization, not an off-hand remark from me.
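The hypothetical in the blog post is linear arithmetic: heads are independent off-chain channels, so aggregate throughput scales with their count. A minimal sketch, using the post's hypothetical figures rather than measured data:

```python
# Back-of-the-envelope sketch of the 2020 blog post's scaling claim.
# Each Hydra head is an independent off-chain channel, so in theory
# aggregate throughput grows linearly with the number of heads.
# These are the post's hypothetical figures, not benchmarks.

TPS_PER_HEAD = 1_000   # simulated throughput of a single Hydra head
NUM_HEADS = 1_000      # e.g. one head per Shelley-era stake pool

aggregate_tps = TPS_PER_HEAD * NUM_HEADS
print(aggregate_tps)  # 1000000, the "million TPS" hypothetical
```

As the transcript notes, this linearity is exactly why the number is "basically hypothetical": it assumes the workload partitions cleanly across heads with no cross-head coordination.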
Now, let’s take a look at a TPS chart generated by Philipp Kant, who was an engineer working with us at the time. This is actual data generated from a simulation that we ran locally, showing concurrent TPS at about 1,000 TPS. Does this make sense? When we look at competing literature on throughput limitations of off-chain payment networks, a credible group of researchers achieved about 10,000 transactions per second using the Lightning Network, which is a simpler network. If we look at other literature regarding Byzantine Atomic Broadcast, Chop Chop, a state machine replication protocol, achieved 43 million messages per second with an average latency of 3.6 seconds. So, there’s a lot of literature floating around: competing literature, more fundamental protocols, and simulation data that shows you can achieve high throughput with these channels. Three years have passed, and we’ve all learned a lot. You have the Hydra website, where you can see the live roadmap of Hydra Head, which we’re implementing in a completely open-source and public way on Mainnet, not a testnet. There’s a large ecosystem starting to form around Hydra, and you can see the various features they’re focused on and what they’re doing.
At any given time, you can sculpt these protocols to maximize throughput. I will remind everybody that throughput means something different in a UTXO system than it does in an account-based system. If that is lost on you, let’s go ahead and share the screen again and go to eo.org. This is one of my favorite websites because it gets the point across repeatedly.
This is a live view of the Cardano network from a transactional perspective, block by block. This block was empty, and we’re just going to wait a few seconds for a new block to arrive. Let’s take a look at it and see what’s in it. A few transactions are in it. This block right here, with 384 outputs, represents one transaction where 384 things happened in that one transaction.
That’s the same as this transaction right here with three outputs. So, we only fill 44% of it. You can go very fast with this particular network, and these transactions are only going to get more dense and have more things happening per transaction. Cardano is not a TPS system; it’s a transaction-per-transaction system, and every one of those outputs can be more sophisticated—scripts, proofs—not just the raw movement of value. So, what does a thousand TPS mean?
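The point being made here is that raw TPS undercounts useful work in a UTXO ledger, because one transaction can carry many outputs. A rough, illustrative measure is transactions per second multiplied by outputs (effects) per transaction; the figures below echo the examples in the video, and are not benchmarks:

```python
# Illustrative only: in a UTXO ledger a single transaction can carry
# hundreds of outputs, so "TPS" alone understates throughput.
# Numbers mirror the video's examples (a 384-output transaction vs
# simple one-output payments); they are not measurements.

def effective_events_per_sec(tps: int, outputs_per_tx: int) -> int:
    """State changes per second, counting each output as one effect."""
    return tps * outputs_per_tx

# 10 dense transactions/sec, each with 384 outputs...
dense = effective_events_per_sec(10, 384)
# ...versus 1,000 TPS of simple one-output payments.
simple = effective_events_per_sec(1_000, 1)

print(dense, simple)  # 3840 1000
```

Under this measure, a small number of dense transactions moves more state per second than a much higher count of atomic ones, which is the comparison the transcript draws.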
It just means that you can take a very simple payload, put it into a channel, run those channels in parallel, and then do lots of them. But in practice, does that make any sense? It would make sense in a video game, it would make sense in micro-tipping, and it would make sense for a variety of off-chain applications. But that’s not where Cardano is at. Cardano is doing large NFT drops, DEX transactions, oracle transactions, Djed transactions, and complex, rich smart contracts with lots of things going on.
What happened over a three-year period is that Hydra pivoted a little bit. It pivoted into building some middleware that’s going to be really easy for developers to plug into their applications, working with Plutus to help get a lot of that complex logic that should not run on the main network but should run in a different network, like batching, voting, and event-oriented programming. What’s happened is an open-source ecosystem forming around that concept, and they’re building applications that are driving the roadmap of Hydra. Hydra is evolving, and it’s being done in public; everybody can see it. It’s running on Mainnet right now, and over time, those types of applications will ferment and become very standard.
This lowers the overall load of things happening on the Cardano main network, which is the intent. At any given time, you can push protocol development to massively increase throughput if you really wanted to. What use cases would we want for that? There might be a collection of dApps where that makes sense, and they would build those capabilities on top of the floor that’s been constructed. The point of prior comments was that people kept trying to advertise, “In 2020, we have this many TPS.” What we’re trying to say is that’s not how these systems work in practice. Yes, you absolutely can achieve high TPS, but it’s a pointless empirical metric. It means nothing if one transaction can have 384 things happen in it. Wouldn’t you much rather have 5,000 things happen in that transaction and have 10 of those, as opposed to 1,000 TPS of atomic transactions where you only change one thing? You get a lot more throughput that way and also use a lot less space.
Solana is over 200 terabytes in size; it’s approaching a petabyte if it continues down this path because everything’s on-chain. That’s what you have to do to achieve high throughput at layer one in these current-generation protocols. There are plenty of protocols we’ve been investigating, like Ouroboros Leios, the input endorsers workstream that will allow mass parallelization and a high-speed layer one. We’re also looking at rollups as an ecosystem, side chains as an ecosystem, and yes, evolutions of Hydra. I made a 45-minute scalability video about these things, but what people do is take a single quote, a single paragraph, a single notion that was there to elucidate a broader point, emphasize that, and then ascribe dishonesty, which is extraordinary to me.
Given that so many people are commenting on this, spreading misinformation, at some point, it warrants an explanation. Everything I say has source material behind it—either a blog post, a simulation, a paper, something. I just showed my sources, written by people who are held to high standards and produce evidence. I don’t appreciate people running around saying, “Oh, he’s just making it up,” or that his former employees PM me and tell me, “Oh, he’s just lying.” It’s not okay, it’s not fair, and it has to stop.
The internet has become a cesspool where anything goes, and misinformation spreads. People absorb it, and it does have an impact. In the Ethereum community today, there are still developers who honestly believe that Cardano can only process one transaction per block for smart contracts. They actually believe this, and when you show them all these dApps, they say they can’t exist. They say, “You can’t do that in extended UTXO,” and when you show them the code running live on Mainnet, they say it must be fake.
There must be some smoke and mirrors here because they can’t get past information that was misrepresented in 2021. What does this mean? It means that when they’re recommending systems to people, they say, “Don’t even consider Cardano; it can’t work by design.” Meanwhile, we have the single best paradigm for on-chain, off-chain, and for side chains. Channel isomorphism is awesome for where the entire industry is going.
We have the single best paradigm for applying rollups long-term. Why? Because rolling up these stateless outputs in a UTXO model is a lot easier than this global state system of accounts and all the complexity that exists there. Trying to manage that non-determinism is the enemy of distributed systems; we’re on the deterministic side. This has a real impact on adoption and people’s opinions of the system overall.
Every time a lie is said, it takes ten times as much effort to undo that in people’s memory. So, when people run around saying Hydra’s failed, we lied about Hydra, and there’s no way to achieve any of these performance claims, what they’re really doing is taking the hard work of dozens of people and everyone building on it and saying it just doesn’t exist. It is not currently an emphasis of the Hydra protocol to try to maximize throughput; it doesn’t make any sense in this network. We’re not even using all of the layer one throughput at the moment. It’s much more about enabling the applications that people want to have and giving them a more graceful way to transition to off-chain and on-chain, doing things like batching and all the nice applications that people want, like gaming, the metaverse, and NFT drops.
These are the discussions that the team is having; this is the direction of the technology, and you can go straight to the source material on GitHub and see that. But people ignore that and say, “Oh, well, because you don’t have a channel that’s running, I guess, spam transactions a thousand per second, obviously that’s a failed project.” Well, who needs that? Where’s that going to come from? What use case is currently in the system that requires that level of throughput?
And also, where’s all that stored? Who’s paying the price for that? It just doesn’t make sense. We’ve made videos throughout the years and had discussions about these things, and the nuance is lost. So, every now and then, I just have to make a video to set the record straight and say that there is evidence for all these things, especially given that they’re deployed and live on-chain.
Simulations were always done. Anytime you look at a fundamental technology, you always do the simulation and look at the output and the result. Of course, they come to me and to a lot of other people. There are many discussions that have been had, and these were all public and published years ago. No one seems to care when they lie.
This can be a reference point for people to post and refer back to when they see the lie because we, as a community, have to be vigilant about these things and hold people accountable when they do so. Open-source projects are complex; they’re multifaceted, and technology is ever-evolving. If you look at the history of Cardano, some of the core assumptions we made have stayed true, giving us a long-term competitive advantage. Liquid non-custodial staking with Ouroboros is the greatest example of that, whereas Ethereum went down a very different road, which I believe is going to centralize their network and create unpleasant regulatory realities, especially in Europe. When you look at extended UTXO, it’s very easy to graft on an accounts model, but at the core of extended UTXO, you get all the benefits that Satoshi saw back in 2009 and why he chose that system over an accounts-based system, which is a simpler system.
Those benefits include the ability to seamlessly move on-chain and off-chain through isomorphic state channels. They include predictable pricing, easier rollups, and the ability to batch large amounts of activity together. All you have to do is increase the sophistication of your proof languages and your underlying script language. It takes a while for these things to evolve, and you’ll notice that Plutus 1 is a very different animal from Plutus 2. In many cases, there’s a tenfold reduction in transaction size from Plutus 1 to Plutus 2.
Something that was multiple kilobytes has gone down a factor of ten. Most app developers have upgraded to Plutus 2, and you can see those amazing savings in just one generation of the language. Imagine where we’ll be in three to five years with the continual evolution of these things. Imagine what it will be when the things on the wire are not just scripts but complex proofs involving SNARKs that are a roll-up of many off-chain activities and how much throughput a system like that can get. You don’t change the underlying core of extended UTXO, and Hydra is another case study in this.
Over time, as the years go by, it gets more sophisticated. A tail protocol is added in, and provisions for high throughput are added in. Finally, when applications require inter-head communication, there are plenty of great protocols for that, and it just becomes an indispensable piece of middleware that developers have alongside all the other middleware in the ecosystem. Mithril is another great example of that. Entire blockchains are built to try to achieve what Mithril has achieved for us.
It’s already running on Mainnet, people are playing around with it, and the next generation is already under design. At some point, it works its way into the standard node as part of the software that stake pool operators run. There are almost 200 papers behind Cardano, a massive ecosystem of researchers and engineers. Were they all just wasting their time? Were all of them just doing nothing practical about these things?
No, the paradigm itself was built for the future, and it takes time to open up and realize. Together with the developers on Cardano, the governance advancements on Cardano, and everything along the line, all of these things together is how you evolve an ecosystem. Bitcoin would die to have what we have, and frankly, Ethereum is chasing it, and they can’t get there because of poor design decisions with the EVM and the account model. It’s very easy through side chains for us to borrow what they have; it’s a lot harder for them to get what we have in terms of scale. That’s a matter of time.
When you look at our future, it’s very bright, and I believe very strongly in the design choices that were made. These were not easy design choices. The use of formal methods, writing Agda specifications, verifying that the code is correct, all of the property-based testing that’s done, the use of Haskell as a base language, and all the things that had to be done to make that protocol great. It’s not perfect. There are certainly major improvements that need to be made with interfaces and a better data availability story. We’ve got to get the side chain story out.
Finality? Sure, we have plans already to reduce that from 12 hours to 10 minutes on side chain transactions. It’s stuff that comes iteratively, and yes, it has to be more open-source. There has to be more collaboration from many different companies, and that was the point of Intersect. But every step of the way, nobody gives up; they don’t go home.
We’ve been working on this since 2015 as a movement—literally millions of lines of code. No matter how many bear markets come, how many cycles come, people show up every day and work hard on this. But then we get called liars; we’re told we’re being dishonest. That’s the nature of cryptocurrencies, and that’s what makes this space so remarkably difficult for people. You do everything; you tell everybody upfront what you’re trying to do, what you’re trying to accomplish.
You document it rigorously, you go and do it, and then people claim you didn’t even do anything. They say you’re just a wallet because they can’t understand what’s been accomplished. The only metric they have is price, and it’s one big video game for them. Meanwhile, it’s the life’s work for other people. Then they ask, “Why don’t we have mainstream adoption?
Why do people not come in?” If you’re being punished arbitrarily for picking one standard over another, why would you invest? Why would you put effort into the ecosystem and space? If you’re trying to make contributions to an open-source project and those contributions are ignored, how long does that have to happen before people say, “We don’t want to be there”? If the people who have been around for nearly a decade now, working hard every single day, don’t get any benefit of the doubt year after year after year, then what chance does a normal person have who’s just fair-weather?
You get what you put in. We’re just going to keep chipping away at it. Hydra is not going anywhere; it just keeps getting better. Mithril keeps getting better. We keep making advancements to the theory of extended UTXO and the theory of Ouroboros.
There are all kinds of extensions that have been written down and are actively being developed right now, from finality gadgets to Genesis and other such things. There’s a rich strategy for side chains coming that I think people will be very happy with, and there’s already great technology being built in the side chain layer. It’s not hypothetical; great teams are doing that, like Midnight, for example. The drum beats louder and louder. It takes a village; each and every one of us has to defend what we’ve created.
Cardano is a global ecosystem; it’s got millions of people. When you see dishonest things, step up and fight for it. We have to protect what we’ve achieved together, and we can do so easily because, at the end of the day, we’re standing on bedrock. Every single protocol in Cardano is based on some form of peer-reviewed foundation—thousands of discussions, difficult engineering conversations with reason and logic behind why they were built, and rigorous specifications. This is the life’s work of so many engineers, some still here, some now somewhere else.
We owe it to all of them and all the time they put in to make sure that the work is referred to fairly and accurately. Just that simple. So now you have a video. Anytime you see anything about Hydra, just post this one, and we’ll keep chipping away at it. I’ll see everybody at the Cardano Summit.
Cheers!