
Whiteboard: DApps and Development

Sunday, January 2, 2022 · 1:05:10 · 84,360 views

Full Transcript

Hi, this is Charles Hoskinson, broadcasting live from warm, sunny Colorado: always warm, always sunny, sometimes Colorado. It's a new year, January 2nd, 2022, and it's going to be a fun year. There are going to be a lot of things that happen, a lot of things to learn, to do, and to build. As I mentioned, we're going to change the podcast format gradually over time, build a studio downstairs, and have a lot of fun with that. We're also going to do a lot more pedagogy, more education, more teaching; some will be done by me, some by Lars and others. To start the new year off on a Sunday, I figured it would be fun to do a whiteboard video and talk about dapps, talk about DeFi, talk about development.

But first, I actually got some new books in. I'm trying to build a nice meditation practice, and I looked at Jon Kabat-Zinn's recommended reading; he recommended about 30 books, and some of them are trickling in as Amazon ships them to me. Why Meditate? by Matthieu Ricard; for those of you who know him, he's a French monk eponymously known as the happiest man alive. If you're a Chan master, Hoofprint of the Ox, an oldie but a goodie. And Zen books always have the best names, so this one's from Elizabeth Hamilton: Untrain Your Parrot: And Other No-Nonsense Instructions on the Path of Zen. Just a little side thing.

Okay, let's get to it, shall we? I've got my coffee; this is going to be a long one and a fun one, so put on your hats, put on your boots, and get strapped in. First thing: share screen, and boom, there we go. If you're really starting to dig in and gain a better understanding of Cardano and the development experience, what I'd recommend is starting with the introduction at docs.cardano.org. It's up here on the screen, and it covers a whole area: Plutus, Rosetta, and other components like native tokens and so forth.
You can learn about Plutus, understand the extended UTXO model, the ledger model, how Plutus scripts work, and so forth. If you want a deeper introduction to Plutus, there's a paper written a little while ago by Jamie Gabbay and Lars Brünjes: "UTXO vs. account-based smart contract blockchain programming paradigms". It compares and contrasts what people are used to in the Ethereum world with what people are doing now in the Cardano world with extended UTXO. I'll read the abstract and the conclusion to give you an idea. "We implement two versions of a simple but illustrative smart contract: one in Solidity on the Ethereum blockchain platform, and one in Plutus on the Cardano platform. With annotated code excerpts and source code attached, we get a clear view of the Cardano programming model. In particular, by introducing a novel mathematical abstraction which we call Idealized EUTXO, for each version of the contract we trace how the architecture of the underlying platform and its mathematics affects the natural programming styles and natural classes of errors. We prove some simple but novel results about alpha conversion and observational equivalence for Cardano, and explain why Ethereum does not have them. We conclude with a wide-ranging and detailed discussion in the light of the examples, mathematical model, and mathematical results so far." When you go down to the conclusion of this paper (and it was written before we launched Alonzo, so there's even more to it now): "We hope this paper will provide a precise yet accessible entry point for interested readers, and a useful guide to some of the design considerations in the area. We have seen that Ethereum is an accounts-based blockchain system, whereas Cardano, like Bitcoin, is UTXO-based, and we have implemented a specification in Solidity and in Plutus,"
"and we have given a mathematical abstraction of Cardano, Idealized EUTXO. This raised some surprisingly non-trivial points, both quite mathematical and more implementational, which are discussed in the body of the paper. The accounts-based paradigm lends itself to an imperative programming style: a smart contract is a program that manipulates a global state mapping global inputs (accounts) to values. The UTXO-based paradigm lends itself to a functional programming style: the smart contract is a function that takes a UTXO list as input and returns a UTXO list as output, with no other dependencies." This is the heart of the matter.

Now, I am not a dogmatic person. I believe you need a multi-paradigm world, and a multi-paradigm world means blending both functional and imperative styles if you're going to do useful things. There are many subsets of applications and expressions where a functional-only style could work, but in practice you'll find certain things may be easier to do in an imperative style. So from the very beginning there was a question of expressiveness and model. Let's start with expressiveness. When you look at Bitcoin, Bitcoin Script is kind of the foundational model of our industry, because that's where we started in 2009, and it does have that functional-like experience: it's a stack-based assembly, based on a language called Forth. You have this UTXO model, with inputs and outputs that are operated on by some sort of function, under terms and conditions defined by the scripting language, and there's a big limitation on the level of expressiveness you have. At the other end of the spectrum you have things like the Java Virtual Machine, the Common Language Runtime, and so on.
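The functional-versus-imperative contrast in that quoted conclusion (a contract as a pure function from a UTXO list to a UTXO list, versus a program mutating global account state) can be illustrated with a toy model. This is a simplified sketch for intuition only; the names `Utxo`, `apply_tx`, and `account_transfer` are invented here and are not the Plutus API:

```python
from dataclasses import dataclass

# Toy UTXO-style ledger: state is a set of unspent outputs, and a
# transaction is validated by a pure function of its inputs and outputs alone.
@dataclass(frozen=True)
class Utxo:
    tx_id: str
    index: int
    value: int

def apply_tx(utxos, inputs, outputs):
    """Pure transition: consume `inputs`, add `outputs`, conserving value.
    Returns a NEW frozenset; the old ledger state is never mutated."""
    if not inputs <= utxos:
        raise ValueError("input not in UTXO set (double spend or unknown)")
    if sum(u.value for u in inputs) != sum(u.value for u in outputs):
        raise ValueError("value not conserved")
    return (utxos - inputs) | outputs

# Account-style contrast: a global mutable mapping, updated in place.
def account_transfer(balances, src, dst, amount):
    balances[src] -= amount          # global state mutated as a side effect
    balances[dst] = balances.get(dst, 0) + amount

ledger = frozenset({Utxo("genesis", 0, 100)})
new_ledger = apply_tx(
    ledger,
    inputs=frozenset({Utxo("genesis", 0, 100)}),
    outputs=frozenset({Utxo("tx1", 0, 60), Utxo("tx1", 1, 40)}),
)
print(sorted(u.value for u in new_ledger))      # [40, 60]
print(Utxo("genesis", 0, 100) in new_ledger)    # False: it was consumed
```

Note how `apply_tx` depends only on its arguments: given the same inputs it always produces the same outputs, which is exactly the locality and determinism discussed below.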
These are full program environments with maximal expressiveness: you can run full applications. On the JVM, for example, you have Minecraft, and lots of things are built in C#; you're quite familiar with that. Then you have things like the EVM, which says: we need to restrict some things, because with maximal expressiveness things are too open, and thus we have security issues. So where do we put UTXO on this spectrum? Bitcoin Script sits at one end, and there's a question mark over where you draw the line; you could certainly replicate what the EVM is doing. One of the first parts of the research agenda, when we were looking at taking the UTXO model (because we really like this functional idea, and we'll talk a bit about its value proposition), was to make it a little more extensible. So we wrote the extended UTXO model, and really what we said is that for a large class of applications, the sweet spot will be somewhere in the middle, between what you can do with Bitcoin (the most valuable and most used cryptocurrency at the moment) and what the EVM is doing. Because the question about the space between those two is: is it really needed? Are the features and functionality that create a larger, more open attack surface really required for DeFi? For oracles? For stablecoins? For your NFTs? For any of the dapps we see in the industry? If they're not required, you're not using that expressiveness, but you're paying for it, and in very particular ways. First, the imperative model the EVM uses is very hard to scale in practice; the introduction of a global state is one of the reasons, but there are others. That's one cost. Another cost is that there are many things you can do, and just because you're not doing them doesn't mean an adversary can't exploit them.
You have to be very careful with your expressiveness. This is one of the reasons the Bitcoin crowd stays where they are: they've created over a trillion dollars of value with their use cases, and they say, the more we push in this direction, yes, we gain more programmability, more expressiveness, more potential for value, but that openness also creates issues for us. So what 2022 is effectively about, as we build a DeFi and dapp ecosystem for Cardano, is asking what is actually required in this new model. And if other things are needed to start replicating and emulating that functionality, then you can write CIPs, Cardano Improvement Proposals; there are already three that have been written in direct collaboration with the DeFi vendors to increase the expressiveness and allow more things. That's really what the next six to nine months are about as we build out this DeFi ecosystem together as a community: getting that sweet spot of expressiveness. Extended UTXO is a completely new model; no one had done it before. We had UTXO; we invented extended UTXO. Ergo was first to market with this concept, and they've already given a lot of great advice and clarity, so we said, wow, that's something to learn from, and as they grow and we grow, we're learning where that sweet spot needs to be. Now, if we preserve that functional style and locality (local state, instead of a global state to be manipulated), what's nice is that you also get a lot of security. Why? Because a contract is really a mathematical object that is very easy to model. It's very easy to understand your preconditions, your postconditions, and your invariants; very easy to use property-based testing; and very easy to write specifications. And if you can write specifications, you can do a lot.
You can do property-based testing, design by contract, all kinds of great things. Then you can start doing verification against those specifications, and those verifications allow you to certify software against standards. What does that mean? It means you can get high assurance that the smart contract as implemented is correct. Now, you can do this in the imperative setting; people do it all the time. But it's very expensive and very time-consuming. How you write your code, how you design your program, how inputs and outputs are handled, the purity of the functions in the system, the amount of side effects, the number of things the system can do: all of this leads to a combinatorial explosion of possibilities, and that explosion tells you the bounds of how expensive it's going to be to get high assurance that your software is correctly written. This is one of the core reasons why, instead of starting with an account-based model and dialing it back like Ethereum did, we chose to start with a functional UTXO model and dial it forward. When you dial back, you're taking away; when you dial forward, you're gradually adding, and you add based on need. Each of those needs you can think about deeply in the context of preserving features you care about, for example deterministic cost: when you build your smart contract, what you think it's going to cost is actually what it ends up costing. Or local state versus a global state where things change in unpredictable ways; every good programmer knows that the more you can keep in a deterministic world rather than a non-deterministic one, the better, because it gives you much more certainty about the predictability of a contract's behavior. The other reason we chose to start here and dial forward, instead of starting there and dialing back, is that as you scale up expressiveness you can also model performance very easily.
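Because a UTXO-style validator is a pure function, property-based testing of the kind mentioned above is straightforward: generate random inputs and check that invariants hold on every run. A minimal sketch (the `transfer` function and its invariants are illustrative, not part of any Plutus testing framework):

```python
import random

def transfer(balance_in, amount):
    """Toy pure 'contract step': split one input value into two outputs."""
    if not 0 <= amount <= balance_in:
        raise ValueError("invalid amount")
    return (amount, balance_in - amount)

# Property: for all valid inputs, value is conserved and outputs non-negative.
random.seed(0)  # reproducible test run
for _ in range(1000):
    balance = random.randint(0, 10**9)
    amount = random.randint(0, balance)
    a, b = transfer(balance, amount)
    assert a + b == balance          # invariant: value conservation
    assert a >= 0 and b >= 0         # invariant: no negative outputs
print("1000 random cases passed")
```

Purity is what makes this cheap: there is no global state to set up or mock, so each random case is independent and the invariants can be checked directly on the return value.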
And with Bitcoin we have information going back to 2009, 13 years ago; think about that, 13 years of history we can draw from in that particular model to understand how to make it more performant. So let's talk about that. First, look at blocks, because that's what a blockchain system does: they're a heartbeat. You have a time between blocks in which transactions happen and get aggregated. The most obvious way to scale is to start with the blocks and make them bigger, and we're actually doing that: with node 1.3.3 we're going to restart the block-size agenda we started last year (I think we increased it by about 12.5 percent last year). So the blocks will get bigger, and this is part of a larger optimization program led by an interdisciplinary, inter-company team, covering everything from library optimization (led by Galois, a military contractor) to improving efficiencies inside the core node software and the network stack. Each time that's done, the blocks can get bigger, and bigger blocks mean more transactions fit inside. It's a continuous process: 1.3.3 lands around mid-January, and once that happens blocks start getting bigger, and that will keep going until we reach a maximum threshold that's really determined by propagation. We want 95 percent of peers to have received a block within five seconds; that's the empirical measure we're using. You look at the one-second, three-second, and five-second propagation times, and you keep increasing the block size until you get there. And there are tons of things you can do to optimize propagation: data structures that can be put in, and a lot of coding theory; for example, you'll hear terms like fountain codes, which are being used in various places in the industry.
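The tuning loop described above, growing the block until the five-second, 95-percent propagation target is at risk, can be sketched as a back-of-envelope search. All the numbers here (hop count, per-hop latency, bandwidth) are made up for illustration and are not measured Cardano values:

```python
def propagation_time(block_bytes, hops=5, latency_s=0.1, bandwidth_bps=10_000_000):
    """Rough relay-chain model: each hop adds fixed latency plus transfer time."""
    per_hop = latency_s + (block_bytes * 8) / bandwidth_bps
    return hops * per_hop

def max_block_size(target_s=5.0, lo=0, hi=50_000_000):
    """Binary search for the largest block meeting the propagation target."""
    while hi - lo > 1024:
        mid = (lo + hi) // 2
        if propagation_time(mid) <= target_s:
            lo = mid
        else:
            hi = mid
    return lo

size = max_block_size()
print(f"~{size / 1_000_000:.1f} MB fits the 5 s budget under these toy numbers")
```

The point of the sketch is the shape of the constraint: block size trades off linearly against per-hop transfer time, so every network-stack improvement (fewer effective hops, better coding, more bandwidth) directly buys room for bigger blocks.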
There's some great research out of Arizona State University funded by the Dash community, and they've talked about that. There are also great papers being produced out of Stanford, from David Tse's lab; David is a professor there studying the co-development of proof of stake and the network stack, and he recently published a great paper. So that's one thing to do: improve your propagation rate, optimize, and then you can expand the blocks. That gets you somewhere. Second, when you look at the block cost, it looks like a heartbeat, and between beats there is dead space, dead time. If I'm a node actually processing blocks, there are periods where I'm not doing anything, then bursts where I'm doing a lot, then nothing again. Pipelining is basically about being smarter with that: do work during the dead space, and you effectively increase the throughput of the system even more. That's what we're doing in tandem with the block size increase; both are happening. We're optimizing libraries to make things faster; optimizing the core node to reduce syncing time and so on; optimizing the network stack so the propagation window is better and we can still get 95 percent broadcast within five seconds even as blocks get bigger; and then giving the nodes that process blocks more work to do during the dead time. That, simply put, is pipelining, and it's also underway by the core team, in parallel to the other optimization efforts. Then there's the question: wait a minute, why don't I do blocks in parallel? This is something we designed for with Ouroboros from the very beginning; we really wanted to do this, and I even mentioned it in my 2017 whiteboard video. So you'll see a lot of people talk about DAG protocols, and usually they break down to some concept of key blocks and input blocks.
That is, this concept of key blocks and input blocks, or something like that. You have these heartbeats, relatively aligned and synchronized, maybe 20 or 30 seconds apart, and in the meantime you have some intermediate process where lots of micro-blocks are being constructed and put into a different type of data structure; then somehow they're serialized and represented inside those heartbeats, those key blocks. We wrote a paper called "Parallel Chains" back in 2018 that explained this concept, and we came up with a concept called input endorsers back in 2016 with the original Ouroboros paper, so we've been thinking about this for quite some time, as has the industry. You'll see things like the Tangle with IOTA; you'll see different notions of consensus, like the metastable consensus Avalanche is doing; everybody has some concept of how to do more between the heartbeats. You'll also see sharding protocols, for example PolyShard, promoted from Viswanath's work, and Ethereum 2 is pursuing this as well. This is really the holy grail, if you can achieve it, with two caveats. One: you're constrained by network throughput. You can ratchet your TPS as high as you want in these models, a million TPS if you really want, as long as you're able to broadcast it and people are able to keep up. So, going back up, there's this issue of propagation: can you propagate what you need to propagate in those windows to maintain the security assumptions of the protocols? It's also a data representation question: what exactly are the people in the network validating? Maybe you move to a heterogeneous model where the people processing the blocks have a different view than the rest of the network, while still preserving inclusive accountability through some sort of structure.
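The key-block/input-block pattern described above can be sketched as a toy: micro-blocks of transactions are produced continuously, and each heartbeat serializes whatever has accumulated into one key block, collapsing duplicates. This is an illustration of the general shape only, not the Ouroboros input-endorsers design; the class names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class InputBlock:          # micro-block produced between heartbeats
    txs: list

@dataclass
class KeyBlock:            # heartbeat block that serializes input blocks
    height: int
    txs: list = field(default_factory=list)

def serialize_round(height, pending):
    """At each heartbeat, fold all pending input blocks into one key block,
    deduplicating transactions so duplicate/conflicting inclusions collapse."""
    seen, ordered = set(), []
    for ib in pending:
        for tx in ib.txs:
            if tx not in seen:
                seen.add(tx)
                ordered.append(tx)
    return KeyBlock(height, ordered)

pending = [InputBlock(["tx1", "tx2"]), InputBlock(["tx2", "tx3"])]
kb = serialize_round(1, pending)
print(kb.txs)   # ['tx1', 'tx2', 'tx3']
```

The serialization step is exactly where the "optimism" problem discussed next bites: if input blocks conflict heavily, the fold does a lot of discarding and the effective throughput gain shrinks.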
We're examining these types of things, for example with Mithril, where you hold a small part of your chain and can always verify things, while other actors have different views. If you do that, you may be able to ratchet TPS up a lot. The other issue is optimism, and I'm not talking about thinking positive thoughts; I'm talking about beliefs about the reality of network conditions. All DAG protocols, and all protocols that attempt this idea of input blocks between the heartbeats, have to have some degree of optimism that people are honest. You'll see a litany of attacks in the literature, particularly of the last five years, where people are not honest in the way they construct these things, you get conflicting transactions and blocks, and when the serialization step puts things back together, performance tends to degrade considerably. This is why in a lot of these protocols you'll see slashing, fraud proofs, bonds, and other economic mechanisms: ultimately you want to keep people honest, you want some notion of punishment, because if you don't, an attacker will sometimes come through those input blocks and damage network performance, not permanently, but for a period of time, and in some cases the performance will be less than a single-shard protocol, so you actually gain nothing. This is why we thought so hard about the theory with Parallel Chains and input endorsers, and why we thought so carefully about the co-evolution of the proof-of-stake protocol with the network stack: we really wanted to understand how all these pieces fit together, and we started building more sophisticated primitives for inclusive accountability. We don't necessarily need to implement these things with Ouroboros; in fact it may be counterproductive at the moment. There are no punitive measures we had to put in that require people to bond, slash, or produce fraud proofs.
But they could always be added in to further accelerate the system. So the question is, where do these things take you? Realistically speaking, under real network conditions in a Byzantine network, you can probably use this with pipelining to get to about 500 to 1,000 TPS with a mixture of scripts plus regular transactions, value transactions let's call them. That's really where you can go. Then, as you evolve the network stack and figure out more clever ways of propagating and better data representations, that window can increase over time, and you can perhaps put in some punitive measures to optimize further, especially if you wish to shard more deeply. Now, you can increase this even more if you want, but the safety of the system decreases, and almost always your centralization increases to a point where occasionally you even have to reset your network; I don't think that's a real cryptocurrency, but some people in the industry disagree, and that's fine, people will make decisions accordingly. So where are we at with this? Some blog posts are coming. We have the scientific design done, we understand the design space incredibly well, and after pipelining, input endorsers will be the number one topic to implement. I want pipelining, input endorsers, and the aggressive optimization agenda done this year. October is our deadline, because there are three hard forks: one in February, one in June, one in October. I want these things done this year. I don't really care about the cost or who has to be hired, internal or external; it could be millions, could be tens of millions of dollars, it doesn't [ __ ] matter, it needs to get done. We have the science done, we did the hard work of writing the papers, and this is achievable engineering. It should be done, it can be done, it will be done; we will find a way, because it's important as we dial up the expressiveness of the system.
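For intuition on where figures like 500 to 1,000 TPS come from, here is a back-of-envelope calculation. The block size, transaction size, and block interval below are illustrative assumptions, not Cardano's actual parameters:

```python
def tps(block_bytes, avg_tx_bytes, block_interval_s, utilization=1.0):
    """Upper-bound throughput: transactions per block divided by block time."""
    txs_per_block = (block_bytes * utilization) // avg_tx_bytes
    return txs_per_block / block_interval_s

# Illustrative: a 2 MB effective block every 20 s, ~200-byte value transactions.
print(tps(2_000_000, 200, 20))        # 500.0 TPS
# Doubling effective capacity, e.g. by filling dead time via pipelining:
print(tps(2_000_000, 200, 20, 2.0))   # 1000.0 TPS
```

The arithmetic makes the levers explicit: throughput scales linearly with block size and dead-time utilization, and inversely with transaction size and block interval, which is why the optimization program attacks all four at once.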
What's going to happen is that more use and utility will come, and this is really going to take us from millions to billions of users. There already are millions of users, so as we go to tens of millions, then hundreds, then eventually billions, the system has to handle that capacity. We're the number one developed cryptocurrency in the world in terms of GitHub commits; a lot of people wake up every day, almost 15 companies now, working on this, in many cases in parallel. There's no academic process in this engineering, no publishing a paper and waiting for peer review; those papers have already been published, almost 130 of them at this point, with more to come. That part's over: it's now specification, now engineering, and a coding problem is much easier to solve than original, novel science. So that's where we're at for on-chain. Now, there's more to say, because everything so far is the on-chain concept, and the whole reason you do extended UTXO is that its locality and determinism make it significantly easier to do things off-chain. There are three mechanisms for that which I'll talk about today. First, there's Hydra, the good old-fashioned payment and state channel idea. There's a great team there making phenomenal progress; there's already a Hydra node on the testnet, and we're playing around with it, doing things with it. You're going to see these blips where they do more and more through releases throughout the year; not much needs to be said, it's a rapid development process, and that team really wants to see it go. The SPOs are a very natural, organic set of people to operate and run Hydra. Lightning really paved the way; the problem with Lightning is that it's not expressive enough.
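The payment/state channel idea behind Hydra can be illustrated with a toy two-party channel: the parties exchange balance updates off-chain, and only the final state settles on-chain. This is a generic channel sketch for intuition, not the Hydra protocol itself (real channels also involve signatures and dispute windows, omitted here):

```python
class Channel:
    """Toy two-party payment channel: off-chain updates, one on-chain settle."""
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"a": deposit_a, "b": deposit_b}
        self.seq = 0                     # monotonically increasing state number

    def pay(self, frm, to, amount):
        if self.balances[frm] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.seq += 1                    # each update supersedes the last

    def settle(self):
        """Only this final state would be posted on-chain."""
        return self.seq, dict(self.balances)

ch = Channel(100, 50)
for _ in range(3):
    ch.pay("a", "b", 10)                 # three off-chain micro-payments
ch.pay("b", "a", 5)
print(ch.settle())  # (4, {'a': 75, 'b': 75})
```

Four payments happened, but the chain would only ever see the opening deposits and the single settlement, which is why channels are so attractive for microtransactions.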
So it has been exceedingly difficult in practice to get Lightning working with the promises that were made, which is why it's taken so many years for those vendors. Being more expressive means we can do this faster and better, and we're making great progress on it. That's one dimension: get transactions into a different layer-2 network and process them there. They're nearly feeless, extremely cheap, and there's a lot of load balancing that can be done; there are thousands of stake pools that can run channels, and they're going to be more and more involved as we move through Q1, Q2, Q3, and Q4. I told the team I really want to see some form of Hydra 1.0 running on mainnet before the end of the year; October is a big month for us. They're working hard at this, and we will continue adding resources as a project to make sure something comes out here, because there's so much low-hanging fruit for microtransactions and payments, and there's a lot of smart contract work that can be done in these subnets that I think would reduce bloat on-chain. Second, you can do computation off-chain, and that's already happening: SundaeSwap is doing it, other people are doing it, there's something called EigenLayer, and there are dozens of projects figuring out ways to offload computation. There's a really cool paper called ACE, which stands for asynchronous contract execution, written out of ETH Zurich, a very nice university (Einstein went there). Basically, asynchronous contract execution is about how you designate people off-chain to do some computing and then return it, such that you're able to prove the computing was done correctly given some sort of m-of-n trust assumption. Long-term, I think this is going to become one of the largest areas of research and thought, and it will create marketplaces for off-chain computing. Hydra is a very specific design, with a design surface for microtransactions and specific contract patterns; this is a more general design, and its design surface is much more about picking a group of people you trust, m of n, and there are many different approaches. These map incredibly well to the delegated SPO model, because again SPOs can be service providers for these types of things, and they map incredibly well to the UTXO model, because it's local and deterministic: it's significantly easier to build the necessary proofs and verify that the computation was done correctly, with or without the network, since you don't require global state or synchronization. Finally, side chains, where we have a huge advantage.
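The m-of-n trust assumption for off-chain computation described above can be sketched simply: send the same job to n executors and accept a result only if at least m of them agree. The quorum logic below is a generic illustration (it is not the ACE protocol, which uses cryptographic techniques rather than naive voting):

```python
from collections import Counter

def accept_result(results, m):
    """Accept an off-chain computation result only if at least m of the n
    executors returned the same answer; otherwise reject (return None)."""
    winner, votes = Counter(results).most_common(1)[0]
    return winner if votes >= m else None

# 5 executors, 2 of them faulty or malicious; require m = 3 agreement.
results = [42, 42, 42, 7, 99]
print(accept_result(results, m=3))   # 42
print(accept_result(results, m=4))   # None: quorum not reached
```

Choosing m trades liveness against safety: a higher m tolerates fewer crashed executors but requires more collusion to sneak a wrong answer through, which is the knob these off-chain marketplaces tune.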
What Ouroboros does is build a very strong root of trust. You end up with thousands of SPOs, in this case about 3,000, and that's only going to increase. Once you have that, you can sort them (it's called cryptographic sortition), pick a subset, call it S, of the SPOs, and use it to bootstrap a side chain; and this is rotating, so you'll have a different subset every epoch. You can bootstrap a side chain that's fast, really fast, because you can use high-performance BFT protocols. This is what projects like Harmony One with RapidChain, or others in the BFT space like Algorand, do; the advantages are fast finality and high performance. It's something the EOS community even tried to chase, but they didn't really have the theoretical underpinning, because you have to start with a strong root of trust first, then use that to create a subset, then use the subset to create a side chain; they did things in a very odd way. So you have the Ouroboros main chain, and then these side chains: Catalyst, the EVM, these types of things. Each of those chains can run protocols that are highly optimized and extremely fast with a large amount of throughput, so you potentially get thousands of TPS, because you have a permissioned BFT protocol where the permissioning comes from the main chain; it's still decentralized, actually, but it behaves like a permissioned protocol. And because you have a well-known, fixed quorum, you can optimize the hell out of the network stack, which means much better propagation. With high-performance protocols and heavily optimized network stacks, you can achieve very high throughput. This is really what we were talking about in 2016, when I wrote the Cardano white paper and discussed the difference between the settlement layer and the computation layers.
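The stake-weighted sortition described above, picking a rotating subset S of SPOs per epoch, can be sketched as follows. This is a simplified illustration: hashing the epoch number stands in for the verifiable randomness a real protocol would derive on-chain:

```python
import hashlib
import random

def select_committee(spos, stakes, epoch, k):
    """Pick k SPOs, weighted by stake, from epoch-derived randomness.
    Every honest node derives the same committee from the same epoch seed."""
    seed = hashlib.sha256(f"epoch-{epoch}".encode()).digest()
    rng = random.Random(seed)
    pool, weights, chosen = list(spos), list(stakes), []
    for _ in range(k):
        pick = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(pick))     # sample without replacement
        weights.pop(pick)
    return chosen

spos = [f"pool{i}" for i in range(10)]
stakes = [5, 1, 1, 1, 1, 1, 1, 1, 1, 1]
c1 = select_committee(spos, stakes, epoch=300, k=3)
c2 = select_committee(spos, stakes, epoch=300, k=3)
c3 = select_committee(spos, stakes, epoch=301, k=3)
print(c1 == c2)   # True: deterministic within an epoch
print(len(c3))    # 3: a fresh committee next epoch
```

Determinism within an epoch is the point: everyone agrees on who the side chain's quorum is without any extra coordination, and rotation each epoch prevents any fixed subset from being captured.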
By the way, you'd also see "Cardano CL", what I call the application domain; that's what I was talking about here. If you read the original white paper, that was the Cardano computation layer. You can have many of them, and they're just subsets of the SPOs: S1 here, S2 here, S3 here, easy to select as needed. This allows a very vibrant, beautiful ecosystem of chains, all backed by very strong theory and very strong engineering practice. This is actually the direction Ethereum 2 is chasing with their beacon chain idea and the sharded chains: they're effectively creating a strong root of trust at the base and then leveraging that root of trust to create high-performance BFT chains. We support that, and we wrote protocols specifically for this, two of them: the Ouroboros BFT protocol, which can be highly optimized, and "Proof-of-Stake Sidechains", that's Peter Gaži's work, back in 2018 or 2019. So those weren't just papers for papers' sake; they're real things with real consequences for this system, and they mapped out the entire theory, so we can speak with certainty about things; we're not just talking out of our ass. We wrote papers, they went through the peer review process, they were challenged; people who don't work for us, independent members of the academic community with decades of experience, looked at these things and said there's novelty there. And then we actually implemented them: we implemented OBFT in the Byron reboot, and if you look at the Mamba project, we re-implemented Ouroboros BFT for the Scala EVM code we inherited from our work on the ETC project; it's a long legacy, many years of effort, and you can see the incredible performance these chains have as we ratchet up the complexity. This is in scope for 2022, and through partners we're going to see what we can do with EigenLayer and these other ideas.
And it would be something very magical for the community to pick up that middle part; that's something that will continue throughout the years because it's so valuable. It is not a clear distinction, either: the things in Hydra connect to it, the things in the side chains connect to it, they're all kind of together. So that's the off-chain concept. Now, you'll see that we're putting together many micro-workshops and summits for specific things, for example DEXes: every single DEX is building something like this in their own way, so these workshops bring them together and have them talk to each other, so we can all learn together and look at common patterns. Same concept with the Cardano DeFi Alliance. As we do these workshops, we're going to learn a lot, and some of what the workshops request will be for us to ratchet up the expressiveness of the system, so new CIPs will come; there are already three CIPs, as I mentioned, slated for June, that add things like read-only UTXOs and so forth, and they're going to come in to help with that. New cryptographic primitives, new technologies, and other things will enable more efficient off-chain computing and also improve the trust model so that it gets more and more decentralized very rapidly. It's a high priority: on a quarterly basis a lot of people are going to be meeting, we're going to make sure we meet with the DeFi Alliance on a regular basis, and there's going to be a lot of communication. It's kind of a spiral, frustrating at first because there's a lot to do, but there are actually a lot of places to aggregate: there's a Stack Exchange to ask questions, and there's a great developer Discord with over 11,000 members, which is only going to continue to grow. So it's a very big, very nice community with a lot of stuff going on, and heavy investments will be
continuously made this year by us, the Foundation, Emurgo, dcSpark, and the dapps in the ecosystem. And if you are a community member asking "what can I do?": you should demand that the people building on Cardano contribute to alliances, contribute to the documentation, attend workshops, open-source their code at some reasonable point, share ideas, and work with everybody on how we can build out that middle.

Now, there are some other things to do. For example, the things that go into the blocks themselves can be optimized, so things like script compression; that's coming as well. We were originally going to put it in with the original hard fork, but the Haskell libraries were not efficient enough, so we paid Galois, and they came in and improved that by a considerable margin; that's going to be coming in February. Batching, rollups, these types of optimizations: the things that physically go into the blocks can be optimized as well, so that's in scope. People are working on these types of things; ideas are being proposed and discussed. Pipelining can always be improved, and there's always a process to amortize computation, doing that with the epoch boundary blocks, for example, with 1.33, and more can be done. The first generation of input endorsers will be quite efficient, but there's a lot of room there to continue to improve, and the same goes for network optimizations; that's happening. Then, for the things that happen off chain, you have three clear paths that are going to continue to optimize and get better. Hydra alone, for the microtransaction world, can move us into the millions of transactions per second. These kinds of things are going to be built just as much by the dapp ecosystem as by the core custodians of the project. And sidechains are an endless river, because you're only constrained by the hardware of the SPOs: the more sidechains, the more profit they make, the better the hardware they get. This is a very scalable model. Okay.
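The batching point can be made concrete with a back-of-the-envelope sketch. This is a toy model, not Cardano's actual fee code: the constants `feeConstant` and `feePerByte` and the byte sizes below are illustrative assumptions chosen for the example, picked only to show why folding many small payments into one transaction amortizes the fixed per-transaction overhead.

```haskell
-- Toy linear fee model: fee = constant + perByte * size.
-- The constants are illustrative assumptions, not real protocol parameters.
feeConstant, feePerByte :: Int
feeConstant = 155000   -- fixed overhead charged per transaction
feePerByte  = 44       -- marginal cost per byte

fee :: Int -> Int
fee sizeBytes = feeConstant + feePerByte * sizeBytes

-- Hypothetical sizes: a standalone payment tx vs. one extra output in a batch.
standaloneTxBytes, baseTxBytes, extraOutputBytes :: Int
standaloneTxBytes = 300
baseTxBytes       = 260
extraOutputBytes  = 40

-- n separate transactions, one per payment
naiveCost :: Int -> Int
naiveCost n = n * fee standaloneTxBytes

-- one transaction carrying n payment outputs
batchedCost :: Int -> Int
batchedCost n = fee (baseTxBytes + n * extraOutputBytes)
```

Under these assumptions, 100 standalone payments cost 100 * 168,200 = 16,820,000 units, while one batched transaction with 100 outputs costs fee (260 + 100 * 40) = 342,440: the fixed overhead is paid once instead of 100 times, which is the whole amortization argument in miniature.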
Let's talk about developer experience now; that matters a lot. It's a lot to ask to go into this world of functional programming. It's happening more and more because of multi-threading and the need for quality, but you need some imperative programming too. I started this by saying, hey, I believe in a multi-model world, and that's why we have the sidechains. There are already people, like dcSpark for example, who have created an EVM sidechain. Today you can use it, play around with it, do stuff on it, deploy applications on it. So if you live in this world and you want to use the Ethereum tooling, you have a path to do that better, faster, and cheaper than pretty much any other blockchain, or at least comparable with the third-generation chains that are barking around on the market. This functional programming model is new. Functional programming itself is very old, but extended UTXO is new, and this is hard for the moment, and the way we make it easier is by making it easier together: we work with the alliances, we work with each other, we use the CIP process, we divide and conquer, and we decide how we write things, for example the PAB. People keep asking "when PAB?" It's the stupidest [ __ ] question in the world; it's like asking "when JVM?", guys. It's here, and it has many components to it, so really it's a question of which part of the PAB you need for your application. In some cases those parts are already built; in other cases they're still under development. Some people can already use it; some people aren't using it at all in deploying applications. Furthermore, PAB diversity is coming. We're likely going to take components of the PAB and rewrite those components in TypeScript, so they're JavaScript-native, and write lots of bindings to make it very easy for people to use this in a web setting and orientation. We're building a light wallet ourselves, and it's very important that these things work in the browser. We've already successfully compiled parts of the PAB with
GHCJS into JavaScript, so you can already use those parts. So it's not "when PAB?"; it's "what part of the PAB do you need?", and over time that grows and grows in utility, and there are going to be many conversations about this. So the dev experience is a combination of tutorials, software development kits, video content explaining how to do things, documentation, good interfaces, tooling, and so forth; there's more and more and more, but really all of these things come from communication and cooperation. Now, we understand that it's a competitive disadvantage to say, well, we have to spend twelve months or whatever working together to improve the experience for this particular model, even though that has huge advantages, from scale to quality, and that's completely fair, which is why we have the sidechain model and why we support the EVM. So if you want to build now, go talk to dcSpark, become a partner of theirs, and start building there; start testing things on the Mamba sidechains, which are imminent for the betas. What you're using there is Solidity; what you're using is Ethereum tooling, which is quite mature at this point and has years of history behind it. So if there's an immediate commercial need, this is the way to go, and you accept all of the trade-offs that the Ethereum ecosystem has. If, instead, you want to be a pioneer (which is why we call it Plutus Pioneer) and pioneer a completely new model, then long term it has enormous advantages: in our ability to do things off chain with Hydra and other off-chain technologies and deploy sidechains, and in our ability to verify things and make sure they're correct, so you don't end up as part of those 10.5 billion dollars of hacked DeFi that keeps getting hacked again and again and again. There's also deterministic cost, and by the way, long term you're probably going to end up with the cheapest programming model in terms of operating cost. Then you can be a pioneer
here, and you can work with us, the DeFi Alliance, and the dapp outreach programs, and there are a lot of people: 11,000 people in the Discord, plus a Stack Exchange. It's a challenge, it's hard, but developing for the iPhone when it first came out was hard; developing for Android devices when they first came out was hard. That didn't invalidate the model or say these models were broken or bad; it just meant you had to be patient with the 1.0. There's a lot to learn and a lot to do. And to the rest of the industry, especially the Bitcoin maximalist guys: if you are ever going to have smart contracts, this is the way, and you should be looking at Cardano as the beta test for Bitcoin. You're welcome. But they're so self-righteous and so stuck up their own asses that they think everything is just their world. They're blind to the fact that the only way to make Bitcoin truly support smart contracts is either to outsource it to some other layer of the system, in which case you've completely escaped the entire trust model that Bitcoin is built on, or to extend Bitcoin to have extended UTXO. So every single thing we're doing here, these development patterns, these kits, these canonical ways of doing things, these interfaces, all of these things are basically trailblazing a model, just like Bitcoin trailblazed the model for us. And if you want to be a participant in something like that, that's very exciting: you'll learn a lot, there's a lot to build, it's a very fertile time, you'll get a great network effect, and long term the advantages are clear. You'll be able to scale to high throughput; you're going to be building on bedrock, on granite, and the things you build will last a long time. If you're impatient, there are other options available to you, and those options will become more and more attractive month after month, and they are always there. And then what will happen is a polyglot model where, in the future, applications will use both: they'll use stuff in the functional realm, stuff in
the imperative world; they'll use stuff on chain, on chain, and off chain. Okay, on chain, on chain, and off chain: what the hell do I mean by two on-chains? Well, this is Cardano plus its sidechains, but then "on chain" could also be another network like Ethereum; it could be Bitcoin itself; it could be anything. And then obviously there's all the off-chain stuff we've talked about. So a dapp will end up looking like this: all of these things together. That's the future. So why are we fighting each other? The things here could bring just as much economic value, and the things over here could bring just as much economic value, and if you're an SPO, you could be routing all of these things and making money from all of them. So why in God's name are we maximalists? We're all good for each other. There's lots of wrapped Bitcoin on Ethereum right now; think about it, think it through, really put that out there and ask yourself: what are we doing? So I hope this tutorial gives you guys a more concrete understanding of the ecosystem. There's a lot going on here, a lot of thought here, a lot of moving pieces here, and this was not just some "well, we'll just talk about it for a while." There are 600 people at just my company, and well over a hundred of those 600 are engineers specifically devoted to this stuff, and then, as I mentioned, there are more than a dozen companies doing stuff in this stack, everything from formal verification to building pieces of infrastructure. And we're getting to a point where, if it's a matter of cutting a check, we'll cut the check to accelerate things, because there are certain things that really do need to be in the stack, and I want them there; we want them there. And it's not an academic concern: we did the research, the researchers delivered, and the theory is clear. We understand this model works, we understand how it works, and we know why this model is going to grow. The community continues to grow; the community continues to be solid and stable and
capable, and there are so many areas for optimization, so many areas for collaboration, so many areas for the dapp and DeFi ecosystem to contribute back and help grow and evolve the system, and we're doing that together. Every indication we have from the conversations of the last six months is that the overwhelming majority of people are patient, systematic, and excited about being pioneers and building out this new model, and those who aren't have options. It can't be everything to everyone, but at the end of the day, this is the model we're going for: on chain, on chain, off chain. It's going to happen one way or another, and we have science on our side. We have design principles that have come from decades of hard work; we have a legacy that connects to academics, some of whom are in their 60s and witnessed this stuff back in the 70s, and they're kind of walking through it with us. So this is it; we're getting it done. This is the dapp and DeFi model. I hope it really gives you guys a little bit of clarity and a little better understanding of why we did things the way we did. It's always easy to start with high expressiveness and centralization; it's a lot harder to start with low expressiveness and high decentralization and then gradually ratchet up the expressiveness. It takes longer, there's a lot more to do, and there are a lot more debates, because you're really drilling into very specific details, but when you do this you preserve the most important thing, which is censorship resistance and decentralization. Or else why the [ __ ] are we even doing this as an industry? Why aren't we all just on Amazon Web Services? If you don't care about these properties, we've achieved nothing as an industry if we don't have decentralization. And then, in terms of expressiveness, if you start to open, you get 10.
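To make the extended-UTXO model discussed above a bit more concrete, here is a toy sketch in plain Haskell. It deliberately avoids the real Plutus libraries: the types `Datum`, `Redeemer`, and `ScriptContext` and the `vestingValidator` function below are simplified stand-ins invented for illustration, not the actual `plutus-ledger-api` types. What it demonstrates is the determinism claim: a validator is a pure predicate over the datum carried by the locked output, the redeemer supplied by the spender, and the transaction context.

```haskell
-- Toy stand-ins for the extended-UTXO validation triple.
-- Real Plutus validators use richer types from plutus-ledger-api;
-- these simplified versions are for illustration only.
newtype Datum    = Datum    { unlockAfter :: Integer } -- state carried by the locked output
newtype Redeemer = Redeemer { claimant    :: String }  -- argument supplied by the spender
data ScriptContext = ScriptContext
  { txValidFrom :: Integer  -- lower bound of the tx validity interval
  , txSignedBy  :: [String] -- signers attached to the transaction
  }

-- A vesting-style validator: funds unlock for the named beneficiary
-- once the transaction's validity interval starts after the deadline.
vestingValidator :: String -> Datum -> Redeemer -> ScriptContext -> Bool
vestingValidator beneficiary d r ctx =
     claimant r == beneficiary
  && claimant r `elem` txSignedBy ctx
  && txValidFrom ctx >= unlockAfter d
```

Because the predicate is pure, evaluating it locally against the datum, redeemer, and intended context gives exactly the answer the chain will give; there is no global mutable state that can shift between local evaluation and on-chain execution, which is where the deterministic-cost property mentioned earlier comes from.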
