
Summary

  • Charles Hoskinson discusses the concept of verified tweets and access control in social media platforms, particularly Twitter (X).
  • Access control is crucial in cybersecurity, involving various authentication methods, including one-factor to three-factor authentication.
  • The idea of verified tweets involves registering a decentralized identifier (DID) during account creation, enhancing security and user experience.
  • Verified tweets would be signed with a user's key, making it difficult for unauthorized users to impersonate them, even if they gain access to the account.
  • The DID standard is a W3C standard that can be stored on blockchains like Bitcoin, Ethereum, or Cardano, promoting self-sovereign identity (SSI).
  • Hoskinson suggests implementing veracity bonds, where funds are attached to information packages, enhancing trustworthiness and accountability.
  • He emphasizes the need for social media platforms to adopt cryptographic credentials to combat issues like bots and deep fakes.
  • The concept of a verification marketplace could incentivize users to obtain verified DIDs, creating a more trustworthy social media environment.
  • Hoskinson highlights ongoing projects in the Cardano ecosystem, such as Iagon, and broader initiatives like IPFS and Filecoin, aimed at improving decentralized data management.
  • He expresses a desire to build a social network that incorporates these principles, acknowledging the complexity involved in decentralizing data.

Full Transcript

Hi, this is Charles Hoskinson broadcasting live from warm, sunny Colorado. Today is January 9th, 2024, and I’m making a video to talk about a topic I discussed maybe a year or two ago. I can't recall exactly when, but I originally addressed Jack Dorsey, so it was probably two years ago. I talked about Twitter 2.0 and the kinds of things one could do with blockchain technology, the cryptocurrency industry, and decentralized social networks.

One of the ideas that came up was the concept of verified tweets. So, I wanted to make a quick video about this concept and discuss it further. Without further ado, let me share my screen. Alright, here we go. Verified tweets.

Normally, the account flow of Twitter, or X, or whatever they call it these days, starts with a user—let's call this user Bob. When Bob logs into Twitter, the first step is access control. Access control allows the user to have access to their account. The concept of access control is broad in cybersecurity and information security, with thousands of nuances and interesting aspects. For example, is it a single-user or multi-user system?

There are policies that dictate the access control flow. If Bob signs in versus Alice signing in—maybe Alice is a social media expert for Bob, who is a celebrity—when she signs in, she can tweet but can’t read the private messages. Various login systems have different access control systems connected to either one or many users, with policies behind what those users can do. Typically, you have one-factor to three-factor authentication. You can technically have more than that, but three-factor is seldom seen.
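As a rough illustration, a multi-user policy like the Bob/Alice one can be modeled as a simple permission table. The user names and action names below are hypothetical placeholders, not anything Twitter actually exposes:

```python
# Toy role-based access control: the policy maps each user of an
# account to the set of actions they may perform on it.
POLICY = {
    "bob":   {"tweet", "read_dms", "change_settings"},  # account owner
    "alice": {"tweet"},                                 # social media manager
}

def is_allowed(user: str, action: str) -> bool:
    """Return True if the policy grants `user` the given `action`."""
    return action in POLICY.get(user, set())

print(is_allowed("alice", "tweet"))      # True
print(is_allowed("alice", "read_dms"))   # False
```

Unknown users fall through to an empty permission set, so the default is deny.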

Generally speaking, one factor is a password; two-factor is usually a password plus some mechanism, like Google Authenticator or a text message; and three-factor is typically used in high-security situations. For example, a SCIF holds classified information and usually requires three factors: what you know (a password), what you have (a CAC card), and what you are (biometric authentication). You have many options for biometrics, such as palm scans, iris scans, fingerprints, or some combination of those. They have sophisticated scanners, so even if your password gets compromised or someone steals your access card, you still need all three of these things to gain access. Most people are moving to two-factor authentication, and the highest level of security involves a piece of hardware.

Old methods, like just using a password, are often the worst way of implementing access control. You can also use other forms of access control. I’m a huge fan of the web of trust, which is a challenge-response system. You have a public key and a corresponding private key. When you create an account, you register your public key.

The server, which holds Bob's account, sends him a challenge: it encrypts something with his public key, and the only way to decrypt it is with a copy of the private key. Bob then sends back the plaintext message. There are many ways to implement these challenge-response protocols, and this is a simplification, but each requires access to a public-private key pair. This is a strong method and much easier in practice, because you generally just enter a PIN code and it's instantaneous.
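To make the flow concrete, here is a minimal sketch of that challenge-response using textbook RSA with tiny primes. This is an insecure toy, chosen only because it fits in a few lines; a real system would use a vetted library and modern key sizes:

```python
import secrets

# Textbook RSA with tiny primes (INSECURE, illustration only).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def server_make_challenge():
    """Server encrypts a fresh random challenge with Bob's public key."""
    challenge = secrets.randbelow(n)          # generated on the fly per login
    return challenge, pow(challenge, e, n)    # keep plaintext, send ciphertext

def client_respond(ciphertext):
    """Only the holder of the private key d can recover the challenge."""
    return pow(ciphertext, d, n)

challenge, ciphertext = server_make_challenge()
assert client_respond(ciphertext) == challenge   # login succeeds
```

Because each challenge is random and single-use, replaying an old response does the attacker no good.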

You never need to remember a password, and you can layer it with other factors. For example, you can combine a U2F hardware key with biometric authentication, such as a fingerprint plus the key. This would be nearly unhackable because the probability of all three factors being compromised is very low, which is why it's the standard for SCIFs. Access control is powerful and important. Next is entry with policy.

This means that your view of the user experience, the cloud product, and the user interface follows the policy. For example, in a multi-user access scenario, Alice sees the ability to tweet but can’t click the button to see private messages, while Bob gets the full view. An obvious question arises: what happens when someone gains access to your account and is not a legitimate user? Let’s say we have a hacker, whom we’ll call James. If James bypasses access control and gains access to your account, he can tweet unauthorized things.

We might see those tweets and think, “Oh, Bob is acting strangely.” James has successfully caused chaos. The concept of a verified tweet is that during account creation, when Bob creates his account, he registers a decentralized identifier (DID). This comes from the self-sovereign identity (SSI) world, and the DID is a W3C standard. The whole space has moved in this direction; it’s a way of organizing and managing identity.

When you register a DID, you can verify that DID and its real-life human identity. You can also add cryptographic credentials, such as the X.509 standard, PGP, or a public key from a crypto system like Bitcoin’s secp256k1 elliptic curve or the Twisted Edwards curves we use in Cardano. Once you’ve registered and verified that DID, you get two things instantly. First, you never need a password again because you can use a challenge-response protocol.
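For reference, a registered DID resolves to a DID document listing the cryptographic credentials tied to it. The sketch below follows the shape of the examples in the W3C DID Core specification; the identifier and key value are placeholders, not real data:

```python
import json

# Minimal DID document in the shape used by W3C DID Core examples.
did = "did:example:bob123"          # placeholder identifier
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": "zPLACEHOLDERPUBLICKEY",  # placeholder key
    }],
    # Keys the platform should accept for login challenge-response:
    "authentication": [f"{did}#key-1"],
}
print(json.dumps(did_document, indent=2))
```

A platform that stored this document at account creation could later fetch `authentication` keys for login and `verificationMethod` keys for checking signed tweets.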

The user experience is as simple as tapping whatever hardware device or PGP key you’re using to manage your key system. It’s similar to signing a cryptocurrency transaction and is significantly more secure because these are one-time challenge responses sent over an encrypted channel. They’re fresh and generated on the fly based on a login request. Only if the person has your key can they log in, and the probability of that happening is very low. The second thing is that if James somehow gains access to your account—let’s say he’s an insider at Twitter, which actually happened when Bill Gates, Bill Clinton, and Biden’s accounts were compromised—there’s now a concept of verified and unverified tweets.

A verified tweet is one that has been signed by the key. Even if James is an insider, he’s unlikely to have a copy of Bob’s key. By signing with Bob’s key, you create a user experience where the tweet visually looks different and has authentication. This is similar to what you see with secure connections. For example, when you go to Twitter, it says the connection is secure.

When you click the lock, you can see the certificate information that shows it’s the real Twitter. The organization name is twitter.com, and you can see the entire certificate trail. When you subscribe to people in this system, you would be adding Bob’s DID to your list. The software would automatically get Bob’s cryptographic credentials, his public key.

Whenever Bob tweets, your client-side application can verify that tweet is legitimate. Now, if James tweets on behalf of Bob, he won’t be able to replicate the signature. It will appear differently in the GUI and indicate that it’s unsigned or unverified. This is important because, in organizations, especially when they do official communications, every communication should have a property called non-repudiation. This means you can’t claim it wasn’t you.

They use signatures to achieve non-repudiation and verify the integrity of the message. You hash the message and sign the hash, including it in the metadata. Anyone who wants to verify can hash the message and check the signature against the signature scheme and their list of people. They can verify that it was signed by Bob and that it’s the appropriate message. This tamper-evidence is the integrity guarantee of the signature scheme.
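A minimal sketch of that hash-then-sign flow, again using insecure textbook RSA purely for illustration: Bob signs the SHA-256 hash of his tweet, and any subscriber holding his public key (n, e) can check it:

```python
import hashlib

# Textbook RSA with small primes (INSECURE, illustration only).
p, q = 1_000_003, 1_000_033
n, e = p * q, 65537                     # Bob's public key
d = pow(e, -1, (p - 1) * (q - 1))       # Bob's private signing key

def digest(msg: str) -> int:
    """SHA-256 hash of the message, reduced into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

def sign(msg: str) -> int:
    """Client-side: sign the hash with Bob's private key."""
    return pow(digest(msg), d, n)

def verify(msg: str, sig: int) -> bool:
    """Subscriber-side: check the signature with Bob's public key."""
    return pow(sig, e, n) == digest(msg)

tweet = "gm from Colorado"
sig = sign(tweet)
print(verify(tweet, sig))                 # True  -> rendered as "verified"
print(verify("gm from Colorado!", sig))   # False -> rendered as "unverified"
```

The client application would run `verify` on every incoming tweet and render the verified/unverified badge accordingly; an insider like James, lacking `d`, cannot produce a signature that passes.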

This system would solve the problem because even if someone got access to the account, they likely wouldn’t have access to the keys, which would probably be stored in hardware. Let’s say Alice doesn’t have the ability to sign tweets or signs tweets under a different key. This way, you can see who sent particular tweets, creating a significantly more usable and interesting system. It really does solve these types of problems and embraces the W3C standards our industry has created. The DID standard can even be stored on a blockchain.

For example, when you verify them, you could put them on the Bitcoin, Ethereum, or Cardano blockchain. There are plenty of SSI vendors that can do these types of things. I’m a huge advocate of this because I think it cleans up the biggest inconvenience of the internet, which is weak access control. It also provides a more nuanced way of handling communications so that you don’t run into escalation of privileges. Even if you have perfect access control, if insiders get compromised, the service itself can impersonate you and communicate on your behalf, leading to scenarios where people think you’re acting strangely.

The onus of security would then be on the user, and it’s up to them to get a DID and program their access control accordingly. A lot of work has to be done in our industry. For example, I’d love an access control DSL to get granular about things and allow people to paint by numbers on how access control will work. Big services like X really shouldn’t still be where they are in 2024; they should embrace these types of cryptographic credentials. Our industry pioneered these technologies, and over 100 million people live in that reality.

If you don’t, you run into situations where all tweets are treated equally, leading to bad tweets that create market disruptions and manipulations. You also open up interesting communication patterns. If someone tweets something interesting, you could have other signers as well. There’s a concept of vouching, which is not just a retweet or a like: you can endorse something, knowing it was endorsed by a trusted agency or person, allowing you to infer a truth metric. Another cool concept is veracity bonds, where you attach funds to a package of information with a truth value.

When people see this, they can ask who’s vouching for it. If there’s money behind it—say a million dollars worth of ADA—if it’s proven false, the money is lost. Would you trust that piece of information? I certainly would trust it more than information people aren’t willing to vouch for. Wouldn’t it be fun if journalists were held to that same standard?
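The veracity-bond idea can be sketched as a tiny state machine: funds are locked against a claim and either returned or forfeited when the claim is resolved. Everything here, including the names and amounts, is hypothetical; a real version would live in a smart contract with an oracle or dispute process deciding the outcome:

```python
# Conceptual sketch of a "veracity bond": funds attached to a claim,
# slashed if the claim is later proven false.
class VeracityBond:
    def __init__(self, claim: str, backer: str, amount_ada: float):
        self.claim = claim
        self.backer = backer
        self.amount_ada = amount_ada
        self.status = "open"            # open -> upheld | slashed

    def resolve(self, proven_true: bool) -> float:
        """Return the funds to the backer if upheld, forfeit them if slashed."""
        self.status = "upheld" if proven_true else "slashed"
        return self.amount_ada if proven_true else 0.0

# Hypothetical example: a million ADA staked on a claim that fails.
bond = VeracityBond("Exchange X is solvent", "some_backer", 1_000_000)
print(bond.resolve(proven_true=False))   # 0.0 -- the bond is forfeited
```

Readers weighing the claim would see the bond size before resolution, which is exactly the trust signal being described.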

When they publish a story, they have to put money behind it, and if the story turns out to be misleading or untrue, the bond gets called, and they lose their money. Now we’re getting into truthiness. These are some foundations of a next-generation social network and the power of verification. I hope Elon Musk sees this video, as well as others in the space who build things like Mastodon and these big social networks. Jack Dorsey, I’d love for you to see some extension to your work to include the DID standard so we can start getting to verified tweets.

This would completely solve the problem of bots. If people impersonate me or others, all their tweets would be marked as unverified. There’s also the issue of deep fakes. This concept can be extended with an origin NFT. Whenever you create content, your content would also create an NFT showcasing the date of creation and the story behind it.

You can use these same principles to trace back to some sort of author, identified by the DID. This sorts out your generative AI issue. If anything doesn’t have an origin NFT, it gets categorized as unsigned or unverified. It’s not necessarily untrue; it could be a copy of something true, but at least you know that nobody vouches for it. You can even have a verification service where people run algorithms to give you a probability of something being fake.

This is similar to a truth market. If a piece of content goes viral, in addition to replies and likes, you could have a verification button. You could donate a microtransaction, a small amount of cryptocurrency, and if enough people click that button, several dollars could go to a verification service to check the tweet and construct a community note. That’s how you create truth markets, using instruments of truth like veracity bonds and non-repudiable signatures. Social media needs this, and if X doesn’t do this, I don’t anticipate they will survive.

They won’t die overnight, but someone who does implement this will have a significant advantage. They’ll be able to combat deep fakes and completely remove bots. For example, you could require a verified DID to post or create segregated user experiences. You could have “proper Twitter” and “no man’s land,” where no man’s land has all the unverified stuff, and proper Twitter has all the verified content. This creates a strong incentive for people to get verified.

You can create verification marketplaces and truth marketplaces, with different protocols that a person uses. Micro-tipping is effective, and maybe in a monthly subscription, you get access to key management services in case you lose your keys. You could also get a certain number of credits for veracity verification as a service. These are the kinds of next-generation business models—kind of web 2.5.

It’s still an application that lives on a server but uses some blockchain concepts and a backend to facilitate upgrades for a better trust model overall. I hope this content was helpful. I really enjoy discussing these topics. I kind of regret not building a social network. One of these days, I think I’m going to have to.

It’s just one of those things where I’ve always been so busy, and the technology to build them is pretty involved. You have to figure out how to decentralize all the data. There are great projects on Cardano, like Iagon, and in the broader ecosystem, like IPFS and Filecoin, looking into these things. It’s going to take some time to work through, but this is one of the killer applications. Projects like the AT Protocol are really starting to take the fediverse to the next level, upgrading and refining what the prior fediverse protocols had and correcting their deficiencies.

I hope this was helpful. It’s been a wild ride today, and just be careful what you read. Cheers!
