
Summary

  • Charles Hoskinson discusses Elon Musk's recent comments about Twitter, now called X, and his desire to change or eliminate Community Notes due to their inconvenient political implications.
  • Community Notes is described as a form of swarm curation, where diverse groups aim to converge on an accurate truth, but the effectiveness can be compromised by bots or conflicts of interest.
  • Hoskinson emphasizes the need for decentralized identifiers (DIDs) and KYC processes to combat bot issues on platforms like X.
  • He praises Grok 3, an advanced AI model that has rapidly developed into a significant player in AI data centers, and discusses its potential for critical thinking and analysis.
  • The concept of prediction markets is introduced as a way to create economic incentives for truth, allowing participants to stake money on the accuracy of claims.
  • Hoskinson argues that combining economic incentives, AI, and crowd-sourced intelligence can lead to a more accurate understanding of reality.
  • He critiques the current state of social media discourse, noting a decline in quality and an increase in polarization and bot activity.
  • The importance of creating a truth-oriented algorithm for Community Notes is highlighted, suggesting that visibility should be based on the relevance and quality of content rather than sensationalism.
  • Hoskinson calls for humility and collaboration in improving Community Notes, stressing that no single solution will suffice and a multimodal approach is necessary for a truth engine.
  • He concludes by advocating for a society that values critical thinking and open dialogue, emphasizing the need for leaders who can accept criticism and foster constructive discourse.

Full Transcript

Hi, this is Charles Hoskinson broadcasting live from warm, sunny Colorado—always warm, always sunny, sometimes Colorado, increasingly more often Wyoming. I wanted to make a video about something that Elon Musk has been talking about. You guys on the left always think that I do nothing but praise him, but every now and then, he says some stupid stuff that impacts all of us. Twitter, now X, is my largest platform. I have about a million people who listen to me on X, though I'm not sure how many are bots.

He told us he got rid of the bots, so there we go. Musk wants to get rid of Community Notes or change them because he seems to not like what's being said there, and it’s inconvenient to the political views he holds. Community Notes is something that Zuckerberg and others have talked about. It’s the idea of swarm curation, where a large group of people comes together, each with different viewpoints and incentives. The hypothesis is that if they’re all looking at the same claim, the aggregate of those people's truth function will converge to something that’s generally accurate.

Many academic papers have been written on swarm curation, discussing whether it’s a good or bad idea. However, it really is a garbage in, garbage out situation. Why? Because if the swarm curating the information is filled with bots or has conflicts, you will not converge to an accurate or objective account. It’s a combination of incentives, quality, and diversity.
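As a rough illustration, the swarm-curation idea described here can be sketched in a few lines. The grouping, threshold, and function names below are illustrative assumptions, not how Community Notes actually computes its ratings:

```python
from collections import defaultdict

def swarm_verdict(ratings, min_per_group=2):
    """Toy swarm-curation rule: accept a verdict only when raters from
    *different* viewpoint groups independently land on the same side.
    `ratings` is a list of (group, score) pairs with scores in [0, 1].
    Returns (cross_group_agreement, mean_of_group_means)."""
    by_group = defaultdict(list)
    for group, score in ratings:
        by_group[group].append(score)
    # Only groups with enough raters count; each contributes its mean.
    means = [sum(s) / len(s) for s in by_group.values() if len(s) >= min_per_group]
    if len(means) < 2:
        # Not enough diversity in the swarm: garbage in, garbage out.
        return False, None
    agree = all(m >= 0.5 for m in means) or all(m < 0.5 for m in means)
    return agree, sum(means) / len(means)

# A politically mixed swarm that still converges on the same verdict:
ok, score = swarm_verdict([("left", 0.8), ("left", 0.7),
                           ("right", 0.9), ("right", 0.6)])
```

A bridging rule like this rewards cross-viewpoint agreement rather than raw vote counts, which is the property that makes a diverse swarm hard to capture for a single faction, or for a botnet sharing one "viewpoint."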

What incentives do they have? Who is in the room, and what are their domains and skills? Alex Pentland writes about this in a book called "Social Physics." There’s also the concept of human or bot resistance. Musk is removing Community Notes because he doesn’t like the fact-checking being done there.

What’s extraordinary to me is that fundamental building blocks are still not fully integrated into X. For example, years ago, I made a video saying you need to use decentralized identifiers (DIDs) as your basis of identity inside the system, along with some form of KYC, usually done with a credit card. That’s how he’s been doing it, but there are many different ways to guarantee that you’ve killed the bots. Then you have Grok, which is extremely advanced. Grok 3 is a technological marvel.

In one year’s time, they went from nothing to having one of the largest AI data centers in the world and a frontier model comparable to the top models from Anthropic and OpenAI. It’s a huge win. You need to tune Grok to take all known sources and have a critical thinking module. What you do is create a long list of queries and thinking structures that allow you to analyze something deeply. Grok has this ability, and it’s easy to create templates for the queries and thinking structures.

This includes the who, what, where, when, and how we all learned when we were kids. Then you go deeper and think about the argumentation structure—whether it’s an inductive or deductive argument, and if there are known fallacies. You can ask fundamental questions like, “What would have to be true for this to be true?” This is crucial when looking at a tweet. If someone has an insane conspiracy theory, the probability of that being true is quite low because a lot of strange things would have to be true for that to be true, which are unlikely events.

An AI can do this instantaneously, and then you have your swarm intelligence. The swarm intelligence augments the AI, so you concatenate them together. You have the Grok view combined with the swarm intelligence, and the aggregate of these two is the Community Note. This is an imperfect system, but it’s a self-improving system. Grok will get more advanced over time, the datasets will get more refined, and the thinking models will be partnership-driven.
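A minimal sketch of the two-stage pipeline described above, assuming illustrative template questions, scores in [0, 1], and an arbitrary blending weight (nothing here reflects Grok's real API or X's actual scoring):

```python
# Illustrative critical-thinking templates applied to every claim.
TEMPLATES = [
    "Who is making the claim, and what do they stand to gain?",
    "What, where, and when -- are the basic facts verifiable?",
    "Is the argument inductive or deductive, and are there known fallacies?",
    "What would have to be true for this to be true?",
]

def ai_prior(answer_scores):
    """Average the per-template plausibility scores a model returned
    (passed in directly here; a real system would query the model)."""
    assert len(answer_scores) == len(TEMPLATES)
    return sum(answer_scores) / len(answer_scores)

def community_note_score(ai_score, swarm_score, ai_weight=0.4):
    """Concatenate the AI view with the swarm view into one note score;
    the 0.4 weight is an arbitrary illustrative choice."""
    return ai_weight * ai_score + (1.0 - ai_weight) * swarm_score
```

Because the two signals are blended rather than either one deciding alone, improvements to either side, a stronger model or a cleaner swarm, raise the quality of the combined note, which is the self-improving property the talk describes.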

There’s absolutely no reason for Facebook not to partner with X and take Llama 4 and Grok 3, putting these thinking structures together and agreeing on some baselines about how they’re going to do that. On the Community Note side, you need an economic component. Traditionally, we use prediction markets. Remember when Kamala Harris was running against Trump? The prediction markets, like PolyMarket, said Trump was going to win, while the press said he was going to lose.

PolyMarket was right. Money is on the line, so you have to ask the question: What is the economic value of objective reality for this viewpoint? That’s what the prediction market is trying to figure out. The prediction market helps create economic agency. For example, there was a big scandal with Libra, centered around a gentleman named Hayden Davis.

I saw reports yesterday that he killed himself, but they were unconfirmed. Is it true? I don’t know. It would be nice if we had a process where we could create a bounty for knowing if this is true or not. It’s pretty grim and dark, but it’s relevant.

Markets can form for the truth, and if you create economic rewards, people will participate, including those directly connected to the facts. This enhances the intelligence of the swarm. You have an economic angle, an AI angle, and a crowdsource angle. These three things together can form an extremely accurate view of reality. The more controversial the issue, the stronger the economic incentives are to figure it out, leading to more data for the AI to analyze, stronger logical structures, and more people participating in the swarm.

If it’s a diverse swarm, you’ll get a balanced, objective view of reality. There is no fact-checker in the world that will produce a better outcome than this—absolutely none. You can also put in veracity bonds for a claim, which adds more economic weight. In addition to the prediction market, you can say if it’s a lie, then you lose a bond. When people start making claims, you ask beyond reputation: What’s at stake?
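The prediction-market and veracity-bond mechanics above can be sketched as follows; the class, prices, and payout rule are simplified assumptions, with no real market, oracle, or settlement modeled:

```python
def implied_probability(yes_price):
    """A prediction-market YES share priced at, say, $0.62 implies the
    market collectively assigns about a 62% probability to the claim."""
    assert 0.0 <= yes_price <= 1.0
    return yes_price

class VeracityBond:
    """Toy veracity bond: a claimant locks a stake that is forfeited if
    the claim later resolves false -- money behind 'what's at stake?'"""

    def __init__(self, claimant, claim, stake):
        self.claimant = claimant
        self.claim = claim
        self.stake = stake

    def resolve(self, claim_was_true):
        """Return what the claimant gets back: the full stake if the
        claim held up, nothing if it was a lie."""
        return self.stake if claim_was_true else 0.0
```

The point of the bond is not the payout mechanics but the signal: a claim backed by a large forfeitable stake carries more weight than one backed by reputation alone.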

What do we gain? What do we lose? For example, there’s a person in China who burned $1.3 million of Ether to say that the Chinese government is using brain-computer interfaces to control people. He literally destroyed the Ether just to publish that message.

His willingness to lose $1.3 million just to get our attention shows the seriousness of a whistleblower. It doesn’t mean it’s true; critical thinking has to come in. What would have to be true for this to be true? A lot of facts would have to be true, and we have to do that analysis.

But it creates agency behind that. What did you pay to tell me? What are you willing to lose to tell me? Can we create a marketplace to figure out the truth? Can we create a thinking machine programmed to analyze things at a depth that humans are incapable of?

Can we get a collective intelligence, a swarm intelligence of all the people? When you combine these four things together, you create a perfect truth engine.

Elon does a lot of things I like, and he does a lot of things I really don’t like. He’s not a god; he’s absolutely guilty of Dunning-Kruger, and you see it come up from time to time. He was in the White House the other day complaining about people being 150 years old on Social Security, which to him is obviously corruption.

When you hire people under the age of 20 to be on your Doge team, they probably don’t know anything about mainframes, nor does Elon, because he was born after that time. Many of these COBOL systems created by IBM have a default date of 1875. So, talking out of your ass without really thinking deeply about things can make you look foolish. We have a lot of people in our organization who are older and were pioneers in programming language design, which means we accumulated a lot of expertise in the old ways of doing things, whether it be Ada, COBOL, or Lisp. There’s some familiarity there.

Anyway, that’s a claim, and you have swarm intelligence. The Grok view looked at all the thinking tools and asked what would have to be true for this to be true, revealing many uncomfortable things. In the prediction market, how confident are you, and how much money are you willing to put on it? That’s a claim that Elon made. He put his reputation on the line, but is he willing to burn a million dollars if he’s wrong?

What about $10 million? What about $100 million? How much money before you really start taking it seriously? This is how you get to a true society. No one thing solves the problem; no one thing can get you where you need to go.

It’s a combination of things, a multimodal approach, and then you can get to a truth engine for a system. That’s how you fix Community Notes. I do not know if Elon is fully committed to free speech. I know that he has a worldview and is committed to a free platform for his expression, which I believe is valuable because his expression is one that tens of millions of Americans also believe in. They were systematically deplatformed, and no one seemed to have any issue with that on the left because they disagreed with his speech.

But it’s not the same as free speech; that’s for everyone. You have to be willing to have some rules behind it. It comes down to this type of weighting. Once you’ve done this weighting, you can create a system where the relevance or visibility is weighted by the truthiness of the situation. The more relevant and engaging the content, the more visible it is.

The less relevant, the less visible. This is not censorship. People say, “I’m over here, and nobody gets to see my post; you’ve censored me.” What we call quality curation—high-quality, interesting, engaging, well-thought-out, balanced information—is more visible. Your 3:00-in-the-morning post, written high and drunk, about how leprechauns molested you when you were 12 is perhaps less visible.

Maybe that’s the world we want to live in. The challenge is that the algorithms we have today flip the script because that kind of sensational content is a lot more clickable than nuanced geopolitics that have no clear answer. Productive, mature societies live in a truth orientation and an objective reality, while broken societies live in a click orientation. In a nutshell, that’s everything that’s wrong with social media. I’m a very busy man, but one of these days, I’ll fix this because I’m getting tired of it.
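The truth-weighted visibility rule described above could look something like the following, with a floor so that down-ranked content stays findable rather than censored; the formula and parameters are illustrative assumptions, not any platform's actual algorithm:

```python
def visibility(quality, truth_score, floor=0.05):
    """Toy ranking weight: visibility scales with content quality and the
    truth-oriented score, but never hits zero -- down-ranked posts stay
    findable rather than removed. All parameters are illustrative."""
    assert 0.0 <= quality <= 1.0 and 0.0 <= truth_score <= 1.0
    return max(floor, quality * truth_score)
```

A click-oriented algorithm effectively replaces `truth_score` with engagement, which is why sensational content outranks nuanced content today; a truth orientation just changes which factor the ranking multiplies by.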

I’m getting tired of people looking for saviors to come in and fix everything. Elon Musk is not a savior; he’s a brilliant entrepreneur who does many incredible things, whether it be Neuralink restoring function for the crippled or soon to be the blind, getting us to Mars, or inventing amazing AI technology in both computer vision and large language models. In no way does that excuse poor behavior, Dunning-Kruger, offensive statements, or attitude. You have to look at people in a nuanced and balanced way. Some days, I wake up and say he’s a great man, and other days, I wake up and think, “Boy, I wish he didn’t do or say that.”

I’m willing to wager that some of the people listening to this video have the same opinion about me. The first step in gaining wisdom is to admit and acknowledge imperfection in oneself. Community Notes, built the right way, are an essential building block toward an algorithm that is truth-oriented, enforcing conversations in a way where the economics favor objectivity, discovery, and insight, as opposed to the base natures of mankind. What we’ve seen with long-form podcasts like Joe Rogan’s and Lex Fridman’s is that they are rewarded because there’s a hunger in society for real conversation. Some people on the left attack these podcasts as alt-right propaganda or evil, but at the end of the day, all they’re really doing is having a conversation.

If we had a truth engine, you’d be able to put it into that conversation, and for real people, the quality would radically go up. For propagandists, the quality would go down because they’d realize they’ve been discovered. We have to get to a point where we build that. I just gave you a blueprint for it. It’s not that hard, and we don’t need Grok for it.

Llama can do it, and many other technologies can do it. The Allen Institute is building a completely open AI system that will be powerful enough in three to five years to run locally with all of those thinking tools and have enough ability to search the web to sort everything out. Mainstream media is also terribly corrupt; we all acknowledge that. But that doesn’t mean every single journalist is evil or wants to write propaganda. Absolutely, they say stupid things.

Yesterday, I saw journalists complaining about Elon Musk’s executive protection detail being deputized as U.S. Marshals. Any semblance of critical thinking would show how stupid that criticism is. You’re taking former Special Forces operators and police officers—some of the best in the world—and giving them credentials to protect a man who faces daily assassination threats and is in possession of some of our nation’s most sensitive secrets.

It is an enormous inconvenience for them to disarm. Is there a problem with the people getting the credentials? We trusted them to be Navy SEALs, Green Berets, Rangers, and SWAT police officers, so obviously not. Is there a problem that Musk needs to be protected? Obviously not, and he’s paying for this out of his own pocket.

Would you rather have U.S. taxpayers pay $7.5 million a year for people to protect Musk instead of him paying for it himself? Why are we criticizing Trump for deputizing these people as U.S. Marshals and saying it’s a coup? I literally saw a tweet yesterday; it’s a talking point, propaganda, vacuous, with no real critical-thinking value. It’s a complaint for the sake of a complaint, a talking point to push a political agenda. We see it all the time on the right, on the left, and everywhere in between.

I’m just tired of it. I’m fatigued. I’m drained. There’s been so much of it for so long that people don’t even know why they’re angry anymore. They don’t know what they should feel about any one thing.

It’s so weird and out there. You wait for some leader to show up and tell you what to do because you’re drained and don’t want to put the time in. It’s not a society that’s going to get ahead, especially one that has tools as powerful as the ones we have. The emergence of quantum computing, synthetic biology, and AI is going to change everything. Never has there been a time when the human race has had more capability and less wisdom.

We have to return to wisdom, and the first step in that is an addiction to the truth and an embracing of the processes of the truth. We need the humility to admit that sometimes we don’t know everything. That’s why the process will get us there. It’s why I studied mathematics when I was younger. Every mathematician is addicted to not only the truth but the rigor to prove the truth beyond a reasonable doubt for everyone to agree.

It’s a very high bar, a very high standard. We can’t get there deterministically, but we can get there probabilistically as a society with AI, where every year we get better at it. So please, don’t get rid of Community Notes; just fix them. If you don’t know how to fix them, have the humility to admit it and invite people in who do. It’s a very necessary thing.

If you keep doing it, at some point, I’m going to leave X because there’s no point in being on a platform where, as I’ve noticed over the last two years, the quality of discourse has gone down. Now, in every comment thread, there are dozens of bots replying, radical polarization—either radical love or radical hate—regardless of the issue, and absolutely nothing meaningful to engage with. I see news come through X, and the news that comes through is hard to verify. We should all know if Hayden is dead or not; it should be pretty straightforward. There have been great wins, like the Trump assassination attempt, where we actually got to see the real story, not what the media tried to cram down our throats.

The first headlines from CNN said Trump collapsed at a rally. Everybody knew he’d been shot; they heard the gunshots, the crowd was screaming, and he was bleeding from the ear. That’s the headline they ran with, and the president wouldn’t even commit to talking about that as an assassination attempt when his own Secret Service told him before he made that speech. Where’s the truth to power there? It’s propaganda.

Just because they’re doing that, should X also do that? Should whatever the White House’s talking points are on the Ukraine conflict be the only thing allowed to be discussed on X? How are we any better than the platforms we sought to replace? No, Bluesky is not the answer because their moderation censors and removes anything that’s not left-wing. It’s about visibility, and you need a metric based on objective reality for what is visible and what’s not.

You don’t make it totally invisible; if people want to find it, they can find it. At the end of the day, these engines or curation machines should bring you the collective reality of what’s going on and help you navigate that and connect to these ideas. The builders of the platforms have a responsibility to build them in a way that increases the collective intelligence, awareness, and empathy of everyone. You don’t want to build them in a way that’s always declining. That’s why I’m a crypto guy, and I call it how I see it.

I can criticize people one day and praise them the next. That’s why I said Elon was brilliant with Grok; it’s incredible. Now, I’m making this video saying what he’s doing with Community Notes is stupid. I’m a free-thinking, independent, objective person. If speaking my mind ever costs me access or commercial opportunities, that’s an indication of how sick society is.

People are so petty that the minute you don’t support them 100% of the time, you can no longer talk to them, interface with them, or do business with them. We should never have any of those people as leaders in our society. We can’t move forward if that’s the case. People have to have the freedom to disagree with each other, hold each other accountable, and point out when stupid things are said. Plenty of people have criticized me in Cardano; believe me, there’s been a ton of it.

They have not lost any political power; in many cases, they’ve gained it as a result of their criticism. Sometimes it’s harsh, absurd, and bizarre, but they’re still around because there’s nothing in Cardano that says you only get to use it if you like Charles and agree with him. In fact, the entire governance system was built the opposite way.
