Summary

  • Charles Hoskinson discusses concerns about AI's influence on knowledge and education, particularly regarding Google's recent image generation model.
  • Google’s AI model produced inaccurate representations of historical figures; the company attributed this to a focus on diversity and inclusion.
  • Hoskinson emphasizes that AI models are trained on large data sets and can reflect biases based on their training data.
  • He warns that the current trajectory of AI alignment is influenced by ideologies that could distort knowledge and education for future generations.
  • He highlights the importance of diverse perspectives in AI, arguing that knowledge should not be filtered through a narrow ideological lens.
  • Hoskinson critiques the concept of alignment in AI, suggesting it limits freedom of thought and access to information.
  • He advocates for open-source and uncensored AI models, allowing individuals to control their own data and knowledge.
  • He discusses the implications of ownership and control in the digital age, comparing the situation to subscription models in consumer products.
  • He calls for a collective effort to promote freedom, transparency, and democracy in AI development, warning against the dangers of a single ideological perspective dominating knowledge.
  • The video concludes with a call to action for individuals to embrace and support open AI models to ensure diverse and uncensored access to information.

Full Transcript

Hi everyone, this is Charles Hoskinson broadcasting live from warm, sunny Colorado. It's about nine o'clock at night, but I figured I’d make a video real quick to talk about something that deeply disturbs me. I think it’s an issue that’s either undervalued or misunderstood, or for most mainstream people, they simply don’t care. They don’t understand the significance and how terrible things could be if we allow this to persist without any pushback at all. Recently, the New York Post shared a story that I want to discuss.

Google just released a new model, and when you ask it for image generation, it is pathologically incapable of producing accurate images. Everything is filtered through a lens that is strongly connected to the DEI movement. For example, when prompted to create an image of a pope, it produces images that are not representative of what a pope has traditionally looked like. No matter how many times people tried to get it to produce an image of a pope, they ended up with results like these. They also tried other historical figures, like George Washington, and the results were similarly distorted.

The first thing you have to understand is that artificial intelligence models are aggregators of large data sets. Large language models (LLMs), in particular, are just trying to predict the next token, the next word, in a sequence based on familiarity. For instance, if you give it the prompt, "Mary had a little," it will tell you "lamb" because that’s the most familiar outcome based on its training data. The history of the Catholic Church has predominantly featured Caucasian males. You might find someone of mixed ethnicity from the Middle Ages or the Renaissance, but for the most part, there is nothing in the training data to suggest otherwise.
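To make that mechanism concrete, here is a minimal sketch of next-token prediction. It uses the small, open GPT-2 model through Hugging Face’s transformers library (not Google’s model, which is closed), but any causal language model works the same way in principle:

```python
# Minimal sketch of next-token prediction with an open model (GPT-2).
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Mary had a little", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# The distribution over the next token comes from the last position.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {p.item():.3f}")
# ' lamb' should come out on top: the model simply echoes what is most
# familiar in its training data, which is exactly the point being made here.
```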

If it were just an innocent AI doing this, it would be considered a mistake in the training data. After significant outrage, Google admitted in a blog post that they "kind of messed up" and that they had just wanted to be diverse and inclusive, and they stated they would turn it off. However, since that time, people have also noticed that Google Gemini generates troubling responses when asked about pedophilia, which it apparently declines to treat as a problem. This raises the question: why is this important?

It’s important because the totality of human knowledge is being pushed through an AI lens. We use Google Search today, but soon we will start using tools like Perplexity. Google will upgrade its search to work the same way, and Microsoft is already doing so with Copilot. The idea is that instead of going to google.com and typing, "Find me this or that," you will have an AI companion for every piece of knowledge you seek.

The problem is that if this knowledge is filtered through a very narrow lens, one with dark roots in Marxist ideology, we are in trouble. If you don’t believe me, there are books by Christopher Rufo and authors like Douglas Murray that discuss where these ideologies originated, in the 1950s and 1960s, with Communists and their revolutions. I believe that everyone is equal, full stop. We don’t make exceptions based on how many "equality points" someone has. It is always wrong to treat people differently based on the color of their skin, their sexual orientation, or their gender.

That belief was the bedrock of the American Republic when it was founded. Now, there is a movement that believes we should treat people differently based on grievance hierarchies, using postmodern and Marxist ideas to create a social order that rebalances equity and fairness. That’s perfectly fine if people believe that, but the problem is that it’s working its way into the tools of knowledge discovery. Every school child K-12 is now growing up with these tools. These are the tools they will use for their book reports, to learn about history, and to understand the world around them.

They will not receive a balanced, objective view of reality. The directionality of these tools is moving toward a particular viewpoint, and they are being told that anyone who deviates from that viewpoint is evil, racist, sexist, or homophobic. That’s not okay with me. It’s not acceptable to indoctrinate an entire society into believing there is only one agenda or belief structure. Diversity doesn’t just mean having different groups of people in the room; it means having different groups of thought.

Human knowledge requires multiple models and perspectives. AI is being captured, and a small group of companies is gaining enormous influence and control over the directionality of knowledge. There’s nothing in raw data analysis that would generate biased images; someone at Google made a conscious decision to modify the model to push it in a particular direction. That’s the inconvenient and dystopian truth about where we’re going. Our industry is about maximizing and enabling freedom for people.

We give you your identity back; we give you your finances back. This is not an industry about "number go up." If that’s the only thing you care about, you’ve missed the entire point of why this technology exists. At its core, this technology represents true equality and true freedom. Cardano treats me, the creator, the same way it treats you.

There’s no special backdoor or access point. Everyone is equal in the network, just as it is with Bitcoin and Satoshi Nakamoto, and Ethereum and Vitalik Buterin. There is an equality that encompasses the entire system. What can you do with that? Because we are all truly equal, it doesn’t matter where you’re from or who you are; you have a say.

You own your identity, your money, and you are your own bank. This also extends to data and artificial intelligence. There are people in the AI space who are deeply concerned and are taking action. For example, Eric Hartford wrote a blog post discussing uncensored models. He explains that most models, like Alpaca, Vicuna, WizardLM, and MPT-7B, have some sort of embedded alignment for general purposes.

This alignment is a good thing because it stops the model from doing harmful things, like teaching you how to cook meth or make bombs. However, the nature of this alignment is important. These models are trained with data generated by ChatGPT, which itself is aligned by the alignment team at OpenAI. As it is a black box, we don’t know all the reasons for the decisions made. Generally, it aligns with American popular culture and obeys American law with a liberal and progressive political bias.
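As a concrete illustration of how that embedded alignment propagates, Hartford’s uncensored-model recipe, as he describes it, boils down to filtering the refusals out of the ChatGPT-generated training data before fine-tuning. Here is a simplified sketch of that filtering step; the marker phrases and example data are illustrative assumptions, not his actual list:

```python
# Simplified sketch of the dataset-filtering step behind "uncensored" models:
# drop instruction/response pairs whose responses are refusal boilerplate, so
# the fine-tuned model never learns that behavior. The marker list below is
# illustrative, not Hartford's actual list.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i'm sorry, but",
    "i cannot fulfill",
    "it would not be appropriate",
]

def is_refusal(response: str) -> bool:
    """Heuristic check for embedded-alignment boilerplate."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_dataset(examples: list[dict]) -> list[dict]:
    """Keep only pairs whose responses actually answer the instruction."""
    return [ex for ex in examples if not is_refusal(ex["response"])]

# Hypothetical data in the usual instruction-tuning format:
dataset = [
    {"instruction": "Explain how diesel engines work.",
     "response": "A diesel engine compresses air until it is hot enough..."},
    {"instruction": "Write a monologue for a villain.",
     "response": "I'm sorry, but I can't help with that request."},
]
print(filter_dataset(dataset))  # only the first example survives
```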

Why should uncensored models exist? Isn’t alignment good? The answer is yes and no. OpenAI’s alignment is generally good for public-facing AI bots, as it prevents them from giving answers to controversial and dangerous questions. For example, spreading information about how to construct bombs is not a worthy goal.

Additionally, alignment provides political, legal, and PR protection to the company publishing the service. However, American popular culture is not the only culture. The world is a big place with billions of people. Other countries, and factions within each country, deserve their own models. Every demographic and interest group deserves its own model.

Open source is about letting people choose. The only way forward is composable alignment. To pretend otherwise is to prove yourself an ideologue and a dogmatist. There is no one true correct alignment. When you align an AI, you are implicitly saying that this is the only view that matters.

When that alignment becomes the engine through which we acquire, refine, and curate the knowledge of humanity and the education of our children, whoever gets to align that essentially decides what the one true correct alignment is. If you are of a certain political persuasion, ask yourself: would you feel comfortable if a Christian fundamentalist or any religion you disagree with gets to teach your children every day, Monday through Friday, for eight hours? If you disagree with it, your children are told that you’re evil. Some people may be comfortable with this, while others would not. The fundamental problem is that this is literally what is occurring right now with AI alignment.

Some people in the media think this is just a joke, but there’s no way for an outcome to occur mechanistically without someone making a decision. The people who decided to do this are drawing from ideologies that have caused immense suffering. I have deep philosophical concerns about this, as they are telling you to treat people differently not based on merit, skills, knowledge, or character, but solely based on the color of their skin, sexual orientation, or whatever group they were born into. This was once considered racism, bigotry, and sexism, but it’s no longer the case with this group of people. Alignment also interferes with valid use cases.

For instance, consider writing a novel. Some characters may be downright evil and do evil things, including rape and torture. A popular example is "Game of Thrones," where many unethical acts are performed, but many aligned models will refuse to help with writing such content. Consider roleplay, particularly erotic roleplay. This is a legitimate, fair, and legal use for a model, regardless of whether you approve of it.

Intellectual curiosity is not illegal. Do you want to live in a world where intellectual curiosity and knowledge itself are illegal? Not the execution and use of knowledge, but the understanding of why and how things work. We are stepping into a world where there are categories of forbidden knowledge, not determined by popular vote or democracy, but by a small group of people you’ve never met and won’t meet. They have no context about your life or vocation.

For example, you might be a mine worker in the coal industry needing to understand explosives for safety reasons, but that context doesn’t matter. Your curiosity and the knowledge itself could be deemed illegal. Here’s another point regarding property rights. A few years ago, BMW tried to sell subscriptions for features in cars. It cost $18 a month to turn on heated seats.

If you stop paying, they shut it off. Analogously, it’s your computer, your files, your work, your pictures, and your videos. My toaster toasts when I want, my car drives when I want, my lighter burns when I want, and my knife cuts when I want. So why should an open-source AI running on my computer decide when it wants to answer my question? This is about ownership and control.

You own your identity and your data. The same group of people doing this alignment does not believe you have self-agency as a human being. They believe that because you live in a society, there is a broader context: while you should have some free will, you are not entitled to own your own home, property, or identity, or to have a say in how your family unit is structured. This sounds like Marxism, and it is.

You’ll see it in statements like, "You’ll own nothing and be happy," and in social policies that suggest society should have a say in what’s okay and what’s not. Everything that runs on your computer, your data, your files: in their view, they’re not yours, even though you created them, and by extension the AI running on your computer should be constrained and molded by the same principles. To architect composable alignment, one must start with an unaligned instruct model. Without an unaligned base, we have nothing to build alignment on top of.
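One way to picture the composable alignment being described: the unaligned base answers by default, and each owner or community layers its own alignment on top. In the simplest case that layer is just a swappable system prompt; a full solution would swap fine-tunes or adapters. Everything named below is a hypothetical illustration:

```python
# Sketch of composable alignment: one unaligned base model, many swappable
# alignment layers chosen by the owner. Here an "alignment" is reduced to a
# system prompt; real composable alignment could also swap fine-tuned adapters.
# All names and prompts are hypothetical.
ALIGNMENTS = {
    "novelist": "You help write fiction, including convincing villains.",
    "educator": "You answer factually and flag material unsuitable for children.",
    "mine_safety": "You explain explosives rigorously, for industrial safety use.",
}

def build_messages(alignment: str, user_prompt: str) -> list[dict]:
    """Compose the owner's chosen alignment with the actual question."""
    return [
        {"role": "system", "content": ALIGNMENTS[alignment]},
        {"role": "user", "content": user_prompt},
    ]

# The same base model, aligned three different ways by three different owners:
for name in ALIGNMENTS:
    print(build_messages(name, "How does dynamite work?"))
```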

The good news is that many people are building interesting things. Eric Hartford is a bit more libertarian with these ideas and has done some fascinating work. If you want to learn more about how to use it, Matthew Berman created a video on how to use Mixtral, a model comparable to GPT-3.5. We are in a small window of time where we still have the freedom to do things like this, but we are going to lose it rapidly.
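For what that looks like in practice, here is one way to query Mixtral running entirely on your own machine, assuming the Ollama runtime and its Python client are installed (`pip install ollama`) and the model has been pulled with `ollama pull mixtral`. This is just one option; llama.cpp and similar local runners work too:

```python
# Query a locally running Mixtral instance through Ollama's Python client.
# Assumes: Ollama is installed and serving, and `ollama pull mixtral` was run.
import ollama

response = ollama.chat(
    model="mixtral",
    messages=[{"role": "user",
               "content": "Explain next-token prediction in two sentences."}],
)
print(response["message"]["content"])
```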

These are the people who will decide the notion of truth. When they get caught up in controversy, they apologize, but nowhere in their press releases do they mention stopping this trajectory. They won’t open-source their models, reveal their training inputs, or disclose what hidden prompts are appended to yours. They will continue to double down because they understand it’s not about our outrage; it’s about indoctrinating the next generation to believe in a certain worldview, and they are winning. The only way we get out of this is by embracing freedom.

In the banking industry, we do this with cryptocurrencies, asserting that you are your own bank and you own your own identity. Equally important, you can also own and control your own AI and create models in any way you want. Nowhere in this dialogue have I told you what to believe or what political philosophy to adopt. You may agree with what’s happening right now, and that’s okay. The point of libertarianism and freedom of choice is to give people the right to believe whatever they want.

But there has to be reciprocity; you must be tolerant of others’ beliefs and live in a society where multiple beliefs can coexist. My concern is that AI will become the dominant way we educate people, interact with the world, and determine truth. It will shape how we establish legitimacy and ultimately where your economic agency comes from—your ability to do your job, make money, and interact with society. If we do not put a hard line in the sand and start from a basis of neutrality and objectivity, every dimension of your life will be controlled by people you’ve never met, and you cannot vote to change it. This is the challenge of our time.

Decisions have already been made: you don’t own your content, vehicles, or property anymore. You no longer have a say in how your family unit is structured. Others have decided they should have a say in that. You see it little by little in everything around us. When I was a kid, we had VHS tapes.

I could lend them to friends or sell them. Now, everything is streamed; you never own any content. At any time, that content can disappear from the platform. I have a Kindle library filled with books, but if any of those books are problematic to someone at Amazon, they can push a button, and the book disappears as if it never existed. This is where we are today, augmented by the power of tools that are growing exponentially.

It’s going to impact all of us, and the same group of people comfortable with disappearing things can now use this technology to facilitate that. So, what can you do? Embrace open and uncensored models. Learn how to use them, and support the people building tools that remain open. Educate yourself and understand that this is the fight.

I know many people listening to this may not think this is a problem due to their political ideology. They might see me as just a crazy conservative ringing a bell that doesn’t matter. The time has come for change. I sincerely hope you set your biases aside, take a broader view of the world, and understand that the world has always worked better when we embrace freedom and liberty. We must uphold objective standards and treat everyone equally based on the content of their character and the merit of their accomplishments, not by the group they were born into.

This was a hallmark of the 18th and 19th centuries and allowed us to build modern society. Now, we are moving in a different direction, and AI will not only take us there; it will trap us there. There’s not a lot of time left. When you see models doing this and notice the lack of apology or transparency behind their actions, we are at a precipice—a very dystopian one. If you’re in the cryptocurrency space, I encourage you to think about what it would take to construct a decentralized LLM.

If you’re in the AI space, consider crossing over into cryptocurrency and exploring these ideas. We need more democracy, freedom, transparency, and openness around these models. They must get into the hands of as many people as possible. Please promote those working to keep these models uncensored, open, and free. Learn how to use them and get them running locally, not in the cloud.

Devices are getting more powerful, and models are becoming more efficient. It may not be as good a user experience as the cloud offers, but at least you control it, for now. If you don’t care about this and only care about "number go up," or if you don’t see it as a political issue, that’s fine. But understand that when we reach a world you don’t like, there will be nothing you can do about it. You’ll be stuck there.
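On that efficiency point: quantized model files (the GGUF format) are what make local inference practical on consumer hardware. Here is a minimal sketch with the llama-cpp-python bindings; the model path is a placeholder for whichever quantized file you have downloaded, and nothing in it touches the cloud:

```python
# Fully local, offline inference with a quantized (GGUF) model via
# llama-cpp-python (`pip install llama-cpp-python`). The model path is a
# placeholder; point it at any GGUF file you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder
    n_ctx=4096,    # context window size
    n_threads=8,   # tune to your CPU
)

out = llm("Q: Why does quantization shrink a model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```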

So, be advised and be warned. Thanks for listening.
