The Case for Authoritarianism | Vitalik Buterin & Noah Smith


Released Tuesday, 25th June 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

The Chinese government and the Russian government have

0:02

a lot of resources devoted to pushing their

0:04

message out there and the US government doesn't.

0:07

Right, liberalism doesn't. The US government sits back and

0:09

says, you know, from an Olympian remove, and

0:12

it's like I am the overall mighty hegemon

0:14

of information And so I'm going to let

0:16

all these tiny little actors play it out

0:19

You know and then one of those

0:21

tiny little actors is the government of China

0:23

a country four times the size of the

0:25

United States, with, you know, arguably a higher

0:27

GDP than the United States. Welcome

0:36

to Bankless where we explore

0:38

the case for authoritarianism. What

0:41

did I just say? We'll get into that. This

0:43

is Ryan Sean Adams. I'm here with David Hoffman

0:45

and we're here to help you become more Bankless

0:48

now, I want to make it clear before we

0:50

get into this episode both of our guests today

0:52

do not want Authoritarianism to

0:54

win the 21st century. Okay, quite the opposite,

0:57

quite the opposite I think both have

0:59

dedicated their lives in various ways to

1:01

pursuing anti-authoritarian ideas and, in

1:03

Vitalik's case, technologies. So today's episode

1:06

is more of a steel man. The

1:08

question is if totalitarianism outcompetes free

1:11

societies and wins the 21st century,

1:14

how might it win? And

1:16

what if the information anarchy of the

1:18

internet spells the downfall of liberalism? This

1:20

is a fascinating conversation with Noah Smith who

1:22

is an economist and Vitalik Buterin who you

1:25

know from crypto. I would call

1:27

this topic a non-crypto topic today. But

1:30

actually game theorizing on how the authoritarians might beat

1:32

us might just be the most crypto thing ever.

1:34

Before we get into the conversation, our friends and

1:36

sponsors over at Stakewise want you to know

1:38

what they're up to in the world of liquid

1:40

staking on Ethereum. If you're a

1:42

solo staker, but your ETH is locked up

1:44

and illiquid because you're solo staking, you

1:46

can continue to be a solo staker in

1:48

the Stakewise protocol while also being able

1:50

to mint osETH in order to use

1:53

your solo staked ETH in DeFi on layer

1:55

2s, on EigenLayer, or anywhere

1:57

else across the Ethereum landscape. You get to

1:59

keep your rewards that your node is

2:02

earning while doing more with your

2:04

ETH. And if you're not a

2:06

solo staker, Stakewise is introducing a

2:08

vaults marketplace to choose perks that

2:10

you want to add onto your

2:12

staked ETH between custom MEV strategies,

2:14

DVT, insurance, APY boosts, all

2:16

things available through the Stakewise vaults marketplace. There

2:18

is a link in the show notes if

2:20

any of this stuff piqued your interest. Bankless, we're

2:22

of course known to be a crypto podcast

2:24

but if you are a long time listener

2:26

of Bankless, you know that at the end

2:28

of the crypto rabbit hole comes conversations on

2:30

how do we structure society and

2:33

which structures do better than others and how

2:35

should we prepare for unfavorable structures like authoritarianism

2:37

and how can we prevent that in the

2:39

first place. And with this episode, we kind

2:41

of skip straight to the bottom of the

2:44

rabbit hole talking about how technology is changing,

2:46

society is changing and how that's gonna impact

2:48

the way that society is organized. So without

2:50

further ado, let's go ahead and get right

2:52

into the conversation with Vitalik Buterin and Noah

2:54

Smith. But first a moment to talk about

2:57

some of these fantastic sponsors that make this

2:59

show possible, especially Kraken, our preferred place

3:01

to buy or sell your crypto in

3:03

2024. If you do not have an

3:05

account with Kraken, consider clicking the links in the show notes getting

3:07

started with Kraken today. If you want

3:09

a crypto trading experience backed by world-class security

3:11

and award-winning support teams, then head over to

3:14

Kraken, one of the longest standing and most

3:16

secure crypto platforms in the world. Kraken is

3:18

on a journey to build a more accessible,

3:20

inclusive and fair financial system, making it simple

3:23

and secure for everyone, everywhere to trade crypto.

3:25

Kraken's intuitive trading tools are designed to grow

3:27

with you, empowering you to make your first

3:29

or your hundredth trade in just a few

3:31

clicks. And there's an award-winning client support team

3:34

available 24-7 to help you along the way,

3:37

along with a whole range of educational guides, articles

3:39

and videos. With products and features like

3:41

Kraken Pro and Kraken NFT Marketplace and

3:43

a seamless app to bring it all

3:45

together, it's really the perfect place to

3:47

get your complete crypto experience. So check

3:49

out the simple, secure and powerful way

3:51

for everyone to trade crypto, whether you're

3:53

a complete beginner or a seasoned pro.

3:55

Go to kraken.com/bankless to see what crypto

3:57

can be. Not investment advice, crypto trading.

6:00

Welcome back to Bankless. Hey, great to

6:02

be back. We also have Vitalik Buterin.

6:04

He is a philosopher, I would say,

6:06

in the context of today's conversation and

6:08

probably know him as a co-founder of

6:10

Ethereum. Vitalik, welcome to Bankless as well.

6:12

Thank you. Good to be back. So

6:14

we are doing this episode on a

6:17

post, an argument that Noah put out

6:19

on his Substack. And the title of

6:21

that post is, How Liberal Democracy Might

6:23

Lose the 21st Century. And

6:25

I want to provide some context for why

6:27

we're having a Bankless episode on this. Isn't

6:30

Bankless a crypto podcast? This is certainly a

6:32

crypto adjacent topic, but I guess

6:34

I'll give some framing for this. You know,

6:36

when I read Noah's article, I was kind of

6:38

a know your enemy type of reaction for me

6:40

because Bankless listeners will know we are very much

6:43

a friend of liberalism with a lowercase L.

6:46

You know, civic rights, free speech, free markets,

6:48

private property. We want that to succeed. I

6:50

mean, we have a horse in this race.

6:52

And I think in order to help

6:55

it succeed, you have to understand the

6:57

points at which it will fail, liberalism,

6:59

that is. So in the past,

7:01

I think especially in the way that I was

7:03

brought up, I've been guilty of a blind faith

7:05

in liberalism. You know, like it'll obviously win. I

7:07

think I'm sort of a maybe a victim of

7:09

just like a child in the US, like growing

7:11

up in the 90s. And

7:13

I don't want to live in a blind

7:16

faith of like liberalism will always win. I

7:18

want to live in an actual reality. And

7:20

I think that's why Noah's article was so

7:22

instructive. And so Noah

7:24

has this argument for

7:26

how authoritarianism might actually

7:29

outcompete Western liberal democracies.

7:31

And Vitalik, I think you called his argument.

7:33

I saw a Farcaster tweet about this, the

7:37

strongest case for authoritarianism. So

7:39

I think you thought it was a pretty good

7:41

case. I know you have some takes here. So

7:43

that's what we're going to do in today's conversation.

7:46

Number one, we want to just frame out the

7:48

argument. So have Noah explain it. Maybe Vitalik have

7:50

you help? And then two,

7:52

we want to talk about maybe the counterpoints to

7:54

this argument. And then three, we want to finish

7:56

off with where do we go from here? Does

7:58

that sound good? Amazing, I'm

8:00

getting thumbs up. All right, so let's start

8:02

with you Noah. So let's frame this up

8:05

because I think we need some background. Can

8:07

you just explain what we mean by liberalism

8:09

and why so many of us have this

8:12

blind faith in it? I can't be the

8:14

only one. So explain liberalism and why it

8:16

just feels like everybody thinks that liberalism

8:18

has won already, at least in the

8:20

West. Well, so when people

8:23

say liberalism, there's really, I think, three

8:25

things that they mean. The first thing

8:27

is markets, your right to basically buy

8:29

and sell stuff if you want,

8:31

second is an own stuff, et cetera,

8:33

property rights, all that. The second is

8:35

democracy, your ability to elect your leaders.

8:37

And the third is kind of civil

8:39

rights, your ability to kind of do

8:41

whatever you want, you know, as

8:44

long as it doesn't hurt anybody else, of course that's always

8:46

being, who knows what that really means. But

8:48

those are sort of the three things people mean when

8:50

they say liberalism, your ability to elect your

8:52

leaders, live your life the way you want to

8:54

and buy and sell stuff and own stuff.

8:57

Noah, why, you start the article this way,

9:00

why were we raised in this age, and

9:02

you say you were raised in this age

9:04

of liberal triumphalism, like the sense that liberalism

9:06

has won already? Right, in the 20th century,

9:08

at the beginning of the 20th century, we

9:11

were just in the middle of the industrial revolution,

9:13

the really fastest part of it. And people kind

9:15

of didn't know how society was gonna be organized.

9:18

There were a lot of different ideas about how

9:20

we were gonna organize an industrial society and nobody

9:22

really knew what that was gonna look like. And

9:25

varieties of socialism from

9:27

evolutionary socialism, which

9:30

is basically what Sweden looks like

9:32

now, to revolutionary socialism, which

9:34

is basically, I don't know what North Korea

9:37

or something, Cuba maybe looks like now. And

9:39

then of course you had various other things, social

9:42

Darwinism and various kinds of racial

9:44

supremacy theories. And then you had,

9:47

in the United States, the big idea that everyone was

9:49

pushing was that both free enterprise,

9:51

which is what we now call economic

9:53

liberalism, free enterprise and democracy

9:55

were both good things. And that was

9:57

the best way to organize society, American

9:59

society. So there were sort of all these hats

10:01

in the ring of what society is going to

10:04

look like once we move from farms to factories

10:06

and offices. And I think

10:08

that by the end of the 20th century,

10:10

that question had been answered in favor of

10:12

liberal democracy by most people. In

10:15

China, which was still a

10:17

lot poorer than America at that time, they were

10:20

experimenting with various ways to liberalize the

10:22

society and people were

10:25

experiencing many kind of new

10:27

freedoms, new personal freedoms and

10:30

economic freedoms and sort of personal freedoms if

10:32

not democracy. They didn't have democracy, but you

10:34

could certainly do a Mao impersonation as a

10:36

joke in 2004 or something. And

10:38

lots of people did this professionally, these Mao impersonators,

10:40

they were women by the way, who would dress

10:42

up as Mao. So that was in China. And

10:45

of course, Russia had the Yeltsin period and even in

10:47

the early Putin period, people thought like, oh, Putin's going

10:49

to be a liberal, blah, blah, blah, because he had

10:51

the support of educated sort of liberal thinking elites in

10:54

the city. And so by the end of the 20th

10:56

century, by the 1990s and early 2000s, I think that

10:58

people generally thought, okay, this is

11:00

what works. It was Francis Fukuyama's thesis, the

11:02

end of history, blah, blah, blah. And

11:05

so I think that now it's the strength

11:07

of China that's really challenging that.

11:09

And not just the strength of China, but

11:11

the weakness of the United States. So the

11:14

United States has looked remarkably

11:16

weak since at least 2008. In

11:19

the war on terror period, the United States looked kind

11:22

of angry and pissed off about 9-11,

11:24

was becoming less liberal. But then

11:26

in 2008, the United

11:28

States economic model sort of appeared

11:30

to have collapsed, the kind of

11:32

financialized capitalism that we had. And

11:34

then after the election of Trump

11:36

and the divisive rise

11:39

of social media movements, I would

11:41

say, people started asking, okay, is

11:43

this society just total chaos? And

11:45

then after that, we started discovering

11:47

all these things that our society

11:49

had seemingly lost the ability to

11:51

do, like build housing or build

11:54

trains or build literally

11:57

anything. And so America got this image as the

11:59

build-nothing country. And China almost seemed a

12:01

mirror image of that. Whereas in China, you

12:03

could build anything you want because the government

12:06

just says, do it and you do it.

12:09

And, you know, China was economically growing and

12:11

strong. And then if you go to their

12:13

cities, you see giant glittering new malls and

12:15

massive train stations and beautiful

12:17

high-speed trains. They can take you anywhere really fast.

12:20

And you see, you know, LEDs on all

12:22

the buildings, right? And then drones delivering stuff,

12:24

right to your doorstep, I don't know, or

12:26

little delivery robots anyway. And so then

12:28

all of these things, I think have caused

12:30

people to question, was Fukuyama not

12:33

just wrong, but the opposite of right? Is

12:35

China style authoritarianism actually gonna win

12:37

now? Is that the

12:40

model that works now? And

12:42

so I was trying to think, okay, how could that be

12:44

true? You know, of course it's possible

12:46

that there's no model that really works and it's

12:48

all just contingent in the fact that China happens

12:50

to be this really big country that has historically

12:52

authoritarian instincts and happens to be only at a

12:54

third of American per capita GDP anyway. And,

12:57

you know, it's sort of in its rising phase and

13:00

it just happens to be really big. And it's all

13:02

just this big illusion and they put LEDs on the

13:04

buildings, but actually they don't really look that nice. But

13:06

I wanna steel man the idea that authoritarianism is gonna

13:08

win in the 21st century. I

13:11

thought, okay, so how do I do that, right?

13:14

And I thought, what was the strength of liberalism?

13:16

Why did we think it might've succeeded in the

13:18

20th century? And why might that strength

13:20

turn into a weakness now in the 21st century?

13:22

And the only thing I could think of was

13:24

the internet. That's the only thing that's different now.

13:27

Like people aren't different very much, right? We

13:30

have less lead poisoning maybe, I don't know.

13:32

But like industry isn't that different. Like there's

13:34

a few different things. The main thing that's

13:36

different now, between now and 1992 is the

13:39

internet. And

13:42

so I was thinking, how could the

13:44

internet have totally changed the game in

13:46

terms of whether liberalism or, you know,

13:48

sort of, I don't know, authoritarian totalitarianism,

13:50

whatever, naturally is stronger. And so

13:52

I thought, well, the internet's all about

13:54

information. So what's the strength of

13:57

liberalism with regard to information? The strength of

13:59

liberalism with regard to information, everybody will tell you,

14:01

Friedrich Hayek will tell you, and a lot

14:03

of people tell you, it's to aggregate information.

14:05

So briefly, Hayek's theory is that a market

14:07

aggregates information about costs and preferences, and that

14:10

you know what to produce. Producers know what

14:12

to produce because they know what people want

14:14

to buy, and they know what their costs

14:16

are going to be, and consumers

14:18

know what to consume because, you know, they know what

14:20

the producer's costs are going to be. And so analogously,

14:23

you can think of democracy as revealing information about what

14:25

voters want, and you can think

14:27

of civil society as revealing information about how

14:29

people want to live, right? People argue about

14:31

whether they like this or that music, I

14:34

don't know, or whether they think gay marriage

14:36

is okay. So then you aggregate this stuff

14:38

with public debate, the marketplace of ideas. So

14:40

you can think of liberalism as this giant

14:43

information aggregator. Now, how does the

14:45

internet change that? Well, the internet

14:47

makes information aggregation much easier, right?

14:50

So we can get information much

14:52

more easily. Maybe that actually

14:55

reduces the benefit of liberalism

14:57

because an authoritarian state can

15:00

get much better data about what to produce,

15:02

what to tell people to produce. And,

15:05

you know, an authoritarian state can get much

15:07

better information about whether the citizenry is angry

15:09

and you need to respond to what they

15:11

want. And you can get much

15:13

better information about what kind of things, what kind of

15:15

behaviors you can restrict with only pissing off a few

15:18

people versus what kind of

15:20

behaviors you would piss off everybody by restricting.

15:22

And so authoritarian states can get all this

15:24

information from the internet, especially with AI, especially

15:26

with sort of universal surveillance, kind of that

15:29

your phone is like a surveillance device that

15:31

knows everything about what you do

15:33

in your whole life and can send it to a

15:35

central party apparatus or some authoritarian organ and

15:38

tell them everything about you. And so now

15:40

maybe that information gathering has gotten so easy

15:42

for authoritarian states. Now, that doesn't mean I

15:45

think they're better at it because of technology,

15:47

but maybe they're less bad. Maybe they'd still

15:49

have a disadvantage, but it's ameliorated, right? It's

15:51

less bad than it used to be. Meanwhile,

15:53

maybe there were some advantages that authoritarian states

15:56

always had over liberal states that have

15:58

gotten more pronounced in the age of the

16:00

internet. So for example, disadvantages of liberalism that

16:02

were always there that have been exacerbated by

16:04

the internet. And so I was thinking,

16:06

okay, well, in the internet, we spend all our time

16:08

on Twitter just arguing and the smartest people in the

16:10

world are wasting their time arguing on Twitter, like

16:13

with complete idiots who think that like, you know,

16:15

they're like, but did you adjust the inflation adjusted

16:17

graph for inflation? And like, how many times, like,

16:19

is it worthwhile to have the highest IQ people

16:21

on the planet sitting there explaining once again that

16:24

yes, inflation adjusted means you have adjusted for inflation.

16:26

Thank you very much. And so that's, you know,

16:28

giant waste of time. And so when I look

16:31

at financial capitalism, I look at, you know, Elon

16:33

Musk literally had to drive himself nuts to just

16:35

to get enough funding to build some cars. Whereas

16:37

the people who run BYD did not in China,

16:39

right? They did not have to drive themselves nuts.

16:41

Maybe they're nuts anyway. I've never met them, but

16:43

you know, Elon had to basically break himself with

16:45

stress over the model three rollout, raising money to

16:47

do this thing because the funders wouldn't give him

16:49

the money just, you know, because like, oh, cars

16:52

cool. And like, maybe this is

16:54

analogous to a lot of things in financial

16:56

capitalism. The idea that fundraising for

16:58

long-term projects is so goddamn hard

17:01

because everybody's out there saying, it'll never work.

17:03

It'll never work. And the people that

17:05

say it will work, you know, are just bullshitting, pumping

17:07

and dumping and blah, blah, blah, blah. That

17:10

in order to keep the fickle market

17:12

focused on providing capital for a very

17:14

long-term project for a large public company,

17:16

maybe that's just not possible. And that's

17:18

why GM and Ford and all these

17:20

old line companies seem so unresponsive is

17:23

because everything is just quarterly earnings. And

17:25

it's this information tournament. You know, if you really

17:27

want to invest for the future, you've got to

17:29

spend inordinate excessive amounts of

17:32

time, you know, on the internet

17:34

and yelling that you're good. And

17:36

so maybe this was always a problem with financial

17:38

capitalism. I think it probably was, but now that

17:40

the internet allows massive real-time dissemination of

17:42

bullshit information, like all the people who said the model

17:44

three would never work and would break Tesla and Tesla

17:46

would die. And it would never, nobody would buy it

17:48

and it would never succeed, right? There are all those

17:50

people and they're all like, it's going to fail. And

17:52

then, you know, that has required Elon

17:55

Musk to drive himself nuts fundraising. So

17:57

I think of this as an information tournament. You've

18:00

got people yelling bullshit and you've got people

18:03

yelling truth and truth does not automatically drive

18:05

out bullshit because bullshit is very easy and

18:07

cheap to generate. It's really

18:09

easy to make a misleading graph. You

18:11

know, it's hard to make a graph that teaches you something. It's

18:13

easy to make a graph that if you decide on your point

18:16

ahead of time, you just want to bullshit. It's

18:18

easy to make that graph. It's easy to make bullshit

18:20

arguments from an ideological standpoint. It's like ideology is like

18:22

a muscle suit. So everybody just herps

18:24

their derp, as I like to say.

18:27

And so everybody just throws their ideology, you

18:29

know, into the ring and it just becomes

18:31

this giant shouting match. And so meanwhile in

18:33

China, they're just like, okay, there's just one

18:35

ideology. It's Xi Jinping thought. What is Xi

18:37

Jinping thought? Well, it's not really anything interesting.

18:39

It's just this one dude and he's sort

18:41

of a boomer conservative. And he's like, let's

18:43

make some cars. Der, let's not

18:45

make internet stuff. The internet stuff is

18:48

not real innovation. Let's make cars instead.

18:50

Der, and that's not optimal, right? You

18:52

haven't really optimized, but maybe that's less

18:54

bad than having a bunch of people

18:56

screech that like, you know, a rise in

18:59

the price of literally anything is because

19:01

corporations are profit gouging and the, you

19:03

know, the evil corporations are hoarding all

19:05

the stuff. And you know, like maybe

19:07

obviously we're dealing with many very flawed

19:09

systems here, there's no perfect system, but

19:12

maybe in the age of the internet,

19:14

the internet helps authoritarians get real information,

19:16

you know, for all his authoritarianism, Xi

19:18

Jinping was also able to see the

19:20

white paper protests against COVID lockdowns and

19:23

know really early, really quickly

19:25

through the internet, when to cancel zero COVID,

19:27

right? As soon as people started getting a

19:29

little bit upset, there were like a few

19:31

hundred people at those protests and still it

19:34

moved national policy. So maybe authoritarianism has become

19:36

more responsive in the age of the internet

19:38

while liberalism has been paralyzed by

19:40

people shouting disinformation and bullshit all day. That's

19:42

a great articulation of it. And I want

19:44

to continue to steel man this. Eventually, Vitalik, I

19:47

want you to kind of weigh in and

19:49

try to articulate what Noah is saying here,

19:51

but no, let's continue to steel man the

19:53

argument. Cause you made a whole bunch of

19:56

connections that I just want to reinforce, but

19:58

your basic idea is that totalitarianism might be

20:00

better adapted to this world that we find

20:02

ourselves in the 21st century. And

20:05

the core reason why is because, as

20:07

you say, the internet or the

20:09

cost of information has gotten

20:12

very cheap, right? Whereas

20:14

in the 20th century, maybe the

20:16

cost to produce information was a

20:18

lot higher. And so

20:20

this technological shift of cheap

20:22

information has really possibly

20:24

given totalitarian authoritarian regimes a fitness

20:27

advantage, like kind of in this

20:29

Darwinian struggle of which society is

20:31

going to produce the most economic

20:34

output. And so let's just

20:36

reinforce that a little bit. This

20:38

idea that liberal democracies are

20:40

information aggregators. I think bankless listeners

20:42

will be more familiar with the

20:45

idea of capital markets as

20:47

information aggregators. You know, there's that clip

20:49

that Milton Friedman pencil clip, where he

20:51

talks about how it's from the 1980s

20:53

and we'll include a link in the

20:55

show notes. But he basically holds up

20:57

a pencil and he says, you

20:59

know, no single person knows how to

21:01

make this pencil from scratch. And then he

21:04

goes through all of the different components of

21:06

the pencil, you know, the graphite inside, the

21:08

rubber, all of it sourced from

21:10

different places in the world. And

21:12

he makes the point that all of these

21:14

things require specialized skills and labor. And so

21:17

something as mundane as a

21:19

pencil is really this unique creation of

21:21

capitalism. And isn't it great that we

21:23

can all coordinate around price systems and

21:25

have market signals, we could do this

21:27

without war, you know, we could do

21:29

this in a peaceful way. And

21:32

so that is kind of like information

21:34

aggregation theory as applied to capital. But

21:37

what about applying that to democracies?

21:40

So what about the idea that

21:42

I think is core to your

21:44

argument here, that liberal democracies are

21:46

information aggregators? And so they have

21:48

a superiority in their ability to

21:51

aggregate information effectively, like leading to

21:53

better decisions and like more buy

21:55

in and public goods. That

21:57

wasn't necessarily clear to me going into

21:59

this episode. Could you steel man that

22:01

a little bit? Why are democratic liberal

22:03

democracies information aggregators and why have they

22:05

in the 20th century had an advantage

22:08

there? Right. So I would direct

22:10

you to Bruce Bueno de Mesquita's selectorate theory, which is

22:12

a really interesting theory. The idea is in

22:15

a democracy, right? So we have the same

22:17

understanding of how markets aggregate information. But let's

22:19

talk about democracies also aggregate information just a

22:21

different way. Say you have two

22:24

parties, right? And one party's like, I'm going to raise your taxes,

22:26

and the other party says, I'm going to

22:28

lower your taxes. And the party that says

22:30

I'm going to raise your taxes says, okay, I'm going to raise your

22:32

taxes and I'm going to buy you health care with that. Right. And

22:35

the other party is like, no, I'm going to lower your taxes and you can

22:37

go buy your own health care if you want, you can buy whatever you want.

22:40

And so those are the two ideas on offer.

22:42

And so the question is, what do the masses want? Right?

22:45

Then you have people vote based on that.

22:47

This is an incredibly simplified, stupid model, obviously.

22:50

But this is the first model you learn

22:52

in public economics because it's just illustrative, right?

22:54

It's an illustrative example. And

22:56

so then you have people vote on which of these they

22:59

like better. And so do you like the

23:01

high tax, high services candidate, do you like the low tax, low services candidate

23:04

better. And then you vote on them. And if

23:06

more people want high tax, high services, they'll vote

23:08

for that candidate, and if fewer people want

23:11

it, they'll vote for the other candidate. And so

23:13

then by voting, you aggregate information about what people

23:15

want about people's preferences. And

23:18

by the way, this is called the median voter

23:20

theorem. You've heard of it. And so that's the

23:22

idea of the median preference gets into policy because

23:24

then the candidates do what

23:26

they say they're going to do and everything works nice.
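
To make the median voter point concrete, here is a minimal sketch in Python. This is my own illustration rather than anything from the episode, and the voter distribution and the two platforms (0.7 and 0.4 as tax levels) are made-up assumptions: each voter has a preferred tax level, two candidates each offer a platform, and every voter backs whichever platform sits closer to their own preference.

import random

random.seed(0)
# Each voter's preferred tax rate, drawn uniformly between 0 and 1 (hypothetical).
voters = [random.uniform(0, 1) for _ in range(10001)]
median_pref = sorted(voters)[len(voters) // 2]

def votes_for(platform_a, platform_b):
    # Every voter backs whichever platform is closer to their own preference.
    a = sum(1 for v in voters if abs(v - platform_a) < abs(v - platform_b))
    return a, len(voters) - a

# A "high tax, high services" platform (0.7) vs a "low tax, low services" one (0.4).
a_votes, b_votes = votes_for(0.7, 0.4)
print(f"median preference ~ {median_pref:.2f}")
print(f"platform 0.7: {a_votes} votes, platform 0.4: {b_votes} votes")
# The platform nearer the median preference wins, so vote-seeking candidates get
# pulled toward the median: the election itself is the aggregation step, the same
# way prices are the aggregation step in a market.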

23:28

You get your high tax, high service, Denmark, or you

23:30

get your low tax, low service, I don't know, Hong

23:33

Kong, whatever. I know they didn't

23:35

really have a democracy, but like, anyway,

23:37

you can't use the United States for that

23:39

anymore because we're not that anymore. The UN

23:41

Singapore is way different. There's no real libertarian

23:43

example I can use versus that, but this

23:46

is how people used to talk about this. So that's how democracies

23:49

aggregate information. Now, are democracies perfect

23:51

information aggregators? Well, no, but neither

23:53

are markets. There's reasons why this

23:55

information aggregation fails. And so

23:58

the idea that democracy is the least bad.

24:00

system, you know, which is a famous Winston

24:02

Churchill quote, this idea came from the idea

24:04

that well, when you have a totalitarian state,

24:06

when you have, you know, Nicolae

24:08

Ceaușescu in charge, right? You know, he's like,

24:10

oh, I'll ban abortion. I'll do these other things

24:12

that he like thinks are right because he and

24:14

his buddies think that all the guys around him

24:17

are like, that sounds legit. Let's do that. Then

24:19

the normal people don't like it. And he's

24:21

like, oh, well, okay, they bitch and moan, but

24:23

it's just a few loud people, blah, blah, blah.

24:26

But because you don't have the aggregation, because they

24:28

can't vote for Ceaușescu and you can't see him

24:30

punished at the polls. You can't see him thrown

24:32

out, you know, in favor of some other leader,

24:34

blah, blah, blah. The leadership just doesn't realize what

24:36

the people really want. And so does things that

24:39

the people don't want and then gets thrown out

24:41

violently via revolution, which causes chaos in society, which

24:43

leads to problems. Although maybe in the long term,

24:45

it's good, but then, you know, it's better if

24:47

you can throw the bums out with

24:50

an election than throwing them out by hanging them

24:52

from a gas station and by burning the Capitol.

24:55

And so that's the idea of democracies aggregating

24:57

information about what voters actually want. Vitalik, you

24:59

put this into a pretty interesting metaphor that

25:01

I kind of want to bring up in

25:03

this point of the conversation on Warpcast, which

25:05

is where we saw your interest in this

25:07

article. You say, this might be the

25:09

strongest case for authoritarianism. And then you link to Noah's

25:11

article. Basically, the war for people's

25:13

hearts and minds has no stable equilibrium except

25:16

local hegemony of one dominant elite, much

25:18

like and for the same reasons as what

25:20

Hobbes points out for regular war. And so

25:22

this is Thomas Hobbes Leviathan concept that I

25:24

think you're alluding to. And you're saying Hobbes

25:27

alludes to this idea that first, there's a global

25:29

state of anarchy, and then there's

25:31

a war of all against all, which suppresses the anarchy,

25:33

which leads to a governing elite with a

25:35

monopoly on force, which is kind of like

25:37

how we have the stable equilibrium of countries

25:40

to this day. And you allude to the

25:42

fact that this produces a same pattern with

25:44

instead of a state of physical anarchy, you

25:46

have a state of information anarchy. And

25:49

I think, again, alluding to the fact that

25:51

like putting out a tweet is so cheap

25:53

these days. And so the same pattern exists

25:55

where like, if we want truth, we kind

25:58

of need a governing elite with a monopoly

26:00

on memes is kind of how you say

26:02

it. Maybe you could also just like add

26:04

to this illustration of just like what happens

26:06

when like information markets and capital markets interact

26:08

with each other and how they can kind

26:10

of get distorted and overall just how you

26:12

resonated with Noah's article. Yeah. And so I

26:14

think you definitely gave a pretty good introduction

26:16

to, I guess, the thesis

26:18

already, right? But basically, if

26:20

you think of like what the

26:23

public discourse game is, and like

26:25

you imagine the most pessimistic possible

26:27

interpretation of the public discourse game,

26:29

there is basically no truth seeking.

26:31

And instead, what you have is

26:34

you have multiple tribes. And

26:36

each of these tribes

26:39

basically fires off a type

26:42

of missile or warship

26:45

or a tank or

26:47

whatever. That could be a meme

26:49

or it could be an article,

26:51

could be a tweet or a

26:53

video or whatever. And often, you

26:55

know, millions of these info-missiles

26:57

fired at you would have some

26:59

common themes. And so you

27:01

have one group that's like basically trying

27:04

to essentially have their memes colonize your

27:06

brain. And then you have a different

27:08

group that's also trying to have their

27:10

memes that are completely different memes go

27:12

and colonize your brain. And

27:14

so you basically have this like zero-sum

27:17

conflict, right? It's like, you know, if

27:19

we say, you know, one side is

27:21

pushing capitalism, the other side is pushing

27:23

socialism. Or if

27:26

let's say, you know, it's a foreign

27:28

policy issue and like, let's say, you

27:30

know, Greenland and Sweden are

27:32

at war. And you know,

27:34

you have one group saying, you know,

27:36

support Greenland and the other saying support

27:39

Sweden. These are just like very zero-sum

27:41

things. So like you have people pushing

27:43

in one direction, you have people pushing

27:45

in the other direction. And it basically

27:47

all kind of roughly sums up to

27:50

zero. And you just have like a

27:52

huge amount of wasted effort, huge amount

27:54

of stress, a huge amount of people

27:56

not getting literally killed, but you know,

27:59

like definitely getting like much

28:01

worse, you know, emotional experiences in life than

28:03

they otherwise would. And so you basically

28:06

ask the question of like, okay, so

28:08

you have this war of all against

28:10

all that looks very similar to a

28:12

war between two armies to conquer territory,

28:14

except instead of it being two armies

28:16

battling over a forest, you have two

28:18

meme armies battling over each and every

28:20

person's brain. And you ask

28:23

like, what is the equivalent

28:25

of a peace treaty,

28:27

right? And the equivalent

28:30

of a peace treaty is so

28:32

basically in the Hobbesian case, like

28:34

you have local territorial monopolies, right?

28:36

And then after that, you know,

28:38

you had things like the

28:41

treaty of Westphalia, which like formalized a

28:43

lot of this, and then it kept

28:45

going further and further from there, basically,

28:47

you know, saying that, okay, you know,

28:49

we have this notion of territory, and

28:52

within each territory, then,

28:54

you know, you have a local

28:56

monopoly. And actually, the Westphalia example

28:58

is interesting, right? Because I think

29:02

that was also when the concept of

29:04

cuius regio, eius religio, right?

29:06

Like whoever has the region has the

29:08

religion came about, right? Basically,

29:10

yeah, that one of the ideas

29:12

is that kind of the local monarch

29:15

would also have the ability to choose

29:18

the religion of the country. So it's

29:20

an interesting example to harken back to

29:22

because we're basically saying like, even

29:24

back then, it kind of, you know,

29:27

like recognize that like this concept of

29:29

thing, like one person

29:31

having hegemony over a piece of territory,

29:33

and then different people having hegemony over

29:36

different pieces of territory is something that

29:38

applies to a physical war. And it's

29:40

also something that can, in

29:42

the same way, apply to information war, right?

29:45

And so the way that this works in,

29:47

you know, the space of information war is

29:49

basically like, okay, yeah, you know, you have

29:51

one country, and, you know, inside that

29:53

one country, you're supposed to, you know, the

29:55

only memes that are allowed to spread are

29:57

the Xi Jinping thought memes, and you have another

29:59

country and then in that other country the

30:01

only memes that are allowed to spread are

30:03

some different memes. In the third country you

30:06

have some different memes that are

30:08

allowed to spread. You

30:11

have this equilibrium where

30:14

basically you don't have at least

30:17

as much zero-sum mimetic warfare

30:19

because for every country there is

30:21

one dominant elite that has a

30:23

reliable hold over the meme war.

30:25

There might be other groups that

30:27

want to get their memes out

30:30

into the meme war but they're

30:32

just so much less powerful than

30:34

the dominant party. It's like the Asia-ISU-S

30:37

governments versus a random cultist sect or

30:39

whatever. The second group has no chance

30:42

and so most of the time they

30:44

don't try and so there's no bloodshed.
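
One way to see the shape of this argument with numbers is a toy rent-seeking contest. This is purely my own illustration, not a model Vitalik or Noah describe: the proportional contest function, the pool value V and the strength figures are all assumptions. Two tribes burn effort to win shares of a fixed pool of attention; evenly matched, they each burn a lot just to land on the same 50/50 split they would have had for free, while with one side ten times stronger, far less total effort gets burned and the dominant side holds almost all minds, loosely mirroring the hegemony-as-peace-treaty equilibrium.

V = 100.0  # value of the contested pool of attention (arbitrary units)

def payoff(e_mine, e_other, s_mine, s_other):
    # Proportional ("Tullock") contest: my share of the pool, minus the effort I burn.
    return V * s_mine * e_mine / (s_mine * e_mine + s_other * e_other) - e_mine

def equilibrium_effort(s_a, s_b):
    # Closed-form Nash effort for this contest (the same for both sides):
    # e* = V * sA * sB / (sA + sB)^2
    return V * s_a * s_b / (s_a + s_b) ** 2

for s_a, s_b, label in [(1, 1, "evenly matched tribes"), (10, 1, "one dominant elite")]:
    e = equilibrium_effort(s_a, s_b)
    # Sanity check: neither side can gain by unilaterally deviating to any effort on a grid.
    grid = [0.01 + 0.1 * i for i in range(600)]
    assert payoff(e, e, s_a, s_b) >= max(payoff(x, e, s_a, s_b) for x in grid) - 1e-6
    assert payoff(e, e, s_b, s_a) >= max(payoff(x, e, s_b, s_a) for x in grid) - 1e-6
    print(f"{label}: each side burns {e:.1f}, total effort wasted {2 * e:.1f}, "
          f"stronger side holds {s_a / (s_a + s_b):.0%} of minds")

The particular model is beside the point; it just shows the claim's structure: a contested information space burns real resources, and one-sided dominance is one grim way the burning stops.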

30:46

That's the analogy

30:49

that I made between

30:53

authoritarianism though it's a very

30:55

related concept. Info-hegemony

30:57

as opposed to info-anarchy might be

30:59

one of the ways to think

31:01

about it and how

31:03

things ended up turning out with

31:05

physical warfare. One of the things

31:07

that this thesis then implies is

31:09

that if there is

31:12

this analogy then if we

31:14

want to argue that info-anarchy is

31:16

something that's actually better than info-hegemony

31:18

then we might

31:20

want to look for deep and

31:23

enduring reasons why physical

31:25

war and meme war actually are

31:27

different from each other. Okay so

31:29

Vitalik you were just framing things

31:31

in this Hobbesian world where we

31:33

have this anarchy of information because

31:35

the cost to produce information has

31:37

been very cheap and the only

31:40

remedy is that we have some

31:42

sort of centralized monopoly on information

31:44

almost like a ministry of truth.

31:46

That's the force that will bring equilibrium and

31:48

cure the anarchy and so that's the idea

31:51

here. Noah could you make

31:53

the jump for us because I'm still not clear

31:55

on the jump. You said the internet may

31:58

have possibly brought this about. And

32:00

so the cost of information, the

32:02

price to create information has maybe

32:04

plummeted or the cost to distribute

32:06

information has plummeted. Why

32:09

is the cost to distribute,

32:11

propagate information going down? Why

32:13

does that help totalitarian types of

32:16

regimes and ministries of truth? That

32:18

link is not quite clear, I think, in the

32:21

case we've made so far. All right. So there's

32:23

a cell phone maybe in your pocket or

32:25

close to you right now. Unless

32:28

you've taken extraordinary precautions, that

32:31

cell phone records information about your entire

32:33

life. Everything you do, everything you buy,

32:36

everything you search for, everyone you talk

32:38

to, everything you say to people online.

32:40

Maybe in real life too, if it

32:43

is sneaky enough, but certainly online,

32:45

where you are day to day,

32:47

minute to minute, that cell phone,

32:49

that little brick knows everything about

32:51

you, right? And so what can

32:53

that tell someone? Well, if you're

32:55

a large corporation or if

32:58

you're a government who owns a bunch of large

33:00

corporations, it can tell

33:02

you what you'd like to buy. You know, it

33:04

can mine your data and say, oh, you know, I think

33:07

this guy really likes broccoli now, maybe

33:09

I'll go produce some broccoli. So

33:12

that can aggregate information about what you

33:14

want. Right. Of course, it

33:16

can do it in the Hayekian way. Right. In

33:18

Hayek you need prices. Right. How do you know

33:20

whether people like broccoli? You look

33:22

at price data. If the price of

33:24

broccoli goes up, that means maybe people

33:27

like broccoli more now. But

33:29

on your phone, you can see exactly who liked broccoli,

33:32

when they bought broccoli, what they were doing, when they

33:34

bought broccoli, whether they were talking about broccoli, whether they

33:36

searched for broccoli, blah, blah, blah. You get a lot

33:38

of information about that. And that information

33:40

that you couldn't get in 1957. That's

33:43

information you couldn't get in 1995. And

33:46

that's information that's now available. Noah, can I just

33:49

like regurgitate that just to make sure I totally

33:51

understand the way that I wasn't trained in as

33:53

an economist like you were, but just like as

33:55

a meme, I always understood like, why did communism

33:57

fail? Oh, because like central planning

33:59

is. inefficient, right? It just doesn't have

34:01

the information that a free

34:04

capitalistic market has. But I think what you're

34:06

saying is like, there's such a strong centralization

34:08

of information due to modern technology that all

34:10

of a sudden, like central planning, perhaps has

34:12

a lot more of information that it previously

34:14

would not have had thanks to technology. I

34:16

think that's just like what you're saying in

34:18

a short way. Right. And the hypothesis here

34:21

is that it's gone from being, say, 20%

34:23

as good as free markets

34:25

to being 60% as

34:28

good as free markets. It's still significantly

34:30

worse, but the disadvantage is less. And

34:33

perhaps that disadvantage is now small enough where it

34:35

can be more than compensated for by advantages and

34:37

other domains. And that's the hypothesis. Of

34:39

course, I think that personally, you know,

34:41

I was briefly a finance professor and

34:43

I think, you know, about capital allocation, things

34:46

like that. Personally, obviously, whether people want

34:48

to buy broccoli or not, that's the

34:50

classic example. Right. But if you think about

34:52

productive efficiency, which companies are best, right?

34:55

Suppose you have one country where the government is

34:57

allocating money to companies and saying, okay, you can

34:59

have this much funding. You can have that much

35:01

funding. And another country where how much

35:03

funding you get is based on, you know, a

35:05

whole bunch of investors deciding what price to value

35:07

your stock at and what interest rate to charge

35:10

you in the bond market. Right. And then in

35:12

the other one, you just have some banks which

35:14

are owned by the government saying, okay, we think

35:16

you're going to have good opportunities or you get

35:18

money, you get investment capital. And so

35:20

the internet can provide the people doing,

35:23

you know, loan evaluation or whatever with

35:25

massively more amounts of information about both

35:27

who's buying their products and how their

35:29

stuff is organized and what their technology

35:31

is like and all these things about

35:33

a company that you just couldn't get

35:35

that information in 1995, even if you

35:37

wanted to. And I know because the Japanese

35:39

bureaucracy really tried and they

35:42

weren't able to. You know, MITI was constantly behind the

35:44

curve on this and they were the best of

35:46

the best. Back in the 90s, they were just

35:48

racking up one L after another doing this. And

35:50

of course, when the Japanese economy thrived, it was

35:52

often, you know, kind of because people just

35:54

went around MITI. You

35:56

know, sometimes MITI did have some big success, but that's another

35:59

topic. And that was mostly earlier, but then, so

36:01

I guess the idea is that maybe

36:03

the internet, when we say the internet, we don't

36:05

just mean like people arguing on Twitter, right? We

36:07

don't just mean people podcasting like

36:09

we're doing now. We also mean things

36:11

like the massive data collection. So the

36:13

amount of data about, you know, buying

36:15

behaviors and demand and what people are

36:17

doing with their workday, you know, and

36:19

whether people are productive and all these

36:21

things and about who has what technology

36:24

in their company. The amount

36:26

of information is vastly greater. We store massive

36:28

amounts of information. So databasing is really the

36:30

thing here. And so databasing and then

36:32

the fact that it's all networked means that you can

36:34

transmit it easily, but you know, you can run regressions

36:36

on it too. We have much better

36:39

information. I'm sorry, much more, I don't know whether it's

36:41

better, much more information about what companies are doing, how

36:43

they do it, and what they might be able to

36:45

do in the future than we did 20 years ago,

36:47

30 years ago. And so an

36:50

authoritarian state might be able to use that to allocate

36:52

capital, say that before they were only 20% as

36:55

good at allocating capital as a, you know,

36:57

market economy. Let's say now they're 60% as

37:00

good at allocating capital as a market

37:02

economy. Well, that has significantly eroded their

37:04

disadvantage. You know, there's still disadvantage. Maybe

37:06

markets are still the best, you know,

37:08

and Hayek's still formally right. But the

37:10

difference has shrunk to the point where

37:13

authoritarian states' other strengths that were

37:15

always there can now shine through more.

37:18

That's the worry. You know, I don't think

37:20

this is true necessarily. I just think it's

37:22

worth thinking and worry about. I don't think

37:24

any of this is true. You know, I'm

37:26

making a case here. I'm being a bit

37:29

of a lawyer for this idea because I

37:31

don't really strongly believe that this is right.

37:33

And I also think that like, you know,

37:35

I fervently hope that in 20 years we'll

37:38

be saying, well, that's why Xi Jinping's, you

37:40

know, regime collapsed because, you know, obviously liberal

37:42

democracies are much better. I

37:44

hope we're saying that. I want to be able to say

37:46

that in 20 years, but I don't know standing right here.

37:48

I don't know what's going to happen. And so I'm sort

37:50

of pushing this scary idea so that we can think about

37:52

it. This is the hypothesis.

37:54

Okay. So Vitalik, in your mind, have we

37:57

sufficiently explained the argument that Noah is making

37:59

or would you add anything else? I know,

38:01

Noah, you touched upon information tournaments a little

38:03

bit and kind of like the drive by

38:05

explanation, but you know, like maybe we could

38:07

touch upon that. Or just like in general,

38:10

do you think we've articulated the case he

38:12

was making in his article, Vitalik? Or what

38:14

would you add? Yeah, I mean, I think

38:17

I would only add one small thing,

38:19

which is like, you know, we talked

38:21

about info-Hobbesianism, but there's definitely a

38:23

kind of generalized, you know, Hobbesianism that

38:25

you can talk about and like, to

38:28

the extent that you can model aspects

38:30

of finance as a war of all against all, then

38:33

like, fine, you know, throw that in there.

38:35

Like, you know, if you

38:37

think about like, some like billion dollar

38:39

hedge funds, and like using, like high

38:41

leverage to try to like attack and

38:43

like break particular companies positions and like

38:46

if you interpret that as a zero

38:48

sum behavior, then like, you could

38:50

kind of squint and make a

38:53

case for like putting up financial

38:55

walls to protect against that, then

38:57

you know, you can apply similar

38:59

ideas to potentially,

39:02

yeah, offline,

39:05

internet things potentially to the

39:08

biospace, like basically, yeah, it's

39:11

a pretty generalizable argument. And so

39:14

you can try to like apply

39:16

it issue by issue to like

39:18

different kinds of things and basically

39:20

see them and like is that

39:22

say, zero sum game that's like

39:25

analogous to physical warfare in the

39:27

right ways. And if it

39:29

feels like it has the same equilibrium and like

39:31

if it feels like it doesn't then you know,

39:33

you can like actually look like dig into the

39:36

specific example and explore the reasons why. Yeah, one

39:38

example that Noah gives maybe this gets into the

39:40

idea of like wasteful information tournaments that might be

39:42

going on in like Western liberal democracies is the

39:44

idea of an election. And you

39:47

know, you commented that the average US

39:49

politician spends about 30 hours of their

39:51

work week actually just trying to raise

39:53

funds, raise capital in order to what

39:55

in order to go get elected again.

39:57

And it's like, which leaves you with the

39:59

question of how much of their time

40:01

is actually spent on governing. And the

40:03

question of, well, is this just wasteful,

40:05

right? It's like, will a regime that

40:08

does not need to have elections, will

40:10

that regime just out-compete, maybe govern more

40:13

and spend the 40 hour work without

40:15

actually governing, not wasting all this capital

40:17

on getting elections? So is

40:19

part of the idea here, Noah, that we have

40:21

this waste going on in liberal

40:23

democracies? And how would you pattern match that

40:26

with what we've discussed so far? Let's think

40:28

about why Congress people are out there fundraising

40:30

their entire time. If you're holding national office,

40:32

what do you use money for? Use

40:35

it for television advertisements, use

40:37

it for internet advertising, use it for

40:40

ads, right? So now suppose

40:42

the other side also raised a bunch

40:44

of money and uses that for ads,

40:46

okay? Now, television isn't the

40:48

internet, but the fact that the

40:50

internet makes it very easy to

40:52

spread misinformation. So for

40:54

example, we've seen a lot of

40:56

people spread, misinformation about

40:58

how good the economy is doing. Often

41:01

in order to discredit Biden, but sometimes

41:03

to defend Biden too. We've

41:05

seen people use alternative measures of inflation

41:07

that are just absolutely terrible methodology, but

41:09

broadcast that with scary, scary charts or

41:12

numbers to like a whole bunch of

41:14

gullible people. We've seen people

41:16

do charts where you adjust one of the

41:18

lines for inflation and the other isn't, to show

41:20

that people's purchasing power is collapsing when actually

41:22

it's not, because you've just inflation adjusted the

41:24

wages and you didn't inflation adjust the prices.

41:27

Those are just a couple of things I encounter in my daily

41:30

life on the internet arguing, massive disinformation

41:32

and these memes take hold, and a

41:34

lot of people believe them, right?

41:37

And so how do you counter them? How

41:39

do you counter these memes? As a politician, if someone

41:41

shows a viral chart showing that wages

41:44

are flat and prices are way, way up, and

41:46

it's because they adjusted the wages for inflation. So

41:48

the wages only increased slowly while they didn't adjust

41:50

the prices for inflation. So the price, you know,

41:52

and so they show this chart and it goes

41:54

around and now you're a politician and you've actually

41:56

done a good job and wages have

41:59

gone up adjusted for inflation. You know,

42:01

real wages have gone up and you've done a good

42:03

job and now you have to counter that message. And

42:05

it's so easy to misunderstand that chart. Like

42:08

it always goes viral because it looks really dramatic

42:10

and it's total fucking disinformation bullshit. Like

42:13

no, you should adjust for inflation

42:15

for both. You adjust both time series for inflation if

42:17

you're going to compare this. You

42:20

know, this is not an ambiguous case, right? This is

42:22

just a mistake. And often it's an intentional mistake. People

42:24

intentionally do this just to get clicks. I could do

42:26

it tomorrow. I could just show you

42:28

how it's done. Right. But I'm

42:30

not going to. But you can and people do

42:32

it all the time. The Wall Street Journal did it by

42:34

accident once and had to like retract it. So

42:36

what I'm saying is how do you counter that? Well,

42:38

perhaps you can pay to put your own message out there.

42:41

Actually, real wages went up and you know, you can pay to

42:43

do TV ads saying real wages went up. You know, wages went

42:46

up by this much while the cost of living only went up by that much. But

42:48

that costs money, money, money, money. And

42:50

you have to be out there fundraising

42:53

for that money all day long while

42:55

someone else, you know, a 27 year

42:57

old staffer does the job of governing.

42:59

So the cost to create misinformation is

43:01

just like very cheap. And it takes

43:03

a very high cost to sort of

43:05

validate or verify that information is true

43:07

or not. Right. So in all

43:09

of this, I'm curious, Vitalik, you called this like sort

43:11

of what we're up against.

43:14

You seem to find Noah's argument

43:16

that he's making pretty compelling as

43:18

to why totalitarianism, maybe let's call

43:20

it, could dominate and beat the

43:23

idea of like Western civil liberties.

43:26

And this is all very ironic, I think,

43:28

because it would be sort of being defeated by a

43:30

device that was supposed to propagate democracy

43:34

and liberal values, let's say, which is like

43:36

the Internet. And so the idea that a

43:38

tool of liberal democracy creation

43:40

could actually be used by totalitarian

43:42

regimes to beat them at their

43:45

own game is somewhat counterintuitive. So

43:47

why did you call this argument

43:50

sort of what we're up against?

43:52

And maybe you consider it a

43:54

good argument against liberalism in the

43:56

21st century. I think the idea

43:58

of like info war as a

44:01

zero sum game is one of

44:03

those ideas that's like at

44:05

the top of a lot of people's minds. It's

44:08

definitely something that is concerning a lot

44:10

of people. And it's definitely something where

44:12

if you just go and look

44:15

out onto the Twitter verse

44:17

yourself, you can very clearly

44:19

see evidence for it. I

44:22

think also just one of the interesting

44:25

things about being in the

44:27

crypto space is that in

44:29

a lot of ways, we get to be a

44:31

couple of years ahead of some of these trends.

44:34

There is definitely huge

44:37

amounts of zero sum info

44:39

warfare that's happening between

44:41

Ethereum and Bitcoin maximalists,

44:43

Ethereum and the XRP

44:45

army, Ethereum and Solana,

44:47

Ethereum and whoever. At

44:50

least between the more hawkish

44:52

and maximalist factions in each

44:54

of which, under

44:57

the surface, there's a lot of

44:59

devs that actually get along quite well. That's

45:02

not visible if you just look at Twitter. But

45:05

the info war of all against all layer

45:07

is definitely a

45:09

layer that exists. We

45:12

see how all of these

45:14

info wars are coming out

45:16

and we see the

45:18

obvious need for some kind

45:20

of better

45:22

way of actually

45:24

doing this function of aggregating

45:27

and eliciting a lot of different

45:30

people's wisdom and thinking power and

45:32

information. A lot of what we

45:35

have today is just not actually meeting

45:37

that. It is worth thinking

45:40

about basically, what is the worst

45:42

case scenario as all of this evolves,

45:44

and the worst case scenario definitely

45:46

seems to be an

45:51

outcome where all of these problems don't get solved

45:53

at all. It turns out the more serious versions

45:58

of all the problems actually are

46:00

true and actually will continue

46:03

to be true and like basically

46:05

nothing survives aside from essentially

46:09

islands of various sizes that are

46:11

run in a very centralized way

46:13

internally, right? Have you ever

46:15

felt that the tools for developing decentralized

46:18

applications are too restrictive and fail to

46:20

leverage advancements from traditional software programming? There's

46:22

a wide range of expressive building blocks

46:24

beyond conventional smart contracts and solidity development.

46:26

Don't waste your time building the basics

46:28

from scratch and don't limit the potential

46:31

of your vision. Cartesi provides powerful and

46:33

scalable solutions for developers that supercharge app

46:35

development. With a Cartesi virtual machine you

46:37

can run a full Linux OS and

46:39

access decades of rich code libraries and

46:41

open source tooling for building in web3.

46:44

And with Cartesi's unique roll-up framework you'll

46:46

get real-world scaling and computation. No more

46:48

competing for block space. So if you're

46:50

a developer looking to push the boundaries

46:52

of what's possible in web3, Cartesi is

46:54

now offering up to $50,000 in

46:57

grants. Head over to Cartesi's grant application

47:00

page to apply today. And if you're

47:02

not a developer, those with staked CTSI

47:04

can take part in the governance process

47:06

and vote on whether or not a

47:09

proposal should be funded. Make sure you're

47:11

vote ready by staking your CTSI before

47:13

the votes open. Are you worried about

47:15

the security of your cross-chain transactions? Cross-chains

47:18

with confidence using Transporter, the revolutionary token

47:20

bridging app designed to give you peace

47:22

of mind. Powered by Chainlink CCIP, Transporter

47:24

is your trusted gateway for securely moving

47:27

assets like ETH, native USDC, and LINK

47:29

and so many more across some of

47:31

your favorite blockchains. Over $2.8 billion has

47:33

been hacked from token bridges to date.

47:36

Transporter puts a stop to this by

47:38

ensuring your transfers are protected by the

47:40

most robust security features available. Chainlink CCIP

47:42

provides level 5 security backed by multiple

47:45

decentralized Oracle networks and an independent risk

47:47

management network. Transporter also provides real-time tracking

47:49

throughout your transaction with its newly engineered

47:51

user experience so you'll never have to

47:53

second-guess the safety or location of your

47:56

assets ever again. And the best part?

47:58

Transporter makes it simple. Whether you're a

48:00

blockchain beginner or a seasoned trader, Transporter's

48:02

intuitive interface lets you execute cross-chain transactions

48:05

with just a few clicks. No additional

48:07

fees, just a low cost for using

48:09

CCIP which can be paid in link

48:11

or your blockchain's native gas token. But

48:14

don't just take my word for it,

48:16

see for yourself why Transporter offers a

48:18

stress-free bridging experience. Experience the future of

48:20

token bridging at transporter.io and just send

48:23

it. Arbitrum is

48:25

the leading Ethereum scaling solution that

48:27

is home to hundreds of decentralized

48:29

applications. Arbitrum's technology allows you to

48:31

interact with Ethereum at scale with

48:34

low fees and faster transactions. Arbitrum

48:36

has the leading DeFi ecosystem, strong

48:38

infrastructure options, flourishing NFTs, and is

48:40

quickly becoming the Web3 gaming hub.

48:43

Explore the ecosystem at portal.arbitrum.io. Are

48:45

you looking to permissionlessly launch your

48:47

own Arbitrum Orbit chain? Arbitrum Orbit

48:49

allows anyone to utilize Arbitrum's secure

48:51

scaling technology to build your own

48:54

Orbit chain, giving you access to

48:56

interoperable, customizable permissions with dedicated throughput. Whether

48:58

you are a developer, an enterprise, or

49:00

a user, Arbitrum Orbit lets you take

49:02

your project to new heights. All of

49:05

these technologies leverage the security and decentralization

49:07

of Ethereum. Experience Web3 development the way

49:09

it was always meant to be. Secure,

49:11

fast, cheap, and friction-free. Visit arbitrum.io and

49:14

get your journey started in one of

49:16

the largest Ethereum communities. Yeah,

49:19

isn't the worst case scenario what

49:21

Noah pointed out, which is basically

49:23

like liberal democracies are on their

49:25

way out? So in a similar

49:27

way that agricultural societies disrupted hunter-gatherers

49:29

and like, you know, industrialization kind

49:31

of like beat out the monarchies,

49:33

then maybe information tech just, you

49:35

know, spells the end of

49:37

liberal democracies. Isn't that the worst

49:40

case scenario for fans of liberalism? Yeah,

49:42

yeah, I think it absolutely is. Well,

49:44

I hope that that's not true. Maybe

49:47

we can get into how this argument

49:49

could be wrong. And

49:52

I'll throw this over to Noah to

49:54

start. So you've put forth the argument,

49:56

which is, you know, the cost to

49:58

create information, as it plummets, maybe

50:00

it gives an advantage to totalitarian

50:02

regimes and disadvantages liberalism and

50:05

the idea of liberalism, and these

50:07

totalitarian regimes just outcompete liberal democracies

50:09

in the 21st century. So

50:11

let's talk about how this argument could be

50:13

wrong. And I just want to maybe throw

50:15

this to you Noah. So we've had technologies

50:18

in the past that have brought the cost

50:20

to propagate information like down to zero. Like

50:22

one of those technologies was the printing press,

50:25

right? And so it did not

50:27

lead to totalitarian regimes kind of

50:29

taking charge and winning. More or

50:31

less led to the Renaissance, more or

50:33

less led to the splintering

50:36

and forking of sorts of

50:38

different religions, Protestantism sort of

50:40

had its way with the

50:42

introduction of the printing press.

50:45

So it seems like we've had technologies

50:47

like the internet in the past and

50:49

it bred more liberalism, it bred

50:52

more freedom. Like why is that

50:54

not the case here? I'm asking you to

50:56

maybe argue with yourself. Do you think that's

50:58

a compelling reason for why this thesis might

51:00

be wrong? Well good. So I think that

51:02

that's basically right. Something

51:04

about lowering information costs made liberalism

51:07

strong in the 19th century and

51:09

then really in the 20th century.

51:12

And maybe in earlier centuries too, you know, you

51:14

can make an argument that the Thirty Years' War

51:16

was won by the less illiberal side, and they were

51:18

both pretty illiberal, but you could make an argument

51:20

that the protestant states at least, you

51:22

know, didn't have the overarching and at the

51:24

time quite corrupt Catholic church and that made

51:26

them more liberal and that the Habsburgs were

51:28

really in some sense the bad guys. But

51:31

so you can make that argument. But I think

51:33

to make the argument that I'm making, you need to look

51:35

at non-linearities, right? You need to look at a U curve.

51:39

The idea is that as information becomes

51:41

cheaper, it becomes possible to aggregate

51:43

information with mechanisms like markets more easily. With

51:45

the printing press, you can have people on

51:47

the telegraph and all this stuff. You can

51:50

send information about prices farther and faster and

51:52

this allows you to get, you

51:54

know, aggregate information through the price mechanism

51:56

faster. Same with like voting. It's easier

51:59

to get information about what the candidates

52:01

actually want. Of course

52:03

you had plenty of disinformation there too at the

52:05

time, but still you can make this argument that

52:08

information aggregation becomes easier with just a

52:10

little bit of reduction of information aggregation

52:13

costs, but then that plateaus over time.

52:16

You know, so you have this thing where at

52:18

first a little bit of information technology like the

52:20

printing press and the TV and the radio make

52:22

it easier to aggregate preferences, but

52:25

then the internet doesn't necessarily make it

52:27

much easier on top of that to,

52:29

you know, figure

52:31

out what people want to buy and

52:33

stuff like that for the market, say,

52:36

to aggregate information about preferences. Or democracy, it

52:38

doesn't make it much better. And then

52:40

let's say that the disadvantages might

52:42

be a concave function like this, right? They

52:45

might be an upward bending function where

52:48

the social resources that you waste on

52:50

information tournaments might simply

52:53

increase a lot. And so

52:55

you have this crossover point, right? At

52:57

first your information tournament cost is only

52:59

increasing slowly while your information aggregation benefit

53:02

is increasing very quickly. And here's where

53:04

liberal democracy wins. But then when they

53:06

cross over here, here's where a liberal

53:09

democracy starts losing because the costs keep

53:11

growing and growing and growing while the

53:13

benefits asymptote or, let's say, decline. It's

53:15

convex, or maybe I

53:18

mixed up concave and convex, but it's

53:20

about whether the area under it... anyway,

53:22

the curve bends up. So convex costs,

53:24

right? Or concave utility, is

53:26

the whole idea here. And you get this

53:28

pattern with a lot of things, right? You get

53:30

this pattern with, like, investment in the Solow

53:32

model of growth, right? You have diminishing returns

53:34

to capital, but you have straight line depreciation costs

53:37

or even increasing depreciation costs as you build

53:39

more and more capital. And so eventually there's

53:41

some crossover point where building more capital hurts you

53:43

instead of helps you. It's like, do you

53:45

need one more bridge? Do you need one

53:47

more office building? At

53:49

some point the balance flips, right? So to get

53:51

this argument, you need to get an argument where

53:54

you have this crossover and this flip. And

53:56

the idea is that when information costs get,

53:58

you know, They start out very high because

54:00

you're just like Grog the caveman running around

54:03

with your club. And then as you get better

54:05

technology, you get information costs gets lower and

54:07

lower because people learn how to write and

54:09

things like that, and printing press and television

54:11

and radio, blah, blah, blah, and liberal democracy gets

54:13

better and better and better. And then you

54:15

hit some crossover point with the internet where

54:18

suddenly your benefits have really just

54:20

asymptoted out while your costs continue to

54:22

explode from information tournaments. So in order

54:24

to make this argument, you need to

54:26

make an argument for a nonlinearity. You

54:29

can't just say more information equals more

54:31

good or more liberal, right?

54:33

That's straight line thinking. Right.
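To make the non-linearity concrete, here is a toy numeric sketch. The functional forms and parameters are assumptions chosen purely for illustration, not anything taken from Noah's argument: aggregation benefits are assumed concave and tournament costs convex, and a crossover appears.

```python
import math

# Toy illustration of the crossover argument (all functions and numbers are
# assumptions for illustration only): benefits of cheaper information
# aggregation flatten out, while information-tournament costs keep accelerating.

def aggregation_benefit(x):
    return 10 * math.log(1 + x)      # assumed concave benefit

def tournament_cost(x):
    return 0.05 * x ** 2             # assumed convex cost

for x in range(0, 31, 5):            # x = amount of cheap information, arbitrary units
    net = aggregation_benefit(x) - tournament_cost(x)
    print(f"info level {x:2d}: benefit {aggregation_benefit(x):5.1f}  "
          f"cost {tournament_cost(x):5.1f}  net {net:6.1f}")

# Net value rises at first (the printing-press / radio era in the analogy) and
# then turns negative past a crossover point: the same shape as diminishing
# returns to capital meeting depreciation in the Solow growth model.
```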

54:36

So you have to think non-linearly and have some

54:38

crossover point in order to make this argument. I'm

54:40

wondering if you could give your perspective. I'll pull

54:42

in a metaphor from the AI safety people of

54:45

just like, what do you think your like P

54:47

doom is when it comes to like

54:49

totalitarian structure being the most fit as

54:51

a result of the curves that Noah

54:53

is illustrating. You talk about in your

54:55

Warpcast, just like some reasons about

54:57

why this argument could be wrong. Maybe

55:00

you can like, give us some assurances

55:02

about why this is a thought experiment

55:04

and not reality. Well, I mean, I

55:06

think one thing that we

55:08

have to kind of nail down first is

55:10

the difference between being the best fit and

55:12

winning, right? Because one

55:15

property that a lot

55:17

of like systems that are organized

55:20

in a very centralized way have

55:22

in practice is that like economies

55:25

of scale and extraction are higher

55:27

than economies of scale in actual

55:29

production. And so even

55:32

if they're kind of less fit in

55:34

some utilitarian sense of

55:36

improving human flourishing, like, it

55:38

really could easily still end

55:40

up like extracting more and

55:42

succeeding more in zero sum

55:44

conflicts for various reasons. So

55:46

that's like one big caveat

55:49

that's really important to make,

55:51

right? But yeah, I mean, in terms of like,

55:53

P totalitarianism or

55:55

P whatever, I think,

55:58

one of the challenges of giving this number is that

56:00

it's so hard to define what all of

56:02

these terms are even going to mean 50

56:05

years from now. Because we're

56:08

talking about transitions through some

56:11

pretty massive technological changes, whatever

56:13

AI is going to do, whatever

56:16

that's going to do with the

56:18

global economy, whatever advances in biotech

56:20

are going to do, whatever advances

56:23

in other kinds of digital technology

56:25

are going to have. The

56:28

concept of something like

56:30

private property, in some

56:33

sense, becomes less and less meaningful with

56:35

every passing year as things become more

56:37

and more digital. And

56:39

as we're just basically turning

56:41

into everything being network effects.

56:44

It's hard for me to

56:46

give percentages because

56:49

there's definitely a case to

56:51

be made that if you took someone from 150

56:54

years ago and you woke them up today

56:56

and you showed them how any modern society works,

56:58

then they would say, obviously, totalitarianism has won.

57:02

What do you mean? You can't legally hire someone

57:04

without filling in a whole bunch of forms and

57:06

giving them 40. It's the

57:08

most totalitarian thing out there. It's

57:11

difficult to make a numerical comparison

57:13

for even just some of the

57:15

reasons. Maybe

57:18

just to start going into the

57:21

counter arguments a bit. Yeah,

57:25

I think basically if

57:28

we go back to the

57:30

info Hobbesian thesis, which basically

57:32

says that the info war

57:34

of all against all, one

57:36

is a war, meaning it's

57:38

a negative sum game rather than zero or

57:40

a positive sum. Two,

57:45

it does not have stable equilibria that

57:47

are not physical borders. And

57:50

three, it does have stable equilibria

57:52

that are physical borders. I

57:54

think those are the three claims. I

57:57

think you can ultimately attack each one of them

57:59

in the future. And, you know, we

58:01

can start from the end, right? Like,

58:04

does info Hobbesianism even have equilibria that

58:06

are physical borders, right? Well, go on

58:08

Twitter. And one of the

58:10

first things that you see is, you

58:13

know, you'd still see people from,

58:15

or, like, bots of different

58:18

countries that are very hostile toward

58:20

each other, that are still

58:22

fighting the meme war against each other in all

58:24

kinds of different contexts, right? Even

58:27

if they have a very strong

58:29

censorship internally, like, you know, the meme war

58:31

continues, right? Like, that's one question, right?

58:33

And then, you know, we kind of

58:36

joke about how, like, oh, you know,

58:38

the only memes that you're allowed to

58:40

spread are, you know, the local government's,

58:43

like, approved thought or whatever. But I

58:45

feel like actually go to like any one

58:48

of these countries, and possibly

58:50

with the exception of North Korea, though,

58:52

like, probably even there, like, it's not

58:54

actually like that at all, right? And

58:56

like, it's actually, you know, the meme

58:58

ecosystem is still actually quite

59:01

porous in practice. So that's like the

59:03

first question, right? Like,

59:05

basically, in a

59:07

digital environment, then, look, is

59:09

the natural equilibrium even like

59:11

national borders, or is

59:14

it the case that like, oh, well, actually,

59:16

the only equilibrium that makes sense from

59:19

a Hobbesian perspective is one where

59:21

there's basically a single elite that dominates the entire world. Right?

59:24

That, in some ways, is like an

59:26

even scarier thought, right? Because we're not

59:28

even talking about hegemony within

59:31

one country. We're talking about like a

59:33

single hegemonic actor that just takes over

59:35

the entire world. And at that point,

59:37

even if they're completely unfit, there's like

59:39

basically, yeah, no pressure that can effectively

59:41

unseat them, right? Like, world government is

59:43

deeply scary in a way that national

59:46

government is not. Then the

59:48

question is like, well, if that's

59:50

actually the case, then like maybe,

59:52

yeah, there's enough people around

59:54

the world to just find that kind of

59:56

scenario as horrible as I do that they'll

59:58

actually start fighting against it. So

1:00:01

that's the first thing that I think

1:00:03

is worth being skeptical about. Are there

1:00:06

actually equilibria that don't involve or that

1:00:08

do involve borders? And then the second

1:00:10

one is, are there

1:00:12

equilibria other than war that don't

1:00:14

involve at least physical borders? And

1:00:18

I think here what's interesting

1:00:20

is this is where a lot of

1:00:22

the differences really start to shine. And

1:00:25

this is, I think, something that's

1:00:27

very second nature to the crypto

1:00:29

space. Which is basically that the

1:00:31

possibilities for defense exist in the

1:00:34

digital world that just do not

1:00:36

exist in the physical world. Theoretically

1:00:39

in the physical world, people can

1:00:41

wear body armor. But in practice,

1:00:43

body armor is super inconvenient. It's

1:00:45

sweaty. It's incredibly annoying in a

1:00:48

whole bunch of ways. It's unfashionable.

1:00:50

I'll say that. Yeah, it's not

1:00:52

cute. Yeah. I mean, you can

1:00:54

stick body armor into suits if

1:00:56

you want. But these days, even

1:00:58

suits are becoming more and more

1:01:00

lame. But then in the digital

1:01:03

world, if you think about your Internet

1:01:05

experience using HTTPS in

1:01:07

2024 versus your Internet

1:01:09

experience using HTTP in

1:01:12

2009, do

1:01:14

they feel different? Obviously, the

1:01:16

applications feel different. But does the difference

1:01:18

between HTTP and HTTPS feel like anything?

1:01:20

And I think the answer is

1:01:22

currently no. Now, that's a

1:01:25

bit, I think, somewhat

1:01:27

of an artificially unreasonable example,

1:01:29

right? This gets into the

1:01:31

divide between what I

1:01:33

call cyber defense and what I

1:01:36

call info defense in my big,

1:01:38

long, techno optimist manifesto from last

1:01:40

year. Right. And basically, it's

1:01:42

like, cyber defense is the

1:01:44

type of defense where every reasonable person can

1:01:47

agree who the bad guy is. And

1:01:49

info defense is when there is room

1:01:51

for an interpretation. But even in

1:01:54

the case of info defense, right, you know, you

1:01:56

can ask things like, what even

1:01:58

are the physical world equivalents? What

1:02:02

are the physical world equivalents of

1:02:04

prediction markets? Or to even go

1:02:06

low tech, what even are the

1:02:08

physical world equivalents of different

1:02:11

groups of people being able to be on different social

1:02:13

medias and be on group chats, right? One

1:02:18

of the kind of lines of skepticism

1:02:20

that I think you can really legitimately

1:02:22

raise against inferring the extent to which

1:02:24

the info-Hobbesian war of all

1:02:26

against all holds from Twitter, is basically that

1:02:28

Twitter is the worst of it that

1:02:31

you see, and it's the worst

1:02:33

of it precisely because you can see it. If

1:02:36

you think about private group chats, for

1:02:38

example, private group chats consistently maintain higher

1:02:40

levels of quality and high levels of

1:02:42

productive discourse. Smaller social media platforms, whether

1:02:44

it's Farcaster or whatever else, they maintain

1:02:47

higher levels of discourse. Actually,

1:02:55

Noah even wrote an article about

1:02:57

how the internet wants to be

1:02:59

fragmented. The

1:03:01

counter argument is, what if this

1:03:04

vision of the global water cooler that

1:03:06

we all got addicted to in the

1:03:08

2010s just happens to be the worst

1:03:10

possible version of all of this from

1:03:13

an info-warfare-being-negative-sum perspective.

1:03:16

And now we actually are learning and we

1:03:19

actually are adapting and the internet is already

1:03:21

in a whole bunch of smaller

1:03:23

and larger ways reconfiguring itself to

1:03:26

start to become

1:03:28

less warlike in ways

1:03:30

that have no equivalent, again, in

1:03:32

the physical world because the

1:03:35

mechanics of constructing digital walls and constructing physical

1:03:37

walls are just so fundamentally different. So I'll

1:03:39

stop here. Noah, what do you think of

1:03:41

those arguments against the thesis here that Vitalik

1:03:43

just made? I like

1:03:45

the observation about the internet becoming more

1:03:48

fragmented. That will reduce information

1:03:50

tournament costs, as in we

1:03:52

don't spend all our day arguing on Twitter. We

1:03:54

spend our day talking to people in Discord who

1:03:56

have useful information for us and who... are

1:04:00

not just some angry jerk blowing off

1:04:02

steam by talking bullshit, and then everyone

1:04:04

else has to jump in and refute

1:04:07

that bullshit. And then

1:04:09

we just talk to interesting people with interesting ideas, and maybe

1:04:11

we lose a tiny bit of that long tail of

1:04:13

information because maybe there's some totally random person

1:04:15

who'll pop up and tell you a little

1:04:18

bit of extra information you didn't

1:04:20

know that wouldn't have been invited to your Discord. But

1:04:23

now, at least we've reduced the information

1:04:25

tournament. So I think that's that

1:04:27

point I really like, the idea

1:04:29

that perhaps there's some natural

1:04:31

self-equilibration mechanisms in the marketplace of

1:04:34

ideas. I like that idea. The

1:04:36

analogy to Hobbesianism, I think we

1:04:39

shouldn't lean too hard on that analogy

1:04:41

to show why this idea is wrong.

1:04:44

I think, yes, the idea of

1:04:46

information tournaments has some superficial similarity

1:04:48

and some real similarity to

1:04:50

Hobbesianism, but it's not a complete analogy. And

1:04:52

I think saying, okay, here's

1:04:54

why, if we make everything seem exactly

1:04:56

like Hobbes's Leviathan, if we

1:04:58

just make a one-to-one analogy between these

1:05:00

concepts, and then we show how the

1:05:02

classic sort of assumptions of Hobbes don't

1:05:04

hold, well, then we've disproven the information

1:05:06

tournament thesis. I think we shouldn't do

1:05:08

that. We shouldn't lean too

1:05:10

hard on that analogy. And so, for

1:05:13

example, one idea is

1:05:15

that the analogy of information competition

1:05:17

to violence. If

1:05:19

two people meet in a town square and

1:05:21

have a physical fight, right? One

1:05:23

knocks the other, like if Ryan and David

1:05:25

meet in the town square with clubs and

1:05:27

go at it, and then one

1:05:30

of them will bonk the other with a club, and

1:05:32

then he falls over unconscious, and then you're done, right?

1:05:35

But then in information, quote unquote warfare,

1:05:37

yes, there's a competitive thing, but it's

1:05:39

like, imagine if both people had infinitely

1:05:41

powerful suits of armor and were just

1:05:43

wailing on each other, like in some

1:05:46

Marvel movie. So I can yell correct

1:05:48

information, someone else can yell disinformation, or

1:05:50

we can yell two competing forms of

1:05:52

disinformation, right? I can tell you, so

1:05:55

suppose that you tell me that the job

1:05:57

market is terrible under Biden, was amazing under

1:05:59

Trump. And I tell you that the job market

1:06:01

was terrible under Trump and is amazing under Biden. Well,

1:06:03

those are both wrong because it has been great under

1:06:06

both. Like the job market

1:06:08

has been great throughout both of those president's terms.

1:06:10

And so that's the truth. But suppose we have

1:06:13

competing disinformation, right? And we just wail on each

1:06:15

other and wail on each other and wail on

1:06:17

each other. Physical violence, you

1:06:19

know, naturally ends in one person winning and

1:06:21

the other person getting clubbed to death. I

1:06:23

mean, yes, I know World War One lasted

1:06:26

a long time. It's not always quick victory.

1:06:28

But then information warfare can go on forever.

1:06:30

You know, that's just one difference between real

1:06:32

violence and, you know,

1:06:35

disinformation war. And so

1:06:37

the cost of disinformation war, there may

1:06:39

be no natural resolution. So the thing

1:06:41

about Hobbes, the idea is that everyone's

1:06:44

always running around trying to fight everyone.

1:06:46

And so you need this Leviathan, right?

1:06:49

That's similar to the idea that maybe

1:06:52

having information, you know, filtered through some

1:06:54

monopolist at the New York Times and

1:06:56

CBS News in like 1960, maybe

1:06:59

that was actually good. That's the analogy there.

1:07:01

But there's other dimensions in which the analogy

1:07:03

breaks down. For example, borders, right? Do we

1:07:05

really need to have an information

1:07:07

hegemon? Do we really

1:07:09

need information to stop and start

1:07:12

at national physical borders? No,

1:07:14

we don't. So yes,

1:07:16

we have Twitter and you go on Twitter

1:07:18

and you see people from all countries, right?

1:07:20

But the Chinese government and the Russian government

1:07:22

have a lot of resources devoted to pushing

1:07:24

their message out there. And the US government

1:07:26

doesn't, right? Liberalism doesn't. US

1:07:29

government sits back and says, you know, from

1:07:31

an Olympian remove and is like, I am

1:07:33

the overall mighty hegemon of information. And so

1:07:36

I'm going to let all these tiny little

1:07:38

actors play it out, you know. And then

1:07:40

one of those tiny little actors is the government

1:07:43

of China, a country four times the size of

1:07:45

the United States with arguably a higher GDP than

1:07:47

the United States. And so that's

1:07:49

one of the tiny little atomistic actors. And so

1:07:51

the rest of us are these tiny little guys

1:07:53

having to run up against that behemoth, having to

1:07:55

run up against Russia, who has less resources, but

1:07:57

still a lot more than your average American, and,

1:08:00

you know, it has a little bit more practice

1:08:02

pushing out bullshit to Americans. And

1:08:04

so I'm having to fight those guys

1:08:06

every day on Twitter and

1:08:08

in the information space and they're much better

1:08:10

resourced than I am. And there's no physical

1:08:13

border there, right? There's a physical border for

1:08:15

where those armies can go, but there's not

1:08:17

a physical border for where their information things

1:08:19

go. And so you can have cross

1:08:22

border information warfare in a way that it's

1:08:24

very difficult to have cross border physical warfare.

1:08:27

And that's a structural difference between the two things

1:08:29

that makes the analogy break down a little bit,

1:08:31

I think. And so the idea that, okay, well,

1:08:33

information is borderless and therefore you're not going to

1:08:35

get a Hobbesian situation. Well,

1:08:38

but information warfare is borderless in

1:08:40

a way that violent physical warfare

1:08:42

is not. And so I

1:08:44

think that that means that the analogy can't, you know,

1:08:47

we can't just lean too heavily on that analogy and

1:08:49

say, well, since information doesn't have borders, you're not going

1:08:51

to get a Hobbes situation. Well, yeah, but Russia and

1:08:53

China are going to continue to just

1:08:55

use their resources to push out their

1:08:57

messages to everywhere. And if you look

1:09:00

at, I think Anne Applebaum

1:09:02

had a great article in The Atlantic recently that shows that

1:09:04

the combination of Russian message crafting

1:09:06

and Chinese money around the

1:09:09

world is actually proving a fairly effective

1:09:11

propaganda tool. So for example, this idea

1:09:13

that there were all these secret bio

1:09:15

labs in Ukraine, you know, that's made

1:09:17

up by the Russians, secret American bio

1:09:19

labs in Ukraine. And that's why Russia

1:09:21

had to like go invade Ukraine. It's

1:09:23

made up by the Russians, but it is

1:09:25

being spread in, you know, poor

1:09:27

countries and developing countries and in developed countries,

1:09:29

but people in developing countries are buying it

1:09:32

a bit more because they don't have that

1:09:34

as robust a counter information ecosystem. You

1:09:36

know, it is being spread by Chinese networks and

1:09:39

China really picked this up and ran with it

1:09:41

and distributed this and China's commercial connections to the

1:09:43

world allowed it to do this. Ryan and David,

1:09:45

have you ever heard the Ukraine bio labs idea?

1:09:48

I have not. No. Okay. But

1:09:50

thank you for that one. Yes, I have. Right.

1:09:53

And it's borderless, but these well-resourced states

1:09:55

that have borders for the collection of

1:09:57

taxes and, you know, and borders

1:09:59

for the enforcement of crime and borders for where

1:10:02

their army goes, do not have borders for where

1:10:04

their information goes. And yet they in some sense,

1:10:06

raise the information cost to the rest of us

1:10:09

because I'm sitting there battling whole governments

1:10:11

on Twitter. Yeah, okay. I mean,

1:10:13

I think I get the feeling that, Noah's

1:10:16

somewhat disagreeing with me, but actually agreeing with quite

1:10:19

a lot of, I mean, like what I said,

1:10:21

right? Which is that like, one

1:10:23

is that it's like a major

1:10:26

difference between physical warfare and internet

1:10:29

warfare is definitely the internet

1:10:31

war is significantly,

1:10:34

like much less of like actually

1:10:36

war like, and I mean, like,

1:10:38

especially once you get off Twitter

1:10:41

and like that's a place where the analogy fails.

1:10:43

But I think on the first one of like the

1:10:46

issue of borders, that's one

1:10:49

thing that's kind of valuable to disentangle there,

1:10:51

I think is like the

1:10:53

idea of like info hegemony as this

1:10:55

abstract concept, right? Which can exist at

1:10:58

any level of a stack, right? Like

1:11:00

you can have info hegemony in a

1:11:02

country, you can have info hegemony inside

1:11:04

one of Elon Musk's companies, you

1:11:07

can have info hegemony in your family

1:11:09

or in your cult or across the

1:11:11

entire world. And then there is kind

1:11:14

of this very,

1:11:17

like much more specific ideology

1:11:19

that like really emphasizes the

1:11:21

idea of national sovereignty and

1:11:24

basically treats info hegemony as

1:11:26

being one part of national

1:11:29

sovereignty alongside like physical military

1:11:32

hegemony within a local area, right? I

1:11:35

think these kind

1:11:37

of disanalogies between, or

1:11:40

like the breakdown in the analogy

1:11:42

between like physical and info Hobbesianism,

1:11:44

like it basically, yeah, to me,

1:11:46

it definitely suggests that specifically

1:11:49

the nation state bound version

1:11:51

of all of this is

1:11:54

one that's less likely to be a

1:11:56

long-term stable equilibrium, which could mean something

1:11:58

good or it could mean something bad.

1:12:01

The good thing that it means could

1:12:03

be that we find some kind of

1:12:05

better approach and some kind of better

1:12:07

equilibrium that doesn't involve,

1:12:10

like basically unlimited info

1:12:13

hegemony that any particular person

1:12:15

is subjected to. Or

1:12:17

it could also mean something worse.

1:12:19

And the something worse is basically

1:12:21

info hegemony at an actually worldwide

1:12:23

level. And so the question to

1:12:25

basically think about is, if you

1:12:27

have all of the different

1:12:31

actors, and we can think of

1:12:33

them as being nations, or in

1:12:35

some cases, there are memeplexes

1:12:37

that have partial overlap with nations.

1:12:39

Sometimes there are memeplexes that

1:12:41

overlap collections of nations or even

1:12:44

parts of nations. If

1:12:46

you are one of these memeplexes,

1:12:48

what do you do?

1:12:50

The ultimate safety is basically banishing all

1:12:52

competing memeplexes from the world

1:12:54

forever. Because if they're not banished from the whole world, then

1:12:57

wherever they're not banished from, they can come back.

1:12:59

And so this is the

1:13:02

bigger risk, which is basically going

1:13:04

out even into the long, long term, into

1:13:08

a time when even the

1:13:11

words like democracy and

1:13:13

totalitarianism and United States

1:13:15

and like Russia and

1:13:18

so on are long forgotten. What is

1:13:21

the long term equilibrium? And

1:13:23

is that a global

1:13:25

info hegemon? And is that

1:13:28

a situation that eventually we

1:13:30

fall into and once we fall into, it's super hard to get

1:13:32

out of? Yeah, I think

1:13:35

basically, the porousness of borders

1:13:37

and the way in which digital borders

1:13:39

are much weaker than physical borders, it's

1:13:42

both a blessing and a curse

1:13:44

in that exact way. It's like

1:13:46

the blessing is basically that it

1:13:48

enables other possibilities, and the curse is

1:13:50

that there's

1:13:53

definitely something worse than nation

1:13:55

scale authoritarianism that might be

1:13:58

lurking at the end of it, eventually.

1:14:00

Yes. So in other

1:14:02

words, right now we're seeing

1:14:04

so-called sharp power by China reach

1:14:07

into many areas of the

1:14:09

rest of the world. We're seeing companies

1:14:12

censor what they say in their own

1:14:14

home markets because China threatened to

1:14:16

cut off access to the Chinese market. And

1:14:19

various actors see China's conditional

1:14:21

opening saying, well, we'll dangle the promise of this giant

1:14:24

Chinese market, which usually does not materialize, but let's

1:14:26

say we can dangle it and once in a

1:14:28

while it will, for anyone who's just willing to

1:14:30

go push our message back in your own countries,

1:14:33

blah, blah, that's called sharp power. And

1:14:35

so we're seeing the effects of that across borders.

1:14:37

The question is, so what's the scenario? I guess

1:14:40

my question, again, I don't really strongly believe in

1:14:42

this thesis, right? It's just an idea I had.

1:14:44

I'm not going to be like, yes, liberal democracy

1:14:46

is going to fail. I'm not that guy, right?

1:14:49

I think that there's a good chance that

1:14:51

everything I'm saying here is overblown and that

1:14:53

this is not even a good way of,

1:14:55

and that liberal democracy has other advantages. In

1:14:58

addition, like a feeling of inclusiveness and

1:15:01

public goods provision, those are other arguments

1:15:03

for why liberal democracy is good. Maybe

1:15:06

those are more important and I just haven't even

1:15:08

considered them. Maybe the information tournament problem isn't actually

1:15:10

that big. And what looks like all these people

1:15:12

wasting these resources is actually just people having fun.

1:15:14

Well, actually, we're just as good at getting information

1:15:16

as we ever were, which never was really great,

1:15:19

but we're just watching people have fun shouting at each

1:15:21

other for consumption purposes. And that's how we're using

1:15:23

our leisure time. And it looks like we're, in

1:15:25

actuality, you go back to like 1950 and

1:15:28

the average person was worse informed than now. And

1:15:30

that the actual amount of time and effort we

1:15:32

spent was maybe about the same because now we're

1:15:34

just consuming, we're just having fun. So these are

1:15:36

counter arguments that I can make to this, right?

1:15:38

I can make counter arguments to everything I just

1:15:40

said, but I think it's

1:15:43

worth asking if this

1:15:45

thesis, if the scary theory is

1:15:47

wrong and if the new totalitarianism

1:15:49

is going to lose, you know,

1:15:51

if Xi Jinping is going to

1:15:53

lose, what's the scenario by

1:15:55

which it loses? And so I think economically

1:15:57

the scenario by which it loses is that.

1:16:00

Well, China is still not nearly as rich

1:16:02

as the West. And as this Chinese state

1:16:04

gets more controlling of the Chinese economy, it's

1:16:06

going to get worse and worse. It's already

1:16:09

slowing down much faster than

1:16:11

other developed countries did back in their

1:16:13

day. It's slowing down early and China's just, you know,

1:16:16

yeah, they can produce a bunch of EVs, but they're

1:16:18

all going to rust in the parking lot. And,

1:16:21

you know, unless they're massively subsidized and in the

1:16:23

end, all we have to do, you know, is

1:16:26

wait and maybe they'll be the Soviet Union, too.

1:16:28

And so that's economically, I think that's the

1:16:30

argument politically. The argument is eventually

1:16:32

when the economy slows down and when people

1:16:34

realize that Xi Jinping isn't that competent and

1:16:36

he's like been around for like 20 years

1:16:39

and people get really restless, you're going to

1:16:41

see the same pressures you saw in Korea

1:16:43

and Taiwan, you know, for democratization, blah, blah,

1:16:45

blah. So those are the

1:16:47

counter argument. The counter arguments are just like, just

1:16:49

wait, bro. Like people said that totalitarianism was going

1:16:51

to win in the 30s. They

1:16:53

said it was going to win in the 70s. They

1:16:55

were full of shit both those times and they're full

1:16:57

of shit now. Just wait, you know, wait and like

1:16:59

do the normal thing. But

1:17:02

in terms of the marketplace of information

1:17:04

with the massive messaging apparatus

1:17:06

of China and to some degree Russia,

1:17:09

which are now coordinating hegemonizing

1:17:11

the American Internet with bullshit

1:17:14

and the European Internet with bullshit and the

1:17:16

Latin American internet and the Middle Eastern internet with

1:17:19

bullshit, you know, how do we beat

1:17:21

that? Like, what's the scenario? So this is

1:17:23

my question to Vitalik. What's the scenario

1:17:25

where that loses and how does it lose?

1:17:28

So I think one of the

1:17:30

sets of arguments that we haven't talked

1:17:33

about at all is just the possibly

1:17:36

very large benefits of

1:17:38

info pluralism in a

1:17:40

context where when people are sharing info, they're

1:17:42

doing something other than fighting each other. Right.

1:17:45

I think one of

1:17:47

the best things that you can

1:17:50

have in an ecosystem

1:17:52

is not just having like

1:17:54

one group that has a

1:17:56

tight internal consensus where they

1:17:58

all agree with each other on

1:18:00

everything and then they just kind

1:18:02

of veer off

1:18:04

in their own direction and assume that

1:18:07

each other are right, have perfect confidence

1:18:09

in each other. And instead

1:18:11

actually have an ecosystem where

1:18:13

you have different groups that actually

1:18:16

have some form of competition with

1:18:18

each other. This

1:18:20

is something that the world

1:18:22

as a whole definitely

1:18:24

has as a unit. This is

1:18:26

something that the US as a

1:18:29

country definitely has quite

1:18:31

a bit of that as

1:18:33

a unit. You can identify

1:18:35

sub bubbles in a bunch

1:18:38

of ways. You have the reds and the

1:18:40

blues, you have the East Coast

1:18:43

intellectual cluster, then you have Silicon Valley,

1:18:45

you have people who care about different

1:18:48

branches of tech. And there's

1:18:51

lots of these subcultures that

1:18:53

actually have the ability to

1:18:55

actually take their ideas and

1:18:57

actually push them forward to

1:18:59

their conclusions and start executing

1:19:01

on them. This is something

1:19:03

that I

1:19:06

think the better crypto ecosystems

1:19:08

tend to actually have. And

1:19:11

the ones where it actually

1:19:13

literally is just a cult

1:19:15

of personality around one

1:19:18

guy calling the shots are

1:19:20

the ones that pretty quickly

1:19:23

tend to fail. Sorry, Craig.

1:19:26

So the benefits of

1:19:29

actually having this pluralistic environment

1:19:31

where you actually have different

1:19:33

groups and where each of

1:19:35

those different groups actually has

1:19:37

enough breathing room that it

1:19:39

can actually take a couple

1:19:42

of steps and actually start

1:19:44

pushing its ideas forward

1:19:46

to some kind of conclusion. That feels

1:19:48

like something that it's very

1:19:50

plausible to believe that it's something that has

1:19:53

massive benefits and you just can't really come

1:19:55

up with good ideas without it. I

1:19:58

think what's interesting about that is that

1:20:00

the kind of world where there's

1:20:03

one leading dictator

1:20:06

and you can't get anywhere if you

1:20:08

disagree is obviously the exact opposite of

1:20:10

that. But then the

1:20:12

world water cooler is also the exact

1:20:15

opposite of that. So both the dictator

1:20:17

and the world water cooler are actually

1:20:19

not info pluralism. And so actually

1:20:22

figuring out how do we

1:20:24

optimize for info pluralism, which basically

1:20:27

means both pluralism existing and

1:20:29

at the same time the

1:20:32

interaction between the different groups

1:20:34

actually being not just

1:20:36

competitive but also you don't

1:20:38

want, like, full kumbaya; you want some

1:20:40

kind of healthy mixture of competitive and

1:20:42

cooperative. So the answer of how do

1:20:45

we beat the info Leviathan of China

1:20:47

and Russia, the answer is we hide

1:20:49

from it. That it is very powerful

1:20:51

in the town square and rules the

1:20:53

town square. So we go to our

1:20:55

houses and talk in whispers to each

1:20:57

other where the monster cannot get us.

1:21:00

Well, there's this famous infographic or comic or

1:21:02

whatever you call it, that

1:21:04

was in a Slate Star Codex post. I

1:21:06

think it was actually the one about

1:21:09

niceness, community, and civilization, where

1:21:11

basically, you know, someone starts off making a

1:21:13

garden and then the garden expands and then

1:21:15

the garden expands. And then the fence kind

1:21:18

of eventually stretches out across the entire world.

1:21:20

And then at some point, you know, the

1:21:22

garden becomes the new reality. Yeah, so I

1:21:24

think I'm not advocating for

1:21:26

kind of like, you know, like

1:21:29

people constantly playing cat

1:21:31

and mouse games and expending a

1:21:33

lot of energy running. I'm basically

1:21:36

advocating for developing the

1:21:38

tools to make the

1:21:40

internet landscape one

1:21:42

that actually is friendly

1:21:45

to multiple groups

1:21:47

that are not hegemons, that

1:21:49

actually are able to, and

1:21:52

where the actual incentives are aligned for them to

1:21:55

interact in ways that are more productive. I mean, I

1:21:57

think, I don't know if there's a... yeah.

1:22:00

But you can never get 100% of the way there, right? And

1:22:04

there will always be people there to say

1:22:06

crazy things and fight zero-sum games against

1:22:08

other people who are saying the opposite

1:22:11

crazy thing. But

1:22:13

you can clearly go much further than zero,

1:22:15

right? The world water cooler is

1:22:17

probably the closest to zero that we can have.

1:22:19

And then the question is like, well, even

1:22:22

if you look at the early

1:22:24

internet architecture, right? You

1:22:26

had a lot of blogs, you had a lot of forums, and

1:22:29

it did not look like anyone running and

1:22:31

hiding from a single main thing. But

1:22:33

it did look like something that produced a lot of

1:22:35

productive output, right? That's a great

1:22:38

point. What we're really running and hiding

1:22:40

from is maybe the inefficiency of centralized

1:22:42

information marketplaces rather than the

1:22:44

fact that Russia and China are in there trying to

1:22:46

do their thing. Right. I

1:22:49

mean, ultimately, again, the info

1:22:51

ecosystem has a mechanism by

1:22:53

which it's vulnerable to the

1:22:57

internet. And it's

1:22:59

also going to be vulnerable to the Democratic Party, and it's

1:23:01

also going to be vulnerable to the Republican Party. So

1:23:04

to a lesser extent, because we do

1:23:06

have more avenues to push on those

1:23:08

actors to be nicer, but to some

1:23:10

extent still, right? And so I

1:23:13

think there's an extent to

1:23:15

which those two things are similar

1:23:17

problems. And

1:23:20

to me, the ideal outcome is not

1:23:22

having healthy info ecosystems in

1:23:24

a particular subset of

1:23:28

countries that we label liberal democracies, and

1:23:30

then they can prosper and everyone else

1:23:32

goes to dust. The ideal outcome is

1:23:34

the world becomes a healthy

1:23:37

information ecosystem, right? Where a

1:23:39

pretty wide diversity of different

1:23:41

actors can participate, which inevitably

1:23:43

means some that have

1:23:45

quite crappy intentions, but we

1:23:48

still figure out how to

1:23:50

muddle through and make that

1:23:52

work reasonably well. So here's my

1:23:54

question, and I have to go fairly

1:23:56

soon here, but you guys are

1:23:58

all fans of blockchain, of

1:24:00

course. And so maybe blockchain

1:24:03

can help here because maybe so Americans

1:24:05

don't necessarily need blockchain to talk to

1:24:07

each other. I know there's Farcaster

1:24:09

or whatever but maybe Chinese people do

1:24:11

or Russians maybe there are ways

1:24:14

for people to completely secretly talk to each other so

1:24:17

obviously you can use Signal to just do

1:24:19

encrypted but then someone can like maybe raid

1:24:22

the offices of Signal or, you know,

1:24:24

kill anyone who has Signal on their phone,

1:24:26

I don't know but maybe are there

1:24:28

ways technologically for blockchains to help Chinese

1:24:31

people talk to other Chinese people about what they

1:24:34

don't like about Xi Jinping and for Russian people

1:24:36

to talk to other Russian people about what they

1:24:38

don't like about Putin, etc., etc.? Have you seen

1:24:40

Freedom Tool? No. It's this interesting group,

1:24:42

it's this company beta I think

1:24:45

based out of Kiev and

1:24:47

they have like people from a

1:24:49

bunch of ex-Soviet countries. The

1:24:52

company is called Rarimo and they

1:24:54

built this thing called Freedom Tool.

1:24:56

It's done with what I believe

1:24:58

was a, like, Pussy Riot-connected

1:25:00

lawyer, Mark Feygin, and basically what

1:25:02

it does is that if you

1:25:04

have a Russian passport then you

1:25:06

can digitally prove using I mean

1:25:08

like zero knowledge proof technology that

1:25:11

you're a Russian citizen without revealing

1:25:13

which one you are and then

1:25:15

you can, like, go and basically participate

1:25:18

in an online vote and the results of

1:25:20

the online vote are, like, visible and they're guaranteed

1:25:22

to be tamper-proof right so

1:25:24

basically yeah it is an anonymous

1:25:26

voting system that basically lets us

1:25:28

you know have like shadow votes

1:25:30

among the you know shadow Russian

1:25:32

nation and like actually yeah at

1:25:34

least they can create consensus of

1:25:36

like, you know, what do

1:25:39

Russian people actually think. It's

1:25:41

very clearly that's kind of like a version

1:25:43

of the one the beta thing in a

1:25:46

lot of ways right but it's

1:25:48

interesting because to me

1:25:50

one of the big debates

1:25:52

that you sometimes get around the discussion

1:25:54

of like internet anonymity is basically like

1:25:56

the 1990s idea is like Freedom

1:26:00

comes from the fact that on the internet, nobody

1:26:02

knows you're a dog. But

1:26:04

then you have this debate of

1:26:06

either you're verified, but then if you're verified, people

1:26:08

know who you are, they can go after you.

1:26:11

Or you're anonymous, but if you're anonymous, then

1:26:13

no one has any need to trust you.

1:26:15

And, like, these days, it's pretty much impossible for

1:26:17

you to distinguish yourself from a bot. And

1:26:20

so the question is, can we actually

1:26:23

solve both of those two things at the same

1:26:25

time? And can we actually have thorough

1:26:28

privacy and at the same time

1:26:30

various forms of trustworthiness, whatever that

1:26:32

means in a particular context

1:26:34

and have those things together? And

1:26:37

it actually feels like this current

1:26:39

batch of zero

1:26:41

knowledge proof technology. And

1:26:44

there's a couple of blockchain

1:26:46

use cases in there

1:26:48

too, especially for making

1:26:50

these votes censorship resistant.
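A drastically simplified sketch of the general pattern Vitalik is pointing at: prove membership in an eligible set without revealing which member you are, and use a nullifier to stop double voting. Real systems such as Freedom Tool rely on zero-knowledge proofs and on-chain verification; everything below (the hash-based stand-in, the function names, the toy data) is an assumption for illustration only, not a secure implementation.

```python
import hashlib
import secrets

# Toy illustration only: NOT how Freedom Tool / Rarimo actually works. In a real
# system a zero-knowledge proof shows "my commitment is in the membership set"
# without revealing which commitment; here we check membership directly.

def h(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

# Registration: each eligible passport holder commits to a random secret.
citizen_secrets = {name: secrets.token_hex(16) for name in ("alice", "boris", "chen")}
membership_set = {h("commit", s) for s in citizen_secrets.values()}   # published set

seen_nullifiers: set[str] = set()
tally = {"yes": 0, "no": 0}

def cast_vote(secret: str, choice: str, election_id: str = "election-1") -> bool:
    if h("commit", secret) not in membership_set:
        return False                                  # not an eligible voter
    nullifier = h("nullify", secret, election_id)     # same secret, same nullifier
    if nullifier in seen_nullifiers:
        return False                                  # double vote rejected
    seen_nullifiers.add(nullifier)
    tally[choice] += 1                                # tally is publicly auditable
    return True

cast_vote(citizen_secrets["alice"], "yes")
cast_vote(citizen_secrets["alice"], "no")             # rejected: nullifier reused
print(tally)                                          # {'yes': 1, 'no': 0}
```

The point of the sketch is only the shape of the design: a public membership set, a per-election nullifier derived from a private secret, and a result anyone can audit.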

1:26:53

It feels like that batch of things

1:26:55

actually manages to solve both of those

1:26:58

problems. And so the frontier for creating

1:27:00

an info sphere that is at least

1:27:02

guarded against, shall we say both centralized

1:27:05

and decentralized cyber attacks

1:27:08

is actually better than ever in some

1:27:10

ways. So I think it's,

1:27:13

yeah, in this way, it's an exciting

1:27:15

time. We actually get to see

1:27:17

some of these things go live and see how they work. So

1:27:20

there's a partial answer for you there, Noah. We're

1:27:22

making some progress on that front. Guys, this has

1:27:24

been a fantastic conversation. I've really enjoyed kind

1:27:26

of the interplay between the two of you. I

1:27:29

guess as we sort of bookend this,

1:27:31

maybe I'll read a quote. So even

1:27:34

if your thesis is right, Noah, you

1:27:36

still think it's a good idea to

1:27:38

continue fighting for liberal democracy. You say

1:27:40

this, we should continue fighting for liberal

1:27:42

democracy and hope that technology and human

1:27:44

nature allow for its continued victory. Maybe

1:27:47

that's a place to end this episode and

1:27:49

we appreciate both your time. Thanks so much. He's

1:27:51

muted himself, but I'm sure he's saying thanks as well. I

1:27:54

think he is as well. Bankless nation. We

1:27:57

will include a link to Noah's article that

1:27:59

we discussed throughout the duration of today's

1:28:01

episode, called How Liberal Democracy Might

1:28:03

Lose. We of course hope it doesn't.

1:28:05

And we are betting that crypto is

1:28:07

part of the solution and

1:28:09

part of the reason why it does not lose.

1:28:12

Of course, got to remind you, even though we

1:28:14

didn't discuss crypto today, it is risky. None of

1:28:16

this has been financial advice. You

1:28:18

could lose what you put in, but we are headed

1:28:20

west. This is the frontier. It's not for everyone, but

1:28:22

we're glad you're with us on the bankless journey. Thanks

1:28:24

a lot.
