04 | If you control AI, you control the world

Released Tuesday, 14th November 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:02

This is an ABC podcast. I

0:08

remember the first time I sat down to

0:10

use OpenAI's ChatGPT.

0:14

I logged on to the chatbot's website and

0:17

was presented with a blinking cursor and

0:19

that now familiar prompt: "Send a message."

0:22

I had the knowledge of the internet at

0:25

my fingertips. I could ask it

0:27

anything and its powerful brain would

0:29

sort through all that knowledge and give

0:31

me an answer in a lovely

0:33

digestible form. So what

0:35

did I type? Write

0:37

a short rhyming poem from my

0:40

dog to me. And

0:42

then I added please, because I didn't want to sound

0:44

rude.

0:50

Dear James, With you I long depart

0:52

to that wondrous open park. Birds

0:55

to chase and grass to tread, thoughts

0:57

of sausages fill my head. So

0:59

please, dear friend, with a loving heart,

1:02

let's go and let my adventures start.

1:06

I think for an AI, that's pretty good.

1:09

I showed it to my dog, but I think

1:11

she prefers the more avant-garde stuff.

1:24

This is Hello AI Overlords,

1:26

a Science Friction series about how artificial

1:28

intelligence has burst into our lives

1:31

in just a few short years. I'm

1:33

James Purtill. At the start of the

1:35

year, it was ChatGPT and its

1:37

amazing ability to communicate that

1:40

was the thing that broke AI

1:42

into the mainstream. But like everything

1:45

in AI, it didn't come from nothing. Its

1:47

rise has given one company, OpenAI,

1:50

and one man, its CEO, Sam Altman,

1:53

a huge amount of

1:54

power and a lot of say over

1:56

where AI is heading. It's

1:58

created an arms race, as the world's

2:01

massive tech companies scramble to

2:03

control the most important technology

2:05

in the world. This episode, what

2:08

happens if one man controls the

2:10

future of intelligent machines?

2:20

If modern AI and ChatGPT

2:23

in particular had a face, it's

2:25

this guy. Sam Altman. What

2:27

was Sam Altman's pitch to you?

2:29

Sam Altman, everybody! Sam

2:32

Altman is the CEO of OpenAI,

2:35

the creator of ChatGPT, arguably

2:38

the world's most advanced AI tool.

2:40

And depressingly, Sam is young. He's

2:43

six months younger than me. I'm 39. The

2:46

whole world wants this technology. The whole

2:48

world needs the benefits of this. For me personally,

2:51

the thing I'm most excited about is using

2:53

this technology to

2:54

increase the scientific progress.

2:56

Like other tech CEOs before him, Sam

2:58

Altman's rise to power has followed a similar

3:00

pattern. He built a cool app, dropped

3:03

out of Stanford. By 30, he was

3:06

leading Y Combinator, one of the world's best

3:08

startup investment companies. And for

3:10

a brief period, eight days to be exact,

3:13

he was the CEO of Reddit. And

3:16

by the way, he's a prepper. He

3:19

said he has guns, gold, antibiotics

3:21

and gas masks stored on a rural

3:23

property somewhere. Just in case. But

3:26

at the moment, he's optimistic about

3:29

the future. I am a huge believer that the

3:31

only sustainable way that our

3:33

lives all get better is scientific and technological

3:36

progress. And as young tech CEOs do,

3:39

Altman made a lot of nice sounding statements

3:41

about his mission and how he's going to change

3:43

the world for good. We started this company

3:45

because we thought AI could destroy the whole world and we wanted to figure

3:48

out how to prevent that. For those that know

3:50

him, like Rewon Child, who used to work

3:52

at OpenAI, Sam Altman is

3:54

a straight shooter. He'd often have

3:56

kind of ask-me-anythings, like we'd have CEO ask-me-anythings.

4:00

People would ask him hard questions,

4:02

he'd always answer them directly, usually pretty

4:04

succinctly, and when he didn't know stuff he

4:06

would just say, I'm not sure. I

4:08

still do feel a lot of admiration and respect for him. I think

4:10

he's a very plain spoken but

4:13

really focused on the important

4:15

elements kind of guy. But the interesting

4:18

thing about Sam Altman is he has this

4:20

crazy ambitious dream. He

4:22

wants to create an artificial super

4:25

intelligence, a machine smarter than

4:27

any human. You can't just

4:30

conjure that up from nothing. You need

4:32

to take a series of steps and make a bunch

4:34

of technological advances to get

4:36

there. And that's where ChatGPT

4:39

comes into it. Because up until 2017

4:42

computers were terrible at reading

4:44

and writing. Sure they could be taught to

4:46

recognize certain words, but

4:49

they struggled with sentences and paragraphs.

4:52

In the computing world, reading and writing is

4:55

known as natural language processing. It's

4:57

an incredibly complex trick and

5:00

computers were getting straight fails. It

5:02

was just garbage, you know, like

5:05

every time somebody tried to do anything

5:07

with computers and natural language, it was an embarrassment.

5:10

Jeremy Howard is a world leading machine

5:13

learning expert and yep, that's

5:15

an Australian accent. And

5:18

in 2017, Jeremy was living in

5:20

tech central San Francisco

5:23

and he believed teaching AI to read and

5:25

write was the next big step. The

5:27

vast body of knowledge of

5:30

what humans have written down and the huge ability

5:32

to communicate with humans through text

5:35

was outside of the purview of computers.

5:38

It was the biggest thing holding

5:41

back in my opinion, computers

5:43

from being, you know, as

5:45

useful a tool as they might be. Jeremy

5:48

had an idea. So he downloaded

5:51

all of Wikipedia. That's

5:53

three billion words. This idea

5:56

that if you train a big enough language model

5:59

for a long enough time on enough

6:02

general text, you know Wikipedia

6:04

covers a lot of different territory, you

6:07

end up with something

6:09

with a huge amount of kind of latent capabilities.

6:12

Jeremy hoped that his AI could

6:14

do more than just be an encyclopedia.

6:17

It would hopefully understand the relationship

6:20

between words. So if you fed

6:22

it the sentence "the day after Wednesday

6:25

is", it would know the answer was not

6:27

potato or the Battle of Waterloo,

6:29

but Thursday. And if it understood

6:32

enough of these relationships maybe

6:34

that would add up to intelligence. So

6:37

Jeremy used his fancy new AI to

6:39

work out if a movie was hot or

6:41

not. So I actually picked the hardest

6:44

and most well studied task which is

6:47

to read an entire multi-thousand

6:49

word movie review and

6:51

say whether it's a positive or

6:53

a negative sentiment. The experiment worked.

6:57

The model was better at working out if

6:59

a reviewer liked a movie than any AI ever.
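The next-word idea Jeremy describes can be sketched with a toy counting model. This is not the neural network he actually trained; it is a minimal illustration, with a made-up three-sentence corpus, of what "predict the next word" means:

```python
from collections import Counter, defaultdict

# A minimal, illustrative next-word predictor (NOT the neural model the
# episode describes): count which word follows each two-word context in
# a tiny made-up corpus, then predict the most common follower.
corpus = (
    "the day after monday is tuesday . "
    "the day after tuesday is wednesday . "
    "the day after wednesday is thursday ."
).split()

follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1  # context (a, b) was followed by c

def predict_next(w1, w2):
    """Most frequent word seen after the context (w1, w2)."""
    return follows[(w1, w2)].most_common(1)[0][0]

print(predict_next("wednesday", "is"))  # -> thursday
```

A real language model replaces the word counts with a neural network trained on billions of words, so it can generalise to contexts it has never seen, but the task it learns from is the same one sketched here.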

7:02

For sure I had goosebumps. It was definitely

7:05

beating the best ever result

7:08

and so I just set to work trying to figure

7:11

out is this real because if it is real

7:13

this is the thing I've been wondering about for 30 years. Jeremy

7:17

had figured out a way to teach AI to read

7:19

and he clicked publish on his research. What

7:22

happened next shocked him. OpenAI

7:25

picked up the idea and ran with

7:27

it. This is what it had been looking

7:29

for, a way to teach AI to

7:32

read and write. A

7:34

few months later OpenAI released its

7:36

own large language model and it

7:38

said it was partly based on Jeremy's

7:41

ideas. It was an early version

7:43

of the now world-famous ChatGPT.

7:46

It was basic but it worked. AI

7:49

could now digest human knowledge in

7:51

written form. Sam Altman had taken

7:53

another step towards his goal

7:56

of super intelligence but

7:58

the success would put him on a collision course

8:00

with his own principles,

8:03

because

8:10

Altman didn't want to just develop super

8:13

intelligent AI. He wanted to do

8:15

this safely. Now

8:17

Altman knew the history of Silicon Valley. Small

8:19

company invents technology, dominates

8:22

the market and becomes a tech

8:24

giant. Think Google with search

8:27

or Apple with smartphones. And Silicon

8:30

Valley historian Margaret O'Mara says

8:32

Altman had an idea about how

8:35

to avoid this happening to AI.

8:37

You

8:37

know he is very much committed to open source

8:40

stuff. He's an open source guy. That's

8:43

his philosophy.

8:44

And Altman was so dedicated to

8:47

this principle it's there in the company's

8:49

name: OpenAI. The

8:52

open means open source. Sharing

8:54

the source code of software so it can't be

8:57

controlled by one company. But

8:59

there's a problem with this.

9:01

Money. They have money.

9:04

They've

9:04

already acquired the top researchers;

9:06

many of the top researchers on this in the world

9:08

work for one of these companies. Now

9:11

they're all racing against one another.

9:13

If you're a growing AI company you're

9:15

going to need people and

9:17

you can't match the salaries of the big players

9:20

without a lot of money. So

9:23

Altman was in a bind and

9:25

he knew from his time growing startups at Y

9:27

Combinator you've got to think big

9:30

because you also need money for computers.

9:33

Training AI ain't done on your laptop. Like

9:35

basically imagine a huge

9:39

warehouse like maybe 10,000 square

9:42

foot warehouse which is

9:44

just filled with racks of machines. Rewon

9:46

Child actually went inside the data

9:48

centers that trained OpenAI's early

9:51

GPT models. Each of which cost

9:53

like

9:54

I don't know 20 or 40 thousand dollars.

9:56

And when you start running your model, just the sound

9:59

of all of them

9:59

turning on is like

10:02

you know a jet turbine or something.

10:05

Altman had reached the crossroads. Taking

10:08

the money would allow investment growth

10:10

and speed. Not taking the money

10:13

would mean giving up on his dream of

10:15

super intelligent AI. And

10:17

so in 2019 he announces

10:20

his decision. He's taking

10:22

the money. OpenAI signs

10:24

a one billion dollar deal with

10:26

Microsoft and its technology

10:29

will no longer be shared freely

10:31

with the world. Some employees

10:33

even quit in protest at OpenAI

10:36

becoming too commercial and

10:38

they start their own AI company, Anthropic.

10:41

But with Microsoft's money Altman

10:44

can pursue his ambition and

10:46

his company begins training an enormous

10:48

AI model much much larger

10:51

than anything that's been seen before.

10:53

In personal money terms it's absolutely ridiculous,

10:56

like, that you spend like millions

10:59

of dollars, more money than most people see in their lifetime,

11:02

on a language model where it's

11:04

like you're not even sure if it'll be useful for anything.

11:07

So Sam Altman has to compromise

11:09

on one of his big ideals. He

11:11

believes he has to do so to

11:13

invent the technology for a better

11:15

future. But a better future

11:18

for who? Yes it was a dream

11:21

job. This is Richard Mathenge, and these

11:23

days he's the head of the African Content

11:26

Moderators Union and he's been listed

11:28

among the top people in AI. But

11:31

back in 2021 he was just starting a new job

11:35

with an outfit called Sama. This

11:37

was a company OpenAI had contracted.

11:40

Sama is based in Richard's hometown of

11:42

Nairobi, Kenya. Richard's

11:44

job and that of dozens of other workers

11:47

was to sift through the output of

11:49

ChatGPT as it was being developed.

11:52

Because the model that's trained from all the text

11:54

from the internet ends up spitting

11:56

out a lot of terrible stuff. So

11:59

humans had to tell AI what was

12:01

acceptable and what was not. We

12:03

had two factions. One was dealing

12:06

with violent content and

12:08

the other one, which was myself

12:10

and my team, we were actually working

12:13

on sexual content. We were

12:15

training the chatbots how to

12:18

work with toxic pieces of

12:20

text that was very disturbing, very

12:23

grotesque, very traumatizing.

12:26

For 10 hours a day, five

12:28

days a week, he and his team worked

12:31

in a secluded room. Often

12:33

the material they had to read was

12:35

ChatGPT's written depictions of

12:38

child sex abuse and bestiality.

12:42

So ChatGPT would churn out content

12:45

and Richard would tick a box saying

12:47

if it was acceptable or not. But

12:50

as the months wore on, the flow

12:52

of disturbing content didn't slow

12:54

down and the team grew

12:56

more and more traumatized.

12:59

And some of them, by the way, as we speak

13:01

right now, it was so harsh to

13:04

the point that they even separated

13:06

or they even divorced with their partners

13:09

just because of the experience

13:11

that they went through in terms

13:14

of content moderation. And the kicker, they

13:16

got paid peanuts. It

13:19

was less than less than a dollar

13:21

an hour. Less than a dollar.

13:24

Less than a dollar an hour.

13:26

Now, OpenAI says the story is more

13:28

complicated. It believed Sama

13:30

was paying its contractors more

13:32

than $1 an hour. Sama says

13:34

it didn't do anything wrong and

13:37

that it offered workers living wages.

13:40

Either way, Richard says he and

13:42

his team haven't had an apology from OpenAI

13:44

and they've set up a union to

13:46

help African content moderators negotiate

13:49

with big tech. You don't just work

13:52

with someone and throw them

13:54

away, you know, just

13:56

because you have maximized the process. OpenAI

13:59

was now worth billions and

14:01

it was no longer a nonprofit nor

14:04

very committed to open source. Instead

14:06

it was allied to Microsoft. Sam

14:09

Altman had done what it took to chase

14:11

his dream. He may have had to compromise

14:14

on his open source ideals and

14:16

take billions in funding but

14:18

he was about to change the world. ChatGPT

14:22

was unleashed and everyone

14:24

lost their minds. This is Rewon

14:27

Child who worked at OpenAI. I think

14:29

the public reaction is something I was completely

14:32

just like completely gobsmacked

14:34

by like I had no ability to predict. ChatGPT

14:37

was the fastest-growing consumer

14:39

app ever. It made OpenAI

14:42

and Sam Altman famous. It

14:44

wasn't super intelligent but it was

14:47

freakishly good. It could pass

14:49

uni exams and it looked like

14:52

it could replace a lot of workers. People

14:55

woke up to the potential of AI. People

14:57

like leaders of countries, politicians

15:00

and they were nervous about this new power

15:03

that Sam Altman wielded. What

15:05

future was he creating? Somehow

15:13

Sam Altman had found himself as

15:15

the spokesperson for modern AI

15:18

and he was happy to use his unofficial

15:21

status. In May 2023 he

15:23

hopped on a plane for a 22-country, 25-city world tour. It's

15:29

basically a very nerdy Contiki

15:31

tour. Instead of getting drunk and

15:34

passing out beneath a historical statue

15:36

he addressed the United States Congress.

15:39

My name is Sam Altman. I'm the chief executive

15:41

officer of OpenAI. He met France's president

15:44

Emmanuel Macron. Thanks for the opportunity to speak to

15:46

you today about large neural networks. It's

15:48

really an honor to be here. Then Britain's prime minister

15:50

Rishi Sunak. We think it can be

15:53

a printing press moment. We have to work together

15:55

to make it so. Then India's PM Narendra

15:57

Modi. We believe that the benefits of the tools

15:59

we have deployed so far vastly outweigh the

16:01

risks, but ensuring their safety is

16:03

vital to our work. Margaret O'Mara

16:06

says Altman presented himself to world leaders

16:08

as the inevitable new tech superpower.

16:11

Sam Altman's on a world tour as

16:14

if he were a leader

16:17

of a nation state, a head of state, is kind

16:19

of in a way underscoring that

16:21

even if that wasn't the intent.

16:23

It wasn't just a simple PR

16:25

trip. Altman's message was more

16:27

complicated than the arguments of previous

16:30

tech CEOs talking to governments. He

16:33

said AI was great, but

16:35

it could be dangerous, and it

16:38

needed to be regulated. We think that regulatory

16:40

intervention by governments will be critical

16:42

to mitigate the risks of increasingly powerful

16:45

models. Altman was like an evangelist

16:47

preaching the coming of the higher power.

16:50

We are working to build tools that one day could help

16:52

us make new discoveries and address some of humanity's

16:54

biggest challenges, like climate change

16:57

and curing cancer. He said super intelligent

16:59

AI was right around the

17:01

corner, and it was going to be everywhere

17:04

in our lives. Like a really

17:06

clever Google Assistant. So

17:08

when I gave the current generation of AI a go,

17:11

it would seem this future is

17:13

not here yet. Hey Google, are

17:15

you there? Of course I am. I'm always here

17:17

for you whenever you need me. I'm trapped

17:20

in the bathroom. I need your help

17:22

to get me out. Sorry,

17:24

I didn't understand that.

17:25

Just tell me if you need

17:27

help with anything or is this a good time

17:29

to have fun? I need help getting

17:32

out of the bathroom. It's an emergency. Bummer,

17:34

I still don't understand. Okay. But

17:37

I'll be here if you need me.

17:38

I'm actually quite hungry and I

17:41

need help. I've been in here for some

17:43

time now Google. Is there anything

17:45

you can do? Hey

17:48

Google? Anyone? Hello? But

17:55

Altman says super intelligence is

17:57

coming. And it's not all good

18:00

news. My worst fears are that we cause

18:02

significant, we the field, the technology,

18:05

the industry cause significant harm to

18:07

the world. For example, AI could

18:09

be used to hack elections through

18:12

creating videos of politicians doing

18:14

things they didn't actually do or using

18:16

their voice to frame them for scandals.

18:20

Whatever the intended use, whoever

18:22

has this power will steamroll

18:24

those who don't. It is essential that powerful

18:27

AI is developed with democratic values in mind.

18:29

Now Altman prides himself on being

18:31

unlike the other tech CEOs. He's

18:34

a cool CEO. And now after

18:36

years of hard work, he's the

18:39

guy who's found himself as the

18:41

voice for AI. The multimillionaire

18:44

from Y Combinator who's behind disruptive

18:46

companies like Airbnb is

18:49

the guy governments seem to be listening to. They

18:52

ask him, what should be the rules

18:54

around this technology? And the future

18:57

that he sketches out is kind of what you'd

18:59

expect from a guy out of Silicon Valley.

19:02

It's a future where a bunch of CEOs make

19:04

a lot of money. Altman

19:07

says OpenAI and other tech companies

19:09

should continue to own and operate the

19:12

most powerful AI models. And

19:14

yeah, they should be regulated, but

19:17

not too much.

19:18

Well, at the end of the day, these are all

19:20

private sector companies and their purpose

19:23

is to make money. It's capitalism.

19:26

So when you scrape away the hype and

19:28

Altman's warnings, his vision of the

19:30

future is a familiar one. It's

19:33

the same Silicon Valley dream we've heard

19:35

for years from Google, Facebook

19:37

and others. Technology

19:40

will make the world better. It

19:42

will solve everything. And

19:44

yeah, it will be very profitable.

19:46

The Sam Altmans and others in this whole

19:48

debate, they are very,

19:51

very privileged. They are by and large,

19:53

extremely wealthy, extremely wealthy

19:56

and great wealth can create a bubble.

19:58

Even if you have

19:59

every desire to stay in touch

20:02

with the pulse of the world and

20:04

what people are thinking, great

20:06

power and wealth is very isolating.

20:08

You think he's being a little naive?

20:10

I fear so. Yes, I do. I

20:12

do. I wish it were otherwise.

20:14

I wish it were otherwise.

20:15

Altman's world tour feels a bit like

20:17

a global victory lap for his vision

20:20

of AI and his message is

20:22

that he's the good guy. He wants to

20:24

share AI with the world, not lock

20:26

it up. But others grow worried that

20:28

actually he's tightening his control.

20:31

OpenAI is now believed to be valued

20:33

at $100 billion. It's

20:35

gone up 3x in a few months. I

20:38

don't see why it won't go up another 3x in another few

20:40

months.

20:44

That's Jeremy Howard, the Australian who developed

20:47

the language model that helped OpenAI

20:49

make ChatGPT. You

20:51

know, these AI companies, I suspect

20:53

are on track to being the most

20:57

powerful organizations in the world and will

20:59

continue to grow. But what's stopping

21:01

others from making their own powerful

21:04

AI? Well, Jeremy

21:06

says there's a couple reasons. First,

21:09

there's a big cost barrier. It costs

21:11

over a billion dollars to train a cutting

21:13

edge AI and that cost is

21:16

going up. And then there's

21:18

a data barrier. GPT-3

21:20

was trained on the entire internet. GPT-4

21:24

was trained on the internet plus data

21:26

created by GPT-3. GPT-5

21:29

will be trained on data from GPT-4. And

21:33

so we're at the point now where you need an

21:35

existing very large language model to

21:37

train a new one. This positive

21:40

feedback will be so great that

21:43

that company becomes

21:46

the biggest monopolist in history.

21:50

So Jeremy says only a few companies

21:52

will have the resources to build the

21:55

best AI. And that means

21:57

whoever is in front now or in

21:59

the next few years will dominate

22:01

AI for at least the next

22:04

few decades. They'll make the British

22:06

East India Company seem like peanuts. They'll

22:09

be vastly, vastly powerful. And

22:12

so yeah, what happens to the rest of us in that situation?

22:14

Well, we don't really have much to add, do we?

22:18

We then become, most of the world then becomes

22:20

actors with little ability

22:23

to generate surplus economic value.

22:26

Even already right now, the humans

22:28

in the world who we feel cannot

22:31

generate huge amounts of surplus economic value

22:33

we treat like absolute crap, you should

22:36

assume that when most of us are like

22:38

that, we'll be treated exactly the way that

22:41

we treat those people now.

22:45

Now we don't know how the future

22:48

will work out, but it looks

22:50

like the early idealistic

22:52

phase of AI research has

22:54

passed. We're now entering

22:56

a new era of profit. We've

22:59

seen this before. Google's first

23:02

motto way back in 2000 was don't

23:05

be evil. It quietly got

23:07

rid of this five years ago. And

23:10

this is why we're talking about Sam Altman and

23:12

this scramble for control. There's

23:14

two sides to Altman. One a

23:16

techno optimist excited about

23:19

future technology. And the

23:21

other side, well, he's

23:23

just another Silicon Valley CEO.

23:26

I know how power works. I've seen it enough times

23:28

that when people start making

23:30

bucket loads of money, they get

23:32

surrounded by people who want them to keep making

23:35

bucket loads of money so that they can also make bucket loads

23:37

of money and they will convince themselves, oh,

23:40

you know, the more important thing for us is to make bucket

23:42

loads of money. Is

23:44

it a case of power corrupting? I

23:47

think it drags in the wrong kinds of people.

23:50

I think you can convince yourself very

23:52

easily that you being

23:55

in control of vast power and resources

23:57

is actually a net positive to the world. People

23:59

convince themselves that. I

24:02

think that's very

24:06

unlikely to be true.

24:08

This is Hello AI Overlords, a

24:10

Science Friction series. I'm James

24:12

Purtill.

24:17

Our show is made on the lands of the Whadjuk

24:20

Noongar, Wurundjeri and Palawa, with

24:24

production by Jordan Fennell, Erica Vowles

24:26

and Will Ockenden. Our sound engineer was Marcus

24:29

Hobbs. Next

24:31

episode, we've talked about the history

24:33

of AI and the companies that control it. Now

24:35

it's time to talk impacts.

24:38

I was brought on as a

24:40

background actor in a very

24:42

large superhero movie.

24:44

And hear from people whose lives were disrupted

24:46

by the sudden arrival of this powerful

24:49

new technology. There were weird vibes. There were a lot

24:51

of people from

24:52

the studio there.

24:55

One of the PAs

24:55

was coming through and kind of like hand selecting people

24:59

from a list. Actors who went on strike

25:01

saying studios wanted to digitally clone them and

25:03

do them out of acting work. They said, you know, we got

25:05

these really

25:08

cool opportunities and we're going to

25:10

scan you with the full body scan.

25:12

It's like, you know, video games. It's cool. It's like, it's the thing

25:15

that we

25:15

do. This is just like a new way

25:17

of doing it. That's the story of 2023, the

25:19

year the world

25:22

woke up to AI. You can find our previous

25:25

episodes right now on ABC Listen.

25:28

Search for Science Friction. Don't forget

25:30

to tell a friend about the show. See

25:32

you soon.
