The AI Bubble Is Bursting

Released Friday, 12th April 2024

Episode Transcript
0:02

Cool Zone Media. Hi,

0:05

I'm Ed Zitron and welcome back to Better Offline.

0:18

As I've discussed in my last episode, there

0:20

are four intractable problems that are going to stop

0:22

generative AI from going much further.

0:25

Its massive energy demands, its massive

0:27

computation demands, its hallucinations,

0:29

when it authoritatively tells you something that isn't

0:31

true or makes horrible mistakes in images,

0:34

and the fact that these large language models have

0:36

this insatiable need for more training data.

0:40

Yet I think what might pop this bubble is a far

0:42

simpler problem. Generative

0:44

AI simply does not deliver the

0:46

magical automation that everybody has been

0:48

fantasizing about, and I don't

0:51

think consumers or enterprises are actually

0:53

impressed. A year

0:55

and a half after launch, it seems

0:57

like the kind of immediate and unquestioning

0:59

infatuation with ChatGPT and, for

1:01

that matter, other generative AIs has

1:04

softened. Instead,

1:06

there's this rising undercurrent of

1:09

apathy and mistrust and of

1:11

course failure that's kind of hard to ignore.

1:14

In June twenty twenty three, traffic to chat

1:16

GPT's website, where people access the

1:18

chat GPT bot in a web browser fell

1:21

for the first time since launch, starting a trend

1:23

that's continued for five of the following eight

1:25

months, according to data from Similarweb.

1:28

people are becoming more aware of the technology's

1:31

limitations, like as I mentioned, hallucinations,

1:34

which, as I note, is when chat

1:36

gpt confidently asserts things

1:38

that aren't true, which can be in writing,

1:40

when it gives you an incorrect fact, or

1:42

in an image when it gives a dog eighteen

1:44

legs. To make matters worse,

1:47

according to data from Data.ai,

1:49

which used to be known as App Annie, Chat

1:51

GPT's downloads on iOS have begun

1:53

to drop from a high of just over seven hundred

1:55

thousand a week to a plateau of around

1:57

four hundred and fifty thousand to five hundred

2:00

thousand a week since early

2:02

twenty twenty three, which sounds

2:04

impressive until you hear that only seven point

2:06

three five percent of people who downloaded

2:09

ChatGPT in January twenty twenty

2:11

four actually used the app again thirty

2:13

days after they downloaded it, cratering

2:16

from a high of twenty eight percent a month after the

2:18

app launched in June twenty twenty three. In

2:21

fact, things immediately

2:24

appear to have fallen apart. In July twenty

2:26

twenty three, only two months after launch, only four

2:28

point five nine percent of users opened

2:30

the app for a second time. Numbers

2:33

like these tell the story of a buzzy new application

2:36

that isn't actually providing users with much

2:38

utility. I think the generative

2:41

AI engine has started to sputter for

2:43

customers, for businesses, and indeed

2:45

for the startups that create them. That's

2:48

bad news for any industry that's

2:50

yet to reach profitability or indeed sustainability,

2:53

and especially for generative AI, which

2:55

remains reliant on a kind of indefinite

2:58

supply of cash to operate. Back

3:00

in April twenty twenty three, Dylan Patel,

3:03

chief analyst at SemiAnalysis,

3:05

calculated that GPT three, the previous

3:07

generation of ChatGPT (the current one's known

3:09

as GPT four), cost around seven

3:12

hundred thousand dollars a day to run. That's

3:14

about twenty one million dollars a month, or two

3:16

hundred and fifty million dollars a year.
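As a quick sanity check on those figures, here's the back-of-the-envelope arithmetic they imply; the seven-hundred-thousand-dollars-a-day number is Patel's estimate, while the thirty-day month and three-hundred-and-sixty-five-day year are assumptions added for illustration.

# Back-of-envelope check of the running-cost figures quoted above.
# The $700,000/day figure is Dylan Patel's estimate; month and year lengths are assumptions.
cost_per_day = 700_000                      # dollars per day to run the model
cost_per_month = cost_per_day * 30          # ~21 million dollars a month
cost_per_year = cost_per_day * 365          # ~255 million dollars a year, roughly the $250M quoted
print(f"~${cost_per_month / 1e6:.0f}M per month, ~${cost_per_year / 1e6:.0f}M per year")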

3:18

In October twenty twenty three, Richard Windsor,

3:21

the research director at large of Counterpoint

3:24

Research, which is one of the more reliable analyst

3:26

houses, hypothesized that OpenAI's

3:28

monthly cash burn was in the region of one

3:31

point one billion dollars a

3:33

month, based on them having to raise thirteen

3:35

billion dollars from Microsoft, most of it, as

3:37

I noted in credits for its Azure

3:39

cloud computing service to run their models.

3:43

It could be more, it could be less. As

3:45

a private company, only investors and other

3:47

insiders can possibly know what's going on

3:49

in OpenAI. However,

3:52

four months later, Reuters would report

3:54

that OpenAI made about two billion dollars

3:56

in revenue in twenty twenty three, a remarkable

3:58

sum that much like every other story about

4:00

OpenAI, never mentions profit. In

4:03

fact, I can't find a single reporter

4:06

that appears to have asked Sam Altman about how much

4:08

profit OpenAI makes, only breathless

4:11

hype with no consideration of its sustainability.

4:14

Even if OpenAI burns a tenth

4:16

of Windsor's estimate, about one hundred million dollars

4:19

a month, that's still far more

4:21

money than they're making. There's not a

4:23

single story out there talking about them making a profit,

4:25

and I don't think they're making one.

4:28

Here's one thing we can be certain of, though: things

4:32

are getting more expensive, progress

4:34

in generative AI means increasingly

4:37

complex models, and as I previously mentioned

4:39

OpenAI's attempted, goddamn

4:41

it, Arrakis model,

4:43

one built specifically to wow Microsoft by making

4:46

ChatGPT more efficient, failed to actually

4:48

make it more efficient. Their

4:50

attempts to make this a better company

4:53

are not working. Windsor

4:56

the aforementioned analyst, in a separate blog,

4:58

also pointed out that there's nothing really sticky

5:00

about these companies. There's nothing

5:03

stopping someone from switching from, say,

5:05

ChatGPT to Anthropic's

5:07

Claude 2 model. They're all trained

5:09

on similar data sets, and they all produce

5:12

very similar answers. And while one

5:14

model might be better at one thing than another, they're

5:17

fundamentally very very similar. There's

5:21

also nothing stopping someone from simply giving

5:23

up on generative AI altogether. It

5:26

doesn't seem to be the plug and play automation

5:29

god that everybody's been making it out to be, and

5:32

judging by the plateauing Chat GPT user

5:35

numbers, I think that might already be happening.

5:39

It's also important to remember that while generative

5:41

AI is shiny and new, artificial

5:44

intelligence is absolutely not, and over

5:46

the past decade it's found a number of homes

5:49

from expensive security apps that detect when

5:51

a hacker is trying to break into a corporate network, to

5:53

spam filters, proofreading tools

5:55

like Grammarly, plenty of things,

5:57

even Siri on your iPhone. In

5:59

these contexts, AI is either a small

6:01

component of a larger product or something

6:03

that directly builds on human efforts. This

6:06

stuff is actually valuable. AI

6:08

based spam filters are typically better than those

6:10

reliant on hand coded rules, for example, But

6:13

it's also from a marketing perspective,

6:15

kind of boring. Generative

6:18

AI's allure is that it can supplant

6:20

humans either partially or entirely, producing

6:23

entire creative works that otherwise would

6:25

have taken hours and carried a real financial

6:28

cost. But behind this glitzy

6:30

technology and media hype, the unspoken

6:32

truth is that generative AI holds

6:35

sway over the financial markets because it's

6:37

regarded as a tool to eliminate

6:39

an entire swath of jobs in the creative

6:41

and knowledge economies. It's a

6:43

ghastly promise and it underpins the vast

6:46

market value of otherwise commercially

6:48

unviable generative AI companies like Open

6:50

AI and Anthropic, and it's what is

6:52

driving I believe the multi billion dollar

6:54

investments we've seen from Microsoft,

6:56

Amazon, and Google. Yeah,

6:59

I see no evidence of mass adoption of

7:01

generative AI, and my research

7:03

suggests the enterprise adoption, which is

7:05

the meat of what would actually make these companies

7:07

money, it just isn't there. Deep

7:11

within the earnings reports and the quotes of

7:13

every major cloud provider claiming that the

7:15

AI revolution is here is a deeply

7:17

worrying trend. The AI

7:20

revenue really isn't contributing much to

7:22

the bottom line outside of vacuous

7:24

media coverage, and

7:26

I think the internal story is going to be much bleaker.

7:37

In early March, The Information published

7:39

the story about Amazon and Google tamping down

7:41

generative AI expectations, with

7:43

these companies dousing their salespeople's

7:45

excitement about the capabilities of the tech

7:48

they're selling. A tech executive

7:50

is quoted in the article saying that customers

7:52

are beginning to struggle with questions, simple

7:54

questions like is AI actually providing

7:56

value? And how do I evaluate how

7:59

AI is doing? And a Gartner

8:01

analyst told Amazon Web Services

8:04

sales staff that the AI industry

8:06

was at the peak of the hype cycle around

8:08

large language models and other generative

8:10

AI, which is a

8:13

somewhat specific code for it's

8:15

not going to get much better anytime

8:18

soon. This article confirms

8:20

many of my suspicions, and

8:23

I quote The Information here: other

8:25

software companies that have touted generative

8:27

AI as a boon to enterprises are

8:29

still waiting for revenue to emerge, citing

8:32

the example of professional services firm

8:34

KPMG buying forty seven thousand

8:36

subscriptions to Microsoft's Copilot

8:39

AI at a significant discount on

8:41

the thirty dollars a seat sticker price.

8:45

Except KPMG bought the subscriptions

8:48

without really having gauged whether their employees

8:50

actually got anything out of it. They bought

8:52

it, and I'm not kidding you entirely

8:55

so that if any KPMG customers

8:57

ask questions about AI, they'll

8:59

be able to answer them. It's

9:03

so clearly absurd. Oh my god. Anyway, as

9:06

I've hinted, it's also not obvious how

9:08

much AI actually contributes to the bottom

9:10

line. In Microsoft's Q four

9:12

twenty twenty three earnings report, chief

9:14

financial officer Amy Hood reported that six

9:17

points of revenue growth in its Azure

9:19

Cloud Services division was attributed to

9:21

AI services. I went around

9:23

the web and I read every bloody article about their earnings.

9:26

I looked, and I looked and everyone was saying,

9:28

Oh, this is really good. I found someone who said

9:31

it was six percent of their revenue, and I went, that

9:33

sounds like complete bollocks to me. So I

9:35

went and spoke with Jordan Novet, who's

9:38

covered Microsoft for many years. He is a great cloud

9:40

reporter over at CNBC, and he actually covered

9:42

Microsoft's earnings for CNBC itself,

9:45

and he confirmed that what this means is that

9:47

AI contributed six percent

9:50

of the thirty percent of year over

9:52

year growth in Microsoft's Azure

9:54

cloud services. That

9:56

is a percentage of a percentage. So

9:59

by the way, what that means is:

10:01

thirty percent growth year over year, and

10:03

six points of that year-over-year growth is from AI.

10:06

Could be good, but also all of the rest

10:08

of it came not from new products, just from the natural

10:10

growth of the company. It's

10:13

unclear how much money that really is, but six percent

10:16

of the year over year growth isn't really

10:18

that exciting.
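To make that percentage-of-a-percentage point concrete, here's an illustrative calculation: the thirty percent growth and the six points attributed to AI are the figures discussed above, while the hundred-unit starting revenue is an arbitrary assumption for the sake of the example.

# Illustrative numbers for "a percentage of a percentage".
# The 30% YoY growth and the 6 points attributed to AI are the quoted figures;
# the 100-unit starting revenue is an arbitrary assumption.
last_year_revenue = 100.0
growth_rate = 0.30                 # Azure grew ~30% year over year
ai_points = 0.06                   # 6 percentage points of that growth attributed to AI

this_year_revenue = last_year_revenue * (1 + growth_rate)     # 130.0
ai_driven_revenue = last_year_revenue * ai_points             # 6.0
growth_in_revenue = this_year_revenue - last_year_revenue     # 30.0

print(f"AI share of the growth: {ai_driven_revenue / growth_in_revenue:.0%}")           # 20%
print(f"AI share of total Azure revenue: {ai_driven_revenue / this_year_revenue:.1%}")  # ~4.6%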

10:20

Anyway, elsewhere, Amazon CEO Andy Jassy,

10:22

who took over from Bezos a few years ago and

10:25

was the chief of Amazon Web Services,

10:27

said that generative AI revenue was still

10:29

relatively small, but don't

10:31

worry, he said it would drive tens of billions of dollars

10:33

of revenue over the next several years, adding

10:36

that virtually every consumer business Amazon

10:38

operated in already had or would

10:40

have generative AI offerings. Now

10:43

they can just say that stuff. I

10:45

really want you to know. You can say what you want on earnings

10:47

calls, as long as you're not just outright

10:50

lying, like saying we have one hundred billion dollars in

10:52

cash, but you have fifty dollars. That is a

10:54

lie. You can't do that. But you can be like, yeah,

10:56

we've got all sorts of AI in everything now, it's

10:59

bloody magical. You don't get to

11:01

take a look, but it's there, I promise you. They can

11:03

just say what they want. But

11:06

don't worry, they're not the only ones. Salesforce

11:09

chief financial officer Amy Weaver said

11:11

in their most recent earnings call that Salesforce

11:14

was not factoring in material contribution

11:17

from Salesforce's numerous AI products

11:19

in its financial year twenty twenty five guidance.

11:22

Software company Adobe's shares slid

11:25

in their last earnings as the company failed to generate

11:27

meaningful revenue from its masses of AI products,

11:30

with analysts now worried about its ability

11:32

to actually monetize any of these generative

11:34

products. ServiceNow claimed

11:36

in its earnings that generative AI was meaningfully

11:38

contributing to its bottom line. Yet a story from

11:40

The Information quotes their chief financial

11:42

officer as saying that from a revenue

11:45

contribution perspective, it's not going

11:47

to be huge. I'm going

11:49

to be a bit honest: I'm feeling a little insane

11:51

with this stuff. I feel crazy

11:54

every time I think about these stories,

11:56

because elsewhere in the media, so

12:00

many people are saying how big

12:02

and successful the generative AI revolution

12:05

is, and is going to be.

12:07

Every time I look at the actual places where

12:09

they write down how much money it makes, any

12:12

of the actual signs of growth and

12:14

significance and utility

12:16

and adoption, it's

12:19

just not there. It's just breathless

12:21

hype, with this kind of whisper of stagnation

12:24

and non existent adoption. And

12:27

while there are startups beginning to mine

12:29

usefulness out of generative AI, and they

12:31

do so inside, by automating internal queries

12:33

and customer support questions, these

12:36

are integrations rather than revolutions,

12:38

and they're far from the substance

12:40

of a true movement. Maybe

12:44

the darker truth of the generative AI boom

12:46

is that it's a feature, not a product, and

12:49

that these features might be built entirely

12:52

off the back of large language models

12:54

which are unsustainable to run, grow,

12:58

or even make better. What

13:00

if AI only drives a couple percentage

13:02

points of real revenue growth at these companies?

13:05

What if what we're seeing today is

13:07

the upper limit, not the beginning. Honestly,

13:10

I'm beginning to believe that a large part of the

13:12

AI boom is just hot air, and it's

13:14

being pumped up through a combination of executive

13:17

bullshittery and very compliant

13:20

media that's so happy to write stories imagining

13:22

what AI can do, yet

13:25

seems unable to check what it can do or

13:27

what it's doing. It's

13:30

so weird. Now,

13:32

there's a bloke over at the Wall Street Journal called Chip Cutter

13:35

who you should really look into if you want to know why your

13:37

boss keeps asking you to go back to the office. Wall Street

13:39

Journal's Chip Cutter. He loves to write

13:41

things about how bosses are good and

13:43

how returning to the office is good. He

13:46

wrote a piece in March about how AI is

13:48

being integrated into the office, and most of

13:50

it was just hundreds of words of him

13:52

guessing about what people might do but

13:54

when he gets to the bottom and he starts

13:56

talking about companies using it, it's

13:58

almost entirely examples of people

14:01

saying, yeah, it makes too many mistakes

14:03

for us to rely on it, and we're just experimenting

14:05

with it. Elsewhere in the media,

14:07

the New York Times talked with Salesforce's head

14:09

of AI, Clara Shih, and

14:12

in this, I think, six-or-seven-hundred-

14:15

word article, she didn't really

14:17

get to say much of anything about AI or

14:19

what their products do. All she said was that

14:22

the Einstein Trust layer handles

14:24

data. And you may think I'm being facetious here, that's

14:26

all she said about that, and

14:28

then she added that it would be transformational

14:31

for jobs the way the internet was. What

14:35

what does that mean? Why am I reading

14:38

this in the newspaper? Why is this

14:40

what I read in the newspaper? How is this helping?

14:43

I know I rant a lot on this podcast,

14:45

and I'm going to keep doing it. You're stuck with me, all right,

14:48

it's free, Okay, you don't pay for this unless

14:50

you do Cooler Zone Media, which you should pay for anyway.

14:53

I know I'm ranting, But the reason

14:55

that this stuff really infuriates

14:57

me is it's misinformation on

15:00

some level. I know it's kind of dramatic

15:02

to say, oh, they're misinforming people by suggesting

15:04

that AI can do stuff, but it

15:06

is. It is misinformation. When

15:08

you're letting corporate executives go

15:10

in the newspaper and talk about how amazing their products

15:13

will be without asking them what they can

15:15

do today, you're just giving them free

15:17

press. You're not giving them credit for

15:20

stuff they've done. You're giving them credit for things

15:22

they're making up on the spot. And

15:24

when you do that, you make the rich

15:27

richer and the poor poorer. You

15:29

centralize power in the hands of assholes,

15:32

people who are excited, people who are borderline

15:35

masturbatory, jumping around saying,

15:38

oh God, I can't wait till we

15:40

replace humans with fucking computers.

15:43

Good news is they're not going to be able to, but

15:45

that's what they're excited about, and that's what they're getting media

15:47

coverage around. The media has

15:50

been fooled, just like they were

15:52

with the metaverse, by this specious

15:55

promise train of the generative AI

15:57

generation, and these worthless executives

15:59

championing these half truths.

16:02

and this magical thinking has spread

16:04

far faster due to the fact that AI

16:07

actually exists and is doing something,

16:09

and it's actually much easier to imagine

16:11

how it might change our lives, unlike the metaverse.

16:14

Even if the way it might do so is somewhere

16:16

between improbable and impossible, it

16:20

is easy to think about how it might change my work.

16:23

You know, you could use an AI to automate data

16:25

entry or boring busy work.

16:28

Surely all of this you can automate, right, And when

16:30

you use chat GPT you can almost

16:32

kind of, sort of somewhat see how it might happen.

16:35

Even if when you open up chat GPT and

16:37

try and make it do something, it's

16:39

always a bit off, never seems

16:41

to quite do it. In my day

16:43

job, my PR firm, I'm in a spreadsheet

16:45

and document heavy business. Of all

16:47

the people who this could help, you'd think it would

16:50

be me. A lot of my work

16:52

is, hey, all of these things, I need them in a spreadsheet.

16:54

It can't bloody do it. And

16:57

I'm sure you listeners will

16:59

probably email me and say, oh, I've used ChatGPT

17:02

for this, don't care. I really mean

17:04

that this thing is not changing

17:06

the world. And actually I think far more of you have already

17:08

shared that. Thank you, by the way. EZ at Better Offline dot

17:10

com. You can email me your ideas and

17:12

your angry comments, or you can go on the Reddit

17:15

to complain. But the

17:17

thing I'm hearing from most people is, yeah, I've

17:19

tried it and it didn't do enough. I

17:22

tried it and there were too many mistakes. There

17:24

was a Wall Street Journal article back in February

17:27

about how Amazon and

17:30

Google were having trouble selling AI services

17:33

because well, when they

17:35

went to sell them to these companies, these financial

17:37

services companies in particular, they

17:39

were saying, yeah, but these hallucinations could actually

17:42

get the SEC mad at us. And the answer

17:44

that they had was, yeah,

17:47

what if we just made it so that the models would sometimes

17:49

say they don't know stuff. Every

17:52

time you get to a reckoning

17:54

with AI where you want it

17:56

to be better, where you're like, hey, AI

17:59

executive, how will you

18:01

actually fix these hallucination problems,

18:03

for example, they come up with the most mealy-

18:05

mouthed shit. And I truly believe

18:07

it's because there is no answer to these problems, as

18:09

I said in the previous episode,

18:12

and I think that's why I can't find any

18:15

companies that have integrated generative AI in a

18:17

way that's truly improved their bottom line other

18:19

than Klarna, which allows

18:21

you to do zero percent interest free loans

18:24

on almost anything. It's a very worrying

18:26

company anyway. They claimed that

18:28

their AI powered support bot was

18:30

estimated to drive a forty million dollar

18:33

amount in profit improvement in

18:35

twenty twenty four, which

18:37

does not, by the way, despite it being trumpeted

18:40

by members of the media, otherwise mean

18:42

that they made forty million dollars in profit. I

18:44

actually can't find what profit improvement

18:46

refers to. And

18:49

this is the classic AI boom

18:51

story. By the way, there's always this weird verbal

18:54

judo going on where they're like, yeah, sir,

18:57

forty million dollars in profit improvement, upwards,

19:00

downwards and side to side, it's really good. And

19:03

I think it's just headline grabbing. I think

19:05

it's just buzz. And

19:07

despite fears to the contrary, AI

19:09

doesn't appear to be replacing a large

19:11

amount of workers, and when it has, the results

19:14

have been pretty terrible, like when

19:16

Microsoft replaced MSN dot COM's editorial

19:18

team with a series of AI bots

19:20

that have spread misinformation and conspiracy

19:23

theories, things like Joe Biden falling

19:25

asleep. It's so weird. Interestingly,

19:28

there was also a study from Boston Consulting Group,

19:31

and just as a note, if anyone would

19:33

love the opportunity to just replace workers with

19:35

robots, it's BCG, McKinsey,

19:38

Accenture. All these companies would absolutely

19:40

be giving OpenAI however much it

19:43

wanted to do that, and then they would

19:45

charge fifty million dollars for an integration that didn't work,

19:47

which I guess makes AI perfect for them. Putting

19:49

that aside, in a study from BCG,

19:52

they found the consultants that solved business

19:54

problems with open AI's GPT four model

19:56

performed twenty three percent worse

19:58

than those who didn't use it, even

20:01

when the consultant was warned about the limitations

20:03

of generative AI and the risk of hallucinations.

20:07

Yeah, really great stuff. To

20:10

be clear, I am not advocating

20:12

for the replacement of workers with AI. However,

20:15

I'm saying that if it was actually capable of

20:17

replacing human outputs, even

20:20

if it was even anywhere near doing so, any

20:22

number of these massive, horrifying firms

20:25

would be doing so at scale, and planning to do so

20:27

more as the models improve. They'd be funneling

20:29

cash right up OpenAI's

20:31

ass. It would be incredible, but

20:34

the reality of the AI boom is kind

20:36

of a little more boring. It

20:39

recently came out that Amazon's cashless

20:41

Just Walk Out technology in

20:43

some of their stores. You could walk in, scan a QR

20:46

code, and then you could just grab your Rao's

20:48

tomato sauce and your condoms or whatever,

20:51

your weird magazines. I don't know what they sell

20:53

in there. I'm not giving any more money to Amazon than

20:55

I need to. Anyways, everyone

20:58

thought, oh, it's just AI, you could just walk in and the

21:00

cameras would tell you through computer vision

21:03

what you had bought. It would be great. Now it turns

21:05

out that there's one thousand workers in India that

21:07

were monitoring these cameras and approving

21:09

transactions. Worse still,

21:12

OpenAI used Kenyan workers who

21:14

were paid less than two dollars an hour

21:16

to train ChatGPT's outputs,

21:19

and they currently pay fifteen

21:21

dollars an hour. I think for American contractors,

21:23

no benefits of course, you know, fuck

21:26

workers, right, that's the thing underneath

21:28

this whole thing. It's just this undercurrent

21:31

of disrespect for human beings, and it pisses

21:33

me off. And I realized I'm pissed off

21:35

about a lot of things. You've been listening for like an hour

21:37

now, half an hour into this episode.

21:39

Anyway, I'll keep going. But

21:42

yeah, like I said, if AI was

21:44

changing things, if AI was actually capable

21:47

of replacing a person, it

21:49

would have happened. It would be happening right

21:51

now, It'd be happening at scale. It would

21:53

be so much worse than things feel now, unless,

21:57

of course, it just wasn't possible.

22:01

What if what we're seeing today is not a glimpse

22:03

of the future, but actually the

22:05

new terms of the present. What if generative

22:08

AI isn't actually capable

22:10

of doing much more than what we're seeing today. What

22:12

if there's not really a clear timeline when

22:14

it will actually be able to do more. What

22:17

if this entire hype cycle has been built,

22:20

goosed, and propped up by this

22:23

compliant media, ready and willing to take

22:25

whatever these career embellishing bullshitters

22:28

have to say. What if this is

22:30

just another metaverse, but with a little bit

22:32

more product. Every

22:35

single time I've read about the amazing

22:37

things that artificial intelligence can do,

22:39

I just see somebody attempting to add fuel

22:42

to a fire that's close to going out. When

22:44

the Wall Street Journal's Joanna Stern wrote about

22:46

Sora, OpenAI's yet to be released

22:48

video generating model, she talked

22:50

about how its photorealistic clips were good

22:52

enough to freak her out. And I

22:54

get it. At first glance,

22:58

these do look like people, these

23:00

images do. They look like something approaching

23:02

a video. They

23:04

look almost real,

23:07

kind of like text from ChatGPT is almost

23:10

right, or it's right, but

23:12

it doesn't feel right. But

23:16

much like the rest of these outputs, you

23:19

look a little closer, and they have these weird errors

23:22

like cars disappearing in and out of the shot,

23:24

or a different car coming out from

23:26

behind something, or completely different

23:28

images between frames, or these strange,

23:31

unrealistic moments of lighting, and

23:34

they're never much longer than thirty seconds.

23:37

Stern, who by the way, I deeply respect,

23:39

isn't really afraid of what Sora can

23:42

do, but of what would happen if OpenAI

23:44

was able to fix the hallucination problems that

23:46

make these videos kind of

23:48

unwatchable. Well,

23:51

it's easy to imagine tools like Sora could eventually

23:53

play a role in online disinformation campaigns,

23:55

churning out like lifelike videos

23:58

of politicians saying or doing appalling things.

24:01

We can all breathe a sigh of relief in knowing

24:03

that the videos themselves are often so flawed

24:06

you can pretty much instantly see they're AI

24:08

generated. Also, Sora is not available

24:10

to the public yet, and I don't even know if

24:12

it ever will be. You just need

24:14

to look at the hands or the backgrounds.

24:17

Look at the people in the background of any AI

24:20

generated photo or video. They

24:23

often contain too many fingers, or you can't

24:25

see their faces. Or

24:28

in Sora's videos, their legs don't

24:30

look right. It's so weird

24:33

and I don't I don't

24:35

know how to put it perfectly, but

24:38

they don't feel human. Just

24:41

to be clear, though, Sora is dead

24:43

on arrival, no one actually has

24:46

access to it. It's unclear when it will come out.

24:48

Every journalist that has quote unquote used

24:50

Sora has just given a prompt

24:52

to OpenAI to run. But

24:55

there's also a very obvious problem that kind

24:57

of relates to something I mentioned in the previous episode.

25:00

Open AI and every generative AI company,

25:03

they're all dependent on high quality data

25:05

to train the models, and video data

25:07

is so much larger, more complex,

25:10

and harder to find.

25:12

There's less of it because it's visual

25:14

media, and

25:17

it's just a much bigger,

25:19

more complex model and a much harder

25:21

computational task to create video

25:24

moving images. It's actually,

25:26

kind of putting aside my anger

25:29

about generative AI, amazing they've done even

25:31

this. But to be clear, as amazing as it might

25:33

look, it isn't enough to do anything.

25:35

It's just kind of a doohickey.

25:38

And this data is so much more complex than

25:40

the text-based data that OpenAI is

25:43

running out of to make ChatGPT spit

25:45

out words. Even

25:47

if there were enough data, there's pretty

25:50

good reason why OpenAI is

25:52

coy about when they'll release the model. Like

25:55

I said, it's expensive and complex

25:57

to run, and at no point has

25:59

anyone explained how the fuck this actually makes them

26:01

any money, how they sell this. It's

26:04

so weird, to

26:06

be clear, when you use Sora it turns

26:08

text prompts into a video. You can't

26:11

edit the video, you can't change the video. The video

26:13

is what the video looks like. There's

26:15

no way to make Sora make the same

26:17

thing multiple times, which

26:20

makes the very basics of making

26:22

film, which is multiple angles

26:24

of the same thing, completely goddamn

26:26

impossible. In fact, consistency

26:29

between the same two prompts

26:32

is impossible from these models because they're all probabilistic.

26:36

We've recently seen some of the first quote unquote

26:38

movies made with Sora, and the first one was

26:41

called Air Head, which is about a minute

26:43

long. It's this man with a balloon

26:45

head walking around, and it's got this, it's

26:47

very twee. It sucks. Just putting

26:50

aside the AI part, it's just crappy,

26:53

and it's got a guy being like, yeah, having a balloon head is

26:55

difficult. Yeah, it's weird. I hate

26:57

having a balloon head. I hope I don't get popped.

27:00

It sucks. It's really bad filmmaking.

27:02

But also each shot, and

27:04

there's multiple shots of this

27:06

guy with a yellow balloon head looks completely

27:08

different. It's a different balloon every goddamn time.

27:11

And it's so funny because you have these

27:13

guys on Twitter being like, oh my god,

27:16

oh my god, I am crying

27:18

and pissing myself. This is the best thing I've ever

27:20

seen. But

27:22

it isn't. It's so

27:25

close yet so far away, And

27:27

the only reason it's impressive is people

27:29

are willing to sit there and say, but what

27:31

if it wasn't shit? But

27:34

it is. It really is. Like

27:37

every other generative output, it's

27:40

superficially impressive, kind of sort

27:42

of lifelike, but once you look at

27:44

it for more than a moment, it's just flawed,

27:47

terribly, irrevocably flawed.

27:51

It's time to wake up. We

27:54

are not in the early days of AI. We're

27:56

decades in and we're approaching the top of

27:58

the S-curve of innovation. There

28:01

are products being built, don't worry, but

28:03

it's all things like Claude

28:06

Author, which creator Matt Shumer calls

28:08

a chain of AI systems that will write an entire

28:10

book for you in minutes, and I call

28:12

a new kind of asshole that can shit more

28:15

than you'd ever believe. Generative

28:18

AI is the ugliest creation of the

28:20

rot economy, and its main selling point

28:23

is that can generate a great deal of passable

28:25

material. Images generated

28:28

from generative AI models like OpenAI's

28:30

DALL-E all have the same kind of eerie

28:32

feel to them, as they're mostly trained

28:34

on the same data, some of it licensed from

28:36

Shutterstock, some of it outright plagiarized

28:39

from hundreds of artists. Without

28:41

sounding too wanky and philosophical, everything

28:44

created by generative AI feels soulless,

28:47

and that's because it is no matter how

28:49

detailed the prompt, no matter how well trained the model,

28:51

no matter how well intentioned the person

28:54

writing that prompt these are

28:56

still mathematical solutions to the emotional

28:58

problem of creation. One

29:00

cannot recreate the subtle fuck ups and

29:02

delightful little neurological errors that make

29:05

writing a book, or a newsletter or a

29:07

podcast special. While this podcast

29:09

is admittedly trying to generate what I

29:11

believe AI might do in the future, it's

29:14

not generative, and it's not generated

29:16

as a result of me mathematically considering

29:18

how likely an outcome is. My

29:20

fury is not generated by

29:22

an algorithm telling me that this is the right

29:25

thing to be angry at. I'm pissed off

29:27

because I feel like we're all being lied to and

29:29

treated like idiots. What

29:31

makes things created by humans special

29:34

isn't doing the right thing or the best thing,

29:36

but the outputs that result in us fighting

29:38

past our own imperfections and maladies,

29:41

like the strep infection I've been fighting for the last

29:43

few days, and

29:45

like, look, to my knowledge, you can't

29:48

give a generative AI strep throat. But

29:50

if I ever find out it's possible, I will make it my

29:52

damn mission to give it to ChatGPT. All

30:08

of this hype is predicated on solving

30:10

problems with artificial intelligence models

30:12

that are only getting worse, and open

30:14

AI's only answers to these

30:17

problems are a combination of we'll

30:19

work it out eventually, trust me, and we

30:21

need a technological breakthrough in both chips and

30:23

energy. That's why Sam

30:25

Altman has been trying to raise seven trillion

30:28

dollars and that's not a mistake,

30:30

by the way, to make a new kind of AI chip,

30:32

because there's no sign that this

30:34

or even future generations of

30:37

chips will actually fix anything. Generative

30:41

AI's core problems, its hallucinations,

30:43

its massive energy demands, and its massive

30:45

unprofitable compute demands are

30:48

not close to being solved. I've

30:50

now watched a frankly alarming amount

30:52

of interviews with both open AI CEO Sam

30:55

Altman and their CTO Mira Murati,

30:57

and every time they're saying the same specious

30:59

empty talking points, promising that

31:01

in the future ChatGPT will do

31:03

this and that as all evidence points

31:06

to their models getting worse. And through

31:08

the last years, by the way, they've just said the same

31:10

thing in every interview. They always

31:12

talk about ChatGPT being able to do

31:15

something or helping creatives,

31:17

they never really say how, which is just kind of weird.

31:20

But yeah, generative AI models they're expensive,

31:22

they're compute intensive, and they

31:24

don't seem to provide obvious, tangible, mass

31:26

market use cases. Murati and

31:28

Altman's futures depend heavily on

31:31

keeping the world believing that development and improvement

31:33

of these models capabilities will continue

31:35

at this rapacious pace of progress,

31:38

even though it's unquestionably slowed, with

31:41

open AI even admitting themselves

31:43

that their latest model, GPT four, may

31:45

actually be worse at some tasks. A

31:48

study from UC Berkeley last year found

31:51

that GPT four was actually worse at coding than

31:53

before and that chat GPT was at

31:55

times refusing to do

31:57

certain tasks. Nobody wants

31:59

to work anymore. Well,

32:03

I feel like I'm walking down the street

32:06

telling people their houses are on fire,

32:08

only to be told to stop insulting

32:11

their new heating system. These

32:13

models aren't intelligent. They're mathematical

32:15

behemoths generating the best guess on

32:17

training data and labeling, and

32:19

thus they don't really know what they're being asked to

32:21

do. You can't fix

32:23

that. You can't fix hallucinations.

32:27

You can't just make these

32:29

problems go away with more compute,

32:31

you can only mitigate them. The

32:34

current philosophy, by the way, is that you can use another

32:36

model to look at another model's outputs,

32:39

which, as I mentioned in the previous episodes, is very

32:41

silly. But seriously, everyone

32:43

telling you hallucinations are going away, look

32:45

a little deeper and look at when they

32:47

actually fail to tell you how they will.

32:50

It's just very silly. Look. Every

32:52

bit of excitement for this technology right now

32:54

is based on this idea of what it might do, as I've

32:56

said, and that quickly gets

32:59

conflated with what it could do, which

33:01

allows Sam Altman, who, by the way,

33:03

is far more of a marketing person than an

33:05

engineer. His one startup, Loopt,

33:07

was a failure. He's failed upwards.

33:10

He was in Y Combinator, he ran it. It's actually ridiculous

33:12

he's so famous. All

33:14

of this bullshit allows him to sell

33:16

the dream of open AI, and he's selling

33:18

it based on the least specific promises I've

33:21

seen since Mark Zuckerberg said we'd

33:23

live in our bloody Oculus headsets. And

33:25

it's frustrating because

33:27

this money and this attention could go to important

33:30

things. We have real problems in society.

33:34

I believe that Sam Altman and pretty

33:36

much anyone in a position of power and

33:39

influence in the AI space has been tap

33:41

dancing this entire time, hoping

33:43

that he could amass enough power and revenue

33:45

that his success would be inevitable. Yet

33:48

I think his hype campaign has

33:50

been a little bit too successful, and

33:53

it's deeply specious, and he, along

33:55

with the rest of the AI industry, has

33:57

found himself suddenly having to deliver a future

34:00

they're not even close to developing. I

34:03

am always scared of automation taking

34:05

our jobs. I think it's always worth being scared

34:07

of. But I don't think that's the thing the tech industry

34:10

is working on right now. I don't think

34:12

they're close, and I think there's

34:14

something more imminent to fear, and

34:17

that thing is the bottom falling out of

34:19

generative AI as companies realize that

34:21

the best they're going to see is maybe a few

34:23

digits of profit growth. Companies

34:25

like Nvidia, Google, Amazon, Snowflake,

34:27

and Microsoft, they

34:29

have hundreds of billions of dollars of market

34:32

capitalization as well as expected revenue

34:34

growth tied into the idea that

34:36

everyone's going to integrate AI into everything,

34:38

and that they'll be doing more than they are today.

34:42

You can already see the desperation coming

34:44

from these companies, like Microsoft, for

34:46

example, which in March effectively absorbed

34:49

a company called Inflection AI into itself,

34:51

kind of an acquisition by stealth. Inflection

34:54

AI is a public benefit company that portrays

34:56

itself as a nicer, gentler version of

34:58

OpenAI. Its core product, a Chat

35:00

GPT-style chatbot, touts its empathetic

35:03

tone, its humor, and its emotional awareness.

35:06

Inflection was created in twenty twenty two with an all

35:08

star founding team that included Reid Hoffman, the

35:10

founder of LinkedIn, and Mustafa Suleyman,

35:12

the British-born co-founder of DeepMind, which

35:15

Google acquired in twenty fourteen. In

35:17

mid twenty twenty three, Microsoft took part in a one

35:19

point three billion dollar funding round which

35:22

saw the company acquire a significant stake in Inflection,

35:24

alongside other AI players like Nvidia.

35:27

Inflection's core product has the

35:29

same inherent underlying issues as every other generative

35:31

AI product, Hallucinations, for example,

35:34

but it has an accomplished team that has

35:36

taken a different approach to its competitors.

35:39

Whereas ChatGPT and Claude 2 tend to be,

35:41

or at least aspire to be, functional

35:44

tools that provide information or complete tasks,

35:46

Inflection sought to make its product feel a bit more

35:48

organic. For Microsoft, the appeal

35:50

was obvious. It has so much riding on its

35:53

AI ambitions, both in terms of

35:55

money spent as well as its share price, that

35:57

it can't really afford to appear stagnant

35:59

or worse as though it made a bad bet.

36:02

Acquiring Inflection would help it maintain

36:04

its image, especially with idiot Wall Street analysts.

36:07

But here's the problem. Microsoft

36:09

already holds a massive stake in Open AI,

36:11

and regulators both in America and Europe

36:14

are wary of market consolidation. Acquiring

36:17

Inflection would invite a

36:19

little too much scrutiny. So

36:22

Microsoft took a third, nastier path. Instead

36:25

of buying the company, it bought the employees, with

36:27

Suleyman and the majority of his coworkers jumping

36:29

ship to found Microsoft's new AI division.

36:33

It secured the talent, and a subsequent

36:35

six hundred and fifty million dollar licensing

36:37

deal, yet another example of Microsoft

36:40

basically paying itself, and

36:43

then gave that deal to the shell

36:45

of Inflection. You know, the one without any of the staff

36:47

left, giving it access to the company's

36:49

tech, its IP, and

36:52

there's nothing regulators could do to stop them.

36:54

To be clear, Microsoft is in a position where it

36:56

could easily absorb the shock wave of a potential AI

36:59

bubble burst. It still prints money from

37:01

its other business units like office and cloud

37:03

computing, Microsoft Windows, and the Xbox

37:06

gaming system, and the same is

37:08

true for the other big names like Google and Nvidia.

37:10

They're well insulated from any slowdown in AI

37:12

investment or from a growing apathy towards

37:15

AI among enterprise customers. I

37:17

will note, however, these

37:19

massive investments in data centers, if

37:22

they're all for nought, you will

37:24

see a form of crash.

37:27

I can't say the same for startups, though other

37:29

companies aren't going to be so lucky. Stability

37:32

AI, the developer of Stable Diffusion, a

37:34

generative AI that can produce images from written

37:36

prompts innovative for the time,

37:39

is perhaps the canary in the coal mine of peak AI.

37:42

Stability AI rode the same waves as

37:44

Open AI, especially in twenty twenty three, but

37:46

now that money is tighter and skepticism is

37:48

higher, it's struggling to stay afloat. Although

37:51

the company raised one hundred million dollars in

37:53

early twenty twenty three, it burned through

37:55

nearly eight million dollars a month, and in a recent

37:57

attempt to raise further cash, they failed.
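A rough runway calculation based on the figures just quoted; the assumption that the burn rate stayed flat is mine, added for illustration.

# Rough runway implied by the Stability AI figures quoted above.
# Assumes the ~$8M/month burn stayed flat, which is a simplification.
raised = 100_000_000           # raised in early 2023
monthly_burn = 8_000_000       # approximate monthly cash burn
runway_months = raised / monthly_burn
print(f"Runway: about {runway_months:.1f} months")   # about 12.5 months, i.e. roughly a year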

37:59

The company routinely

38:01

missed payroll and, according to Forbes,

38:03

amassed a sizeable debt with the US tax authorities

38:06

that culminated in threats to seize the company's assets.

38:09

They owed debts to Amazon, Google, and Core

38:11

Weave, a compute provider that specializes

38:13

in AI applications. With

38:16

negligible revenues and rapid cash

38:18

burn combined with no obvious way to monetize

38:20

the product, Stability AI is now

38:23

in turmoil, with its key talent leaving

38:25

the company in March, followed by the company's CEO

38:27

and co-founder, Emad Mostaque. Its

38:30

ongoing existence is in question, with The Financial

38:32

Times writing in March that the company's future,

38:34

despite once being seen as among the world's most

38:36

promising startups, is in doubt. While

38:40

it would be fair to say that Stability AI was

38:42

unique in its internal turmoil, its

38:45

external pressures, the ability, or lack

38:47

thereof, to monetize an expensive product, and

38:49

its reliance on external funding to survive are much

38:52

more common across the industry. Its

38:54

survival depended on investors believing in a lofty

38:56

future for AI, where it's integrated

38:58

into every facet of our lives and it plays a

39:01

role in almost every industry, which, of

39:03

course we now know it doesn't. While

39:06

that belief hasn't been shattered, or at

39:08

least not yet, it's fair to say

39:10

that expectations and aspirations are increasingly

39:12

tempered after reaching the apex

39:15

of the AI pissfest. The tech industry

39:17

is getting a hangover, and companies like Stability

39:19

can't survive the headache.

39:22

But to be clear, I am not excited

39:24

for the AI bubble to pop, and

39:26

on some level, as weird as it sounds, I kind of hope it

39:28

doesn't. Once it bursts, the

39:31

AI bubble will hit far more than

39:33

the venture capitalists that propped it up. This

39:36

hype cycle has driven the global stock

39:38

markets to their best first quarter in five

39:41

years, and once the markets fully

39:44

turn on the companies that falsely promised

39:46

an AI revolution, it's going to lead

39:48

to a massive market downturn and another

39:50

merciless round of layoffs throughout the tech

39:52

sector, led by Microsoft, Google and Amazon.

39:56

This will in turn suppress tech labor and

39:58

flood the market with tech talent. It's going to

40:00

suck for everyone involved in software. A

40:03

market crash led by the tech industry will

40:05

only hurt innovation, further draining

40:08

the already low amounts going into the hands

40:10

of venture capitalists that control the dollars

40:12

going into new startups, and

40:14

once again, the entire industry

40:17

will suffer because people don't want to build

40:19

new things or try new ideas. No,

40:21

they want to fund the same people doing

40:23

the same things or similar things again

40:25

and again because it feels good to be part of a

40:28

consensus. Even if you're wrong. Silicon

40:30

Valley will continually fail to innovate

40:33

at scale until it learns to build real

40:35

things again, things that people actually use,

40:38

and things that actually do something. I

40:41

don't know if I want to be right or wrong here. If

40:43

I'm wrong, generative AI could replace millions

40:45

of people's jobs, something that far too many

40:47

people in the media are excited about, despite

40:50

the fact that the media is the first industry

40:52

that open AI kind of wants to automate.

40:55

If I'm right, we're going to face a dot

40:57

com bubble style downturn in tech,

41:00

one that's far worse than what we saw in the last few

41:02

years. In any case,

41:05

I do wish the tech industry would get their heads

41:07

out of their asses. I'm tired.

41:10

I'm tired of watching tech firms lie through their teeth

41:12

about the future, that we'll live in the metaverse,

41:14

that our future will be decentralized and paid for in

41:16

cryptocurrency, and that our world will

41:18

be automated with chatbots. I

41:21

truly think that these companies think regular

41:23

people are stupid, which is why Microsoft

41:26

put out a minute long Super Bowl commercial for

41:28

their Copilot AI that featured several

41:30

prompts like write the code for my three D

41:33

open world game, that don't actually do anything.

41:35

That prompt I just mentioned? Go type it into

41:37

Copilot. It will give you a guide to coding

41:39

a game. No code created. Also in

41:42

the commercial, he types in classic

41:45

truck shop called Paul's. But none

41:47

of these image generators can actually do words,

41:49

so it just looks like gibberish. Go

41:51

and do it. Trust me, it's funny. But

41:54

every time that these big tech

41:56

booms happen, every

41:58

time they say, oh, we're going to live in the

42:00

metaverse and oh we're going to be able to automate

42:02

everything, every

42:04

time they lie, the world turns

42:07

against the tech industry, and this

42:09

particular boom is so craven

42:11

in its falsehoods that I think it'll have

42:13

a dramatic chilling effect on tech

42:15

valuations if the bubble pops

42:18

quite as severely as I expect. And

42:20

Sam Altman desperately needs you to believe

42:23

the bubble won't pop. He needs you to believe

42:25

that generative AI will be essential, inevitable,

42:28

and intractable, because if you don't,

42:30

you'll suddenly realize that trillions of dollars

42:32

in market capitalization and revenue

42:34

are being blown on something that's kind

42:37

of mediocre. If

42:39

you focus on the present, what open AI's

42:41

technology can do today and will likely do

42:43

for some time, you see in terrifying

42:46

clarity that generative AI isn't

42:48

really a society altering technology.

42:50

It's just another form of efficiency driving

42:53

cloud compute software that benefits

42:56

kind of a small amount of people. If

42:59

you stop saying things like AI could

43:01

do or AI will do, you

43:04

have to start asking what AI can

43:06

do, and the answer is not

43:08

that much, and probably not that much more in the

43:10

future. Sora is not going

43:13

to generate entire movies. It's going to continue

43:15

making horrifying human adjacent

43:17

creatures that walk like the AT-ATs from Star

43:19

Wars and cartoons that look remarkably

43:22

like SpongeBob SquarePants. Chat

43:25

GPT isn't going to run your business because

43:27

it can barely output a spreadsheet without fucking up

43:29

the basic numbers, if it even understands

43:31

what you're asking it to do in the first place. I

43:35

think that AI has maybe three quarters

43:37

to prove itself worthwhile before the apocalypse

43:40

really arrives. When

43:42

it does, you're going to see it first in the real infrastructure

43:45

companies, starting with Nvidia, which has grown

43:47

to about two trillion dollars in market capitalization

43:50

because of the chips they make, which are pretty much the

43:52

only ones that can power the AI revolution.

43:54

There are other companies like AMD and Micron,

43:56

but Nvidia is the one that's really grown. If

43:58

you watch any of their keynotes, they're

44:01

insane. They're just full of fan fiction. Once

44:04

Nvidia starts to see growth slow,

44:06

and Oracle in particular, a massive

44:09

data center company, a massive database company

44:11

as well, one of whose largest customers is Microsoft,

44:13

building data centers for them. Once

44:15

that starts slowing down, that's when you should

44:17

start worrying. But the real pain's

44:20

going to come for Amazon, Microsoft

44:22

and Google when it's clear

44:24

that there's not really that much revenue going into

44:27

their clouds. Once

44:29

that happens, once you start

44:31

seeing Jim Cramer on CNBC saying

44:33

I don't think the AI boom is here, despite

44:35

having said it just was, that's

44:38

when things get nasty and

44:40

the knock on effects will be horrible. It's

44:42

going to be genuinely painful, worse than we've seen the last

44:45

few years. And it's all a result

44:47

of the same problem. It's

44:49

all a result of the growth-at-all-costs tech economy.

44:52

When things are made to expand, when things

44:54

are made to build more rather than

44:56

build better, When you're building solutions

44:58

to use compute power to sell

45:01

cloud computing services rather

45:03

than helping real people make their lives

45:05

better. Tech

45:08

is not building for real people anymore. And

45:10

the AI revolution, despite its specious

45:12

hype, is not really for us.

45:15

It's not for you and me. It's

45:17

for people like Satya Nadella of Microsoft

45:20

to claim that they've increased growth by twenty

45:22

percent. It's for people like Sam

45:24

Altman to buy another fucking Porsche.

45:27

It's so that these people can feel important and be

45:29

rich, rather than improving society at all.

45:33

Maybe I'm wrong, Maybe all of this is the

45:35

future, maybe everything will be automated,

45:38

but I don't see the signs. This

45:41

doesn't feel much different to the metaverse.

45:44

There's a product, but in the end, what's it

45:46

really do? Just like the metaverse,

45:49

I don't think many people are really using it. All

45:51

signs point to this being an

45:53

empty bubble. And I'm sure

45:55

you're sick of this too. I'm sure that you're sick of the tech

45:58

industry telling you the futures here when

46:00

it's the present and it fucking sucks. And

46:02

I'm swearing a lot, and I'm angry, but

46:05

I'm justified in the anger I feel, and I'm not

46:07

telling you how to think. And I've heard from some of you

46:09

saying, oh, don't tell me how to think, and I agree. I agree.

46:11

I'm not here to tell you to be angry about anything.

46:13

But I want to give you at least my

46:16

truth, and I want to give you what I see

46:18

is happening, because I don't feel like enough people

46:20

are doing that in the tech industry. And

46:22

that's what better Offline is going to continue to be. I

46:25

really appreciate you listening. It's been

46:27

about a month month and a half since we started. It's

46:29

only going to get better from here. Thank you, thank

46:41

you for listening to Better Offline. The editor

46:43

and composer of the Better Offline theme song is

46:45

Matt Osowski. You can check out more

46:47

of his music and audio projects at mattosowski

46:50

dot com, M A T T O

46:52

S O W S K I

46:55

dot com. You can email me at EZ

46:57

at Better Offline dot com, or check out Better

46:59

Offline to find my newsletter and

47:01

more links to this podcast. Thank you so much

47:03

for listening. Better Offline

47:06

is a production of Cool Zone Media. For more

47:08

from Cool Zone Media, visit our website

47:10

coolzonemedia dot com, or check

47:12

us out on the iHeartRadio app, Apple Podcasts,

47:15

or wherever you get your podcasts.
