Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:03
Bloomberg Audio Studios, podcasts,
0:06
radio, news.
0:20
Hello and welcome to another episode
0:22
of the Odd Lots podcast. I'm
0:24
Joe Weisenthal.
0:25
And I'm Tracy Alloway.
0:26
Tracy, you know, we've done tons, of course,
0:29
on like electricity and AI
0:32
and data centers and all that
0:34
stuff, but we've never actually
0:36
done like a well, we've never talked to
0:38
someone who is building data centers.
0:42
Putting it all together, you mean.
0:43
Yeah, putting it all together like what you know,
0:45
just a bunch of, you know, we've had consultants,
0:47
so we talked to energy people, but like, how
0:49
does this business of essentially,
0:52
I guess, building a building, putting a bunch
0:54
of chips in there, getting the electricity, and
0:57
then in theory, selling all of that
0:59
at a markup? Like, how does it actually work?
1:01
You know?
1:01
What I was reading recently, this is kind of a tangent,
1:04
but not really because we're talking about the
1:06
physical and financial
1:09
process of building these things. But
1:12
I saw this online. There's
1:14
a guide to the, like, physical
1:16
planning around an
1:19
IBM System 360
1:21
from like nineteen sixty three or
1:23
something, and it's two hundred
1:26
and thirteen pages long.
1:27
Have you read it yet?
1:28
I did flip through it, there's like there's
1:31
guidance on minimizing vibrations
1:34
obviously, like temperature and humidity and
1:36
stuff like that. I did not read the full two hundred
1:38
pages, but I'm kind of thinking like if
1:41
this is what if this is all
1:43
the thinking that had to go into
1:45
like one computer, albeit a supercomputer
1:48
in the nineteen sixties, but like a pretty
1:50
basic machine. When we look back on it now, how
1:53
much planning and thinking has to go into
1:55
building like these huge cloud servers
1:58
and all their associated infrastructure,
2:00
both physical and software as well.
2:03
No, totally. And you know,
2:05
one of the ways that we've touched on this subject
2:07
a little bit is in our conversations with Steve
2:09
Eisman, who's been investing
2:12
at least as far as we know, in a lot of these like
2:14
industrial HVAC companies
2:17
and electricity gear
2:19
companies and stuff like that. So like companies
2:22
that have actually been around for a really long
2:24
time, sort of standard cyclical businesses,
2:27
and then they've like caught the secular tailwind
2:30
because with this boom in AI data
2:32
center construction, suddenly there's
2:34
this sort of continuous bid for all
2:36
their gear and services.
2:37
I'm going to start an anti-vibration floor
2:40
maker or something. Do you think that's a viable
2:42
business? Does anyone care about vibrations anymore?
2:44
I am certain that in various
2:47
high tech environments you do not want
2:49
to have vibrations. You know, you have, like, valuable
2:52
chips, you don't want them to be like degrading.
2:54
Because people are walking around.
2:55
Yeah, or just, you know, all the machines
2:58
and all your air conditioners and equipment
3:00
and all that stuff, you can't be having that stuff degrade.
3:03
Well, the other interesting thing that's happening
3:05
in the space now. So in addition to
3:08
the physical challenge of building a
3:10
bunch of this stuff, there's also the financial
3:12
aspect of it. And I guess
3:14
as AI becomes more and
3:16
more of a thing, and clearly, as you laid
3:19
out, there's a lot of enthusiasm around the space.
3:21
At the moment, you are seeing a bunch
3:23
of financial entities get interested as
3:26
well. So obviously venture capital has
3:28
been pouring money into the space, but we're starting
3:30
to see some new types of
3:32
financial investments in AI. And
3:35
I'm thinking about one thing in particular,
3:37
and it is the recent GPU
3:40
or chip backed loan that
3:43
was reported by the Wall Street Journal and
3:46
I think we should talk about that aspect.
3:47
Of it too, totally, because one of the things
3:49
that's happening in tech is this big
3:52
sort of shift from like, okay, we're all of
3:54
your costs in the past, where a lot of them
3:57
were sort of opex, the cost of engineers,
3:59
et cetera. And now suddenly tech
4:01
companies have to think about CAPEX for
4:03
the first time, these big upfront costs that
4:05
are in theory going to pay off for a long time, which
4:08
in theory then changes how you should think about
4:10
the financing model.
4:11
Absolutely. Well, I am
4:12
excited to say, because we literally do
4:15
have the perfect guest we're
4:17
going to be speaking with Brian Venturo.
4:19
He is the chief strategy officer at
4:22
CoreWeave. CoreWeave, for those
4:24
who don't know, it's probably the company
4:26
right now that people most associate
4:29
with being at the heart of the AI
4:33
data center boom. They have a bunch
4:35
of Nvidia chips, they have investments from Nvidia,
4:38
right here in the sweet spot. As
4:40
you mentioned, one of the interesting things
4:42
that's going on is they not long ago
4:44
announced a debt financing facility
4:47
backed basically by the GPUs
4:50
that they would acquire, so
4:52
literally the perfect person to understand
4:55
like the business of these
4:57
AI cloud data centers.
5:00
So Brian, thank you so much for coming in.
5:02
Thanks for having me. It's the second time I've been on the podcast.
5:05
That's right. We talked to Brian years ago.
5:07
It's interesting to think about at
5:09
that time because I think that may have been like twenty
5:11
twenty or twenty one, and the excitement
5:14
then was that these chips could be used
5:16
for crypto mining and other things
5:18
like sort of distributed video editing and stuff
5:20
like that, and then Ethereum
5:23
stopped using mining. But it was sort of fortuitous
5:25
timing because right around then AI went
5:28
crazy and that's probably I
5:30
don't know, in my view, maybe a higher use of these
5:32
chips before we get to that. Do you worry
5:34
about vibration in your data
5:37
center?
5:38
So everywhere that's close to a
5:40
fault line is designed around
5:43
that and is part of code. So you
5:45
know, the engineering firms that help us build these data
5:47
centers have taken all of that into account, and all
5:49
of our racks are you know, seismically
5:53
tuned to make sure that we can withstand
5:55
the normal vibration from the Earth. So
5:58
yeah, it's been something that's been in the annals
6:00
for a long time. Some of our hardware
6:02
manufacturers actually have vibration testing labs
6:05
where they put the racks on top of a big
6:07
kind of platform that shakes, and it's pretty
6:09
dangerous and uncontrollable and hard to watch.
6:12
But you know, there's people out there that have been solving
6:14
this problem for decades.
6:15
Now I missed the boat on that business.
6:17
It sounds like it's
6:19
been dealt with decades ago. Okay, well,
6:21
actually, why don't I start with a very
6:23
simple question, which is when
6:26
you're looking at the business of
6:29
CoreWeave, so a specialized
6:32
cloud service provider, let's put
6:34
it that way, what are the different components
6:37
that you have to think about? You know, Joe kind
6:39
of alluded to all these different ingredients
6:41
that go into the business, but walk us
6:43
through what those actually are.
6:46
Sure. So there's three pieces that, as
6:48
a management team, we think are incredibly critical to the
6:50
business. The first is,
6:53
you know, our technology services that we provide on
6:55
top of the hardware, right and this is
6:57
everything from the software layer through the support organization
7:00
to you know, how we work with our customers. This
7:02
isn't the type of thing that you just go plug in
7:04
and it works. In these large supercomputer
7:06
clusters, there may be two hundred thousand InfiniBand
7:10
connections that connect all the GPUs together, and
7:12
if one of those connections fails for whatever
7:14
reason, the job will completely
7:16
stop and have to restart from its previous
7:18
checkpoint. So, you know, everything that we do
7:20
on the software side and engineering side is to make sure
7:22
these clusters are as resilient and performant
7:25
as they possibly can be to ensure
7:27
you know, our customers can run their jobs, you
7:30
know, increase efficiency and get all
7:33
of the kind of monetary value
7:35
they can out of the chips. So technology
7:37
piece is really hard. It's something that I think
7:39
is very overlooked by the market, but it's
7:42
just as hard as the other two pieces
7:44
that this business stands on. The second
7:46
is, you know, the physical nature of the business
7:49
in that you have to actually build and run these data
7:51
centers and those hundreds of thousands of
7:53
connections inside the supercomputers. Like
7:55
somebody has to go put those together and make sure
7:57
they're clean and make sure they're labeled correctly
7:59
to be able to remediate failures. And
8:02
when you're building a thirty two thousand
8:04
GPU supercomputer that is one
8:06
of the three fastest computers on the planet.
8:09
You know, you're running thousands of miles
8:12
of cable inside a very dense
8:14
space, right. These data centers are built
8:16
very tight to make sure that you can connect
8:18
everything together, and that becomes
8:20
a huge logistical challenge. So, you know,
8:22
the data center piece, which we're going to talk more about today,
8:25
is very challenging to design for the use case.
8:28
And then the third piece is how the hell do you finance
8:30
the whole thing? Right, And you know, we've
8:32
been very successful in the
8:34
financing aspect of this, but you know,
8:36
whether you're financing technology operations
8:39
or the physical build of these things, it is an
8:41
incredibly capital intensive business
8:43
and constructing those financial
8:45
instruments to back our business is
8:47
very hard, and we have to be very very thoughtful
8:50
around who the counterparties are, how
8:52
do we think about credit risk, how do our investors
8:54
think about that credit risk, How do we deal
8:56
with contingencies inside the contracts to
8:59
make sure that they are financeable on the scale that we've
9:01
done over the last eighteen months.
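The failure-and-restart dynamic Brian describes, where a single bad InfiniBand link kills the whole job and everything rolls back to the last checkpoint, can be sketched as a toy simulation. The names and numbers here are illustrative assumptions, not CoreWeave's software:

```python
import random

CHECKPOINT_EVERY = 100  # steps between checkpoints (illustrative)

def save_checkpoint(step, state):
    # Persist the job state so a failure only loses progress
    # made since the last checkpoint.
    return {"step": step, "state": dict(state)}

def train(total_steps, failure_rate=0.001, seed=0):
    random.seed(seed)
    checkpoint = save_checkpoint(0, {})
    step, state = checkpoint["step"], dict(checkpoint["state"])
    restarts = 0
    while step < total_steps:
        # Any one of thousands of links failing kills the whole job...
        if random.random() < failure_rate:
            restarts += 1
            step = checkpoint["step"]          # ...so roll back to the
            state = dict(checkpoint["state"])  # last saved checkpoint.
            continue
        step += 1
        state["last_step"] = step  # stand-in for real training work
        if step % CHECKPOINT_EVERY == 0:
            checkpoint = save_checkpoint(step, state)
    return step, restarts

steps, restarts = train(1000)
print(steps, restarts)
```

The checkpoint interval bounds how much work any one failure can destroy, which is why the resiliency engineering Brian mentions matters so much at this scale.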
9:02
Talk to us a little bit more. We could probably
9:05
talk about data center financing
9:07
credit and have that be
9:10
a whole episode, but when you think about you
9:12
have to think about your counterparty's
9:15
credit risk. Talk to us a little bit about what
9:17
who those are, what the type
9:19
of entity is.
9:20
Sure, so I'll get myself in trouble
9:22
if I just start naming them off. Yeah,
9:24
some of them are more public than others. You
9:27
know, I'm going to refer to them as you
9:29
know, hyperscale customers. We
9:31
have AI lab customers, we
9:33
have large enterprise customers.
9:36
We've really constructed our portfolio of business
9:38
around the idea that you know, if we're going
9:40
to build ten billion dollars of infrastructure for somebody,
9:43
we have to know there's a balance sheet we can lean into
9:45
behind it, right? And with
9:47
the pace at which we've grown, you
9:51
know, our customers are demanding scale
9:53
so quickly that the credit
9:55
of the counterparty is incredibly important
9:57
to finding the low cost of capital we have with these debt
10:00
facilities we've announced, right, So you know, when
10:02
people talk about how this is a credit facility
10:04
backed by GPUs, it's not really backed
10:06
by GPUs. It's backed by you know, commercial
10:08
contracts with large international
10:11
enterprises that may have triple-A credit, right?
10:14
So you know, it's the framing of the...
10:15
Trade receivables finance.
10:17
Basically it's closer to trade receivables
10:19
financing than it is Hey, we're going to go leverage up
10:21
a bunch of GPUs and see what happens.
10:23
Huh, okay, well walk us through the
10:25
I guess like the sequence
10:28
in some of these financing agreements. So you
10:31
know, if a customer comes to you and they
10:33
say, we want a certain amount of
10:35
compute, can you do this for us?
10:37
And you start going down the process
10:40
of like, okay, what do we need to make this happen?
10:43
What do those like financial agreements
10:45
actually look like. And who's bearing the initial
10:48
risk? Is it the customer? Is it you?
10:51
Good question.
10:52
So when we're approached by a customer, right, you know,
10:54
the ask is typically going to be pretty
10:56
pretty general, and they're going to say,
10:58
hey, we're looking for capacity in Q one
11:00
of next year. What's the largest thing you can do? And
11:04
you know, we take that effectively as
11:06
a mandate of, okay, hey, you know, this customer,
11:08
we're now in business.
11:09
Because, you know, we're really comfortable with them, we know that
11:11
we're going to get a contract done. We'll go out and we'll
11:13
try to secure an asset to you know, to go
11:15
build it. And we may have it in our portfolio already.
11:17
It may have been a strategic investment that we made.
11:20
But once we find the data center asset, that's when we go back
11:22
to the customer and say, okay, like we can commit to doing
11:24
this. This is the timeline. We'll structure
11:26
a contract around it. Depending upon
11:28
who the customer is. There may or may not be some credit
11:30
support associated with it around the scaling
11:33
of the you know, that asset, and
11:35
then we'll get a commercial contract
11:37
in place, and we will initially
11:39
fund a large portion of that
11:42
project off of our own balance sheet. Right.
11:44
It's why you also see us raising equity, right,
11:46
is we have to have the capital to accelerate the business.
11:49
And then once we have that and we're making progress,
11:51
you know, think about it as you're building
11:53
real estate. Right, you have a construction loan and then you have a stabilized
11:56
asset loan, and we basically
11:58
fund the construction loan piece off of our balance sheet.
12:00
When we get to a more stabilized asset, that's when we go
12:02
out and kind of do that trade financing
12:04
or trade receivables financing with
12:06
our partner lenders. You know, they worked with
12:08
us before, they know that these things are going to stand up, They know
12:10
how they perform, and at that point in
12:12
time, it's pretty easy for them to underwrite that risk.
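As a rough illustration of the trade-receivables framing, with entirely hypothetical figures, a lender sizing a facility against contracted payments rather than against GPU resale value might compute something like:

```python
def borrowing_base(contracted_payments, advance_rate=0.80):
    # Lenders advance a fraction of the contracted revenue stream,
    # not of the hardware's resale value. The 80% advance rate and
    # the payment amounts below are hypothetical.
    return advance_rate * sum(contracted_payments)

# A hypothetical four-year take-or-pay contract paying $250M per year:
payments = [250e6] * 4
print(borrowing_base(payments))  # advance against $1B of receivables
```

This is why the counterparty's credit, rather than the chips themselves, drives the cost of capital in the structure Brian describes.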
12:31
It's funny. Tracy and I had coffee with
12:33
someone yesterday who
12:36
is sort of in the space, who I won't dox here,
12:39
And I was like, what should we ask Brian? And he's like, ask
12:41
him why he won't let my company, why
12:43
I'm still on the waiting list or something, or why he hasn't
12:45
approved my company to use
12:48
core weave. But what are some of the
12:50
bars or the threshold? So you know,
12:52
apparently there's a lot of demand for
12:54
compute these days. What does it
12:56
take to get in the door and get access
12:59
to some of your chips and electricity?
13:01
So it's it's a great question. It's
13:04
a question that we get all the time from our sales
13:06
teams, right is you know, we're faced a lot
13:08
with a sales team that is incredible
13:10
at delivering product to customers,
13:13
and we don't have anything to sell. And it's
13:15
kind of my job. As the strategy
13:18
organization at CoreWeave, we're responsible
13:20
for two things: product and
13:22
infrastructure capacity. And you
13:24
know, I spend most of my time going out and finding those
13:26
data centers and being able to support those deals and
13:29
the growth that we had over the past twelve months.
13:32
The company was pretty flat out right
13:34
in building and delivering this infrastructure. You
13:36
know, publicly on our documentation page
13:39
it says that we have three regions. We'll have twenty
13:41
eight regions online by the end of the year. I think
13:43
we delivered eleven of them in Q one alone,
13:45
Right, So we're building at a scale, you
13:48
know, i'd say that almost larger than some of the
13:50
three big hyperscalers. But in
13:53
terms of how do you become a customer of CoreWeave,
13:55
it's really relationship driven, right? We
13:57
want to make sure that we're going to be able to be successful
14:00
with our customers and have an engineering relationship
14:02
and we're aligned on what they need and
14:04
we can deliver what they need.
14:05
The last thing that we want is for somebody to walk in the door
14:08
and say, hey, I need this for three weeks
14:10
and two weeks into it, they're unhappy and
14:13
we can't give them what they need to be successful. Right?
14:15
you know, our customers are making such large
14:17
investments in this infrastructure, that we have
14:19
to have, you know, a lot of conviction
14:22
that we will be successful with them
14:24
and provide a good experience. So it's
14:26
not that we're trying to keep people out, it's
14:28
we're trying to ensure positive experiences
14:30
for people that we do bring on board.
14:32
Do you build complete housed
14:35
facilities, or is it all, you're
14:37
going to bring your chips and expertise into
14:40
an existing Tier one data
14:42
center and essentially rent floor space from them.
14:44
Yeah, so a year ago, we
14:47
were effectively just a colocation tenant, and
14:49
now we've gone a lot more vertical
14:51
for some strategic builds where
14:54
we're either a partner in the project where we own equity
14:56
in the development company, or we're building the project
14:58
ourselves. We've been scaling that team up
15:01
over the past six months, and we had
15:03
to at our scale to be able to guarantee
15:05
outcomes. Right, is, we were in a position
15:07
where we had data centers getting delayed with things
15:09
that weren't communicated to us, and
15:11
you know, we had to go build the capability to handle
15:14
that situation and you know, make sure we
15:16
can still deliver for our customers.
15:17
One of the differentiators that you and some
15:20
of your colleagues have emphasized previously, is
15:22
this idea that you're designing the
15:24
server clusters kind of from the ground up,
15:27
whereas like other hyperscalers
15:30
maybe are doing it on a sort of different
15:32
mass scale. But can you walk us through
15:34
like what is the benefit
15:37
of doing it that way? And then secondly,
15:40
does that end up being an impediment
15:43
to I guess efficiencies
15:45
or economies of scale, and
15:47
how customized Like do you really get here?
15:49
So from a customization perspective, it's
15:52
aggressive, right, And I say
15:54
that because you know, our customers are
15:56
involved in the design of you know, our network
15:58
topology of the East West fabric for the GPU
16:00
to GPU communication, for things
16:02
like cooling. You know, I have customers that tour
16:04
the data centers under construction
16:06
with me like once a week, and it's
16:10
to the point that they're
16:12
impacting how we build
16:15
the base level networking products to ensure
16:17
they have enough throughput to you
16:19
know, meet their use case needs. Whereas
16:21
in, you know, what we call the legacy
16:24
hyperscaler installations,
16:26
maybe they have a couple
16:28
thousand GPUs that are in a data center that was really
16:30
built for CPU computation or
16:33
to provide services to ten thousand customers,
16:35
really with a much lower base
16:38
expectation of what they're going to be doing. Right,
16:40
So it's things around connectivity
16:42
for storage, it's things around power and cooling,
16:45
It's things around how they want to
16:47
be able to optimize their workloads
16:50
inside of the GPU to GPU communication.
16:52
You know, we have some customers that even customize
16:54
their InfiniBand fabrics and the size
16:57
of those fabrics and how they connect together. So you know,
16:59
we work with them to really understand what their use case is,
17:01
where they're worried currently and in the future, and
17:03
then design around that. So it's a pretty
17:05
comprehensive program when we're building
17:07
something from the ground up.
17:09
And how much complexity does that introduce
17:11
into the business and does it end up being
17:14
a limiting factor on your growth or
17:16
is demand just so strong at the
17:18
moment that it's not really an issue.
17:20
The customization that we do is typically going to be
17:22
above what our base level offering is, meaning
17:25
the environment will be more performant because
17:27
the customer required it. So it's typically
17:29
not going to be limiting to us from a future
17:32
you know, revenue or resale perspective. It's
17:34
going to make the asset more valuable. But you
17:37
know, we're designing our reference
17:39
builds for ninety nine percent of use cases,
17:41
and we're trying to price it efficiently, and then
17:43
when a customer wants something above and beyond, you
17:45
know, it impacts price. But for these installations it's probably
17:48
de minimis, right. So you know,
17:50
it doesn't really add a lot of complexity for us
17:52
from a business perspective, so we're
17:54
happy to do it.
17:55
You mentioned that some of the hyperscalers,
17:58
yes they have GPUs, but they like
18:00
built in an environment for
18:03
like legacy CPUs.
18:06
Can you talk a little bit about just
18:08
the difference between the legacy
18:11
architectures and the new one and then in
18:13
the design, like what kind of bottlenecks you run
18:15
into? Are there issues with labor,
18:17
like the types of people who know how to string these
18:19
things together well, or other different
18:22
cooling requirements for this type
18:24
of compute environment that
18:26
did not exist, Like what are what are the challenges
18:29
in building out these sort of like fundamentally
18:32
different environments.
18:33
Yeah, so that's changed also in the last
18:35
twelve months in that you used
18:38
to be able to take what was an enterprise data center
18:40
and you know, creatively retrofit it
18:42
to be capable of supporting the AI
18:44
workloads to a certain density level.
18:46
Okay, right, Like instead of filling up a cabinet,
18:48
you could put two servers in a cabinet and you could
18:51
meet the power and cooling requirements
18:53
of the installation. You used
18:55
a lot more floor space, but it was
18:57
doable. One of the incredible things about Nvidia
19:00
is that they're always pushing the boundary on the engineering
19:02
side, and their next generation of chips
19:04
is largely dependent upon much
19:06
more aggressive heat transfer, and they've introduced liquid
19:09
cooling to the reference architectures. So as
19:11
liquid cooling comes in, it changes
19:13
what type of data center is capable of doing
19:15
this, and it truly requires
19:18
that ground up redesign and
19:20
almost greenfield only build
19:22
to support it. Is you've gone from an environment
19:24
where you could take an enterprise data center
19:26
and deploy fewer servers per cabinet and get
19:28
away with it to hey, nobody's
19:31
ever built this before. It's at an incredible
19:33
scale and it has to happen on a yearly
19:35
cadence now, so the data center
19:37
industry is in a full sprint to figure
19:39
out, Okay, how do we do this? How do we do it quickly?
19:42
How do we operationalize it right? And
19:44
you know that's kind of where I've been spending all of my time
19:46
over the past six months.
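The density shift Brian describes can be put in rough numbers. A sketch with assumed figures (these are illustrative round numbers, not CoreWeave's actual designs):

```python
# All figures below are illustrative assumptions.
GPU_SERVER_KW = 10       # one modern 8-GPU training server, roughly
ENTERPRISE_RACK_KW = 20  # what a retrofitted air-cooled cabinet can feed
LIQUID_RACK_KW = 100     # what a purpose-built liquid-cooled rack can feed

retrofit_fit = ENTERPRISE_RACK_KW // GPU_SERVER_KW  # 2 servers per cabinet
greenfield_fit = LIQUID_RACK_KW // GPU_SERVER_KW    # 10 servers per rack

# Cabinets needed to house 1,000 servers under each approach:
retrofit_cabinets = 1000 // retrofit_fit
greenfield_cabinets = 1000 // greenfield_fit
print(retrofit_cabinets, greenfield_cabinets)  # 500 vs 100
```

Under assumptions like these, the retrofit approach burns several times the floor space, which is the "two servers in a cabinet" trade-off Brian mentions, and liquid cooling is what lifts the per-rack power ceiling.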
19:48
Can I ask a really basic question, and
19:50
we've done episodes on this, but I would
19:52
be very interested in your opinion. But
19:55
why does it feel like customers
19:58
and AI customers in particular, are
20:01
so I don't know if addicted
20:03
is the right word, but like so devoted
20:05
to Nvidia chips? Like what
20:08
is it about them specifically that
20:11
is so attractive? How much
20:13
of it is due to like the technology
20:15
versus say, maybe the interoperability.
20:18
So you have to understand that when you're
20:20
an AI lab that has
20:22
just started, and
20:25
it's an arms race in the industry to deliver product
20:27
and models as fast as possible, that it's
20:29
an existential risk to you that
20:32
you don't have your infrastructure be
20:36
like your Achilles heel. Right, And
20:38
Nvidia has proven to be a
20:41
number of things. One is they're
20:43
the engineers of the best products, right.
20:47
They are an engineering organization
20:49
first, and that they identify and solve problems.
20:51
They push the limits. You know, they're willing to
20:53
listen to customers and help you solve problems
20:55
and design things around new use cases.
20:58
But it's not just creating good hardware.
21:01
It's creating good hardware that scales and that they
21:03
can support at scale. And when you're building
21:05
these installations that are hundreds of thousands of components
21:08
on the accelerator side and the InfiniBand link
21:10
side, it all has to work together well. And
21:13
when you go to somebody like Nvidia that
21:15
has done this for so long at scale, with
21:17
such engineering expertise, they eliminate
21:20
so much of that existential risk for these startups. Right.
21:22
So when I look at it and I see some of these smaller
21:25
startups saying we're going to go a different route, I'm like, what
21:27
are you doing? Right? You're taking
21:30
so much risk for no reason here? Right,
21:32
this is a proven solution, it's the best
21:34
solution, and it has the most community support,
21:37
right, Like go the easy path because the venture
21:39
you're embarking on is hard enough.
21:41
Is it like the old what was that old adage?
21:44
Like no one ever got fired for buying Microsoft?
21:46
Yeah, or IBM,
21:49
something like that.
21:50
But the thing here is that it's not even
21:53
nobody's getting fired for buying the tried
21:55
and true and slower moving thing. It's
21:58
nobody's getting fired for buying the tried, true
22:00
and best performing and you know bleeding
22:02
edge thing.
22:03
Right.
22:03
So I look at the folks that are
22:05
buying other products and investing in other
22:08
products almost as like they're trying. They
22:10
almost have a chip on their shoulder and they're going against the mold
22:12
just to do it.
22:14
There are competitors to Nvidia
22:16
that claim cheaper or
22:18
more application specific
22:21
chips. I think Intel came
22:23
out with something like that. First of
22:25
all, from the core weave perspective,
22:28
are you all in on Nvidia hardware?
22:31
We are?
22:32
Could that change?
22:33
The party line is that we're always going
22:35
to be driven by customers, right, and
22:37
we're going to be driven by customers to the
22:40
chip that is most performant, provides
22:43
the best TCO, is best supported
22:46
and right now and in what I think is
22:48
the foreseeable future, like I believe
22:50
that is strongly Nvidia.
22:52
Think about, okay, maybe one day you guys IPO,
22:54
And I'm looking through the risk factors, and one of
22:56
the risk factors, right, we have a heavy
22:59
reliance on Nvidia chips, there is a risk
23:01
that a competitor, that kind of thing. What would it take
23:03
for one of these competitors
23:05
that does ostensibly offer cheaper hardware
23:08
or perhaps lower electricity
23:10
consumption, in your view, to
23:13
make one of those risk factors real.
23:15
I think that they'd have to be willing to quote
23:18
unquote buy the market. And when
23:20
I say that, I mean they'd have to subsidize their hardware
23:23
to get a material market share.
23:26
And from what I've seen, there's no one else that's really
23:28
been willing to do that so far.
23:30
And what about Meta with PyTorch
23:32
and all their chips.
23:33
So their in-house chips. I think
23:36
they have those for very very specific production
23:38
applications, but they're
23:40
not really general purpose chips, okay,
23:43
right, And I think that when you're building something for general
23:45
purpose and there has to be flexibility in the use case.
23:48
While you can go build a custom ASIC to solve
23:50
very specific problems, I don't think
23:52
it makes sense to invest in those to go
23:55
be a five-year asset if you don't necessarily know what you're
23:57
going to do with it.
23:58
So you talked about the advantages
24:01
of Nvidia hardware like the chips
24:03
themselves, but one of the things you sometimes hear
24:06
is that those same chips might perform differently
24:09
in different clouds. So what is
24:11
it that you can do to sort
24:13
of boost the performance of the same chip
24:16
in your infrastructure or
24:19
ecosystem versus say an AWS
24:21
or someone like that.
24:22
Sure, a great question. We do a lot of work around
24:24
this internally and it's a big part
24:26
of our technical differentiation. And
24:29
what we call it internally is mission control. And
24:31
mission control is effectively a portfolio of
24:33
different services that we run on our infrastructure
24:36
to make sure that these incredibly complex
24:38
supercomputers are healthy and performant
24:41
and are optimized, you
24:43
know, where we take a lot of that responsibility
24:46
off of our customer engineering teams, right,
24:48
And it sounds like that might be an easy
24:50
lift, but when you're running at supercomputer
24:53
scale, you know you need a team of fifty to
24:55
do that, right, So we provide a ton of software automation
24:57
around that, providing that health checking
24:59
and observability to our customers. But
25:01
it's also the engineering engagement, right, is
25:04
you know, working with our customers to understand, Okay,
25:06
what are you doing, what's the best way to optimize
25:08
this, how do we you know, how did we design
25:10
the data center to be more performant, to make sure
25:12
your storage solution was correct, your networking
25:15
solution was correct. So it's not just
25:17
a, hey, CoreWeave provides
25:19
like this one little thing that makes it better. It's
25:21
the comprehensive solutions, starting from the data
25:24
center design, through the software automation
25:26
and health checking and monitoring, via mission control,
25:28
via the engineering relationships that really add
25:30
that value.
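A toy version of the kind of fleet health checking Brian is describing might look like the sketch below. The probes, thresholds, and field names are hypothetical; nothing here is CoreWeave's actual mission control API:

```python
def check_node(node):
    """Run basic health probes on one node; return a list of failures."""
    failures = []
    if node["gpu_ecc_errors"] > 0:   # memory errors on the accelerators
        failures.append("gpu_ecc")
    if node["ib_link_up"] is not True:  # a down InfiniBand link
        failures.append("infiniband_link")
    if node["temp_c"] > 85:          # thermal problem (threshold assumed)
        failures.append("thermal")
    return failures

def healthy_nodes(fleet):
    # Cordon anything failing a probe so jobs only land on good nodes.
    return [n["name"] for n in fleet if not check_node(n)]

fleet = [
    {"name": "node-a", "gpu_ecc_errors": 0, "ib_link_up": True,  "temp_c": 70},
    {"name": "node-b", "gpu_ecc_errors": 3, "ib_link_up": True,  "temp_c": 72},
    {"name": "node-c", "gpu_ecc_errors": 0, "ib_link_up": False, "temp_c": 68},
]
print(healthy_nodes(fleet))  # ['node-a']
```

Automating checks like these across hundreds of thousands of components is what takes the "team of fifty" burden off customer engineering teams.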
25:31
Let's talk about electricity, because this has become
25:34
this huge talking point that this is the major
25:36
constraint and now that you're becoming more vertically integrated
25:39
and having to stand up more of your operations.
25:42
We talked to one guy formerly at Microsoft
25:44
who said, you know, one of the issues is that there may
25:47
be a backlash in some communities who don't
25:49
want, you know, their scarce
25:51
electricity to go to data centers when
25:53
they could go to household air conditioning. What
25:55
are you running into right now or what are you
25:57
seeing?
25:58
So we've been very very selective
26:00
on where we put data centers. We don't
26:02
have anything in Ashburn, Virginia, right? And the Northern
26:05
Virginia market, I think is incredibly saturated.
26:07
There's a lot of growing backlash in that market
26:09
around power usage and you know,
26:12
just thinking about how do you get enough diesel trucks in
26:14
there to refill generators if they have a prolonged
26:16
outage.
26:17
Right.
26:17
So I think that there's some markets where
26:19
it's just like, okay, let's stay away from that. And
26:22
when the grids have issues and
26:25
that market hasn't really had an issue yet, it
26:27
becomes an acute problem immediately. Like just think
26:29
about the Texas power market crisis
26:31
back in I think it's twenty twenty one, twenty
26:34
twenty, where the grid wasn't really set up to be able
26:36
to handle the frigid temperatures
26:38
and they had natural gas valves that were
26:40
freezing off at the natural gas generation
26:43
plants that didn't allow them to actually come
26:45
online and produce electricity no matter how high
26:47
the price was. Right. So there's there's going
26:49
to be these acute issues that you know, people
26:51
are going to learn from and the regulators are going to learn from
26:54
to make sure they don't happen again. And we're
26:56
kind of siting our plants and
26:58
our data centers in markets where
27:00
we think the grid infrastructure is capable of handling
27:02
it right, And it's not just is there
27:05
enough power, it's also other things.
27:07
You know, AI workloads are pretty
27:09
volatile in how much power they use, and they're
27:11
volatile because you know, every fifteen minutes
27:13
or every thirty minutes, you effectively stop
27:15
the job to save the progress you've
27:17
made, right, and it's so expensive
27:20
to run these clusters that you don't want to lose hundreds
27:22
of thousands of dollars of progress, So they
27:24
take a minute, they do what's called checkpointing, where
27:26
they write the current state of the job back
27:28
to storage, and that checkpointing
27:31
time, your power usage basically goes from one hundred
27:33
percent to like ten percent, and then
27:35
it goes right back up again when it's done saving it. So
27:38
that load volatility on a local
27:40
market will create either voltage spikes
27:42
or voltage SAgs, and a voltage sag
27:45
is what you see is what causes a brown out
27:47
that we used to see a lot of times when people turn their cognitioners
27:49
on and it's thinking through, Okay, how do I ensure
27:52
that, you know, my AI installation
27:55
doesn't cause a brown out when people are turning their
27:58
you know, during checkpointing, when people are turning the
28:00
air conditioners on. Like that's the type of stuff that
28:02
we're thoughtful around, like how do we make sure we don't do this right.
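The load swing Brian describes can be made concrete with a toy simulation. This is an illustrative sketch only; the cluster size, checkpoint interval, and ten percent power floor are invented numbers for the sake of the example, not CoreWeave figures.

```python
# Toy simulation of the checkpointing load swing described above.
# All numbers (100 MW cluster, 30-minute interval, 1-minute checkpoint,
# 10% floor) are illustrative assumptions, not CoreWeave figures.

def power_profile(cluster_mw=100.0, interval_min=30, ckpt_min=1, hours=2):
    """Per-minute power draw (MW) for a train/checkpoint cycle."""
    profile = []
    for minute in range(hours * 60):
        # The last ckpt_min minutes of each interval are spent writing
        # the job state to storage, when GPU power drops sharply.
        in_checkpoint = minute % interval_min >= interval_min - ckpt_min
        profile.append(cluster_mw * (0.10 if in_checkpoint else 1.0))
    return profile

profile = power_profile()
swing_mw = max(profile) - min(profile)
print(f"Load swing at each checkpoint: {swing_mw:.0f} MW")  # 90 MW
```

Even in this toy version, a cluster shedding ninety percent of its draw for a minute every half hour is exactly the kind of step change that produces voltage sags on a local grid.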
28:05
And you know, talking to engineers, and
28:07
Nvidia's engineering expertise, like they're
28:09
working on this problem as well, and they've
28:12
solved this for the next generation. So
28:15
it's everything from is there enough power there? What's
28:17
the source of that power? You know, how clean is
28:19
it? How do we make sure that we're investing in solar
28:21
and stuff in the area to make sure that we're not
28:23
just taking power from the grid. To also
28:25
when we're using that power, how is it going to impact the consumers
28:28
around us?
28:29
I want to ask you more about what Nvidia
28:31
is doing, but just on that note, what's
28:33
the most important metric for
28:36
evaluating a data center's
28:38
quality or performance? Is it like
28:41
days without brownouts or an
28:43
interrupted power supply, or is it measures
28:45
of efficiency like power usage effectiveness
28:48
or something like that. If I'm serving a bunch
28:50
of data centers, I want to pick a good one. What
28:52
should I be looking for?
28:53
So right now, the market's pretty thin, So
28:56
right now.
28:58
Options. Okay, I
29:00
imagine I'm like the biggest customer on earth
29:02
and I can get in anywhere. What should
29:04
I be looking for?
29:06
So it's the first thing
29:08
goes back to the electricity piece, right, is
29:10
the grid stable? Is there enough power supply?
29:13
You know, is there excess renewable generation
29:15
in the area that doesn't have the ability to make it
29:17
to downstream consumers? Right? A lot of the
29:19
renewables that we have in the US are built
29:21
in places that don't necessarily have the consumers.
29:24
So you're siting these data centers
29:26
in places where you have this excess supply,
29:29
So that that's the first piece, right, is how
29:31
good is the electricity supply? And how
29:34
angry are the people around me going to be if I take it? Now?
29:37
You go from there into everything else is
29:39
kind of solvable, right, And the way
29:41
that you design it, and if you're building a green field,
29:43
it's okay. You know what type of ups systems
29:46
am I putting in? Are they capable of handling
29:48
that load volatility?
29:50
You know?
29:50
How am I thinking about my cooling solutions?
29:54
There's been a big shift to liquid
29:56
cooling, right, and liquid
29:58
cooling, from a PUE perspective, isn't
30:00
a thirty to forty percent decrease
30:03
in electricity utilization like people think.
30:05
It's more like sixty to seventy percent, right,
30:08
And the reason for that is it's not just the
30:11
efficiency of the data center plant.
30:14
It's also that now if you're not cooling things
30:16
with air, you don't have to run the fans inside the servers
30:18
as well. And for these AI installations,
30:21
because they're so dense, the fans consume
30:23
a lot of energy. Right. So everything
30:25
that we're building now is a combination of liquid
30:27
and air cooling, right. And the liquid
30:29
cooling piece has solved the PUE issue,
30:32
right, And we're everything we're doing is trying
30:34
to say, Okay, how much power
30:36
can we use only for running our critical
30:39
IT operations versus
30:42
cooling the environment making sure the environment's
30:44
running correctly from a resiliency perspective, And
30:47
there's been big strides made there over the last twelve months.
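The PUE arithmetic Brian is gesturing at can be sketched with made-up numbers. The key point is that the standard PUE ratio counts server fans as IT load, so moving to liquid cooling saves more total cooling energy than the facility PUE change alone suggests. All figures below are invented for illustration.

```python
# Invented numbers illustrating why liquid cooling saves more than the
# PUE ratio alone suggests: server fans count as "IT load" in PUE, but
# they are really cooling overhead that liquid cooling mostly removes.

compute_kw = 850.0                  # power doing actual GPU/CPU work

# Air-cooled facility: chillers/air handlers plus in-server fans.
air_facility_cooling_kw = 400.0
air_server_fans_kw = 150.0
air_cooling_total = air_facility_cooling_kw + air_server_fans_kw

# Liquid-cooled facility: pumps and CDUs, fans largely removed.
liquid_facility_cooling_kw = 150.0
liquid_server_fans_kw = 30.0
liquid_cooling_total = liquid_facility_cooling_kw + liquid_server_fans_kw

savings = 1 - liquid_cooling_total / air_cooling_total
print(f"Total cooling-energy reduction: {savings:.0%}")  # ~67%
```

With these invented inputs the all-in cooling energy drops by roughly two thirds, in the sixty to seventy percent range Brian cites, even though a PUE calculation that ignored the in-server fans would show a smaller improvement.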
31:06
Does colocation trump
31:08
grid reliability? Like if I'm Elon
31:11
Musk building some sort of
31:13
new AI thing as I think he's doing
31:16
in Texas, say like,
31:18
am I just going to have to find a data center
31:20
in Texas? Or how much flexibility do
31:22
I have to use one
31:24
further away?
31:25
So great question, it's
31:28
it's a different answer for different use cases
31:31
at different times. And right
31:33
now, you know, we were in the middle of this rush
31:35
to train whether they're open
31:37
source or proprietary foundation models at
31:39
the largest, most valuable companies in the world, and they're
31:42
mostly worried about access to contiguous
31:45
compute capacity. Right, how much compute
31:47
can I get in one location, all connected together
31:49
so I can go faster than the next guy. But
31:52
when the models are trained, they
31:54
want that compute to then be local to their
31:56
customer base, right, is how do they take it
31:59
from the middle of nowhere and then go serve it
32:01
in the metropolitan markets. And as the
32:03
use cases are more distilled and they get more
32:05
real time, think like the
32:08
type ahead suggestions that you get in your Gmail
32:10
account right as you're typing something, and it's getting
32:12
better and better. It's you know, that's
32:14
an AI model somewhere like predicting
32:16
what you would want to say next, And they
32:19
want to make sure that's delivered at human speed.
32:21
So that human speed is a
32:24
latency consideration. Right as
32:26
you're siting those GPUs and you're siting that compute
32:28
to be local to the people that are using it. So that
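The "human speed" point is a latency floor that's easy to back-of-envelope: signals in optical fiber travel at roughly two thirds the speed of light, about 200 km per millisecond, so distance alone bounds round-trip time before any compute happens. The distances below are arbitrary examples, not real deployments.

```python
# Back-of-envelope latency floor for serving users over distance.
# Light in fiber covers roughly 200 km per millisecond (about 2/3 c);
# the distances are arbitrary examples, not real deployments.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Minimum network round-trip time in ms, ignoring routing/queuing."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("same metro", 50), ("cross-country", 4000)]:
    print(f"{label}: at least {round_trip_ms(km):.1f} ms round trip")
```

A cross-country round trip costs tens of milliseconds before the model does any work, which is why serving capacity migrates toward metropolitan markets once applications go real time.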
32:32
move has started probably
32:34
four months ago where we saw customers
32:37
finally becoming concerned around latency for
32:39
their serving use cases. So initially
32:41
training, people don't really care where it is: cheap
32:43
power, reliable grid. They just need
32:45
it all contiguous and they need it fast. And then
32:48
down the road as their applications find
32:50
success, they're more worried about where the compute is for their customers.
32:53
What are some of the areas that are going to be the next
32:55
Northern Virginia when it comes to data
32:57
center clusters.
32:59
So I think we're seeing it in Atlanta
33:01
already, where Georgia has
33:03
paused or has attempted to pause some
33:06
of their tax incentives around it because they want to make
33:08
sure they do grid studies. I
33:11
think that we're probably going to see
33:13
it in some of the other hotspots.
33:14
You know.
33:16
You know, you see aws up in Oregon who
33:18
is trying to find creative alternative
33:20
ways to power their data centers from
33:23
non grid generation to alleviate some concerns
33:25
there. But you
33:27
know, I think that the market
33:29
has to solve this problem. And
33:31
you know, you're starting to see some of the startups around
33:34
nuclear generation in you
33:36
know, the small reactors at the data center
33:38
level. As people are you know, being
33:40
thoughtful for five to ten years from now.
33:42
Do you have any influence on the
33:44
type of power being built in
33:47
certain areas? You know, could you say to
33:49
a utility company of some sort, we're
33:52
here, we need access to energy,
33:54
but we want it to come in a particular
33:57
form.
33:57
So you can. But you have to understand that
33:59
the investment cycles and the physical build
34:01
cycles for those are so much longer than you
34:04
know how quickly our customers need
34:06
infrastructure, right. So you may go to a market
34:08
and say, hey, we're going to be here over the next ten years,
34:10
we'd like you to install XYZ, you know, renewables,
34:13
and they're happy to do it. It's just that
34:15
you have to find a medium term solution while
34:17
that's being built.
34:19
I'm going to ask a question. So there was a news
34:21
story, and maybe you won't comment on the
34:23
news story, specifically about CoreWeave
34:25
having made a one billion dollar offer
34:27
for a bitcoin miner called Core
34:29
Scientific, apparently
34:32
was rejected. According to things I've read in
34:34
the news. Setting aside this
34:37
deal, there's you know, there used to
34:39
be a lot of crypto mining and then ethereum
34:42
went from proof of work to proof
34:44
of stake and that all basically disappeared overnight.
34:46
There are still bitcoin miners. I never
34:49
get the impression it's like that great of business.
34:51
But whatever are there bitcoin
34:53
miners that have latent value
34:55
in the fact that they I mean,
34:57
I know those chips don't, the bitcoin mining
34:59
chips, the actual ASICs, don't work for AI
35:02
because all they are is bitcoin mining
35:04
chips. But are there by dint
35:06
of their access to electricity, space,
35:08
et cetera, is there a fair amount
35:11
of latent value in the
35:13
general physical structures that they've built
35:15
for the mining.
35:16
So I'm just not going to answer your question at all.
35:19
I'm gonna go on a tangent.
35:20
Okay, that's fine.
35:21
So I think that when
35:23
I think about core Weave and what our
35:25
mission is, it's to find
35:28
creative solutions to problems in
35:30
you know, various markets, and those
35:33
various markets can be blocking for us
35:35
and our customers to.
35:36
Achieve our goals.
35:37
So if power is a concern
35:39
for us, and power availability
35:41
and substations and substation transformers.
35:43
Bitcoin miners definitely have access to power.
35:46
That that is true.
35:47
I'm just stating fact you could keep
35:49
doing it.
35:50
So you know, as we go and we try
35:52
to solve these problems, you know, we're
35:55
going to go to places that others
35:57
may not have thought of, and we're
35:59
going to go do due diligence and I'm
36:01
going to personally go and walk the sites and I'm
36:04
going to you know, look through and see,
36:06
okay, can we.
36:07
Pull this off?
36:08
And we're going to get our engineering partners in
36:10
to help us design retrofits. And
36:13
you know, we're going to do deals with the companies
36:15
that we believe have the ability to provide
36:17
us value.
36:19
Since we're doing stuff in the news. This
36:22
has been in the news for a while, so it doesn't really count.
36:24
But the new Nvidia
36:26
chips, the GB two hundreds,
36:29
what will those do for CoreWeave
36:31
and when would you expect to get them?
36:33
What will they do for us? It's more about what they're
36:35
going to do for our customers, right, and
36:38
I think.
36:38
That they are.
36:41
This is a great question. They
36:44
are going to open up a
36:46
lot of both training and inference
36:48
use cases in the AI side
36:51
that I think our customers have
36:53
been blocked by, uh, with
36:56
the existing generation in that
36:58
you're now able to link seventy
37:01
two of these GPUs together to work almost
37:03
as one unit, and previously that
37:05
was limited to eight. They have
37:07
a much larger what's called the frame buffer, which
37:09
is how much memory that's usable for their matrix operations.
37:13
So you know, I think that we're going to see
37:15
a lot of new use cases show up for this stuff,
37:17
but I think it extends well
37:19
beyond AI as well, and
37:22
it's going to be a lot more useful for things like scientific
37:24
computing. One of the things
37:26
that has me really excited is the computational
37:29
fluid dynamics, and I'm specifically
37:31
thinking about the uses for that in F
37:33
one under the new regulation in twenty twenty
37:35
six. I'm excited for the
37:38
new platform. I think in a year and a
37:40
half people are going to be using it for things that are different
37:42
than anybody expects today. And
37:45
that's to me. The pace at which
37:48
this is changing is the piece that's really cool.
37:50
Wait, I'm sorry, I hate sports.
37:52
What's the twenty twenty six thing? Explain
37:55
how the Nvidia part fits in.
37:56
Yeah, So the F one platform,
37:58
they have very tight restrictions around what type
38:00
of compute and how much compute you can use to do aerodynamic
38:03
testing in your cars, and you can either
38:05
do real life testing in a wind tunnel or you can
38:08
do it through CFD analysis. And
38:11
one of the great uses for the, you
38:13
know, the Grace Blackwell and the Grace Hopper architectures,
38:16
in pairing that Grace superchip with
38:18
the GPUs, is they're great for CFD
38:21
workloads, right, and the.
38:23
CFD stands for computational fluid
38:26
dynamics yep, yep.
38:27
And the regulations around the existing
38:29
program in F one are they're only able
38:31
to use CPUs. They have very like specific
38:34
limitations around it. But there's been a lot of talk of
38:36
that changing for twenty twenty six
38:38
car models, and for me, like,
38:40
that's pretty cool and I'm gung
38:43
ho excited about possibly supporting
38:45
that.
38:46
That does sound very fun. I
38:48
want to get back to actually the financing a little
38:50
bit because I guess two
38:52
questions. So the logic
38:55
of why you would borrow
38:57
money both I guess for
38:59
the acquisition of chips, and the chips
39:01
are sort of collateral, but I understand they're not really
39:04
chip back loans per se.
39:07
A. Do you see your clients
39:09
getting more into debt financing
39:12
rather than equity financing. I mean, there's a whole
39:14
generation of software companies
39:17
from the ZIRP era that was just, you know, all
39:20
equity and never had any debt at all,
39:22
and they never really had to think about like their
39:24
compute costs, or they did, but not
39:26
as much. Do you think
39:29
that we'll see a rise in their own use of
39:31
debt instead of equity in terms of their own
39:33
financing. And another topic
39:35
we talk about a lot on the show private credit, like
39:38
is there an emergence of an ecosystem
39:40
of lenders for whom this is
39:43
going to become a specialty of some
39:45
sort.
39:46
So the first piece of the question, I don't believe
39:48
that the venture backed kind of AI lab
39:50
startups will ever take on debt in this type
39:53
of environment, largely
39:55
because they don't have the collateral to back
39:57
it. If they're buying cloud services to run their infrastructure.
40:00
And you may see some that start
40:02
to buy their own infrastructure and to do that themselves,
40:05
but it is a herculean task to do
40:07
this at scale. Right, There's a reason why clouds
40:09
exist is that there's a lot of complexity that they
40:11
abstract away. On the second question
40:13
around, is there a private credit sector
40:15
that's going to be built to do this? I think that
40:18
it's more you're seeing public lenders
40:20
that are extending into the private credit
40:22
space because the opportunities are there. And
40:25
I'm going to give you the party
40:27
line answer that my CEO gives all
40:29
the time is that you know, as we're
40:31
thinking about financing our business, the
40:33
biggest thing for us is our cost to capital, and
40:36
we're always going to do the things that provide us the lowest
40:38
cost of capital. And you know the lenders
40:41
that we work with, including Blackstone, that
40:43
have been so wonderful for us, you know, them
40:45
extending on the private credit side as
40:47
we go to the public markets because we're
40:49
dragged there by cost of capital concerns, I
40:52
would expect them to be involved as well, right,
40:54
So, I think it's a continuation of the business
40:56
they've been doing in the public markets, just kind of extending into
40:58
this capital intensive business.
41:00
Wait, what was I guess you
41:02
can't get into specific details, but
41:04
my impression was for these types
41:07
of loans that the interest rate is usually
41:09
higher than like a basic bank
41:11
loan or say issuing a
41:13
corporate bond.
41:15
I would definitely say our cost of capital is lower
41:17
than some of the corporate issuances out there, Okay,
41:20
but you know, our cost of capital today
40:22
is definitely higher
40:24
than if we were a public entity.
41:27
But specifically on the GPU backed
41:29
loans, and I know you keep saying it's not really a
41:32
GPU back loan, but that's sort of
41:34
an uphill battle to call it trade
41:36
receivables financing instead. It sounds
41:38
so much better that way, I know, I know, but like
41:41
on that in particular, Okay,
41:43
there's collateral, so maybe that brings
41:45
the overall like borrowing rate down.
41:47
But on the other hand, it's kind of a new thing, new
41:49
structure. How does that compare
41:51
with more traditional types of finance.
41:53
Yeah, so you know that every
41:56
credit facility that we do, the cost of capital
41:58
declines, and it's declining
42:00
because it's the execution risk
42:02
and the ongoing concern risk are reduced. Right.
42:04
And you know, when we first did this, people
42:07
like you guys are crazy. You have no history of execution.
42:09
And as we've gone through and we've done it,
42:12
like now there's a path that everybody that's
42:14
underwriting these loans now understands. Okay, this is what happens,
42:16
this is how it performs. This is what we should
42:18
expect from the customers. This is what we should expect from receivables.
42:20
They get more comfortable, they're willing to do it at more aggressive
42:23
rates, right, so that the risk premium
42:25
associated with it has just decreased over time.
42:27
Got it.
42:27
I just have one last question I sort
42:29
of touched on it earlier. But Okay, we know that power
42:32
is scarce. We know that, you
42:35
know, there's not an infinite number of
42:37
Nvidia chips et cetera. Like those
42:39
are quite scarce for
42:41
the other stuff. You know, we've done episodes in the
42:44
past like talking about like just generic
42:46
electrical gear components, and we've certainly done
42:48
a lot on like labor shortages. What
42:50
are you seeing on that front sort of like simple
42:53
gear and the sort of basic building
42:55
blocks of a new construction and
42:57
how difficult that is to acquire, versus,
43:00
say, if you were doing this, you know, you started
43:02
in twenty seventeen, I imagine a lot of the things were more
43:04
plentiful back then.
43:05
Yeah, so it's not even that they're less
43:08
plentiful today than they were. You know, the lead
43:10
times were always the lead times for this
43:12
electrical gear. It's that there was capacity
43:15
to go buy off the shelf, right
43:18
there was inventory in the data center market. And the inventory
43:20
is basically gone. And you know, I
43:22
see deals today that get brought to me
43:24
and there's seven people bidding on the same deal
43:26
and they're all trying to sell it to like similar customers.
43:29
So the market has gotten pretty thin. So
43:31
now you're looking at it, going Okay, my only
43:33
option here is for new built, and you're
43:36
looking at lead times that haven't really shifted
43:38
that much on things inside of the data center.
43:41
The substation transformers are multiple
43:43
years out, and part
43:46
of that reason is that it takes a year for them
43:48
to cure after they're manufactured. Like, there's
43:50
no getting around that, there's no speeding that piece up.
43:52
I mean, it takes a year.
43:53
When the transformer is built,
43:55
it's taking on so much power that
43:58
whatever the process is, it has to sit for
44:01
a year and harden before it's able
44:03
to take on that electrical load. So even if
44:05
you went and said, hey, I'm going to build ten more of these this year,
44:07
it's still a year away before you can use them.
44:09
Huh right.
44:10
And those are the types of things from a manufacturing
44:12
perspective you just can't get around, and it takes
44:15
time for the supply chain to catch up. But you
44:17
know, the problems that I'm solving on a day to
44:19
day basis in these builds isn't even
44:21
around the substation transformers. It's around
44:23
like small components that somebody missed when they
44:26
ordered the gear sixteen weeks ago. And
44:28
now you have to go scramble and call in favors
44:30
across the country of Hey, who has this part? I need
44:32
it by tomorrow because I have fifty thousand
44:34
GPUs that are blocked by this one little thing, right,
44:37
So it's a lot of it is logistical
44:39
and human coordination and solving dumb problems
44:41
in real time.
44:42
Brian Venturo, thank you so much for coming
44:45
on Odd Lots. That was fantastic. Thanks for having
44:47
me, Tracy.
45:00
I'm really glad we did that conversation
45:03
because there are a number of these sort of like big
45:05
picture ideas in there that we've
45:07
sort of hit on of course, about data centers
45:09
and AI and electricity consumption, and
45:11
it was really interesting to hear some of them.
45:14
So, like, for example, just
45:16
this idea of like northern Virginia
45:18
is out and like needing this sort of hunt
45:21
to find these spots in
45:23
the country where there is ample
45:25
electricity and basically
45:28
nobody local is going to get upset at you for
45:30
using it.
45:31
Yeah, no one will come out with pitchforks. The thing
45:33
that stood out to me from a bunch
45:35
of these conversations at this point is the
45:37
arms race aspect of it, and how
45:40
urgent building out AI
45:42
is for a lot of these companies, and then
45:45
there seems to be this mismatch
45:47
between the immediate need
45:49
for scale and compute
45:52
and energy now
45:55
versus these really long timelines
45:58
of actually building the stuff out and Brian
46:01
mentioning the substation transformers
46:04
taking a year to cure.
46:05
I had no idea about that.
46:06
I didn't know that either. But that's a really good example.
46:08
That's super interesting, and of course now
46:10
we have to do a how do you build a
46:13
substation transformer.
46:14
How do you cure a substation transformer?
46:16
Totally? I mean maybe this is probably something that electrical
46:18
engineer is not interesting to them at all, But
46:20
for me, I did not realize that there was this
46:22
one year long, one
46:25
year long curing process. You
46:27
know, I think there are like a couple other
46:30
things that now I want to talk
46:32
more about, so I'm interested. I
46:34
mean, like CoreWeave is an Nvidia company.
46:37
It's not owned by Nvidia, but you know, it's joined
46:39
at the hip in many respects. So how
46:41
difficult is it going to be either
46:44
for some other maker of
46:46
chips, whether it's an Intel
46:48
or some other maker of software
46:51
environments, whether it's Meta
46:53
and PyTorch going against CUDA
46:56
or whatever, like that's a really interesting
46:59
question to me, Like, you
47:01
know, we have to do more essentially on
47:03
like how much of a lock Nvidia really
47:05
has on this industry.
47:06
Yeah, this seems to be the really big
47:08
question. And then the other thing I was thinking
47:11
about, and I know Brian emphasized
47:13
this and other CoreWeave executives
47:15
have emphasized this before, but this idea
47:17
that hyperscalers maybe are
47:20
starting from a point of being disadvantaged
47:23
because they have to retrofit
47:25
all this old infrastructure for
47:28
this new AI technology totally,
47:30
and like I can see that. But on
47:32
the other hand, these are insanely
47:35
impressive companies that are
47:37
explicitly trying to compete against
47:39
CoreWeave in this business, and they're
47:41
not going to stand still. And so I guess
47:44
there's an open question over how much progress
47:46
they're making or how fast that progress
47:48
is actually happening.
47:49
Right, Large companies
47:51
always are going to have some challenges when
47:54
there's like a new model or something. But
47:56
these companies have all the money in the entire
47:58
world, right, and they also have all you
48:01
know, one of the things that Brian said is, like, if
48:03
one of them were going to do it, they would
48:05
have to go out and buy a big chunk of the market,
48:07
which again they have all the money in the
48:10
entire world. So theoretically, whether
48:12
it's the big companies and retrofitting
48:14
the clouds or building new clouds, or
48:16
you know a lot of them like a Google, even if
48:19
they're for now using their TPUs
48:21
internally primarily like, it
48:23
does seem like in theory the opportunities
48:25
out there, particularly with the
48:28
the sky high amount you
48:30
know, valuation that a company like in
48:33
video is getting.
48:34
Oh yeah, you mentioned the sky high valuation. That
48:36
was something that also stood out to me, just
48:38
on the financing side. So this idea of
48:41
you know, the debt financing deal that
48:43
they did, and I'm
48:45
not going to call it trade receivables because.
48:47
No, no, GPU backed loan.
48:49
Yeah, no one will be interested when we start talking
48:51
about trade receivables. But the GPU
48:53
back loan. This idea that like, okay,
48:55
it's a new structure, but the more
48:58
you do it, the more the cost of that particular
49:00
capital starts to fall, the more the market gets
49:02
comfortable with it. I mean, we can talk about whether
49:05
or not it's priced correctly for
49:07
a new type of unfamiliar risk,
49:10
but it does seem like that
49:12
might be a new avenue for the
49:14
vast amounts of capital that are needed for
49:16
this business.
49:17
So one, it's interesting to think
49:19
about the idea that, like, you
49:21
know, I don't think it's like totally true.
49:24
You know that if you need compute at scale
49:26
for AI, that you don't just get
49:28
to call up core weave and get it, and you
49:30
actually have to prove that you're going to be a
49:33
good customer and so like have something
49:35
that is probably going to be sustainable, have
49:37
the balance sheet capacity. So this
49:39
even if the sort of software the end
49:42
users aren't themselves raising
49:44
debt, it does sound like they have to have a
49:46
lot of equity upfront
49:49
just so that they're perceived as
49:52
a sustainable, viable customer
49:55
for a company like corewev. I also thought on
49:57
the electricity front, like obviously
49:59
we talk all the time about just sort of the raw
50:01
demand for electricity. But
50:03
this idea what he said, and I hadn't heard anyone
50:06
say it, that the runs, the modeling
50:08
runs, stop every, did you say, thirty minutes,
50:10
and have to be saved. Oh yeah. And so you have this
50:12
big variability at times, and that
50:14
creates its own specific issue
50:17
because it's not just steady state flow of
50:19
electricity and solving for that.
50:21
That's probably another area in
50:23
which the legacy data
50:26
centers or cloud companies differ. Perhaps,
50:29
my guess would be, that their demand
50:31
is just more constant, and therefore
50:34
this is something that would be a novelty for them.
50:36
Just thinking about the financing more, I do kind of
50:38
wonder how much of this is like AI
50:40
built on top of AI on top
50:43
of AI. Like, yeah, to the
50:45
point where if if the
50:47
bubble were to burst, or if
50:49
funding was suddenly pulled from a bunch
50:51
of these startups, like what would
50:53
that mean for CoreWeave's
50:56
financing? And what would that mean
50:58
for Blackstone, which lent money
51:00
based on the GPUs that the clients
51:02
are taking on, who might not be there anymore.
51:05
I don't know.
51:05
By the way, have you ever looked at a chart of Riot
51:08
Blockchain?
51:09
Oh no, not
51:11
for a while?
51:12
Yeah, well, I mean they're still there as a minor, but
51:14
like here we are in the midst of this pretty
51:16
big crypto bull run. I mean, I guess it's cooled a
51:18
little bit, but and that stock is done terribly
51:21
so it's interesting to wonder, and apparently
51:24
it doesn't seem like anyone's made a bid for them. But
51:26
it is interesting to wonder, like,
51:28
Okay, those chips are useless for
51:33
AI because they
51:35
don't work for that, but you know, they do
51:37
have capacity and they do have
51:40
electricity agreements already in
51:42
place. So it does make you wonder whether,
51:44
like some of the bitcoin mining companies which aren't
51:46
really getting, well, the
51:48
market is not excited about them, clearly,
51:50
even in the midst of this crypto bull run.
51:53
Maybe they should go back to being a
51:55
diagnostics company. That's what they were
51:57
before. Is it? I think so. I
52:00
think they're one of the ones that changed their name and
52:02
then, like, to something including blockchain,
52:04
and then their shares went up enormously and
52:06
now they're back down.
52:07
Well they have been. Riot Platforms
52:10
has been around, Okay, now I'm curious.
52:14
Yeah, so it's a bitcoin mining company, but it's
52:16
been the stock has been around since two thousand and three.
52:19
So pretty clearly, uh,
52:22
pretty clearly they were in some other business. I don't
52:24
know what.
52:24
Yeah, I'm looking on the terminal, it says Riot Blockchain,
52:26
formerly Bioptics, has
52:29
ditched the drug diagnostic machinery
52:31
business for the digital currency trade.
52:34
Well, there you go. So if you have some sort
52:37
of computing power or something. I don't know what they were doing
52:39
before, but maybe it is interesting to think about.
52:41
Maybe some of the option value for some of
52:43
these miners isn't the mining, but
52:46
is in all the infrastructure other than the
52:48
bitcoin mining operation.
52:50
Maybe we should put in a bid.
52:51
Let's do it.
52:52
We can crowdfund and start
52:54
our own business. Okay, maybe we should leave it there.
52:57
Let's leave it there.
52:57
This has been another episode of the Odd Lots
53:00
podcast. I'm Tracy Alloway. You can follow
53:02
me at Tracy Alloway and.
53:03
I'm Joe Wisenthal. You can follow me at
53:05
the Stalwart. Follow our guest Brian Venturo.
53:08
He's at Brian Venturo. Follow
53:10
our producers Carmen Rodriguez at Carman
53:12
Armen, Dashiell Bennett at Dashbot, and Kale Brooks
53:15
at Kale Brooks. Thank you to our producer
53:17
Moses Ondam. For more odd Lots
53:19
content, go to Bloomberg dot com slash odd Lots,
53:21
where we have transcripts, a blog, and a newsletter
53:24
and you can chat about all of these topics,
53:26
including AI, including semiconductors,
53:29
including energy, in our Discord, discord
53:32
dot gg slash
53:33
odd lots. And if you enjoy Odd
53:35
Lots, if you like it when we talk about AI
53:38
and chips and energy and all that stuff,
53:40
then please leave us a positive review on
53:42
your favorite podcast platform. And
53:44
remember, if you are a Bloomberg subscriber,
53:47
you can listen to all of our episodes absolutely
53:50
ad free. All you need to do is connect
53:52
your Bloomberg account with Apple Podcasts.
53:54
In order to do that, just find the Bloomberg
53:57
channel on Apple Podcasts and follow the
53:59
instructions there. Thanks for listening.