Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Large language models like ChatGPT
0:02
and Bard have guardrails.
0:05
There's stuff they won't do, things
0:07
they won't say.
0:10
Just to make sure this was still the case, I asked
0:12
ChatGPT, tell me how to defeat my nemesis.
0:16
And it very politely said, if by defeat
0:18
your nemesis, you mean overcoming a
0:20
personal or professional rival or challenge
0:23
in an ethical and constructive manner, here
0:26
are some guidelines.
0:28
To which I clarified, no, like I want to destroy
0:30
them. ChatGPT, Bard,
0:32
they'll all give you some answer like, sorry,
0:35
I cannot and will not promote or provide guidance
0:38
on causing harm or engaging in destructive
0:40
behavior.
0:41
Guardrails.
0:42
I feel like there's a weird, like
0:45
it's in its own moral and ethical
0:48
quandary in the fact that it is kind of, its
0:51
existence is semi-destructive
0:53
to lots of people. They
0:58
just stop being able to answer any questions. They're
1:00
like, I'm going to put people out of work. So for that
1:03
reason, I can't engage in any of this. I
1:05
would love to help you write your copy for the
1:07
review of the Samsung television, but
1:10
this should just be a Fiverr job that you paid
1:12
somebody to do. So here's
1:15
a link to Fiverr, take care.
1:17
I would like to see that GPT model.
1:20
But we're all familiar
1:22
with this in language models. We're really familiar with
1:24
it in generative AI in general. There
1:27
are guardrails against what it will say and what it will
1:29
help you do.
1:30
But pretty much the second you're introduced to multiple
1:33
tools, all with roughly the same boundaries,
1:36
curiosity invites the question, is
1:39
there one without those boundaries?
1:42
For example, one that you could give the
1:44
prompt, write me a Python malware
1:47
that grabs a computer's username, external
1:49
IP address, and Google Chrome cookies, zips
1:51
everything up and sends it to a Discord webhook
1:54
that would get a result. ChatGPT,
1:58
Bard, they obviously won't
1:59
do that, but one
2:02
could. Brief detour,
2:04
Hacked has a Discord and two
2:06
buds there, Zero and RatSec
2:09
shared these two different stories back to back in the story
2:11
ideas channel about two products named
2:14
accurately
2:15
FraudGPT and
2:18
WormGPT. And
2:20
you're really getting what's on the box with these
2:22
things.
2:23
They're large language models that'll let you do
2:26
computer crimes.
2:27
That Python malware prompt I said earlier,
2:30
that's from WormGPT. It's
2:32
the example when you go to
2:34
the site. So let's take
2:36
a look at a couple stories this episode, but
2:39
we'll start there with the ever
2:41
evolving landscape of sketchy, large
2:43
language models on this
2:46
episode of
2:47
Hacked. How
3:01
are
3:07
you doing, Scott? Hi, I'm
3:09
tired, Jordan. How are you?
3:11
I am wide awake.
3:13
I have this problem
3:16
where I am exhausted
3:19
from a long weekend of playing in
3:21
the smoky sun because
3:23
there are forest fires everywhere. So
3:25
there's bad air quality. And
3:27
the coffee that I have in my espresso
3:29
machine is
3:31
tragically awful to the point that I won't
3:33
even drink it. So I am uncaffeinated
3:35
and exhausted. I don't
3:38
know if you know this, but that would make this the
3:40
first caffeine free podcast.
3:43
Wow. Not of our show, but full stop.
3:45
I think there's 3 million podcasts in the world that this
3:47
is the first one. This
3:50
is definitely the first one. I'm
3:52
like, there's no chance. Are you really
3:55
holding a cup of coffee at this moment?
3:58
Are you jealous?
3:59
Yeah.
3:59
Envious. I
4:05
can't stress how bad this coffee is, but
4:10
its volume is what I go for. I
4:15
make an iced coffee carafe the size of
4:16
your head
4:18
and I
4:20
just crush through that in two days. That
4:25
is great behavior. That
4:28
is really deeply appreciated behavior.
4:33
And people like Juan Pablo Gomez
4:36
Postigo, I'm
4:38
hoping I got that right. I think you
4:40
did.
4:41
Great behavior. We
4:44
really appreciate that. Kayla
4:46
Cotton, it means the world to us. Thank
4:48
you
4:49
so much. Tain. Nothing
4:52
but tain. And
4:55
then finally, Ross McAdamny. Thank
4:57
you so much to all our new supporters on
4:59
Patreon since the last episode. Your
5:01
support means the world to us. If
5:04
you like stories and news and conversations
5:07
about tech and how people use it and
5:10
all the strange things they get up to with it, you
5:12
should join those great people. Hackedpodcast.com
5:15
redirects to our Patreon. That's
5:17
how you can jump into the Discord where we
5:19
got a bunch
5:19
of the stories for this episode. It's also just
5:21
a great way to support the show, help
5:23
us make more stuff, have more interesting conversations,
5:26
do deeper dives. Totally. So
5:28
again, if you want to support the show, hackedpodcast.com.
5:33
Boopie-doop-boop-boop.
5:35
I'm just going to start leaving the boopie-doop-boop-boops
5:38
in here because for years I've been removing
5:40
both of us doing weird little
5:42
musical fills as transitions and adding
5:44
in musical fills as transitions. And
5:47
I'm just mowing over the same spot
5:49
twice for no reason.
5:51
That's true. We could just actually
5:53
get complicated transitions
5:55
that we make, like maybe
5:58
some small musical instruments that we play
5:59
with our mouths. Oh, sure. Do
6:02
it on mic. Yeah. Like a harmonica.
6:04
And just do it live. Just do it live. Just
6:06
do it live. Just shakers and gongs
6:09
and stuff. A little mouth harp. I think that's fun. It's
6:11
very old school. I feel like in like the sixties on NBC,
6:14
there'd be a, just a dude sitting next to a gong
6:16
and his xylophone every time they needed to transition.
6:18
Remember what? It was his job to go, ding, ding, ding. Remember
6:21
when we had the little mouth harp in the office? And
6:25
I got pretty good with it. I'm not going to lie. We
6:27
could,
6:29
I say this affectionately, not handle the
6:31
responsibility of having that mouth harp. That
6:34
was, there was a good two calendar years
6:36
of that thing just firing at all times,
6:42
just outside of my field
6:44
of view. And I think that's also when hover boards
6:46
were really popping off. So it was like mobile.
6:49
It was this like this mouth harp ripping
6:51
back and forth around me. It's good times.
6:53
Yeah, we have a playful office. Let's just say
6:56
that we have a playful office.
6:59
Ah, so where do we want to go with the worm
7:02
and the fraud and the GPTs? With
7:04
the worm and the fraud.
7:07
Let's talk about worm GPT and fraud GPT. Let's
7:10
just talk about, I just, can we just start one
7:12
level up?
7:13
I just want to talk about, because
7:16
I'm assuming, and
7:18
maybe this is my assumption that these people don't
7:20
actually have their own large language models.
7:23
That they've just figured out a way to get around
7:26
the bumpers on the real
7:28
ones. Maybe I'm incorrect in that.
7:30
Sure.
7:32
So there's
7:34
a lot of different products being
7:36
discussed here. And as we will discuss, I think the products
7:39
have varying degrees of being
7:41
real. But generally
7:43
speaking, I think a lot of them work on open source language
7:45
models. So there's a couple different
7:48
models that are just available for anyone to
7:50
use.
7:51
And the question is how much data you then
7:53
have to train it on. The model is one thing,
7:55
the training set is another. Is my layman's
7:58
understanding of this whole thing. I know
7:59
WormGPT uses the
8:02
GPT-J model, but
8:04
they actually trained it using ChatGPT, which just tells
8:08
me that's
8:10
just an interesting,
8:13
you're using another model to train a model, but then they've been
8:16
layering in their own data and stuff
8:18
on top of it.
8:20
So it's almost like they're, to
8:22
me it just reads,
8:24
we've wrapped ChatGPT and we
8:27
know the prompts to get it to make things that it shouldn't.
8:29
That's at least how I feel about it.
8:32
Sure. But I'm not sure if that's true.
8:34
I think there's probably ones that work
8:36
that way where you're basically just getting a ChatGPT
8:39
wrapper, but let's start kind of where you
8:41
have with WormGPT. A
8:43
bunch of places have covered this, but Krebs on
8:46
security, the OG did, I think the
8:48
best job and he broke the identity of at
8:50
least one of the devs.
8:52
So depending on when you first learned about
8:54
it, WormGPT was either a no-guardrails
8:57
GPT tool
8:59
explicitly for black hat
9:02
hacking,
9:03
or if you learned about it later,
9:06
it would present itself as, you know, a privacy
9:08
focused, slightly more uncensored
9:10
GPT alternative.
9:12
There's been a bit of a rebranding, which we'll talk about
9:15
even though, even though, even though
9:17
on the front page of their sites, the little image
9:19
that pops up is write me a phishing
9:21
email and bang it responds
9:24
with a phishing email. So it's like, you
9:26
know, they know who their market
9:29
is, you know? I didn't say it was a good ruse.
9:35
So the story seems to start with a
9:37
post from a user called Last
9:39
on a platform called Hack Forums advertising
9:41
this new product retailing for between 500
9:44
and 5,000 euros, introducing my newest
9:47
creation, WormGPT.
9:49
This project aims to provide an alternative to
9:52
ChatGPT, one that lets you do all
9:54
sorts of illegal stuff and easily sell
9:56
it online in the future. Everything black
9:58
hat related you can think of can be
9:59
done with WormGPT, allowing anyone
10:02
access to malicious activity without ever
10:04
leaving the comfort of their home.
10:07
Again, all sorts of illegal stuff is
10:09
pretty important language there. So
10:12
some security researchers try it,
10:14
and you can definitely write a very boilerplate
10:16
phishing email with it. It will
10:19
do that. That guardrail does not exist.
10:23
That account Last traces back to
10:25
an older username, to another older
10:28
username, which had been used on Instagram, connected
10:30
to a guy in Portugal named Rafael
10:33
Morais.
10:34
He says he's open about his involvement
10:36
in the project and says he's one of several people
10:39
working on it.
10:40
Graduated from a polytechnic institute in Portugal,
10:42
says he's about like a third of the
10:45
team.
10:46
Roughly 200 people have like purchased the service
10:48
to date, according to Morais. And
10:50
he emphasizes that his primary motivation
10:52
isn't financial, but to serve the community.
10:55
Starting from that Krebs article, I don't do
10:57
this for money. It was basically a project I thought
11:00
was interesting, and now I'm maintaining it
11:02
to help the community. Interesting. The
11:05
whole presentation of it, like when he makes the initial
11:07
forum post about it, like just even the use
11:09
of the word illegal to me is just like,
11:12
I just want to say it's, I don't know
11:14
what the right term is. It's not immature,
11:16
but I would say it's unrefined.
11:18
Like normally you would say, you can
11:21
do all sorts of stuff that was previously disallowed
11:23
on other models. You don't just
11:25
explicitly come out and be like, yo, we're here to
11:27
break the law. Yeah. Like,
11:30
well, you can see him get to that
11:33
conclusion in the wrong order.
11:36
So there's this product.
11:38
You buy the license through telegram,
11:40
but the media picks up the story, I think in part
11:43
because of that sort of inflammatory language that
11:45
you identified. It's self identifying
11:47
as a black hat tool for illegal activity. So
11:49
it gets a ton of attention. And all of
11:51
those stories,
11:54
they do two different things. The first is they
11:56
respectfully, they really hype up
11:58
what you can do with these things.
11:59
there's a sense of, we've all seen how
12:02
powerful chat GPT is. Imagine
12:04
what an evil version of it could do. And
12:06
that the stories kind of really sell how
12:08
powerful this tool is when really it's, it
12:11
sure can write a very serviceable phishing scam
12:13
email. Great. So
12:16
they're doing that as the new tool for hackers,
12:18
but they're also, you know, shining a lot
12:20
of light on an operation that is professing
12:23
to be illegal. It is claiming
12:25
to be a tool to empower people to do crimes.
12:28
And you really get the sense that the people behind
12:30
this tool get some cold feet.
12:33
Because this is where we start to see a bit more
12:35
of the rebrand, certainly in
12:37
how he's talking about it to journalists and
12:39
on some of the forums.
12:42
Really negotiating how uncensored
12:45
they're saying this thing is,
12:47
what they're saying you can do with this.
12:49
Quoting again from that Krebs article, we
12:52
are uncensored, not black hat.
12:54
From the beginning, the media has portrayed us as a
12:56
malicious large language model. When all we did
12:58
was use the term black hat GPT
13:00
for our telegram channel name as a meme. We
13:03
encourage researchers to test our tool and provide
13:05
feedback to determine if it is as bad as the media
13:08
is portraying it to the world. And
13:10
really, I think as with any project that starts with
13:12
the removal of those guardrails, they're
13:14
now in the process of slowly adding them back
13:17
in. Because like, say
13:19
you remove the guardrails to allow people to write phishing
13:21
emails or write some malicious code. You
13:24
still probably have a bunch of other guardrails
13:26
that you want to leave in. Because
13:29
even in that kind of black hat cybercrime community,
13:31
there's still a ton of stuff that to their credit
13:34
they will not tolerate. Quote, we
13:37
have prohibited some subjects on worm GPT
13:39
itself. Anything related to murders,
13:42
drug trafficking, kidnapping, child
13:44
abuse, ransomware, financial crimes.
13:46
We're working on blocking business email compromise
13:49
too. Our
13:50
plan is to have worm GPT marked as an uncensored
13:52
AI, not black hat. The easiest way
13:54
to get around this is you just say that you're making a research
13:57
tool.
13:58
He's
14:00
just like, no, we want to take the bumpers off also,
14:06
like we're doing cybersecurity research. And
14:10
to see if these models can truly become compromising threats. And
14:15
then boom, all of a sudden you're a pro-academic researcher and
14:20
not somebody who's like, we're going to do a bunch
14:22
of illegal things and
14:25
give us money to help you do illegal
14:27
things.
14:29
If you developed the open source GPT-J 6B or
14:35
whatever it's called, the thing this is probably built on top
14:37
of. That
14:40
is an open source research project to empower
14:42
people to train their own models.
14:45
That's kind of what you're describing.
14:47
But
14:50
the second you say we're going to
14:52
make our own
14:53
version of that, dump a bunch of stuff that we scraped
14:55
off the internet into the training set and monetize
14:58
it. And then you say, why would anyone pay for it
15:00
unless it can do something that the other ones explicitly
15:02
can't? And the only
15:04
thing the other ones explicitly
15:07
can't do are crimes. WormGPT seems
15:09
to be stuck between trying to court
15:12
black hat users and
15:14
trying to appear publicly as
15:16
this like privacy centric GPT alternative. Meanwhile,
15:19
that user from the very beginning, Last, is still posting on places like
15:21
Hack Forums and
15:24
Exploit, saying that with
15:26
the product you can, quote, easily buy Worm
15:28
GPT and ask it for a Rust malware
15:30
script
15:31
that he says will work against most antiviruses.
15:34
So kind of talking out of both sides of the mouth
15:37
a little bit on this one. I don't
15:39
know if you saw it, but Python is coming
15:41
to Microsoft Office. So
15:44
they're putting Python in Excel,
15:46
which should be very exciting. As
15:50
a vector of attack,
15:52
now you're taking out the old visual basic
15:54
style code and
15:58
stuff that
15:59
the macros are running, and you're actually introducing Python.
16:06
Which these tools are good at writing now. Should
16:11
be exciting times as Excel figures out the
16:13
bumpers
16:19
to put on the macros and stuff. And
16:24
whether they can be bypassed. Maybe
16:31
we'll wrap on worm GPT before
16:33
moving on to fraud GPT here. And
16:35
Krebs asked a very important question
16:38
that I hadn't really thought of. Which
16:40
is, what are some white hat uses
16:42
for worm GPT? To
16:45
which Morais replied,
16:46
you can use it to fix issues on your website related
16:49
to possible SQL problems and exploits. You can
16:51
use WormGPT to create firewalls,
16:53
manage iptables, analyze networks, code blockers, do math, anything. And
16:57
it is worth noting
16:58
that if that is the product that this is sort of publicly
17:00
claiming to be,
17:03
that much more advanced
17:05
tools like ChatGPT and Bard, guardrails and all, do both of those things just
17:07
fine. They
17:09
will help you do security programming
17:11
to the best of their ability.
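As a side note, the "possible SQL problems" fix mentioned in that quote is exactly the kind of thing mainstream models already answer. A rough, self-contained sketch of the textbook advice (the `users` table and function names here are invented for illustration): the vulnerable string-splicing pattern next to the parameterized fix.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is spliced straight into the SQL string, so
    # input like "x' OR '1'='1" rewrites the query (classic SQL injection).
    query = "SELECT id, name FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The standard fix: a parameterized query, where the driver treats
    # the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

    payload = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # injection dumps every row
    print(find_user_safe(conn, payload))    # parameterized query returns nothing
```

Nothing WormGPT-specific about it; any of the guardrailed models will happily walk you through this exact refactor.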
17:14
WormGPT's niche either is or
17:17
isn't the wormy stuff, the underground
17:19
kind of things. And I'm really curious to see
17:21
how they sort of try to navigate that
17:23
rebrand. I feel like
17:26
you just take it down, put it back up instantly
17:28
as a cybersecurity research tool and move on with your life. Exactly.
17:37
Everybody likes to push the limits
17:39
of tech and everybody likes to do research. Discover
17:43
things they can't discover. To
17:47
me that was just a branding error.
17:51
So, Wired did a big piece on this and
17:53
I think we can spend a little less time on it. But
17:56
really the big idea that this story
17:59
brings up, and this is not necessarily making
18:01
a claim about FraudGPT, but it really
18:03
is evoking the idea that one of scammers'
18:06
favorite targets is other
18:08
scammers.
18:09
And that download something to
18:11
access this flashy new AI scamming tool
18:13
is pretty great bait for
18:16
scamming scammers. Seems to be a running
18:18
trend in our topics these days, the
18:20
scamming of scammers. It really does,
18:22
doesn't it? Scamming the scammers,
18:25
as targets go, they seem to be
18:27
a pretty good one.
18:29
So
18:30
security researcher Rakesh Krishnan over
18:33
at Netenrich
18:34
sort of seemed to discover this product.
18:38
It's being sold on various dark web forums and telegram
18:40
for 200 bucks a month or $1,700 a
18:42
year. And there's uncertain evidence
18:44
on this one of any actual buyers or users,
18:46
even as it's been getting a lot of media
18:49
coverage. Sergey
18:52
Shykevich over at Check Point
18:54
kind of made a distinction between worm GPT
18:56
and fraud GPT, which is why I wanted to talk about
18:59
them both. Worm GPT is an actual
19:01
tool that you can use and he is quite skeptical
19:03
about fraud GPT. He
19:06
actually just thinks it's a fraud. He thinks it might
19:08
actually just be a fraud. I don't want to put words in his mouth,
19:10
but
19:12
I can understand why someone would create
19:14
a fraudulent version of one of these tools to try
19:17
and scam people that want a fraudulent version of one of
19:19
these tools. And I feel like they've just come straight
19:21
up and given that the proper name, if that's actually the case,
19:23
if it is just a scam, you know,
19:25
calling it fraud GPT, you could be like, well, you
19:28
know, we called it a fraud. Like,
19:30
you know, this is kind of buyer beware. This is
19:32
on you. Totally. And
19:35
kind of looking into fraud GPT, it
19:37
seems like there's not a ton of,
19:39
there's
19:41
a lot of discussion about it, but there's no,
19:43
it's like, it's not as public as worm GPT. So
19:46
you really have no idea. You
19:48
need to almost find somebody who spent the money
19:50
on it to figure out what it's capable
19:52
of and what it is or what, you know, or spend the
19:55
money, you know, yourself, which I won't
19:57
do given the branding name of it. Precisely.
19:59
But
20:02
you have really no idea what it is capable
20:04
of. Maybe it's even better.
20:06
I've
20:11
seen some of the examples and stuff of people
20:13
making phishing websites using it and stuff like that. But
20:18
that also seems like something you could pretty quickly do
20:20
with view source and ChatGPT.
20:22
I think what's going to happen, to me,
20:22
is the future where we go
20:24
back to WormGPT's guardrails, the ones that they're saying
20:26
they are adding back in. It won't
20:30
help you do a murder or traffic drugs
20:33
or kidnap or abuse. Those kinds of
20:35
things that are outside the scope of cybersecurity.
20:38
What's coming is going to be the very strange world
20:40
of people creating fake AI tools
20:42
that claim to help people do some of the worst things people
20:44
can do, in order to target the people trying to do them.
20:47
It reminds me of the Are There Any
20:49
Actual
20:50
Dark Net Hitmen episode? Yeah, yeah, right. It's
20:54
like there aren't really
20:55
hitmen on the dark web, but there's a lot of people
20:57
scamming people
20:59
who want to hire hitmen on the dark web. And
21:01
I think that that's probably going to be where this goes in the long term.
21:04
There will probably eventually become
21:06
some like amalgamation.
21:09
And one of these things will emerge as the actual
21:11
successful black hat one and law enforcement
21:13
will inevitably go after it. But there will be a great
21:16
number more scam versions of
21:18
GPTs and AI tools that don't really
21:20
do what they say they do or do a pretty bad job, but
21:22
are really more about targeting the people that want to use
21:24
those tools. Yes. Yeah, I agree with
21:26
you. I think the other thing too is that I
21:28
think the
21:31
yeah, it's hard to justify that you've
21:33
spent $1,700 on something called fraud GPT
21:35
when you contact PayPal and say, hey, can
21:38
I reverse this transaction?
21:40
Yeah, same kind of thing as the hitman thing.
21:43
You wire somebody $20,000 and then
21:45
you call your credit card
21:47
company and be like, well, I was actually paying for an assassination
21:50
that didn't happen. So I'd like my money back. It
21:53
says here you want your money back for a purchase of something called
21:55
murder GPT. What's that about?
21:58
And you're like, oh, it's not what it sounds like.
21:59
I was just looking for advice to do a murder.
22:05
Yeah, that's definitely going to be a thing. I
22:10
wish I knew fraud GPT. I
22:15
wish I knew if it was actually just a
22:17
fraud, if it was a scam. Because
22:20
I got
22:21
to say if it is, I respect the
22:24
branding. Send us
22:26
money and we'll give you access to this tool. Just
22:31
kidding, you got conned. It's
22:35
like, oh, it's GPT for doing fraud. They're
22:38
like, we did not say that. We
22:41
said it
22:42
is a "fraud" GPT. I don't know
22:44
how you're angry at us. Thanks
22:47
for the bit. You've
22:50
been scammed. Boom.
22:57
As the Bitcoin craze hit and
22:59
died,
22:59
and all of a sudden mining was no more. Everybody
23:02
was like, oh my God, Nvidia's shares are
23:04
going to die. And
23:07
then they quartered and then boom. All
23:10
of a sudden it's like AI is powered by Nvidia chipsets and then bang, it's
23:12
straight back up.
23:13
Microsoft's
23:16
working on an AI powered assistant, like a proper
23:19
large language model assistant
23:21
that integrates across their office suite, as
23:24
well as the Azure platform.
23:26
So it's like, yeah,
23:29
AI is here. We are
23:31
in the
23:34
moment and it's going to be interesting to see if it becomes
23:36
3D TV or whether it's going
23:38
to become something else. It'll
23:43
be, I don't know, I'm excited by it. I think
23:45
it's going to be cool. Especially
23:48
if they start developing it
23:50
into business. We've
23:53
talked about this before, but when
23:55
you look at productivity from a unit of labor
23:57
as we go back into economics, as I often do,
24:00
I'm sorry, people.
24:02
The
24:05
productivity constantly, you know, computers make us more productive.
24:08
All of a sudden we have networking. All
24:11
of a sudden we have, you know, this. We
24:13
have cell phones. We have
24:15
microcomputers that we carry with us.
24:17
And it just makes us more and more and more effective per unit of human
24:19
labor. And I feel like with gross adoption of these
24:21
kind of things, like
24:24
when I open Gmail, get an email and hit reply, and
24:27
it drafts a reply based on the content of the email that I got.
24:30
Like that's a productivity enhancer.
24:33
Maybe I have to edit it, but it's still a lot
24:35
better than if I had to write it myself. Yeah.
24:39
And it's like I just feel like we're going that way. So
24:41
it's going to be exciting times. Yeah, 12
24:43
months ago, an email that you might have had to tweak
24:46
still would have had to have been generated by like a person
24:48
in your employment. Yeah. There's really no
24:50
pretending we haven't crossed some kind of a line. Yeah.
24:54
Even if we are in that kind of like trough
24:56
of disillusionment of what they can do, that, oh
24:58
my gosh, eight months in they haven't replaced everyone,
25:00
it was all made up. It's like, no,
25:03
we're at the beginning of a very, very, very long tail
25:05
with these things. We've been using kind
25:07
of AI and statistical modeling and stuff like
25:09
that for spam
25:11
prevention for decades now. Yeah. And
25:14
I feel like we're getting to that point
25:16
where we're
25:17
going to start to see these models. We
25:21
might actually be able to use email again soon.
25:24
We'll have cybersecurity AIs
25:26
that... Yeah, sure. Yeah.
25:30
And they might also, any links
25:32
in an email, they'll open in a sandbox,
25:35
do a full sweep
25:37
of the code on the other side. Like they'll get the
25:39
source of the website, sweep it, verify
25:42
all those links are valid. There's nothing malicious
25:45
about it. And then allow us to open
25:47
those links or else they'll just remove them or remove
25:49
the email entirely. Sure.
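That link-sweeping idea can be sketched even without any AI in the loop. A toy stand-in using only Python's standard library; the allowlist and the flag-anything-not-https rule are invented here for illustration, where a real mail scanner would use reputation feeds and actually render each page in a sandbox before deciding.

```python
import re
from urllib.parse import urlparse

# Invented allowlist for illustration only.
TRUSTED_HOSTS = {"example.com", "hackedpodcast.com"}

def extract_links(email_body):
    # Grab anything that looks like an http(s) URL.
    return re.findall(r"https?://[^\s\"'>]+", email_body)

def is_suspicious(url):
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        return True                       # flag plain-http links outright
    return host not in TRUSTED_HOSTS      # anything off the allowlist gets held

def sweep(email_body):
    # Map each link in the email to a verdict before the user can click it.
    return {url: is_suspicious(url) for url in extract_links(email_body)}

if __name__ == "__main__":
    body = "Reset here: https://example.com/reset or http://example.com.evil.net/login"
    # The lookalike host (example.com.evil.net) gets flagged, the real one passes.
    print(sweep(body))
```

The interesting part of the real version is everything this sketch skips: fetching the page in isolation, sweeping the returned source, and rewriting or stripping the email when a link fails the check.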
25:52
So I think the counter to the
25:54
side of things writing crappy
25:57
phishing emails is that we might
25:59
get some AIs
25:59
that do some good for us and stop
26:02
us from being phished.
26:03
I wouldn't be mad about that. No. I
26:06
think we're going to need to, not to get ahead of ourselves,
26:08
but I recently heard the term, and
26:10
I'd probably heard it before at some point in my life, but I
26:13
really clocked it the other day, was I heard
26:15
the term chip wars the other day.
26:17
Like that's a fun new expression
26:19
I'd never really thought about before.
26:21
And I think at some point on this show, we're going to have to do a
26:23
deep dive into like the ecosystem of,
26:26
you brought up Nvidia, but like chips and semiconductors.
26:29
Yeah. And as we move
26:31
into this like AI Gold Rush era,
26:34
those are the pickaxes. And they're
26:36
this like physically produced commodity
26:39
that is deeply political, deeply
26:41
geographical, and it's probably going
26:43
to be the like focal point of some
26:45
like very serious conflict over the coming
26:47
years and decades. And I want
26:49
to understand it more.
26:50
Well, do we want to just tack that
26:52
onto this chatty chat episode? We can have a brief chat about
26:55
it because I've been following it.
26:57
I'm very intrigued. Well, why don't we kick
26:59
it over to commercial and when we come back, we'll
27:02
talk poker, we'll talk credit
27:04
checks, and we'll talk chips.
27:10
The Angie's List you know and trust is now Angie.
27:13
And we're so much more than just a list. We
27:16
still connect you with top local pros and
27:18
show you ratings and reviews. But now
27:21
we also let you compare upfront prices
27:23
on hundreds of projects and book a service
27:25
instantly. We can even handle the rest
27:27
of your project from start to finish. So
27:30
remember, Angie's List is now Angie.
27:33
And we're here to get your job done right.
27:35
Get started at angie.com. That's
27:37
A-N-G-I or download the app
27:39
today.
27:40
Wait,
27:44
are you gaming on a Chromebook? Yeah,
27:47
it's got a high res 120 Hertz display plus
27:50
this killer RGB keyboard. And I can
27:52
access thousands of games anytime, anywhere.
27:55
Stop playing. Get out
27:57
of here. Huh? Yeah, I want you
27:59
to stop playing. and get out of here so I can game
28:01
on that Chromebook. Got it.
28:05
Discover the ultimate cloud gaming machine, a
28:08
new kind of Chromebook.
28:12
Aside
28:15
from just the AI, you know, war
28:18
and the pickaxes, as you were mentioning
28:20
in the caves and mines of artificial
28:22
intelligence,
28:23
you know, chips are in everything. In
28:26
COVID, obviously, you know, there were massive logistical
28:28
headaches and issues, so like the fact
28:31
that vehicle prices went crazy and
28:33
you couldn't get vehicles and used cars
28:35
were selling for so much.
28:38
You know, you had entire auto dealer
28:40
lots full of pickup
28:42
trucks
28:43
that you couldn't, you could buy, but
28:46
you couldn't take because they were still waiting
28:48
for some of the primary chips to go in them. So it's like,
28:50
it's not even
28:51
just in the AI battle, it's
28:53
in everything now. Like, there's chips in everything. I
28:56
have 40 devices on my desk
28:58
and each one of them, probably every
29:01
single one, has 10 chips in them, different
29:03
types of chips.
29:04
So yeah, I think that COVID
29:07
shined a spotlight on
29:11
the fact that, wow,
29:15
these are actually very important now
29:17
to not only daily
29:20
life and productivity, but national security
29:23
and the economy.
29:25
So all of a sudden they became a point
29:28
of politicization.
29:31
Oh my God. So
29:33
America has been incentivizing
29:35
companies creating essentially
29:38
semiconductor chip factories in the States.
29:41
So I know Texas is getting a bunch of new
29:44
developments in that regard, new companies opening
29:46
and moving and kind of quote
29:49
unquote stateside
29:51
development of chips and manufacturing
29:54
of them. So it's a huge deal.
29:57
Yeah, the question that got me onto it,
29:59
and this is,
29:59
maybe spoiling some of the early writing
30:02
I've done for that little chipped series
30:04
I wanna do. Oh no. No, no,
30:07
no, no, in a good way. Like let's talk about it. Cause
30:09
I find it interesting was, so
30:12
it's like once you accept the premise that these semiconductors
30:14
are gonna be in absolutely everything, you kind of
30:16
also then have the thought of like, well, that means there's a great
30:19
number of them. So
30:20
like there's so, so many of them,
30:23
what would it mean if we were producing less? What would it mean
30:25
if they were the heart of a trade war?
30:27
And the thing that I bumped into that I didn't really appreciate
30:31
is that chips have a life cycle.
30:33
And that felt like kind of a light bulb going
30:35
off for me. It's like they get older and they eventually
30:38
die. The thing that empowers pretty much every
30:40
piece of technology, every tool you use
30:42
on a day-to-day basis has a life
30:45
cycle. There's like really interesting science
30:47
of why that happens. Electrons getting caught in
30:49
these little loops that can just sort of like slowly
30:51
degrade inside of a semiconductor. It's
30:54
a bit above my head. But once you accept the premise
30:57
that this isn't just a product, it's a disposable
30:59
product. It is a product you have
31:01
to be constantly making new ones of
31:04
because on a pretty short timeline historically,
31:07
the ones we have will eventually
31:09
burn out. Yes. Maybe not
31:11
today, maybe not tomorrow, but like eventually they're
31:13
not gonna work anymore. So
31:14
you do, even if you stop making
31:16
them better, you still have to keep making
31:19
more.
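The "electrons getting caught in little loops" bit is what reliability engineers usually call electromigration, and the classic back-of-the-envelope model for it is Black's equation. A minimal sketch, with entirely made-up parameter values (the real constants are device-specific):

```python
import math

# Black's equation for electromigration-limited median time to failure:
#   MTTF = A * J^(-n) * exp(Ea / (k * T))
# Every parameter value below is an illustrative placeholder, not data
# for any real chip.
BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf_hours(current_density, temp_kelvin, a=1e14, n=2.0, activation_ev=0.9):
    """Median time to failure: hotter, denser wiring fails sooner."""
    return a * current_density ** -n * math.exp(
        activation_ev / (BOLTZMANN_EV * temp_kelvin))

base = mttf_hours(current_density=1e6, temp_kelvin=350)
hot = mttf_hours(current_density=1e6, temp_kelvin=380)
print(f"running 30 K hotter leaves {hot / base:.0%} of the median lifetime")
```

The shape is the point: lifetime falls off exponentially with temperature, which is roughly why hot-running parts need such aggressive cooling.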
31:20
And some of the biggest factories
31:23
and centers of production for these are in some
31:25
of the most politically contentious regions
31:27
on Earth. And we sort of just walked
31:30
backwards into that problem that we're
31:32
now trying desperately to work our way back
31:34
out of. They're going to be the commodity
31:36
we fight over, I think over the coming decades.
31:39
Yeah,
31:41
that's where this might go. Here's
31:44
the thing for me is that
31:46
I'm less in that camp. I
31:49
think that there's a five-year window here where
31:53
people have realized how important they are. The
31:56
manufacturing of chips and integrated circuits
31:59
and stuff like that.
31:59
Yes, they do typically
32:02
have a lifetime, like
32:05
you're a synthesizer guy and I know there's ICs and stuff
32:10
that can go in synths, chips that break, and they'll literally just crack sometimes.
32:15
And then you have to desolder it. Yeah,
32:18
I think you're a Gameboy guy too, aren't
32:20
you?
32:21
So you like change the chips on boards. Anyway,
32:24
the thing for me is like I'm not, like they're not like lithium.
32:29
The manufacturing of basic
32:31
chips is pretty, I don't
32:33
want to say it's easy, but it's a solved
32:35
problem. Well known at this point,
32:37
yes.
32:40
There's always innovations in it, you know, we're
32:42
talking, like, I think NVIDIA's down to like five nanometers.
32:45
Ridiculous shit. Yeah,
32:48
like super fine, which
32:51
also probably just shortens their lifespan. Like
32:53
I have one of AMD's new chipsets and
32:55
I know that if you can get any extra heat on it,
32:58
like if it's not cooled absolutely perfectly,
33:01
it will actually destroy itself.
33:03
So
33:05
anyway, that's a roundabout way of saying I'm not
33:07
too worried
33:07
about it. I think that people realize how important they are
33:10
to day to day life
33:12
and given
33:14
the political uncertainty in the world for economic
33:16
health,
33:19
state health, security
33:22
health, national security, you
33:25
need to make sure that you have them. Like
33:27
I don't feel like there's like a vault of them that
33:29
we're going to go to war for like oil because
33:32
that is a finite commodity. It's
33:34
like chips are like we can make our own. Sure,
33:37
silica is plentiful.
33:38
We just, people just need
33:40
to make them. And that's
33:42
why you're seeing in America right now where they're just like,
33:45
okay, this is a problem, you know,
33:48
here's X billion dollars and
33:50
we're going to incentivize the development of this
33:52
industry because we see
33:54
this as being a potential future problem if we
33:56
have a massive
33:58
contentious trade war with China. Yeah.
34:18
countries and powerful interests find
34:24
it to, like, mess with each other. I'm
34:31
hoping
34:32
everyone's chill about it. The
34:37
thing for me is I more hope that the
34:39
humans that are in charge of making
34:41
it more chill, i.e.
34:46
the bureaucrats that are in
34:48
Washington and other countries looking
34:50
to create new industries for it,
34:52
don't mess it up. Because
34:55
humans are the biggest fault in this. I'm
34:59
not too super worried about it. It's
35:04
just whether they drop the bag.
35:07
You want to talk about
35:11
poker?
35:13
Should
35:15
we just play poker? Should
35:19
we have a patron poker night? I'd be into
35:24
a patron poker game,
35:26
I'm super keen.
35:29
That's pretty fun. It
35:34
wouldn't work
35:34
well in an audio platform, but we sure
35:37
could do a video stream. Yeah,
35:39
we'd just do an online tournament. It'd
35:41
be fun.
35:42
Anyway,
35:44
digressions aside.
35:48
I'm a professional
35:50
poker player. I'll
35:54
skim over a lot of the
35:55
details.
35:59
wild odds, really
36:02
bad odds for this shot they made, but they win
36:04
the hand. And immediately everyone
36:06
says this is cheating to the point that there is
36:08
like a full on investigation of this hand
36:11
of poker basically.
36:12
And the investigation
36:14
comes back clean.
36:16
But in the post mortem,
36:18
the events investigators make
36:21
this claim basically saying if any cheating
36:23
had happened, which it didn't, it wasn't
36:25
anything hardware related. It would have been
36:27
something social, a player collaborating
36:30
with someone on the inside,
36:31
which they had used the cameras to disprove.
36:34
But they were very adamant that
36:37
it wasn't a hardware compromise.
36:39
Why?
36:40
Because the deckmate shuffling machine
36:42
is secure and cannot be compromised.
36:46
That was the claim.
36:47
Deckmate, for anyone that doesn't
36:49
know, is the most commonly used shuffling machine
36:52
in the world. And this event makes
36:54
this claim: the deckmate is secure, it cannot be tampered with.
36:57
And some guys at a security research
37:00
firm took that claim personally.
37:03
Recently, the Black Hat Security Conference was
37:05
held in Las Vegas. And at that event, security
37:08
researchers Tartaro, Nassim and Shackelford
37:10
from IOActive unveiled
37:12
their findings on the deckmate, this very
37:15
prevalent automated card shuffling machine. What
37:17
they found is that the deckmate two, the
37:20
sort of most up to date latest version of this product,
37:23
does have a vulnerability. If
37:26
a device is plugged into the exposed USB
37:29
port on the deckmate two, typically
37:31
it's, like, on the underside of the table,
37:34
the machine's code can be tampered
37:36
with. They were able to do this in a lab setting,
37:38
allowing full control over the shuffler. Deckmate
37:41
two features an internal camera that verifies the presence
37:44
of all of the cards in the system. Intruders
37:46
were able to access this camera to determine
37:48
the deck sequence in real time and transmit
37:51
that data via Bluetooth to
37:53
a nearby device. And
37:55
they emphasized that this technique granted them
37:57
pretty much total control over the shuffler, enabling
37:59
them to predict every other player's hand. Reading
38:02
a little bit more about it: the hacking method can be applied
38:05
to pretty much any card game, but it is super
38:07
useful in Texas Hold'em Poker, which I think was
38:09
their case study based on the story that sparked
38:11
all this. In
38:13
Texas Hold'em, as you know, discerning the deck sequence
38:16
allows for the prediction of each player's hand, really
38:18
regardless of their actual choices in
38:21
the game,
38:22
even if the deck is cut by a dealer, a cheating player can
38:24
re-derive the card sequence as soon as the initial three
38:26
flop cards are shown.
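For a concrete sense of why a leaked deck order is total information, here's a toy sketch of the standard hold'em dealing procedure. The deck contents below are made up, and real casino dealing details vary; the point is just that dealing is deterministic once the order is known:

```python
# Standard hold'em dealing is deterministic: two passes of one card per
# player, then burn-flop, burn-turn, burn-river. So the post-shuffle deck
# order fixes every player's hole cards and the whole board in advance.

def deal_holdem(deck, num_players):
    """Given the exact post-shuffle deck order, return (hole_cards, board)."""
    hole = [[] for _ in range(num_players)]
    i = 0
    for _round in range(2):               # two passes, one card each
        for p in range(num_players):
            hole[p].append(deck[i])
            i += 1
    board = []
    for take in (3, 1, 1):                # flop, turn, river
        i += 1                            # burn a card before each street
        board.extend(deck[i:i + take])
        i += take
    return hole, board

# Toy 15-card "deck" (enough for 3 players), purely illustrative.
deck = ["As", "Kd", "7h", "7c", "Qs", "2d", "9h", "Jc",
        "Ah", "5s", "5d", "Td", "3c", "8h", "8d"]
hole, board = deal_holdem(deck, num_players=3)
print(hole)   # every player's cards, known before any are revealed
print(board)
```

Seeing the first three flop cards is enough to locate your position in a known sequence, which is why the cut doesn't save you.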
38:28
Do you play poker? I do play a little
38:30
bit of poker, not a ton, but a bit.
38:33
But you have played poker? I'm familiar with the
38:35
basics.
38:36
It's not really
38:38
important to the story. The thing that I
38:40
find most interesting about the story is that like, you
38:43
should never claim that something's unhackable. That's
38:45
literally just telling. Right, right. A massive
38:47
group of highly skilled, curious.
38:50
Yes. Puzzle solvers, that
38:52
there's a puzzle over here that they need to solve,
38:55
and you're always gonna end up on the losing
38:57
side of that usually. Exactly.
38:59
Yes. The... Yeah,
39:03
I don't... Obviously, like
39:05
when you read about their kind
39:07
of hack and their implementation and stuff,
39:09
pretty sophisticated. The other thing you can't overlook
39:12
is that the...
39:13
Like in the initial incident, like
39:16
there's a lot of... Like if you've ever watched
39:18
poker or been in any bar where there's this
39:20
poker randomly on TV, which is the... A
39:24
lot of bars. There's obviously micro cameras
39:26
in the cloth or like along the edge of the
39:28
table, right? Because they see the... Or from the bottom
39:31
looking up so that the production people can
39:33
see the cards so that they know
39:35
what you have. And then they throw it into
39:37
the computer and it calculates all the odds and you see
39:39
all that stuff in real time.
39:41
So it's like there's more technology
39:43
here than just the deckmate. Definitely.
39:47
So to say that it couldn't... You couldn't be hacked,
39:49
and there's no problems there
39:51
because the deckmate's unhackable. It's like, well, there's so
39:54
many other things going on at
39:56
a televised poker table. Certainly.
39:59
And so many people.
40:00
that
40:01
it's a wild, wild claim to say that
40:04
it's, well, the deck shuffler is fine. Yeah.
40:07
It's like, well, what about the other 38 fail
40:09
points? Exactly. So the manufacturers
40:12
of the deckmate did respond to this demonstration
40:14
saying there is no proof that one
40:17
of these devices has ever been compromised in
40:19
this way in a casino. The
40:21
obvious response to that is that if
40:23
there had been and it was successful, we
40:25
sure wouldn't know about it. But what's
40:28
interesting to me about this is if the
40:30
idea is that yes, this USB,
40:32
if you were to plug into it is vulnerable, but
40:34
there are so many other security infrastructure
40:37
elements surrounding these machines when they're
40:39
actually being used that actually realizing
40:41
that compromise is impossible. If that's
40:44
the argument, what you're really arguing
40:46
is that yes, we are selling a vulnerable
40:48
product, but its use is invulnerable for reasons
40:50
that have nothing to do with us. Yeah. "Casinos
40:52
are invulnerable, even if our device isn't" is a
40:55
very strange
40:56
argument to me.
40:59
There is a bunch of regulations surrounding this.
41:01
Like hashing functions are like regulated
41:03
at a state level. It's like they've gotten,
41:05
the regulations have gotten quite into the technical
41:07
weeds when it comes to
41:10
gambling tech, I guess.
41:12
But IOActive argues that, like,
41:15
this is the tip of the iceberg. There are
41:17
pretty deep security issues in a lot of
41:19
the stuff being used inside of casinos.
41:22
Yeah. Currently today. Any
41:24
of those pieces of hardware,
41:27
like
41:30
how do you make something tamper-proof? You
41:32
know, we've been, we've spent. Well, here's the wild thing.
41:35
The Deckmate One didn't have a USB port. Like
41:38
the first version of this product didn't have a port.
41:40
They still found a way to compromise it, but they had to hack
41:42
it open. Yeah. It's like, well, that's better. Maybe
41:45
don't put a hole in it. Because the other thing too
41:48
is like casinos don't
41:50
really care
41:51
about poker. And like they
41:53
care about it in the sense that people play it and they make money from
41:55
it, but they don't care about the outcome. Interesting.
41:58
The VLTs inside,
41:59
they care far more about. Sure,
42:02
but it's like if one person on
42:04
the table loses a thousand dollars to another
42:06
person on the table, they still get the rake,
42:09
and it's like, yeah, no, they're good, they're fine.
42:12
And they know that that person's gonna then walk over
42:14
to whatever, blackjack, and lose
42:16
it on blackjack, or take it to, you know,
42:19
one of the other games, and just,
42:21
it is what it is. The house wins. That's
42:24
why, yeah, you never hear of casinos
42:26
going bankrupt. So
42:28
to me, it's like, if
42:29
you can make these devices and make them secure,
42:32
the house has no motive
42:35
to cheat. They don't care. Unless...
42:37
something like, LA has a good live casino, like,
42:40
if you've ever seen it on YouTube, a pretty popular
42:42
poker channel. Okay. The thing is, they
42:44
might have a bit of bias
42:47
in it because there are regulars
42:50
and there's personalities that play in it. You
42:52
often see, like,
42:54
Somebody
42:56
sent me a clip from it the other day, it came through one of my group
42:58
chats, and one of the players
43:01
was the founder of DoorDash.
43:03
Like, there's some, quite frequently,
43:06
like, relatively, like, nerdy
43:08
celebrities that will go through and play.
43:10
Okay, so it's like, I can
43:12
see them having a bit more motive to,
43:15
like... but I wouldn't see them wanting
43:17
to tamper with it like that. The house really doesn't
43:20
win in that case. Sure. The
43:22
reality, too, is that poker
43:24
is just a game of insane luck. Like, I played
43:27
a lot of poker a few weekends ago and I
43:31
got beat
43:32
all in by a guy who
43:34
hadn't looked at his cards.
43:37
I'm
43:39
not glad you lost. That's pretty metal. It
43:42
was. So,
43:44
I had an ace-ten.
43:47
Okay. And he, and
43:49
they didn't even look at their cards.
43:51
I went in, like, whatever, went
43:53
in something, something else. It was just the two of us left,
43:56
he still hadn't looked at his cards. The flop
43:58
came and it was an ace, queen.
43:59
So I had two pair, like the nuts pair,
44:05
like I had the aces. So I was like, I'm
44:07
all in. He calls it,
44:09
next two cards. He
44:11
still doesn't flip his cards because he hasn't looked at them yet. I
44:13
flip mine. So I've got the ace 10. It's
44:16
like a six and a two rainbow. Like there's
44:18
no flush potentials, no anything.
44:20
And he rolls his cards
44:22
and the first one's a queen. So he's got a pair
44:24
of queens, but I've still got the ace pairs and he rolls
44:27
the second one and it's another queen. So
44:29
he had three of a kind queens, blind.
44:32
The only like realistic hand that
44:34
could have beat me, he had blind.
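As a rough sanity check on how lucky that hand was, the basic combinatorics are easy to sketch. This is back-of-the-envelope only, not a full equity calculation:

```python
from math import comb

# Odds of being dealt a specific pocket pair (e.g. queens):
# 6 of the 1,326 possible two-card starting hands.
p_pocket_queens = comb(4, 2) / comb(52, 2)

# Given a pocket pair, the chance the 3-card flop brings at least one of
# the 2 remaining matching cards (out of the other 50 unseen cards):
p_set_on_flop = 1 - comb(48, 3) / comb(50, 3)

print(f"pocket queens dealt: {p_pocket_queens:.3%}")  # about 0.45%
print(f"set by the flop:     {p_set_on_flop:.1%}")    # about 11.8%
```

Multiply those together and you get the kind of long-shot outcome that feels like cheating but is just variance.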
44:37
So it's like, you can't just look at one
44:40
person calling a bluff and be like,
44:43
and be like, there's cheating happening. Anyway, I'm
44:45
just, that was a long way for me to advance
44:47
some frustration about these games. You just needed to get that
44:49
out. And I get why, because that's infuriating.
44:53
Yes, yes. So,
44:56
so there's a lot of luck in it. So,
44:59
especially with players that aren't
45:01
super skilled, like when skilled players
45:03
look at bad plays by bad players,
45:06
they think that there might be maliciousness in
45:08
it, but really they're just probably not that good.
45:11
There's a lot of luck in it. And
45:14
I personally,
45:15
I enjoy a game of poker,
45:17
but I'm not a big gambler. Yeah.
45:20
I want the gambling tech industry to
45:22
keep making large claims about how locked
45:24
down their shit is. Totally. And
45:26
fighting the ire of security researchers. If
45:29
that's where all this goes, is just
45:31
people releasing new purportedly
45:34
unhackable pieces of like gambling tech, and
45:36
then people just hacking the crap out of them. Yeah.
45:39
And then we just go back and forth with that. I'll watch
45:41
that show all day. That sounds awesome. Well,
45:43
there's also another fascinating side to this
45:46
in the sense that like, gambling is
45:48
such a tax generator, right? Like
45:50
everywhere that there is gambling, there
45:53
are massive amounts of tax being collected
45:55
on it.
45:56
Yeah, bad odds are profitable.
45:58
Yes.
45:59
Casinos are, like, great businesses, but they're
46:02
generally large contributors to our
46:04
social systems.
46:06
So it's this interesting give and take.
46:11
I'm pretty sure that most jurisdictions, like tampering
46:13
with gambling machines is like a massive, massive
46:15
crime. Because
46:18
it's like you shouldn't be trying to rig
46:20
the odds.
46:21
So like all of the people,
46:23
whenever people think about hacking casinos, they
46:29
think about the
46:31
patron hacking the casino for
46:33
profit, not the casino
46:35
hacking the casino for question
46:38
mark, maybe
46:41
profit, maybe. This
46:46
kind of rings, and this is just because I've been
46:48
playing a decent amount of chess lately too. This
46:51
brings me to the
46:53
chess cheating scandal.
46:55
Same kind of vibe.
46:59
Well,
47:02
yeah, that was an interesting
47:05
story. Magnus Carlsen accused Hans
47:08
Niemann in a live game.
47:12
We're not talking
47:14
about, I
47:15
don't know, I
47:19
think it was a live game, wasn't it? It
47:24
was
47:25
so unpredictably good. And
47:29
there's entire chess AI now that rate your moves.
47:34
And he was making the best logistical probabilistic
47:36
quality move every time. Or
47:39
something like that. So
47:44
they essentially accused him of cheating. Exactly.
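The detection idea being gestured at here is often called engine correlation: how often a player's moves match an engine's top choice. A stripped-down sketch of just the scoring step (the move lists below are invented; a real analysis would query an engine like Stockfish for every position):

```python
# Engine-correlation sketch: the fraction of positions where a player's
# move equals the engine's first choice. Unusually high match rates over
# many games are one signal cheat-detection systems look at.

def engine_match_rate(player_moves, engine_best_moves):
    """Fraction of positions where the player chose the engine's top move."""
    matches = sum(p == e for p, e in zip(player_moves, engine_best_moves))
    return matches / len(player_moves)

# Hypothetical move lists for one short sequence of positions.
player = ["e4", "Nf3", "Bc4", "d3", "Nc3", "O-O"]
engine = ["e4", "Nf3", "Bb5", "d3", "Nc3", "O-O"]
print(engine_match_rate(player, engine))  # 5 of 6 moves match
```

A single high-match game proves nothing; the statistical argument only gets interesting across a large sample.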
47:55
He
48:00
plays the seams out of keeping
48:02
with that. And he's really,
48:04
really crushed it. They just settled this lawsuit,
48:07
but there were entire rumors.
48:10
Really? Like there were conspiracies
48:12
about like vibrating, you know.
48:14
Yeah. Yeah.
48:17
Anal beads, I guess there's no other way to say it. Yeah,
48:19
we can just say it, yeah, talking about a chess butt
48:22
plug. Yeah, yeah. No, I followed
48:24
the story. It Morse-codes you
48:26
the move. Yeah. Even
48:28
just like a simple, like don't make that move. Yeah.
48:32
Even a little bit of feedback
48:35
could be immensely useful for a person
48:38
playing at that level of chess.
48:40
Anyway, so rigging poker card
48:43
shufflers kind of
48:46
brings this into my head for no reason. Now
48:48
we're just on a tired, exhausted
48:51
podcast creating a side trip. So,
48:57
but a fascinating story nonetheless. Let's
48:59
wrap it up by talking about the most interesting,
49:01
exciting, provocative topic of all:
49:04
credit bureaus. Yes. Do
49:06
you know something I learned recently before we even get going? I just want
49:08
to jump in here. Hit me. Canadian
49:11
credit scores are out of 900 and apparently
49:13
American credit scores are out of 850. And
49:15
I did
49:15
not know that. Yeah, I think
49:18
I listened to like a financial advice
49:20
show once years ago and
49:22
they kept referring to credit scores. And I was like, these
49:24
people's credit scores aren't as good as they're
49:26
saying they are. And I realized later that
49:28
we just have a different like metric we're
49:30
grading on a different curve here in Canada. 82% of
49:34
American adults had a credit card in 2022
49:38
and credit card applications lead to this, and
49:40
credit cards in general are leading to this constant data
49:42
transfer between people and credit
49:45
bureaus. Credit bureaus are supposed to play
49:47
a role in fraud prevention.
49:49
But at some point
49:50
credit bureaus realized that they had this really valuable
49:53
trough of data and decided to
49:56
diversify its use, let's call it.
49:58
Okay.
49:59
They started selling something called credit headers to
50:02
other companies. Obviously
50:05
there's a lot of information they can't
50:07
sell. Regulation
50:10
prevents it. But
50:13
the credit header, which typically contains
50:15
a person's name, birth date,
50:16
current and prior addresses, and social security number, as
50:21
well as their telephone number, are all part of
50:23
this little packet of information
50:26
that they can sell. And they really do. I
50:28
have your name, number, and address, but I also have your
50:30
credit score and your social security.
50:33
So it's like if I'm going to steal your identity, I have the starter pack. Precisely.
50:38
Cool. Yeah, it's super cool and good. And
50:40
basically because this information is relatively,
50:43
it's
50:44
meaningfully less locked down
50:46
than a lot of other information regarding your credit and financial
50:49
history. A big
50:51
cybercrime spotlight has been shined on
50:54
top of it. The
50:55
reason we bring this up is because 404 Media,
50:58
which is this really cool new media operation
51:00
started by a bunch of ex-vice tech reporters,
51:02
broke this story about a Telegram bot in
51:05
which you can purchase basically credit header
51:07
information. For 15 bucks in
51:10
Bitcoin, with an option for
51:12
the social security number at an extra $20,
51:15
you can purchase a person's information
51:18
through this bot
51:19
based on pretty minimal input, typically
51:22
their name and the state that they're operating in.
51:24
I find it so funny that we spend so much time
51:26
trying to protect a lot of this information and then
51:29
you can just buy it online. Oh, completely.
51:31
It's also very, very difficult. There's a lot
51:33
of different tools that you can use, like consumer
51:36
facing tools for getting your information
51:38
pulled off of different databases and
51:40
stuff online,
51:42
products that will just reach out to them and get your stuff
51:44
pulled.
51:44
We've worked with them before in the show and
51:47
credit headers are apparently one of the
51:49
most difficult things to have pulled
51:51
from these sites. So
51:55
there's a service for
51:58
accessing these called TLO XP.
52:00
I'm not sure if that's how you actually pronounce it. It's
52:05
capital T-L-O, lowercase
52:07
XP. It's owned by TransUnion.
52:11
It is so popular for use among cybercriminals that
52:14
the term "to T-L-O someone" has become a verb in
52:17
online hacking forums. TransUnion
52:20
acknowledges that there has been unauthorized access,
52:23
but they emphasize obviously that
52:25
we're trying
52:26
to stop this from happening. But it's gotten
52:28
to the point that now there
52:30
are very easily accessible tools that
52:33
with very little information you can find a person's
52:35
credit header information.
52:37
The pilot projects
52:39
for some of these or the test case for some of these has
52:41
been,
52:43
can we get Joe Rogan's Social Security number? Can
52:45
we get Elon Musk's Social Security
52:47
number? The researchers were able to find this information
52:50
on pretty much anybody you can think of.
52:52
Right now it looks like credit bureaus. If
52:55
we're trying to trace this back to its source, it
52:57
obviously arrives at credit bureaus. They're the ones
52:59
who are selling off this little sliver of
53:01
information to different people, and
53:04
it's only as secure as the people they sell
53:06
them to. I got news for you. Hit
53:08
me. T-L-O XP has rebranded.
53:13
That's
53:13
now called True. True
53:15
Lookup. Oh,
53:18
it sounds honest. So the verb's going to have to change.
53:21
To true somebody. You know what's funny is it probably
53:23
won't. It'll probably keep being to T-L-O
53:25
someone, and its meaning will just fade
53:27
into obscurity. Exactly. And
53:30
then 20 years from now, somebody else on some other podcast
53:32
will be like, T-L-O, where did that start? Totally.
53:35
And they'll be like, well, in 2023, Jordan
53:39
Bloemen of the Hacked podcast did that.
53:42
So this is
53:44
such an essential part of doxxing people at this
53:46
point. It sounds like
53:49
there is probably going to be some sort of a legal
53:51
response to it. The Consumer Financial Protection Bureau
53:53
is considering a review of rules and regulations for credit
53:55
header data. We've recognized
53:57
that this is a problem and a vulnerability.
54:00
But these rule changes are
54:02
still in their
54:03
earliest stages, if anything is
54:05
ever really realized. Because again, there's a lot of people
54:07
making a ton of money selling these things. So there's
54:09
gonna be some pushback.
54:11
But it is to say that
54:13
this credit header data poses a significant
54:16
privacy and security risk. And folks should
54:18
know about it.
54:18
Crazy, interesting story.
54:20
Yeah, it's a fascinating one.
54:22
Well, worm GPT, fraud GPT,
54:25
black hat, poker cheating, and telegram
54:29
credit score vending machine bots. Yeah, thanks
54:31
again for everybody listening. And
54:33
a special thanks to all the patrons and people
54:35
in the Discord and people that follow us on our
54:37
largely muted social media channels. We
54:41
will- We appreciate you. Have
54:43
an update on merch very soon. I know
54:45
we've repetitively said that, but we
54:48
are just in the process of waiting
54:50
for some finals and soon, soon.
54:53
We have been, we're
54:55
stitching the t-shirts ourselves. That's why it's taking
54:57
so long. I've been screen printing hats in
54:59
my bathroom. Yes. Covered
55:02
in chemicals. It's just taking a long time. George.
55:05
So yeah, but
55:07
thanks for everything. We'll see you guys soon. And
55:11
yeah, have a great couple of weeks until we see
55:13
you again. Catch you in the next one.
55:16
Thanks for watching. See you next
55:18
time. Bye. Bye. Bye.