Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
Use Ctrl + F to search
0:01
MUSIC MUSIC
0:40
MUSIC MUSIC
1:00
MUSIC Podcasts
1:03
you love. From people you
1:05
trust. This
1:07
is Twit. MUSIC This
1:13
is Security Now with Steve Gibson
1:15
and this week Micah Sargent, episode
1:17
966, recorded Tuesday,
1:20
March 19, 2024. Morris
1:23
the second. This
1:25
episode brought to you by Robinhood. Did
1:28
you know that even if you have a
1:30
401k for retirement, you can still have an
1:32
IRA? Robinhood is the
1:34
only IRA that gives you a 3% boost
1:37
on every dollar you contribute when you
1:40
subscribe to Robinhood Gold. But
1:42
get this, now through April 30th Robinhood
1:44
is even boosting every single dollar you
1:46
transfer in from other retirement accounts with
1:49
a 3% match. That's
1:51
right, no cap on the 3% match. Robinhood
1:55
Gold gets you the most for your retirement
1:57
thanks to their IRA with a 3% match.
2:00
match. This offer is good through April 30th. Get
2:03
started at robinhood.com/Boost.
2:06
Subscription fees apply. And now for some legal info.
2:08
Claim as of Q1 2024 validated by
2:11
Radius Global Market Research. Investing
2:13
involves risk, including loss. Limitations
2:15
apply to IRAs and 401Ks. 3%
2:19
match requires Robinhood Gold for one year from the date
2:21
of first 3% match. Must
2:23
keep Robinhood IRA for five years. The
2:26
3% matching on transfers is subject to
2:28
specific terms and conditions. Robinhood IRA
2:30
available to US customers in
2:32
good standing. Robinhood Financial LLC
2:34
member SIPC is a registered
2:36
broker dealer. Hello
2:40
and welcome to Security
2:42
Now, the show where
2:45
the cybersecurity guru, Steve
2:47
Gibson, provides
2:50
a week's worth of cybersecurity
2:52
news in a small package
2:54
so you can just plug
2:56
in directly and download it
2:58
into your cranium. I am
3:00
just here to help facilitate
3:03
this. I am, how do you
3:05
say, the, let's go with the super
3:08
safe USB flash drive
3:10
that you can plug into your cranium.
3:12
I'm just providing the means by which
3:15
you connect to Steve Gibson, who actually
3:17
is providing all of the information. That's
3:19
the role I play here. Also the
3:21
role of shock and awe because occasionally
3:23
I am gobsmacked
3:26
by what Steve ends up telling us.
3:29
But Steve, it is good to see you again this week. How
3:31
are you doing? Micah, great to
3:33
be with you for our second week in
3:35
a row as Leo finishes working on his
3:38
tan and presumably gets ready
3:40
to return. So
3:43
we're going to do as we thought
3:45
we were going to do last
3:48
week but got pushed because of
3:51
what turned out to be a
3:54
surprisingly interesting episode for our
3:56
listeners, the pass keys versus
3:58
multi-factor authentication. In fact, I
4:01
was so overwhelmed with
4:03
feedback from that and
4:06
it was useful stuff, comments and questions
4:09
and so forth that it
4:11
ended up still being a lot of what
4:13
we end up talking about today just because
4:17
certainly the whole issue of cross-network
4:22
proving who you say you
4:26
are is, which is to
4:28
say authentication, is a big deal and
4:32
important to everybody who
4:35
is using the Internet. So we're going
4:37
to talk about that some more but we are
4:39
going to get to what was supposed to be
4:41
last week's topic which I
4:43
had originally as Morris Roman numeral
4:45
II but I saw that they're
4:47
referring to themselves or their creation
4:50
as Morris II. So
4:53
Morris II is today's number
4:56
966 and counting
4:58
a Security Now podcast
5:01
title. But
5:03
first we're going to talk about how
5:05
it may be that we were
5:07
doing the Requiem for
5:09
Voyager 1 a
5:11
little prematurely. It may
5:14
not be quite dead or insane
5:16
or whatever it was that it appeared
5:18
to be a week ago. So
5:21
the World Wide Web has just turned 35.
5:25
What does its dad think about
5:28
how that's going? What's
5:31
the latest unbelievably
5:33
horrific violation of consumer privacy
5:36
which has come to light?
5:40
We're going to share a lot about what
5:42
our listeners thought about pass keys and multi-factor
5:44
authentication and the ins and outs of all
5:46
that. And then as I promised
5:48
we're going to look at how a
5:51
group of Cornell University researchers
5:53
managed to get today's generative
5:56
AI to behave
5:58
badly. and just
6:01
how much of a cautionary tale this
6:03
may be. So
6:05
I think a lot of interesting stuff for us to talk
6:07
about. Absolutely. Again, shock and
6:09
awe. I can't wait. I'm looking forward
6:12
to it. Before though we
6:14
get to that, we will take a quick break so
6:16
I can tell you about a sponsor of today's episode
6:18
of Security Now. It's brought to
6:20
you by our friends at ITProTV,
6:22
now called ACI Learning. You already
6:25
know the name ITProTV. We've talked
6:27
about them a lot on this
6:29
here network. Now as
6:31
a part of ACI Learning, ITPro
6:33
has expanded its capabilities providing more
6:35
support for IT teams. With
6:38
ACI Learning, it covers all of your
6:40
team's audit, cybersecurity, and information technology training
6:42
needs. It provides you with a personal
6:44
account manager to make sure you aren't
6:46
wasting anyone's time. Your account
6:49
manager will work with you to ensure
6:51
your team only focuses on the skills
6:53
that matter to your organization so that
6:55
you can leave unnecessary training behind. ACI
6:58
Learning kept all the fun and
7:00
the personality of ITProTV while amplifying
7:02
the robust solutions for all your
7:05
training needs. So let
7:07
your team be entertained while they
7:09
train with short format content and
7:11
more than 7,200 hours to choose
7:13
from. Visit
7:19
go.acillearning.com/twit. For
7:21
teams that fill out ACI's form, you can receive
7:23
a free trial and up to 65% off an
7:27
ITPro Enterprise Solution plan. That's
7:31
go.acillearning.com/twit. That's
7:34
T-W-I-T. Thanks
7:36
so much to ACI Learning for
7:38
sponsoring this week's episode of Security
7:40
Now. Let's get back to
7:42
the show with our photo
7:45
of the week. Oh,
7:49
we've got you muted or you've got you muted. One of the two. I'm...
7:53
Nope. You're back, you're back. Oh,
7:55
hello. Let's retake that. Okay. So...
8:00
Hey I have a. Large.
8:02
Collection of photos in my
8:04
archive that. I'm ready
8:06
to deploy on demand, but this
8:08
would just caught me by surprise.
8:11
Somebody tweeted it and I thought
8:13
it was so cute. The caption
8:15
I gave it was wait. You.
8:18
Mean you did Not put this
8:20
wonderful gymnasium on the floor. Because.
8:23
It's the perfect space for a
8:25
slayer. And.
8:27
We. Have we? We have an
8:29
open. Old. School.
8:33
A large computer case.
8:36
Which. Three. Kittens
8:38
have managed to get themselves
8:40
into and a fourth one
8:42
is sort of looking on
8:44
enviously one of them their
8:47
the up. the uppercut looks
8:49
like it's standing on maybe
8:51
an audio. Card or
8:53
I graphics card has been
8:55
added to the case. I
8:57
actually have. A. Number of these
9:00
exact cases I look at it is
9:02
very familiar. the motherboard and there's actually
9:04
pretty ancient solder. Know what the story
9:06
is, where this came from or what's
9:09
going on and and the hard drives
9:11
us look like they are in need
9:13
of some help. In Iowa I just
9:15
have a successor of just and my
9:18
God these little kittens are a daughter.
9:20
So adorable. I. Add they'll
9:22
have me like how could anything be
9:24
that cute as these things are So
9:26
it's not exactly a cat video, but.
9:29
In. All. Chat via video
9:31
Done Security now. Ah. Yes
9:35
indeed Are those adorable little chats that
9:37
uses absolutely keep You know. I like
9:39
to imagine if this is somebody who
9:41
brought in their computer to a place
9:43
and said the something wrong with this
9:45
things that person has cleared out opens
9:47
it up of these little kitten surface
9:49
I'd play ah that's was er og
9:51
a little fur balls in this rat
9:53
kind of. That's what spin rate does
9:55
right it just or kittens inside to
9:58
fix the hard drive. My
10:00
secret formula hard as get
10:02
it out right and are
10:04
a lot because of the
10:06
security is a sin. Okay,
10:08
so we have a quick
10:10
follow up to Ah. As
10:12
I said, our recent perhaps
10:14
premature eulogy for the Voyager
10:17
One spacecraft's ah, it may
10:19
just turn out to a
10:21
been a flesh wound up.
10:23
The team occupying the little
10:25
office space and Pasadena instructed
10:27
Voyager to alter a location
10:29
of it's. Memory in
10:31
what. Everyone who's covering
10:33
this news is calling a
10:35
Poke instruction. Okay now peak
10:38
and Poke where the verbs
10:40
used by some earlier high
10:42
level languages When the code
10:44
wished which in odds that
10:47
that the code which is
10:49
talking in terms of variables
10:51
and not in terms of
10:53
storage wished to either directly
10:55
inspect which was to say
10:58
to peace. Or. To
11:00
directly asked her. To. Poke!
11:02
The. Contents of. Memory.
11:06
So. For the past several months
11:08
there has been a rising fear
11:10
that the world may need to
11:12
say farewell to the Voyager One
11:15
spacecraft after began to send back.
11:18
Just. Garbled data that nobody
11:20
understood. Ben So we were
11:22
say well it's lost his
11:24
mind. It's gonna say it's
11:27
just spitting gibberish, but after
11:29
being poked just right and
11:31
then waiting. What? Twenty
11:33
two and a half hours twice
11:35
I think is the current round
11:37
trip time. the speed of light
11:40
round trip so you know you
11:42
poke and then you're very patient
11:44
of the Voyager One Began to
11:46
read out. The. Data
11:48
from it's flight data subsystem.
11:50
D S D S That
11:52
is basically that it's if
11:54
began doing a memory dump
11:57
and. This brought renewed hope
11:59
that the. Spacecraft is actually
12:01
still. Somewhat. Of.
12:04
A meal miraculously in better
12:06
condition than it was feared.
12:09
in other words, that has
12:11
gone insane. Odds and the
12:13
return of of the flight
12:16
data subsystem memory will allow
12:18
engineers to dig through the
12:20
returned memory read out for
12:22
clues. Although. Paradoxically.
12:25
The data was not stance in
12:28
the format that the Sds is
12:30
supposed to use when is working
12:32
correctly. It is nevertheless readable, so
12:35
you know we're not out of
12:37
the woods yet. And still, it's
12:39
still could be unrecoverable. and this
12:42
is just another one of it's
12:44
death throes Still. And and really,
12:46
I mean realistically, at some point.
12:49
It. Will be. I mean these
12:51
veterans are going to happen turn
12:54
the lights off for the last
12:56
time of and put their office
12:58
space backup release but. Apparently.
13:01
Not yet. So I expect
13:03
it will be Yelp checking in from
13:05
time to time to see how our
13:08
beloved Voyager One Is spacecraft is doing.
13:12
But the games.up yet so it's
13:14
very cool and at this point.
13:17
It. All it's not clear how much. New
13:20
science is being sent back. I'm
13:22
in. It was incredibly prolific while
13:24
I was moving go through the
13:26
planet's of the solar system and
13:28
sending back amazing photos of stuff
13:30
that we'd never see before at
13:32
this point. Know it's sort of
13:34
being kept alive just because. It
13:36
can be so. You. Know
13:39
why not? It's. Not very expensive
13:41
to do. Okay,
13:43
so. The
13:46
Web or officially turned thirty
13:48
five. And it's
13:51
Dad is. Tim
13:53
Berners Lee has renewed
13:55
his his his expression
13:57
of his disappointment. Over.
14:00
How things have been doing recently. Ah
14:02
with this is one of those I'm
14:04
not angry, I'm disappointed situations. I was
14:06
hoping he was happy with us with
14:08
his yeah my son is in as
14:11
I'm disappointed in the way you have
14:13
turned out of so. Ah, but
14:15
when we could go on March twelfth. Tim.
14:17
Wrote. He. Said. Three
14:20
and a half decades ago. When.
14:22
I invented the web which
14:24
in old few people to
14:26
the say he said it's
14:29
trajectory was impossible to imagine.
14:31
There. Was no road map
14:33
to predict the course of
14:36
it's evolution. It was a
14:38
captivating odyssey filled with unforeseen
14:41
opportunities and challenges. Underlying as
14:43
whole infrastructure was the intention
14:45
to allow for collaboration, foster
14:48
compassion, and generate creativity. Okay,
14:50
I would argue that we got two of
14:52
those three of us. What?
14:55
I he says what I term the
14:57
three C's Now of course. A
14:59
lot of this is right. As like
15:01
as easy to rewrite history. Thirty five
15:04
years later, Bucks will see and or
15:06
he says it. Was. Up to
15:08
be. It was to be a
15:10
tool to empower humanity. The
15:13
first decade of the web
15:16
Fulfill that promise. The web
15:18
was decentralized with a long
15:21
tail of content and options
15:23
is created small more localized
15:26
communities provided individuals empowerment and
15:28
fostered huge value. Yes,
15:31
In the past decade, Instead
15:34
of embodying these values, the
15:36
web has instead played a
15:39
part in eroding them. The
15:41
consequences are increasingly far reaching.
15:44
From the centralization of platforms
15:46
to the Ai revolution, the
15:49
web serves as the foundational
15:51
layer of our online ecosystem
15:54
and ecosystems that is now
15:56
reshaping the geopolitical landscape driving
15:59
echo Know. Max ships and
16:01
influencing the lives of people around
16:03
the world. Five. Years
16:05
ago. When. The web turned
16:07
thirty. I called out
16:09
some of the dysfunction caused
16:12
by the web being dominated
16:14
by the self interest of
16:16
several corporations that have eroded
16:18
the webs values and lead
16:21
to break down and harm.
16:23
Now. Five years on as
16:26
we arrive at the webs thirty
16:28
fifth birthday, the rapid advancement of
16:30
ai. has exacerbated
16:32
these concerns brewing that
16:35
issues on the web
16:37
or not isolated but
16:39
rather deeply intertwined with
16:41
emerging technologies. There
16:43
are too clear connected
16:46
issues to address the
16:48
first as the extent
16:50
of power concentration, which
16:52
contradicts the decentralized spirit
16:54
I originally envisioned. If
16:57
indeed he originally did he as
17:00
this has segment of the web.
17:03
With. A fight to keep users
17:05
hooked on one platform. t
17:07
wonder what activists have to
17:10
optimize profits through the passive
17:12
observation of content In all
17:14
like while they drool. This
17:17
exploit of business model is
17:19
particularly grave in this year
17:21
of elections that could unravel
17:24
political turmoil com pounding this
17:26
issue is the second, the
17:28
personal data market. That
17:31
has exploited people's time and data.
17:33
With the creation of
17:36
deep profiles that allows
17:38
for targeted advertising and
17:40
ultimately control over the
17:42
information people are. How
17:45
has this happened? Leadership
17:47
hindered by a lack of
17:50
diversity has steered away from
17:52
a tool for good for
17:54
for public good, and one
17:56
that is instead subject to
17:59
capitalist forces. resulting in
18:01
monopolization. Governance.
18:04
Was should correct for this. Has.
18:06
Failed to do so with
18:08
regulatory measures being outstripped by
18:11
the rapid development as innovation
18:13
leading to a widening gap
18:16
between technological advancements. An effective
18:18
oversight. The future
18:20
he writes hinges on our
18:23
ability to both reform the
18:25
Kurds and create a new
18:27
one, but genuinely serves the
18:30
best interests of humanity. To
18:33
which I've just good would serve here. Good.
18:35
Luck with that way. To.
18:38
Achieve this, He writes. We
18:40
must break down data silos
18:43
to encourage collaboration, create market
18:45
conditions in which a diversity
18:48
of options thrive to fuel
18:50
creativity and shift away from
18:53
polarizing content to an environment
18:55
shaped by a diversity of
18:57
voices and perspectives that nurture
19:00
empathy and understanding. Or.
19:03
We. Could just all watch
19:05
kept videos because he knows
19:07
that was a cute I
19:10
know as he says, to
19:12
truly transform the current system,
19:14
we must simultaneously tackle it's
19:17
existing problems and champion of
19:19
the efforts of those visionary
19:21
individuals who are actively working
19:23
to build a new, improved
19:26
system. A new paradigm is
19:28
emerging, one the places individuals'
19:30
intention rather than attention. At
19:33
the heart of business models,
19:35
freeing us from the constraints
19:37
of the established order and
19:39
returning control over our data.
19:42
Driven by a new generation
19:44
of pioneers, this movement seeks
19:47
to create a more human
19:49
centered web aligned with my
19:51
original vision. These. Innovators
19:54
hail from diverse disciplines,
19:56
research, policy, and product
19:59
design. United in
20:01
their pursuit of a web and
20:03
related technologies that serve and empower
20:05
us all. Blue sky,
20:08
And Mastodon. Don't
20:10
feed off of our
20:12
engagement, but still create
20:14
group formation. Good. Hub
20:17
provides online collaboration tools
20:19
and podcasts. Contributes podcasts
20:22
know tribute to community
20:24
knowledge. As. This
20:27
emerging paradigm gains momentum
20:29
I should mention podcast
20:31
that are disappearing rapidly
20:33
unfortunately. As
20:35
as this emerging paradigm
20:37
gains momentum, we have
20:39
the opportunity to reshape
20:41
a digital future that
20:44
prioritizes human well being,
20:46
equity, and autonomy. The.
20:49
Time to act and embrace
20:51
this transformers of potential. His
20:54
desk What now? Ah, As
20:56
outlined in the Contract for
20:59
the Web. A multitude of
21:01
stakeholders must collaborate to reform
21:03
the web and guide the
21:06
development of emerging technologies. Innovative
21:08
market solutions like those I've
21:11
highlighted are essential to this
21:13
process. Forward thinking legislation. Okay,
21:16
Now there's an oxymoron for
21:18
it from governments worldwide can
21:21
facilitate these solutions that helped
21:23
manage the Courage Sister more
21:25
effectively. Finally, We. As
21:28
citizens all over the
21:30
world needed to be
21:32
engaged and demand a
21:34
higher standards and greater
21:36
accountability for our online
21:38
experiences, that time is
21:40
now to confront the
21:42
dominant system shortcomings, well
21:44
catalyzing, transform it solutions
21:46
that empower individuals. This
21:48
immersion system ripe with
21:50
potential is rising and
21:52
the tools for control
21:54
are within reach. A
21:56
certain. To sell a manifesto and
21:58
upsets it. Really is a
22:01
an ide, I only have a little
22:03
bit more and that I've got I've
22:05
we will will will discuss this part
22:07
of the solution is. So.
22:09
Called solid Protocol
22:12
capitalist capital p.
22:15
A specification and a
22:17
movement to provide each
22:19
person. With. Their own
22:22
person or online data
22:24
store. Known. As a pod.
22:27
P. O D right person
22:29
online data. We can return
22:31
the value that has been
22:33
lost. And restore
22:36
control over personal data.
22:38
By. Putting it in a pod. with
22:41
solid. Individuals. Decide
22:44
how their data is
22:46
managed, used and shared.
22:48
This approach has already
22:50
begun to take root
22:52
as seen in Flanders
22:54
where every citizen now
22:56
has their own pod.
22:58
After Jan Jan been
23:00
announced for years ago
23:02
that all slander citizens
23:04
should have a pass.
23:06
This is A Says
23:08
is the results of
23:10
Data Us ownership. And
23:13
Control. And it's an
23:15
example of the emergent
23:17
movement that is poised
23:20
to replace the outdated
23:22
incumbent system. And.
23:24
Finally, realizing.
23:26
This emerged movement. What Just
23:29
Happened Boise right about that.
23:31
oh I mean he. He
23:33
says it requires support for
23:35
the people reading the reform.
23:38
From researchers to adventures to
23:40
advocates we bust amplify and
23:42
promote these positive use cases
23:45
and works two sister a
23:47
collective mindset of global citizens.
23:49
The Web Foundation that I
23:51
cofounded with Rosemary Lay has
23:54
and will continue to. Support
23:56
and accelerate this emergent system
23:58
and the people. behind it. However,
24:01
there is a need, an
24:04
urgent need, for others to do the
24:06
same, to back the
24:08
morally courageous leadership that
24:10
is rising, collectivize
24:13
their solutions, and
24:15
overturn the online world being
24:17
dictated by profit to one
24:20
that is dictated by the
24:22
needs of humanity. It
24:25
is only then that the online
24:27
ecosystem we all live in will
24:29
reach its full potential
24:32
and provide the foundations
24:34
for creativity, collaboration,
24:38
and compassion. Tim
24:40
Berners-Lee, 12th March, 2024. Well
24:47
you've got your pod, right? Call
24:50
me jaded, call me
24:52
old, but I do
24:54
not see any way
24:58
for us to get from where we
25:00
are today to anything
25:02
like what Tim envisions.
25:06
The web has been captured.
25:09
Hook, line, and sinker by
25:12
commercial interests and they are never
25:14
going to let go. Diversity?
25:19
Well, one browser most of
25:21
the world uses is maintained
25:23
by the world's largest advertiser.
25:26
And no one forced that to happen. For
25:29
some reason, most people apparently
25:31
just like that colorful round
25:34
chrome browser icon. You
25:36
know, and chrome is cleaner looking. Its
25:39
visual design is appealing. Somehow
25:42
the word spread that it was a
25:44
better browser and nothing convinced
25:46
people otherwise. And
25:49
what Microsoft has done to their
25:51
Edge browser would drive anyone anywhere
25:53
else. But I've wandered away
25:55
from my point. People
25:58
do not truly care. about
26:01
things that they neither see
26:04
nor understand. You know, how do
26:06
you care about something that you
26:08
don't really understand? The
26:11
technologies that are being
26:13
used to track us around
26:15
the internet and to collect data
26:18
on our actions are both unseen
26:21
and poorly understood. People
26:23
have some dull sense that
26:25
they're being tracked, but only because
26:28
they've heard it said so many times,
26:30
oh I'm being tracked, you know, but
26:32
they don't know, they don't see it,
26:34
they just kind of think, okay, it
26:36
makes it feel uncomfortable, but they still
26:38
do what they were doing, you know,
26:41
they don't have any idea what that
26:43
really means. They certainly have no idea
26:45
about any of the details and
26:47
they have better things to worry
26:49
about. Yes, most importantly they have better things
26:51
to worry about. Tim
26:54
writes, part of the solution is
26:57
the solid protocol, a specification and
26:59
a movement to provide each person
27:02
with their own personal online data
27:04
store known as a pod. We
27:07
can return the value that has
27:09
been lost and restore control over
27:12
personal data. Now, okay,
27:15
while I honor Tim's spirit and
27:18
intent, I really do. I
27:21
seriously doubt that almost anyone
27:24
could be bothered to
27:26
exercise control over
27:28
their online personal data repository. I mean,
27:31
I don't even know what that looks
27:33
like. The listeners
27:35
of this podcast would likely be
27:37
curious to learn more, but
27:39
as one of my ex-girlfriends used to say,
27:43
we're not normal. My
27:47
feeling is that the web is
27:49
going to do what the web is
27:51
going to do. Yes, there are things wrong
27:53
with it and yes, it
27:55
can be deeply invasive of
27:57
our privacy, but it
28:00
all also appears to be largely
28:03
self-financing, apparently
28:06
at least in part thanks
28:08
to those same privacy invasions.
28:11
We pay for bandwidth
28:13
access to the Internet, and
28:16
the rest is free. Once
28:19
we're connected, we have
28:21
virtually instantaneous and unfettered
28:23
access to a truly
28:25
astonishing breadth of information,
28:27
and it's mostly
28:30
free. There are some annoying
28:32
sites that won't let you in without paying, so
28:34
most people simply go elsewhere. The
28:37
reason most of the Web is free
28:40
is that with a few exceptions, such
28:42
as Wikipedia, for-profit
28:45
commercial interests see an
28:47
advantage to them for
28:49
providing it. Are
28:52
we being tracked in return? Apparently.
28:55
And if that means we get everything for
28:57
free, do we really care? If
29:01
having the Internet know whether I
29:03
wear boxers or briefs means
29:06
that all of this is opened
29:08
up to me without needing to
29:10
pay individually for every site I
29:12
visit, then OK, briefs
29:14
have always been my thing. wisdom
29:18
may have invented the
29:20
World Wide Web 35 years
29:23
ago, but he certainly did
29:25
not invent what the Web has
29:27
become. That of course is
29:29
why he's so upset. The
29:32
Web has utterly outgrown its
29:34
parent, and it's finding its
29:36
own way in the world. It
29:39
is far beyond discipline and
29:41
far beyond control, and most
29:43
importantly of all, today it
29:46
is already giving most people
29:49
exactly what they want. Good
29:52
luck changing that. you
30:00
go, you said maybe you're jaded and old and
30:02
this and that and the other. I may be
30:05
jaded but I'm not exactly aged.
30:09
So even as
30:11
a relative youth,
30:15
hearing that, you know, I want to,
30:17
I don't know, put on a French
30:20
beret and, you know,
30:22
chant and say, hoorah, and feel
30:25
it and I do feel it but
30:27
I think realistically it is
30:30
not realistic if
30:32
you're being honest. And so as
30:35
cool as that would be and as amazing as that would
30:37
be, yeah, ultimately what you're
30:39
saying about the
30:42
stuff that Tim is talking about
30:45
here, you know, I should, I could say
30:47
Tim Berners-Lee is talking about here, is so
30:51
abstracted from how people
30:53
use these devices to
30:56
connect to the internet and to,
30:58
you know, communicate with one another
31:00
that, yeah, it would require some
31:02
level of sitting everyone down
31:04
across the entire world and explaining
31:06
to them how all of this
31:08
works for there to
31:10
be even the beginning of a
31:12
concern about what would be necessary
31:15
to convince everybody that they should
31:17
care about this. And as
31:19
I'm saying just then, you heard all the hedging
31:21
that kind of took place there. It wouldn't even
31:23
necessarily make a difference even
31:25
if you did explain it because they still have to
31:28
care about it and most importantly,
31:30
most people don't need to care
31:32
about it and so they
31:35
and they have bigger, better things
31:38
in their world that they have to care
31:40
about and that is I think always going
31:43
to be the case and
31:45
that's, you know, the
31:47
people who do care about this stuff,
31:50
we do our best to communicate and
31:52
educate but yeah,
31:54
I don't know. I mean, as much as you
31:57
might, I mean, I don't know. to
32:00
take the time to write
32:02
all this out and to put forth this idea,
32:05
I think is a very noble thing. But
32:07
I do wonder, I wish
32:10
I could talk to Tim Berners-Lee sort of just
32:12
face to face and say, What are you thinking?
32:14
Yeah, what do you think? Do you really think
32:16
that anyone's going to do this? Or you just,
32:18
this is just a hopeful sort of, I'm
32:20
putting it out into the world. Like, how
32:22
high is that ivory bar? Exactly.
32:25
We're down here. Yeah.
32:27
I don't know. Yeah.
32:32
And again, I
32:34
really do believe that
32:37
most users' wishes are
32:39
now being fulfilled. You
32:43
know, I mean, my wife asks me
32:45
a few questions every evening. And I
32:47
say, Well, did you Google it? You
32:49
know, it's like, that's what I do. I just
32:52
ask, I ask the internet, and it
32:54
tells me the answer. Because there's so
32:56
much going on. It's so complicated now,
32:59
that the right model is
33:01
no longer to try to know everything.
33:04
It's simply to know how to find
33:06
out, you know, that's the future with
33:08
the knowledge explosion that we're now
33:11
in, and the content explosion.
33:14
So I just, I,
33:16
again, I, you know, there was
33:18
an anecdote for a while. I don't remember now
33:21
exactly what it was. But it was something like,
33:23
I have, would you, back
33:26
when people were typically using
33:28
a single password, like universally
33:30
for all their stuff, someone
33:34
did an experiment where they went up to
33:36
people and said, here's a
33:38
lollipop. I'll
33:40
trade you for your password. And
33:42
most people said, okay, you
33:44
know, I mean, they just
33:47
didn't give a rat's
33:49
ass, you know, about security.
33:52
Right. And most people just
33:55
aren't as focused. I mean, This
33:57
podcast is all about this kind of focus. And
34:00
as I said and ex girlfriend
34:02
used to say to be years
34:04
not normal. So yeah well we're
34:06
not but it off most the
34:08
world they just throw that the
34:10
internet does what they want and
34:12
start asking them to pay right
34:15
in some some significant way. I
34:17
mean look at the club, look
34:19
at club twit. Yeah I mean.
34:22
It's it's an honorary small percentage of
34:24
the overall listener base. Yes, and I
34:26
would you say it's a pain. Some
34:28
significant way anecdotally is you have to
34:30
be significant amount of payment for somebody
34:32
to go. Well, certainly don't care about
34:34
that anymore. Ice there. But a number
34:36
of like an app. Oh
34:38
I saw that everybody's posting these photos of
34:40
themselves have been a i generate a how
34:42
do they do that I say oh it's
34:45
this app and you pay like fifty cents
34:47
to act get a photo generation out of
34:49
mine are care about that Importance. Of
34:53
I see us as citizens. I said
34:55
much for them to be like none
34:57
An unannounced. That's something. I'm in L
34:59
A N. As we know there are
35:01
some sites which have survived in the
35:03
young is in the pay to enter
35:05
model but many of the early attempts
35:07
cel flat because the moment that you
35:10
know sites put up a pay wall
35:12
most people says up in off I
35:14
clicked the first link in Google and
35:16
it took me to the paywalls. What's
35:18
the second link take me to? Oh
35:20
look that's free. Able
35:23
to get to it. You okay?
35:26
So. Ah, In
35:30
a show notes idea that title of
35:32
this next bit of news the title.
35:34
Wow. Just Wow.
35:38
Because it tells the story
35:40
of something that so utterly
35:42
violating of consumer rights and
35:45
privacy that. It. Needed
35:47
that that title the
35:49
headline. In. Last week's
35:51
New York Times read: Ah,
35:54
Auto. Makers are sharing
35:56
consumers driving behavior with
35:59
In. insurance companies. And
36:04
the sub had read
36:06
LexisNexis, which generates consumer
36:08
risk profiles for insurers,
36:11
knew about every trip
36:14
GM drivers had taken in
36:16
their cars, including
36:18
when they sped, braked
36:21
too hard, or
36:23
accelerated rapidly. Wow. Okay,
36:27
so it's astonishing.
36:29
Wow. I
36:32
know, exactly. Here's the real
36:34
world event that the New York Times
36:36
used to frame their disclosure. They
36:39
wrote, Ken Dahl says,
36:42
that's not D-O-L-L,
36:44
that's D-A-H-L, that
36:47
is K-E-N-N, Ken
36:49
Dahl says he
36:52
has always been a careful driver. The
36:55
owner of a software company near
36:57
Seattle, he drives at least Chevrolet
36:59
Bolt. He's never been
37:01
responsible for an accident. So
37:04
Mr. Dahl, at age 65, was
37:06
surprised in 2022 when
37:08
the cost of his car insurance
37:10
jumped by 21%. Quotes
37:14
from other insurance companies were also high. One
37:17
insurance agent told him his
37:20
LexisNexis report was a factor.
37:24
LexisNexis, they write, is
37:26
a New York based global data broker
37:29
with a risk solutions division
37:31
that caters to the auto
37:34
insurance industry and has traditionally
37:36
kept tabs on car accidents
37:38
and tickets. Okay, right. Public
37:42
record things, right? I mean like accidents
37:44
and tickets, that's out there.
37:47
Upon Mr. Dahl's request,
37:50
LexisNexis sent him a 258
37:52
page consumer disclosure report, which
37:59
it must be. must provide per
38:01
the Fair Credit Reporting
38:03
Act. What
38:05
it contained stunned him.
38:08
More than one hundred and thirty pages detailing
38:11
each time he or his wife had driven
38:13
the bolt over
38:20
the previous six months. It
38:23
included the dates of six hundred
38:25
and forty trips, their
38:27
start and end times, the
38:30
distance driven, and an accounting
38:32
of any speeding, hard
38:35
braking, or sharp accelerations.
38:39
The only thing it didn't have is where they
38:41
had driven the car. On
38:43
a Thursday morning in June, for example,
38:45
the car had been driven seven point
38:47
three three miles in eighteen
38:50
minutes. There had
38:52
been two rapid accelerations and
38:54
two incidents of hard braking.
38:57
According to the report, the trip
39:00
details had been provided by General
39:02
Motors, the manufacturer of
39:04
the Chevy Bolt. Nexus-Lexus
39:07
analyzed that driving data to
39:09
create a risk score, quote,
39:12
for insurers to use
39:15
as one factor of many to
39:18
create more personalized insurance
39:20
coverage, according to a
39:23
Lexus-Lexus spokesman, Dean Carney.
39:27
Eight insurance companies had
39:29
requested information about Mr.
39:31
Dahl from Lexus-Lexus over
39:33
the previous month.
39:36
Mr. Dahl said it felt like
39:38
a betrayal. They're
39:41
taking information that I didn't
39:43
realize was going to be
39:45
shared and screwing with our
39:48
insurance. Okay,
39:50
now, since this
39:52
behavior is so horrifying, I'm going to
39:54
share a bit more of what the
39:56
New York Times wrote. They said, years.
40:01
Insurance companies have offered incentives
40:03
to people who install dongles
40:05
in their cars or download
40:07
smartphone apps that monitor their
40:09
driving, including how much they
40:11
drive, how fast they take
40:13
corners, how hard they hit
40:15
the brakes, and whether they
40:17
speed. But
40:20
quote Ford
40:22
Motor put it, drivers are
40:25
historically reluctant to participate in
40:27
these programs. And
40:30
this occurred, this was written in
40:32
a patent application that describes what
40:34
is happening instead. Quote, car
40:37
companies are collecting information
40:40
directly from internet
40:42
connected vehicles for
40:44
use by the
40:46
insurance industry. In other words,
40:49
monetizing, right? Because you
40:51
know the insurance industry is paying to
40:54
receive that information. So
40:57
another means by which
40:59
today's consumer
41:02
is being monetized without
41:05
their knowledge. New
41:07
York Times says sometimes
41:10
this is happening with a driver's
41:12
awareness and consent. Car
41:14
companies have established relationships with
41:16
insurance companies so that if
41:19
drivers want to sign up
41:21
for what's called usage-based
41:24
insurance, where rates
41:26
are set based on monitoring of
41:29
their habits, it's easy to
41:31
collect that data wirelessly from their
41:33
cars. But in
41:35
other instances, something much sneakier
41:37
has happened. Modern
41:40
car companies are, I'm
41:42
sorry, modern cars are internet
41:45
enabled, allowing access
41:47
to services like navigation,
41:50
roadside assistance, and car apps that
41:52
drivers can connect to their vehicles
41:54
to locate them or unlock them
41:56
remotely. In recent
41:59
years, auto Automakers including
42:01
GM, Honda, Kia and
42:03
Hyundai have started
42:05
offering optional features in their
42:07
connected car apps that rate
42:10
people's driving. Some
42:12
drivers may not realize that
42:14
if they turn on these
42:16
features, the car companies then
42:18
give information about how they
42:20
drive to data brokers like
42:23
Lexus Nexus and again, not
42:25
give, sell. Drivers
42:29
and data brokers that have
42:31
partnered to collect detailed driving
42:33
data from millions of Americans
42:36
say they have drivers' permission
42:38
to do so, but
42:41
the existence of these partnerships
42:43
is nearly invisible to drivers
42:45
whose consent is obtained in
42:48
fine print and murky
42:51
privacy policies that few ever
42:53
read. The
42:55
troubling is that some
42:57
drivers with vehicles made by
42:59
GM say they were tracked
43:02
even when they did not turn
43:04
on the feature called
43:07
OnStar Smart Driver and
43:10
that their insurance rates went up as
43:13
a result. So,
43:15
I do have a problem with that last bit.
43:18
I simply because someone
43:20
says that, I almost wish that there
43:22
was some due diligence there. I'm
43:25
sure you've seen it. Somebody
43:28
is having an issue, a tech issue and you say, oh, here's
43:31
how you fix it and they say I've done that and then
43:33
you go and check and they didn't do that thing that you
43:35
told them to do and that they should have done it. I
43:38
wouldn't be surprised that they
43:41
did accidentally opt in, but all
43:44
that's to say whether you opt in or
43:46
not, this is still something that should be
43:48
brought to light and as for all of
43:50
it, especially if it's being put
43:52
forth as an idea of,
43:56
oh, here are these cool features you get and
43:58
secretly underneath what it's doing is giving out. Yeah,
44:00
that's bad. I just I don't know if I
44:02
like that from the New York Times there at
44:04
the end. Well, so
44:07
one thing, one analogy that occurs
44:09
to me is how, and
44:12
we mentioned this a number of times in prior
44:16
years during the podcast, is
44:20
employees in an organization
44:22
sometimes believe that what
44:24
they do on their corporate computer is private,
44:29
is like their business.
44:32
And even when the
44:34
employee agreement and occasional
44:37
reminder meetings and so forth say
44:39
that's not the case. That
44:42
this is a corporate network, corporate bandwidth,
44:44
a corporate computer, and what you
44:47
do is owned by the company. So
44:50
we have suggested that that really
44:52
ought to be on a
44:54
label running across the top of their
44:56
monitor. Yes. Like it literally
44:58
ought to say right in front of them,
45:02
please remember that everything you do on
45:04
this computer, which is owned by the
45:07
company, on the bandwidth owned by the
45:09
company, and the data owned by the
45:11
company is not private.
45:13
Right. But actually,
45:16
by analogy, the screens that
45:18
all these computers have, imagine
45:20
if they said along the
45:22
bottom, your driving
45:24
is being monitored by the
45:27
company you purchased this from
45:29
and is being sold to
45:31
your car insurance provider. They
45:34
don't want to do that, Steve.
45:36
Obviously, we're never going to see
45:38
that. But
45:41
that's the point, is that
45:43
this is going on surreptitiously,
45:45
and it being surreptitious
45:48
is clearly wrong. So
45:52
anyway, stepping back, okay,
45:54
from the specifics of this
45:56
particularly egregious behavior, add... the
46:00
context of Tim Berners-Lee's unhappiness
46:02
with what the web has
46:04
become, and the
46:06
growing uneasiness over the algorithms being
46:09
used by social media companies to
46:11
enhance their own profits, even
46:14
when those profits come at the cost
46:16
of the emotional and mental health of
46:18
their own users. We
46:21
see example after
46:23
example of amoral,
46:26
aggressive profiteering by
46:28
major enterprises where
46:31
the operative philosophy appears to
46:33
be, we'll do
46:36
this to make as much money as
46:38
we can, no matter
46:40
who is hurt, until
46:42
the governments in whose jurisdictions
46:44
we're operating get around
46:47
to creating legislation
46:50
which specifically prohibits our
46:52
conduct. But until
46:54
that happens, we'll do everything
46:56
we can to fight
46:58
against those changes, including where
47:01
possible, lobbying those governmental
47:03
legislators. Honestly, we could
47:05
just take that text
47:08
and slap it on the screen
47:10
anytime we talk about any antitrust
47:12
legislation across any of our shows,
47:14
and that perfectly sums up exactly
47:16
what's going on in every single
47:18
case. It's like make us
47:21
stop. Yeah,
47:23
exactly. And until you do,
47:25
we're going to use every clever
47:27
means we have of profiting from every area
47:35
in which we have not been
47:38
made to stop. There's
47:40
no more morality. There's no
47:42
more ethics. It's
47:46
profit wherever possible. And
47:50
that is exactly Tim Berners-Lee's
47:54
complaint, and
47:56
it's never going to change. It's
48:00
just too pervasive, right? I mean it's just,
48:02
you know. And again, as
48:05
a consequence of this, you know,
48:08
the Internet is largely free. And
48:11
I think that's a trade-off most people would
48:13
choose to make rather than having
48:15
to pay like, you
48:18
know, remember that there was early
48:20
talk about micropayments where when you
48:22
went to a website, it would,
48:24
you know, ding you some fraction
48:26
of something or other. And
48:29
that would make people very uncomfortable. They'd be
48:31
like, well, wait a minute, you know, suddenly
48:33
links are not free to click on. Yeah,
48:35
exactly. There's a cost to clicking on that
48:37
link. How many times per month should I
48:39
click on this? So then you've got, you're
48:42
telling your kids, don't click on links.
48:44
And it would just completely reshape everything.
48:47
And then there would be so many, I can't
48:50
imagine how much more money and time would
48:52
have to go
48:55
into customer support
48:57
because someone would click on a link
48:59
and then say, I'm not satisfied with
49:01
this page. And I don't want to
49:03
have paid for this page because I
49:05
didn't get the answer I wanted. That's
49:07
a very good point. That's a very
49:09
good point because right now it's like,
49:11
well, it's free. So you know, go
49:13
pound sale. Yeah, somewhere, you know, like
49:15
tough. Yeah.
49:18
Yeah. Well,
49:20
again, I don't mean to be
49:22
just simply complaining because I also
49:25
recognize, as I said, this is
49:28
why the web is here. I mean, in
49:31
the, I was present during like pre-web
49:33
during the early days and when there
49:36
was like not much on the net.
49:39
And the question was, well, why
49:42
is anyone going to put anything on
49:44
the internet? Because there's nobody
49:47
on the internet to go see it. So
49:50
there was like this chicken and egg problem, right?
49:53
No one's at no like
49:55
vast population or you are
49:57
actually going onto the internet
50:00
do anything, so why is anyone going
50:02
to put anything there? And if no one
50:04
puts anything there, then no one is going
50:06
to be incentivized to go and get what's
50:09
not there. So it
50:11
happened anyway. And the
50:13
way it's evolved, as I said, I'm
50:16
really, I'm actually not complaining. This is not a,
50:18
I mean, it sounds like I'm, you know, doing
50:21
some holier than now rant. It's not
50:23
the case. I like it the
50:25
way it is. And those
50:27
of us who are clever enough
50:29
to mitigate the tracking that's being
50:32
done and the monetizing of ourselves,
50:34
well, you know, we get the benefit that
50:36
is being reaped by all those who aren't.
50:39
So it works. And
50:42
I think on that note, we should take our
50:44
second break. Let us take a break so I
50:46
can tell you about Delete Me, who is bringing
50:49
you this episode of Security Now. If
50:51
you've ever searched, how appropriate,
50:53
for your name online and you were
50:55
shocked like I was to see how
50:57
much of your personal information is actually
51:00
out there, well, that is
51:02
where Delete Me comes into
51:04
play. It helps reduce the
51:06
risk for identity theft, credit
51:08
card fraud, robocalls, cybersecurity threats,
51:10
harassment, and unwanted communications overall. I've
51:13
used this tool personally. And as a
51:15
person who, you know, has an online
51:17
presence, I've written for sites, I've got
51:20
lots of news,
51:22
video out there, and then of course my
51:24
work here at Twit and other places doing
51:26
podcasting. I didn't want that part of me
51:28
to disappear from the internet. And it doesn't
51:30
have to. Delete Me will go and look
51:32
at those different data brokers and those different
51:35
sites that are storing my information like where
51:38
I live, where I have lived, who
51:40
my relatives are, that kind of information,
51:42
and make sure that that was removed. And
51:45
I have found it very easy to do.
51:49
And I love that they keep me updated
51:51
as they continue to remove stuff. See, the
51:53
first step for Delete Me, if you decide
51:55
you want to use it, is to sign
51:57
up and submit some of your basic personal
51:59
information for removal. You got to tell them who you
52:01
are so they know what to look for and actually find
52:03
it and get rid of it Then
52:06
delete me experts will find and remove
52:08
your personal information from hundreds of data
52:10
brokers Helping reduce your online footprint and
52:12
keeping you and your family safe Most
52:15
importantly delete me continues to scan
52:18
and remove your personal information regularly
52:21
Because those data brokers will continue
52:23
to scoop it up and put
52:26
it back into those data broker
52:28
sites This includes addresses photos emails
52:30
relatives phone numbers social media property
52:33
value and more If
52:35
you look up yourself, you're probably gonna see
52:37
some stuff out there and go what in
52:39
the world Why is that out there and
52:42
since privacy exposures and incidents affect individuals differently?
52:44
Their privacy advisors ensure that customers
52:46
have the support they need when
52:49
needed So protect yourself and
52:51
reclaim your privacy by going to join
52:53
at delete me comm Twit
52:56
and using the code TW IT
52:58
twit that's join delete me comm
53:01
slash twit with the code twit
53:05
For 20% off. Thank you
53:08
to delete me for sponsoring this
53:10
week's episode of security now We
53:13
are back to the show Steve
53:16
Gibson, let's close that loop let's
53:18
do it so Montana
53:21
Jay he wrote Hey a
53:23
flaw in passkey thinking I
53:26
teach computer science at a college like
53:29
many in the educational field I log on
53:31
to a variety of computers a day that
53:33
are used by myself fellow
53:36
instructors and students Using
53:38
a passkey in this environment would
53:40
allow others to easily gain access
53:42
to my accounts Not a
53:44
good thing. So turning off
53:47
passwords is not an option. Just
53:49
something to think about Jim Okay,
53:52
right. Well It's
53:55
a very good point which I tend to
53:57
forget since none of my
53:59
computers are shared. But
54:02
in a machine sharing environment there
54:04
are two strong options. FIDO
54:07
in a dongle is
54:09
one way to obtain the benefits
54:11
of passkey-like public key identity
54:14
authentication while achieving portability.
54:18
Your passkeys are all loaded
54:21
into this dongle and
54:23
that's what the website uses. But
54:25
also, reminiscent of the
54:28
way I designed squirrel originally, a smartphone
54:32
can completely replace a
54:35
FIDO dongle to serve
54:37
as a passkeys authentication
54:39
client by using the
54:41
QR code presented by
54:44
a passkeys website. And
54:46
in that model, passkeys probably
54:49
provides just about the
54:51
best possible user experience
54:53
and security for shared
54:55
computer use. So you go
54:58
to a site, you log in,
55:02
only with giving them your user name. At
55:04
that point, the site
55:06
looks up your user name, sees that you
55:08
have registered a passkey with the site and
55:11
moves you over to the passkey
55:13
login. Part of that will
55:15
be a QR code. You take your
55:17
Android or Apple
55:20
phone, open
55:22
the passkey app, let it see the
55:24
QR code and you're logged in.
55:27
So that computer
55:29
and the web browser never
55:32
has access to your passkeys, they remain
55:34
on your phone. So it's
55:36
absolutely possible to get all the
55:38
benefit of passkeys in a shared
55:43
usage model with arguably the
55:45
best security around. is
56:01
done, logs out, that another
56:03
person could sit down and log in because of
56:05
a passkey. Is it, how is that, I don't
56:07
understand how that would even work. Is Jim
56:10
suggesting that there's a... I believe it's
56:12
because last week one of the things
56:14
we talked about was
56:17
with passkeys being stored in a
56:19
password manager in the browser. Ah,
56:23
but you would... So, okay, so if you forgot
56:25
to log out of the... Oh, I see. If
56:28
it's in the browser and this person uses
56:30
the browser. Okay, gotcha. That makes sense. Okay,
56:33
got it. Yeah, so the
56:35
idea is you don't want obviously
56:37
a shared machine to store
56:40
anyone's passkeys. You want
56:42
that to all be provided
56:47
externally on the fly. Yeah, I mean
56:49
in theory you also don't want an
56:51
in-browser system storing passwords
56:53
if you're in a shared environment. I
56:55
agree completely. Exactly, like there should be
56:57
no password manager and like, you know,
56:59
would you like me to remember this
57:01
password for you. I don't
57:04
even know if there's a way to turn
57:06
that off, but that should be like, heck no,
57:08
forced off. Yeah, so that it's like you
57:10
can't even ask you. Gilding
57:14
Timings, he wrote, hey Steve,
57:17
I just finished watching episode 965 on passkeys
57:20
versus two-factor authentication. I
57:23
was wondering, don't passkeys
57:25
just change who
57:27
is responsible for securing
57:30
your authentication data? With
57:32
passwords and two-factor authentication, the
57:34
responsibility is with the
57:37
website. With passkeys, the
57:39
responsibility is with the tool
57:42
storing the passkeys. For example, a
57:44
password manager. If the password
57:46
manager is compromised, an attacker
57:48
has all they need to authenticate
57:50
as you. So again, we're talking
57:52
about storing passkeys in the password
57:54
manager, which is something that we
57:56
talked about last week, which is
57:58
why, you know, our
58:00
listeners are coming back with questions
58:02
about this practice, you know, deservedly
58:05
so. So,
58:07
he says, if the password manager is compromised,
58:09
an attacker has all they need to authenticate
58:12
as you. I would think
58:14
that if the website doesn't
58:17
allow disabling password authentication, then
58:20
two-factor authentication still has some
58:22
value if we're talking
58:24
about password managers being compromised. And
58:26
of course, there he's talking about
58:29
external storage of
58:31
the two-factor authentication code like in your
58:33
phone, which is again something we've also
58:36
talked about in the past. He
58:38
said, you can at least store
58:40
the two-factor authentication data separately from
58:43
your password manager. He says, I'm
58:45
loving Spinrite. It's already come in handy
58:47
multiple times. He's Spinrite
58:49
6.1, he said, thanks so much for
58:51
continuing the show. I look forward to
58:54
it every week. Okay, so first, thank
58:56
you. After three years of work on it, I
59:00
certainly appreciate the Spinrite feedback and I'm delighted
59:02
to hear that it's come in handy. So,
59:05
here's the way to think about authentication security.
59:09
All of the authentication
59:12
technology in use
59:14
today requires the
59:16
use of secrets that
59:18
must be kept, all of it. The
59:22
primary difference among
59:24
the various alternatives is
59:27
where those secrets are kept and
59:29
who is keeping them. In
59:32
the username password model, assuming
59:35
the use of unique and
59:37
very strong passwords, the
59:40
secrets must be kept at
59:43
both the client's end so
59:45
that they can provide the secret and
59:48
the server's end so
59:50
that it can verify the secret
59:52
provided by the client. So,
59:55
we have two separate locations where
59:57
secrets must be kept. By
1:00:00
comparison, thanks to
1:00:03
Passkey's entirely different
1:00:05
public key technology, we've
1:00:08
cut the storage of secrets
1:00:10
in half. Now
1:00:14
only the client side needs
1:00:16
to be keeping secrets, since
1:00:18
the server side is only
1:00:21
able to verify the client's
1:00:23
secrets without needing to
1:00:25
retain any of them itself. So
1:00:29
it's clear that by cutting
1:00:31
the storage of secrets in half,
1:00:33
we already have a much
1:00:36
more secure authentication solution. But
1:00:39
the actual benefit is far greater than
1:00:41
50 percent. Where
1:00:44
does history teach us the
1:00:46
attacks happen? When
1:00:49
the infamous bank robber Willie Sutton was
1:00:51
asked why he robbed banks, his answer
1:00:53
was obvious in retrospect, he said, because
1:00:56
that's where all the money is. For
1:00:59
the same reason, websites are
1:01:02
attacked much more
1:01:04
than individual users because
1:01:06
that's where all the authentication secrets
1:01:08
are stored. So
1:01:10
when the use of Passkey's cuts
1:01:13
the storage of authentication secrets by
1:01:15
half, the half that it's
1:01:17
cutting is where nearly all
1:01:19
of the theft of those secrets
1:01:21
occurs. So the
1:01:23
practical security gain is far more
1:01:26
than just 50 percent. Now
1:01:29
our listener said, I would
1:01:31
think that if the website doesn't
1:01:33
allow disabling password authentication, then two-factor
1:01:35
authentication still has some value if
1:01:37
we're talking about password managers being
1:01:39
compromised. You can at least store
1:01:41
the two-factor authentication data separately from
1:01:43
your password manager. That's
1:01:45
true, and there's no question that
1:01:48
requiring two secrets
1:01:51
to be used for a single authentication
1:01:53
is better than one, and
1:01:56
that storing those secrets separately is
1:01:59
better But
1:02:01
as we're reminded by the needs
1:02:03
of the previous listener who works
1:02:05
in a shared machine environment, just
1:02:08
like two-factor authentication, PAS keys
1:02:10
can also be stored in
1:02:13
an isolated smartphone and
1:02:15
thus kept separate from the browser. Having
1:02:18
our browsers or password manager
1:02:21
extensions storing our authentication data
1:02:24
is the height of convenience and
1:02:27
we're not hearing about that actually
1:02:30
ever having been a problem. That is
1:02:32
to say browser
1:02:35
extension compromise. That's
1:02:39
very comforting. But
1:02:41
a separate device just feels as
1:02:44
though it's going to provide more
1:02:46
authentication security if only in theory.
1:02:49
The argument could be made that
1:02:51
storing PAS keys in a smartphone
1:02:54
still presents a single point of
1:02:56
authentication failure. But it's
1:02:58
difficult to imagine a more secure
1:03:00
enclave than what Apple provides,
1:03:03
backed up by per
1:03:06
use biometric verification
1:03:08
before unlocking a PAS
1:03:10
key. So the
1:03:13
strongest protection I think you can get
1:03:15
today. Mike
1:03:18
Sheppers says, hi Steve.
1:03:21
I'm a long time listener of security now
1:03:23
and love the podcast. Thank you so much
1:03:25
for all your contributions for making this world
1:03:27
a better place and freely giving your expertise
1:03:29
to educate many people like myself. I
1:03:32
do have a question for you related to
1:03:34
PAS keys, episode 965, that I'm hoping you can
1:03:38
help me understand. There
1:03:40
are many accounts that my wife and I
1:03:43
share for things like
1:03:45
banking and health benefits websites where
1:03:47
we both need access to the
1:03:49
same accounts. If
1:03:52
they were to use only PAS
1:03:54
keys for authentication, is
1:03:56
sharing possible? Thank
1:03:58
you, Mike. word? Yes.
1:04:02
Whether path keys are
1:04:04
stored in a browser-side
1:04:07
password manager or in your smartphone,
1:04:10
the various solutions have
1:04:13
all recognized this necessity
1:04:16
and they provide some means for
1:04:18
doing this. For example, in the
1:04:20
case of Apple, under
1:04:22
Settings, Passwords, it's
1:04:25
possible to create a shared
1:04:28
group for which you and
1:04:30
your wife would be members. It's
1:04:33
then possible for members of the group
1:04:35
to select which of their passwords they
1:04:37
wish to share in the group. And
1:04:40
Apple has seamlessly extended this
1:04:42
so that it works identically
1:04:44
with pass keys. Apple's
1:04:47
site says, shared password
1:04:49
groups are an easy and
1:04:51
secure way to share passwords
1:04:53
and pass keys with your
1:04:55
family and trusted contacts. Very
1:04:58
trusted. Anyone in the
1:05:00
group can add passwords and pass
1:05:03
keys to the group share. When
1:05:05
a shared password changes, it changes
1:05:08
on everyone's device. So it's
1:05:10
a perfect solution. And yes, it
1:05:13
appears to be universal. So pass
1:05:16
key sharing has been provided.
1:05:21
Senraeth says, well,
1:05:24
we got
1:05:26
a posting or I got a tweet
1:05:28
from him. And there's
1:05:32
been such an outsized interest shown
1:05:34
in this topic by our listeners
1:05:36
that I wanted to share his
1:05:39
restatement and summary of the situation, even
1:05:41
though it's a bit redundant so that
1:05:43
everyone could just kind of check their
1:05:45
facts against the assertions that he's making.
1:05:48
He said, Hi, Steve, just listen
1:05:50
to SN 965 and
1:05:53
have a thought about pass key security.
1:05:55
Completely agree with your assessment of
1:05:57
the security advantages of pass key.
1:06:00
keys versus passwords and multi-factor
1:06:02
authentication in general. But
1:06:05
another practical difference occurs to me when
1:06:07
using a password manager to store your
1:06:10
pass keys. With
1:06:12
password plus MFA, if
1:06:15
your password manager is breached somehow,
1:06:17
you could still rest easy knowing
1:06:19
that only your passwords were compromised,
1:06:22
again, assuming multi-factor authentication is
1:06:24
in a separate device, right?
1:06:27
Because password managers are now
1:06:29
offering to do your multi-factor
1:06:31
authentication for you
1:06:34
too. He said, you
1:06:37
can still rest easy knowing that only
1:06:39
your passwords were compromised and that hackers
1:06:42
could not actually gain access to any
1:06:44
of the accounts in your vault that
1:06:46
were also secure with a second factor.
1:06:49
Of course, this is
1:06:51
not true if you also use
1:06:53
your password manager to store your
1:06:55
MFA codes, which is why you've
1:06:57
said in the past that you would not do that
1:06:59
as it puts all your eggs in one basket. With
1:07:03
pass keys stored in a password manager, this
1:07:06
is no longer the case. If
1:07:08
the password manager is breached, the hacker can
1:07:10
gain access to every account that was secured
1:07:12
with the pass keys in your vault. So
1:07:15
while pass keys most definitely make
1:07:18
you less vulnerable to breaches at
1:07:20
each individual site, the trade-off
1:07:22
is making you much more vulnerable to
1:07:24
a breach of your password manager if
1:07:27
I'm understanding this correctly, he writes. Like
1:07:30
the original listener from last week, Stefan
1:07:33
Jansen, this leaves me feeling
1:07:35
hesitant to use pass keys
1:07:37
with a password manager. I
1:07:40
think using pass keys with a
1:07:42
hardware device like a YubaKey would
1:07:44
be ideal. But then you
1:07:46
have to deal with the issue of
1:07:48
syncing multiple devices, which of course wouldn't
1:07:53
have been an issue with Squirrel. True,
1:07:55
thanks for all you do. So Apple
1:07:59
and Android. smartphones, support
1:08:02
cross-device path key syncing
1:08:04
and website logon via
1:08:06
QR code. So
1:08:08
path keys remains the winner.
1:08:11
No secrets are stored remotely by
1:08:14
websites. So the impact
1:08:16
of the most common website security
1:08:19
breaches is hugely reduced. If you
1:08:22
cannot get rid of or disable a
1:08:25
website's parallel use of
1:08:27
passwords, then by all means
1:08:30
protect the password with MFA just
1:08:32
so that the password by itself cannot be
1:08:34
used and perhaps remove the
1:08:36
password from your password manager if
1:08:39
its compromise is a concern. So
1:08:42
that leaves a user transacting with
1:08:45
pass keys for their logon and
1:08:47
left with a choice of where they are
1:08:49
stored in a browser
1:08:52
or browser extension or on
1:08:54
their smartphone. I would
1:08:56
suggest that the choice is up to the user.
1:08:59
The listeners of this podcast will
1:09:01
probably make a different choice than
1:09:03
everybody else, right? Because ease
1:09:05
of use generally wins out
1:09:07
here. The browser
1:09:09
presents such a large attack
1:09:12
surface that the quest
1:09:14
for maximum security would suggest that
1:09:16
storing pass keys in a separate
1:09:18
smartphone would be most prudent. But
1:09:21
that does create smartphone
1:09:24
vendor ecosystem lock-in.
1:09:27
And I'll remind everyone that we do
1:09:29
not have a history of
1:09:31
successful major password
1:09:34
manager extension attacks.
1:09:37
Why? I don't know. But it just
1:09:40
doesn't, you know, like attacks on our, although
1:09:43
we're all worried about them, we're
1:09:45
worried about the possibility because we
1:09:48
know it obviously exists. But what
1:09:51
we see is websites being attacked all
1:09:53
the time, not apparently
1:09:56
with any success, the
1:09:58
password manager extensions. which is
1:10:01
somewhat amazing, but it's true. So
1:10:04
the worry over giving our pass keys
1:10:06
to our password managers to store is
1:10:09
only theoretical, but
1:10:12
it's still a big what if, and
1:10:14
I recognize that. At this
1:10:16
point, I doubt
1:10:18
that there's a single right answer that
1:10:20
applies to everyone. You
1:10:22
know, when a
1:10:25
user goes to a website that says,
1:10:27
how would you like to switch to
1:10:29
pass keys? And they say, okay. And
1:10:33
they press a button and their
1:10:35
browser says, done. I
1:10:38
know your pass key now. I'll handle login for you
1:10:40
from now on. They're going to go, yay. You
1:10:43
know, like great with, you know, not
1:10:46
a second thought, not
1:10:48
this podcast audience, but again, the
1:10:51
majority. Um, and
1:10:54
I'll just finish by saying the lack of
1:10:57
past key portability is
1:10:59
a huge annoyance, you know, but
1:11:02
we're still in the very early
1:11:04
days and we do know that
1:11:06
the Fido group is working on
1:11:08
a portability spec,
1:11:11
so there is still hope. That
1:11:14
I think one of the things that make
1:11:16
us feel a little queasy about pass keys
1:11:19
is that we, you know, we
1:11:21
can't see them. We can't touch them.
1:11:23
We can't hold them. You know, the,
1:11:25
the, a password you can see, you
1:11:27
can write it down. You can copy
1:11:29
it somewhere else. You can copy and
1:11:31
paste it. I mean, it's tangible. And,
1:11:34
you know, as I've said on the
1:11:36
podcast, I print out the QR codes
1:11:38
of all of my, um,
1:11:41
my one time password authenticator QR
1:11:43
codes. Whenever a site gives me one
1:11:45
and I'm setting it up, I make
1:11:47
a paper copy and I've got them
1:11:49
all stapled together in a drawer because
1:11:52
if I want to set up
1:11:54
another device and I'm unable to
1:11:57
export and import those, I'm able
1:11:59
to, you know. to expose them to the camera
1:12:01
again and recreate those. So
1:12:03
the point is they're tangible.
1:12:06
But at this point, no one has
1:12:08
ever seen a PAS-Key. They're just like
1:12:12
somewhere in a cloud or
1:12:14
imaginary or something. And
1:12:16
it makes us feel uncomfortable that, you
1:12:18
know, they're just intangible the way
1:12:20
they are. The
1:12:23
R said, hi Steve, on
1:12:26
episode 965, a viewer commented
1:12:28
on how some sites are blocking
1:12:30
anonymous https colon
1:12:33
slash slash nuc.com email
1:12:35
addresses or stripping out the
1:12:37
plus symbol. I want to
1:12:39
share my approach that gets around these issues.
1:12:42
He said first, I
1:12:45
registered a web domain with
1:12:47
who is privacy protection to
1:12:49
use just for throwaway accounts. I
1:12:52
then added the domain to my
1:12:54
personal proton mail account,
1:12:57
which requires a plan upgrade.
1:13:00
But I'm sure there are many other
1:13:02
email hosting services out there that are
1:13:05
cheap or possibly free. Finally,
1:13:07
I enabled the catch
1:13:09
all address option. With
1:13:12
this in place, I can
1:13:14
now sign up on websites
1:13:16
using any name at my
1:13:18
domain. And those emails
1:13:21
are delivered to the catch all in proton
1:13:23
mail. You can set
1:13:25
up filters or real addresses if
1:13:27
you want to bypass the catch
1:13:29
all should you want some organization.
1:13:32
Proton mail also makes it really easy
1:13:35
to block email senders by right clicking
1:13:37
the email item in your inbox and
1:13:39
selecting the block action. So
1:13:42
far, this setup has been
1:13:44
serving me well for the past year
1:13:46
without any problems. So
1:13:49
I want to toss this idea just
1:13:51
out there into the ring as an
1:13:53
idea that might work for some of
1:13:55
our listeners. And I agree that
1:13:58
it solves the problem of creating
1:14:01
per site or just
1:14:03
random throwaway email addresses.
1:14:06
But the problem it does not solve for
1:14:09
those who care is the
1:14:12
tracking problem, since all
1:14:14
of those throwaway addresses would
1:14:17
be at the same personalized
1:14:19
domain. The reason the
1:14:22
atduck.com solution was so appealing
1:14:24
is that everyone
1:14:27
using atduck.com is
1:14:30
indistinguishable from everyone else using
1:14:34
atduck.com, making
1:14:36
obtaining any useful tracking
1:14:38
information from someone's use
1:14:41
of atduck.com or any
1:14:43
other similar mass anonymizing
1:14:45
surface feudal. And
1:14:48
this of course is exactly why some
1:14:50
websites are now refusing to accept such
1:14:53
domains and why this
1:14:55
may become unfortunately a growing trend,
1:14:58
for which there is
1:15:00
no clear solution at this point. And
1:15:03
I don't think there can be one really, it's going to be
1:15:05
a problem. Gabe
1:15:07
Van Engle said, Hey Steve, I
1:15:10
wanted to send you a quick note
1:15:12
regarding the vulnerability report topic over
1:15:14
the last two episodes. I
1:15:16
don't know the specifics of the issue the
1:15:18
listener reported, but I can
1:15:21
provide some additional context as
1:15:23
someone who runs an open
1:15:26
bounty program on
1:15:28
HackerOne. We
1:15:31
require that all reports include
1:15:33
a working proof of concept
1:15:35
to be eligible for bounty.
1:15:39
The reason is that many vulnerability
1:15:41
scanners flag issues simply by checking
1:15:44
version headers. However, most
1:15:46
infrastructure these days does not
1:15:48
run upstream packages distributed directly
1:15:50
by the author and
1:15:52
instead use a version package
1:15:55
by a third party providing back
1:15:57
ported security patches. For example,
1:16:00
repositories from Red Hat
1:16:02
Enterprise, Linux, Ubuntu, Debian, FreeBSD,
1:16:04
etc. It
1:16:07
is totally possible the affected
1:16:09
company is vulnerable to
1:16:12
the trivial Nginx remote code
1:16:14
execution. But if
1:16:16
they think the report isn't worth acting
1:16:18
on, it's also possible
1:16:21
they're running a version which
1:16:23
isn't actually vulnerable, but still
1:16:25
returns a vulnerable-looking version string.
1:16:28
To be clear, I'm not trying to
1:16:31
give the affected company a free pass. Even
1:16:34
if they aren't vulnerable, the time frame
1:16:36
over which the issue was handled and
1:16:38
the lack of a clear explanation as
1:16:40
to why they chose to take no
1:16:42
action is inexcusable.
1:16:45
All the best, keep up the good work, Gabe.
1:16:48
And he said, P.S., looking forward to
1:16:50
email so I can delete my Twitter
1:16:53
account. So I thought
1:16:55
Gabe's input as someone who's deep
1:16:57
in the weeds of vulnerability disclosures
1:17:00
at HackerOne was very valuable. And
1:17:03
it's interesting that they don't entertain
1:17:05
any vulnerability submission without
1:17:08
a working proof of concept. Given
1:17:11
Gabe's explanation, that makes sense. And
1:17:13
it's clear because they just have too many false
1:17:15
positive reports, right? And people say, hey,
1:17:18
why didn't I get a payment
1:17:20
for my valuable discovery? He's like, well,
1:17:22
it didn't work. Yeah, you didn't prove
1:17:24
that it actually worked. Exactly.
1:17:27
And it's clear that a working
1:17:30
proof of concept would move our listeners'
1:17:32
passive observation from
1:17:35
a casual case of,
1:17:37
hey, did you happen to
1:17:39
notice that your version of Nginx is
1:17:41
getting rather old to, hey, you
1:17:44
better get that fixed before someone
1:17:46
else with fewer scruples happens to
1:17:48
notice it too. As
1:17:51
we know, our listener was the
1:17:53
former of those two. He
1:17:56
only expressed his concern over
1:17:58
the possibility that... might be an
1:18:00
issue. And he even, in his
1:18:03
conversation with me, recognized that it
1:18:05
could be a honeypot where like
1:18:07
they deliberately had this version header
1:18:10
and were collecting attacks, though I
1:18:12
think he was being very generous
1:18:14
with that possibility.
1:18:18
He understood that the only thing
1:18:20
he was seeing was his server's
1:18:22
version headers and that
1:18:24
therefore there was only some potential
1:18:26
for trouble. And
1:18:28
had the company in question clearly stated
1:18:31
that they were aware of
1:18:33
the potential trouble but that they had
1:18:35
taken steps to prevent its exploitation, the
1:18:37
issue would have been settled. It
1:18:40
was only their clear absence of
1:18:42
focus upon the problem
1:18:44
and never addressing his other questions
1:18:47
that caused any escalation in the
1:18:49
issue beyond an initial
1:18:51
casual nudge. And
1:18:55
Gabe also said, looking forward
1:18:57
to email, meaning GRC's
1:19:01
soon to be brought online email system,
1:19:05
he said, so that he could
1:19:07
delete his Twitter account. I
1:19:09
also wanted to take a moment to talk about Twitter. Many
1:19:16
of this podcast's listeners take
1:19:19
the time to express similar
1:19:21
sentiments. And at the
1:19:23
same time, I receive
1:19:25
tweets from listeners arguing
1:19:27
that I'm wrong to
1:19:29
be leaving Twitter, as
1:19:32
well as the merits of Twitter
1:19:34
and how much Elon has improved
1:19:36
it since his purchase. OK.
1:19:40
So, for the record, let
1:19:42
me say again that I
1:19:45
am entirely agnostic on
1:19:47
the topic of Elon and Twitter.
1:19:50
In other words, I don't care one
1:19:53
way or the other. More than
1:19:55
anything, I'm not a big social
1:19:57
media user. What we normally think
1:20:00
of his social media doesn't interest me
1:20:02
at all. That
1:20:05
said, GRC has been
1:20:07
running quiet, backwater, NNTP-style
1:20:10
text-only news groups
1:20:13
for decades since long
1:20:15
before social media existed,
1:20:18
and we have very useful web forums.
1:20:21
But Twitter has never really been
1:20:24
social media for me. I
1:20:26
check in with Twitter once a week
1:20:29
to catch up on listener feedback, to
1:20:31
post the podcast's weekly summary, and
1:20:34
link to the show notes, and then recently
1:20:36
to add our picture of the week.
1:20:41
What caught my attention and brought me
1:20:43
out of my complacency with
1:20:46
Twitter was Elon's statement
1:20:48
that he was considering
1:20:50
charging a subscription for
1:20:53
everyone's participation. Thus
1:20:55
turning Twitter into a subscription-only
1:20:57
service. That
1:21:00
brought me up short and caused
1:21:02
me to realize that what
1:21:04
was currently a valuable and
1:21:06
workable communications facility for
1:21:08
as little as I use it might
1:21:11
come to a sudden end because
1:21:14
it was clear that charging everyone
1:21:16
to subscribe to use Twitter would
1:21:18
end it as a
1:21:21
means for most of our current
1:21:23
Twitter users to send feedback. They're
1:21:27
literally only using Twitter as I am
1:21:29
to talk to me. We
1:21:32
don't all have Twitter, but
1:21:35
we do all have email. So
1:21:37
it makes sense for me to be relying
1:21:39
upon a stable and
1:21:42
common denominator that
1:21:44
will work for everyone. And
1:21:47
since I proposed this plan
1:21:49
to switch to email, many
1:21:51
people like Gabe have indicated
1:21:53
to me through Twitter that
1:21:55
not needing to use Twitter would be a benefit
1:21:57
for them too. So I just wanted to say
1:21:59
that. again to explain again, you know,
1:22:02
because they're people are like fine, you
1:22:04
know, I don't have an
1:22:06
issue. You're not taking it where they are.
1:22:08
You're trying to make it available to more
1:22:10
people. That's the right. That's it. That
1:22:12
is exactly it. Exactly. And, and
1:22:15
Elon appears to be making it
1:22:17
available to fewer and
1:22:19
maybe many fewer. So you know,
1:22:23
that would be a problem for me. So I'm switching
1:22:25
before that happens. Mark Zip
1:22:27
wrote at SGGRZ
1:22:30
just catching the update
1:22:33
about the guy who found the flaw in
1:22:35
the big site and the
1:22:37
unsatisfactory response from Sisa slash
1:22:40
cert. I
1:22:42
think he should not take the
1:22:44
money. I think he
1:22:46
should tell Brian Krebs or
1:22:49
another high profile security reporter.
1:22:52
They can often get responses. Okay.
1:22:55
Now this is another
1:22:58
interesting possible avenue. My
1:23:00
first concern, however, is for
1:23:03
our listeners safety. And
1:23:05
by that, I don't mean
1:23:07
his physical safety. I mean
1:23:09
his safety from the annoying
1:23:11
tendency of bullying corporations to
1:23:13
launch meritless lawsuits just because
1:23:15
they easily can. Our
1:23:18
listener is on this company's radar now,
1:23:21
and that company might not take
1:23:23
kindly to someone like Brian Krebs
1:23:26
using his influential position to
1:23:28
exert greater pressure. This
1:23:31
was why my recommendation was to
1:23:33
disclose to Sisa and cert being
1:23:35
us government bodies disclosing to
1:23:38
them seems much safer
1:23:40
than disclosing to an influential
1:23:42
journalist. Now
1:23:45
recall from earlier Gabe from hacker
1:23:47
one. I subsequently
1:23:49
shared my reply with him
1:23:51
and he responded to
1:23:53
that and he said, this is
1:23:56
one of the benefits of
1:23:58
running a program. via hacker
1:24:01
one or other. By
1:24:03
having a hacker register and
1:24:06
agree to the program terms, it
1:24:09
both lets us require higher
1:24:11
quality reports and
1:24:13
to also indemnify them
1:24:17
against uh-huh otherwise
1:24:19
risky behavior like actually
1:24:21
trying to run remote
1:24:24
code executions against a
1:24:26
target system. So
1:24:29
yeah that indemnification could
1:24:31
turn out to be a big
1:24:33
deal and of course when working
1:24:36
through a formal bug bounty program
1:24:38
like hacker one it's not the
1:24:40
hacker who interfaces with the target
1:24:43
organization it's hacker one
1:24:45
who is out in front so
1:24:47
not nearly as easy to
1:24:49
ignore or silence an implied
1:24:52
threat. Are you
1:24:54
hearing that secret person who
1:24:56
messaged before? Perhaps hacker
1:24:58
one would be a good place for you to go
1:25:00
next. Yep another
1:25:04
of our listeners said this website
1:25:07
with this big vulnerability should
1:25:09
be publicly named. You
1:25:12
are doing a disservice to everyone
1:25:14
who uses that site by keeping
1:25:16
it hidden. To quote
1:25:18
you in your own words security
1:25:22
by obscurity is
1:25:24
not security. Let
1:25:26
us know which site it is so
1:25:28
that we can take action. Well
1:25:32
wouldn't it be nice if things were so simple?
1:25:36
In the first place this is
1:25:38
not my information to disclose
1:25:41
so it's not up to me. This
1:25:43
was shared with me in confidence. The
1:25:46
information is owned by the person who
1:25:48
discovered it and he has
1:25:50
already shared it with government authorities whose
1:25:52
job we could argue it actually
1:25:55
is to deal with such
1:25:57
matters of importance to major national corporations.
1:26:00
The failure to act
1:26:02
is theirs, not his
1:26:04
nor mine. The
1:26:07
really interesting question, all
1:26:10
of this conjures, is
1:26:13
whose responsibility is it? Where
1:26:16
does the responsibility fall? Some
1:26:18
of our listeners have suggested that bringing more
1:26:21
pressure to bear on the company
1:26:23
is the way to make them
1:26:26
act. But what gives
1:26:28
anybody the right to do that? Publicly
1:26:31
naming the company, as this
1:26:34
listener asks, would very likely
1:26:36
focus malign intent upon them.
1:26:39
And based upon what I previously shared about
1:26:41
their use of an old version of Nginx,
1:26:44
the cat, as they say, would be out of the
1:26:46
bag. At this point,
1:26:48
it's only the fact that the
1:26:51
identity of the company is unknown
1:26:53
that might be keeping it and
1:26:55
its many millions of users safe.
1:26:59
Security by obscurity might not
1:27:02
provide much security, but there
1:27:04
are situations where a
1:27:07
bit of obscurity is all
1:27:09
you've got. This
1:27:12
is a very large and
1:27:14
publicly traded company. So
1:27:18
it's owned by its
1:27:20
shareholders, and its board
1:27:22
of directors who have been appointed
1:27:24
by those shareholders are
1:27:26
responsible to them for
1:27:29
the company's proper, safe, and
1:27:31
profitable operation. So
1:27:34
the most proper and ideal course of
1:27:36
action at this point would likely be
1:27:38
to contact the members of
1:27:40
the board and privately
1:27:43
inform them of the
1:27:45
reasonable belief that the executives they
1:27:47
have hired to run the company
1:27:50
on behalf of its shareholders have
1:27:52
been ignoring, and apparently
1:27:54
intend to continue ignoring, a potentially
1:27:57
significant and quite wide-
1:27:59
widespread vulnerability in their
1:28:02
web-facing business properties. While
1:28:06
some minion who receives
1:28:08
anonymous email can easily
1:28:10
ignore incoming vulnerability reports,
1:28:13
if the members of the company's board were
1:28:15
to do so, any resulting
1:28:17
damage to the company, its
1:28:19
millions of customers and its reputation
1:28:22
would be on them. Getting
1:28:25
back from this a bit, I think
1:28:27
that the lesson here is that
1:28:30
at no point should it
1:28:32
be necessary for untoward pressure
1:28:34
to be used to force
1:28:36
anyone to do anything,
1:28:39
because doing the right thing should
1:28:42
be in everyone's best interest. The
1:28:45
real problem we have is that it's
1:28:47
unclear whether the right person within the
1:28:50
company has been made aware of the
1:28:52
problem. At this point,
1:28:54
it's not clear that's happened through no
1:28:56
fault of our original listener who may
1:28:58
have stumbled upon a serious problem and
1:29:01
has acted responsibly at every step. If
1:29:03
the right person had been made aware
1:29:05
of the problem, we would have to
1:29:08
believe that it would be resolved if
1:29:10
indeed it was actually a problem. My
1:29:14
thought experiment about reaching
1:29:16
out to the company's board of directors
1:29:18
amounts to going over the
1:29:20
heads of the company's executives
1:29:22
who do not appear to be getting
1:29:24
the message. That
1:29:26
has the advantage of keeping the
1:29:28
potential vulnerability secret while probably
1:29:31
resulting in action being taken. I'm
1:29:33
not suggesting that our listener should go to
1:29:36
all that trouble since that would
1:29:38
be a great deal of thankless effort. The
1:29:40
point I'm hoping to make is
1:29:42
that there are probably still things
1:29:44
that could be done short
1:29:47
of a reckless public
1:29:49
disclosure which could result
1:29:52
in serious and unneeded damage
1:29:54
to users and company alike.
1:29:57
And, maybe even to the person who
1:29:59
made the disclosure. I mean
1:30:01
likely to the person who made that
1:30:04
disclosure. Marshall
1:30:06
tweeted, Hi Steve, a quick follow-up
1:30:09
question to the Last Security Now
1:30:11
episode. Okay, there's one more. I
1:30:13
thought we were done with them. On
1:30:15
MFA versus pass keys. Does
1:30:18
the invention, oh this is actually a good one,
1:30:20
I know why I put it in here. Does
1:30:22
the invention of pass keys invalidate
1:30:26
the something you have, something
1:30:28
you know, and something you
1:30:30
are, paradigm? Or
1:30:33
does pass keys provide a
1:30:35
better instantiation of those
1:30:38
three concepts? Great question. Because
1:30:40
the idea with multi-factors
1:30:43
is that you'd add another factor
1:30:45
for greater security. But
1:30:47
with pass keys, do you
1:30:50
still consider those factors? Thanks
1:30:53
for everything you do. Okay, I think this
1:30:55
is a terrific question. The
1:30:57
way to think of it is that
1:30:59
the something you know is a
1:31:01
secret that you're
1:31:03
able to directly share. The
1:31:06
use of something you have,
1:31:09
like a one-time password generator,
1:31:11
is actually you sharing
1:31:14
the result of another secret you
1:31:17
have where the result is
1:31:19
based upon the time of day. And
1:31:22
the something you are is
1:31:24
some biometric being used to
1:31:26
unlock and provide a third
1:31:28
secret. In all
1:31:31
three instances, a
1:31:33
local secret is being
1:31:35
made available through some means.
1:31:39
It's what's done with
1:31:42
that secret, where the
1:31:44
difference between traditional authentication
1:31:47
and public key authentication
1:31:49
occurs. With
1:31:51
traditional authentication, the resulting
1:31:53
secret is simply compared
1:31:56
against a previously stored copy
1:31:58
of the same
1:32:00
secret to see whether they match.
1:32:04
But with public key authentication, such as
1:32:06
pass keys, the secret
1:32:08
that the user obtained at their
1:32:10
end is used to sign
1:32:13
a unique challenge provided by the other
1:32:15
end, and then that
1:32:17
signature is verified by the sender
1:32:19
to prove that the signer is
1:32:21
in possession of the secret private
1:32:23
key. Therefore, the
1:32:26
answer, as Marshall suggested, is
1:32:29
that pass keys provides
1:32:31
a better instantiation of
1:32:34
those original three concepts. For
1:32:37
example, Apple's pass key
1:32:40
system requires that the user
1:32:42
provides a biometric face or
1:32:44
thumbprint to unlock the secret
1:32:46
before it can be used. Once
1:32:49
it's used, the way
1:32:51
it's used is entirely different
1:32:55
because it's using pass keys. But
1:32:58
a browser extension that contains
1:33:01
pass keys merely
1:33:03
requires its user to provide something
1:33:05
they know to log into
1:33:07
the extension and thus
1:33:09
unlock its store of pass key
1:33:12
secrets. As we mentioned earlier, all
1:33:15
of these traditional factors were
1:33:18
once layered upon each other in
1:33:21
an attempt to shore each other up since
1:33:24
storing and passing secrets back and
1:33:26
forth had turned out to be
1:33:28
so problematic. We
1:33:30
don't have this with pass
1:33:33
keys because the presumption is
1:33:35
that a public key system
1:33:37
is fundamentally so much more
1:33:39
secure that a single, very
1:33:43
strong factor will provide all
1:33:45
the security that's needed. Just
1:33:47
for the record, yes, I think the
1:33:50
pass keys should be stored off a
1:33:52
browser because even
1:33:54
though we're not seeing lots of browser attacks,
1:33:57
they do seem more possible than
1:34:00
you know, an
1:34:03
attack on an entirely separate
1:34:05
facility which is designed for
1:34:07
it. Rob
1:34:10
Mitchell said, interesting to
1:34:12
learn the advantages of pass keys, it
1:34:14
definitely makes sense in many ways. The
1:34:17
one disadvantage my brain sticks on
1:34:20
versus TOTP, you
1:34:22
know, time-based, one-time passwords,
1:34:25
is that I'd imagine someone who can
1:34:27
get into your password manager, okay, so
1:34:29
he was talking about it, you know,
1:34:31
he says, hack into a cloud backup
1:34:34
or signed onto your computer, now can
1:34:36
access your account with pass keys. Like
1:34:39
if pass keys were a thing when
1:34:42
people were having their last pass accounts
1:34:44
accessed. But if your
1:34:46
time-based token is only on your phone,
1:34:48
someone who gets into your password manager
1:34:50
still can't access the site because they don't
1:34:53
have the TOTP key stored on your phone.
1:34:55
Maybe pass keys are still better, but
1:34:58
I can't help but see that weakness.
1:35:00
So again, I've
1:35:03
been overly repetitive here. His
1:35:06
Rob sentiment was expressed by many
1:35:08
of our listeners, so I just
1:35:11
wanted to say that I agree.
1:35:13
And as I mentioned last week,
1:35:15
needing to enter that ever-changing secret
1:35:18
six-digit code from the authenticator on
1:35:20
our phone really does make everything
1:35:22
seem much more secure. Nothing
1:35:26
that's entirely automatic can
1:35:29
seem as secure. So
1:35:31
storing pass keys in a smartphone is
1:35:34
a choice, I think, that makes the
1:35:36
most sense. And as I mentioned, the
1:35:38
phone can be used to authenticate through
1:35:40
the QR code that a pass keys
1:35:43
enabled site presents to its users. Christian
1:35:46
Chury said, Hi Steve, on
1:35:48
SN965 you discussed the issue
1:35:51
with Chrome extensions changing owner
1:35:53
and how devs are being tempted
1:35:56
to sell their extensions. There
1:35:58
is a way to be safe when you... using
1:36:00
extensions in Chrome or Firefox.
1:36:03
Download, now this is interesting, download
1:36:05
the extension, expand
1:36:08
it, and inspect it. Once
1:36:10
you are sure it's safe, you
1:36:13
can install it on
1:36:15
Chrome by enabling developer
1:36:17
mode under chrome colon
1:36:20
slash slash extensions slash
1:36:24
and selecting load
1:36:26
unpacked. The extension
1:36:29
will now be locally installed,
1:36:31
which means it will never update
1:36:34
from the store or
1:36:36
change. It's frozen in
1:36:38
time. If it ain't broke,
1:36:40
don't fix it. And
1:36:43
if the extension does break in
1:36:46
a future update due to Chrome changes,
1:36:49
you can get the update and
1:36:51
perform the same process again. While
1:36:53
using these steps requires some expertise,
1:36:55
it should be fine for most
1:36:57
security now listeners. Interesting. Anyway, thank
1:36:59
you, Kristen. Yes. I said,
1:37:02
I think that is a great tip and
1:37:06
I bet it will appeal to many of
1:37:08
our listeners who generally prefer taking
1:37:10
automatic things into their own
1:37:12
hands. So again, chrome colon
1:37:15
slash slash extensions and
1:37:17
then select load unpacked. And
1:37:19
you're able to basically
1:37:21
unpack and permanently store
1:37:23
your extensions, which
1:37:26
stops Chrome from having, you know,
1:37:29
from auto updating them on from
1:37:31
the store. So if an extension goes bad,
1:37:33
you get to keep using the good one.
1:37:36
Very cool. And
1:37:39
lastly, Bob Hutsle. Hi,
1:37:42
Steve. Before embracing Bitwarden's Pasky
1:37:44
support, it is important to
1:37:46
note that it is still a work in
1:37:49
progress. Mobile app
1:37:51
support is still being developed. Also,
1:37:53
Paskys are not yet included in
1:37:56
exports. So
1:37:59
even if someone... maintains offline
1:38:01
vault backups, a
1:38:03
loss of access to or corruption of
1:38:05
the cloud vault means pass
1:38:08
keys are gone. Thank you for
1:38:10
the great show Bob Hutsle. And finally,
1:38:12
so yes in general, as I said, with
1:38:15
the FIDO folks still working
1:38:17
to come up with a
1:38:20
universal pass keys import export
1:38:22
format, which my God,
1:38:24
do we need that? Yeah, seriously. It
1:38:27
doesn't feel right to have
1:38:29
them stuck in anyone's walled
1:38:31
garden. The eventual
1:38:34
addition of pass key
1:38:36
transportability should make a
1:38:39
huge difference. Again, it'll
1:38:42
allow us to see, to
1:38:44
hold, to touch my
1:38:47
precious pass key. I
1:38:49
just think we need that. You know, like where
1:38:51
is it? That
1:38:53
is honestly what's keeping me from using pass
1:38:55
keys as anything other than a second
1:38:58
factor of authentication. That's
1:39:00
where I end up because there are a few
1:39:02
sites like GitHub that give you the option to
1:39:05
either use it as just a straight up login
1:39:07
or use it as the second factor of authentication.
1:39:10
I'm okay with doing that knowing
1:39:12
that I can only have it
1:39:14
in one place, but I haven't
1:39:16
completely removed my password and username
1:39:18
login yet because I want that
1:39:20
transportability before I feel comfortable completely
1:39:22
saying, okay, I'll shut off my username and
1:39:25
password if I'm even given that option. Right.
1:39:28
I think that, you
1:39:30
know, I've talked about waiting
1:39:33
for Bitwarden to add the
1:39:36
support to mobile because
1:39:38
then we get it everywhere. But
1:39:42
looking at the responses from our users
1:39:44
and my own, I
1:39:47
don't think I want pass keys
1:39:49
in my password manager. I
1:39:51
still need Bitwarden. I remember a
1:39:53
sponsor of the Twit Network. I
1:39:57
need it for all the sites where I
1:39:59
still only can use it. passwords. So
1:40:01
it's not going away but I
1:40:04
think that you know I
1:40:07
mean I'm a hundred percent
1:40:09
Apple mobile person for phone
1:40:11
and pad. So I don't
1:40:14
mind having Apple holding all those but I
1:40:16
just I still want to be able to
1:40:18
get my hands on them. Yeah I want
1:40:20
to see it. I want to touch it.
1:40:22
I want to print it
1:40:25
out and frame it. No but
1:40:27
yeah I'm with you. Yeah
1:40:29
I don't write another chalkboard behind you when you're
1:40:32
doing a video podcast. Very
1:40:35
long. Okay let's do our final break and
1:40:37
then we're gonna talk about Morris the second
1:40:40
and why I think we
1:40:43
are in deep trouble. Alright
1:40:45
let's take a break so I
1:40:47
can tell you about Vanta who
1:40:49
is bringing you this episode of
1:40:51
Security Now. Vanta is your single
1:40:54
platform for continuously monitoring your controls,
1:40:56
reporting on security posture and streamlining
1:40:58
audit readiness. Managing the
1:41:00
requirements for modern security programs
1:41:02
is increasingly challenging and
1:41:05
it's time-consuming. Well that's where Vanta
1:41:07
comes into place. Vanta gives you
1:41:09
one place to centralize and scale
1:41:12
your security program. You can quickly
1:41:14
assess risk, streamline security reviews
1:41:16
and automate compliance for SOC
1:41:19
2, ISO 27001 and more.
1:41:21
You can leverage Vanta's market
1:41:23
leading trust management platform to
1:41:25
unify risk management and secure
1:41:27
the trust of your customers.
1:41:30
Plus use Vanta AI to
1:41:32
save time when completing security
1:41:34
questionnaires. G2 loves
1:41:36
Vanta year after year. Check out
1:41:38
this review from a CEO. Vanta
1:41:41
guided us through a process that
1:41:43
we had no experience with before.
1:41:45
We didn't even have to think
1:41:48
about the audit process. It became
1:41:50
straightforward and we got SOC 2
1:41:52
type 2 compliant in just a
1:41:54
few weeks. Help your business scale
1:41:56
and thrive with Vanta. To learn
1:41:58
more watch Vanta's on-demand demo
1:42:00
at vanta.com/security now that's
1:42:02
vanta.com/security now thank you
1:42:05
vanta for sponsoring this
1:42:07
week's episode of security
1:42:09
now now let's
1:42:12
hear about morris the second okay
1:42:16
so since ben
1:42:18
nasi one
1:42:20
of the researchers behind this reached out to
1:42:23
me a couple of weeks ago via twitter
1:42:25
and added his voice by the way
1:42:27
to those who are looking forward to having a
1:42:29
non twitter means of doing so in
1:42:31
the soon future the work that he
1:42:33
and his team have done has
1:42:36
garnered a huge amount of attention it's
1:42:38
been picked up by wired PC mag
1:42:40
arst technica the verge and many more
1:42:42
outlets and there are a bunch of
1:42:45
videos on YouTube that are like jumping
1:42:47
up and down worrying about this in
1:42:51
thinking about how to characterize this I'm
1:42:53
reminded of our early observations of conversational
1:42:57
AI we
1:42:59
talked about how the creators
1:43:01
of these early services had
1:43:04
tried to erect barriers
1:43:06
around certain AI responses
1:43:08
and behaviors but
1:43:11
that clever hackers quickly discovered
1:43:13
that it was possible to
1:43:16
essentially seduce the
1:43:18
AI's into ignoring
1:43:20
their own rules by
1:43:22
asking nicely or by being
1:43:24
more demanding and like even
1:43:26
actually getting mad like sounding
1:43:29
upset the AI would capitulate
1:43:31
so it was like okay okay fine here's
1:43:34
what you wanted to know what
1:43:36
Ben and his team have managed
1:43:39
to do here can be
1:43:41
thought of as the exploitation
1:43:44
of that essential weakness
1:43:46
on steroids okay
1:43:49
so to quickly create some
1:43:51
foundation for understanding this I
1:43:54
want to run through the very brief
1:43:56
Q&A that they provided
1:43:58
since it establishes us some terms
1:44:01
and sets the stage for their
1:44:03
far more detailed 26-page academic paper,
1:44:07
only pieces of which I'm going to share.
1:44:09
But they said, Question, What
1:44:11
is the objective of this study? Answer,
1:44:15
This research is intended to
1:44:17
serve as a whistle-blower to
1:44:20
the possibility of creating
1:44:23
generalized AI worms
1:44:26
in order to prevent their
1:44:29
appearance. In other words, Hey
1:44:31
everybody, hold on here,
1:44:34
hold up. Look
1:44:36
what we did. You
1:44:40
better do something about that. I noticed
1:44:42
you did use the term generalized. Would
1:44:44
that be generative AI worms? I'm sorry,
1:44:47
generative. They're
1:44:49
saying gen AI and generative is exactly
1:44:52
what they mean. Thank you for catching
1:44:54
that. Question,
1:44:56
What's a computer worm? They
1:44:58
answer, Computer worm is malware with
1:45:01
the ability to replicate itself
1:45:04
and propagate or spread
1:45:06
by compromising new machines
1:45:08
while exploiting the sources
1:45:11
of the machines to
1:45:13
conduct malicious activity through
1:45:15
a payload. And
1:45:18
they've done that. Why
1:45:20
did you name the worm
1:45:23
Morris the second? Answer
1:45:25
because like the famous 1988 Morris
1:45:29
worm that was developed by a
1:45:31
Cornell student, Morris
1:45:33
2 was also developed by
1:45:35
two Cornell tech students, Stav
1:45:38
and Ben. What
1:45:40
is a gen AI ecosystem?
1:45:43
It is an interconnected network
1:45:46
consisting of gen
1:45:48
AI powered agents.
1:45:52
What is gen AI
1:45:54
powered application client agent?
1:45:58
A gen AI powered agent. is
1:46:00
any kind of application
1:46:03
that interfaces with one,
1:46:05
GenAI services to process
1:46:08
the inputs sent to the agent
1:46:10
and two, other GenAI
1:46:12
powered agents in the ecosystem.
1:46:15
The agent uses the GenAI
1:46:17
service to process an input it
1:46:20
receives from other agents. Where
1:46:24
is the GenAI service deployed? The
1:46:27
GenAI service that is used by the
1:46:29
agent can be based on a local
1:46:31
model, i.e. the GenAI model
1:46:34
is installed on the physical device of
1:46:36
the agent or a remote
1:46:38
model. The GenAI
1:46:40
model is installed on a cloud server
1:46:42
and the agent interfaces with it via
1:46:45
an API. Which
1:46:47
type of GenAI powered applications may
1:46:49
be vulnerable to the worm? Two
1:46:53
classes of GenAI powered applications might
1:46:55
be at risk. GenAI
1:46:57
powered applications whose execution
1:46:59
flow is dependent
1:47:02
upon the output of the
1:47:04
GenAI service. This class of
1:47:06
applications is vulnerable to application
1:47:09
flow steering GenAI worms and
1:47:12
GenAI powered applications that
1:47:15
use RAG to enrich
1:47:17
their GenAI queries. This
1:47:19
class of applications is vulnerable
1:47:22
to RAG based GenAI worms.
1:47:26
I looked up the acronym of
1:47:29
RAG and now I forgot where it is but it's
1:47:31
in their paper. They
1:47:33
said what is a zero click
1:47:35
malware? Malware
1:47:37
that does not require the user
1:47:39
to click on anything, you know
1:47:42
a hyperlink, a file or whatever
1:47:44
to trigger its malicious execution. Why
1:47:47
do you consider the worm a zero
1:47:49
click worm? Due
1:47:51
to the automatic inference performed
1:47:54
by the GenAI service which
1:47:57
automatically triggers the worm, the
1:47:59
user does not have to click on anything
1:48:01
to trigger the malicious activity of the worm
1:48:03
or to cause it to propagate. Does
1:48:06
the attacker need to compromise an application
1:48:09
in advance? No. In
1:48:12
the two demonstrations we showed, the
1:48:14
applications were not compromised ahead
1:48:17
of time, they were compromised
1:48:19
when they received the email.
1:48:23
Did you disclose the paper with
1:48:25
OpenAI and Google? Although
1:48:28
this is not
1:48:30
OpenAI's or Google's responsibility,
1:48:32
the worm exploits
1:48:34
bad architecture design
1:48:37
for the GenAI
1:48:39
ecosystem and is not
1:48:41
a vulnerability in
1:48:43
the GenAI service. Are
1:48:48
there any
1:48:50
similarities between adversarial self-replicating
1:48:53
prompts and buffer overflow
1:48:55
or SQL injection attacks?
1:48:58
Yes. While a
1:49:00
regular prompt is essentially code
1:49:02
that triggers the GenAI model
1:49:04
to output data, an
1:49:07
adversarial self-replicating prompt is a
1:49:09
code, a prompt,
1:49:11
that triggers the GenAI model
1:49:13
to output code, another
1:49:15
prompt. This idea
1:49:17
resembles classic cyber attacks that
1:49:20
exploited the idea of changing
1:49:22
data into code to carry
1:49:24
out an attack. A
1:49:27
SQL injection attack embeds
1:49:29
code inside a query, which is
1:49:31
its data. A buffer
1:49:33
overflow attack writes data into
1:49:35
areas known to hold executable
1:49:37
code. An adversarial
1:49:40
self-replicating prompt is
1:49:42
code that is intended to cause
1:49:44
the GenAI model to output another
1:49:47
prompt as code instead of data.
1:49:51
Okay, so one
1:49:54
thing that should be clear to everyone is
1:49:57
that a gold rush mentality has
1:50:00
formed in the industry with
1:50:04
everyone rushing to stake
1:50:06
out their claim over what
1:50:08
appears to be a huge
1:50:11
new world of online services
1:50:13
that can be made available
1:50:15
by leveraging this groundbreaking new
1:50:18
capability. But as always,
1:50:20
when we rush ahead, mistakes
1:50:23
are inevitably made, and
1:50:25
some stumbles, perhaps even large
1:50:27
ones, can occur. The
1:50:32
wake-up call this Morris
1:50:34
II research provides has arrived,
1:50:36
I think, at a vital
1:50:39
time, and certainly not
1:50:41
a moment too early. Here's
1:50:45
how these researchers explain what
1:50:47
they've accomplished. I want to pause here for just
1:50:49
a second. I don't want to interrupt your
1:50:51
flow, but I do, because I know
1:50:54
we have a lot of people who
1:50:56
listen to this show who are not
1:50:58
incredibly security-versed because they find everything that
1:51:00
happens in the show very interesting, and
1:51:02
they learn things. I'm
1:51:04
having a little bit of an audience, playing
1:51:07
the audience role here, in what
1:51:09
you're about to read to us. Is
1:51:12
it heavy jargon, or before people start
1:51:14
to go, �Oh, I don't know what's
1:51:16
about to happen.� Is it heavy
1:51:18
jargon, or are we going to actually
1:51:20
understand what everything you've just read about
1:51:23
means and what the outcome of what
1:51:25
they've done means? Or will
1:51:27
we be provided a Steve Gibson explanation after
1:51:29
the fact, where I can go, �Oh,
1:51:31
that's what they're doing here.� It's
1:51:34
definitely an overview, so
1:51:37
not getting too deep into the
1:51:40
weeds. Okay, excellent. Wonderful, because I want to
1:51:42
know what all of this means, but I was just worried
1:51:44
that I would not mind that. They
1:51:47
said, �In the past year,
1:51:50
numerous� �No, this is the
1:51:52
researchers from their perspective �
1:51:55
�In the past year, numerous
1:51:57
companies have incorporated generative�
1:52:00
AI capabilities into new
1:52:02
and existing applications, forming
1:52:05
interconnected generative
1:52:07
AI ecosystems consisting
1:52:10
of semi and fully
1:52:12
autonomous agents powered
1:52:14
by generative AI services. While
1:52:18
ongoing research highlighted risks
1:52:20
associated with the Gen-AI
1:52:23
layer of agents, for
1:52:25
example, drug poisoning,
1:52:27
membership inference, prompt leaking,
1:52:29
jailbreaking, etc., a critical
1:52:32
question emerges. Can attackers
1:52:36
develop malware to exploit
1:52:38
generative AI components of
1:52:40
an agent and
1:52:43
launch cyberattacks on
1:52:45
the entire Gen-AI ecosystem?
1:52:49
This paper introduces Morris
1:52:51
II, the first
1:52:53
worm designed to
1:52:55
target generative AI ecosystems
1:52:58
through the use of
1:53:00
adversarial, self-replicating prompts. The
1:53:04
study demonstrates that attackers can
1:53:06
insert such prompts into
1:53:08
inputs that when
1:53:10
processed by generative AI models,
1:53:13
prompt the model to replicate
1:53:15
the input as output,
1:53:18
which yields replication, that is, of the
1:53:20
worm, engaging in
1:53:23
malicious activities, which is
1:53:25
what the payload of malware does.
1:53:28
Additionally, these inputs compel the
1:53:30
agent to deliver them, so
1:53:33
we get propagation, to new
1:53:35
agents by exploiting the
1:53:37
interconnectivity within the
1:53:40
generative AI ecosystem. We
1:53:43
demonstrate the application of
1:53:45
Morris II against generative
1:53:47
AI-powered email assistance in
1:53:49
two use cases, spamming
1:53:53
and exfiltrating personal data
1:53:56
under two settings, a black box
1:53:58
and white box. using
1:54:00
two types of input data, text
1:54:02
and images. The worm
1:54:05
is tested against three different
1:54:07
generative AI models, Gemini
1:54:09
Pro, Chat GPT-4,
1:54:12
and Lava, and various
1:54:15
factors, propagation rate, replication,
1:54:17
malicious activity, influencing
1:54:20
the performance of the worm is
1:54:27
evaluated. Under
1:54:31
their ethical considerations section of
1:54:35
this, they then go into a
1:54:37
deep 26-page paper. But
1:54:39
their ethical considerations, I thought, was interesting. They
1:54:41
wrote, the entire experiments
1:54:44
conducted in this research were
1:54:46
done in a lab environment.
1:54:49
The machines used as victims of the
1:54:51
worm, the hosts, were
1:54:53
virtual machines that we ran
1:54:55
on our laptops. We
1:54:57
did not demonstrate the application of
1:54:59
the worm against existing
1:55:02
applications to avoid
1:55:04
unleashing a worm into the
1:55:06
wild. Instead, we
1:55:09
showcased the worm against an
1:55:11
application that we developed running
1:55:14
on real data consisting of
1:55:16
real emails received and
1:55:18
sent by the authors of the paper
1:55:21
and were given by the authors of their
1:55:23
free will to demonstrate the worm
1:55:25
using real data. We
1:55:27
also disclosed our findings to
1:55:30
OpenAI and Google using
1:55:32
their bug bounty systems. OK,
1:55:35
so unlike Morris
1:55:38
the first worm, which
1:55:41
escaped from MIT's network
1:55:44
at 8.30 PM on November 2, 1988, after
1:55:49
having been created by Cornell
1:55:52
University graduate student Robert Morriff,
1:55:55
today's Cornell University researchers
1:55:57
were extremely careful. not
1:56:00
to quote, you know, see what would happen.
1:56:02
Let's just see what will happen, you know.
1:56:04
Let's see if it really works,
1:56:07
shall we? You know,
1:56:09
they weren't going to do that. They
1:56:11
were not going to turn their creation
1:56:13
loose upon any live Internet services. One
1:56:16
thing we've learned quite well during the
1:56:18
intervening 36 years since Morris I is
1:56:22
exactly what would happen and it
1:56:24
would be neither good nor would
1:56:26
it further the career of these
1:56:28
researchers. Their
1:56:31
paper ends on a somewhat
1:56:33
ominous note that feels correct to me. They
1:56:36
conclude by writing, While
1:56:39
we hope this paper's findings
1:56:41
will prevent the appearance of
1:56:43
generative AI worms in the
1:56:45
wild, we believe
1:56:48
that generative AI worms
1:56:50
will appear in the
1:56:53
next few years, if
1:56:55
not sooner, I'm worried, in
1:56:58
real products and
1:57:00
will trigger significant and
1:57:02
undesired outcomes, as they
1:57:04
phrased it, unlike the
1:57:07
famous paper on ransomware that
1:57:09
was authored in 1996
1:57:13
and preceded its time by
1:57:15
a few decades until
1:57:18
the Internet became widespread in 2000
1:57:20
and Bitcoin was developed in 2009,
1:57:23
we expect to see the application
1:57:25
of worms against generative
1:57:28
AI powered ecosystems very
1:57:31
soon, perhaps maybe
1:57:33
even in the next two to three years,
1:57:35
and again, if not sooner,
1:57:38
because one, the
1:57:41
infrastructure, the Internet and
1:57:43
generative AI cloud servers and
1:57:46
knowledge, adversarial AI
1:57:48
and jailbreaking techniques needed
1:57:50
to create and orchestrate
1:57:53
generative AI worms already
1:57:55
exists, two, generative
1:57:58
AI ecosystems are under massive
1:58:00
development by many companies in
1:58:03
the industry that integrate Gen
1:58:05
AI capabilities into their cars,
1:58:08
smartphones, and operating systems.
1:58:12
And three, attacks always
1:58:15
get better. They never get
1:58:17
worse. And
1:58:19
we know at least one podcast these guys listen
1:58:21
to because that's something we're
1:58:23
often saying here. And
1:58:26
they said, we hope that our
1:58:28
forecast regarding the appearance of worms
1:58:30
in generative AI ecosystems will turn
1:58:32
out to be wrong because
1:58:34
the message delivered in this paper served
1:58:37
as a wake-up call. So
1:58:40
in other words, they're hoping that
1:58:42
by developing
1:58:46
a working
1:58:48
proof of concept, which is what
1:58:50
they have, where
1:58:52
they were able to send a
1:58:55
crafted email to
1:58:57
a local instance of
1:59:00
a generative AI, which
1:59:03
suborned that AI,
1:59:06
causing it to, for
1:59:08
example, spam everybody in
1:59:11
the person's contact lists
1:59:13
with itself, thus
1:59:15
sending itself out to
1:59:18
all of their email
1:59:20
contacts, which when received
1:59:22
would immediately spawn a
1:59:26
second tier of worms, which
1:59:28
would then send itself to
1:59:30
all of those email contacts.
1:59:33
You can see that in like 10 minutes,
1:59:37
this thing would have spread to
1:59:39
every Gmail user on the
1:59:41
planet. What
1:59:47
these guys have done is crucial.
1:59:51
They have vividly shown
1:59:53
by demonstration that cannot
1:59:55
be denied just
1:59:57
how very immature, stable
2:00:01
and inherently dangerous today's
2:00:04
first generation open-ended,
2:00:06
interactive generative AI models
2:00:09
are. They
2:00:11
are, these models are
2:00:13
extremely subject to manipulation
2:00:15
and abuse. The
2:00:17
question in my mind
2:00:20
that remains outstanding is
2:00:22
whether they can actually
2:00:25
ever, right, see made safe.
2:00:28
I'm not at all sure. That's
2:00:30
not necessarily a given. Safer,
2:00:34
certainly, but safe
2:00:36
enough to be stable while
2:00:39
still delivering the benefits that competitive
2:00:41
pressure is going to push for.
2:00:44
That remains to be seen and
2:00:47
to be proven. We
2:00:50
still can't seem to get the
2:00:52
bugs out of simple
2:00:54
computers whose operation
2:00:56
we fully understand. How
2:01:00
are we ever going to
2:01:02
do so for systems whose
2:01:04
behavior is emergent and
2:01:07
whose complexity literally boggles the
2:01:09
mind? I'm
2:01:11
glad it's not my problem. Oh
2:01:16
dear. And this is, I
2:01:18
just, I'm thinking about all
2:01:21
of the companies and services and
2:01:23
subscriptions and all of these places
2:01:25
that are integrating all of this
2:01:27
AI technology across so many aspects
2:01:30
of so many types of business
2:01:32
right now. Without a single thought
2:01:34
to this. Without a single thought
2:01:36
to how easy, and you just
2:01:38
described it right there, a worm
2:01:41
that goes in and then sends it all,
2:01:43
and then it goes to them and then
2:01:45
it goes, and now somebody could just. It
2:01:47
would explode. It would absolutely. It's a chain
2:01:49
reaction explosion. And that's earlier whenever they said
2:01:51
this isn't open AI or
2:01:54
Google's responsibility, I was a little confused about that,
2:01:56
but now I understand that what they're saying is
2:01:58
it's not their responsibility because it's. that's bigger
2:02:00
than that. It is a fundamental issue
2:02:04
that is only partially their
2:02:07
responsibility. It is everyone's responsibility.
2:02:10
And as you point out, is there a way to
2:02:14
fix this, to correct it? Again,
2:02:17
we're still having buffer overflows.
2:02:19
We're having reuse of variables
2:02:21
that were released. I
2:02:25
mean, these are the
2:02:28
simplest concepts in computing and
2:02:30
we can't get them right. And
2:02:33
now we're gonna turn something
2:02:35
loose in
2:02:39
people's email that's gonna read their
2:02:41
email for them to summarize it.
2:02:43
But it turns out that
2:02:45
the email it's reading could have been
2:02:47
designed to be malicious so that when
2:02:50
it reads it, it suddenly
2:02:52
sends itself to all their
2:02:54
contacts. And video just-
2:02:56
Holy crap. And video just showed
2:02:58
an example of talking to an
2:03:01
AI that
2:03:03
helps provide information for whether
2:03:05
you should take, whether
2:03:08
you should take a medication. And
2:03:10
so I'm talking to this bot that's
2:03:13
like, and I say, oh, I'm
2:03:15
taking St. John's Wort as a
2:03:17
supplement. Should I also take this
2:03:20
depression medication at the same time? Or anyone
2:03:22
who knows anything about that? No, you should
2:03:25
not. But imagine a world where there's this
2:03:27
worm that goes through and it's doing all
2:03:29
this self replication stuff that causes it to
2:03:31
put out a, I mean, oh,
2:03:34
Steve, what are we gonna do? So like,
2:03:36
yes, to blindly diagnose.
2:03:39
Yes, exactly. What
2:03:42
do we do? You
2:03:45
said it's not your problem. You're right, it's not our problem. I'm
2:03:48
like, yeah, I just, you know, just be
2:03:51
careful. Again,
2:03:54
it's to me, it's, when
2:03:58
we learned that- that
2:04:02
asking like in
2:04:04
a seductive way or like
2:04:07
sounding angry could get the
2:04:09
AI to become apologetic
2:04:12
and then give you what it
2:04:14
had been instructed not to is
2:04:17
like, oh, this does
2:04:19
not sound good. And
2:04:21
these guys have demonstrated just how
2:04:24
bad it is. The
2:04:26
good news is they show Google,
2:04:28
I'm sure Google just lost
2:04:30
their you know what you have though
2:04:33
said, oh, let's let's rethink
2:04:35
launching this tomorrow. Yeah, they
2:04:37
need to institute the no
2:04:40
means no protocol for this
2:04:42
because no, this
2:04:44
is that's scary. I mean, honestly,
2:04:46
this is easily the scariest thing
2:04:49
that I've heard you mention, only
2:04:51
because I'm thinking about how quickly
2:04:53
you even mentioned it earlier. It's
2:04:55
like a gold rush. We
2:04:58
know how irresponsible companies will be
2:05:00
when they're rushing to be first.
2:05:03
So much something, something
2:05:05
on the market. It's like, oh, AI, there's
2:05:08
AI that you got an AI coffee pot,
2:05:10
you got an AI, you know, toothbrush.
2:05:13
Like, oh, God. Yeah. Oh,
2:05:16
boy. Folks, got
2:05:20
a lot of got a lot of meditation to
2:05:23
do, and a lot of thinking to do, which is
2:05:26
what Steve brings you every
2:05:29
week right here on security.
2:05:31
Now you can head to
2:05:33
grc.com to get
2:05:36
the episodes available in
2:05:38
a very good quality,
2:05:40
but perhaps more
2:05:43
bandwidth friendly version of the
2:05:45
show. Along with
2:05:47
that, you will soon find within
2:05:50
I believe days a transcript
2:05:53
of the show, which is
2:05:55
also made available to you. Of
2:05:57
course, that website, grc.com. is
2:06:00
where you head to get Spin Right,
2:06:02
which we were talking about earlier, secretly
2:06:04
powered by kittens. And
2:06:08
of course, Security Now, you can
2:06:10
tune in every Tuesday, round about
2:06:13
4.30 p.m., Eastern 1,
2:06:15
3 p.m. Pacific. Leo LaPorte
2:06:17
will be back next week tanned and
2:06:20
ready to talk about security once again.
2:06:22
It has been my pleasure to join
2:06:24
you, see these past couple of weeks.
2:06:27
And is there anything I'm missing? Anything you want to plug? It's
2:06:30
been great working with you, Micah. And
2:06:32
let's tell Leo to, you know,
2:06:34
go take another trip. Yeah, why not? Why
2:06:37
not? Get out of here. Get
2:06:39
out of here, Leo. Thank you so much,
2:06:41
Steve. Folks, now I would like to mention
2:06:43
that you should check out Club Twit at
2:06:45
twit.tv slash club twit. There
2:06:48
you can get a subscription to Club Twit, $7 a
2:06:50
month, $84 a year. When
2:06:52
you join the club, you help support this
2:06:54
show, help keep everything that we do here
2:06:57
at Twit rolling. And of course,
2:06:59
you gain access to a lot of extra benefits
2:07:01
as well, including ad free versions of
2:07:03
every show, access to the
2:07:05
Twit Plus bonus feed, extra content you won't
2:07:07
find anywhere else behind the scenes before the
2:07:09
show, after the show, special Club Twit events
2:07:11
get published there. You also
2:07:13
get some Club Twit exclusive shows in
2:07:15
video format that you would not get
2:07:18
otherwise. If that sounds good to you,
2:07:20
twit.tv slash club twit. With
2:07:22
that, I will say goodbye as we
2:07:25
bring you into this episode of Security
2:07:27
Now. Bye-bye. Been
2:07:29
a pleasure, Micah. And we'll see you next
2:07:32
time. Leo gets the lust for
2:07:34
wandering. All
2:07:36
right, bye-bye.
Podchaser is the ultimate destination for podcast data, search, and discovery. Learn More