Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
It's time for Security Now. Steve Gibson is
0:02
here. The latest chapter in the Voyager 1
0:05
drama coming up. We'll talk
0:07
about the graybeard at Gentoo
0:09
who says no AI in Linux.
0:11
About the Hyundai owner whose car
0:13
really is tracking him. And then
0:16
what the EU plans to do
0:18
with end-to-end encryption. I can give
0:20
you a little tip. It's not
0:22
good news. All that coming up
0:24
next on Security Now. Podcasts
0:29
you love from people
0:31
you trust. This
0:33
is Twit. This
0:39
is Security Now with Steve Gibson, episode
0:41
971, recorded Tuesday, April 23rd, 2024.
0:48
Chat Out of Control.
0:52
It's time for Security Now, the show
0:54
where we cover the latest security, privacy, internet
0:56
updates, even occasionally some good books and
0:58
movies, with this guy right here, Steve
1:01
Gibson, the security guy
1:03
in chief. Hello, Steve. Yeah,
1:05
and, you know, where are the good
1:07
movies, Leo? I mean, like, we used
1:09
to have a lot of fun. I did see that you're doing
1:11
the Bobiverse with
1:13
Stacey's Book Club. Yes,
1:16
Thursday. Yes. She was
1:18
chagrined when you reminded her. She said oh
1:21
I forgot to read that. So
1:23
of course it'll take her an hour.
1:25
It's pretty quick. Yeah. And my wife
1:28
apparently just glances at pages when she
1:30
like I see her she like like
1:32
with ebooks. I say can
1:34
I test you on the content of this after
1:36
you're through like and she says
1:38
oh I'm I'm getting most of it. Did she
1:41
go to Evelyn Wood's speed-reading academy? Apparently,
1:44
or she's in some sort of a
1:46
time warp. I don't know because I mean I'm an
1:48
engineer. I read every word
1:50
and in fact that's why Michael McCollum
1:52
was sending me his books before publication
1:55
because it turns out I'm a pretty
1:57
good proofreading editor, because I spot
1:59
2:01
every mistake, of course not my
2:03
own. So other
2:06
people's are much easier to see. Yes,
2:08
it's really true. It's the same with code, right? You
2:11
can see, even though there's a bug in there, you
2:13
can stare at it till the cows come home and
2:15
you go. Absolutely, one of the neatest things that
2:18
the GRC group has done
2:20
for me is that our
2:22
SpinRite newsgroup is this
2:24
thing. Basically, we're not having any
2:26
problems. Thousands of people are coming on
2:29
board with 6.1, and it's
2:31
done. We're not chasing bugs
2:33
around. Oh, it is good.
2:36
We've got a great bunch of beta testers.
2:39
That's in control. We're going to
2:41
talk about what is out of
2:43
control. Today's title is Chat Out
2:45
of Control for
2:49
Security Now, in episode 971, for
2:51
the second to last episode of
2:53
the month of April, which
2:55
becomes important because of what's going to happen in
2:57
June. But anyway, I'm getting all tangled up here.
3:01
We're going to talk about a lot of fun things, like
3:03
what would you call
3:05
Stuxnet on steroids? What's
3:08
the latest on the Voyager 1 drama?
3:10
We've got even more good news than
3:12
we had last week. What
3:14
new features are coming to Android? Probably in
3:16
15. We're not sure, but probably.
3:19
And also in Thunderbird this
3:21
summer. What's China gone and done
3:23
now? What did Gentoo
3:25
Linux say? I'm
3:28
sorry, why did Gentoo Linux say
3:30
no to AI? And
3:33
what's that all about? And after sharing
3:35
and discussing a bunch of feedback, because there
3:37
wasn't a huge bunch of really
3:39
gripping news, but we had a lot of feedback
3:41
from our listeners that we're going to have fun with. And
3:44
a brief little update on SpinRite. We're
3:47
going to examine the latest
3:49
update to the European Union's
3:52
quite worrisome chat
3:54
control legislation, which
3:57
is reportedly just over a
3:59
month away from
4:02
becoming law. Is the
4:04
EU about to end
4:09
end-to-end encryption in order
4:11
to enable and require the scanning
4:13
of all encrypted
4:16
communications for the
4:18
children? And it appears
4:21
ready to do just that. This
4:23
latest update, it came
4:25
onto my radar because somebody said
4:28
that the legislators had excluded themselves
4:31
from the legislation. Of course. Well,
4:33
so I got this 203
4:36
page tome and
4:38
it's section 16a
4:41
was in bold because it just got added.
4:43
Anyway, we'll talk about it. I
4:45
think that the person doing that speaking
4:47
that caught my attention, I'm glad he
4:50
caught my attention, but he was overstating
4:52
the case in order to make a
4:54
point. But the
4:56
case that we have doesn't
4:58
need overstating because it
5:00
looks really bad. You know, there's
5:02
no sign of exclusion like the UK
5:04
gave us on their legislation in September
5:07
which said, "where technically feasible." That's completely
5:09
missing from this. So anyway, I think
5:11
we have a lot of fun to
5:14
talk about, a fun thing to talk
5:16
about. I did make sure that the
5:18
pictures showed up this week in Apple
5:22
devices. What's interesting is I have an older 6,
5:24
I think it's a 6 or maybe it's a
5:26
7 or 8. I don't know. Anyway,
5:29
the pictures all work there. Even
5:32
last week's pictures work there. But
5:34
not on my iPhone X. So
5:36
Apple did in fact change
5:39
the rendering of PDFs
5:42
which caused some
5:44
problems, some incompatibility. Anyway, I don't know
5:46
why it was last week but not
5:48
this week. We're all good to go
5:50
this week. So even Mac people
5:53
can see our picture of the week which
5:56
is kind of fun. So
5:59
lots of good stuff. So I verified
6:01
that it does work. That's
6:04
good news. Before the Picture of the Week, may I take this moment
6:06
to talk about a long time sponsor of
6:08
the show, a company we really like
6:10
quite a bit. You and I
6:13
started way back when, 19 years ago. The
6:16
Honey Monkey. Honey Monkey.
6:18
Yeah. And when we
6:20
did our last event in Boston
6:23
about four or five years ago now,
6:25
can't believe it, pre-COVID,
6:29
we had Bill Cheswick on who
6:31
created one of the very first honeypots.
6:36
Back then it was hard to do. Today,
6:38
it's trivially easy thanks to
6:40
the Thinkst Canary. Thinkst
6:43
Canaries are honeypots. They're well named because you know like
6:45
the canary in the coal mine. They're
6:47
honeypots that can be deployed in minutes and
6:50
they're there to let you know one thing. Somebody
6:53
is inside your network. Whether it's
6:55
a bad guy from the outside or
6:57
malicious insider, somebody's snooping
6:59
around your network. Whether
7:02
it's a fake SSH server, in my
7:04
case it's a Synology NAS. This
7:07
is the Thinkst Canary. It looks, in every
7:09
respect, even down to the MAC address, like a
7:12
Synology NAS; it's got the login
7:14
and everything, or an IIS server
7:17
or Apache or a Linux box
7:19
with the Christmas tree of services
7:21
opened up or just a few
7:23
carefully chosen tasty morsel services that
7:25
any bad guy will look at
7:27
and say, oh, I got
7:30
to try that. But the minute they
7:32
touch that device, it's
7:34
not really a server. It's
7:36
not really SSH. It's
7:39
a Thinkst Canary. And you're going to
7:41
get the alerts, just the alerts that
7:43
matter. No false positives, but real alerts
7:45
saying somebody's snooping around the Thinkst Canary.
7:48
You can also make Canary tokens files
7:50
that you can sprinkle around. So really, you
7:53
can have unlimited little tripwires all over your
7:55
network. I have files, .xls
7:57
spreadsheet files that say things like... employee
8:00
information, you know, what hacker is
8:03
going to not look at that, right? But as soon as
8:05
they open it, it's not really an Excel
8:07
file, it lets me know
8:09
there's somebody snooping around our network. You
8:12
choose a profile and there are hundreds to choose from.
8:14
It could be a SCADA device, it could be almost
8:16
anything. You choose and it's easy
8:18
to change too. I mean you could change it every day,
8:20
you have a different device, which is nice if you're tracking
8:23
the Wily hacker, as Bill might say. You
8:26
register with a hosted console for monitoring
8:28
and notifications that supports webhooks, they've got
8:30
an API, Slack, email, text, any way
8:32
you want to be notified, they can
8:34
notify you. They can even be a
8:36
green bubble if you want, you know, or
8:39
a blue bubble, whatever color bubble you want. Then
8:41
you wait. Attackers who breached your
8:44
network, malicious insiders,
8:46
other adversaries, absolutely,
8:49
you know, this is the problem companies normally
8:51
don't know these people are inside. They
8:53
are looking around, they're not just sitting
8:55
there going, they're looking,
8:58
they're actively exploring devices and
9:00
files and they will trigger
9:02
the Thinkst Canaries and you will
9:04
be alerted. Thinkst Canary is so
9:07
smart. Visit canary.tools slash
9:09
twit for just $7,500 a
9:12
year as an example, you can get five
9:14
Thinkst Canaries. A big bank might have hundreds, small
9:16
operation like ours, just a handful. Five
9:19
Thinkst Canaries, $7,500 a year, your own
9:21
hosted console, you get the upgrades
9:23
to support the maintenance and
9:26
here's a deal, if you use the offer code twit in
9:28
the how did you hear about this box, you're gonna get
9:30
10% off the price for
9:33
life. There's no risk
9:35
in this, you can always return your
9:37
Thinkst Canary for a full refund, two-month
9:39
money back guarantee, 60 days.
9:41
I have to point out
9:43
that in the decade now that we've been
9:46
telling you about Thinkst Canaries, no
9:49
one has ever asked for a refund. People
9:51
love them. These things, once you get it, once you have
9:53
it, you will just say no, no,
9:55
no, every network needs one or two
9:57
or three or four or five. Visit canary.tools
10:00
and enter the code TWIT in the how
10:02
did you hear about us box. Thinkst
10:04
Canary, a very important
10:06
piece of your overall security. Now,
10:10
I believe it's time for
10:13
a picture of the week yeah
10:16
so this was just caught
10:18
my attention because lately I've
10:21
been seeing as I'm sure our listeners
10:23
have so much of this
10:25
you know AI
10:28
everything yeah I everywhere
10:30
so the picture shows
10:32
a couple of
10:34
young upstarts in,
10:36
you know, in
10:39
a startup venture who
10:41
have got some ideas for
10:43
some product that they want to
10:45
create. And one
10:50
of the things that happens when you're going out
10:52
to seek financing and
10:54
funding you're typically going
10:56
and giving presentations to like venture
10:58
capital firms and explaining what you're
11:00
going to do and and how
11:03
you're going to do it and
11:05
so you know powerpoint presentations are
11:07
put together and they're called pitch
11:09
decks because you're making a pitch
11:11
to to you know whomever you're
11:13
you're explaining your ideas to so
11:16
we see in this picture two
11:18
guys facing each other each behind
11:20
their own display
11:24
one of them saying to the other, "Can
11:26
you go through all the
11:28
old pitch decks and
11:31
replace the word 'crypto' with
11:34
'AI'?" And of course the
11:36
point being that you know we were just
11:38
what was it like a year ago Leo
11:41
it's just been everything yeah yeah exactly
11:43
I mean it was like time
11:46
must be accelerating because it was
11:48
just so recently that everything was
11:50
blockchain this and blockchain that cryptocurrency
11:52
you know crypto this and that
11:55
and so, you know, that was
11:57
all, you know, yesterday.
11:59
So what do they say? So
12:02
last year or something? Anyway, now
12:04
it's AI. So yes, and we have a
12:07
couple things during this
12:09
podcast that we'll be touching on
12:11
this too. So anyway, just not
12:13
a fantastic picture, but I thought
12:15
it was just so indicative of
12:17
where we are today. I've been
12:19
dealing with Bing. I don't
12:22
know why I've been launching it, but it's been
12:24
launched a few times in the last week. And
12:26
Microsoft- Because you use Windows, that's why. Oh,
12:29
that was definitely- They do everything they can
12:31
to get Bing in your face. Definitely. Oh
12:33
my God, yes. And so it's like, no,
12:35
I don't want this. And also,
12:38
for me, since I'm not normally
12:40
using Edge or Bing, it's
12:44
like, okay, how do I close this? It looks
12:46
like it takes over the whole UI. And it
12:49
very much like that old, when
12:52
people were being forced to upgrade to
12:54
Windows 10 against their will, where for
12:56
a while it said, no, thank you.
12:58
And then it changed to later tonight.
13:00
So it's like, wait a minute, what
13:02
happened to not at all, never, ever?
13:05
You know, it's like, do you want to do it now or do you want to
13:07
do it in an hour? Wait,
13:10
those are my only two options. Anyway,
13:13
okay, so as
13:16
we know, Security Now is
13:18
primarily an audio podcast.
13:21
But even those watching, you know, though it
13:23
remains unclear to me why anyone would, don't
13:26
have the advantage of looking at my show
13:28
notes. If anyone were to be reading the
13:30
notes, they would see that the spelling of
13:32
the name of this
13:35
new attack is
13:37
far more, shall we say,
13:39
acceptable in polite company than
13:42
the attack's verbal pronunciation.
13:45
But this is an audio podcast and the
13:47
story of this attack that I very much
13:49
want to share refers to the attack by
13:51
name. And that
13:53
name, which rhymes with Stuxnet,
13:56
is spelled F-U-X-N-E-T.
14:01
And there's really no other way to
14:03
pronounce it than just to spit it out. But
14:06
I'm just gonna say F-net for
14:08
the sake of the children.
14:10
Thank you. Because yes,
14:13
you know, so it's not really an
14:15
F-bomb but it's audibly
14:17
identical and there's no point in
14:19
saying it. Everybody understands how you
14:22
would pronounce F-U-X N-E-T,
14:25
which is what the Ukrainians named
14:28
their weapon, which they reportedly,
14:32
and this was confirmed by an
14:34
independent security
14:37
company, successfully launched
14:39
into the heart of Russia.
14:42
So with that preamble
14:44
and, you know, explanation, let's look
14:46
at the very interesting attack that
14:49
was reported last week by
14:51
Security Week. Their headline, which
14:53
also did not shy away from using
14:55
the attack's name, said
14:58
destructive ICS malware,
15:01
F-net, used by
15:03
Ukraine against Russian infrastructure.
15:07
So here's what we learned from what they
15:09
wrote. They said in recent months
15:11
a hacker group named Blackjack,
15:14
which is believed to be
15:16
affiliated with Ukraine's security services,
15:18
so, you know, as in
15:20
state sponsored, has
15:22
claimed to have launched attacks
15:25
against several key Russian organizations.
15:28
The hackers targeted ISPs,
15:30
utilities, data centers, and
15:32
Russia's military and allegedly
15:34
caused significant damage and
15:36
exfiltrated sensitive information. Last
15:39
week Blackjack disclosed
15:41
the details of an alleged
15:44
attack aimed at
15:46
Moskollektor, Moskollektor,
15:48
a Moscow-based company
15:53
responsible for underground infrastructure,
15:55
meaning things like water,
15:58
sewage, and communication systems.
16:01
So quoting they said Russia's
16:05
industrial sensor and monitoring
16:07
infrastructure has been disabled
16:10
so said the hackers.
16:13
It includes Russia's network operations
16:15
center that monitors
16:17
and controls gas, water, fire
16:20
alarms and many others including
16:22
a vast network of remote
16:24
sensors and IoT controllers. I
16:27
don't know what to say.
16:29
So the hackers claimed to
16:31
have wiped database, email, internal
16:33
monitoring and data storage servers.
16:36
In addition they claimed to
16:38
have disabled some 87,000 sensors
16:40
including ones associated with airports,
16:42
subway systems and
16:49
gas pipelines. To achieve this
16:51
they claimed to have used Fnet,
16:54
a malware they described
16:57
as Stuxnet on steroids
17:00
which enabled them to physically destroy
17:02
sensor equipment. Our longtime listeners
17:04
and anybody who's been in around
17:07
IT will recall that
17:09
Stuxnet was a previous
17:14
also physically
17:17
destructive malware.
17:19
I guess we have to call it malware even
17:22
though we, the US, apparently
17:24
participated. The
17:26
US intelligence services were involved
17:28
in its creation. It caused
17:31
the centrifuges used in
17:33
Iran to over spin and
17:39
essentially self-destruct. So
17:42
those were being used to
17:44
enrich uranium at the time. Anyway so
17:47
that's why they're calling this thing
17:49
Stuxnet on steroids is that
17:52
they worked to cause actual
17:55
physical damage as we'll see in
17:57
a second to hardware. The
18:00
difference though between destroying centrifuges
18:02
which have one purpose, which
18:05
is enriching uranium and destroying
18:07
sensors which
18:10
can prevent gas
18:12
leaks and I mean this
18:14
is a civilian attack. Finish
18:17
this story, but I would love to talk at the end of
18:19
it about how you feel about this. Good.
18:22
And I agree with you. They wrote, FNET
18:24
has now started to flood the
18:27
RS-485 M-Bus
18:30
and is sending random commands to
18:32
87,000 embedded control
18:34
and sensory systems and
18:36
they did say while carefully
18:39
excluding hospitals, airports and
18:41
other civilian targets. Now
18:45
they said that so they
18:47
share some of our sensitivity to that
18:50
and I do question, you
18:52
know, given that
18:55
they're also claiming 87,000 some
18:58
sensors, how
19:00
they can be that careful about what
19:03
they've attacked and what they haven't. Anyway,
19:06
the report goes on saying the
19:08
hackers claims are difficult to verify
19:11
but the industrial and
19:13
enterprise IoT cybersecurity firm
19:15
Claroty was able
19:17
to conduct an analysis of
19:19
the FNET malware based on
19:21
information and code made available
19:23
by Blackjack. Claroty pointed
19:26
out that the actual sensors
19:28
deployed by MOS collector which
19:30
are used to collect physical
19:32
data such as temperature were
19:34
likely not themselves damaged by
19:36
FNET. Instead the
19:38
malware likely targeted roughly 500
19:41
sensor gateways. So
19:45
the idea is that the gateway is
19:47
a device located out remotely somewhere, and
19:49
it has RS-485
19:52
lines running out to
19:54
a ton
19:56
of individual sensors. It's a sensor
20:00
data collector and forwarding device.
20:03
So the malware targeted around 500 of
20:05
these sensor gateways, which
20:09
communicate with the sensors over a
20:11
serial bus, such as RS-485 or
20:13
meter bus that was mentioned by
20:15
Blackjack. These gateways are
20:17
also connected to the internet to
20:20
be able to transmit data to
20:22
the company's global monitoring system. So
20:24
that was probably the means by
20:26
which the FNET malware got into
20:29
the sensor gateways. Claroty
20:31
notes, quote, if the gateways
20:33
were indeed damaged, the repairs
20:35
could be extensive, given
20:38
that these devices are spread
20:40
out geographically across Moscow and
20:42
its suburbs and must be
20:44
either replaced or their firmware
20:46
must be individually reflashed. Claroty's
20:49
analysis of FNET showed
20:52
that the malware was likely
20:54
deployed remotely. Then, once
20:56
on a device, it would
20:58
start deleting important files and
21:00
directories, shutting down remote access
21:02
services to prevent remote restoration,
21:05
and deleting routing table information
21:07
to prevent communication with other
21:09
devices. FNET would
21:12
then delete the file system and
21:14
rewrite the device's flash memory.
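[Editor's note: a rough sense of why rewriting flash in a tight loop is itself destructive. Flash blocks tolerate only a finite number of program/erase cycles; the sketch below is a back-of-the-envelope model with purely illustrative numbers, not the specs of any real part.]

```python
# Back-of-the-envelope model of wearing out a flash block by rewriting it
# in a tight loop. Both numbers below are illustrative assumptions, not
# the specs of any real part: endurance ratings vary from roughly a
# thousand to a hundred thousand program/erase cycles.
RATED_PE_CYCLES = 10_000   # assumed program/erase endurance of one block
REWRITES_PER_SEC = 20      # assumed sustained erase+rewrite rate

def minutes_to_exhaust(rated_cycles: float, rewrites_per_sec: float) -> float:
    """Minutes until the block's rated endurance is fully consumed."""
    return rated_cycles / rewrites_per_sec / 60

print(f"~{minutes_to_exhaust(RATED_PE_CYCLES, REWRITES_PER_SEC):.0f} minutes "
      "to burn through one block's rated endurance")
```

Even at a modest rewrite rate, the rated endurance is gone in minutes, which is why firmware that deliberately loops on erase-and-write can leave a soldered-down chip unusable.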
21:18
Once it has corrupted the file
21:20
system and blocked access to the
21:22
device, the malware attempts to physically
21:24
destroy the NAND
21:26
memory chip and
21:29
then rewrites the UBI volume
21:31
to prevent rebooting. In
21:34
addition, the malware attempts to
21:36
disrupt the sensors connected to the
21:38
gateway by flooding their
21:40
serial communications channels with random data
21:43
in an effort to overload the
21:45
serial bus and sensors, essentially
21:48
performing an internal DOS attack on
21:50
all the devices the gateway is
21:52
connected to. And I'll argue
21:54
that if these are not sensors but these
21:57
are actuators, as you said, Leo, this
21:59
could be costly, causing some true damage. I
22:01
mean, like, true infrastructure damage. Well,
22:04
they said some places, airports,
22:07
gas pipelines. Yeah.
22:09
Yeah. Yeah.
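[Editor's note: the bus-flooding mechanism described above can be sketched as a toy simulation. The frame format here (start byte, length, payload, XOR checksum) is invented for illustration and is not the real M-Bus wire protocol; the point is only that once random bytes are interleaved with legitimate traffic, framing and checksums stop validating.]

```python
import random

SOF = 0x68  # start-of-frame marker for our made-up frame format

def make_frame(payload: bytes) -> bytes:
    """Build a toy frame: SOF, length, payload, XOR checksum."""
    body = bytes([len(payload)]) + payload
    csum = 0
    for b in body:
        csum ^= b
    return bytes([SOF]) + body + bytes([csum])

def recover_frames(stream: bytes) -> int:
    """Count frames in the byte stream whose checksum validates."""
    count, i = 0, 0
    while i < len(stream):
        if stream[i] == SOF and i + 1 < len(stream):
            length = stream[i + 1]
            end = i + 2 + length + 1   # SOF + len + payload + checksum
            if end <= len(stream):
                body = stream[i + 1:end - 1]
                csum = 0
                for b in body:
                    csum ^= b
                if csum == stream[end - 1]:
                    count += 1
                    i = end
                    continue
        i += 1
    return count

random.seed(1)
clean = b"".join(make_frame(b"temp=21") for _ in range(100))

# Attacker writes ~10 random bytes for every legitimate byte on the bus.
flooded = bytearray()
for b in clean:
    flooded += bytes(random.randrange(256) for _ in range(10))
    flooded.append(b)

print(recover_frames(clean))           # all 100 clean frames validate
print(recover_frames(bytes(flooded)))  # almost nothing survives the noise
```

Nothing on the bus is physically broken here; the receiver simply can no longer find valid frames, which matches Claroty's description of the sensor data acquisition being rendered useless.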
22:11
Claroty explained, quote, during the malware
22:14
operation, it will repeatedly write
22:16
arbitrary data over the meter bus
22:18
channel. This will prevent the sensors
22:21
and the sensor gateway from sending
22:23
and receiving data, rendering the
22:25
sensor data acquisition useless. Therefore, despite
22:27
the attacker's claim of physically
22:30
destroying 87,000 devices, wrote Claroty,
22:33
it seems that they actually managed
22:35
to infect the sensor gateways and
22:37
were causing widespread disruption by
22:40
flooding the meter bus channel connecting
22:42
the sensors to the gateway, similar
22:44
to network fuzzing the different connected
22:46
sensor equipment. As a result, it
22:49
appears only the sensor gateways were
22:51
bricked and not the end sensors
22:53
themselves. So
22:56
okay, I particularly appreciated
22:59
the part about attempting to
23:01
physically destroy the gateway's NAND
23:03
memory chip because it
23:06
could happen. As we
23:08
know, NAND memory is fatigued
23:11
by writing because writing and erasing,
23:13
which needs to be part of
23:15
writing, is performed by
23:18
forcing electrons to tunnel through
23:20
insulation, thus weakening
23:22
its dielectric properties over
23:24
time. So the
23:27
attacking malware is likely writing and
23:29
erasing and writing and erasing the
23:31
NAND memory over and
23:33
over as rapidly as it
23:36
can. And since such
23:38
memory is likely embedded into the
23:40
controller and is probably not field
23:42
replaceable, that would necessitate
23:44
replacing the gateway device. And perhaps
23:46
all 500 of them spread
23:49
across Moscow and its suburbs. And
23:52
even if the NAND memory was not
23:54
rendered unusable, the level of destruction
23:56
appears to be quite severe. Wiping
23:58
stored data in the network is not a good idea. directories and killing
24:01
the system's boot volume means
24:03
that those devices probably cannot
24:05
be remotely repaired. Overall, I'd
24:07
have to say that this
24:09
extremely destructive malware was well
24:11
named. And
24:13
we live in an extremely
24:17
and increasingly
24:19
cyberdependent world. Everyone
24:22
listening to this podcast knows
24:24
how rickety the world's cybersecurity
24:26
truly is. So I shudder
24:30
at the idea of any sort
24:33
of all-out confrontation between superpowers. I
24:36
don't want to see that. Do you think there should be
24:38
a Geneva convention-style
24:41
accord between nations about cyber
24:44
warfare? I mean it's
24:46
a it's it's the
24:48
problem is you can do it but then you're just
24:50
gonna escalate. It's gonna go back and forth just which
24:53
is why we decided for instance not to
24:55
allow bioweapons. They still get used but
24:58
it's again you know the civilized world
25:00
agrees not to use biologic
25:04
weapons in war. Well
25:08
and the feeling is of course that COVID
25:10
was a lab escape. Well I mean
25:13
there's no evidence but not a lot.
25:16
There's no evidence. That's a question. Yeah. It
25:19
wasn't a very good it wasn't a very effective
25:23
war-like attempt since it killed far more people in
25:25
China than it did elsewhere. But
25:27
anyway. Clearly
25:30
a mistake. Yeah it wasn't intentional.
25:32
So what
25:35
do you think? I mean so
25:37
I agree with you the problem
25:39
is it's tempting because
25:44
it doesn't directly hurt
25:47
people right? I mean so like like
25:49
right now we're in a Cold War.
25:51
We're constantly on this
25:53
podcast talking about state-sponsored attacks.
25:56
Well those are attacks. Especially
25:58
infrastructure attacks. Yes,
26:00
yes. I mean the whole
26:02
colonial pipeline thing, that
26:06
really damaged the US. And I
26:08
mean it was a
26:10
true attack. So,
26:13
you know, and
26:15
we just talked about how
26:17
China was telling some
26:19
of their, China
26:22
told their commercial
26:24
sector, you need to
26:26
stop using Windows. You
26:28
need to stop using,
26:30
you know, this Western
26:32
computer technology because the
26:35
West is able to get into it. So
26:38
that was the first indication we really
26:40
had that, as I put it at
26:42
the time, that we're giving as well
26:44
as we're getting. Unfortunately, this
26:47
is all happening. I mean I
26:50
wish none of it was happening, but the
26:52
problem is security is porous.
26:56
And I guess the
26:59
reason a nuclear weapon
27:02
and a
27:04
bioweapon are
27:07
unconscionable, you know,
27:09
is that they
27:12
are so tissue
27:15
damaging, for lack of a better word. They
27:18
really – they're
27:21
like really
27:23
going to kill people, whereas
27:25
a network got breached.
27:28
Whoops. I mean it doesn't
27:30
have the same sort of
27:33
visceral grip. And unfortunately, here's
27:35
an example, and I'm glad you brought it up,
27:37
Leo, that Ukraine, sympathetic
27:40
as we can be for their
27:44
situation. This
27:46
was a blunt-edged attack,
27:49
right? I mean this was, you know,
27:52
sewage and water and
27:54
gas and airports and, you know,
27:57
I mean it's – they created
27:59
it. couldn't have controlled what
28:02
damage was caused. You
28:05
mess up water and sewage and you're really
28:07
hurting actual people who
28:09
are innocent of what ... Or
28:12
airports or gas pipelines. I
28:16
don't know what the answer is. I'm
28:18
no fan of Putin. He brought the
28:20
war upon himself, but hurting
28:23
civilians, I don't know. This
28:26
is not a
28:28
good situation. It's
28:30
the world we're in. It's the world we're in. Yeah.
28:33
And it is technology we
28:35
created. I mean, you know, oh,
28:38
let's have the password be admin
28:40
admin because we don't want people
28:43
calling us and asking what the password
28:45
is. Or I mean, it's like we've
28:47
made so many bad decisions. And
28:50
while we're now making them better
28:52
today, we have seen how
28:55
long the tail of inertia is. I
28:57
mean, you could argue
28:59
infinite. We
29:01
still have Code Red and Nimda out there sending
29:04
packets out. Somewhere there's
29:07
an NT machine just hoping to
29:09
find something that it can infect.
29:12
When is it going to die? I don't know. We
29:15
have another update on Voyager 1. Actually,
29:21
if Voyager is not going to give
29:23
up on us, we're not going
29:25
to give up on it. But remember
29:27
that no matter what, Voyager
29:30
is deriving all
29:33
of its diminishing operating power
29:36
from the heat being generated
29:38
by the decay of radioisotopes.
29:41
And through the years and now
29:43
decades since this thing left
29:46
Earth in '77, those isotopes are
29:50
continuing to put out less and less
29:53
heat. And thus, Voyager has
29:55
less and less energy available to it.
29:58
So it can't go
30:00
on forever, but it
30:02
amazes everybody that it has gone as long as
30:04
it has and it is still going. What
30:07
equally amazes me is
30:10
that the intrepid group
30:12
of well-past-their-retirement engineers who
30:15
are now endeavoring to patch
30:18
the code of this
30:20
ancient machine that's 22 and
30:23
a half light
30:26
hours away. Oh my
30:28
God, it's amazing. It
30:30
boggles the mind. It's so
30:33
amazing. Just yesterday, on April
30:35
22, JPL, NASA's
30:38
Jet Propulsion Laboratory, posted
30:40
the news under the
30:42
headline, NASA's Voyager 1 Resumes
30:45
Sending Engineering Updates to Earth. They
30:52
wrote, After some
30:54
inventive sleuthing, the mission
30:56
team can, for the first time
30:58
in five months, check
31:01
the health and status of
31:04
the most distant human-made
31:06
object in existence. For
31:09
the first time since November, NASA's
31:11
Voyager 1 spacecraft is returning
31:13
usable data about the health
31:15
and status of its onboard
31:17
engineering systems. The next
31:19
step is to enable
31:21
the spacecraft to begin returning
31:24
science data again. The
31:26
probe and its twin, Voyager 2,
31:29
are the only spacecraft to ever
31:31
fly in interstellar
31:33
space, the space between
31:35
the stars. Voyager
31:38
1 stopped sending readable
31:40
science and engineering data
31:42
back to Earth on
31:44
November 14, 2023. Even though
31:46
mission controllers could tell the spacecraft
31:49
was still receiving their commands and
31:51
otherwise operating normally. In
31:54
March, the Voyager engineering
31:56
team at NASA's Jet
31:58
Propulsion Laboratory in Southern California confirmed
32:00
that the issue was tied
32:02
to one of the spacecraft's
32:05
three onboard computers called the
32:07
Flight Data Subsystem, or FDS.
32:10
The FDS is responsible for packaging
32:12
the science and engineering data before
32:15
it's sent to Earth. The
32:18
team discovered that a single
32:20
chip responsible for storing a
32:22
portion of the FDS's
32:25
memory, including some
32:28
of the FDS computer's software
32:30
code, is no longer working.
32:33
The loss of that code rendered
32:35
the science and engineering data unusable.
32:39
Unable to repair the chip, right, at 22 and a
32:41
half light-days away, the team decided to
32:50
place the affected code
32:52
elsewhere. They're relocating the
32:54
code, Leo, at
32:56
this distance on a probe built in the early '70s and
32:58
launched in
33:01
1977. Oh, cool. It's
33:03
insane. But
33:06
they said no single location is large
33:09
enough to hold the section of code
33:11
in its entirety, so they're having to
33:13
fragment it. They
33:16
devised a plan to divide
33:18
the affected code into sections
33:21
and store those sections in
33:23
different places in the
33:26
FDS. To make
33:28
this plan work, they also needed
33:30
to adjust those code sections to
33:32
ensure, for example, that they all
33:34
still function as a whole. Any
33:37
references to the location
33:39
of that code in other
33:41
parts of the FDS memory
33:43
need to be updated as
33:45
well. So they're relocating and
33:48
then patching to relink the
33:51
now fragmented code sections
33:53
so that they jump to
33:55
each other. It's
33:58
dynamic linking, in
34:00
a way that was never designed or
34:03
intended. They
34:05
wrote, the team started by
34:08
singling out the code responsible
34:10
for packaging the spacecraft's engineering
34:12
data. They sent
34:14
it to its new location in
34:16
the FDS memory on April 18th.
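[Editor's note: the relocate-and-patch step JPL describes can be sketched in miniature. Everything in this toy is invented for illustration — the dict-based "machine," the JMP opcode, and all addresses; nothing here is real FDS code. It splits a block across two free regions, chains the fragments with a jump, and fixes up every absolute reference to a moved address.]

```python
# Toy sketch of relocating code out of a bad memory region. Memory is a
# dict of address -> (opcode, argument); "JMP n" is an absolute jump.

def relocate(memory, block, frag_starts):
    """Split `block` (ordered old addresses) across regions beginning at
    `frag_starts`, appending a JMP from each fragment to the next, and
    rewriting every JMP in memory that targeted a moved address."""
    n = len(frag_starts)
    size = -(-len(block) // n)                      # ceiling division
    frags = [block[i * size:(i + 1) * size] for i in range(n)]
    # Build the old-address -> new-address map.
    addr_map = {old: start + off
                for frag, start in zip(frags, frag_starts)
                for off, old in enumerate(frag)}
    for i, (frag, start) in enumerate(zip(frags, frag_starts)):
        for off, old in enumerate(frag):            # move the instructions
            memory[start + off] = memory.pop(old)
        if i + 1 < n:                               # chain to next fragment
            memory[start + len(frag)] = ("JMP", frag_starts[i + 1])
    for addr, (op, arg) in list(memory.items()):    # patch stale references
        if op == "JMP" and arg in addr_map:
            memory[addr] = (op, addr_map[arg])
    return memory

mem = {10: ("JMP", 100),                 # a caller elsewhere in memory
       100: ("NOP", None), 101: ("NOP", None),
       102: ("JMP", 100),               # internal back-reference
       103: ("NOP", None)}
relocate(mem, [100, 101, 102, 103], [200, 300])
print(mem[10], mem[202], mem[300])
```

The essential constraint matches the real one: every absolute reference into the moved code has to be found and updated, or the fragments won't still function as a whole.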
34:20
A radio signal takes about 22 and a
34:22
half hours to reach Voyager 1, which
34:25
is now over 15 billion, with a B,
34:27
miles from Earth. And
34:32
another 22 and a half hours for a
34:34
signal, oh, sorry, hours, not days, for a
34:36
signal to come back to Earth. When
34:39
the mission flight team heard back
34:41
from the spacecraft on April 20th,
34:44
they saw that the modification
34:46
worked. For the first time
34:48
in five months, they have been
34:51
able to check the health and status
34:53
of the spacecraft. During the
34:55
coming weeks, the team will relocate
34:58
and adjust the other affected
35:01
portions of the FDS software.
35:03
These include the portions that
35:06
will start returning science data,
35:08
rendering the satellite again, back
35:11
to doing what it was
35:13
designed to do, which is using
35:15
its various sensor suites and
35:18
sending back what it's seeing and
35:20
finding out in interstellar space, which,
35:22
as I mentioned previously, has
35:25
surprised the cosmologists because
35:27
their models were wrong.
35:30
So Voyager 1 is saying, ah, not
35:32
so fast there. Uh, nice
35:35
theory you got, but that's not
35:38
matching the facts. Wow.
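The scheme the team described, splitting a routine that no longer fits in any one place into fragments sized to the available free regions and then patching each fragment to jump to the next, can be sketched in miniature. This is a toy illustration only: the memory model and "instruction" format are invented, and it is in no way the real FDS instruction set or NASA's tooling.

```python
# Illustrative toy of Voyager-style code relocation: split a routine that
# no longer fits in any single free region into fragments, then patch each
# fragment to jump to the next so the pieces still run as a whole.
# (Invented memory model -- not the real FDS.)

def relocate(code, free_regions):
    """code: list of instructions; free_regions: {address: capacity}.
    Returns {address: fragment}. Each fragment but the last ends with a
    JMP to the next fragment's address (one slot is reserved for it)."""
    placements, order = {}, []
    remaining = list(code)
    for addr in sorted(free_regions):
        if not remaining:
            break
        room = free_regions[addr] - 1      # reserve one slot for the JMP/RET
        placements[addr] = remaining[:room]
        remaining = remaining[room:]
        order.append(addr)
    if remaining:
        raise MemoryError("not enough free space for all fragments")
    # Relink: patch each fragment to transfer control to its successor.
    for here, there in zip(order, order[1:]):
        placements[here].append(("JMP", there))
    placements[order[-1]].append(("RET", None))
    return placements

# A 10-"instruction" routine scattered across three small free regions.
layout = relocate([("OP", i) for i in range(10)],
                  {0x100: 4, 0x200: 5, 0x300: 6})
```

Concatenating the fragments, minus the patched jumps, reproduces the original routine; that is the invariant the real team had to preserve, while also updating every reference to the moved code elsewhere in FDS memory.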
35:43
Yay, V'Ger. And
35:46
yay, those brilliant scientists who
35:48
are keeping her alive. Oh,
35:50
and Leo, I should mention, Laurie
35:52
and I watched that
35:54
documentary, "It's Quieter in
35:56
the Twilight." And,
35:58
what was interesting was that this
36:00
announcement, and it was picked up in
36:03
a few other outlets, showed a photo
36:05
of the event where the
36:07
team were gathered around their conference
36:09
table. I recognize them from
36:12
the documentary. It's the same people. Yeah,
36:15
exactly. They're all still there. In
36:18
fact, some of them don't look like they've changed
36:20
their clothes, but that's what you get with
36:23
old JPL engineers. Love it. It's
36:25
such a great, wonderful story. It
36:28
really is. Let's take a break. I'm going
36:30
to catch my breath, and then we're going
36:32
to talk about changes coming to Android
36:34
15 and Thunderbird. Yes. All
36:37
right. As we continue with the best
36:39
show on the
36:41
podcast universe, 22
36:45
light hours ahead of everyone else. Our
36:48
show today brought to you by Lookout.
36:51
We love Lookout. Every company today is a
36:53
data company. It's all about data, isn't it?
36:56
And that means every company is at risk. That's
37:00
the bad news. Cyber criminals, breaches,
37:03
leaks, these are the new
37:06
norm. And cyber criminals are growing more sophisticated
37:08
by the minute. At a time when boundaries
37:10
no longer exist, what it means
37:12
for your data to be secure has
37:15
fundamentally changed. Enter
37:17
Lookout. From the first
37:19
phishing texts to the final data grab,
37:22
Lookout stops modern breaches as
37:24
swiftly as they unfold. Whether
37:26
on a device, in the
37:29
cloud, across networks, or
37:31
working remotely at the local coffee shop, Lookout
37:34
gives you clear visibility into all
37:36
your data at rest and
37:38
in motion. You'll monitor, assess,
37:41
and protect without sacrificing productivity
37:43
for security. With a
37:45
single unified cloud platform, Lookout
37:47
simplifies and strengthens, reimagining security
37:50
for the world that we'll
37:52
be in today. Visit
37:55
lookout.com today to learn how to safeguard
37:57
your data, secure hybrid work, and
37:59
reduce IT complexity. That's
38:02
lookout.com. Thank
38:05
you lookout for supporting the great
38:07
work Steve is doing here
38:09
at security now. Okay so there's
38:12
not a lot of clear information about this
38:15
yet but Google is working on a new
38:17
feature for Android which
38:19
is interesting they're gonna start watching
38:21
their apps' behavior.
38:24
It will place under quarantine any
38:27
applications that might sneak
38:30
past its Play Store
38:32
screening only
38:34
to then begin exhibiting signs
38:36
of behavior that it
38:38
deems to be malicious. The
38:42
apps will reportedly have all
38:44
their activity stopped, all
38:46
of their windows hidden and
38:48
notifications from the quarantined apps will
38:51
no longer be shown. They also
38:53
won't be able to offer any
38:55
API level services to other apps.
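Google has published no API for this feature, but the reported effects read like a per-app policy gate. Purely as a hypothetical sketch of those described effects, every name below invented, and nothing here actual Android code:

```python
# Hypothetical sketch of the reported quarantine behavior -- invented
# names only; this models the described effects, not Android itself.
from dataclasses import dataclass

@dataclass
class AppState:
    name: str
    quarantined: bool = False

def deliver_notification(app, message):
    """Notifications from quarantined apps are reportedly suppressed."""
    if app.quarantined:
        return None                        # silently dropped
    return f"{app.name}: {message}"

def bind_service(client, provider):
    """Quarantined apps reportedly can't offer services to other apps."""
    if provider.quarantined:
        raise PermissionError(f"{provider.name} is quarantined")
    return f"{client.name} bound to {provider.name}"
```

The interesting design point is that the app stays installed and visible to the user, so quarantine looks more like a reversible suspension than a removal.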
38:59
The reports are that Google began working on this feature
39:02
during Android 14's development last
39:04
year and that the feature
39:06
is expected to finally appear
39:08
in forthcoming Android 15 but
39:11
we don't have that confirmed for
39:13
sure. So there
39:16
wasn't, I wasn't able to find, any dialogue
39:20
or conjecture about
39:23
why the apps aren't just removed.
39:25
Oh, and they
39:27
do still appear to be an app installed on the
39:29
phone. They're not hiding it from the user they're
39:32
just saying no you bad app
39:35
we don't like what you've been
39:37
doing. Maybe it reports
39:40
back to the Play Store and
39:42
then Google takes a closer look
39:45
at the app which is in
39:47
the Play Store which of course
39:49
is how the user got it and then
39:52
says oh yeah we did
39:54
miss this one and at that point it gets
39:56
yanked from the Play Store and yanked from all
39:58
the Android devices. It
40:00
could just be like essentially
40:03
functioning as
40:05
a remote sensor package. Anyway, I'm sure
40:07
we'll learn more once it becomes official,
40:10
hopefully in this next Android 15. Also
40:14
this summer, Thunderbird
40:16
will be acquiring support for
40:19
Microsoft Exchange email for the
40:21
first time ever. It'll
40:23
only be email at first. The other
40:26
exchange features of calendar and contacts are
40:29
expected to follow at some later date,
40:31
although Mozilla is not saying for sure.
40:33
Now, I happen to be a
40:35
Thunderbird user. I was finally
40:38
forced to relinquish the
40:40
use of my beloved
40:42
Eudora email client. Once
40:45
I began receiving email containing
40:47
extended non-ASCII character sets that
40:49
Eudora was unable to manage,
40:51
I got these weird capital
40:53
A with little circles above
40:56
them things in my email,
40:58
which was annoying instead
41:00
of line separators. At
41:04
the same time, I have zero interest in
41:06
Exchange. GRC runs a
41:08
simple and straightforward instance of a
41:11
mail server called hMailServer,
41:14
which handles traditional POP, IMAP,
41:16
and SMTP and
41:19
does it effortlessly with ample features.
41:21
But I know that Exchange is a
41:23
big deal and obviously
41:25
Mozilla feels that for Thunderbird to
41:28
stay relevant, it probably needs to
41:30
add support for
41:33
Exchange. In any
41:35
event, to support this rather massive coding effort,
41:38
in Mozilla's reporting of this, they
41:40
mentioned that it
41:43
had been 20 years since
41:45
email has been done. 20
41:49
years since any
41:51
code in Thunderbird dealing
41:54
with email had been touched. They've just
41:56
been screwing around with the
41:58
user interface. And that during
42:01
those 20 years, a lot
42:04
of, as they put it,
42:06
institutional knowledge about that code
42:08
had drained. So
42:12
they've decided now that
42:14
they're going to recode in Rust. Rust
42:18
is their chosen implementation language, and they
42:20
did so for all the usual reasons.
42:22
They cited memory safety. They
42:24
said Thunderbird takes input from anyone
42:27
who sends an email. So
42:29
we need to be diligent about keeping security
42:31
bugs away. Performance.
42:34
Rust runs as native code
42:36
with all the associated performance
42:38
benefits and modularity and ecosystem.
42:41
They said the built-in modularity of
42:43
Rust gives us access to a
42:46
large ecosystem where there are already
42:48
a lot of people
42:50
doing things related to email, which we
42:52
could benefit from, they said. Anyway,
42:56
for what it's worth, Thunderbird
42:59
is a strong client from
43:01
Mozilla. Is it
43:03
multi-platform, Leo? Do you know? Is
43:07
Thunderbird Windows only or
43:09
Mac and Linux? I
43:12
don't know. Either way.
43:15
China. The Chinese
43:19
government has ordered Apple
43:22
to remove four Western
43:25
apps from the Chinese version
43:27
of the Apple App Store.
43:31
Those are Meta's new social
43:33
network Threads, which is
43:35
now gone, Signal,
43:37
Telegram, and WhatsApp,
43:40
all removed from the Chinese
43:42
App Store. The
43:45
Chinese government has stated that they have national
43:47
security concerns about those four. And
43:49
as we've seen, and as I fear we'll
43:51
be seeing shortly within the EU, what
43:54
countries request, countries
43:56
receive. Technology is
43:59
ultimately unable to stand up to
44:01
legislation. And
44:03
this is going to cause a lot of trouble,
44:05
as I mentioned in the EU. We'll be talking
44:07
about that here at the end of the podcast.
44:09
Yeah, and I think the Chinese government's removing it
44:11
for the same reason the EU wants to remove
44:13
it. They don't like end-to-end encryption. Yes, exactly. I
44:15
don't know. I mean, threads is something else. But
44:17
Signal and Telegram and WhatsApp, that's all E2E encryption.
44:19
Yep. By
44:22
the way, to answer your question, I
44:24
was down the hall. Thunderbird is Mac,
44:26
Windows, Linux. Okay. It's completely
44:28
open source and everywhere. Yeah. Very
44:31
cool. Nice program. In that
44:33
case, it will have access to Exchange Server,
44:35
which may allow it to move into a
44:37
corporate environment, which is probably what they're seeking.
44:39
Which would be great. Yes. Yeah.
44:42
That would be great. Yeah. Well,
44:44
we'll see if Microsoft does it. Oh, no, they're going to do
44:46
it. Oh, cool. Oh,
44:49
yeah. I mean, Mozilla. I wish they'd
44:51
just kill. I just wish they'd kill Exchange Server, but
44:53
okay. And
44:56
I mean, just get out of the
44:58
Exchange Server. I wish Microsoft would kill
45:00
Exchange. That's been a problem. Yeah. Since
45:02
forever. Since it was
45:04
created, exactly. And
45:07
you think that China's move is like
45:09
in response to TikTok and what's happening
45:12
here in the US with that? Well,
45:14
we had a discussion on that break
45:16
about that. The Times says
45:18
it's because nasty
45:20
things were said on those platforms about
45:22
Xi Jinping, which
45:26
is possible. There's no corroboration of
45:28
that. Apple says no. I
45:31
think it's just that there were threads. Maybe
45:33
that would be because threads doesn't have any, you
45:36
know, it's just a social network. But
45:38
for sure, they don't
45:40
want, I think threads is being killed probably
45:43
because of TikTok. Andy,
45:45
I think pointed out that it happened immediately after
45:48
the TikTok ban was approved in the house. So
45:51
it's likely by the way that that this time
45:53
will be approved in the Senate because it's part
45:55
of an aid package. Right.
45:58
So get ready to. Say
46:00
goodbye to TikTok. Wow,
46:03
Leo. That'll be an event, won't it? I think
46:05
the courts will block it. I'm hoping they will.
46:09
It's a very weird thing. They have a
46:11
year and a half to do it. Well,
46:13
and I mean, here, we were talking about a
46:15
Cold War, and there's this,
46:18
you know, China- Is it an
46:20
economic Cold War? Absolutely, yeah. Right,
46:22
and China, understandably,
46:24
is uncomfortable about
46:27
Western-based apps using
46:30
encryption that they're unable to
46:32
compromise. Right. So, I
46:35
mean, I get it, you know?
46:37
And so, sort of like we
46:39
lived through this brief period where,
46:42
like, you know, there was global
46:45
encryption and privacy,
46:47
and everybody had apps that everybody
46:49
could use. And then barriers began
46:51
getting erected, right? I mean, so,
46:54
sorry, you know, if you're Chinese,
46:56
you got to use, you know,
46:58
China Chat. And
47:00
the numbers for these particular apps in
47:02
China are pretty low. We're talking hundreds
47:04
of thousands of users, not millions or
47:07
billions. Okay, so
47:09
not a huge actual impact. I think it's an
47:11
easy thing for them to do, yeah. Okay,
47:14
so this was interesting. I'll
47:16
just jump right in by sharing the posting
47:18
to the Gentoo mailing list. This was
47:20
posted by a long-standing, since 2010, so
47:24
14 years of involvement, and
47:26
very active Gentoo developer and
47:28
contributor. He wrote, given
47:31
the recent spread of the
47:33
AI bubble, and he has AI in
47:36
quotes, like, everywhere. So, he's
47:38
obviously immediately exposed himself as
47:40
not being a fan. Gentoo,
48:42
you should understand, is
47:44
the ultimate gray beard Linux.
47:46
That's all you need to
47:48
know. That totally explains it,
47:50
yes. Given
47:53
the recent spread of the AI
47:55
bubble, I think we
47:57
really need to look into formally
47:59
addressing the related concerns.
48:02
Oh, he says in my opinion
48:05
at this point the only reasonable course
48:07
of action would be to safely ban
48:10
AI-backed contribution
48:13
entirely. In other words
48:15
explicitly forbid people from
48:18
using ChatGPT, Bard,
48:20
GitHub Copilot, and so on to
48:23
create ebuilds, code, documentation, messages,
48:26
bug reports and so on
48:28
for use in Gentoo.
48:32
Just to be clear I'm talking
48:34
about our original content. We can't
48:36
do much about upstream projects using
48:39
it. Then he
48:41
says here's the rationale. One, copyright
48:44
concerns. At this
48:46
point the copyright situation
48:48
around generated content is
48:50
still unclear. What's pretty
48:52
clear is that pretty much all LLMs,
48:55
you know large language models, are
48:58
trained on huge corpora
49:00
of copyrighted material and
49:02
the fancy AI companies don't care
49:05
about copyright violations. What this means
49:07
is that there's a good risk that
49:09
these tools would yield
49:12
stuff we cannot legally
49:14
use. Number two, quality
49:16
concerns. LLMs are really
49:19
great at generating plausible
49:21
looking BS and he
49:23
didn't actually say BS but
49:25
I changed it for the podcast. I
49:29
suppose they can provide
49:31
good assistance if
49:44
you are careful enough but we
49:46
can't really rely on all
49:48
our contributors being aware of the
49:50
risks. Then there's ethical
49:52
concerns. Number three, as pointed
49:55
out above the AI
49:57
in quotes corporations care
50:00
about neither copyright nor
50:02
people. The AI
50:04
bubble is causing huge energy
50:07
waste. It's
50:12
giving a great excuse for
50:15
layoffs and increasing exploitation of
50:17
IT workers. It
50:19
is driving the further, and here I
50:21
felt I had to use the word because it has
50:24
now become a common word, I think I've heard it
50:26
on the TWiT network, the
50:29
enshittification of the
50:32
Internet. It is empowering all
50:34
kinds of spam and scam.
50:37
And that is the case. Gentoo
50:39
has always stood, he concludes,
50:43
as something different, something that
50:45
worked for people for whom
50:47
mainstream distros were lacking. I
50:50
think adding made-by-real
50:52
people to the list
50:55
of our advantages would be a good thing.
50:57
But we need to have policies
51:00
in place to make sure that
51:02
AI-generated crap, and again, not the
51:04
word he chose, doesn't
51:07
flow in. I like this guy. He's
51:12
right. I think that's
51:14
fair. Did you see the
51:17
study from the University of Illinois at
51:19
Urbana-Champaign? They
51:21
used ChatGPT-4, the latest version
51:24
of OpenAI's model. They
51:26
gave it the CVE
51:28
database. That's it. Nothing more than
51:31
the description in the CVE. And
51:33
it was able to successfully attack
51:36
87% of those vulnerabilities. It
51:41
was able to craft an attack
51:43
based merely on the CVE description,
51:46
an effective attack. Wow.
51:49
Wow. Yeah, I mean,
51:51
I think he's probably right. But
51:54
I don't know how you enforce this because... No,
51:56
that's exactly the problem. In
52:00
his posting, he had a
52:02
link, he referred to
52:04
some, we'll call it a
52:07
crap storm, over on GitHub,
52:11
where, and I
52:13
went, I followed the link because I was curious, there
52:17
is a problem underway where
52:20
what is clearly, you
52:23
know, AI-generated
52:25
content, which,
52:27
you know, looks really reasonable, but
52:30
doesn't actually manage to get around
52:32
to saying anything, is like becoming
52:35
a problem over on GitHub. So,
52:38
anyway, in order to share that,
52:40
as we saw, I had to clean up the
52:42
language, you know, in his posting, you
52:46
know, since he clearly
52:48
doesn't think much of AI-generated code. And
52:51
as I said, there have been some signs
52:53
over on GitHub, which he referred to, of
52:56
descriptions appearing to be purely AI-generated,
52:58
you know, they're not high quality.
53:02
And I suppose
53:04
we should not be surprised,
53:06
Leo, that there are people,
53:08
maybe we'll call them script
53:10
kitties, who are probably incapable
53:13
of coding from scratch for
53:15
themselves. So, why wouldn't
53:17
they jump onto
53:19
large language model systems,
53:22
which would allow them to feel as
53:24
though they're contributing? But
53:27
are they really contributing? Now, look, let's
53:29
face it, humans are just as capable
53:31
of introducing bugs into code as anybody
53:34
else, and more often maliciously than AIs.
53:36
I mean, AIs aren't natively malicious. The
53:38
other thing I would say is there's
53:40
a lot of AI-generated prose
53:43
on GitHub, because English
53:45
is often not the first language of
53:47
the people doing the coding. A lot
53:49
of the posts on GitHub are by
53:51
non-English speakers. And I think that that's
53:54
more likely the reason you'll see kind
53:56
of AI-like prose on there, because
53:58
they don't speak English that well, or they don't
54:00
know it at all, and so they're using
54:03
chat GPT for instance to generate the
54:06
text. I
54:09
don't think, I mean honestly I've used Copilot.
54:12
I have my own custom GPT
54:15
for Lisp. The code
54:17
it generates is indistinguishable from human code
54:19
probably because it is at some
54:22
point from human code. I
54:24
don't know how you're going to stop it. It
54:28
doesn't have a big red flag that says an
54:30
AI generated this. Right. As
54:32
we've noted the genie is out of the bottle
54:34
already. So yeah,
54:36
we're definitely in for some
54:39
interesting times. Okay,
54:42
we've got a bunch of feedback that I
54:44
found interesting that I thought our listeners would
54:46
too. Let's
54:48
take another break and then we will get
54:51
into what was this one. Oh,
54:55
oh, we have a
54:57
listener whose auto was
54:59
spying on them and he's absolutely
55:01
sure it never had permission and
55:04
we have a picture of the
55:06
report that it generated. We
55:09
here at Nissan see that you've been using
55:11
your vehicle for love making and we want
55:13
you to knock it off. Well,
55:15
we'll find out more about that in just a second. But
55:18
first, apparently you're doing it
55:20
wrong. You're doing it wrong. We
55:23
have some tips we'd like to share. First
55:27
a word from Kolide. We love Kolide. You
55:29
probably heard us talk about Kolide
55:31
many times on the show. I
55:34
personally think Kolide's model of enlisting
55:36
your users as part
55:38
of your security team is the
55:41
only way to travel. Maybe
55:44
you just heard the news that Kolide was
55:46
acquired by 1Password. That
55:48
is good news. I really think so. Both
55:51
companies are leading the way in
55:54
creating security solutions with
55:56
a user first focus for
55:58
over a year, Kolide Device Trust
56:01
has helped companies that use Okta
56:04
to ensure that only known and
56:06
secure devices can access their data.
56:08
You know, Okta verifies the human,
56:10
authenticates the human, but what's authenticating
56:12
the hardware they're bringing into
56:14
your network with them? Well, Kolide does.
56:16
It works hand in hand with Okta. And
56:18
they're still doing that. They're just doing it now with
56:21
the help and resources of 1Password. I think
56:23
it's a match made in heaven. If you've
56:25
got Okta and you've been meaning to check
56:27
out Collide, this is the time. Oh,
56:30
and by the way, Kolide's easy to get
56:32
started with because they come with a library
56:34
of pre-built device posture checks. So
56:36
you can get started right away, but it's
56:38
also easy to write your own custom checks
56:40
and for just about anything you can think
56:42
of. So you start with a high level
56:44
of security for a broad variety of software
56:47
and hardware, and then you can add some
56:49
custom stuff that's specific to your environment, for
56:51
instance. Plus, and I love this,
56:53
you can use Kolide on devices
56:55
without MDM. That means your whole Linux
56:57
fleet. That means contractor devices.
57:00
And yes, every BYOD phone and laptop
57:02
in your company. So now
57:04
that Kolide is part of 1Password, it's just going to
57:06
get better. I want you to check
57:08
it out. Go to kolide.com/securitynow. You
57:12
can watch. There's a great demo. You
57:14
can learn more. Watch the demo today. That's
57:17
K-O-L-I-D-E, kolide.com/security
57:21
now. Congratulations, Kolide. That's a great
57:23
partnership, and I think you're going
57:25
to be continuing to do great things. We're so glad
57:27
you're part of our security now family.
57:30
Steve, let's close the loop. So
57:33
we have a note from a
57:35
guy who is no slouch. He's
57:38
a self-described user of Shields
57:40
Up, SpinRite, and an avid
57:42
listener of SecurityNow. He's
57:45
also an information security practitioner
57:47
and I think he
57:49
said a computer geek. Oh yeah, he does. So he said,
57:51
hi, Steve. I apologize for sending
57:53
to this email. It probably came through
57:55
Sue or Greg. He says, I
57:57
couldn't find a different email for contact information.
58:00
And yes, that's my
58:02
design, but okay. He
58:04
said, anyway, long-time
58:06
follower of Shields Up and
58:08
SpinRite and an avid listener
58:10
of Security Now, my full-time
58:12
gig is as an info security
58:14
practitioner and computer geek. We
58:18
have a couple of Hyundai's in the
58:20
family, and I purchased one last fall.
58:23
I used the Hyundai Blue Link
58:26
app on my phone as
58:28
I can make sure I locked my doors
58:30
and get maintenance reminders. I
58:33
made a point to not opt
58:35
in for the quote, he has
58:38
in quotes, driver discount, and
58:40
as a privacy cautious
58:42
person, I declined sharing data
58:45
wherever possible. But
58:47
after the story in the New York Times
58:49
regarding car makers sharing data, I
58:52
contacted Verisk and
58:54
LexisNexis to see what they
58:56
had on me. LexisNexis
58:58
had nothing other than the vehicles
59:00
I have owned in the past,
59:02
but Verisk had a lot. I
59:05
have attached a report of the,
59:07
a page of the report. It
59:10
includes driving dates, minutes,
59:13
day and night, acceleration events,
59:16
and braking events. The
59:19
only thing missing is the actual
59:21
speeds I was going or if
59:23
I was ever speeding. What
59:25
bothers me most about this is that
59:27
I have no way to challenge the
59:30
accuracy. For events that
59:32
are not illegal, I can
59:34
still be penalized. Braking
59:37
hard and accelerating fast should
59:39
not be safety concerns without
59:41
context, and today's smarter
59:44
cars are still imperfect. My
59:47
adaptive cruise control, he has
59:49
in parens, radar, will still brake
59:51
hard at times it shouldn't,
59:53
and I will get penalized by that data.
59:57
My car is also a turbo, and
59:59
if I accelerate for fun or
1:00:01
safety, that too can be a
1:00:03
penalty. And if I happened
1:00:05
to drive in Texas, where there are highways with
1:00:07
an 85-mile-an-hour speed limit,
1:00:11
I would be downrated for that legal
1:00:13
behavior. My family
1:00:16
tried these safe-driving BT dongles
1:00:18
from another insurer years ago,
1:00:20
but the app has too
1:00:22
many false positives for driving
1:00:24
over speed, he says
1:00:26
posted speed limit doesn't agree with the app,
1:00:29
and hard-braking and accelerating, that we
1:00:32
decided it wasn't worth our time
1:00:34
or the privacy concerns. My
1:00:36
wife and I are close to Leo's
1:00:38
age, and she drives like
1:00:40
a grandmother, but her scores
1:00:42
were no better than mine. I
1:00:45
have attached a picture of the document
1:00:47
I got from Verisk, he says name
1:00:49
and VIN number removed to
1:00:51
give you an idea of
1:00:53
what is reported without my
1:00:56
consent from my car. I've
1:00:58
contacted Hyundai and told them I
1:01:00
do not and did not consent
1:01:03
to them sharing my data with
1:01:05
Verisk. After a few back
1:01:07
and forths, I got this
1:01:09
reply on April 12th, quote, thank
1:01:12
you for contacting Hyundai customer
1:01:14
care about your concerns.
1:01:17
As a confirmation, we've
1:01:19
been notified today that
1:01:21
the driver's score feature
1:01:23
and all data collecting
1:01:25
software has permanently disabled.
1:01:28
We do care. As
1:01:30
always, if you ever need additional
1:01:32
assistance, you can do so either
1:01:34
by email or phone, case number
1:01:36
dot dot dot. So
1:01:38
he said, I will request another
1:01:41
report from Verisk in the future
1:01:43
to validate this report
1:01:45
from Hyundai. Keep up the
1:01:47
good work. I thought you would like
1:01:49
to see the data and hear from someone
1:01:51
who is 100 percent certain they
1:01:55
never opted in. All
1:01:57
the best, Andrew. And
1:02:00
in the show notes, we sure enough, we've
1:02:02
got the page with
1:02:05
a report from the period September 26th of last
1:02:07
year through March 25th of this year. So
1:02:12
just last, toward the end of last month. Showing
1:02:16
things like the number of
1:02:18
trips, vehicle ignition on
1:02:21
to ignition off was
1:02:23
242 instances. Speeding
1:02:29
events, where the vehicle speed is greater than
1:02:31
80 miles per hour has
1:02:34
an N.A. Hard
1:02:36
braking events, where they say
1:02:39
change in speed is
1:02:42
less than, because it's
1:02:44
braking, negative 9.5
1:02:46
kph per second is 24. So
1:02:56
during that period of time, what the
1:02:58
car regarded as a hard
1:03:00
braking event occurred 24 times. Rapid
1:03:04
acceleration events, change in speed
1:03:06
is greater than 9.5 kph per second
1:03:08
is 26. Daytime
1:03:14
driving minutes between the hours of 5 a.m. and 11 p.m.,
1:03:16
6,223. Nighttime
1:03:22
minutes, actually very few, between 11 p.m. and 5
1:03:24
a.m., just 25 minutes. Miles
1:03:28
driven, 5,167.6 miles during this period. And
1:03:34
then an itemized daily
1:03:36
driving log showing
1:03:38
the date, the number
1:03:40
of trips taken that day, the
1:03:42
number of speeding events, the
1:03:44
number of hard braking
1:03:46
events, rapid acceleration events,
1:03:49
driving minutes both
1:03:52
daytime and nighttime. So yes, just
1:03:54
to close the loop on this,
1:03:56
as we first talked about from
1:03:59
the New York Times reporting
1:04:01
which informed
1:04:04
us that both this
1:04:07
Verisk and LexisNexis were
1:04:09
selling data to insurers
1:04:12
and as a consequence those insurers
1:04:15
were relying on that data to
1:04:17
set insurance premium
1:04:20
rates. And look what
1:04:22
it says. That's all happening. This report
1:04:24
may display driving data associated with other
1:04:26
individuals that operated the insured's vehicle. So
1:04:29
my guess is
1:04:32
this is a report for an insurance company, right?
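For what it's worth, the event definitions in the report are just threshold tests over sampled speed: hard braking is a change of less than negative 9.5 kph in one second, rapid acceleration more than positive 9.5. A rough sketch of that counting, assuming one speed sample per second; this is not Verisk's or Hyundai's actual method, and the trip data is invented:

```python
# Count hard-braking and rapid-acceleration events from per-second speed
# samples, using the report's +/-9.5 kph-per-second thresholds.
# Illustrative only -- not the real telematics pipeline.

HARD_BRAKE = -9.5    # kph change per second
RAPID_ACCEL = 9.5

def count_events(speeds_kph):
    """speeds_kph: one speed sample per second. Returns (braking, accel)."""
    braking = accel = 0
    for prev, curr in zip(speeds_kph, speeds_kph[1:]):
        delta = curr - prev                # kph change over one second
        if delta < HARD_BRAKE:
            braking += 1
        elif delta > RAPID_ACCEL:
            accel += 1
    return braking, accel

# One simulated trip: cruising, a hard stop, then a brisk pull-away.
trip = [60, 60, 58, 45, 30, 28, 28, 40, 52, 60]
```

Which also illustrates Andrew's complaint: the counts carry no context at all about why the speed changed.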
1:04:38
Whether he agreed to it or not, it may be that
1:04:40
he could turn off some things like, I noticed
1:04:42
the speeding events is NA all the way through.
1:04:45
Either he's a really careful driver or they're
1:04:48
not recording that which may well be
1:04:50
something he didn't agree to. So
1:04:54
anyway, I know my BMW records that because I
1:04:56
have it on my app. And
1:05:00
my Mustang used to give me a report card after every
1:05:02
trip. Right. And I
1:05:04
mean compared to the way I used
1:05:06
to drive when I was in my
1:05:08
younger years, I would be happy to
1:05:10
have my insurance company privy
1:05:12
to the fact that I drive about three
1:05:15
miles a day at 60 miles an
1:05:17
hour surrounded by
1:05:19
other traffic. And it's just, you know.
1:05:22
Here's my driving performance for the month
1:05:24
of March. And
1:05:26
in fact, Lori added me to her
1:05:28
car insurance and her rate went down.
1:05:31
Yeah, exactly. Because you're safe.
1:05:34
Right? Yeah. This is more because it's
1:05:36
an EV. You want to know a little bit about,
1:05:38
a reason you want to
1:05:40
know about hard braking and hard acceleration and
1:05:42
stuff. Because you've got a pricey battery
1:05:44
there. Right. Yeah. Right.
1:05:47
So I think that's great. You
1:05:50
know. But yeah, I understand why he doesn't want
1:05:52
Hyundai to record it. Well, and I would
1:05:54
argue that a consumer who says, no, I
1:05:56
don't want to be watched and spied on
1:05:59
and reported on, that ought
1:06:01
to be a privacy right that's available. Yeah, I'd like
1:06:03
to see the fine print in the rest of
1:06:05
the contract. Those are long those contracts. Go on
1:06:08
and on. Yeah,
1:06:12
so Lon Seidman said, I'm listening to
1:06:14
the latest Security Now episode, definitely agree
1:06:16
that freezing one's credit needs to be
1:06:18
the default position these days. One
1:06:21
question though, most of these
1:06:23
credit agencies rely on the types
1:06:25
of personal information that typically gets
1:06:27
stolen in a data breach
1:06:29
for authentication. Certainly a bad
1:06:32
actor will go for the lowest hanging
1:06:34
fruit and perhaps move on from a
1:06:36
frozen account, but if there's a
1:06:38
big whale out there, they may
1:06:40
go through the process of unlocking
1:06:42
that person's credit, then
1:06:45
stealing their money. What kind
1:06:47
of authentication changes do you
1:06:49
think are needed? Okay, well that's an
1:06:52
interesting question. Since I froze my credit
1:06:54
reporting, I've only had one occasion to
1:06:56
temporarily unfreeze it, which is when I
1:06:59
decided to switch to using an Amazon
1:07:01
credit card for the additional purchase benefits
1:07:03
that it brought since I'm a heavy
1:07:05
Amazon user. That's when
1:07:08
I discovered to my delight that
1:07:10
it was also possible to now
1:07:12
specify an automatic refreeze on a
1:07:15
timer to prevent the
1:07:17
thaw from being inadvertently
1:07:21
permanent. Since I
1:07:23
had very carefully recorded and stored
1:07:25
my previous freeze authentication, I
1:07:28
didn't need to take any account
1:07:30
recovery measures, so I can't speak
1:07:32
from experience. But one thing does
1:07:34
occur to me, which is that
1:07:37
strong measures are available. The
1:07:40
reporting agencies, for example, will have
1:07:42
our current home address, so
1:07:45
they could use the postal system to
1:07:48
send an authentication code
1:07:50
via old-school paper mail
1:07:53
that would be quite difficult if not
1:07:55
effectively impossible for
1:07:58
a criminal located in some hostile
1:08:00
foreign country to obtain. So
1:08:03
there certainly are strong authentication
1:08:05
measures that could be employed
1:08:08
if needed. Again
1:08:10
I don't have any experience
1:08:12
with saying, whoops, I forgot
1:08:15
what you told me not to forget when
1:08:17
I froze my credit. So
1:08:19
but it's me, hi, it's me
1:08:21
really, unfreeze me please. And
1:08:24
Lon's right that so much
1:08:26
information is in the report that
1:08:28
if or in the data which
1:08:30
is being leaked these days, for
1:08:33
example in that massive AT&T
1:08:35
leakage that something over
1:08:39
and above that needs to be used. They gave
1:08:42
me a long pin, I mean like a really
1:08:44
long pin. Yeah, I had that too. I
1:08:48
wrote it down and saved it. But
1:08:50
then what happens if you say, oh, it's me. I
1:08:53
forgot. They say if you
1:08:55
don't, you can't log in, you forget it, call
1:08:57
us, which means you
1:08:59
could easily socially engineer a customer service
1:09:01
rep. Because let's face it, the credit
1:09:03
reporting agencies don't want you to have
1:09:06
a credit freeze. Correct.
1:09:08
That's how they make their money is selling your
1:09:10
information. So I suspect it's
1:09:12
pretty easy to get turned off, I
1:09:16
would guess, by a third party. Eric
1:09:19
Berry, he tweeted, what was that credit
1:09:22
link from the podcast? I tried the
1:09:24
address you gave out and got PageNotFound.
1:09:27
I don't know why, but
1:09:29
it is grc.sc/credit. And
1:09:34
I just tried the link
1:09:36
and it works. So grc.sc/credit,
1:09:39
that bounces you over to
1:09:41
the Investopedia site. And
1:09:44
I've just verified, as I said, that it
1:09:46
is still working. And for what it's worth,
1:09:48
in what, page nine of the show notes,
1:09:51
is the Investopedia link all the
1:09:53
way spelled out. So if
1:09:56
something about your computer doesn't follow
1:09:58
HTTP 301
1:10:00
redirects, then the link is there.
1:10:03
At Investopedia, how to freeze and unfreeze your
1:10:06
credit. So you could probably also just Google
1:10:08
that. You should also point out the FTC
1:10:10
has a really good page about
1:10:13
credit freezes, fraud alerts, what they are, how
1:10:15
they work and so forth. So you
1:10:18
could also just Google FTC and credit
1:10:21
freeze and they have a lot of
1:10:23
information on there. Does that provide links
1:10:25
to the actual freezing pages at
1:10:27
the bureaus? That's why I chose it.
1:10:29
All good. Absolutely.
1:10:32
Good, good, good. Okay. They point
1:10:34
you to a website run
1:10:36
by the FTC called IdentityTheft.gov
1:10:38
and they're gonna give you
1:10:40
those three. Now I should point out there's more
1:10:43
than three. These are the three big ones but
1:10:45
when I do the credit freeze there I think I did
1:10:48
five or six of them. There's others. It's
1:10:50
probably not a bad idea to seek them
1:10:52
all out but obviously these are the three the
1:10:54
FTC mentions as well as Investopedia. So
1:10:59
someone who tweets from the handle
1:11:01
or the moniker the monster he
1:11:03
said, @SGgrc,
1:11:05
the race condition isn't
1:11:08
solved solely with the
1:11:11
exchange counter ownership protocol
1:11:14
unless the owner immediately rereads
1:11:16
the owned memory region to
1:11:18
be sure it wasn't altered
1:11:20
before it got ownership. Okay
1:11:24
now I
1:11:26
don't think that's correct. There are
1:11:28
aspects of computer science
1:11:31
that are absolutely abstract
1:11:33
and purely conceptual and
1:11:36
I suppose that's one of the reasons I'm so drawn
1:11:38
to it. Yeah, me too. I think you are too.
1:11:40
Yes, exactly. One of
1:11:42
the time-honored masters of
1:11:44
this craft is
1:11:47
Donald Knuth and
1:11:49
the title of his masterwork is
1:11:51
a multi-volume. I have
1:11:53
all three although
1:11:56
there supposedly are five. Yes,
1:11:59
he's working on them. He
1:12:01
calls the other ones fascicles and
1:12:04
I have those as well. Oh yeah,
1:12:06
how do you get those? Oh
1:12:09
yeah, they're available. I wanted the
1:12:11
full bookshelf but I only could
1:12:13
find three. Yeah, three are in
1:12:16
that original nice classic binding and
1:12:18
then he has a set of
1:12:20
what he calls fascicles which
1:12:22
are the other two. Anyway, the masterwork
1:12:24
is titled The Art of
1:12:28
Computer Programming,
1:12:30
and saying
1:12:32
that is not hyperbole.
1:12:36
There are aspects of computer programming
1:12:38
that can be true art and
1:12:41
his work is full
1:12:43
of lovely constructions similar
1:12:46
to the use of a
1:12:48
single exchange instruction being used
1:12:51
to manage interthread
1:12:53
synchronization. In
1:12:55
this case, as I tried to carefully
1:12:57
explain last week, the whole point of
1:12:59
using a single exchange
1:13:01
instruction is that it
1:13:04
is not necessary to
1:13:06
reread anything because
1:13:08
the act of attempting to
1:13:11
acquire the ownership variable acquires
1:13:14
it only if it
1:13:16
wasn't previously owned by
1:13:18
someone else while simultaneously
1:13:21
and here
1:13:24
simultaneity is the point
1:13:27
and the requirement also
1:13:30
returning information about whether
1:13:32
the variable was or was
1:13:34
not previously owned by
1:13:36
any other thread. So if
1:13:39
anyone wishes to give their brain a bit
1:13:41
more exercise, think about
1:13:43
the fact that in an
1:13:46
environment where individual threads of
1:13:48
execution may be preempted at
1:13:51
any instant, nothing
1:13:53
conclusive can
1:13:56
ever be determined by
1:13:58
reading the present state of
1:14:01
the object ownership
1:14:03
variable. Since
1:14:06
the reading thread might
1:14:08
be preempted immediately
1:14:11
following that reading, and
1:14:14
during its preemption the owner
1:14:16
variable might change, the
1:14:19
only thing that anyone reading that
1:14:21
variable might learn, that is just
1:14:24
simply reading it, is
1:14:26
that the object being managed was
1:14:28
or was not owned at the
1:14:31
instant in time of
1:14:34
that reading. While that
1:14:36
might be of some interest, it is
1:14:38
not interesting to anyone who wishes to
1:14:40
obtain ownership since that
1:14:43
information was already obsolete
1:14:45
the instant it was obtained. So
1:14:49
that is what is so uniquely
1:14:51
cool about this
1:14:53
use of an exchange instruction
1:14:55
which both acquires
1:14:58
ownership only if
1:15:00
it isn't owned and returns
1:15:03
the previous state, meaning
1:15:05
if it wasn't previously owned, now
1:15:07
the thread that asked owns
1:15:10
it. And it is as simple
1:15:12
as a single instruction which is just conceptually
1:15:15
so cool. Java
1:15:19
Mountess said regarding episodes 970
1:15:21
and 969 with push button hardware config
1:15:23
options, my
1:15:29
first thought is of the 2017 Saudi
1:15:33
chemical plant attacked with the
1:15:35
Triton malware. The
1:15:37
admins working on the ICS
1:15:39
controllers deliberately left
1:15:42
an admin permission
1:15:44
key in the controllers
1:15:47
instead of walking the 10 minutes
1:15:50
required to insert the key
1:15:52
every time a
1:15:55
configuration needed changing. I
1:15:57
don't blame them. As
1:16:02
a result, the attackers were able
1:16:04
to access the IT systems and
1:16:06
then the OT systems because
1:16:08
the key was always left in and
1:16:11
in admin mode. He
1:16:14
says, lazy people will
1:16:16
always work around inconvenient,
1:16:19
very secure systems. Like me.
1:16:21
And he finishes with to 999
1:16:24
and beyond like Voyager. Yes,
1:16:29
this podcast is going into
1:16:31
interstellar space. 999
1:16:33
and beyond. To boldly go where
1:16:36
no podcast has gone before. That's
1:16:39
not true though because I keep
1:16:41
hearing Quint talking about, oh we'd...
1:16:43
Yeah. Anyway, I thought he made a
1:16:46
good point. For example, the
1:16:48
push button, dangerous
1:16:51
config change enabler, should
1:16:55
work on a change
1:16:57
from not pushed to pushed
1:17:00
rather than on whether the button is
1:17:03
depressed. The electrical engineers
1:17:05
among us will be familiar with the
1:17:07
concept of edge triggered versus
1:17:10
level triggered. If
1:17:12
it's not done that way, people will
1:17:14
simply depress the button once, then
1:17:16
do something like wedge a toothpick
1:17:19
into the button in
1:17:21
order to keep it depressed. My
1:17:23
feeling is, the ability
1:17:25
to bypass well
1:17:28
designed and well
1:17:30
intentioned security does
1:17:32
not matter at all. There's
1:17:36
a huge gulf separating secure
1:17:38
by design and insecure by
1:17:41
design. And it's
1:17:43
absolutely worth making things secure
1:17:45
by design even
1:17:48
if those features can
1:17:50
be bypassed. The
1:17:52
issue is not whether they
1:17:54
can be bypassed, but whether they
1:17:56
are there in the first place
1:17:59
to perhaps be bypassed.
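The edge-versus-level idea from a few moments earlier is easy to sketch in software. This is an illustrative Python fragment, not from any real firmware; the sampled button values and function name are invented. It fires only on the released-to-pressed transition, so a button held down by a toothpick produces exactly one enable event:

```python
def edge_events(samples):
    """Return the indices where a button goes from released (0) to
    pressed (1) -- a rising-edge trigger. A level-triggered design
    would instead fire on every sample where the value is 1."""
    events = []
    previous = 0  # assume the button starts released
    for i, level in enumerate(samples):
        if level == 1 and previous == 0:  # rising edge only
            events.append(i)
        previous = level
    return events

# A toothpick wedged into the button holds the level at 1 ...
samples = [0, 0, 1, 1, 1, 1, 1, 1]
# ... but an edge-triggered design still sees only ONE event.
print(edge_events(samples))  # [2]
```

Holding the button forever never generates a second event; the operator has to release and press again, which is exactly the behavior being argued for.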
1:18:03
If someone goes to the effort to
1:18:05
bypass a deliberately designed
1:18:08
security measure then the
1:18:10
consequences of doing that are 100% on
1:18:12
them. It's a matter of transferring responsibility.
1:18:20
If something is insecure by
1:18:22
design then it's the designers
1:18:24
who are at fault for
1:18:27
making the system insecure. Designing
1:18:30
the system insecurely. They
1:18:32
may have assumed that someone would
1:18:35
come along and make their insecure
1:18:37
system secure but we've
1:18:39
witnessed far too many instances where
1:18:42
that never happened. So
1:18:44
the entire world's overall
1:18:47
net security will be
1:18:49
increased if systems
1:18:51
just start out being secure
1:18:54
and are then later in
1:18:56
some instances forced against
1:19:00
their will to operate insecurely.
1:19:03
And if someone's manager learns
1:19:07
that the reason the enterprise's entire
1:19:09
network was taken over all
1:19:11
their crown jewels stolen and sent to
1:19:13
a hostile foreign power and then all
1:19:16
their servers encrypted is because
1:19:18
someone in IT wedged
1:19:21
a toothpick into a button to
1:19:23
keep it held down for
1:19:26
their own personal convenience. Of
1:19:29
course they did. Well you
1:19:32
won't be asking that manager for
1:19:34
a recommendation on the resume that
1:19:36
will soon need updating. It will
1:19:39
be your fault and no one
1:19:42
else's. David
1:19:45
Sostjen tweeted, Hi Mr. Gibson, long
1:19:47
time listener (very formal) and SpinRite
1:19:47
owner. Actually Ant used to
1:19:51
call me Mr. Gibson but nobody
1:19:53
else does. He said I was
1:19:55
listening to podcast 955 and I
1:19:57
meant to message you
1:20:00
about the Italian company Actalis,
1:20:05
A-C-T-A-L-I-S.
1:20:08
But life has a tendency to get in the
1:20:10
way. They happen to be
1:20:12
one of the few remaining companies that
1:20:14
issue free S/MIME
1:20:17
certificates. I've
1:20:20
been using them for years to
1:20:22
secure all my email, all the
1:20:24
best, David. So I just wanted
1:20:27
to pass that on, David. Thank
1:20:29
you. So an Italian company, Actalis,
1:20:31
A-C-T-A-L-I-S, are
1:20:34
issuing free S/MIME certificates. I
1:20:38
mean I use PGP most of the time, which is
1:20:40
free. But that's cool. S/MIME is a lot
1:20:43
easier for some people. So that's
1:20:45
cool. Meanwhile, the felonious
1:20:48
waffle has tweeted, Hi
1:20:51
Steve, I created an account
1:20:53
on this platform to message you. Oh, thus
1:20:56
felonious waffle. He says, I
1:20:58
cannot wait for your email to be up and
1:21:00
running. Neither can I. I
1:21:02
was just listening to episode 968 on my
1:21:05
commute and believed the outrage of
1:21:08
AT&T's encryption practices to be
1:21:10
undersold. Oh, he
1:21:13
says, you mentioned that if someone is
1:21:15
able to decrypt one string to get
1:21:18
the four-digit code, then they
1:21:20
have everyone's code who shares the same string.
1:21:22
I believe it to be far worse
1:21:24
than that. Am I wrong in
1:21:26
thinking that if they crack one, then
1:21:29
they have all 10,000? I'm
1:21:32
making some assumptions that there are only
1:21:34
two ways that 10,000 unique codes produce
1:21:36
exactly 10,000 unique encrypted
1:21:38
strings. The
1:21:43
first, and this is what
1:21:45
I'm assuming, AT&T used the same
1:21:47
key to encrypt every single code.
1:21:50
That's right. The second would
1:21:52
be to have a unique key for each code.
1:21:54
So code 1234 would have to be a different
1:21:57
key than 5678. That seems
1:22:00
far-fetched to me. Is there an error to
1:22:02
my thinking? Thanks for the podcast
1:22:04
and everything you do. Glad you're sticking around beyond 999.
1:22:07
Darrell. Okay, so I
1:22:10
see what Darrell is thinking. He's
1:22:12
assuming that what was done was
1:22:15
that if the encrypted string
1:22:18
was decrypted to obtain
1:22:21
the user's four-digit passcode, then
1:22:24
the other 9,999 strings could similarly
1:22:26
be decrypted to
1:22:33
obtain the other four-digit
1:22:35
passcodes. And he's
1:22:37
probably correct in assuming that
1:22:40
if one string had been decrypted,
1:22:42
then all the others could be
1:22:45
too. But
1:22:47
that isn't what happened. No
1:22:49
encrypted strings were ever
1:22:52
decrypted, and the
1:22:54
encryption key was never learned.
1:22:58
But due to the static
1:23:00
nature of the passcode's encryption,
1:23:02
that wasn't necessary. I
1:23:05
wanted to share Darrell's note because
1:23:08
it reveals an important facet of
1:23:10
cryptography, which is that
1:23:12
it's not always necessary
1:23:15
to reverse a
1:23:17
cryptographic operation, as
1:23:20
in decryption in this case,
1:23:22
but it's also true of hashing, where
1:23:25
we've talked about through the years many
1:23:27
instances where we don't need to unhash
1:23:29
something. Going
1:23:31
only in the forward
1:23:34
direction is often still useful.
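That forward-only principle is how unsalted password hashes get cracked: nobody reverses the hash, they just hash guesses and compare. A minimal sketch, with an invented wordlist; real attacks use enormous dictionaries, and real systems should use salted, deliberately slow hashes precisely to blunt this:

```python
import hashlib

def crack(target_digest, wordlist):
    """Recover a password from an UNSALTED SHA-256 hash by running the
    hash forward over guesses -- no reversal of SHA-256 ever happens."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == target_digest:
            return guess
    return None

# Pretend this digest leaked in a breach.
leaked = hashlib.sha256(b"letmein").hexdigest()
print(crack(leaked, ["123456", "password", "letmein"]))  # letmein
```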
1:23:37
If the results of going in
1:23:39
the forward direction can
1:23:42
only be reapplied to
1:23:44
other instances, then
1:23:46
a great deal can still be learned.
1:23:48
In this case, since people
1:23:50
tended to use highly
1:23:53
non-random passcodes, reusing
1:23:56
their birthday, their house's
1:23:58
street number, or the last four digits
1:24:00
of their phone number or social security
1:24:02
number, all things
1:24:05
that were also part of
1:24:07
the exfiltrated data and
1:24:09
assuming a fixed mapping
1:24:12
between their plain text passcode
1:24:15
and its encryption, meaning the
1:24:17
key never changed, examining,
1:24:20
for example, the
1:24:23
details of all the records
1:24:25
having a common encrypted passcode.
1:24:27
Imagine that from this big
1:24:29
massive database, you pull together
1:24:32
all the records with the
1:24:34
same encrypted passcode and you
1:24:36
look at them. Look,
1:24:39
just that observation would
1:24:41
very quickly reveal what
1:24:44
single passcode most
1:24:46
of those otherwise unrelated records
1:24:49
shared and thus all of
1:24:51
them used. For example, one
1:24:54
household lived
1:24:56
at 1302 Willowbrook,
1:25:00
whereas the birthday of someone
1:25:02
else was February 13
1:25:05
and someone else's phone number ended in 1302. So
1:25:09
by seeing what digits were common
1:25:11
among a large group of records,
1:25:14
all sharing only the
1:25:17
same encrypted passcode, it
1:25:19
would quickly become clear what
1:25:21
identical passcode they all chose,
1:25:24
no decryption necessary.
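A toy version of the analysis just described, with entirely invented records and field values: because one static key encrypted every passcode, equal ciphertext blobs mean equal plaintext passcodes, so we can group records by blob and intersect the four-digit strings found in each record's other exposed fields. No decryption occurs anywhere:

```python
from collections import defaultdict

# Invented records: (encrypted_passcode_blob, other_exposed_fields).
# Equal blobs imply equal plaintext passcodes -- the whole weakness.
records = [
    ("blobA", ["1302 Willowbrook", "555-867-5309"]),
    ("blobA", ["14 Elm St", "born 02/13", "555-301-1302"]),
    ("blobA", ["birthday 13 Feb", "unit 1302"]),
    ("blobB", ["999 Oak Ave", "555-123-9999"]),
]

def guess_passcodes(records):
    """For each encrypted blob, intersect the sets of 4-digit substrings
    found in each record's fields. A lone survivor is the shared
    plaintext passcode, recovered without touching the encryption."""
    groups = defaultdict(list)
    for blob, fields in records:
        text = " ".join(fields)
        candidates = {text[i:i + 4] for i in range(len(text) - 3)
                      if text[i:i + 4].isdigit()}
        groups[blob].append(candidates)
    guesses = {}
    for blob, candidate_sets in groups.items():
        common = set.intersection(*candidate_sets)
        if len(common) == 1:
            guesses[blob] = common.pop()
    return guesses

print(guess_passcodes(records))  # {'blobA': '1302', 'blobB': '9999'}
```

The more records share a blob, the faster the intersection collapses to a single guess, which is why reused, meaningful passcodes made this attack so effective.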
1:25:27
So that's one of the cool things that
1:25:29
we've seen about the nature of
1:25:31
crypto in the field is there actually
1:25:34
are some interesting ways around it when
1:25:36
you have the right data,
1:25:39
even if you don't have the keys. Skynet
1:25:43
tweeted, Hi Steve, would
1:25:45
having DRAM catch up
1:25:48
and be fast enough, eliminate
1:25:51
the GhostRace issue? And
1:25:54
I thought that was a very interesting question. We've
1:25:57
talked about how caching is there to
1:26:01
decouple slow DRAM from
1:26:03
the processor's much more
1:26:06
hungry need for data in a
1:26:08
short time. So
1:26:12
the question could be reframed
1:26:14
a bit to further
1:26:16
clarify what we're really asking.
1:26:18
So let's ask. If
1:26:21
all of the system's memory were
1:26:24
located in the processor's
1:26:26
most local instant
1:26:28
access L1 cache,
1:26:31
that is, if its L1 cache
1:28:34
were 16 GB
1:28:36
in size, so
1:26:38
that no read from or
1:28:41
write to main memory
1:26:43
took any time at all, would
1:26:46
speculative execution still
1:26:48
present problems? And
1:26:51
I believe the answer is yes. Even
1:26:54
in an environment where access
1:26:56
to memory is not an
1:26:58
overwhelming factor, the work of
1:27:00
the processor itself can
1:27:02
still be accelerated by
1:27:05
allowing it to be more clever about
1:27:07
how it spends its time. Today's
1:27:10
processors, for example, are not executing
1:27:12
instructions one at a time. And
1:27:16
in fact, processors have not
1:27:18
actually been executing one instruction at
1:27:20
a time for quite a
1:27:22
while. The concept of out-of-order
1:27:27
instruction execution dates way
1:27:29
back to the
1:27:31
early CDC, Control Data Corporation
1:27:34
6600 mainframe,
1:27:38
which was the first commercial computer system,
1:27:40
a mainframe, to implement
1:27:43
out-of-order instruction execution.
1:27:46
And that was in 1964, I believe, when
1:27:49
the CDC 6600 happened. It
1:27:57
sucked in instructions ahead of
1:27:59
them being needed, and
1:28:02
when it encountered an instruction
1:28:04
whose inputs and outputs were
1:28:06
independent of any earlier instructions
1:28:09
that were still being worked
1:28:11
on, it would
1:28:13
execute that later instruction in
1:28:15
parallel with other ongoing work,
1:28:17
because the instruction didn't need
1:28:19
to wait for the results
1:28:22
of previous instructions, nor would
1:28:24
its effect change
1:28:26
the results of previous instructions.
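The independence test being described can be stated compactly: a later instruction may run early only if it reads nothing an in-flight instruction writes, writes nothing one reads, and writes nothing one writes. Here's an illustrative Python sketch (register names and the tuple representation are invented for the example, not how any real scoreboard is coded):

```python
def can_run_early(later, in_flight):
    """later, and each in-flight instruction, is a (reads, writes) pair
    of register-name sets. Returns True when the later instruction has
    no read-after-write, write-after-read, or write-after-write hazard
    against any instruction still being worked on."""
    for reads, writes in in_flight:
        if later[0] & writes:   # RAW: needs a result not yet produced
            return False
        if later[1] & reads:    # WAR: would clobber someone's input
            return False
        if later[1] & writes:   # WAW: would clobber someone's output
            return False
    return True

# r3 = r1 + r2 is still executing ...
busy = [({"r1", "r2"}, {"r3"})]
print(can_run_early(({"r4"}, {"r5"}), busy))  # True: fully independent
print(can_run_early(({"r3"}, {"r6"}), busy))  # False: needs r3 (RAW)
```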
1:28:29
The same sort of instruction pipelining
1:28:31
goes on today, and
1:28:34
we would still like our processors to be
1:28:36
faster. If a processor
1:28:38
had perfect knowledge of
1:28:41
the future by knowing
1:28:43
which direction it was going to
1:28:45
take at any branch, or
1:28:48
where a computed indirect
1:28:51
jump was going to land it,
1:28:54
and if it had perfect knowledge of those things,
1:28:56
it would be able to reach its theoretical maximum
1:29:04
performance given any clock rate.
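Lacking that perfect foreknowledge, hardware guesses from history. A classic textbook illustration (not Intel's actual predictor, which is vastly more elaborate) is the two-bit saturating counter: it takes two wrong guesses in a row to flip the prediction, so a loop branch mispredicts only at its exit:

```python
class TwoBitPredictor:
    """Textbook 2-bit saturating counter. States 0-1 predict not-taken,
    states 2-3 predict taken; each actual outcome nudges the counter
    one step toward that outcome."""
    def __init__(self):
        self.state = 2          # start weakly predicting "taken"

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
# A loop branch: taken, taken, taken, then not taken at the loop exit.
outcomes = [True, True, True, False]
hits = 0
for actual in outcomes:
    hits += (p.predict() == actual)
    p.update(actual)
print(hits)  # 3 of 4 correct: only the loop exit mispredicts
```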
1:29:08
But since a processor's ability to predict
1:29:10
the future is limited to what lies
1:29:12
immediately in front of it, it
1:29:15
must rely upon looking back at
1:29:17
the past and using
1:29:19
that to direct its guesses about
1:29:21
the future, or, as we
1:29:24
say, its speculation about its
1:29:27
own immediate future. Here's
1:29:31
something to think about. The
1:29:33
historical problem with
1:29:36
third-party cookies has been
1:29:38
that browsers maintained in the past
1:29:41
a single large shared cookie jar,
1:29:43
as we've discussed before, in fact
1:29:45
just recently. So an
1:29:48
advertiser could set its cookie while the
1:29:50
user was at site A and
1:29:52
read it back when the same user had moved
1:29:54
to site B. This was
1:29:56
never the way cookies were meant to be used. They
1:29:58
were meant to be used in a
1:30:01
first-party context to allow sites to
1:30:03
maintain state with their visitors.
1:30:06
The problem is that until very
1:30:08
recently, there has been no cookie
1:30:11
compartmentalization. We have
1:30:13
the same problem with
1:30:15
microprocessor speculation that
1:30:18
we have had with third-party cookies,
1:30:21
lack of compartmentalization.
1:30:25
The behavior of malware
1:30:27
code is affected
1:30:30
by the history of
1:30:32
the execution of
1:30:34
the trusted code that ran just
1:30:36
before it. Malware
1:30:38
is able to detect the behavior of
1:30:40
its own code, which gives
1:30:42
it clues into the operation of
1:30:45
previous code that was running in
1:30:48
the same set of processors. In
1:30:50
other words, a
1:30:52
lack of compartmentalization.
1:30:56
Malicious code is
1:30:58
sharing the same
1:31:00
microarchitectural state as
1:31:03
non-malicious code, because
1:31:05
today there's only one set
1:31:08
of state. That's
1:31:10
what needs to change. And
1:31:13
I would be surprised if Intel wasn't
1:31:15
already well on their way to implementing
1:31:19
exactly this sort of change. I
1:31:22
have no idea how large
1:31:24
a modern microprocessor's speculation state
1:31:26
is today, but the
1:31:28
only way I can see to maintain the performance
1:31:30
we want today in an
1:31:33
environment where our processors might
1:31:35
be unwittingly hosting malicious code
1:31:38
is to arrange to
1:31:40
save and restore the
1:31:42
microprocessor's speculation state
1:31:45
whenever the
1:31:48
operating system switches
1:31:50
process contexts. It
1:31:53
would make our systems even more
1:31:55
complicated than they already are, but
1:31:58
it would mean that malicious code
1:32:00
could no longer obtain any
1:32:03
hints about the
1:32:05
operation of any other code that
1:32:07
was previously using the same system
1:32:09
it is now using. I'll
1:32:16
omit this listener's full name since it's not
1:32:18
important. We'll call him
1:32:20
John. He says, I got
1:32:23
nailed in a phishing
1:32:25
email for AT&T.
1:32:27
See the attached?
1:32:29
Yeah. Oh. I saved
1:32:32
the attached picture. He
1:32:35
said, no excuse, but
1:32:37
at least I realized it immediately
1:32:40
and changed my password. He
1:32:42
said, which is not one that has been used
1:32:44
anywhere else, of course. He
1:32:47
ended up saying, feel stupid, dot,
1:32:49
dot, dot. No, because we've been
1:32:51
talking about this AT&T breach. He
1:32:53
was expecting this email from AT&T.
1:32:56
Yep. Exactly.
1:33:00
The email says, dear customer, at
1:33:02
AT&T we prioritize the security of
1:33:04
our customer's information and are committed
1:33:07
to maintaining transparency in all matters
1:33:09
related to your privacy and data
1:33:11
protection. We are writing
1:33:13
to inform you of a recent security
1:33:15
incident involving a third party vendor. Despite
1:33:19
our rigorous security measures, unauthorized access was
1:33:21
granted to some of our customer data
1:33:23
stored by this vendor. This
1:33:25
incident might have involved your
1:33:28
names, addresses, email addresses, social
1:33:30
security numbers and dates of birth.
1:33:33
We want to assure you that
1:33:35
your account passwords were not exposed
1:33:37
in this breach. But
1:33:39
they're about to be. We have notified
1:33:42
federal law enforcement about
1:33:44
the unauthorized access. Please
1:33:47
accept our apology for this
1:33:49
incident. To determine if your
1:33:51
personal information was affected, we
1:33:54
encourage you to follow the link below
1:33:56
to log into your account.
1:34:00
And then there's a little highlight that says,
1:34:02
sign in. And finally, thanks
1:34:04
for choosing us, AT&T. I'm
1:34:07
going to bet this is a copy of
1:34:09
the actual email because it's too much corporate,
1:34:12
like rigorous security measures, but they
1:34:14
did gain all your data. It's very
1:34:16
much what AT&T said. So
1:34:19
I bet the bad guy just copied the
1:34:21
original AT&T email and just changed this
1:34:23
little link here. Exactly.
1:34:27
Well, I would imagine that there
1:34:30
was probably no sign in in the
1:34:32
original link. Right. Because
1:34:34
that's really what changes it
1:34:37
into a phishing attack. And
1:34:39
so anyway, I just wanted to say, this
1:34:41
is how bad it is out there. I
1:34:43
mean, as you said, Leo, you saw it
1:34:46
immediately. We've been talking
1:34:48
about it. This is a listener of ours.
1:34:50
He knew about it before it came. So
1:34:53
again, absolutely authentic
1:34:55
looking. They're so smart. They're so evil.
1:34:58
You know, we really, we absolutely need
1:35:00
to always be vigilant. And never
1:35:02
click links in email. No,
1:35:05
no. Never. Even from mom.
1:35:09
Maybe especially. Tom
1:35:14
Minnick said, with these
1:35:16
atomic operations to mitigate race
1:35:18
conditions, how does
1:35:20
that work with multi-core processors?
1:35:23
When multiple threads are running in parallel,
1:35:28
couldn't race conditions still occur? He
1:35:31
says, I probably don't understand enough
1:35:33
about how multi-core processors handle threads. So
1:35:36
Tom's question actually is a terrific one.
1:35:38
And it occurred to many of our
1:35:40
listeners who wrote, he and
1:35:43
everyone were right to wonder. The
1:35:47
atomicity of an
1:35:49
instruction only applies to
1:35:51
the threads running on a single core,
1:35:53
since it, the core, can only be
1:35:55
doing one thing
1:36:00
at a time by definition. Threads,
1:36:02
as I said, are an abstraction
1:36:05
for a single core. They are
1:36:07
not an abstraction if
1:36:09
multiple cores are sharing memory. So,
1:36:13
what about multiple cores
1:36:15
or multiprocessor systems? The
1:36:19
issue is important enough that
1:36:22
all systems today
1:36:24
provide some solution for this.
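The single-exchange acquire protocol from earlier in the discussion can be modeled in a few lines. On real hardware the swap is one uninterruptible instruction (x86's XCHG), so nothing can slip between the read and the write; this Python sketch only models those semantics and is not itself atomic:

```python
def exchange(cell, new_value):
    """Model of an atomic exchange: swap in new_value and return what
    was there. On real hardware this read-and-write pair is a single
    instruction, so no other thread or core can run in between."""
    old_value = cell[0]
    cell[0] = new_value
    return old_value

def try_acquire(lock_cell):
    """Attempt ownership: write 1 (owned) and examine what came back.
    If the old value was 0, the lock was free and we now own it; if it
    was 1, someone else owned it and our write changed nothing."""
    return exchange(lock_cell, 1) == 0

lock = [0]                  # 0 = free, 1 = owned
print(try_acquire(lock))    # True: it was free, and we now own it
print(try_acquire(lock))    # False: already owned; the attempt fails
lock[0] = 0                 # release by storing 0; no exchange needed
print(try_acquire(lock))    # True again
```

Note that a failed attempt writes 1 over an existing 1, which is why no reread is ever needed: the attempt changes nothing when the lock is already held.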
1:36:27
In the case of the Intel architecture,
1:36:30
there's a special instruction
1:36:33
prefix called lock,
1:36:36
which when it immediately
1:36:38
precedes any of the handful
1:36:40
of instructions that might find
1:36:43
it useful, forces
1:36:45
the instruction that follows
1:36:49
to also be atomic in
1:36:52
the sense of multiple cores
1:36:54
or multiple memory sharing processors.
1:36:57
Only one processor at a
1:37:00
time is able to
1:37:02
access the targeted memory location.
1:37:05
After all, it's just an instant.
1:37:09
Essentially, there is
1:37:11
a lock signal that comes out of
1:37:13
the chip that all
1:37:15
the chips are participating with.
1:37:17
So, the processor, when it's
1:37:19
executing a lock instruction,
1:37:21
drops that signal,
1:37:24
performs the instruction, and immediately
1:37:27
raises it. So, it's as
1:37:29
infinitesimally brief
1:37:33
lockout as could be,
1:37:36
so it doesn't hurt performance,
1:37:38
but it prevents any other
1:37:40
processor from accessing the same
1:37:42
memory location at the same time. Only
1:37:44
one processor at a time is able
1:37:47
to access the targeted memory location. And
1:37:50
there's one other little tidbit. That
1:37:52
simple exchange instruction is so
1:37:56
universally useful for
1:37:58
performing thread synchronization that
1:38:00
the lock prefix functionality
1:38:03
is built in to
1:38:06
that one instruction. All the other
1:38:08
instructions that can be used require
1:38:11
an explicit lock prefix, not
1:38:13
the exchange instruction. It automatically
1:38:16
is not only thread
1:38:18
safe, but multi-core and multi-processor
1:38:20
safe, which I think is
1:38:22
very cool. Finally,
1:38:26
Michael Hagberg said, credit
1:38:28
freeze rather than unlock
1:38:30
your entire account. It
1:38:33
should work this way. I'm
1:38:36
buying a car. The dealer
1:38:38
tells me which credit service they
1:38:40
use and the dealership's
1:38:44
ID number. I
1:38:46
go to the credit service website,
1:38:49
provide my social security number, PIN
1:38:51
assigned by the site
1:38:53
when I froze it and the car
1:38:55
dealer's ID number. My
1:38:57
account will then allow that
1:38:59
car dealer only
1:39:02
to access my account for 24 hours.
1:39:07
And Michael, I agree
1:39:09
100% and this just shows us that
1:39:15
the child in you has not
1:39:18
yet been beaten into submission and
1:39:20
that you are still
1:39:22
able to dream big. More
1:39:26
power to you. Wouldn't
1:39:31
it be nice if the world was so
1:39:33
well-designed? I actually do everything but that last
1:39:35
piece where you give the car dealer's ID
1:39:37
to the credit bureau, but
1:39:39
I do ask them which credit bureau are
1:39:42
you going to use and then
1:39:44
that's the one I unfreeze. And I tell
1:39:46
them, you got whatever, three days to do
1:39:49
this and it's going to automatically lock up
1:39:51
again. And nowadays
1:39:54
enough people use freezes that when they get
1:39:56
that, they kind of know what happened and they'll
1:39:58
call you and say, hey, your credit's frozen. Yeah,
1:40:00
right. It's not unusual
1:40:03
to encounter a freeze. And in fact, I
1:40:05
did some googling around before I
1:40:07
got my card with Amazon to find out
1:40:09
which of the services they use. And then
1:40:13
that's the one I unlocked. And you'd be
1:40:15
more judicious. I love the idea though of
1:40:17
saying, hey, credit bureau, this guy's going to
1:40:19
ask, don't tell anybody else. Wouldn't
1:40:22
that be? But Leo, all
1:40:25
the junk mail we receive as elders-
1:40:29
All those credit card offers. Yes.
1:40:31
It's because everybody's pulling our
1:40:33
credit and- By
1:40:36
the way, when I froze all my accounts, I stopped
1:40:38
getting those. Yeah, I haven't had any
1:40:40
for years. The only
1:40:42
ones I get are from existing cards saying,
1:40:45
hey, you got a blue card. Would you like
1:40:47
a green one? That's it. Because
1:40:49
no new card companies can get my
1:40:51
information. So it works. Right.
1:40:55
It works. Right. We got just two
1:40:57
little bits regarding SpinRite. Mike Shales said,
1:40:59
recently I've run into some issues with
1:41:02
my old iMac, a mid 2017 model.
1:41:05
He said, I've wanted to support your
1:41:07
valuable Security Now efforts for some time,
1:41:09
but investing the time to see if
1:41:11
I could even run SpinRite on my
1:41:13
Macs when they were all
1:41:15
running without problems, discouraged me. But
1:41:18
now you mentioned on your April
1:41:20
9th podcast, I wanted
1:41:22
to remind any would-be Mac
1:41:24
purchasers, oh, this is me
1:41:26
speaking. I keep quoting me. I wanted to
1:41:29
remind any would-be Mac purchasers that this is
1:41:31
the reason I created GRC's freeware named Bootable
1:41:33
(okay, the name I chose in favor of
1:41:35
DOS Boot). If you can get
1:41:37
Bootable to congratulate you on your success in
1:41:40
booting it, then exactly the same path can
1:41:42
be taken with SpinRite. Right. So he said,
1:41:45
he wrote, but Bootable is
1:41:47
a Windows.exe file and
1:41:50
needs a Windows machine to
1:41:52
create a Bootable USB flash drive,
1:41:54
right? Lacking a Windows
1:41:57
machine, I made a Bootable DOS
1:41:59
drive from your
1:42:01
ReadSpeed image download. Wow!
1:42:05
Good going there. Following instructions
1:42:08
from ChatGPT, I used
1:42:13
dd to write the ReadSpeed
1:42:15
image to a 4
1:42:18
gig flash drive. Then following
1:42:20
instructions in the GRC forum post
1:42:22
I succeeded in booting
1:42:24
my iMac into DOS and running
1:42:26
ReadSpeed. So far so
1:42:34
good but I believe
1:42:36
the current SpinRite 6.1 includes the
1:42:39
capability to recognize more drives than
1:42:41
previously and might rely on
1:42:43
features not provided in the version
1:42:45
of DOS that I now have
1:42:47
installed on my flash drive. If
1:42:50
so perhaps downloading the SpinRite 6.1
1:42:52
EXE file and copying it to my
1:42:55
flash drive might not be ideal.
1:42:57
Is this an issue? Thanks for the
1:42:59
help Mike. Well okay.
1:43:02
Mike very cleverly arranged to
1:43:05
use various tools at GRC
1:43:08
and, amazingly enough, ChatGPT
1:43:10
to create a
1:43:12
bootable USB drive which successfully booted
1:43:14
his mid 2017
1:43:16
iMac. So first
1:43:18
responding to Mike directly. Mike
1:43:21
everything you did was 100%
1:43:23
correct and if you place your copy
1:43:25
of the SpinRite EXE onto that USB
1:43:27
stick and boot it everything
1:43:30
will work perfectly and if you run
1:43:32
it at level 3 on
1:43:34
any older Macs with solid-state storage
1:43:36
you can expect to witness a
1:43:38
notable and perhaps even very significant
1:43:41
subsequent and long-lasting improvement in the
1:43:43
system's performance and while
1:43:45
it won't be obvious there's also a very
1:43:47
good reason to believe that in the process
1:43:49
you will have significantly improved the system's reliability.
1:43:52
The reason the SSD will
1:43:55
now be much faster is that it needs,
1:43:57
as I mentioned before, to struggle much less
1:43:59
after running SpinRite to
1:44:01
return the requested information. And
1:44:06
we will be learning far more about this
1:44:08
during the work on SpinRite 7. And
1:44:11
although 6.1 is a bit of a blunt instrument
1:44:13
in this regard, it works, and it's here today.
1:44:16
To Mike's question, the specific
1:44:18
version of FreeDOS does not
1:44:20
matter at all. Since
1:44:22
DOS is only used to load SpinRite
1:44:24
and to write its log files, otherwise
1:44:27
Spinrite ignores DOS and interacts directly
1:44:29
with the system's hardware. So yes,
1:44:31
you can run it on your
1:44:33
read speed drive. I
1:44:36
wanted to share Mike's question
1:44:39
because I just finished making
1:44:41
some relevant improvements. He mentioned
1:44:43
correctly that BootAble is Windows-only
1:44:46
freeware. But over the weekend,
1:44:48
the BootAble download was changed from
1:44:50
an EXE to a ZIP archive. And
1:44:52
the zip archive now also contains a
1:44:55
small BootAble file system
1:44:57
image, which can
1:45:00
be used by any Mac or
1:45:02
Linux user to directly create a
1:45:04
BootAble boot-testing USB drive. Any
1:45:07
Intel Mac or Linux
1:45:09
user? We should really emphasize that because most
1:45:11
Mac users now are no longer
1:45:13
Intel. One
1:45:16
of the guys in GRC's web forums
1:45:18
put me onto a perfect and easy
1:45:20
to use cross-platform tool. I should mention,
1:45:22
Leo, we've solved the Intel problem. But that's
1:45:25
for another topic. Oh, tease me.
1:45:27
Yeah, we've got some guys
1:45:30
who figured out how to
1:45:32
boot on UEFI-only systems and
1:45:34
on ARM-based silicon
1:45:39
using some concoction of virtual
1:45:41
machines. And I haven't
1:45:44
followed what they're doing because
1:45:47
I'm just focused on getting what all of this
1:45:50
is done done. Anyway, there's
1:45:52
something known as Etcher by
1:45:56
a company called Balena. It
1:45:59
is a perfect, easy-to-use
1:46:01
means for an Intel Mac
1:46:03
person
1:46:06
of moving the BootAble image
1:46:08
onto a USB drive without the
1:46:11
DD command. DD makes
1:46:13
me nervous because you need to
1:46:15
know what you're doing. I mean,
1:46:17
it's a very powerful, you know,
1:46:20
direct drive copying tool. Linux
1:46:23
people are probably more comfortable with DD.
1:46:26
I'm glad that this Mac user,
1:46:28
Mike, was able to get
1:46:30
ChatGPT to help him, and I'm
1:46:32
glad that ChatGPT just didn't stumble
1:46:34
over hallucination at that particular moment. You
1:46:36
can erase everything with DD very easily.
1:46:39
Yeah. Careful. Yeah.
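For anyone curious why dd is so dangerous and what Etcher is really doing under the hood: both are fundamentally a raw block copy from an image file onto a device. A hedged Python sketch of that core loop, with made-up function and parameter names, demonstrated against ordinary files so nothing actually gets erased:

```python
import os

def write_image(image_path, target_path, block_size=4 * 1024 * 1024):
    """Raw block copy from an image file onto a target path, dd-style.

    DANGER: pointed at a real device node (e.g. /dev/sdX on Linux),
    this overwrites everything on that device, which is exactly the
    risk being discussed. Etcher wraps this same copy with device
    detection, confirmation prompts, and post-write verification.
    """
    written = 0
    with open(image_path, "rb") as src, open(target_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)
            written += len(block)
        dst.flush()
        os.fsync(dst.fileno())  # ensure the data actually reaches the medium
    return written
```

The bare copy loop has no safety rails at all, which is why a friendlier front end like Etcher is the better choice for anyone not already comfortable with dd.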
1:46:41
Seriously, Sean wrote, Hey, Steve, I'm sure
1:46:43
you're hearing this a lot. But
1:46:46
Windows, oh, Windows did not
1:46:48
trust Spinrite. Despite all
1:46:50
your signing efforts, I
1:46:53
had to clear three severe warnings before it
1:46:55
would allow me to keep 6.1 on
1:46:58
my system for use. I
1:47:00
hope it gets better soon for users less
1:47:02
willing to ignore the scary warnings from Microsoft.
1:47:05
Signed, Sean. And,
1:47:07
yep, I don't recall whether
1:47:10
I had mentioned it here also, since
1:47:12
I participated a lot about this over
1:47:14
in this discussion within the GRC's news
1:47:16
groups. One thing that has
1:47:18
been learned is that
1:47:20
Microsoft has decided to
1:47:23
deprecate any and
1:47:25
all special meaning for
1:47:27
EV, Extended Validation,
1:47:31
code signing certificates. It's
1:47:33
gone. All those
1:47:35
hoops I jumped through to get
1:47:37
remote server-side EV code signing to
1:47:39
work remotely
1:47:42
on an HSM device will
1:47:46
have no value moving forward, except
1:47:48
having the signing
1:47:51
key in the HSM does prevent
1:47:53
anybody, even me, I mean,
1:47:56
from extracting it. It can't be extracted. It
1:47:58
can only be used to sign. When
1:48:02
I saw this news, I
1:48:05
reached out to Jeremy Rowley, who's
1:48:07
my friend and primary contact over
1:48:09
at DigiCert, to ask him if
1:48:12
I had read Microsoft's announcement
1:48:14
correctly. And he
1:48:16
confirmed that Microsoft had just that
1:48:19
like a week, the week before,
1:48:21
surprised everyone in the CAB forum
1:48:23
with that news. Apparently,
1:48:27
what's at the crux of this is that for, you know,
1:48:30
historically, end users were able
1:48:32
to use EV code signing
1:48:34
certificates to sign their kernel
1:48:37
drivers. That was
1:48:39
the thing Microsoft most cared about
1:48:41
as far as EV was concerned.
1:48:44
But after the problems with malicious
1:48:47
Windows drivers, Microsoft has
1:48:49
decided to take away that
1:48:52
right and require
1:48:55
that only they, meaning
1:48:57
Microsoft, will be authorized
1:48:59
to sign Windows kernel drivers in
1:49:02
the future. In
1:49:04
their eyes, this eliminated the
1:49:06
biggest reason for having and
1:49:08
caring at all
1:49:10
about EV code signing certs. So
1:49:12
they will continue to be honored
1:49:15
for code signing, but EV certs
1:49:17
will no longer have any benefit.
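The value Steve still gets from the HSM is worth sketching: the device exposes a sign operation but no way to read the key back out. A toy Python illustration of that interface, using an HMAC as a stand-in for real Authenticode signing; the class name is made up, real code signing uses X.509 certificates and asymmetric keys, and a real HSM enforces non-extractability in hardware rather than in software:

```python
import hashlib
import hmac
import os

class SigningToken:
    """Toy model of an HSM's interface: the key never leaves the object.

    The non-extractability here is only a software convention; in a
    real HSM it is enforced by the hardware itself.
    """

    def __init__(self):
        self.__key = os.urandom(32)  # generated inside the "device"

    def sign(self, data: bytes) -> bytes:
        # The only operation exposed: use the key, never reveal it.
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(data), signature)
```

Even if an attacker (or the owner) can ask the token to sign things, they can never walk away with the key itself, which is the property Steve was describing.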
1:49:20
They will confer no extra
1:49:22
meaning. What
1:49:24
I think is going on with
1:49:28
regarding Spinrite is
1:49:30
that something Windows Defender
1:49:32
sees inside Spinrite 6.1's
1:49:34
code, which was not
1:49:36
in 6.0, absolutely
1:49:39
convinces it that
1:49:41
this is a very specific trojan
1:49:44
named WACATAC.B,
1:49:49
which I guess you pronounce WACATAC. If
1:49:53
I knew what part of Spinrite
1:49:55
was triggering these false positives, I could
1:49:57
probably change it. I
1:50:00
have some ideas. So I'm
1:50:02
going to see, because we
1:50:05
just can't keep tolerating these sorts
1:50:07
of problems from Microsoft. And
1:50:09
it doesn't now look like my
1:50:11
having an EV cert is helping. It's been three
1:50:14
months now, and tens of thousands of
1:50:17
copies of GRC's freeware, because
1:50:19
thousands, plural, thousands of copies
1:50:21
are being downloaded every day.
1:50:23
I re-signed them all with
1:50:25
this new certificate in order
1:50:27
to get it exposed and
1:50:30
let Microsoft see that whoever
1:50:33
was signing this wasn't producing
1:50:35
malware. But here's
1:50:37
Mike, or no, sorry, Sean, who
1:50:40
just said, he had to
1:50:42
go to extreme measures to get
1:50:44
Windows to leave this download alone. So
1:50:48
grumble. Grumble, grumble. Big time.
1:50:51
OK, we're going to talk about what the
1:50:53
EU is doing, Leo, after you share our
1:50:55
last sponsor with our listeners. Breaking
1:50:58
news, however, that you will,
1:51:01
depending on your point of view, will either be
1:51:03
surprised or not surprised to hear. Google has decided
1:51:06
to delay third party
1:51:08
cookie blocking until next
1:51:11
year. Digiday,
1:51:13
had this fantastic opening
1:51:16
sentence by Seb Joseph. Google is delaying
1:51:19
the end of third party cookies in
1:51:21
its Chrome browser again. In
1:51:23
other unsurprising developments, water remains
1:51:25
wet. So
1:51:30
they did not outline a more specific
1:51:33
timetable beyond hoping for 2025. OK,
1:51:37
and that, I mean, it does show you
1:51:39
the problem with
1:51:43
taking this away. They promised it
1:51:45
originally in January 2020. This
1:51:49
is the third time they've
1:51:51
pushed it back. And I'm
1:51:53
guessing it's not going to be the last. Some
1:51:58
of this is actually intertwined
1:52:02
with the UK Competition and Markets
1:52:04
Authority, they say,
1:52:08
it's critical that CMA
1:52:13
has sufficient time to review all the
1:52:15
evidence, including results from industry tests, which
1:52:17
the CMA has asked market participants to provide by the
1:52:20
end of June. In
1:52:22
order to see whether the Privacy
1:52:25
Sandbox will be a replacement. We
1:52:28
recognize Google says there are
1:52:30
ongoing challenges related to reconciling
1:52:32
divergent feedback from the industry,
1:52:35
regulators, and developers, and will continue to engage
1:52:38
closely with the entire ecosystem. Yeah, but
1:52:40
some of this is that the CMA
1:52:42
wants to see proof and
1:52:45
they're not ready to provide proof. So
1:52:48
Leo, here's another good, another reason I'm so
1:52:50
happy we're going past 999. Yeah.
1:52:53
Because November is when we hit
1:52:55
999 and I would not be here
1:52:57
for... Let's make a deal
1:53:00
that you'll keep doing the show until Google
1:53:02
phases out third-party cookies. Oh
1:53:04
no. Rats, I can't.
1:53:08
Well, Pat. Almost fooled him. No, no.
1:53:10
I think it's going to happen. I think it's inevitable. You
1:53:12
think? Okay, we'll see. I do. It's
1:53:16
been four years. I would wager 2025. I'll go
1:53:18
for next year. Let's
1:53:20
go for 2025. Let's do it. Let
1:53:23
me real quickly mention our great sponsor and
1:53:25
then we can get to the meat
1:53:28
of the matter, the chat control stuff
1:54:30
going on, what's going on in Europe. But
1:53:33
first a word from Zscaler,
1:53:35
the leader in cloud security.
1:53:38
It's no surprise cyber attackers are
1:53:40
using
1:54:43
AI in creative
1:54:45
ways to compromise users and to
1:53:47
breach organizations from
1:53:50
high precision phishing emails to video,
1:53:53
voice deep fakes of
1:53:56
CEOs and celebrities, in a
1:53:58
world where employees are working... everywhere, apps
1:54:01
are everywhere, data is everywhere.
1:54:04
Firewalls and VPNs are just not
1:54:06
working to protect organizations. They weren't
1:54:09
designed for these distributed environments, nor
1:54:11
were they designed with AI-powered attacks
1:54:13
in mind. In fact,
1:54:16
often, it's the case that firewalls and
1:54:18
VPNs become the attack surface. In
1:54:21
a security landscape where you must
1:54:23
fight AI with AI, the best
1:54:25
AI protection comes from having the
1:54:27
best data. Boy, listen to this.
1:54:29
Zscaler has extended its Zero Trust
1:54:31
architecture with powerful AI engines that
1:54:33
are trained and tuned in real
1:54:35
time by 500 trillion
1:54:38
signals every day. 500
1:54:42
trillion, with a T, signals
1:54:44
every day. Zscaler's
1:54:46
Zero Trust and AI helps
1:54:48
defeat AI attacks today by
1:54:50
enabling you to automatically detect and
1:54:53
block advanced threats, discover
1:54:55
and classify sensitive data everywhere,
1:54:58
generate user-to-app segmentation
1:55:01
to limit lateral threat movement, and
1:55:04
to quantify risk, prioritize remediation,
1:55:07
and importantly, generate board-ready
1:55:09
reports. Board's gotta write the
1:55:12
check. Learn more about Zscaler's
1:55:14
Zero Trust plus AI to
1:55:17
prevent ransomware and other AI
1:55:19
attacks while gaining the agility
1:55:22
of the cloud. Experience your
1:55:24
world secured. Visit zscaler.com/Zero Trust
1:55:27
AI. That's zscaler.com
1:55:30
slash Zero Trust AI.
1:55:33
zscaler.com/Zero Trust AI.
1:55:37
All right, let's talk about chat,
1:55:39
Steve Gibson. Okay, so, oh boy.
1:55:41
Across the pond from the
1:55:44
US, the EU is continuing to
1:55:46
inch forward on their
1:55:48
controversial legislation, commonly referred to
1:55:50
as chat control. Thus,
1:55:53
today's title is Chat Out
1:55:55
of Control, which
1:55:57
proposes to require providers of encrypted
1:56:00
messaging services to somehow arrange to
1:56:02
screen the content that's
1:56:04
carried by those services for
1:56:07
child sexual abuse material commonly known
1:56:10
as CSAM. As I said
1:56:12
when we last looked at this last year, 2024 will
1:56:15
prove to be quite interesting since
1:56:18
all of this will likely be coming to
1:56:20
a head this year. What's
1:56:22
significant about what's going on in the
1:56:24
EU, unlike in the
1:56:27
UK, is that
1:56:29
the legislation's language carries
1:56:31
no exclusion over
1:56:33
the feasibility of
1:56:35
performing this scanning. To
1:56:38
remind everyone who has a
1:56:40
day job and who might not be
1:56:42
following these political machinations closely, last
1:56:44
year the UK was at a similar
1:56:47
precipice and with their own
1:56:49
legislation at the
1:56:51
11th hour they added some
1:56:53
language that effectively neutered it while
1:56:56
allowing everyone to save face. For
1:56:59
example, last September 6th, Computer World's
1:57:01
headline read, UK
1:57:03
rolls back controversial
1:57:05
encryption rules of online
1:57:07
safety bill and
1:57:10
followed that with, quote, companies
1:57:12
will not be required to
1:57:14
scan encrypted messages until it
1:57:17
is, quote, technically feasible, unquote,
1:57:20
and where technology has been
1:57:22
accredited as meeting minimum
1:57:24
standards of accuracy in detecting
1:57:27
only child sexual
1:57:30
abuse and exploitation content,
1:57:32
unquote. So since
1:57:34
it's unclear how any
1:57:36
automated technology might successfully
1:57:39
differentiate between child sexual
1:57:42
abuse material and,
1:57:44
for example, a photo of a,
1:57:46
you know, a photo that a
1:57:48
concerned mother might send of her
1:57:51
child to their doctor, there's
1:57:53
little concern that the high bar
1:57:55
of technical feasibility will be
1:57:58
met in the foreseeable future. And
1:58:01
while the UK came under some attack
1:58:03
for punting on this, the
1:58:05
big tech companies all breathed
1:58:07
a collective sigh of relief. But
1:58:10
so far, and boy there's
1:58:12
not much time left, there's
1:58:15
no sign of the same thing happening in
1:58:17
the EU, not even a murmur
1:58:19
of it. One of the
1:58:21
observations we've made about all such legislation
1:58:23
was the curious fact that if
1:58:26
passed, the legislation would
1:58:28
mean that the legislators' own
1:58:31
secure, encrypted and private communications
1:58:33
would similarly be subjected to
1:58:36
surveillance and screening. Or
1:58:39
would they? Two weeks ago,
1:58:41
on April 9, the
1:58:43
next iteration of the legislation appeared in
1:58:45
the form of a daunting
1:58:47
203-page tome. Fortunately,
1:58:52
the changes from the previous iteration
1:58:55
were all shown in bold type,
1:58:57
or crossed out, or bold
1:59:00
underlined, or crossed out
1:59:02
and underlined, all meaning different things.
1:59:05
But that made it at least somewhat possible to
1:59:07
see what's changed. You could tell I spent way
1:59:09
too much time with that 203 pages. This
1:59:13
was brought to my attention by
1:59:15
the provocative headline in an
1:59:17
EU website: Chat
1:57:19
control: EU ministers
1:59:22
want to exempt themselves.
1:59:26
What that article went on to say was, quote, according
1:59:29
to the latest draft
1:59:31
text of the controversial
1:59:33
EU child sexual abuse
1:59:35
regulation proposal leaked by
1:59:37
the French news organization
1:59:39
Context, which the EU
1:59:42
member states discussed, the
1:59:44
EU interior ministers want
1:59:46
to exempt professional accounts
1:59:48
of staff of intelligence
1:59:50
agencies, police and military
1:59:52
from the envisioned scanning
1:59:54
of chats and messages. The
1:59:58
regulation should also not apply to
2:00:01
confidential information such as
2:00:03
professional secrets. The
2:00:05
EU governments reject the idea
2:00:08
that the new EU Child
2:00:10
Protection Centre should support them
2:00:12
in the prevention of child
2:00:14
sexual abuse and
2:00:17
develop best practices for
2:00:19
prevention initiatives. Okay,
2:00:21
so the
2:00:23
EU has something called the
2:00:25
Pirate Party, which doesn't
2:00:28
seem to be well-named, but it is what it
2:00:30
is. Oh, it's a real, it's, you know, Pirate
2:00:32
Bay people. Yeah. It's a party
2:00:34
of pirates. Yeah. Yes. And
2:00:37
popular. Yes. It's
2:00:39
formed from a collection of many
2:00:41
member parties across and throughout the
2:00:43
European Union. The party
2:00:45
was formed 10 years ago back in
2:00:48
2014 with a focus upon internet
2:00:50
governance. So the issues
2:00:52
created by this pending legislation are
2:00:54
of significant interest to this group.
2:00:57
To that end, one of the
2:00:59
members of Parliament, Patrick Breyer, had
2:01:01
the following to say about these
2:01:04
recent changes to the proposed legislation, which
2:01:06
came to light when the document leaked.
2:01:09
He said, quote, the
2:01:11
fact that the EU
2:01:13
interior ministers want to
2:01:15
exempt police officers, soldiers,
2:01:17
intelligence officers, and even
2:01:20
themselves from chat
2:01:22
control scanning proves
2:01:24
that they know exactly just
2:01:26
how unreliable and dangerous the
2:01:28
snooping algorithms are that they
2:01:31
want to unleash on us
2:01:33
citizens. They seem to
2:01:35
fear that even military secrets without
2:01:37
any link to child sexual abuse
2:01:40
could end up in the US
2:01:42
at any time. The
2:01:44
confidentiality of government communications is
2:01:46
certainly important, but the same
2:01:48
must apply to the protection
2:01:50
of business and of
2:01:53
course, citizens communications, including the
2:01:55
spaces that victims of abuse
2:01:57
themselves need for secure internet.
2:02:00
exchanges and therapy. We
2:02:02
know that most of the chats
2:02:05
leaked by today's voluntary snooping algorithms
2:02:07
are of no relevance to the
2:02:09
police, for example, family photos or
2:02:12
consensual sexting. It
2:02:14
is outrageous that the EU
2:02:16
interior ministers themselves do not
2:02:18
want to suffer the consequences
2:02:20
of the destruction of digital
2:02:22
privacy of correspondence and secure
2:02:24
encryption that they are
2:02:27
imposing upon us. The
2:02:29
promise that professional secrets should
2:02:32
not be affected by chat
2:02:34
control is a lie cast in
2:02:38
paragraphs. No provider
2:02:40
and no algorithm can know or
2:02:42
determine whether a chat is being
2:02:45
conducted by doctors, therapists,
2:02:47
lawyers, defense lawyers,
2:02:49
etc., so as to
2:02:52
exempt it from chat control. Chat
2:02:55
control inevitably threatens to leak
2:02:57
intimate photos sent for medical
2:02:59
purposes and trial documents sent
2:03:02
for defending abuse victims. It
2:03:05
makes a mockery of the
2:03:07
official goal of child protection
2:03:09
that the EU interior ministers
2:03:11
reject the development of best
2:03:14
practices for preventing child sexual
2:03:16
abuse. It could
2:03:18
not be clearer that the aim
2:03:20
of this bill is China-style
2:03:22
mass surveillance and not
2:03:24
better protecting our children. Real
2:03:27
child protection would require a systematic
2:03:31
evaluation and implementation of
2:03:33
multidisciplinary prevention programs, as
2:03:35
well as Europe-wide standards
2:03:38
and guidelines for criminal
2:03:40
investigations of child
2:03:42
abuse, including the identification
2:03:44
of victims and the necessary
2:03:46
technical means. None
2:03:48
of this is planned
2:03:51
by the EU interior ministers.
2:03:54
So after the article finished quoting Patrick
2:03:57
Breyer, he said that
2:03:59
the EU governments want to
2:04:01
adopt the chat-control bill by
2:04:03
the beginning of June. We're
2:04:07
approaching the end of April, so
2:04:10
the only thing separating us from June is
2:04:12
the month of May. I
2:04:15
was curious to see whether the breadth of
2:04:17
the exclusion might have been overstated in order
2:04:20
to make a point, so I
2:04:22
found the newly added section of the legislation
2:04:24
on page 6 of the 203-page PDF. It
2:04:28
reads, this is section 12A. The
2:04:31
A is the new part. In
2:04:34
the light of the more limited risk
2:04:36
of their use for the purpose of
2:04:39
child sexual abuse and the
2:04:42
need to preserve confidential information,
2:04:44
including classified information, information covered
2:04:46
by professional secrecy and trade
2:04:49
secrets, electronic communication services
2:04:51
that are not publicly available,
2:04:53
that's the key: electronic
2:04:56
communication services that are not
2:04:59
publicly available, such as those
2:05:01
used for national security purposes,
2:05:04
should be excluded from the scope of
2:05:06
this regulation. Accordingly, this
2:05:09
regulation should not apply
2:05:11
to interpersonal communication services
2:05:13
that are not available
2:05:16
to the general public
2:05:19
and the use of which is instead
2:05:21
restricted to persons involved in the activities
2:05:24
of a particular company, organization,
2:05:26
body or authority. Okay,
2:05:29
now, I'm not trained in law, but
2:05:31
that doesn't sound to me like an
2:05:34
exclusion for legislators who
2:05:36
would probably be using iMessage,
2:05:38
Messenger, Signal, Telegram, WhatsApp, etc.
2:05:42
It says, this regulation should
2:05:44
not apply to interpersonal communication
2:05:46
services that are not available
2:05:48
to the general public. So,
2:05:51
you know, internal proprietary
2:05:53
intelligence agency communication
2:05:57
software, you know, applications. Remember
2:06:01
that it's this
2:06:03
proposed EU legislation which includes
2:06:06
the detection of grooming behavior
2:06:08
in textual content.
2:06:11
So it's not just imagery that
2:06:14
needs to be scanned, but the
2:06:16
content of all text messaging. We're
2:06:19
also not talking about
2:06:22
only previously known and
2:06:25
identified content, which
2:06:27
is apparently circulating online,
2:06:29
but also anything the
2:06:31
legislation considers new content.
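That distinction between previously known and new content is a technical chasm. Known material can be matched by hashing, and real deployments use perceptual hashes such as PhotoDNA rather than the cryptographic hash in this toy Python sketch, precisely because of the fragility it demonstrates; but even perceptual hashing only recognizes variants of material already on a list. Detecting genuinely new content means machine-learning classifiers, with all their false positives:

```python
import hashlib

def matches_known(content: bytes, known_hashes: set) -> bool:
    """Flag content only if its exact hash appears on the blocklist."""
    return hashlib.sha256(content).hexdigest() in known_hashes

# A one-entry "blocklist" of previously identified material
# (the bytes here are obviously just placeholders).
known = {hashlib.sha256(b"previously-identified-bytes").hexdigest()}

original = b"previously-identified-bytes"
modified = bytearray(original)
modified[0] ^= 1  # flip a single bit

assert matches_known(original, known)             # exact copies are caught
assert not matches_known(bytes(modified), known)  # any change evades the list
```

The single-bit flip defeating the match is the whole point: list-based detection cannot, even in principle, satisfy a mandate to find content nobody has seen before.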
2:06:34
As I read through section after section of what
2:06:36
has become a huge mess
2:06:39
of extremely weak language that
2:06:42
leaves itself open to whatever
2:06:44
interpretation anyone might want to
2:06:46
give, my own
2:06:49
lay feeling is that this promises to
2:06:51
create a huge mess. I've
2:06:53
included a link to the latest legislation's pdf
2:06:55
in the last page of the show notes
2:06:58
for anyone who's interested. You'll
2:07:00
only need to read the first eight pages or
2:07:02
so to get a
2:07:04
sense for just what a
2:07:06
catastrophic mess this promises
2:07:08
to be. As is
2:07:11
the case with all such legislation,
2:07:13
what the lawmakers say they want
2:07:16
and via this legislation will
2:07:19
finally be requiring is
2:07:22
not technically possible.
2:07:25
They want detection of
2:07:28
previously unknown imagery and
2:07:31
textual dialogue which
2:07:33
might be seducing children while
2:07:36
at the same time honoring
2:07:38
and actively enforcing EU
2:07:41
citizen privacy rights. Oh,
2:07:43
and did I mention that 78
2:07:46
percent of the EU population
2:07:48
that was polled said they
2:07:50
did not want any of
2:07:53
this and it occurred to me
2:07:55
that encryption providers will
2:07:58
not just be able to say they're completely complying
2:08:01
when they are not, because
2:08:03
activist children's rights
2:08:05
groups will be
2:08:07
able to trivially test any
2:08:09
and all private communication services
2:08:12
to verify that they do
2:08:14
in fact detect and take
2:08:16
the action that the legislation
2:08:18
requires of them. All
2:08:21
that's needed is for such groups
2:08:23
to register a device as being used
2:08:25
by a child, then proceed
2:08:27
to have a pair of adults hold
2:08:30
a seductive grooming conversation
2:08:32
and perhaps escalate that to sending
2:08:35
some naughty photos back and forth.
2:08:38
And you can believe that
2:08:40
if the service they're testing
2:08:43
doesn't quickly identify and red
2:08:45
flag the communicating parties involved,
2:08:48
those activist children's rights groups
2:08:51
will be going public with a
2:08:53
service's failure under this new legislation.
2:08:57
I've said it before, and
2:08:59
I understand that it can sound like an
2:09:01
excuse and a cop-out, but
2:09:03
not all problems have
2:09:05
good solutions. There
2:09:08
are problems that are fundamentally
2:09:10
intractable. This entire
2:09:13
debate surrounding the abuse of
2:09:15
the absolute privacy created
2:09:17
by modern encryption is one such
2:09:20
problem. This is not
2:09:22
technology's fault. Technology
2:09:24
simply makes much greater levels of
2:09:26
privacy practical, and people
2:09:29
continually indicate that's what they
2:09:31
prefer. As
2:09:33
a society, we have to
2:09:35
decide whether we want the
2:09:37
true privacy that encryption offers
2:09:40
or whether we want to deliberately
2:09:43
water it down in order to
2:09:45
perhaps prevent some of the abuse
2:09:48
that absolute privacy also
2:09:50
protects. Agreed,
2:09:56
agreed, and agreed. Yeah,
2:10:00
I do commend to anyone that
2:10:02
the last page of the show
2:10:05
notes has a link. It's
2:10:07
not widely available publicly because it
2:10:10
was leaked, and Patrick Breyer's
2:10:13
site, a German .de site,
2:10:15
has it and is
2:10:17
making it available. So
2:10:21
you'll need to get it if you're interested. But
2:10:25
boy, as I said, just reading through it, it
2:10:27
is, again, it's
2:10:29
insanely long at 203 pages.
2:10:35
I struggled to find any
2:10:38
language about like what
2:10:41
time period this takes effect
2:10:43
over. I couldn't find any.
2:10:46
It all seems to indicate
2:10:48
once this legislation is in place
2:10:52
that the organizations need to act.
2:10:57
But I just think the EU is stepping into a
2:11:00
huge mess. And
2:11:02
again, as I said last year, 2024, which we're in now, is going to
2:11:04
2024, we're in now, is going to
2:11:08
be one to watch because lots of this
2:11:10
is beginning to come to a head. So
2:11:12
Leo, as you just shared with us, now
2:11:15
the third party cookie issue with Chrome,
2:11:18
that's been punted into when we
2:11:20
have four digits on this podcast.
2:11:22
And the future. Yeah.
2:11:29
Interesting world we live in. So I imagine when
2:11:31
the legislation happens and it's supposed to be happening
2:11:33
in early June, there
2:11:35
will be lots of coverage. We'll be back
2:11:38
to it. And we'll
2:11:40
have some sense for when it's taking
2:11:42
effect and what the various companies are
2:11:44
choosing to do. And it might be
2:11:46
well modified from this leaked document. There
2:11:48
will certainly be amendments and things like
2:11:50
that. So we'll have to look
2:11:52
at the actual legislation to see
2:11:55
what's happening. Right.
2:11:59
And we will. because that's what we
2:12:01
do. That's what we do here. I
2:12:03
know you love this show. You're listening. You got all
2:12:06
the way to the end. That's pretty impressive. May
2:12:09
I invite you, if you're not yet a member of Club Twit,
2:12:11
to save some time by
2:12:13
eliminating all the ads and
2:12:16
support Steve's work so that we can keep
2:12:18
doing it by
2:12:20
joining Club Twit. It's only seven bucks a
2:12:22
month. It's very inexpensive. You get ad-free versions
2:12:24
of all the shows. You
2:12:26
get video for shows where we only have
2:12:29
audio, like Hands on Mac, Hands on Windows,
2:12:31
Untitled Linux show, Home Theater Geeks,
2:12:34
iOS Today. You also get access to
2:12:36
the Discord, which is more than just
2:12:38
a hang or chat room around the
2:12:40
shows. It's really where thousands
2:12:43
of very smart, interesting people hang out.
2:12:46
It was brought home to me this Sunday.
2:12:48
We had the live audience
2:12:51
and I got to meet everybody. And just every one
2:12:53
of them, high-level, interesting, smart
2:12:55
people, mostly in technology, almost all in
2:12:58
technology. In fact, I don't think there
2:13:00
was anybody not in technology. And
2:13:03
that's who you get access to in the Discord. It's more
2:13:05
than just us. It's some really smart
2:13:07
people. So if you've got questions or
2:13:09
thoughts or you want to talk to somebody who really knows what
2:13:11
they're doing, it's another
2:13:13
great benefit of joining the club. The best
2:13:15
benefit is you're supporting us in
2:13:18
our mission to keep you informed without fear or
2:13:20
favor. We owe no
2:13:22
one and that's what we want to keep doing thanks to
2:13:24
you. Twit.tv slash club twit
2:13:26
is the URL. Please join.
2:13:29
We'd love to have you and we'll
2:13:31
see you in the Discord. Steve does
2:13:33
this show along with me. I happen to be here
2:13:35
most of the time. Not all the time, but most of the time. Every
2:13:38
Tuesday, right after MacBreak Weekly, that's 1:30 Pacific,
2:13:40
4:30 Eastern, 20:30 UTC.
2:13:43
You can watch us do it
2:13:45
live on YouTube, youtube.com/twit. If
2:13:47
you go to that page and you hit the bell,
2:13:49
you'll automatically get notified when we go live because we
2:13:52
don't stay live. We go live when the shows start
2:13:54
and we stop it when the show ends. So
2:13:57
Subscribe to the channel and that way you'll get
2:13:59
the notification. After the fact, if it's more
2:14:01
convenient, and I'm sure it is, you can always
2:14:04
download any show from Steve's site,
2:14:06
grc.com. He has the
2:14:09
64-kilobit audio, but
2:14:11
he also has the 16-kilobit
2:14:13
audio-only; that's the only place to get
2:14:15
that, for the bandwidth-impaired,
2:14:17
and really, really good transcriptions done by
2:14:19
Elaine Farris. Those are available at grc.com. While
2:14:22
you're there. Hey, it
2:14:24
would behoove you to get a copy
2:14:26
of SpinRite, the world's best mass
2:14:28
storage maintenance and recovery utility. 6.1
2:14:30
is out, and if you get the latest, baby,
2:14:32
you get some good stuff. Ah,
2:14:35
GRC.com. And it's also where you're
2:14:37
going to find so many other useful tools Steve
2:14:39
gives away. He's very generous:
2:14:41
ValiDrive, ShieldsUP!, and
2:14:44
on and on. GRC.com. He's
2:14:47
on X.com at @SGgrc,
2:14:50
so you can DM him;
2:14:52
the DMs are open. Or you can leave
2:14:54
a message at GRC
2:14:56
.com. Or we have the
2:14:59
64-kilobit audio at our
2:15:01
website. We also have video at our
2:15:03
website; that's our unique format: twit
2:15:05
.tv/sn. There's also a YouTube
2:15:08
channel with the video. Great.
2:15:10
That's actually the most useful for sharing clips.
2:15:12
If you heard something here today that you wanted
2:15:14
to share with a friend, a colleague, or the
2:15:16
boss, you can clip it very easily on
2:15:18
YouTube and send it to them. It's a
2:15:21
really great use for that channel. And
2:15:23
then of course, the most convenient thing
2:15:25
would be to get a podcast player
2:15:27
and download it. Subscribers
2:15:29
get every episode automatically the moment it's available, and
2:15:31
you don't wanna miss one. You wanna
2:15:33
hear them all. Ah,
2:15:36
thank you for joining us. Steve will
2:15:38
be back next week with more exciting
2:15:41
security news. Have a great week. We'll
2:15:43
see you next time. This is the last podcast of
2:15:45
April, and then we plow into
2:15:48
May. Solar
2:16:00
Eclipse at Seneca Resorts and Casinos
2:16:02
join us on Monday, April eighth
2:16:04
for events with food, drinks, DJs,
2:16:06
eclipse viewing glasses and more. Family
2:16:09
friendly at Seneca Niagara and Seneca
2:16:11
Allegheny; twenty-one and
2:16:13
up at Seneca Buffalo Creek.
2:16:15
The first two hundred guests at
2:16:17
each property receive a commemorative t-
2:16:19
shirt. Book your overnight stay now
2:16:21
so you don't miss it. Get
2:16:23
all the details at senecacasinos.com.
2:16:26
Seneca Resorts and Casinos. Nothing else
2:16:28
comes close.