The End of Entrust Trust - OpenSSH Vulnerability, Syncthing, Entrust

Released Wednesday, 3rd July 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

2:01

the whole question of

2:04

certificate authority self-management because

2:06

one of the oldest

2:09

original certificate authorities

2:11

in the world and

2:14

trust has, well,

2:18

they've got themselves in so much trouble that

2:21

they're no longer going to be

2:23

trusted by the industry's browsers. So

2:25

basically all of their SSL,

2:28

TLS certificate business is

2:31

gone in a few months. Wow. So

2:34

it's like, yeah. And

2:36

so this also gives us

2:38

an occasion to really look at the

2:40

behind the scenes mechanisms

2:43

by which this

2:45

can happen. And of course

2:47

we've covered CA's falling, you

2:49

know, from grace in the past.

2:54

This is a biggie. Also

2:57

someone just moved 50 Bitcoins

3:00

minted back in the

3:02

Satoshi era. Sadly

3:04

it wasn't me, but who was

3:06

it? Well that's interesting. Hmm.

3:09

Yeah. Also, how are

3:11

things going with our intrepid Voyager

3:13

one spacecraft? What

3:15

features have I just removed from

3:17

GRC's email system? And by the

3:20

way, just shy of 5,000 people

3:22

have email from me from like

3:24

an hour ago. And

3:27

what embarrassingly affordable

3:30

commercial emailing system am

3:32

I now prepared to recommend

3:35

without reservation, which is what I'm

3:37

using. And I could not be

3:39

more impressed with it or its

3:41

author, who is

3:43

a she and not a he who

3:46

I mistakenly referred to last

3:48

week. What's recently

3:51

been happening with sync thing and

3:53

what can you do about it? Why

3:56

do I use DNS for

3:58

freeware release management, and

4:01

how? And

4:03

then we're gonna spend the rest of

4:05

our time taking a look at the title of

4:07

today's podcast, 981, for this July 2nd,

4:09

The End of Entrust Trust. Wow.

4:16

So I think you know

4:18

a really great episode of Security Now

4:20

that is now yours for

4:22

the taking. Well

4:25

it will be in a moment. I mean as soon

4:27

as I get through my ad it'll be yours for

4:29

the taking. I'm gonna keep it here for a little

4:31

bit. Wow very

4:33

interesting. I can't wait to hear about all of these.

4:36

This is gonna be a very geeky episode. I

4:38

like that. Yeah. I always enjoy it when when

4:41

your shows are a little bit on

4:43

the propeller head side. By the way

4:45

welcome to a brand new sponsor. We're

4:48

really thrilled to have Big ID on

4:51

the Security Now show. They're the

4:54

leading DSPM solution. Data

4:56

Security Posture Management.

4:58

DSPM. If you're an enterprise

5:00

you certainly know what this

5:02

is or you should anyway

5:05

but they do DSPM differently.

5:07

DSPM centers around risk management

5:10

and how organizations like yours need

5:12

to assess, understand, identify

5:15

and then of course remediate

5:17

data security risks across their

5:19

data. And there are lots of

5:21

reasons to do this. But

5:23

let me tell you why you need to do it with Big ID. Big

5:26

ID seamlessly integrates with your existing

5:28

tech stack so you don't have

5:30

to abandon anything right. It allows

5:32

you to coordinate all your security

5:34

and remediation workflows with

5:37

Big ID and let me

5:39

tell you some of their clients are as big as

5:41

you can get. I don't know if I

5:43

can give you some names. If you go to the website you'll see. Big

5:46

ID will let you uncover dark data, identify

5:50

and manage risk, remediate

5:52

the way you want to remediate,

5:54

scale your data security strategy, take

5:58

action on data risks, whether it's

6:00

annotating, deleting, quarantining, or

6:03

more, based on the data, all while maintaining

6:05

a full audit trail. They

6:09

work with ServiceNow. They work with Palo Alto

6:11

Networks, with Microsoft, with Google, with AWS. They

6:13

work with everybody. With Big

6:15

ID's advanced AI models, now this is

6:17

kind of new and it's very, very

6:19

cool, you can reduce risk, accelerate time

6:22

to insight, and gain visibility and control

6:24

over all your data. Now

6:27

I mentioned that they have some pretty big customers. Maybe,

6:30

I don't know, like the United States Army. Imagine

6:33

how much data the US Army has.

6:36

And in many different data stores,

6:39

you know, a completely heterogeneous

6:41

environment, Big ID equipped

6:44

the US Army to illuminate their dark

6:46

data, to accelerate their cloud migration, to

6:49

minimize redundancy, and to automate data

6:51

retention. The Army, US

6:54

Army Training and Doctrine Command quote, here,

6:56

let's play this for you. The

6:58

first wow moment with Big ID came

7:01

with just being able to have that

7:03

single interface, that

7:05

that inventory is a variety

7:07

of data holdings, including structured

7:10

and unstructured data, across emails,

7:12

zip files, SharePoint databases,

7:14

and more. To see that mass and

7:16

to be able to correlate across all

7:19

of those is completely novel. US

7:22

Army Training and Doctrine Command said, I've

7:24

never seen a capability that brings this together

7:26

like Big ID does. Big

7:29

enough for the Army, gotta be big enough for you.

7:31

CNBC recognized Big ID as one of the top 25

7:33

startups for their enterprise. They

7:36

were named the Inc 5000 and the Deloitte 500, two

7:40

years in a row. They're the

7:42

leading modern data security vendor in the

7:44

market today. Aren't you glad I

7:46

brought this up? You need to get to know these

7:48

guys. The publisher of

7:50

Cyber Defense Magazine, here's another testimonial, says,

7:53

Big ID embodies three major features

7:56

we judges look for to become winners. One,

7:59

understanding tomorrow's threats today, two,

8:02

providing a cost-effective solution,

8:05

and three, innovating in unexpected ways that

8:07

can help mitigate cyber risk and get

8:09

one step ahead of the next breach.

8:12

You need this. Start protecting

8:15

your sensitive data wherever

8:17

your data lives at

8:19

bigid.com/security now. Of

8:22

course you can go there and get a free

8:24

demo to see how Big ID can help your

8:26

organization reduce data risk and accelerate the adoption of

8:28

generative AI. By the way, one of the things

8:30

they do, I've talked to them the other day,

8:33

let you use AI without exfiltrating

8:36

important information and protect you against

8:38

the privacy risk of AI without

8:40

stopping you from using AI. That

8:42

is fantastic. That by itself is

8:45

worth calling. bigid.com/security

8:52

now. They

8:55

also have some really useful white papers and

8:57

reports. There's a new free report that provides

9:00

valuable insights and key trends

9:02

on AI adoption, challenges, the

9:05

overall impact of generative AI across

9:07

organizations. That's just one of many

9:09

though. So I really want you

9:12

to go to bigid.com/security now.

9:14

If you've got a lot of data, and the

9:16

Army had a lot of data, you

9:19

need bigid.com/

9:22

security now. Reduce risk, protect

9:24

that sensitive data, and

9:27

accelerate AI adoption. It can all be done

9:29

at the same time. bigid.com/security now. Brand new

9:31

sponsor. We want to welcome him to the

9:33

show. Had a great conversation

9:35

with him the other day. Really impressive,

9:37

the stuff they do. Alright,

9:40

let's get back to Steve Gibson and

9:42

your first topic of the day. Well,

9:45

our picture of the week. Oh, yes. I

9:48

gave this one the caption, perhaps

9:51

we deserve to be taken over by

9:53

the machines. Oh,

9:55

dear. Boy,

10:00

oh boy, oh boy. I

10:02

don't know. Describe this for us, will you?

10:04

So we have a close-up

10:07

photo of a standard intersection

10:13

traffic light, and

10:16

permanently mounted on the

10:18

horizontal member, which

10:21

holds the red, yellow,

10:23

and green lights, is

10:26

a sign that's got the left turn

10:28

arrow with the big red, you know,

10:30

the big circle red slash through it,

10:33

clearly indicating that if a police

10:35

officer is watching you and you

10:38

turn left, you'll

10:42

be seeing some flashing lights behind your car

10:44

before long. Now,

10:46

the problem here, the reason

10:48

I'm thinking, okay, maybe machines

10:51

will be better at this than we are, is

10:54

that the signal has a

10:57

left green arrow illuminated,

11:01

meaning, you know. Turn left. Turn

11:03

left here. So

11:06

I'm not sure what you do.

11:08

This is one of those things

11:10

where the automated driving

11:13

software in the electric

11:16

vehicle, it comes here

11:18

and just quits.

11:20

It just shuts down and says, okay. I

11:23

don't know what to do. I

11:25

don't know what, I'm seeing that I can't

11:27

turn left and the signal is saying I

11:30

must. And now, not only that, but

11:32

it's my turn. So I give up. Anyway,

11:36

I don't know. Frankly, I don't know

11:38

what a human would do if we came to this. It's

11:40

like, uh. Go straight, that's what I do. I would not,

11:43

anything but turn left. Wow.

11:47

Okay, so the big news

11:49

of the week we're gonna start with because

11:51

we're gonna talk about industry

11:54

politics and management, like self-management

11:56

of the whole key

12:01

certificate mess

12:04

at the end of the show. But if

12:08

you survive this first piece you'll be

12:10

you'll be ready for something as tame

12:12

as politics. Everyone's buzzing

12:14

at the moment about

12:16

this regression flaw. It's

12:19

a regression because it was

12:22

fixed back in open

12:24

SSH in 2006

12:28

and in a later update

12:31

it came back. Thus there

12:34

was a regression to an earlier

12:37

bad problem. This was discovered

12:39

by Qualys in, you

12:43

know, OpenSSH, a

12:46

widely used and inherently

12:48

publicly exposed service

12:51

and in fact when

12:53

I say widely used we're talking 14 million

12:58

vulnerable publicly exposed

13:00

servers identified

13:02

by Shodan and

13:05

Censys. So the

13:08

developers of open SSH have

13:10

been historically extremely careful

13:14

and thank God because you know

13:16

open SSH being vulnerable that would

13:19

be a big problem. In

13:21

fact this is get a load of

13:23

this the first vulnerability to be discovered

13:26

in nearly 20 years. That's an

13:29

astonishing track record for it

13:31

you know a chunk

13:34

of software but nevertheless when

13:36

you hear that open SSH has an

13:39

unauthenticated remote code

13:41

execution vulnerability that

13:43

grants its exploiter full

13:46

root access to the system

13:48

with the ability to

13:50

create a root level

13:52

remote shell affects the

13:55

default configuration and does not require

13:57

any interaction from the user over

14:00

on the server end, that ought to get

14:02

anyone's attention. Okay, so we have

14:05

CVE-2024-6387. So

14:11

here's what its discoverer,

14:13

Qualys, had to say.

14:16

They wrote, the Qualys Threat

14:18

Research Unit, the TRU, discovered

14:21

this unauthenticated remote code

14:23

execution vulnerability in Open

14:26

SSH's server, which

14:28

is SSHd, because you know, it's

14:30

a Linux daemon, in

14:33

glibc-based Linux systems.

14:36

This bug marks the first Open

14:38

SSH vulnerability in nearly two decades,

14:41

an unauthenticated remote code execution

14:43

that grants full root access.

14:47

It affects the default configuration

14:49

and does not require user

14:51

interaction, posing a significant exploit

14:53

risk. In Qualys's

14:56

TRU's analysis, we

14:58

identified that this vulnerability is a

15:00

regression of the previously patched

15:02

vulnerability, which was CVE-2006-5051, reported in,

15:04

of course, 2006. A regression in

15:07

this context

15:15

means that a flaw, once fixed,

15:18

has reappeared in a subsequent

15:20

software release, typically due to changes

15:23

or updates that inadvertently reintroduce the

15:25

issue. This incident

15:27

highlights the crucial role

15:30

of thorough regression testing

15:32

to prevent the reintroduction of

15:34

known vulnerabilities into the environment.

15:37

This regression was introduced in

15:39

October of 2020 with

15:42

Open SSH 8.5

15:45

P1. So it's

15:47

been there for four years. And

15:50

what this means is

15:52

any Open SSH from

15:55

8.5 P1 on, thus for the last four

15:57

years,

16:01

is vulnerable. Now

16:04

they say OpenSSH is a suite

16:06

of secure network utilities based on

16:08

the SSH protocol that are essential

16:11

for secure communication over unsecured networks.

16:14

It provides robust encryption, secure

16:16

file transfers, and remote server

16:18

management. OpenSSH is

16:20

widely used on Unix-like

16:22

systems including macOS and

16:24

Linux and it supports

16:26

various encryption technologies and

16:28

enforces robust access controls.

16:31

Despite a recent vulnerability,

16:34

OpenSSH maintains a strong

16:36

security record, exemplifying

16:39

a defense in-depth

16:41

approach and a critical

16:43

tool for maintaining network

16:45

communication confidentiality and integrity

16:47

worldwide. Okay so what

16:50

we're dealing with in this instance is

16:53

a very subtle and

16:56

very tight race

16:58

condition between

17:01

multiple threads of execution. Remember

17:03

that not long ago we spent some

17:06

time looking at race conditions closely.

17:09

Back then I used the

17:11

example of two threads that

17:13

both wanted to test and

17:15

conditionally increment the same variable.

17:18

A race condition fault could occur

17:21

if one thread first read

17:23

the variable and tested it

17:26

but before it could return the

17:28

updated value it was

17:31

preempted. Then another thread came

17:33

along to change that shared

17:36

value without the first thread

17:38

being aware. Then

17:40

when that first thread had

17:42

its execution resumed it

17:45

would place the updated value back

17:48

into the variable destroying

17:50

whatever that second thread had

17:52

done. That's

17:54

a bug and just something as simple as

17:57

that can lead to the loss of lots.
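To make that failure mode concrete, here is a minimal sketch in C, my own illustration rather than code from the podcast or from OpenSSH, of two threads doing an unprotected read-test-write on a shared counter, plus the mutex-protected version that avoids the lost update:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                     /* the shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Racy version: read, test, and write back with no locking. Two threads
 * can both read the same old value, and one update silently overwrites
 * the other -- exactly the scenario described above. */
static void *racy_increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        long v = counter;     /* read                    */
        if (v >= 0)           /* test                    */
            counter = v + 1;  /* write back, possibly late */
    }
    return NULL;
}

/* Safe version: the read-test-write sequence is made atomic by holding a
 * mutex around it, so no other thread can slip in between. */
static void *safe_increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        if (counter >= 0)
            counter = counter + 1;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    /* Swap racy_increment for safe_increment to get the correct total. */
    pthread_create(&a, NULL, racy_increment, NULL);
    pthread_create(&b, NULL, racy_increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("final counter = %ld\n", counter);  /* racy version typically prints less than 2000000 */
    return 0;
}
```

Compile with -pthread; the racy build usually comes up short of the expected two million because updates get lost in the race.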

18:00

Okay, so today

18:03

as it happens we're gonna see

18:05

a real-world example of

18:07

exactly this sort of problem

18:10

actually occurring. Okay, so first

18:12

though I want to share

18:15

Qualys' note about OpenSSH in

18:17

general. In their technical

18:19

report about this they wrote, OpenSSH

18:24

is one of the most

18:26

secure software in the world.

18:29

This vulnerability is

18:31

one slip-up in

18:33

an otherwise near flawless

18:36

implementation. Its

18:38

defense in depth design and

18:40

code are a model

18:42

and an inspiration and we

18:44

thank OpenSSH's developers for their

18:46

exemplary work. Then

18:49

they explain, this vulnerability

18:51

is challenging, they write,

18:54

to exploit due

18:56

to its remote race condition

18:58

nature requiring multiple

19:00

attempts for a successful

19:02

attack. This

19:04

can cause memory

19:06

corruption and necessitate

19:08

overcoming address space

19:10

layout randomization, ASLR.

19:13

Advancements in deep learning may

19:16

significantly increase the exploitation

19:18

rate potentially providing attackers

19:21

with a substantial advantage

19:23

in leveraging such security

19:25

flaws. In our

19:27

experiments, and I should note that they

19:29

wrote this but we'll see later, this

19:31

is one of three sets of experiments

19:34

and this was the least

19:37

worrisome of the three. They

19:39

wrote, in our experiments it takes

19:41

around 10,000 tries on average to

19:44

win this

19:48

race condition. So for

19:50

example, with 10 connections

19:53

being accepted every 600 seconds,

19:57

it takes on the average of one

19:59

week to obtain

20:01

a remote root shell. On

20:04

the other hand, you

20:06

could obtain a remote root shell, which

20:08

is not nothing. And

20:10

as it turns out, there are ways

20:12

to optimize this heavily, which we'll get

20:14

to. Okay, so of course, again,

20:17

they say around 10,000 tries

20:21

on average to win the race condition. So

20:24

that's statistics, right? It

20:26

could happen on the first try, or

20:30

never, or anywhere in between.

20:32

You know, it's like those 50 Bitcoin

20:34

I mined back in 2011. I

20:37

got lucky, and it's

20:39

still possible to get lucky today,

20:42

though it's vastly less likely than

20:44

it was back then. The

20:47

great concern is

20:49

the available inventory, the

20:52

total inventory of currently

20:54

vulnerable open SSH servers,

20:57

which are publicly exposed to the internet. Qualys

21:00

writes that searches using

21:03

Censys and Shodan had

21:05

identified over 14 million potentially

21:08

vulnerable open

21:12

SSH server instances exposed to

21:14

the internet. And

21:16

that within their own, that

21:18

is Qualys's own customer

21:21

base of users who are using

21:23

their CSAM 3.0 external

21:27

attack surface management technology,

21:30

approximately 700,000 external internet

21:32

facing instances of

21:38

their own customers are

21:41

vulnerable. And they

21:43

explained that this accounts for around 31% of

21:46

all internet facing instances

21:48

of OpenSSH in

21:51

their own global customer

21:53

base. So, you know,

21:55

of their own customers,

21:58

they know of 700,000. external

22:01

internet facing instances vulnerable,

22:04

census and shodan have all

22:06

of those and an additional

22:08

13 plus

22:11

million more. Okay,

22:13

so the way to think about this is

22:16

that both intensely

22:19

targeted and diffuse

22:22

and widespread attacks are

22:24

gonna be highly likely. If

22:27

a high value target is running

22:29

a vulnerable instance of open SSH,

22:32

once this has been fully weaponized,

22:35

someone can patiently try

22:37

and retry in

22:40

a fully automated fashion, patiently

22:43

knocking at the door until

22:45

they get in. And

22:47

what makes this worse, as we'll see, is

22:50

that the attack is not an

22:53

obvious flooding style attack that might

22:55

set off other alarms. Its

22:58

nature requires a

23:00

great deal of waiting. This

23:03

is why the 10 connections per

23:05

600 seconds that

23:07

Qualys mentioned matter. Each

23:09

attack attempt requires,

23:12

the way it actually works is

23:14

a 10 minute timeout. But

23:17

since 10 can be simultaneously

23:20

overlapped and running at once

23:23

against a single target, that

23:26

brings the average rate down to

23:30

one completed attack attempt per

23:32

minute. So on

23:34

average, you're

23:37

getting one new connection attempted

23:40

per minute, which is each

23:43

of those patiently knocking

23:46

quietly on the door until

23:48

it opens for them. And

23:51

note that what this means is

23:54

that a single attacker can

23:56

be, and almost certainly will

23:58

be, simultaneously spraying

24:01

a massive

24:03

number of overlapping connection

24:06

attempts across the internet.

24:09

It would make no sense for

24:11

a single attacking machine to just

24:13

sit around waiting 10 minutes for

24:16

a single connection to time

24:18

out. Rather, attackers will

24:20

be launching as many new

24:22

attempts at many different

24:25

targets as they can. During

24:28

the 10 minutes, they must wait

24:30

to see whether a single connection

24:32

attempt succeeded on any one machine.
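To put Qualys' figures together (my arithmetic, using only the numbers quoted above): with 10 unauthenticated connections allowed at once and a 600-second grace period per connection, a slot frees up on average every 600 / 10 = 60 seconds, so the attacker completes roughly one attempt per minute against a given server. At the quoted average of around 10,000 attempts to win the race, that is about 10,000 minutes, roughly 167 hours, or just under 7 days, which is where the "one week on average" estimate comes from.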

24:37

So to that end, they

24:39

wrote, Qualys has

24:41

developed a working exploit

24:44

for the regression vulnerability.

24:47

As part of the disclosure

24:50

process, we successfully demonstrated the

24:52

exploit to the OpenSSH team

24:54

to assist with their understanding

24:56

and remediation efforts. We

24:58

do not release our exploits, as

25:00

we must allow time for patches

25:03

to be applied. However, even

25:06

though the exploit is complex, we

25:08

believe that other independent researchers

25:11

will be able to replicate

25:14

our results. And

25:18

then indeed, they detail exactly

25:20

where the problem lies. I'm

25:23

going to share two dense paragraphs

25:25

of techiness, then I'll pause

25:27

to clarify what they've said. So they

25:29

wrote, we discovered a

25:32

vulnerability, a signal

25:34

handler race condition in

25:36

OpenSSH's server, SSHD. If

25:40

a client does not authenticate

25:43

within the login grace

25:46

time, which

25:48

is 120 seconds recently, 600 seconds

25:58

in older OpenSSH versions,

26:01

then SSHD's SIGALRM

26:04

handler is

26:06

called asynchronously. That's

26:08

a key, asynchronously.

26:11

They said, but this signal

26:13

handler calls various functions that

26:15

are not async

26:17

signal safe. For

26:20

example, it calls syslog to log the

26:22

fact that somebody never authenticated and it's

26:24

going to hang up on them. They

26:28

said this race condition affects

26:30

SSHD in its default configuration.

26:33

This vulnerability is exploitable

26:35

remotely on glibc-based

26:38

Linux systems where syslog

26:40

itself calls async

26:44

signal unsafe functions

26:47

like malloc and free, which

26:50

allocate and free dynamically

26:53

allocated memory. They

26:56

said an unauthenticated remote code

26:58

execution as root because

27:01

it affects SSHD's privileged code,

27:03

which is not sandboxed and

27:06

runs with full privileges, can

27:08

result. We've not

27:10

investigated any other libc or

27:12

operating system, but OpenBSD is

27:16

notably not vulnerable because

27:19

its SIGALRM handler calls

27:21

syslog_r,

27:24

which is an

27:26

async-signal-safe version

27:29

of syslog that was invented by

27:31

OpenBSD back in 2001. So

27:35

what's going on here is

27:38

that when someone connects to

27:41

a vulnerable instance of open

27:43

SSH, as part of

27:45

the connection management, a

27:47

connection timeout timer is

27:50

started. That timer

27:52

was once set to 600 seconds,

27:54

which is 10 minutes, but

27:56

in newer builds, giving someone

27:58

10 minutes to get themselves

28:01

connected seemed excessive and unnecessary. So

28:03

it was shortened to 120 seconds, which is two minutes. Unfortunately,

28:10

at the same time, they

28:12

increased the number of simultaneous

28:14

waiting connections to complete from

28:16

10 to 100. So

28:20

it really did make things worse.
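For reference, these are the two sshd_config knobs being discussed. The excerpt below is only an illustrative sketch of the defaults as described above, plus the interim mitigation that the public advisories for CVE-2024-6387 suggest, as far as I can tell; it is not advice from the podcast itself:

```
# sshd_config (illustrative excerpt)

# How long an unauthenticated connection may sit before sshd gives up.
# Older builds defaulted to 600 seconds; newer ones default to 120.
LoginGraceTime 120

# How many unauthenticated connections may be pending at once.
# In the start:rate:full form, up to 100 connections may be pending.
MaxStartups 10:30:100

# Interim mitigation reportedly suggested for CVE-2024-6387: a grace
# time of 0 disables the SIGALRM timeout path entirely, at the cost of
# letting idle connections exhaust the MaxStartups slots.
# LoginGraceTime 0
```

Running the same arithmetic as before with the newer limits, 100 slots draining every 120 seconds is a completed attempt roughly every 1.2 seconds, so the same 10,000-try average shrinks from about a week to a few hours.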

28:24

And the attack inherently needs to

28:26

anticipate

28:31

the expiration moment.

28:35

A shorter expiration allows

28:37

for faster compromise, since

28:40

it's the instant of

28:42

timer expiration when OpenSSH

28:45

is briefly vulnerable to

28:47

exploitation. That window of

28:50

vulnerability is what the attacker

28:53

anticipates and exploits. So

28:56

the more often you get

28:58

those little windows, the

29:01

worse off you are. So

29:03

upon a new connection, the timer

29:05

has started to give the new

29:08

connection ample but limited time to

29:10

get itself authenticated and going. And

29:13

if the incoming connection just

29:15

sits there doing nothing or

29:18

trying and failing to properly authenticate,

29:20

regardless of what's going on and why,

29:23

when that new connection

29:25

timeout timer expires, OpenSSH

29:29

drops that still pending

29:31

connection, right? All that

29:33

makes sense. That's the way you'd want things

29:35

to operate. Unfortunately, before

29:38

it does that, as

29:40

it's doing that, it

29:43

goes off to do some other things,

29:45

like make an entry in the system

29:47

log about this expired

29:49

connection attempt. So

29:52

if the wily attacker was

29:55

doing something on purpose at

29:58

the precise instant, that the

30:01

connection expiration timer expires,

30:04

the race condition can be forced

30:06

to occur. Wow. Just

30:09

as, yeah, yeah. The

30:11

modern day hacks are so subtle

30:13

and interesting. I just love

30:16

it. Yeah, because all the easy ones are gone. Yeah, that's

30:18

a good point. Yeah, the

30:20

dumb ones, we're not doing dumb problems anymore.
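Here is a minimal sketch, in C, of the general pattern Qualys is describing: a SIGALRM handler that calls async-signal-unsafe functions (syslog, and through it malloc and free) versus one that confines itself to async-signal-safe work. This is my own illustration of the concept under those assumptions, not the actual OpenSSH code:

```c
#include <signal.h>
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>

/* UNSAFE pattern: on glibc, syslog() may allocate and free memory
 * internally. If SIGALRM fires while the interrupted code is itself
 * inside malloc()/free(), the heap is in an inconsistent state when the
 * handler re-enters the allocator -- the window described above. */
static void unsafe_grace_timeout(int sig) {
    (void)sig;
    syslog(LOG_INFO, "login grace time exceeded");  /* not async-signal-safe */
    exit(1);                                        /* runs atexit handlers; also unsafe here */
}

/* SAFER pattern: restrict the handler to async-signal-safe calls such as
 * write() and _exit(), or just set a volatile sig_atomic_t flag and let
 * the main loop do the logging. OpenBSD's syslog_r() exists for exactly
 * this situation. */
static void safe_grace_timeout(int sig) {
    (void)sig;
    static const char msg[] = "login grace time exceeded\n";
    (void)write(STDERR_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
    _exit(1);                                         /* async-signal-safe */
}

int main(void) {
    /* Swap in unsafe_grace_timeout to see the risky pattern. */
    signal(SIGALRM, safe_grace_timeout);
    alarm(120);   /* the "login grace time" */
    pause();      /* stand-in for accepting and authenticating a connection */
    return 0;
}
```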

30:22

Well, you can see how this could have

30:24

been a regression too. That

30:27

would be easy to reintroduce it. Yeah,

30:29

there was actually an #ifdef that

30:31

got dropped from an update and

30:34

that allowed some old code to

30:36

come back in that had been

30:38

deliberately ifdef'd out. So

30:41

just as with the two threads in

30:43

my original shared variable example,

30:46

the timer's expiration

30:49

asynchronously, and that's the key, it asynchronously

30:52

interrupts. That means it

30:54

doesn't ask for

30:56

permission to interrupt. It

30:59

just yanks control away

31:01

from OpenSSH in order

31:03

to start the

31:05

process of tearing down this

31:08

connection that never authenticated itself. So

31:12

if the attacker was able

31:14

to time it right so that

31:16

OpenSSH was actively doing something

31:18

involving memory allocation at the exact

31:21

time of the timer's expiration,

31:23

the memory allocations that would

31:26

then be performed by the

31:28

timer-driven logging action would

31:31

conflict and collide with what

31:34

the attackers were causing OpenSSH

31:36

to be doing. And

31:39

that could result in the attacker

31:42

obtaining remote code execution

31:44

under full root privilege

31:46

and actually getting themselves

31:49

a remote shell onto

31:52

that machine. With

31:55

the massive inventory of 14 million

31:57

explorers. open

32:01

SSH servers currently available,

32:05

this is going to be something bad guys

32:07

will not be able to resist. And

32:11

unfortunately, as we know, with

32:14

so many of those forgotten,

32:16

unattended, not being

32:19

quickly updated, whatever, there's

32:21

just no way that attackers will

32:24

not be working overtime to work

32:26

out the details of this attack

32:28

for themselves and get busy. Qualis

32:31

explained, to exploit

32:33

this vulnerability remotely, to

32:36

the best of our knowledge, the

32:38

original exploit of this, the original

32:41

vulnerability of this CVE 2006 5051,

32:43

which I've initially

32:47

mentioned, was never successfully

32:50

exploited before. They

32:53

said, we immediately face three problems.

32:55

From a theoretical point of view,

32:57

we must find a useful code

33:00

path that if interrupted

33:02

at the right time by SIGALRM

33:05

leaves SSHD in an

33:07

inconsistent state. And

33:09

we must then exploit this inconsistent

33:12

state inside the SIGALRM

33:14

handler. From

33:16

a practical point of view, we

33:19

must find a way to reach

33:21

this useful code path in SSHD

33:23

and maximize our chances of interrupting

33:25

it at the right time. And

33:28

then from a timing point of view, we

33:30

must find a way to further

33:32

increase our chances of interrupting this

33:34

useful code path at the right

33:36

time remotely. So theoretical,

33:39

practical and timing. They

33:41

said to focus on these

33:43

three problems without having to

33:45

immediately fight against all the

33:47

modern operating system protections, in

33:49

particular, ASLR and NX, which

33:51

is, you know, the

33:54

no-execute protection,

33:57

they said, we decided to exploit old

34:00

OpenSSH versions first

34:04

on an x86 system and then

34:06

based on this experience moved

34:08

to recent versions. So their

34:11

first experiment was with Debian

34:15

They showed it as well. It was

34:18

the old woody version, which

34:20

they show as OpenSSH

34:24

3.4p1-1.woody.3. They

34:29

said this is the first Debian

34:31

version that has privilege separation enabled

34:33

by default and that

34:35

is patched against all the critical

34:38

vulnerabilities of that era. They

34:41

wrote to remotely exploit this

34:43

version we interrupt a call

34:45

to free, where

34:48

memory is being

34:50

released back to the system, with

34:53

SIGALRM, inside

34:55

SSHD's public key parsing code

34:58

now that's significant because that

35:00

means that the attacker is

35:04

causing Open

35:06

SSH to do some

35:09

public key parsing probably presenting

35:11

it with a bogus Public

35:14

key saying here's my key

35:17

use this to authenticate me so

35:20

Unfortunately bad guys have been given

35:22

lots of clues here as

35:25

a consequence of this disclosure They know

35:27

exactly where to look and what to

35:29

do. So

35:31

they said we interrupt a call

35:33

to free with SIGALRM while

35:37

SSHD is in its public

35:39

key parsing code that

35:41

leaves the

35:43

memory heap

35:46

in an inconsistent state and

35:49

exploit this inconsistent state

35:51

during another call to free

35:54

inside the SIGALRM handler,

35:57

probably in syslog. They said,

36:00

in our experiments, it

36:02

takes around 10,000 tries

36:05

on average to win this

36:07

race condition. In other words, with

36:09

10 connections, which is

36:12

the MaxStartups setting,

36:14

accepted per 600

36:19

seconds, which is the LoginGraceTime,

36:21

they said it takes around

36:23

one week on average to

36:26

obtain a remote

36:28

root shell. But

36:30

again, like, like, even

36:33

if you couldn't multiplex this and

36:35

you couldn't do, you couldn't be

36:37

attacking a bazillion servers at once,

36:40

just one attacker camped

36:43

out on some

36:46

highly valuable open

36:49

SSH that thinks it's secure

36:51

because, hey, we use public

36:54

key certificates, you're never going to guess

36:57

our password. It's just sitting there, ticking

37:00

away, knocking patiently at

37:02

the door, an average

37:06

of once a minute, because

37:09

it can be doing it 10 times over

37:11

10 minutes. And

37:14

eventually the door opens. 10,000

37:17

tries is hysterical.

37:21

Right. But very patiently, if you

37:23

have to be patient, but still,

37:25

yep, that's, that's how subtle these,

37:27

this race condition is. Right? Yes.

37:29

Yes. Um, well, because it

37:32

also involves, uh, vagaries of the

37:34

internet timing, right? Because you're, you're

37:36

a remote person. I mean, the

37:38

good news is the further away

37:41

you are, if you're in Russia

37:43

with a flaky network and

37:45

lots of packet delay or, or

37:47

you're in China and there's so

37:49

many hackers that your packets are

37:51

just, you know, competing with all

37:54

the other attackers, then that's going

37:56

to introduce a lot more variation,

37:58

but still again, This

38:00

is where patience pays off.

38:02

You end up with a

38:04

remote shell with root privilege

38:06

on the system that

38:09

was running that server. So

38:11

the point is, yeah, around

38:13

10,000 tries, but

38:15

massive payoff on

38:18

the other side. Okay, then they said,

38:20

and I won't go through this in detail, on

38:22

a newer Debian build where the

38:25

login grace time had been reduced

38:27

from its 600 seconds down

38:30

to 120. In other words, from

38:32

five minutes to two minutes, it

38:35

still took them around 10,000 attempts.

38:38

But since they only needed to wait

38:40

two minutes for timer expiration rather than

38:42

10 minutes, and they

38:45

were able to do, oh no,

38:47

sorry, on that system, they were still only

38:49

able to do 10 at once. So

38:54

it reduced the wait,

38:56

I'm sorry,

38:58

from 10 minutes

39:01

down to two

39:04

minutes. They were able to now

39:06

obtain a remote shell. This is

39:08

on a newer build in one

39:11

to two days down from

39:13

around a week. And finally,

39:16

on the most current stable

39:18

Debian version 12.5.0, due

39:22

to the fact that it has

39:24

reduced the login grace time to 120

39:27

seconds, but also increased the

39:30

maximum number of simultaneous login

39:32

attempts, that so called

39:34

MaxStartups value from 10 to 100.

39:37

They wrote, in

39:42

our experiments, it takes around

39:44

10,000 tries on

39:46

average to win this race

39:48

condition. So on this machine,

39:51

three to four hours

39:54

with 100 connections accepted per 120 seconds. Ultimately,

40:00

it takes around six to

40:02

eight hours on average to

40:04

obtain a remote root shell

40:06

because we can only guess

40:08

glibc's address correctly half

40:10

the time due to

40:13

ASLR. And they

40:15

finish explaining, this research is still a

40:17

work in progress. We've targeted

40:19

virtual machines only, not bare metal

40:21

servers, on a mostly

40:23

stable network link with around 10

40:26

milliseconds of packet jitter. We

40:28

are convinced that various

40:30

aspects of our exploits can be

40:33

greatly improved. And we've

40:35

started to work on an

40:37

AMD64, you know, 64-bit world,

40:40

which is much harder because of the

40:42

stronger ASLR. You

40:44

know, of course, that's address space layout

40:46

randomization. And the reason 64-bits make

40:49

things much worse is

40:51

that you have many more high bits

40:53

to allow for more randomized places to

40:55

locate the code. And

40:57

finally, they said, a few

40:59

days after we started our

41:02

work on the AMD64, we noticed a

41:06

bug report in OpenSSH's

41:08

public Bugzilla regarding

41:11

a deadlock in SSHD's

41:13

SIGALRM handler. We

41:16

therefore decided to contact

41:18

OpenSSH's developers immediately to

41:20

let them know that

41:23

this deadlock is caused by

41:26

an exploitable vulnerability. We

41:29

put our AMD64 work on hold and we

41:31

started to write this advisory. Okay,

41:33

so, yikes. We

41:36

have another new and

41:39

potentially devastating problem.

41:42

Everyone running a maintained Linux

41:44

that's exposing an OpenSSH server

41:46

to the public internet and

41:49

potentially even major corporations

41:52

using OpenSSH internally because,

41:54

you know, can you

41:56

trust all your employees? Need

41:59

to update their builds to incorporate

42:01

a fix for this immediately. Until

42:03

that's done, and unless you must

42:06

have SSH running, it might be

42:08

worth blocking its port and shutting

42:10

it down completely. I

42:13

think I have SSH

42:15

running on my Ubiquiti system and

42:19

my Synology. Yes, probably, and

42:22

how about on the Synology box?

42:27

Yeah, yeah, yeah. So

42:29

I better check on both of those. And

42:31

my server too, come to think of it. Oh

42:34

yeah, yeah. So, yeah, I

42:36

mean, no, this is a

42:38

big deal. Yeah. And

42:44

as I said at the top, both

42:47

attack profiles apply: a

42:49

high value target could be located.

42:51

And notice that nothing prevents 50

42:55

different people from trying to get into

42:58

the same high value target at once.

43:01

So, being high value is

43:03

a vulnerability and just being

43:06

present is one because the

43:08

bad guys are gonna be

43:10

spraying the internet just

43:12

looking for opportunistic access. If

43:14

nothing else, even if they

43:17

don't care what's going on on your

43:19

server, they wanna put a crypto

43:21

miner there or

43:23

stick a botnet node there. I

43:26

mean, they're gonna want in on

43:28

these machines where now they have

43:30

a way. This gives

43:33

them a way for anything

43:35

that has been brought up

43:38

with code for the last four years, since

43:40

2020, when this

43:42

regression occurred. And

43:46

they also know some systems will

43:48

be getting patched. So

43:50

there's also a rush to weaponize this

43:53

thing and get into

43:55

the servers they can. The way you

43:57

describe it, it sounds so difficult to

43:59

implement, but. they publish proofs

44:01

of concept, which a

44:03

script kiddie can implement, right? Just, yep.

44:06

Yeah. Yep. It'll end up being packaged and

44:08

productized. Right. And, and, and you

44:10

don't need to know what you're doing. You

44:13

just, in the same way that we saw

44:15

that windows wifi bug last week, some guy

44:17

offering for five grand, you can just buy

44:19

it right now. Wow. Oh,

44:23

well, what a world. Hey,

44:27

it keeps this podcast full

44:29

of great material. That's right. Hello.

44:31

We appreciate it. Keep up the good work,

44:33

bad guys. You're

44:36

keeping us busy. You want me to do an ad now, or you

44:38

want to keep going? Up to you. Perfect timing. Let's go. You're

44:41

watching Security Now with our genius at

44:43

work, Steve Gibson. I wouldn't

44:46

be able to talk about this

44:48

stuff without him, I tell you, he's the

44:50

key. Security Now is brought to you by

44:52

delete me. One of the

44:54

things we've learned by doing this show over the

44:56

many years that we have is it's

44:59

dangerous out there. Right. And we've

45:01

also learned to pay attention when

45:04

we get a suspicious emails or

45:07

text messages. It happened to us. Fortunately,

45:10

our staff listens to this show and

45:12

they knew, uh, I mentioned

45:14

this before the CEO

45:16

sent out a text message. It happens all

45:18

the time in every company to her underlings

45:21

saying, Hey, quick, I'm in a meeting. I

45:23

need Amazon gift cards. Just buy a bunch

45:25

and send them to this address. Fortunately,

45:28

uh, we have a

45:31

smart people here. I hope you have smart people who work

45:33

for you, but let me tell you the

45:35

thing that was the eye opener here, we didn't

45:37

lose any money. The thing that was clear eye

45:40

opener is that they know a lot about us

45:42

and how do they know a lot about us?

45:44

Cause of information brokers that collect this information and

45:46

have no scruples about who they sell it to,

45:49

whether it's an advertiser, a foreign

45:51

government, or the

45:53

hacker down the road. That's why

45:56

you need delete me. Have you ever

45:58

searched for your name online? and

46:00

didn't like how much of your personal information

46:02

was available. I can't recommend it. Don't. If

46:05

you haven't done it, don't. But

46:07

maintaining privacy is not just a concern for you.

46:10

It's a concern for your business. And

46:12

you know, it's even a concern for your family. Delete

46:15

Me has family plans. Now, an adult

46:17

has to administer it. With

46:19

the family plan, you can ensure everyone in the

46:22

family feels safe online. And of course, doing

46:24

this, getting your name off of

46:26

these databases reduces the risk from

46:29

identity theft, cyber security threats, harassment,

46:31

and more. We used it for

46:33

Lisa for years, and it

46:35

really made a big difference getting that stuff off. What

46:38

happens is you go there, and if you're on

46:40

the family plan, by the way, as the administrator,

46:42

you'll have different information sheets and different requirements for

46:44

each member of your family, tailored

46:48

to them, and easy to use controls so

46:50

you can manage privacy settings for everybody. Delete

46:53

Me's experts will find and remove your information

46:55

from hundreds of data brokers. Now, the

46:57

law requires these data brokers have those forms that say, remove

46:59

my data. So yeah, you

47:01

could do it yourself. But here's the problem. The

47:04

data brokers just start building your dossier all over

47:06

again the minute you leave. You've

47:09

got to keep going back, and that's what

47:11

Delete Me does. They'll continue to scan and

47:13

remove your information regularly. And it is everything.

47:16

I mean, it's property records, it's

47:18

social media, photos, emails,

47:21

addresses, relatives, phone numbers,

47:24

income information, all that stuff's online.

47:27

And, you know, they say, well, we just collect

47:29

this for targeted advertising. Yeah, that and anybody else

47:32

who wants to get it. Protect

47:34

yourself. Reclaim your privacy. Visit

47:36

joindeleteme.com/twit. Use the code

47:38

TWIT for 20% off.

47:41

That's joindeleteme.com

47:44

slash twit. And use

47:46

the offer code TWIT for 20%. You

47:49

owe it to yourself, you owe it to your family, you owe it to

47:51

your company. joindeleteme.com/twit.

47:56

Thank them so much for the job they did

47:58

to protect Lisa and... for

48:00

the job they're gonna do to protect you. Now

48:03

back to Steve Gibson who is protecting

48:05

us all week long. So

48:08

a listener of ours, James Tutten,

48:10

shot me a note asking whether

48:13

I may have found my 50

48:15

Bitcoin when he saw an

48:17

article about 50 Bitcoin

48:19

having been moved from a

48:22

long dormant wallet. Now

48:25

yeah I wish that was my 50

48:27

Bitcoin but I've satisfied myself that they're

48:29

long gone. But I thought

48:31

our listeners would enjoy hearing about

48:33

the general topic of ancient Bitcoin

48:35

movement. The article which

48:38

appeared last Thursday at

48:40

cryptonews.com was titled Satoshi-era

48:43

Bitcoin wallet awakens. Wow. When

48:46

did you make your 50

48:49

Bitcoin strike?

48:52

It was early on. It was early in 2011. Okay.

48:55

So it was one year after this. Oh

48:58

that's interesting. It was early but it was

49:00

not this early. Okay. So they

49:02

said Satoshi-era Bitcoin wallet

49:04

awakens moves 50 Bitcoin

49:07

to Binance. And

49:09

the CryptoNews piece starts out

49:11

saying a Satoshi-era

49:13

Bitcoin wallet address dormant

49:15

for 14 years transferred

49:18

50 Bitcoin approximately

49:20

3.05 million US dollars to

49:25

the Binance exchange on June 27th last

49:27

Thursday. The

49:30

wallet is believed to belong to

49:32

a Bitcoin miner who likely earned

49:34

the 50 Bitcoin as mining rewards

49:36

in 2010. This must make

49:38

you cry. I know.

49:40

Believe me. It's like oh gosh.

49:43

It hurts. Yeah.

49:47

They said on chain analytics

49:49

firm Look On Chain revealed

49:52

the Bitcoin wallet's origins. It's

49:55

linked to a miner who received

49:57

50 Bitcoin as a mining

49:59

reward. July 14th

50:02

2010 just months after

50:05

the Bitcoin network launched and

50:08

I'll note that my podcast which

50:11

Tom Merritt and I did which

50:13

was titled Bitcoin Cryptocurrency where

50:16

I explained the operation of

50:18

the entire Bitcoin cryptocurrency system

50:20

how the blockchain works and

50:23

all that that aired

50:25

the following February 9th of

50:28

2011. Wow we were really early on that.

50:30

Wow we were on the

50:32

ball yeah so while it's true

50:34

that solving the Bitcoin hash problem

50:36

way back then resulted in

50:38

an award of 50 Bitcoin

50:41

my 50 were different

50:44

from the 50 that were recently

50:46

moved. The article continues

50:48

back in 2010 one

50:51

Bitcoin, oh, and

50:53

this explains why I formatted my

50:55

hard drive, was valued at a

50:57

mere 0.003 dollars or 0.3 cents.

50:59

So I mean it was all

51:02

just a nickel worth of Bitcoin

51:04

it wasn't worth worrying about. Yeah

51:10

well and remember the faucet the

51:13

Bitcoin faucet was dripping out

51:15

Bitcoin that anybody could go get

51:17

for free. Right so

51:20

they said this price was not surpassed until February

51:22

of 2011 reaching $30 by June of that year.

51:25

Today Bitcoin

51:29

today Bitcoin trades around $61,000 which is a which they

51:31

say is a 17% drop

51:42

from its all-time high in mid-march of

51:44

this year of $73,750 per coin. Wow.

51:55

Satoshi Bitcoin wallets. Satoshi

51:59

Bitcoin wallets. wallets, they write, which

52:01

were created during Bitcoin's infancy from 2009

52:03

to 2011, hold historical

52:07

significance. This

52:09

period marked the time

52:11

when Bitcoin's enigmatic creator,

52:13

Satoshi Nakamoto, was still

52:15

an active presence in

52:17

the cryptocurrency community. The

52:20

wallet's historical value, coupled with the

52:22

limited transactions during that era, makes

52:25

any movement of funds from them a notable

52:28

event. In

52:30

2010, Bitcoin mining was

52:32

accessible to anyone with a

52:34

personal computer, yielding a

52:37

reward of 50 Bitcoin. This

52:40

accessibility stands in stark contrast

52:42

to the current Bitcoin mining

52:45

environment. Four

52:47

halving events, as

52:49

in cut in half, have since

52:51

reduced the block reward to a mere

52:53

3.125 Bitcoin.
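Quick arithmetic check on that claim (mine, not from the article): four halvings cut the original 50-Bitcoin reward to 50 / 2^4, that is 50, then 25, 12.5, 6.25, and finally 3.125 Bitcoin per block.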

52:57

On the other hand, the

53:00

Bitcoins are worth 60 grand, so

53:02

not so mere. It gets harder

53:04

to make a block, though, too.

53:06

The math is much harder. It's virtually

53:08

impossible. These

53:11

halvings, occurring roughly every

53:13

four years, are integral

53:15

to Bitcoin's deflationary model.

53:18

This recent transfer from a

53:20

Satoshi Bitcoin wallet is not

53:22

an isolated incident. It joins

53:25

a growing list of dormant

53:27

wallets springing back to life.

53:30

We know why they're springing. It's

53:32

because Bitcoin is jumping.

53:37

Multiplying the number of Bitcoins, which were easily

53:39

earned back then by 60,000, will definitely

53:43

put a spring in one's step.

53:47

They wrote, in March,

53:49

a similar event occurred. A

53:52

miner transferred 50 Bitcoin, earned from

53:54

mining on April 25, 2010, to

53:56

Coinbase after

54:01

14 years of wallet

54:03

inactivity. The reactivation

54:05

of these wallets often stirs

54:07

interest and speculation within the

54:10

cryptocurrency community. Many are

54:12

curious about the intentions behind these moves,

54:14

whether they signal a change in

54:17

market dynamics or simply represent

54:19

a longtime holder finally

54:22

deciding to liquidate their assets. Bitcoin

54:25

whales, individuals or

54:28

entities holding vast quantities

54:30

of Bitcoin, possess the

54:32

capacity to influence the

54:34

cryptocurrency market through their

54:36

sheer trading volume and

54:38

holdings. Two such

54:40

whale wallets dormant for

54:42

a decade sprang to life

54:45

on May 12th of 2024, transferring a

54:49

combined 1,000 Bitcoin. On September

54:52

12th and 13th of 2013, when Bitcoin was trading

54:59

at $124, each of these two wallets received 500 Bitcoin,

55:01

which was valued

55:07

at $62,000 back then. In another

55:10

noteworthy event on May 6th, a Bitcoin

55:15

whale moved $43.893 million

55:17

worth of Bitcoin to

55:20

two wallet addresses. This

55:27

whale had remained inactive for over

55:30

10 years, having initially received the

55:32

Bitcoin on January 12th 2014 when

55:34

it traded at $917. This is

55:36

why it's hard though

55:42

because had you had those

55:44

50 Bitcoin when it got to say worth

55:46

$100,000 you would have

55:48

for sure sold it. You might have

55:50

said I'll take the money, that's great, I'm happy. Right

55:55

and that's why last week's podcast about when

55:57

is a bad pseudo random number

55:59

generated. Right. A good thing. It

56:02

kept that guy from decrypting his

56:04

wallet and selling his Bitcoin until,

56:07

you know, far later

56:09

when it became, you know, worth paying

56:11

some hackers, we don't know what percentage

56:13

they took, but you know, it

56:16

would have been nothing if this guy had

56:18

them crack his password. Many have offered to

56:20

crack my password, all have failed because it's

56:23

probably a good password and

56:25

you know, if

56:27

it's a random password,

56:29

and it's done well, which it was,

56:32

you know, using the Bitcoin

56:35

wallet, it's

56:38

virtually impossible to, you

56:40

know, brute force. So, uh,

56:42

salted and memory-hard and slow

56:44

to do and so forth. But someday,

56:46

you know, I figure this

56:49

is just a forced savings account. Someday those

56:51

eight Bitcoin will be mine. I

56:54

don't know. I'll guess the password because I must have

56:56

come up with something I know. Why

56:59

wouldn't I record it? Right? I'm sure

57:01

I know mine. Yeah.

57:03

That's what's then. Yeah. Yes.

57:07

Back then we were not fully up

57:09

to speed on generating passwords at random

57:11

and having password managers hold onto them.

57:13

Right. So I could

57:16

guess my own password. I've tried all

57:18

of the dopey passwords I used by

57:20

wrote back in the day and none

57:22

of those worked. So maybe

57:24

I was, Did you rule out monkey one, two,

57:26

three? I did immediately. Okay. It's

57:29

the first one I tried. Oh,

57:33

well, those eight Bitcoin are just going to sit there

57:35

for a while. It's interesting that 50 is enough to

57:37

make a news story. That's

57:39

really amazing. Yes. And

57:41

Leo, there are, there is so

57:43

much Bitcoin that has been lost.

57:46

So many people did this. Right.

57:49

I'm, you know, I'm not a unique, you

57:51

know, hard luck case at

57:53

all. And besides I'm doing fine.

57:56

So, but a lot of people and

57:58

remember when we had some of our

58:00

listeners come up to us in Boston when

58:03

we were there for the Boston event. There

58:06

was one guy in particular who said

58:08

thank you for that podcast. I retired

58:10

a long time ago. Oh

58:12

my gosh. Oh my gosh. Thanks to listening

58:14

to the Security Now podcast. What you said

58:17

made a lot of sense. I

58:19

got going. I mined a bunch

58:21

of Bitcoin and I don't have to work anymore

58:23

for the rest of my life. That's

58:26

just good luck. Good fortune. Well, yeah.

58:29

Nice. Okay.

58:31

So on the topic of astonishing

58:33

achievements by mankind and

58:36

not cracking your password. I

58:40

wanted to share a brief update

58:42

on the status of what has

58:44

now become the Voyager one interstellar

58:46

probe. NASA's

58:48

JPL wrote NASA's Voyager

58:51

one spacecraft is conducting normal

58:53

science operations for the first

58:55

time following a technical issue

58:58

that arose back in November

59:01

of 2023. The team partially resolved

59:04

the issue in April when they

59:06

prompted the spacecraft to begin returning

59:09

engineering data, which includes information about

59:11

the health and status of the

59:13

spacecraft. On May 19th, the

59:17

mission team executed the second

59:19

step of that repair process and beamed

59:21

a command to the spacecraft to begin

59:23

returning science data. Two

59:26

of the four science instruments

59:28

returned to their normal operating

59:30

modes immediately. Two

59:32

other instruments required some additional

59:35

work. But now all

59:38

four are returning

59:40

usable science data. The

59:43

four instruments study plasma

59:45

waves, magnetic fields and

59:47

particles. Voyager one

59:49

and Voyager two are the only

59:51

spacecraft to directly sample interstellar

59:54

space, which is the

59:56

region outside the heliosphere, the protective

59:58

bubble of magnetic and solar

1:00:00

wind created by the Sun. While

1:00:03

Voyager 1 is back to conducting science,

1:00:05

additional minor work is needed to clean

1:00:07

up the effects of the issue. Among

1:00:10

other tasks, engineers will re-synchronize

1:00:13

timekeeping software in the spacecraft's

1:00:15

three onboard computers so they

1:00:17

can execute commands at the

1:00:20

right time. The team

1:00:22

will also perform maintenance on the digital

1:00:24

tape recorder, which records some

1:00:27

data for the plasma wave

1:00:29

instrument, which is sent to

1:00:31

Earth twice per year. Most

1:00:34

of the Voyager science data is

1:00:36

beamed directly to Earth, not recorded

1:00:38

onboard. Voyager 1 now

1:00:41

is more than 15 billion miles,

1:00:43

24 billion kilometers, from Earth. And

1:00:48

Voyager 2 is more than 12 billion

1:00:51

miles, 20 billion kilometers

1:00:53

from us. The

1:00:56

probes will mark 47 years of operations

1:00:58

later this year. They're

1:01:06

NASA's longest running and most

1:01:08

distant spacecraft. We were young

1:01:11

men at the time. Yes,

1:01:14

Leo. Just children. Just

1:01:17

we thought we were going to be able

1:01:19

to understand all this one day. And

1:01:23

there's more to understand now than there was then.

1:01:26

Well, that's fun. It's

1:01:29

not like everything's a solved problem anymore.

1:01:32

Nope, we don't have to worry about that. Speaking

1:01:35

of solved problems, everything is going

1:01:38

well with GRC's email system and

1:01:40

I'm nearly finished with my work

1:01:43

on it. The work I'm

1:01:45

finishing up is automation for sending

1:01:47

the weekly security now email. So

1:01:50

I'm able to do it before the

1:01:52

podcast while being prevented

1:01:54

from making any dumb errors like

1:01:56

you know, forgetting to update

1:01:58

the names of links and

1:02:01

so forth. I'm about a day or two away

1:02:03

from being able to declare that that work is

1:02:05

finished and I should mention just

1:02:07

shy of 5,000 listeners

1:02:10

already have the email

1:02:12

describing today's podcast with

1:02:15

a thumbnail of the

1:02:17

show notes that they can click on

1:02:19

to get the full-size show notes, a

1:02:22

link to the the entire show notes

1:02:24

text that you and I have Leo

1:02:27

and then also a bullet-pointed summary of

1:02:29

the things we're talking about. So that's

1:02:31

all working. Last

1:02:34

week's announcement that I'd

1:02:36

started sending out weekly podcast

1:02:38

summaries generated renewed interest and

1:02:40

questions from listeners both

1:02:42

via Twitter or forwarded to me through

1:02:44

Sue and Greg and

1:02:47

these were listeners who had apparently been

1:02:49

waiting for the news that something was

1:02:51

actually being sent before deciding

1:02:53

to subscribe to these weekly

1:02:55

summary mailings. So now they

1:02:58

wanted to know how to do that. All

1:03:00

anyone needs to know is that

1:03:02

at the top of every page

1:03:04

at GRC is a shiny new

1:03:07

white envelope labeled

1:03:09

email subscriptions. Just

1:03:11

click that to begin the process. If

1:03:13

you follow the instructions presented at each

1:03:16

step a minute or two later you'll

1:03:18

be subscribed and remember that

1:03:20

if your desire is not to

1:03:23

subscribe to any of the lists

1:03:25

but to be able to bypass

1:03:27

social media to send email directly

1:03:29

to me you're welcome to leave

1:03:32

all of the subscription check boxes

1:03:34

unchecked when you press the update

1:03:36

subscriptions button. That will

1:03:38

serve to confirm your email address

1:03:41

which then allows you to send

1:03:43

feedback, email, pictures of the week,

1:03:45

suggestions and whatever else you like

1:03:47

directly to me by

1:03:50

just writing to securitynow at

1:03:53

grc.com. Finally

1:03:55

I wanted to note that

1:03:57

the email today's subscribers have already

1:03:59

received from me was 100% unmonitored,

1:04:05

as I expect all future email will

1:04:08

be. So I won't

1:04:10

know whether those emails are opened or

1:04:12

not. I've also removed

1:04:14

all of the link redirections from

1:04:16

GRC's email, so that clicks are

1:04:19

also no longer being counted. This

1:04:22

makes the mailings completely blind, but

1:04:24

it also makes for cleaner and

1:04:26

clearer email. Some

1:04:28

of our listeners, as I mentioned last

1:04:30

week, were objecting to their clients warning

1:04:33

them about being tracked, even though I

1:04:35

still don't think that's a fair use

1:04:37

of a loaded term when

1:04:39

the email has been solicited by

1:04:41

the user, and if the notification

1:04:43

only comes back to me. I

1:04:46

would never have bothered, frankly, to put any

1:04:48

of that in if I'd

1:04:51

written the system myself from scratch,

1:04:53

but it was all built into

1:04:55

the bulk mailing system I purchased,

1:04:57

and it is so slick, and

1:04:59

it has such lovely graphical displays

1:05:02

with pie charts and bar charts

1:05:04

and flow charts, and it was

1:05:06

so much fun to look at

1:05:08

while it was new. And

1:05:11

frankly, I didn't anticipate the level

1:05:13

of backlash that doing this would

1:05:16

produce, but then this

1:05:18

is not your average crowd, is it? So

1:05:22

we're all SecurityNow listeners. And

1:05:24

by the way, the average crowd probably knows this, but

1:05:26

I will reiterate this.

1:05:30

You could go get this PHP program

1:05:32

yourself, but the chances are your internet

1:05:34

service provider would immediately block it. You

1:05:36

have some sort of special

1:05:38

relationship with level three or somebody that

1:05:40

allows you to send 5,000

1:05:42

emails out at once. No other internet

1:05:44

service provider would allow that. Well,

1:05:47

no consumer ISP, right? So

1:05:51

anybody who has any of our people

1:05:53

in corporations who

1:05:56

have a regular connection to

1:05:58

the internet, you

1:06:00

know, not through Cox or through, you

1:06:02

know, any of

1:06:04

the consumer ISPs. But anyway,

1:06:07

the first two mailings I've done so

1:06:10

far, which did contain link monitoring, provided

1:06:14

some interesting feedback. For example,

1:06:17

three times more people clicked

1:06:19

to view the full-size picture of the

1:06:22

week than clicked to view the show

1:06:24

notes. Now, in

1:06:26

retrospect, that makes sense, right? Because

1:06:29

most people will be listening to

1:06:31

the podcast audio, but they're still

1:06:33

curious to see the picture of

1:06:35

the week, which, you know, we

1:06:37

have fun describing each week. And

1:06:39

in any event, I'm over it now.

1:06:42

No more single pixel fetches

1:06:44

with its attendant email

1:06:47

client freak out or anything else

1:06:49

that might be controversial. What you

1:06:51

do with any email you receive

1:06:53

from me is entirely up

1:06:55

to you. I'm just

1:06:57

grateful for everyone's interest. There's

1:07:00

also an issue with those

1:07:02

invisible pixels. Most

1:07:05

good email clients, certainly all the ones I use,

1:07:08

don't ever load them. They know they're there.

1:07:10

They don't warn me. I don't get a

1:07:12

warning. They just go, oh well, a lot

1:07:14

of our listeners do. Apparently they do. That's

1:07:16

probably Outlook. Yeah. But you know, most email

1:07:18

clients just go, yeah sure. Yeah.

1:07:22

Anyways, that's all gone. Now,

1:07:25

one thing I've been wanting to do, and

1:07:27

I've been waiting until I knew I could,

1:07:30

was to give a shout out to

1:07:32

the emailing system I chose to use.

1:07:35

I've been utterly and

1:07:38

totally impressed by its design,

1:07:40

its complete feature set,

1:07:42

its maturity, and the author's

1:07:45

support of his system. And

1:07:48

I have to say, I feel somewhat embarrassed

1:07:50

over what I've received in return for a

1:07:52

one time purchase payment of $169. This thing

1:07:55

is worth far more than

1:08:00

that. Now, because I'm

1:08:02

me, I insisted upon

1:08:04

writing my own subscription management

1:08:06

front end. Although I have

1:08:09

to say this package's author, a Greek guy

1:08:11

whose first name is Panos, and I can't

1:08:14

even begin to pronounce his last name because

1:08:16

it's about 12 inches

1:08:18

long. He has

1:08:20

no idea why I've done,

1:08:23

you know, my own

1:08:25

subscription management front end. He thinks

1:08:27

I'm totally nuts because his system

1:08:29

as delivered does all of that

1:08:31

too. But as Frank Sinatra famously

1:08:33

said, I did it

1:08:35

my way. I wanted to, you know,

1:08:37

have it look like, you know, GRC's

1:08:40

pages that our users interacted with. So

1:08:43

nuevoMailer, which is

1:08:45

spelled N-U-E-V-O-M-A-I-L-E-R. It's

1:08:51

an open source, PHP based

1:08:54

email marketing management and mailing

1:08:56

solution. It runs beautifully

1:08:58

under Windows, Unix, or

1:09:01

Linux. To help anyone

1:09:03

who might have any need to

1:09:05

create an email facility for their

1:09:07

organization or their company or whatever

1:09:09

from scratch or replace one that

1:09:11

you're not happy with, I made

1:09:15

it this episode's GRC

1:09:17

shortcut of the week.

1:09:19

So GRC.SC slash 981

1:09:22

will bounce

1:09:24

you over to www.nuevomailer.com.

1:09:33

I've had numerous back and

1:09:36

forth dialogues with Panos because

1:09:38

I've been needing to customize

1:09:40

some of the RESTful APIs

1:09:43

which his package publishes. I've

1:09:45

actually extended his API for

1:09:47

my own needs. But,

1:09:50

you know, for example, a new feature

1:09:52

that's present in the email everyone received

1:09:54

from me today for the first time

1:09:56

provides a direct link back to

1:09:59

everyone's own email

1:10:01

subscription management page so you can

1:10:03

click it and immediately be looking

1:10:05

at all of the lists and

1:10:08

add or remove yourself. To

1:10:11

do that, I needed to modify some of

1:10:13

his code. So

1:10:15

I can vouch for the support

1:10:17

he provides and as I've said

1:10:19

I felt somewhat guilty about paying

1:10:21

so little when I've received so

1:10:23

much. I mean, this is GRC's

1:10:25

email system moving forward

1:10:28

forever. So, you know,

1:10:30

I'm aware that telling this podcast's

1:10:32

listeners about his work will, I

1:10:35

hope, help him. All I

1:10:37

can say is that he deserves

1:10:39

every penny he makes. There are thousands,

1:10:42

literally thousands of bulk mailing solutions out

1:10:44

in the world. This one

1:10:46

allows you essentially to roll your own

1:10:48

and I'm very glad I chose it.

1:10:51

Most people will use something

1:10:53

like, you know, what is

1:10:55

it, Mailchimp

1:10:58

or Constant Contact

1:11:00

because they do the mailing and they've you

1:11:02

know, arranged with whoever's doing their mailing to

1:11:05

send out tens of thousands of emails at

1:11:07

once. But yeah, most consumer ISPs

1:11:09

won't let you mail anything like that

1:11:11

at all. No, no, no

1:11:14

in fact, they block port 25,

1:11:16

right? Which is SMTP.

1:11:18

Basically, he

1:11:20

has a very limited set of possible

1:11:22

customers. So you should use it

1:11:24

if you can. Yeah, absolutely. Okay,

1:11:29

a bit of errata and then we're gonna

1:11:31

take our next break. Last week's podcast drew

1:11:34

heavily on two articles written

1:11:36

by Kim Zetter. It's

1:11:39

embarrassing that I've been reading,

1:11:42

appreciating, and sharing Kim's writing for

1:11:44

years but never stopped

1:11:46

to wonder whether Kim would probably

1:11:48

associate with the pronoun he or

1:11:50

she. Her

1:11:53

quite attractive Wikipedia photo strongly

1:11:55

suggests that she would opt

1:11:58

for she as will

1:12:00

I from now on. Did you call her him

1:12:02

last time? I think I must

1:12:04

have because somebody said, hey Gibson. She's

1:12:06

a she. Yeah. What

1:12:09

are you talking about? I get the pronouns right these

1:12:11

days. I want to hear what

1:12:13

you have to say about SyncThing because I still use

1:12:15

it like crazy and I'm worried now that there's something I

1:12:18

should be worried about, but that's after all why

1:12:21

we listen to the show, isn't it? So

1:12:23

let's take a break and then I will come back and

1:12:26

you can explain what I need to do to

1:12:28

keep my SyncThing in sync or

1:12:31

something. With your thing. Do

1:12:33

my thing with the SyncThing. This

1:12:36

episode of Security Now brought to you by

1:12:38

Panoptica. Panoptica is

1:12:40

Cisco's cloud application security

1:12:43

solution and it provides end-to-end life

1:12:45

cycle protection for cloud native application

1:12:47

environments. More and more we're moving

1:12:49

to the cloud these days and

1:12:52

Cisco Panoptica is ready and

1:12:54

willing to protect you. It

1:12:56

empowers organizations to safeguard everything

1:12:58

about their cloud implementations, their

1:13:00

APIs, their serverless functions, containers,

1:13:03

their Kubernetes environments. Panoptica

1:13:05

ensures comprehensive cloud security,

1:13:08

compliance and monitoring at

1:13:10

scale, offering deep

1:13:12

visibility, contextual risk assessments and

1:13:15

actionable remediation insights for

1:13:18

all your cloud assets.

1:13:20

Inspired by graph-based technology,

1:13:22

Panoptica's attack path engine

1:13:24

prioritizes and offers dynamic

1:13:26

remediation for vulnerable attack

1:13:28

factors, helping security teams

1:13:30

quickly identify and

1:13:33

remediate potential risks across

1:13:35

cloud infrastructures. A

1:13:37

unified cloud native security platform has a

1:13:39

lot of benefits. It minimizes gaps

1:13:42

that might arise with multiple solutions. You know this

1:13:44

does that much, that much, but then there's a

1:13:46

big hole right in the middle. More

1:13:49

providing a complex variety

1:13:52

of management consoles, none of which really

1:13:54

look like the other one. With

1:13:56

Cisco's Panoptica you get a centralized

1:13:59

management and you can see it and

1:14:01

of course you don't have that problem of fragmented

1:14:03

systems causing real

1:14:06

issues with your network. Panoptica

1:14:09

utilizes advanced attack path

1:14:11

analysis, root cause analysis,

1:14:14

and dynamic remediation techniques to reveal

1:14:16

potential risks from an attacker's point

1:14:19

of view. This approach

1:14:21

identifies new and known risks emphasizing

1:14:24

critical attack paths and their

1:14:26

potential impact. Panoptica

1:14:28

provides several key benefits for

1:14:30

businesses at any stage of

1:14:33

cloud maturity including advanced CNAPP,

1:14:35

multi-cloud compliance, end-to-end

1:14:38

visualization, the ability

1:14:40

to prioritize with precision and

1:14:42

context, dynamic remediation, and

1:14:45

increased efficiency with reduced overheads. It's

1:14:48

everything you need. Visit panoptica.app to

1:14:50

learn more. We

1:14:54

thank Cisco

1:14:58

and Panoptica for their support

1:15:02

of security now. On

1:15:05

we go with the show Mr. G. So

1:15:07

I wanted to note that while I

1:15:09

am still a big fan of SyncThing

1:15:12

lately I had been noticing a great

1:15:14

deal of slowdown in its

1:15:17

synchronization relay servers. I

1:15:20

don't think they used to be so slow. I'm

1:15:22

unable to get more than 1.5 to 1.8 megabits of traffic

1:15:24

through them. While

1:15:29

it's not possible to obtain a direct

1:15:32

end-to-end, or I should say

1:15:35

when it's not possible to

1:15:37

obtain a direct end-to-end connection

1:15:39

between SyncThing endpoints, an

1:15:42

external third-party relay server

1:15:44

is required to handle

1:15:46

their transit traffic. SyncThing

1:15:49

is super well encrypted so that's not

1:15:51

the issue. The issue is the performance

1:15:54

of this solution. Since

1:15:56

this problem has persisted or was

1:15:58

persisting, for me for several

1:16:00

weeks, my assumption is

1:16:03

that SyncThing's popularity has been growing

1:16:05

and actually we know it has,

1:16:08

and is loading down

1:16:10

their relay server infrastructure, which

1:16:12

after all they just provide for

1:16:14

free. No one's paying anything for

1:16:16

this. At one

1:16:18

point in the past I had

1:16:21

arranged for point-to-point connections between my

1:16:23

two locations, but some

1:16:25

network reconfiguration had broken that.

1:16:28

My daytime work location has a machine that runs

1:16:30

24-7, but I shut down my

1:16:34

evening location machine at the

1:16:37

end of every evening's work.

1:16:39

The trouble was that synchronization

1:16:41

to that always-on machine

1:16:44

had become so slow that I

1:16:46

was needing to leave my evening

1:16:48

machine running unattended for several hours

1:16:51

after I stopped working on it,

1:16:53

waiting for my evening's work to

1:16:55

trickle out and be synchronized with

1:16:57

the machine I'd be using the

1:17:00

next morning. I finally

1:17:02

became so, you know, this problem

1:17:04

finally became so intolerable that I

1:17:07

sat down and punched

1:17:09

remote IP filtered holes

1:17:11

through my firewalls at

1:17:13

each endpoint. Even

1:17:17

if pfSense's firewall rules

1:17:19

were not able to track

1:17:21

public domain names, as they

1:17:23

are, the public IPs of our

1:17:25

cable modems, for example, changed

1:17:27

so rarely that even

1:17:29

statically opening an incoming

1:17:32

port to a specific

1:17:34

remote public IP is

1:17:36

practical. Once I

1:17:38

punched those holes, SyncThing was able

1:17:40

to make a direct point-to-point connection

1:17:42

once again and my synchronization

1:17:45

is virtually instantaneous. So

1:17:49

I just wanted to give a heads

1:17:51

up to anyone who may be

1:17:53

seeing the same dramatic slowdown that

1:17:56

I was seeing with the use

1:17:58

of their relay server infrastructure.

1:18:00

You know, it is

1:18:03

an amazingly useful free service.

1:18:05

And frankly, helping it to

1:18:08

establish direct connections between endpoints

1:18:11

also helps to keep the relay servers

1:18:13

free, you know, freed up for those

1:18:15

who really need them. So

1:18:18

that was the issue, Leo, it was just that the

1:18:21

use of a third-party relay server

1:18:24

had recently really ground to

1:18:26

a near halt. Yeah,

1:18:30

I haven't noticed it, but you

1:18:32

have a much more complicated setup than I do.

1:18:34

So yeah, I've got like double NAT and all

1:18:37

kinds of other crazy stuff going on that really

1:18:39

make it, you know, a little extra difficult. But

1:18:42

for what it's worth, it's I

1:18:44

guess my point is it's worth

1:18:46

taking the time, if you are

1:18:48

not seeing a direct WAN, they

1:18:51

call it a WAN connection in

1:18:53

the UI with the IP of

1:18:56

your remote node. Instead,

1:18:58

you see some mention of relay servers.

1:19:00

That's not good. Well, you probably already

1:19:02

know how slow things are going. Right.

1:19:05

The point is, it's worth

1:19:07

taking the time to resolve

1:19:10

that and then syncing is

1:19:12

just instantaneous. Yeah.
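
As a rough illustration of the direct-connection point above, here is a minimal sketch that simply tests whether the remote SyncThing port is reachable at all before blaming the relays. It assumes SyncThing's default TCP sync port of 22000 and uses a placeholder address for the far-end machine; both are assumptions for illustration, not details from the episode.

    import socket

    REMOTE_HOST = "203.0.113.10"   # placeholder: the far end's public IP
    SYNC_PORT = 22000              # SyncThing's default TCP sync port

    def direct_connection_possible(host: str, port: int, timeout: float = 5.0) -> bool:
        # A plain TCP connect is enough to prove the firewall hole is open;
        # SyncThing itself handles encryption once it connects.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if direct_connection_possible(REMOTE_HOST, SYNC_PORT):
        print("Remote sync port reachable; a direct WAN connection should work.")
    else:
        print("Port unreachable; SyncThing will likely fall back to a relay.")
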

1:19:15

Matt St. Clair Bishop wrote

1:19:17

saying Hello, Steve, I've been a listener of

1:19:19

security now for some years now. However,

1:19:22

as I've edged closer to making

1:19:25

my own utilities publicly available, my

1:19:27

mind has turned to my method of updating

1:19:30

them. I think in

1:19:32

my dim and distant memory, I

1:19:35

remember you saying that you used

1:19:37

a simple DNS record to hold

1:19:39

the latest edition of each of

1:19:42

your public releases. And

1:19:44

the client software inspects that

1:19:46

record it being very

1:19:48

simple and efficient mechanism to

1:19:50

flag available updates. Could

1:19:53

you elaborate at all if you

1:19:55

have a spare section in your

1:19:57

podcast. I'm personally using C

1:19:59

sharp and the .NET framework

1:20:01

as I'm a Windows only guy. So

1:20:04

if you could paint the broad strokes, I

1:20:06

should be able to Google the C

1:20:08

sharp detail. Spinrite user,

1:20:11

loving all your efforts in this

1:20:13

field, Matt St. Clair

1:20:15

Bishop. Okay, so Matt's

1:20:17

correct about my

1:20:19

use of DNS and

1:20:21

I am pleased with the way that capability

1:20:23

has turned out. Anyone who

1:20:26

has the ability to look

1:20:28

up the IP address, for

1:20:30

example, for

1:20:33

validrive.rel.grc.com, will

1:20:38

find that it returns 239.0.0.1. This

1:20:45

is because ValiDrive is still at

1:20:47

its first release. When

1:20:50

I've released an update to ValiDrive,

1:20:53

it will be release number two. And

1:20:56

I'll change the IP address of

1:20:58

validrive.rel, as

1:21:01

in release.grc.com to

1:21:03

239.0.0.2. Whenever

1:21:09

an instance of ValiDrive is launched

1:21:11

by any user anywhere in the

1:21:13

world, it performs a

1:21:15

quick DNS lookup of its own

1:21:18

product name, validrive,

1:21:20

in front of .rel.grc.com, and

1:21:25

verifies that the release number

1:21:27

returned in the lower byte

1:21:30

of the IP address is

1:21:32

not higher than its own current

1:21:34

release number. If it is, it

1:21:37

will notify its user that a

1:21:39

newer release exists. What's

1:21:42

convenient about this, I mean, there are many

1:21:44

things about it. There's no

1:21:46

massive flood of queries coming

1:21:48

in all over the internet.

1:21:50

It also provides all of

1:21:52

its users the anonymity of

1:21:54

making a DNS query as

1:21:56

opposed to coming back to

1:21:58

GRC. So there's... that too. But

1:22:02

this version checking is also performed

1:22:04

by a simple DNS

1:22:06

query packet, and that

1:22:09

DNS is distributed and caching. So

1:22:12

it's possible to set a

1:22:14

very long cache expiration to

1:22:17

allow the cached knowledge of

1:22:19

the most recent version of

1:22:21

ValiDrive to be spread

1:22:23

out across the internet, varying widely

1:22:25

with when each

1:22:27

cache expires. This

1:22:30

means that when the release

1:22:32

number is incremented, the notifications

1:22:34

of this event will also

1:22:36

be widely distributed in time

1:22:39

as those local caches expire. This

1:22:42

prevents everyone on the internet from coming

1:22:44

back all at once to get the

1:22:47

latest version. And typically, it's

1:22:49

not a matter of any urgency. And

1:22:52

to Matt's question and point, I've

1:22:56

never encountered a language that

1:22:58

did not provide some relatively

1:23:01

simple means for making a DNS

1:23:03

query. I know that C sharp

1:23:06

and .NET make this trivial. So

1:23:09

anyway, that's the story on

1:23:11

that. Oh, and

1:23:13

I should mention that 239 is obviously

1:23:15

a huge block of

1:23:21

IPs which have been set

1:23:23

aside. That's the high end

1:23:26

of the multicast address space,

1:23:28

but the 239 block specifically

1:23:30

is non-routable. So those

1:23:33

IPs will never and can never go anywhere.

1:23:35

So that's why I chose 239 as the

1:23:37

first byte of the IP in the DNS

1:23:43

for my release management.
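
Here is a minimal sketch of the version check described above, in Python rather than the C# Matt asked about, since the idea is identical in any language: look up the product's release record in DNS and compare the low byte of the returned 239.0.0.x address against the release number compiled into the program. The hostname pattern follows the description above; the constant and function names are illustrative only.

    import socket

    PRODUCT = "validrive"     # product name from the example above
    CURRENT_RELEASE = 1       # release number built into this copy

    def latest_release(product: str) -> int:
        # The release record lives at <product>.rel.grc.com and resolves to a
        # non-routable 239.0.0.x address whose low byte is the release number.
        ip = socket.gethostbyname(f"{product}.rel.grc.com")
        return int(ip.split(".")[-1])

    if latest_release(PRODUCT) > CURRENT_RELEASE:
        print("A newer release is available.")
    else:
        print("This copy is current.")

Because the answer rides on an ordinary cached DNS record, the long TTL described above means these checks trickle back to the authoritative server slowly rather than all at once.
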

1:23:48

A listener of ours named Carl sent

1:23:50

email to me at securitynow

1:23:52

at grc.com. He said, hi, Steve.

1:23:55

Much has been discussed over the recent

1:23:57

weeks on your podcast about the upcoming

1:23:59

Windows Recall feature and

1:24:02

its value proposition versus security

1:24:04

and privacy concerns. It has

1:24:07

been suggested that the concept

1:24:09

started as a productivity assistant

1:24:11

that uses AI to index

1:24:13

and catalog everything on your

1:24:16

screen and may

1:24:18

be more applicable in an office

1:24:20

environment than at home. However,

1:24:23

I think it was

1:24:25

just as likely that this

1:24:27

concept first started as a

1:24:30

productivity monitoring tool where

1:24:32

corporate management can leverage recall

1:24:34

to ensure employees are using

1:24:36

their screen time doing real

1:24:39

actual work. Of course, Microsoft

1:24:42

realizes they can't possibly

1:24:44

market recall this way, so

1:24:47

here we are. He said,

1:24:49

I dread the day recall is

1:24:51

installed on my work computer. Signed,

1:24:54

Carl. Bad news, Carl. Microsoft

1:24:57

already has a product that does that called

1:24:59

Viva. They don't need

1:25:02

another one. They monitor you

1:25:04

all the time. Anyway, Carl's

1:25:06

take on this, you know, it

1:25:08

aligned with the evil empire

1:25:11

theory, which as we know I

1:25:13

don't subscribe to. I

1:25:15

would say that recall

1:25:17

itself is ethically

1:25:20

neutral. It's like

1:25:22

the discovery of the chain

1:25:24

reaction in the fission of

1:25:26

atomic nuclei. That discovery

1:25:28

can be used to generate needed

1:25:30

power or to make a really

1:25:32

big bomb. But the chain

1:25:34

reaction itself is just the physics of

1:25:36

our universe. Similarly, recall

1:25:39

is just a new capability

1:25:41

which could be used to

1:25:43

either help or to hurt

1:25:45

people. Could

1:25:47

employers use it

1:25:49

to scroll back through their employees'

1:25:51

timeline to see what they've been

1:25:54

doing on enterprise owned machines? That's

1:25:57

not yet clear. There are

1:25:59

indications that Microsoft

1:26:01

is working to make

1:26:03

that impossible. But

1:26:06

we know that as it was first

1:26:08

delivered, it would have been entirely possible.

1:26:11

It appears that Microsoft desperately

1:26:13

wants to bring recall to

1:26:15

their Windows desktops. It

1:26:18

would be incredibly valuable as

1:26:21

training material for a local

1:26:23

AI assistant and

1:26:25

to deeply profile the

1:26:27

desktop user as

1:26:30

a means for driving advertising selection

1:26:32

in a future ad supported

1:26:34

Windows platform. So

1:26:37

I suspect they will be doing

1:26:39

anything and everything required to make

1:26:41

it palatable. And

1:26:43

as I said, they already have an enterprise product that

1:26:45

does that. Right.

1:26:47

For that is deployed in

1:26:50

business group policy. Right. Yeah.

1:26:53

Right. Okay. So

1:26:57

this week I want to share the

1:26:59

story and the backstory of

1:27:02

the web browser community again bidding

1:27:05

a less than fond

1:27:08

farewell to yet another

1:27:10

significant certificate authority as

1:27:12

we'll see, as

1:27:15

a result of

1:27:17

what appears to be

1:27:19

a demonstration

1:27:23

of executive arrogance.

1:27:27

Entrust is one of the

1:27:29

oldest original certificate authorities. After

1:27:31

six years of being

1:27:34

pushed, prodded, and encouraged to

1:27:36

live up to the responsibilities

1:27:38

that accompany the right to

1:27:41

essentially print money by

1:27:43

charging to encrypt the

1:27:45

hash of a blob of

1:27:48

bits, the rest of

1:27:50

the industry that proactively

1:27:52

monitors and manages the

1:27:54

behavior of those who

1:27:56

have been, dare I

1:27:58

say, entrusted to do

1:28:01

this responsibly, finally

1:28:04

reached its limit, and

1:28:07

Google announced last Thursday

1:28:09

that Chrome would be

1:28:11

curtailing its trust of

1:28:13

Entrust from its browser's

1:28:15

root store. Okay,

1:28:18

so signing and

1:28:20

managing certificates is by no

1:28:23

means rocket science. There's

1:28:26

nothing mysterious or particularly challenging

1:28:28

about doing it. It's

1:28:31

mostly a clerical activity, which

1:28:34

must follow a bunch of

1:28:36

very clearly spelled out rules

1:28:39

about how certificates are formatted

1:28:41

and formulated and what information

1:28:43

they must contain. These

1:28:46

rules govern how the certificates

1:28:48

must be managed and what

1:28:50

actions those who signed them

1:28:53

on behalf of their customers must

1:28:55

do when problems arise.

1:28:58

And just as significantly, the rules

1:29:01

are arrived at and

1:29:04

agreed upon collectively. The

1:29:06

entire process is a somewhat

1:29:08

amazing model of self-governance.

1:29:13

Everyone gets a say, everyone

1:29:15

gets a vote, the rules

1:29:17

are adjusted in response to the

1:29:19

changing conditions in our changing world,

1:29:21

and everyone moves

1:29:23

forward under the updated guidance.

1:29:26

This means that when someone

1:29:29

in this collective misbehaves, they're

1:29:31

not pushing back against

1:29:34

something that was imposed upon them.

1:29:37

They are ignoring the rules

1:29:39

that they voted to change

1:29:42

and agreed to follow.

1:30:00

the internet really is who we think

1:30:02

they are and not any

1:30:04

form of spoofed forgery. The

1:30:06

idea behind a certificate authority is

1:30:09

that while we may have no

1:30:11

way of directly confirming the identity

1:30:13

of an entity we don't know

1:30:15

across the internet, if

1:30:17

that entity can provide proof

1:30:20

that they have previously and

1:30:22

somewhat recently proven their

1:30:24

identity to a third party, a certificate

1:30:27

authority whose identity assertions

1:30:30

we do trust, then

1:30:33

by extension we can

1:30:35

trust that the unknown party is

1:30:38

who they say they are when

1:30:40

they present a certificate to that

1:30:42

effect signed by an

1:30:44

authority whom we trust. That's

1:30:47

all this whole certificate thing is

1:30:50

about. It's beautiful and

1:30:52

elegant in its simplicity, but

1:30:55

as the saying goes, the devil is in

1:30:57

the details and we're going

1:31:00

to see today those who

1:31:02

understand the importance of

1:31:04

those details can be

1:31:06

pretty humorless when

1:31:08

they are not only ignored but

1:31:11

flouted. The

1:31:14

critical key here is that

1:31:16

we are completely and solely

1:31:19

relying upon a certificate

1:31:21

authority's identity assertions where

1:31:25

any failure in such

1:31:27

an authority's rigorous verification of

1:31:30

the identity of their client customers

1:31:33

could have truly widespread

1:31:35

and devastating consequences. This

1:31:38

is one of the reasons I've always

1:31:40

been so impressed with the extreme patience

1:31:43

shown by the governing parties

1:31:45

of this industry in the

1:31:47

face of certificate authority misbehavior.

1:31:50

Through the years we've seen

1:31:52

many examples where a certificate

1:31:55

authority that's trusted really

1:31:57

needs to screw up over a

1:31:59

period of years and

1:32:02

actively resist improving their

1:32:04

game in order to

1:32:06

finally have the industry lower the

1:32:09

boom on them. No

1:32:11

one wants to do this

1:32:14

indiscriminately or casually because it

1:32:16

unilaterally puts the wayward CA,

1:32:18

the certificate authority, out

1:32:21

of the very profitable

1:32:24

browser-certificating business, overnight.

1:32:28

OK, so what happened?

1:32:32

In a remarkable show of

1:32:34

prescience, when things were

1:32:37

only just heating up, Feisty

1:32:39

Duck's cryptography and

1:32:42

security newsletter posted the

1:32:44

following, only a

1:32:46

few hours before Google

1:32:49

finally lowered the boom on Entrust.

1:32:53

Feisty Duck wrote, Entrust,

1:32:57

one of the oldest

1:32:59

certification authorities, is

1:33:01

in trouble with Mozilla and other

1:33:03

root stores. In

1:33:06

the last several years, going

1:33:08

back to 2020, there have

1:33:10

been multiple, persistent technical problems

1:33:12

with Entrust certificates. That's

1:33:15

not a big deal when it happens once,

1:33:18

or even a couple times, and

1:33:20

when it's handled well. But

1:33:23

according to Mozilla and others, it

1:33:25

has not been. Over

1:33:27

time, frustration grew.

1:33:31

Entrust made promises, which it then broke. Finally

1:33:34

in May, Mozilla compiled a

1:33:37

list of recent issues and

1:33:39

asked Entrust to please formally

1:33:42

respond. Entrust's

1:33:44

first response did not go

1:33:46

down well, being

1:33:49

non-responsive and lacking sufficient

1:33:51

detail. Sensing

1:33:53

trouble, it later provided

1:33:55

another response with more information.

1:33:59

We haven't seen a response back from

1:34:01

Mozilla, just ones from

1:34:03

various other unhappy members of the

1:34:05

community. It's clear

1:34:07

that Entrust's case has reached

1:34:09

a critical mass of unhappiness.

1:34:12

And that's really interesting because this

1:34:15

is really the point. All

1:34:17

it takes is a critical

1:34:19

mass of unhappiness. As

1:34:23

I said, four hours

1:34:25

after this was posted, Entrust

1:34:28

lost Google. And that's

1:34:30

losing the game, essentially, if you're

1:34:33

selling certificates for browsers. So

1:34:36

they said, we haven't heard from

1:34:39

other root stores yet. However,

1:34:41

at the recent CA

1:34:43

browser forum meeting, also in May,

1:34:46

Google used the opportunity

1:34:48

to discuss standards for

1:34:50

CA incident response. It's

1:34:53

not clear if it's just

1:34:55

a coincidence, but Google's presentation

1:34:57

uses pretty strong words that

1:34:59

sound like a serious warning

1:35:02

to Entrust and all other

1:35:04

CAs to improve or else.

1:35:08

Looking at the incidents themselves,

1:35:10

they're mostly small technical problems

1:35:12

of the kind that could

1:35:14

have been avoided with standardized

1:35:16

validation of certificates just prior

1:35:18

to issuance. And

1:35:20

I'll note later that I'll use the

1:35:23

term lint. Lint

1:35:25

is well understood in the developer

1:35:27

community. It means just running a

1:35:29

certificate through a lint filter to

1:35:31

make sure that there isn't any

1:35:34

lint, any debris,

1:35:36

any obviously like a

1:35:39

date set to

1:35:41

an impossible number or something

1:35:44

obviously missing that the standard says

1:35:46

should be there. Just

1:35:49

do it, but that

1:35:51

doesn't happen. They

1:35:53

said as it happens,

1:35:56

ballot SC95 focuses on

1:35:58

pre-issuance certificate linting.

1:36:02

If this ballot passes, linting

1:36:04

will become mandatory as of

1:36:06

March 2025, meaning

1:36:10

it's not there yet, but boy

1:36:12

after March, we're gonna

1:36:14

see some more booms lowered if

1:36:17

people don't lint by default. And

1:36:19

that means people are gonna

1:36:21

have to spend some time and spend

1:36:23

some money upgrading their

1:36:26

certificate issuing infrastructures.

1:36:29

They have not been bothering. Anyway,

1:36:32

they said it's a good first

1:36:34

step. Perhaps the CAB forum, you

1:36:37

know, the CA browser forum, will

1:36:39

in the future consider encoding

1:36:42

the baseline requirements into a

1:36:44

series of linting rules that

1:36:46

can be applied programmatically to

1:36:48

always ensure future compliance.
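
To make the kind of pre-issuance linting being discussed concrete, here is a toy sketch using Python's cryptography library. It is not Zlint, and it checks only two sample rules, a 398-day validity ceiling (the current baseline-requirements maximum for TLS certificates) and the presence of a subjectAltName extension; a real linter encodes hundreds of such rules.

    from datetime import timedelta
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    MAX_VALIDITY = timedelta(days=398)   # current TLS baseline-requirements ceiling

    def lint(pem_bytes: bytes) -> list[str]:
        # Return a list of problems a CA would want to catch before issuance.
        cert = x509.load_pem_x509_certificate(pem_bytes)
        problems = []
        if cert.not_valid_after - cert.not_valid_before > MAX_VALIDITY:
            problems.append("validity period exceeds 398 days")
        try:
            cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
        except x509.ExtensionNotFound:
            problems.append("missing subjectAltName extension")
        return problems

The whole point of the ballot is that a check like this runs automatically on every certificate before it is signed, so a rule violation never leaves the building.
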

1:36:53

Okay, now as I noted, a few

1:36:55

hours after Feisty Duck posted

1:36:57

this, Google made

1:37:00

their announcement last Thursday,

1:37:03

June 27th, the

1:37:05

Chrome Root Program and Chrome

1:37:07

security team posted the

1:37:09

following in Google's security

1:37:12

blog under the title sustaining

1:37:15

digital certificate

1:37:17

security, Entrust

1:37:20

certificate distrust.

1:37:23

And Leo, after taking our final

1:37:25

break, I will share what Google

1:37:27

wrote and the logic and

1:37:29

basically the

1:37:32

preamble that led up to this. Yeah,

1:37:35

as you say, you lose Google, you pretty much

1:37:37

lost the game. Game over. Game over,

1:37:39

man. Our show today

1:37:41

brought to you by Lookout. Why

1:37:44

have a game over when you're just getting

1:37:47

started? Every company today is a data company.

1:37:50

That means every company's

1:37:52

at risk. Cyber threats,

1:37:54

breaches, leaks. These are like, just

1:37:56

listen to the show, these are the new norm. And

1:37:59

of course, cybercriminals are becoming more sophisticated

1:38:02

by the minute. At

1:38:05

a time when boundaries for your data

1:38:07

no longer exist, what it

1:38:09

means for you to be secure, for your

1:38:11

data to be secure, has just fundamentally changed.

1:38:13

But that's why you need Lookout. From

1:38:15

the first phishing text to

1:38:17

the final data grab, Lookout

1:38:19

stops modern breaches as swiftly

1:38:22

as they unfold, whether it's on a device in the

1:38:24

cloud, across networks, or

1:38:27

working remotely at the local coffee shop.

1:38:29

Lookout gives you clear visibility into all

1:38:31

your data at rest and

1:38:34

in motion. You'll monitor, assess, and

1:38:36

protect without sacrificing productivity

1:38:38

for security. With a

1:38:40

single unified cloud platform,

1:38:42

Lookout simplifies and strengthens,

1:38:44

reimagining security for

1:38:46

the world that will be today.

1:38:49

Visit lookout.com today to

1:38:51

learn how to safeguard data, secure hybrid work,

1:38:54

and reduce IT

1:38:56

complexity. That's lookout.com.

1:38:59

Data protection from endpoint to cloud

1:39:02

to your happy place. Aww. Thank

1:39:04

you Lookout for supporting security now.

1:39:06

This is Steve Gibson. He's our happy place.

1:39:09

Back to the saga of

1:39:12

Entrust. So

1:39:14

Google wrote, the Chrome

1:39:16

security team prioritizes the security and

1:39:18

privacy of Chrome's users and

1:39:21

we are unwilling to compromise on

1:39:23

these values. The

1:39:25

Chrome root program states

1:39:27

that CA certificates included in

1:39:30

the Chrome root store must

1:39:32

provide value to Chrome end

1:39:34

users that exceeds the risk

1:39:37

of their continued inclusion. You

1:39:40

should hear a drumbeat in the background here. It

1:39:43

also describes many of the

1:39:45

factors we consider significant when

1:39:47

CA owners disclose and respond

1:39:50

to incidents. When things

1:39:52

don't go right, we expect

1:39:54

CA owners to commit

1:39:56

to meaningful and demonstrable

1:39:58

change, resulting in

1:40:01

evidenced continuous improvement. Over

1:40:04

the past few years, publicly

1:40:06

disclosed incident reports highlighted a

1:40:09

pattern of concerning

1:40:11

behavior by Entrust

1:40:14

that falls short of the

1:40:16

above expectations and

1:40:18

has eroded confidence in

1:40:21

their competence, reliability,

1:40:24

and integrity as a

1:40:26

publicly trusted CA owner. In

1:40:29

response to the above concerns and

1:40:31

to preserve the integrity of

1:40:33

the Web PKI ecosystem,

1:40:36

Chrome will take the following

1:40:38

actions. In

1:40:41

Chrome 127 and higher, TLS

1:40:44

server authentication certificates validating

1:40:46

to the following Entrust

1:40:49

roots whose

1:40:51

earliest signed

1:40:53

certificate timestamp is

1:40:56

dated after October 31,

1:40:58

2024 will

1:41:02

no longer be trusted

1:41:05

by default. Then

1:41:09

in Chrome's posting, they

1:41:11

enumerate the exact

1:41:14

nine root certificates that

1:41:16

Chrome has until now

1:41:19

trusted to be valid

1:41:22

signers of the

1:41:24

TLS certificates that remote web

1:41:26

servers present to their Chrome

1:41:28

browser. They continue

1:41:30

writing, TLS server

1:41:33

authentication certificates validating to the

1:41:35

above set of roots whose

1:41:37

earliest signed certificate timestamp is

1:41:39

on or before October 31,

1:41:41

2024 will not be

1:41:46

affected by this change. This

1:41:48

approach attempts to minimize

1:41:51

disruption to existing subscribers

1:41:54

using a recently announced Chrome

1:41:56

feature to remove default trust

1:41:59

based on the SCTs,

1:42:01

that's the signed certificate

1:42:03

timestamp, the signing date,

1:42:06

in certificates. Additionally,

1:42:08

should a Chrome user or

1:42:10

enterprise explicitly trust any of

1:42:13

the above certificates on a

1:42:15

platform and version of

1:42:17

Chrome relying on the Chrome root store,

1:42:21

the SCT-based constraints described above

1:42:24

will be overridden and

1:42:26

certificates will function as they do today. To further

1:42:29

minimize risk of disruption, website

1:42:32

owners are encouraged to review

1:42:34

the frequently asked question listed

1:42:36

below. Okay, so now, okay,

1:42:39

if Chrome were to yank, just

1:42:42

summarily yank, all nine

1:42:44

of those Entrust certs from their root

1:42:46

store, at that instant,

1:42:49

any web servers that

1:42:52

were using Entrust TLS

1:42:54

certificates would generate

1:42:56

those very scary untrusted

1:42:59

certificate warnings that

1:43:01

sometimes we see when someone allows

1:43:03

their certificate to expire by mistake.

1:43:06

And that makes it quite difficult to use

1:43:08

your browser. And most users just say, Whoa,

1:43:11

I don't know what this red flashing neon

1:43:14

thing is, but it's very scary. And

1:43:16

if you want to see that, you

1:43:18

can go right now to

1:43:21

untrusted-root.badssl.com.

1:43:23

And

1:43:25

there, what you will get is a deliberately

1:43:28

untrusted certificate. So you can see what

1:43:30

your browser does with

1:43:32

untrusted-root.badssl.com.
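
For anyone who would rather see that failure from a script than from a browser, this small sketch attempts an ordinary TLS connection to the badssl test host and prints the verification error an untrusted root produces. The host name comes from the discussion above; everything else is illustrative.

    import socket
    import ssl

    HOST = "untrusted-root.badssl.com"   # the test site mentioned above

    context = ssl.create_default_context()   # uses the platform's trusted roots
    try:
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=HOST):
                print("Unexpected: the certificate chain was accepted.")
    except ssl.SSLCertVerificationError as err:
        # Expected: the chain terminates at a root the client does not trust.
        print("Rejected as expected:", err.verify_message)
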

1:43:36

Okay, instead of

1:43:38

doing that, Chrome is

1:43:40

now able to keep those,

1:43:42

I guess I would call them semi-

1:43:45

trusted or time-based trusted

1:43:48

root certificates in their

1:43:50

root store in

1:43:53

order to continue trusting

1:43:55

any certificates Entrust previously

1:43:58

signed and will

1:44:01

sign during the next

1:44:03

four months. July,

1:44:05

August, September and October. Halloween

1:44:08

being the end of that. No

1:44:12

Entrust certificate signed

1:44:14

from November on will

1:44:16

be accepted by Chrome.
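
The SCT-based rule described above boils down to a simple date comparison, which the following sketch approximates. The October 31, 2024 cutoff and the idea of keying on a certificate's earliest signed certificate timestamp come from the announcement; the function itself is only an illustration, not Chrome's actual implementation.

    from datetime import date

    CUTOFF = date(2024, 10, 31)   # last signing date accepted for the listed roots

    def still_trusted(chains_to_listed_entrust_root: bool, earliest_sct: date) -> bool:
        # Certificates not chaining to the listed roots are unaffected;
        # the rest stay trusted only if logged on or before the cutoff.
        if not chains_to_listed_entrust_root:
            return True
        return earliest_sct <= CUTOFF

    print(still_trusted(True, date(2024, 10, 31)))   # True: squeaks in on Halloween
    print(still_trusted(True, date(2024, 11, 1)))    # False: no longer trusted
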

1:44:19

So that's good. That allows

1:44:21

Entrust four months to wind down

1:44:23

their services to decide, maybe

1:44:27

make a deal with some

1:44:30

other CA to, you know,

1:44:33

like purchase their existing customers and,

1:44:35

you know, transfer them

1:44:37

over. I would imagine that's what they'll

1:44:40

do. But there

1:44:42

could be no question that this

1:44:44

will be a devastating blow

1:44:46

for Entrust. Not

1:44:48

only will this shut down

1:44:51

completely their TLS certificate business,

1:44:53

but CAs obtain a great

1:44:56

deal of additional revenue by providing

1:44:58

their customers with many related services.

1:45:01

Entrust will lose all of that too. And

1:45:04

of course, there's the significant reputational damage

1:45:06

that accompanies this, which, you know, makes

1:45:09

a bit of a mockery of their

1:45:11

own name. And there's

1:45:13

really nothing they can say or do

1:45:15

at this point. The system

1:45:17

of revoking CA trust

1:45:19

operates with such care

1:45:22

to give misbehaving

1:45:24

CAs every opportunity

1:45:26

to fix their troubles, that

1:45:29

any CA must be flagrant in

1:45:31

their misbehavior for this to occur.

1:45:35

As

1:45:38

longtime listeners of this podcast know, I'm

1:45:40

not of the belief

1:45:42

that firing someone who missteps always

1:45:44

makes sense. Mistakes happen and

1:45:47

valuable lessons can be learned. But

1:45:49

from what I've seen and what I'm going

1:45:51

to share, I'll be surprised if

1:45:53

this is a survivable event

1:45:56

for Entrust's director of

1:45:58

certificate services, a guy named

1:46:01

Bruce Morton. Way

1:46:04

back in 1994, Entrust built and

1:46:06

sold the first commercially

1:46:11

available public key

1:46:14

infrastructure. They started all

1:46:16

this. Five

1:46:18

years later in 1999 they

1:46:20

entered the public SSL market

1:46:22

by chaining to the Thawte

1:46:25

root and created

1:46:27

entrust.net and as I

1:46:29

said their name has been

1:46:31

around forever. You know I've seen it

1:46:34

when I've looked at lists of

1:46:36

certificates there's Entrust. Ten

1:46:39

years later Entrust was

1:46:41

acquired for $124 million

1:46:43

by Thoma

1:46:48

Bravo, a

1:46:50

US-based private equity firm. This

1:46:52

one of the list. Wow.

1:46:54

Now I don't

1:47:02

know and I'm not

1:47:04

saying whether being owned

1:47:06

by private equity may have

1:47:08

contributed to their behavior and

1:47:11

their downfall but if so

1:47:14

they would have that in common

1:47:16

with LastPass. Yeah and said Leo

1:47:19

and Red Lobster and about

1:47:21

a million other companies in the United States in the

1:47:23

last 10 years that have been

1:47:25

bought by private equity and then drained

1:47:28

of their resources for money. It's

1:47:30

yeah. Google

1:47:33

in their FAQ answering the question

1:47:35

why is Chrome taking action replied

1:47:38

certificate authorities serve a privileged

1:47:40

and trusted role on the

1:47:42

internet that underpin encrypted connections

1:47:44

between browsers and websites with

1:47:47

this tremendous responsibility comes

1:47:49

an expectation of adhering

1:47:51

to reasonable and

1:47:54

consensus driven security and

1:47:56

compliance expectations including

1:47:59

those defined by the

1:48:01

CA browser TLS baseline

1:48:03

requirements. Over

1:48:05

the past six years, we

1:48:08

have observed a pattern of

1:48:11

compliance failures, unmet

1:48:14

improvement commitments, and

1:48:17

the absence of tangible,

1:48:19

measurable progress in response

1:48:21

to publicly disclosed incident

1:48:23

reports. When these

1:48:25

factors are considered in aggregate

1:48:28

and considered against the inherent risk

1:48:31

each publicly trusted CA poses

1:48:33

to the internet

1:48:35

ecosystem, it is our opinion

1:48:38

that Chrome's continued trust

1:48:40

in Entrust is

1:48:42

no longer justified. And

1:48:45

okay, this makes a key point. It's

1:48:48

not any one thing that

1:48:50

Entrust did, taken in

1:48:52

isolation, that resulted in

1:48:54

this loss of trust. The

1:48:57

loss of trust resulted

1:48:59

from multiple years

1:49:01

of demonstrated uncaring

1:49:04

about following the rules that

1:49:06

they had voted upon and

1:49:09

agreed to as a member

1:49:11

of this group. No one

1:49:13

wants to make Entrust an example. Too

1:49:16

many lives will be negatively impacted

1:49:18

by this decision. But

1:49:21

the entire system only functions

1:49:23

when everyone follows the rules

1:49:25

they've agreed to. Entrust

1:49:28

refused to do

1:49:30

that. So they had to go.

1:49:33

Let's take a look at some specifics. For

1:49:36

example, a few months ago, following

1:49:38

an alert from Google's Ryan Dixon,

1:49:41

Entrust discovered that all

1:49:43

of its EV certificates

1:49:45

issued since the implementation of

1:49:48

changes due to ballot SC-62V2,

1:49:54

which amounted to approximately 26,668

1:50:00

certificates were missing

1:50:02

their CPS URIs

1:50:06

in violation of the EV guidelines.

1:50:09

Entrust said this was

1:50:11

due to discrepancies and

1:50:14

misinterpretations between the CA

1:50:16

Browser Forum's TLS baseline

1:50:19

requirements and the extended

1:50:21

validation guidelines. Entrust

1:50:23

chose to not

1:50:26

stop issuing the EV

1:50:28

certificates. That's a

1:50:30

violation of the rules and

1:50:32

did not begin the process of revoking

1:50:34

the misissued certificates.

1:50:38

That's another violation. Instead,

1:50:41

they argued that

1:50:43

the absence of the CPS

1:50:45

URI in their EV certificates

1:50:47

was due to ambiguities in

1:50:50

CAB Forum requirements, which was not

1:50:53

the case. They said

1:50:55

that the absence of the CPS

1:50:57

URI had no security impact. That's

1:51:00

arguably true. And

1:51:02

that halting and revoking the

1:51:04

certificates would negatively impact customers

1:51:07

and the broader web

1:51:09

PKI ecosystem. In

1:51:12

other words, they thought they

1:51:14

were bigger than the rules. That

1:51:17

the rules were dumb or that

1:51:19

the rules didn't apply to them. Everyone

1:51:22

else has to follow them, but

1:51:24

not them. Entrust then

1:51:26

also proposed a ballot to

1:51:29

adjust the EV guidelines so

1:51:32

that they would not be out of compliance

1:51:35

to not require the CPS

1:51:38

URI. They also argued

1:51:40

that their efforts were better spent

1:51:43

focusing on improving automation

1:51:45

and handling of certificates

1:51:48

rather than on revocation

1:51:50

and reissuance. Wow.

1:51:55

Okay. Now the CPS URI

1:51:58

is truly incidental. CPS

1:52:00

stands for certification

1:52:03

practice statement and

1:52:06

EV certs are now supposed to

1:52:08

contain a CPS

1:52:10

URI link pointing

1:52:13

to the CA's issuing

1:52:15

document. So

1:52:18

is leaving that out a big deal?

1:52:21

Probably not from a security standpoint, but

1:52:23

it's worrisome when a

1:52:26

CA intentionally defies the

1:52:28

standards that everyone has

1:52:30

agreed to follow and

1:52:32

then argues about them and

1:52:35

is deliberately, knowingly, in misissuance.
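
For the curious, here is a small sketch, again using Python's cryptography library, of how one might check whether a certificate's certificatePolicies extension carries a CPS URI at all, the very field at issue here. In that library a plain CPS URI qualifier is represented as a string; this is an illustration only, not how Entrust or anyone else actually validates certificates.

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def has_cps_uri(cert: x509.Certificate) -> bool:
        # Look for a CPS URI qualifier inside the certificatePolicies extension.
        try:
            ext = cert.extensions.get_extension_for_oid(ExtensionOID.CERTIFICATE_POLICIES)
        except x509.ExtensionNotFound:
            return False
        for policy in ext.value:
            for qualifier in (policy.policy_qualifiers or []):
                if isinstance(qualifier, str):   # CPS URIs appear as plain strings here
                    return True
        return False
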

1:52:41

A security and software engineer by

1:52:44

the name of Amir Omidi

1:52:47

has worked on maintaining certificate issuance

1:52:49

systems at Let's Encrypt and Google

1:52:51

Trust services and he's very active

1:52:54

in the PKI space. His

1:52:57

GitHub account contains 262 repositories

1:53:01

and it appears that he's currently

1:53:03

working on a project named Boulder

1:53:06

which is an ACME-based certificate

1:53:09

authority written in Go. And

1:53:11

before that was Zlint,

1:53:14

an X.509 certificate

1:53:17

linter focused on Web

1:53:19

PKI standards and requirements. Yesterday,

1:53:22

just Monday, yesterday, he

1:53:25

posted a terrific

1:53:27

summary of

1:53:29

the way the public key infrastructure

1:53:31

industry thinks about these

1:53:33

things. He wrote, Entrust

1:53:37

did not have one big

1:53:39

explosive incident. The

1:53:41

current focus on Entrust started with

1:53:44

this incident. On

1:53:46

its surface, this incident was

1:53:48

a simple misunderstanding. This

1:53:50

incident happened because up

1:53:52

until the SC62 version

1:53:54

2 ballot, the CPS

1:53:56

URI field in the

1:53:58

certificate policy extension was

1:54:01

allowed to appear on certificates. This

1:54:04

ballot changed the rules and

1:54:06

made this field be considered

1:54:09

not recommended. However,

1:54:12

this ballot only changed

1:54:15

the baseline requirements and

1:54:17

did not make any

1:54:19

stipulation on how extended

1:54:22

validation certificates must be

1:54:24

formed. The EV guidelines

1:54:26

still contained rules requiring

1:54:29

the CPS URI extension.

1:54:32

When a CA, writes

1:54:34

Amir, has

1:54:37

an incident like this, the

1:54:39

response is simple. Stop

1:54:44

misissuance immediately.

1:54:47

Fix the certificate profile

1:54:49

so you can resume

1:54:51

issuance. In

1:54:53

parallel, figure out how

1:54:56

you ended up missing this

1:54:58

rule and what the root

1:55:00


1:55:02

cause of missing this rule was.

1:55:06

Revoke the misissued certificates within 120

1:55:08

hours of learning about the incident. Provide

1:55:14

action items that a reasonable

1:55:17

person would read and agree

1:55:19

that these actions would prevent

1:55:21

an incident like this happening

1:55:24

again. In other words,

1:55:26

this is all understood, and Entrust

1:55:30

ignored it. He writes,

1:55:33

when I asked

1:55:35

Entrust if they've stopped

1:55:37

issuances yet, they said

1:55:39

they haven't and they don't

1:55:41

plan to stop issuance. This

1:55:45

is where Entrust decided

1:55:47

to go from an accidental

1:55:50

incident to willful misissuance.

1:55:53

This distinction is an important one. He says,

1:55:55

Entrust had

1:55:57

started knowingly misissuing

1:56:01

certificates. Entrust

1:56:03

received a lot of pushback from the community

1:56:05

over this. This is

1:56:08

a line that a CA shouldn't, under

1:56:11

any circumstances, cross. Entrust

1:56:14

continued to simply not give a

1:56:17

crap, and I changed that

1:56:19

word to be a little more politically

1:56:21

correct, even after Ben

1:56:23

Wilson of the Mozilla Root Program

1:56:26

chimed in and said

1:56:28

that what Entrust is doing

1:56:30

is not acceptable. And

1:56:33

then, he writes, Entrust

1:56:36

only started taking action

1:56:39

after Ryan Dixon of the

1:56:41

Google Chrome Root

1:56:44

Program also chimed in

1:56:46

to say

1:56:48

this, he said, is unacceptable.

1:56:52

Okay, now I'll interrupt to mention that

1:56:54

this is an important distinction. The

1:56:56

executives at Entrust appeared

1:56:59

not to care about

1:57:02

any of this until

1:57:04

Google weighed in with the

1:57:07

power of their Chrome

1:57:09

browser. That was

1:57:12

a monumental mistake, and

1:57:14

it demonstrated a fundamental misunderstanding

1:57:17

of the way the

1:57:19

CA browser forum members operate.

1:57:22

None of this operates on

1:57:24

the basis of market power. It's

1:57:28

only about agreeing to and

1:57:30

then following the rules. It's

1:57:33

not about, oh yeah, make me.

1:57:36

We're not in the schoolyard anymore. Amir

1:57:40

continues, Entrust's delayed

1:57:42

response to the initial incident

1:57:44

spanning over a week compounded

1:57:47

the problem by creating

1:57:49

a secondary failure-to-

1:57:51

revoke-on-time incident.

1:57:54

As these issues unfolded, a flurry

1:57:57

of questions arose from the community.

1:58:00

Entrust's responses were often

1:58:02

evasive or minimal, further exacerbating

1:58:04

the situation. This

1:58:06

pattern of behavior proved increasingly

1:58:09

frustrating, prompting me to delve

1:58:11

deeper into Entrust's past performance

1:58:14

and prior commitments. In

1:58:17

one of my earlier posts I found

1:58:19

that Entrust had made the promise that

1:58:23

we will not make the

1:58:25

decision not to revoke, which

1:58:28

they just had. We

1:58:30

will plan to revoke within 24 hours

1:58:33

or 5 days as applicable for the

1:58:35

incident, which they've said they

1:58:37

won't. We will provide

1:58:39

notice to our customers of our obligations to

1:58:41

revoke and recommend action within 24 hours or

1:58:43

5 days based on the baseline requirements, which

1:58:46

they won't do because they're not going to

1:58:48

revoke in the first place. He

1:58:50

says, this pattern of

1:58:52

behavior led to a troubling

1:58:55

cycle, Entrust making promises,

1:58:58

breaking them, and then making

1:59:00

new promises only to break

1:59:02

those as well. As

1:59:05

this unfolded, Entrust and

1:59:07

the community uncovered an alarming

1:59:10

number of operational mistakes culminating

1:59:12

in a record 18 incidents

1:59:16

within just 4 months,

1:59:19

notably, about half

1:59:22

of these incidents involved Entrust

1:59:24

offering various excuses for failing

1:59:26

to meet the 120-hour certificate

1:59:29

revocation deadline, ironically,

1:59:31

a requirement they

1:59:33

had voted to implement

1:59:35

themselves. He

1:59:38

said, I do want to highlight

1:59:40

that the number of incidents is

1:59:42

not necessarily an indication of CA

1:59:45

quality. The worst CA

1:59:47

is the CA that has no

1:59:49

incidents, as it's generally

1:59:52

indicative that they're either not

1:59:54

self-reporting or not even aware

1:59:57

that they're misissuing. In

2:00:00

other words, mistakes happen. Everyone

2:00:03

understands that. No one

2:00:05

needs to be perfect here, but

2:00:07

it's how the mistakes that

2:00:10

are discovered are then handled

2:00:12

that demonstrates the trustworthiness of

2:00:15

the CA. Amir

2:00:18

said, due to the

2:00:20

sheer number of incidents and

2:00:22

Entrust's poor responses up until

2:00:24

this point, Mozilla

2:00:26

then asks Entrust

2:00:29

to provide a detailed

2:00:31

report of these recent

2:00:33

incidents. Mozilla

2:00:35

specifically asks Entrust

2:00:37

to provide information regarding,

2:00:41

and then we have some bullet points, the

2:00:43

factors and root causes that led

2:00:46

to the initial incidents,

2:00:49

including commonalities among the incidents

2:00:51

and any systemic failures. Okay.

2:00:53

Now listen to this. I

2:00:55

mean, because this is really

2:00:57

Mozilla getting up in the

2:00:59

Entrust's business. And

2:01:02

Entrust apparently doesn't take kindly

2:01:05

to that. Okay. So

2:01:08

literally Mozilla confronts

2:01:10

Entrust and says, we

2:01:13

want to know the factors

2:01:15

and root causes

2:01:17

that led to the initial

2:01:19

incidents, highlighting

2:01:21

their commonalities

2:01:24

among the incidents and any

2:01:26

systemic failures. We

2:01:28

want to know Entrust's

2:01:31

initial incident handling and

2:01:33

decision-making in response to

2:01:35

these incidents, including any

2:01:37

internal policies or protocols

2:01:39

used by Entrust to

2:01:41

guide their response and

2:01:44

an evaluation of whether their

2:01:46

decisions and overall response complied

2:01:49

with Entrust's policies,

2:01:52

their practice statement,

2:01:54

and the requirements of

2:01:56

the Mozilla Root Program. In other

2:01:59

words, explain to us, and

2:02:01

we're not kidding here, how

2:02:03

this happened. Like, are

2:02:06

you ignoring your own policies or

2:02:08

are these your policies? In other

2:02:10

words, WTF.

2:02:13

And we want it in detail,

2:02:15

please. We also need,

2:02:17

this is literally in

2:02:19

the letter, a detailed timeline

2:02:22

of the remediation process and

2:02:25

an apportionment of delays to root

2:02:29

causes. So please, you know,

2:02:31

elaborate on the delays which

2:02:34

were involved in this because

2:02:36

you know we're out

2:02:38

here, we don't understand. Also,

2:02:41

an evaluation of how

2:02:43

these recent issues compare

2:02:45

to the historical issues

2:02:48

referenced above, and Entrust's

2:02:50

compliance with its

2:02:52

previously stated commitments which

2:02:55

everyone already knows is missing

2:02:58

Mozilla also asked, writes

2:03:00

Amir, that the proposals meet

2:03:02

the following requirements. So literally

2:03:05

these are what we need to know, and

2:03:08

here are the requirements you must

2:03:10

meet in your reply. We want

2:03:15

clear and concrete steps that

2:03:17

Entrust proposes to take

2:03:19

to address the root causes

2:03:21

of these incidents and delayed

2:03:24

remediation. We want measurable

2:03:26

and objective criteria for Mozilla

2:03:29

and the community to evaluate

2:03:31

Entrust's progress in deploying

2:03:34

these solutions. And we want

2:03:37

a timeline for which Entrust

2:03:39

will commit to meeting these

2:03:41

criteria. As

2:03:44

Amir said, even here,

2:03:47

he said, Mozilla gave

2:03:50

Entrust a one-month deadline

2:03:52

to complete this report. Mozilla's

2:03:56

email served a

2:03:58

dual purpose it was

2:04:00

both a warning to Entrust

2:04:03

and an olive branch,

2:04:06

offering a path back

2:04:08

to proper compliance. This

2:04:11

presented Entrust with a significant

2:04:13

opportunity. They could have

2:04:16

used this moment to demonstrate

2:04:18

to the world their understanding

2:04:20

that CA rules are crucial

2:04:22

for maintaining Internet security and

2:04:24

safety, and that adhering

2:04:27

to these rules is a fundamental

2:04:29

responsibility. Moreover, Entrust

2:04:31

could have seized this chance

2:04:33

to address the community, explaining

2:04:36

any misunderstandings in the initial

2:04:38

assessment of these incidents and

2:04:40

outlining a concrete plan to

2:04:43

avoid future revocation delays. Unfortunately,

2:04:47

Entrust totally

2:04:49

dropped the ball on this.

2:04:52

Their first report was a

2:04:54

rehash of what was

2:04:57

already on Bugzilla offering

2:04:59

nothing new. Unsurprisingly,

2:05:01

this prompted a

2:05:04

flood of questions from the community.

2:05:07

Entrust's response? They decided to

2:05:09

take another crack at it with a

2:05:11

second report. They submitted this

2:05:14

new report a full two weeks after

2:05:17

the initial deadline. In

2:05:20

their second report, Entrust significantly

2:05:22

changed their tone, adopting

2:05:25

a more apologetic stance regarding

2:05:27

the incidents. However, this

2:05:30

shift in rhetoric was not

2:05:32

matched by their actions. While

2:05:35

expressing regret, Entrust

2:05:37

was still overlooking

2:05:40

certain incidents, delaying

2:05:42

the revocations of

2:05:44

existing mis-issuances and

2:05:46

failing to provide concrete

2:05:48

plans to prevent future

2:05:51

delayed revocations. An

2:05:53

analysis of these 18 incidents and

2:05:55

Entrust's responses serves as

2:05:57

a prime example of mishandled

2:06:01

public communications during

2:06:04

a crisis. Okay,

2:06:07

now stepping back from this for a

2:06:09

moment, the only way

2:06:11

to really read and understand this is

2:06:14

that the executives at

2:06:16

Entrust, and yes Leo, a company

2:06:19

owned by private

2:06:22

equity, sorry,

2:06:28

private equity. Yeah, the

2:06:30

executives at Entrust didn't really take

2:06:37

any of this seriously. They

2:06:39

acted as though they were annoyed

2:06:42

by the gnats buzzing

2:06:44

around them who were telling them

2:06:47

how they should act and

2:06:49

what they should do. Amir

2:06:52

says, the consensus

2:06:54

among many community members

2:06:56

is that Entrust will

2:06:59

always prioritize their certificate

2:07:01

subscribers over their

2:07:03

obligations as a certificate authority.

2:07:06

And there it is in a single sentence.

2:07:09

The consensus among many community

2:07:11

members is that

2:07:13

Entrust will always prioritize

2:07:15

their certificate subscribers over

2:07:18

their obligations as a

2:07:20

certificate authority. He said, this

2:07:22

practice fundamentally undermines

2:07:25

internet security for everyone.

2:07:28

Left unchecked, it

2:07:30

creates a dangerous financial incentive

2:07:34

for other CAs to ignore

2:07:36

rules when convenient, simply

2:07:38

to avoid the uncomfortable task

2:07:40

of explaining to subscribers why

2:07:43

their certificates need replacement. Naturally,

2:07:46

customers prefer CAs that

2:07:48

won't disrupt their operations

2:07:50

during a certificate's lifetime.

2:07:53

However, for CAs, that

2:07:55

proper, that, I'm

2:07:58

sorry, for CAs, properly

2:08:00

adhere to the rules, this

2:08:02

is an impossible guarantee to make. In

2:08:06

other words, no one should expect CH to

2:08:08

be perfect. The

2:08:10

community here doesn't. They understand

2:08:13

mistakes will happen, but

2:08:15

maintaining the integrity of the

2:08:17

system is more important than

2:08:19

anything else. He says,

2:08:22

Furthermore, these incidents were not

2:08:24

new to Entrust. As

2:08:27

I've covered in earlier posts, Entrust

2:08:29

has continuously demonstrated that they're unable

2:08:31

to complete a mass revocation event

2:08:33

in the 120 hours defined

2:08:35

and required by the

2:08:40

baseline requirements. This

2:08:42

pattern of behavior suggests

2:08:44

a systemic issue rather

2:08:46

than isolated incidents. Despite

2:08:49

there being over a

2:08:51

dozen root programs, there

2:08:54

are only four that are

2:08:56

existentially important for a

2:08:59

certificate authority. The

2:09:01

Mozilla root program used

2:09:04

by Firefox and practically all

2:09:07

Linux distributions and FOSS

2:09:09

software. The

2:09:12

Chrome root program used by

2:09:14

Chrome the browser, the OS,

2:09:16

and some Android devices. The

2:09:18

Apple root program used by

2:09:21

Everything Apple and the

2:09:23

Microsoft root program used by

2:09:25

Everything Microsoft. He

2:09:27

finishes, Enforcement

2:09:30

over the operational rules of

2:09:32

a CA has been a

2:09:34

struggle in the past. A

2:09:36

root program only has

2:09:38

a binary choice to

2:09:41

either trust or distrust

2:09:44

a certificate authority. Now

2:09:48

there is one last much shorter piece

2:09:50

of interaction that I want to

2:09:52

share. It was

2:09:54

written by Watson Ladd, L-A-D-D,

2:09:57

who studied math at Berkeley and

2:09:59

is presently a principal software

2:10:01

engineer at Akamai. Among

2:10:04

his other accomplishments, he's

2:10:07

the author of RFC

2:10:09

9382, which specifies SPAKE2,

2:10:11

a password-authenticated key

2:10:14

exchange system, and

2:10:16

his name has been on about six other

2:10:18

RFCs. So, you know, he's

2:10:21

a techie and he's in the game. In

2:10:25

the public discussion thread about

2:10:27

Entrust's repeated and continuing

2:10:29

failings to correct their mistakes

2:10:31

and live up to the commitments they had made

2:10:34

to the CA browser community,

2:10:36

Watson publicly addressed a

2:10:39

note to Bruce Morton, Entrust's

2:10:42

director of certificate services,

2:10:44

who has been the face

2:10:46

of Entrust's repeated failures,

2:10:48

excuses, and defiance. Watson Ladd

2:10:52

wrote, Dear Bruce, This

2:10:56

report is completely

2:10:58

unsatisfactory. It starts

2:11:00

by presuming that the problem is four

2:11:02

incidents. Entrust

2:11:04

is always under an obligation to

2:11:07

explain the root causes of incidents

2:11:09

and what it is doing to

2:11:11

avoid them as per the CCADB

2:11:14

report guidelines. That's

2:11:16

not the reason Ben

2:11:18

and the community need this

2:11:21

report. And here he's referring

2:11:23

to Mozilla's Ben Wilson, who

2:11:25

initially asked end trust to explain how they

2:11:27

would deal with those ongoing problems and demonstrate

2:11:30

how they would be prevented in the future.

2:11:32

And as we know, Entrust's

2:11:34

Bruce Morton basically blew

2:11:36

him off, apparently because he

2:11:38

wasn't from Google. Anyway,

2:11:42

Watson says, that's not

2:11:44

the reason Ben and the community

2:11:47

need this report. Rather, it's to

2:11:49

go beyond the incident report to

2:11:52

draw broader lessons and

2:11:54

to say more to help

2:11:56

us judge Entrust's continued

2:11:59

ability to stay in

2:12:01

the root store. The

2:12:03

report falls short of what

2:12:05

was asked for in

2:12:07

a way that makes

2:12:10

me suspect that Entrust

2:12:12

is organizationally incapable of

2:12:14

reading a document, understanding

2:12:17

it and ensuring

2:12:19

each of the clearly

2:12:21

worded requests is followed.

2:12:26

The implications for being a

2:12:28

CA are obvious. To start,

2:12:31

Ben specifically asked for an analysis involving

2:12:40

the historical run of issues

2:12:43

and a comparison. I don't

2:12:45

see that in this report at all. The

2:12:48

list of incidents only has ones from

2:12:50

2024 listed. There's no discussion of the

2:12:53

two issues specifically listed by Ben

2:12:55

in his message. Secondly,

2:12:58

the remedial actions seem to

2:13:00

be largely copy and pasted

2:13:02

from incident to incident without

2:13:04

a lot of explanation. Saying

2:13:07

the organizational structure will be

2:13:09

changed to enhance support, governance

2:13:11

and resourcing really doesn't

2:13:13

leave us with a lot of ability

2:13:16

to judge success or explain

2:13:18

how the changes made,

2:13:20

sparse on details, will lead

2:13:23

to improvements. Similarly,

2:13:26

process weaknesses are not really discussed

2:13:28

in ways that make clear what

2:13:30

happened. How can I use

2:13:32

this report if I was a

2:13:34

different CA to examine my organization

2:13:36

and see if I can do

2:13:39

better? How can we

2:13:41

as a community judge

2:13:44

the adequacy of the remedial

2:13:46

actions in this report? Section

2:13:49

2.4 I find mystifying.

2:13:51

To my mind, there's no inherent

2:13:53

connection between a failure to update

2:13:55

public information in a place where

2:13:57

it appears, a delay in recreating

2:13:59

and configuring

2:14:02

a responder, and a bug in

2:14:04

the CRL generation process beyond the

2:14:06

organizational. These are three

2:14:08

separate functions of rather different complexity.

2:14:11

If there's a similarity it's between the

2:14:13

latter two issues, where there

2:14:15

was a failure to notice a change

2:14:17

in requirements that required action but that's

2:14:19

not what the report says. Why

2:14:22

were these three grouped together and not

2:14:24

others? What's the common failure here that

2:14:26

doesn't exist with the other incidents? If

2:14:30

this is the best Entrust

2:14:32

can do, why should we expect

2:14:35

Entrust to be worthy

2:14:37

of inclusion in the future?

2:14:40

To be clear there are

2:14:42

CAs that have come back

2:14:44

from profound failures of governance

2:14:46

and judgment but the

2:14:48

first step in that process has

2:14:50

been a full and honest accounting

2:14:53

of what their failures have been

2:14:55

in a way

2:14:57

that has helped others understand

2:14:59

where the risks are and

2:15:01

helps the community understand

2:15:04

why they are

2:15:06

trustworthy. Sincerely

2:15:09

Watson Ladd. Watson was hopped up. That's

2:15:12

it doesn't sound like it but that's

2:15:14

that's what an engineer sounds like when

2:15:16

they get really mad. Well now

2:15:19

Leo, yes, I don't know

2:15:22

these Entrust guys at

2:15:24

all but given

2:15:26

the condescension they've

2:15:29

exhibited it's not difficult

2:15:31

to picture them as some

2:15:34

C-suite stuffed shirts who

2:15:37

have no intention of being

2:15:39

judged by and pushed around

2:15:42

by a bunch of pencil

2:15:45

necked geeks. But boy

2:15:48

did they read this one wrong. Those

2:15:50

pencil necked geeks with their

2:15:53

pocket protectors reached

2:15:55

consensus and pulled

2:15:57

their plug ejecting

2:16:00

them from the web

2:16:02

certificate CA business they

2:16:04

had a hand in pioneering.

2:16:08

This is what happens when people who

2:16:10

only run businesses don't

2:16:12

understand the difference between a business,

2:16:16

a profit-seeking enterprise, and a

2:16:18

public trust, right?

2:16:20

And they don't understand that in order to

2:16:22

run your business, you've got to

2:16:25

satisfy the public trust part. You can't

2:16:27

just say, yeah, yeah, whatever. You

2:16:29

got to respond. And

2:16:31

notice that Entrust was taken private.

2:16:33

So they no longer had literally a

2:16:36

public trust. Well, their business, except

2:16:38

that a certificate authority has a public

2:16:40

trust. Sorry. That's the job.

2:16:42

Yes, it is a public trust. It's

2:16:46

a public trust. Wow. Clearly,

2:16:48

they were in over their heads or something. Well,

2:16:52

but they started this business. I

2:16:54

mean- Is it the same people though, really?

2:16:57

Well, that's exactly, that's a great question.

2:16:59

It's like Joe Siegrist wasn't at LastPass

2:17:02

at the end. Yes, exactly. Exactly.

2:17:05

So, you know,

2:17:07

some middle managers

2:17:09

rotated in and didn't understand that

2:17:11

the aggravation that

2:17:19

was simmering away in

2:17:22

this CA/Browser Forum was

2:17:25

actually capable of boiling over and

2:17:28

ending their business. They didn't get it. They

2:17:30

really didn't get it. And they, you know

2:17:32

what? You know, well,

2:17:35

they know now. Whoops. Yeah,

2:17:38

it's over. They appear to

2:17:40

believe that the rules they agreed to did

2:17:43

not apply to them. Or

2:17:45

you know, I thought maybe it was the

2:17:47

extreme leniency that the industry had been showing

2:17:49

them that led them to believe that their

2:17:52

failures would never catch up with them. And

2:17:56

I, boy, but the worst thing that they

2:17:58

did was just to- basically blow

2:18:01

off, you know, the heads

2:18:03

of these root programs when they, when

2:18:05

they said, Hey, look, uh,

2:18:07

we see some problems here. We

2:18:09

need you to convince us that

2:18:13

you're worthy of our trust. Yeah. And the

2:18:15

Entrust people probably just said F you.

2:18:18

Yep. Well, now

2:18:21

they're going to be out of business in four months. So

2:18:25

without the trust of a Chrome

2:18:27

and I presume other browsers, when Mozilla is

2:18:29

clearly going to follow. Oh, Mozilla will be

2:18:31

right on their heels. And then, and then

2:18:33

Edge and everybody else who does certificates will

2:18:35

follow. Right. But it

2:18:37

doesn't matter if Chrome doesn't, if you're not

2:18:39

in Chrome, that's right. People using

2:18:42

Chrome won't be able to go to sites that

2:18:44

have Entrust certificates. Game over. Right.

2:18:47

Yes. Yeah. Yes. And Entrust is

2:18:49

out of business. They will no

2:18:51

longer, the only thing they could

2:18:53

do would be to sell their

2:18:56

business. Remember when Symantec screwed up

2:18:58

royally, they ended up having to

2:19:00

sell their business to DigiCert. Right.

2:19:03

Because, you know, so, so

2:19:06

they also might face the wrath

2:19:08

of people who use

2:19:10

Chrome or rather

2:19:13

websites that use their certificates when

2:19:15

their customers can't get to them.

2:19:17

I mean, it might be some big

2:19:20

companies that have been... All the

2:19:22

certs that have been issued stay

2:19:24

valid. Okay. That's, that's the good

2:19:26

thing. So, and so any site,

2:19:28

even through Halloween, even through the

2:19:30

end of October, any, so because

2:19:33

what Chrome is doing is

2:19:35

they're looking at the signing

2:19:37

date and they're saying anything

2:19:39

Entrust signs after

2:19:42

October 31st, 2024, we're not going to

2:19:46

trust. We're going to take the user to the

2:19:48

invalid certificate page. So there, there is a little

2:19:50

bit of a, a warning

2:19:52

going out to people who use Entrust

2:19:55

certificates. You're going to need a new certificate

2:19:57

from a different company now. Yes. You

2:19:59

know, yes, by October. Yes. Yes.
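(For site operators wondering whether this affects them, here is a rough, illustrative sketch. It is not Chrome's actual mechanism, which ties the cutoff to its root store metadata; the certificate's notBefore date is used here only as a stand-in for the "signing date" described above, and the hostname is a placeholder. It assumes the third-party cryptography package is installed.)

```python
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # pip install cryptography (>= 42 for the *_utc properties)

# Date after which Chrome will no longer trust newly issued Entrust certificates.
CUTOFF = datetime(2024, 10, 31, 23, 59, 59, tzinfo=timezone.utc)

def check_site(hostname: str, port: int = 443) -> None:
    """Fetch a site's leaf certificate and report issuer, issuance date, and expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    issuer = cert.issuer.rfc4514_string()
    issued = cert.not_valid_before_utc
    expires = cert.not_valid_after_utc
    days_left = (expires - datetime.now(timezone.utc)).days

    print(f"{hostname}: issuer = {issuer}")
    print(f"  issued {issued:%Y-%m-%d}, expires {expires:%Y-%m-%d} ({days_left} days left)")
    if "Entrust" in issuer and issued > CUTOFF:
        print("  -> issued by Entrust after the cutoff; Chrome would show the invalid certificate page")

check_site("www.example.com")  # placeholder hostname
```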

2:20:01

You, you, you might as well

2:20:04

switch. You could buy an Entrust

2:20:06

certificate up until

2:20:09

Halloween and it would stay

2:20:11

good for the life of the certificate, which has

2:20:13

now been reduced to one year. What is it?

2:20:16

398 days or something. Yeah.

2:20:19

So it's, you know, it's time

2:20:22

to switch providers. So

2:20:24

Entrust loses all of

2:20:26

that ongoing revenue from,

2:20:28

from certificate renewals. They

2:20:30

also lose all of

2:20:32

the, of the second order

2:20:36

business that their customers got. But you know, they're

2:20:38

like, Oh, well, we'll also sell you some of this

2:20:40

and some of that. And you probably need some of

2:20:43

this over here. I mean, they're,

2:20:45

they're in a world of hurt. And you know,

2:20:49

unfortunately it couldn't happen to a

2:20:51

nicer group of guys because they

2:20:53

entirely brought this on themselves. It

2:20:55

was literally their arrogance of,

2:20:58

you know, we're, we are not going to

2:21:00

do what you are asking us to do.

2:21:02

And so the consensus

2:21:04

was okay. You don't have to,

2:21:06

but we don't have to trust you. We're not, we're

2:21:09

not making you. We're just going to stop believing you.

2:21:12

Right. That's really interesting. Usually what happens

2:21:14

with private equity is they

2:21:16

buy a company and sell off assets

2:21:18

or somehow make money. They,

2:21:21

uh, the example of Red

2:21:23

Lobster comes to mind. They, they,

2:21:26

the, the private equity company that bought

2:21:28

the Red Lobster restaurant chain took

2:21:32

the real estate that was owned by all

2:21:34

the restaurants and sold it to a separate

2:21:36

division, making almost

2:21:38

enough money to compensate for

2:21:40

the debt they'd incurred to buy it.

2:21:42

Cause that's what happens. They borrow money.

2:21:45

Then they squeeze the company, get all the debt paid

2:21:47

off. And then, but the

2:21:49

problem was now the restaurants

2:21:51

had to pay rent and

2:21:54

they went out of business, and

2:21:56

that's what happens. You squeeze, uh, get

2:21:58

your money back, get your

2:22:00

money out and then you don't care what happens. So

2:22:02

it may, I don't know what the, what they were

2:22:04

able to squeeze out of Entrust, but it may be

2:22:06

they got, they got what they wanted out of it

2:22:09

and they don't care at this point. That's

2:22:11

what it feels like. They

2:22:13

just don't care. They can come back from this, right? They could,

2:22:16

they could say, Oh yeah, wait a minute. Sorry.

2:22:18

We were wrong. Could they, or

2:22:20

is it, I don't think so. I mean, maybe they

2:22:22

change their name to Retrust. Wow.

2:22:26

No, I mean it's, it's over. I mean,

2:22:29

at the,

2:22:31

they can't like, like

2:22:33

say, Oh, we're really sorry. We didn't understand

2:22:35

that they've been,

2:22:37

this, I've never seen any of

2:22:40

this ever reversed. These are, you

2:22:43

know, these are slow moving icebergs

2:22:45

and when the iceberg hits

2:22:47

your ship, you know, it

2:22:49

doesn't turn around. Rends you

2:22:51

from stem to stern. They,

2:22:53

uh, uh, Out of Sync

2:22:56

in our Discord says that Entrust is about 0.1%

2:22:59

of all certs compared to Let's Encrypt,

2:23:01

which is now 53% of all certs. Uh, why

2:23:06

not? It's free, right? Uh, and if you've

2:23:08

got to renew it every 398 days, you might as

2:23:10

well just go with Let's Encrypt. Yeah.

2:23:13

Last year we talked about the big seven where if

2:23:15

you only trusted seven CAs, you got 99.95% or something.

2:23:17

I don't remember. Entrust was not on

2:23:23

that list. Okay. So

2:23:25

maybe this is just incompetence or

2:23:27

something. Well, it's yeah,

2:23:31

that, I mean, you're incompetent if you're

2:23:33

a manager who doesn't know how to

2:23:35

read email and understand the importance of

2:23:37

it to your career. I mean,

2:23:40

I doubt this Bruce guy is going to have

2:23:42

a job in four months. It

2:23:45

won't be, let's put it this way. It

2:23:47

will not be in the certificate business. Can't

2:23:50

opt-in certificate. Steve

2:23:52

Gibson. I love it. This was a fun one.

2:23:55

Uh, it's a little scary

2:23:57

early on, but you cut with

2:24:00

OpenSSH, but you kind of redeemed it

2:24:02

with some humor towards the end. What

2:24:04

a story that is. Amazing. Steve's at

2:24:07

grc.com. Now if you go to grc.com/email, you

2:24:09

can sign up for his mailing list. You don't

2:24:11

have to though. All it will do is validate

2:24:13

your email so that you can then email him.

2:24:15

So it's the best way to be in touch

2:24:18

with Steve. If you're there, you

2:24:20

should check out SpinRite. Version 6.1 is

2:24:22

out. It is now

2:24:24

easily the premier mass storage

2:24:27

performance enhancer. It's

2:24:30

like Viagra for your hard drive.

2:24:32

No, performance enhancer, maintenance

2:24:34

and recovery. You do. You

2:24:37

don't want a floppy drive. We're not talking

2:24:39

floppies here. This is a hard drive after

2:24:41

or an SSD or an SSD. Let's be

2:24:44

fair. It works on SSDs as well. We

2:24:48

made it so far, so far, so

2:24:50

close to the end. While you're

2:24:52

picking up GRC's SpinRite, you

2:24:55

might want to get a copy of this podcast.

2:24:57

Steve hosts two different versions. The 64 kilobit

2:25:00

audio version, which we have as well. That's kind

2:25:02

of the standard. But he also has a 16

2:25:04

kilobit version for the bandwidth

2:25:07

impaired. That's the version that goes

2:25:09

to Elaine Farris, who transcribes all

2:25:11

this and delivers really excellent

2:25:14

human written transcriptions of every episode. Those

2:25:16

are all there as well. And

2:25:18

the show notes so that you can

2:25:20

follow along as you listen to

2:25:22

the show. We have 16 kilobit audio at our

2:25:24

website. We also have, I'm sorry, 64 kilobit audio.

2:25:27

We don't have 16. I wouldn't

2:25:29

put that on my site. Are you crazy? We have

2:25:31

64 kilobit audio

2:25:34

and video, which Steve would never put on his site.

2:25:37

So, you know, it comes, what comes around goes around.

2:25:40

If you want to watch, you can do that

2:25:42

as well at twit.tv slash SN. You

2:25:45

can also get, it's on YouTube as a

2:25:47

video, and you can also subscribe in your

2:25:49

favorite podcast player and get it automagically

2:25:52

every week, right after our Tuesday record.

2:25:54

We do the show right after

2:25:56

MacBreak Weekly around 2 PM Pacific, 5 PM

2:25:58

Eastern, 2100 UTC. The

2:26:01

stream is on YouTube live. That's

2:26:04

youtube.com/twit slash live.

2:26:07

And you know, if you subscribe and you smash the

2:26:10

bell, you'll get automatic notification when we go live. So

2:26:12

that might be worth doing that for you. What

2:26:15

else do I need to tell you? Oh, I, what I need

2:26:17

to do really is thank our club members who

2:26:20

make this show and all the shows we do possible.

2:26:22

Without your support, there would

2:26:24

be no TWiT. If you'd like to join the club, it's

2:26:26

a great group of people. Show

2:26:28

your support for the programming we do here and

2:26:31

meet some really smart, interesting people in our

2:26:33

discord. You get ad free versions of all

2:26:35

the shows plus special shows we don't do

2:26:37

anywhere else. And it's just $7 a month.

2:26:40

It's nothing, although you can

2:26:42

pay more. You absolutely can. We will

2:26:44

not discourage that, but it starts at

2:26:47

$7 a month at twit.tv slash

2:26:49

club twit. Thank

2:26:52

you, Steve. Have a wonderful week and

2:26:54

I'll see you next time. Will do.

2:26:57

And, at long last, I

2:26:59

will be able to

2:27:01

show the picture of the week

2:27:04

to end all, for Patch

2:27:07

Tuesday. He's already got it. It

2:27:10

will be arriving

2:27:12

in people's mail before

2:27:15

the ninth, and oh, this picture

2:27:18

was made for Windows.

2:27:20

Wow. So, well, here's

2:27:22

your chance. All you gotta do is go to

2:27:25

grc.com/email and sign up

2:27:28

and you'll get it before anybody else does. Thank

2:27:31

you, Steve. Have a good one.
