The Mixed Blessing of Lousy PRNG - Kaspersky Ban, EU vs. Google's Privacy Sandbox
Released Wednesday, 26th June 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

It's time for security now Steve Gibson is here

0:02

lots to talk about The

0:05

Commerce Department in the United States

0:07

bans the Kaspersky anti-virus Steve

0:09

talks about why and whether it's a

0:11

legitimate problem We hear from a security

0:13

researcher who is having trouble getting into

0:15

the United States Maybe you could help

0:18

and then we'll find out why every

0:20

once in a while It's a good

0:22

idea to have a bad password generator. All

0:24

that and more coming up next on Security

0:26

now Podcasts

0:29

you love from

0:31

people you trust This

0:40

is security now with Steve Gibson

0:42

episode 980 recorded

0:44

Tuesday June 25th 2024

0:49

the mixed blessing of lousy

0:51

PRNG It's

0:53

time for security now. Yeah, the show

0:55

you wait all week for let's

0:58

just make that your tagline. We

1:00

wait all week for Tuesday if

1:03

it's Tuesday, it must be Steven Gibson

1:05

day. Hello, Steve Gibson my friend it's

1:07

great to be back with you as

1:09

we close out the first half of

1:11

the year and Where

1:13

I I have to say it was

1:15

a little poignant. Well not poignant. It was it was it

1:19

Was very clear to me that we are closing

1:21

in on episode 1000 crossing the infamous 999

1:25

when I wrote 9 8 0 Wow No,

1:30

we're getting we're getting we're getting there.

1:32

Holy moly. Yes Okay,

1:35

so we I just

1:37

have to say and I assume you haven't

1:39

seen it yet because you haven't fallen off

1:41

your ball That we have

1:43

one of the my favorite pictures of the week

1:46

in a long time so That's

1:49

coming up. But I think a

1:51

great episode one with some interesting

1:53

lessons the

1:55

mixed blessing of a lousy

1:59

PRNG And I realized when

2:01

I was using you know PRNG that some

2:03

of our listeners might wonder what a PRNG is. Of

2:06

course, we know that's a pseudo

2:08

random number generator. Yes, and the

2:11

Pringles were named after Sudan Probably

2:14

no, that's it exactly that this is

2:16

the audience they were targeted at because

2:19

we all sit around eating Pringles. Yes

2:22

But we're gonna answer some questions as we

2:24

always do before we get to our main

2:27

topic, which is How

2:29

long did it take for

2:31

Windows recent horrific Wi-Fi flaw

2:33

to become weaponized? What

2:37

are the imp- oh and oh boy, is there

2:39

a new twist on this Wi-Fi flaw, too What

2:42

are the implications of the

2:44

US Commerce Department's total ban

2:46

on Kaspersky which

2:49

will be coming into effect in a

2:51

few months Wow. Yeah How

2:53

is the Kremlin reacting to that and

2:56

who cares but still why would

2:59

an EU privacy watchdog file

3:02

a complaint against Google over

3:05

their privacy sandbox Which

3:08

is all about privacy as the name

3:10

suggests When

3:12

is an email tracking bug not

3:14

a tracking bug? What

3:17

can this podcast do to help

3:19

a well-known security researcher present

3:22

his work at Defcon and

3:25

Black Hat this summer? What's

3:28

another near certainty

3:30

for Microsoft's actual plan

3:32

for recall? This is

3:35

something else that occurred to me that I think everyone's gonna

3:37

go. Oh Of course

3:39

like the first time I had this I

3:41

thought a couple weeks ago and

3:43

what two mistakes Maybe

3:45

not only two but at least

3:47

these two have I been making

3:49

on this podcast finally,

3:52

why might a really

3:54

bad password generator wind

3:56

up being a good thing a mixed

3:59

blessing, as it were. Yeah, I'm

4:01

trying to think of why it could ever

4:03

be a good thing. And more importantly

4:07

what lessons do we learn

4:09

about cryptography overall from from

4:12

that. So yeah I think Leo,

4:15

we may actually have a good podcast

4:18

finally. Well after

4:20

980 attempts I think it's good.

4:22

I think we got the hang of it.

4:24

This might be something

4:27

where people come away thinking, you

4:29

know, that was okay. They're doing

4:31

alright these kids. Yeah. Well

4:33

let me tell you before we go much further about our

4:35

first sponsor of the show and then we can get into

4:37

the... I do have a penguin in my face I should

4:39

note. I'm

4:41

sorry. That's

4:44

rude. I

4:46

put a Linux penguin in your face. Sorry

4:49

you shouldn't go there. Let me put him

4:52

off to the side. But just a

4:54

little reminder,

4:58

using Windows you don't

5:00

have to. We should all actually be using.

5:03

That's right. I found out what that was.

5:05

The thought that I have about Windows may

5:08

cement that further. Oh boy. I

5:10

found out what that was. Remember

5:12

before the show Windows kept saying

5:14

restart, restart, restart. Yeah. And

5:16

then I was getting a UAC from 8-bit

5:18

solutions. Turns out that's the

5:20

Azure provider that Bitwarden uses. So

5:24

that was Bitwarden asking me to update

5:26

itself basically. Oh interesting. But you know

5:28

I have to say from a security

5:30

standpoint you don't want to see another

5:33

name show up when you're

5:35

reinstalling something. That means you have to go out and look

5:37

it up and figure out why it

5:39

wants to do that. So. Well and Leo

5:41

why is it only 8 bits? I would

5:44

think what is this from the 80s?

5:46

8-bit solutions? What? Is it a platform

5:50

of 6502 or something? That's

5:52

very odd. Yeah that's a good point. I

5:54

don't know. Well anyway now I'm gonna

5:56

install it because I've found out. I'd

6:00

be inclined to trust it. More

6:02

bits is better. Is it always better? That's

6:04

right, baby. If we learned anything in crypto,

6:07

the more bits you got, the better. That's

6:10

what did you call that? A padding of some

6:12

kind, right? I

6:15

can't remember. You had a good name for it.

6:18

Perfect paper passwords, password haystacks

6:20

or something, right? Is

6:22

it haystacks? Well the haystacks is an interesting

6:24

idea, but that's different than this. That's a

6:26

different one. Okay. Anyway, enough of

6:29

that. Enough folderol. Let's talk about

6:31

our sponsor Delete Me, and I know a

6:33

lot about Delete Me because we use it.

6:36

We started using it when a malicious,

6:40

oh, I'm going to call him a bad

6:42

guy, decided to pose as

6:44

my wife, the CEO of the company, and

6:47

sent a text message to all our direct reports

6:49

saying I'm in a meeting right now. I need

6:51

you to immediately buy $1,000 worth of

6:53

Amazon gift cards and send them to this address. Now

6:57

our employees are a little

7:00

smarter than that, but what really

7:02

chilled me was not only did they

7:04

know Lisa's name and the company and

7:06

her phone number so they

7:09

could impersonate it, they knew her

7:11

direct reports, they knew their phone

7:13

numbers, and that was scary. Delete

7:15

Me deletes your personal information

7:18

off the internet. Who is the number

7:20

one problem here? Data

7:22

brokers. Until they make a

7:25

comprehensive privacy law in the US, which they

7:27

still haven't, those data brokers are legal and

7:29

run wild. If you've

7:31

ever searched for your name online, you know

7:33

how chilling this is. Almost

7:36

all your personal information is right there for

7:38

anybody to see. It's

7:41

not just you. If

7:43

you're a business, it's your company. If

7:47

you're a father or mother of a

7:49

family, it's your family, right? Delete

7:51

Me helps reduce risk from identity

7:53

theft, cybersecurity threats, harassment, and more.

7:55

Where do they get that information?

7:58

Data brokers. Delete me.

8:00

Now once you sign up for delete me, their experts

8:03

will find and remove your information from the data brokers.

8:06

And if you're doing the family plan, you can

8:08

assign a unique data sheet to each family member

8:11

tailored to them with easy

8:13

to use controls. Account owners can manage privacy

8:15

settings for the whole family. And

8:18

here's this is really important. After they do the

8:20

first initial delete, they will continue to scan and

8:22

remove your information regularly. I'm

8:24

talking addresses, photos, emails, relatives, phone

8:27

numbers, social media, property value, you

8:29

name it. It's out

8:31

there and it's available to the highest bidder.

8:34

You don't even have to be the highest bidder. They sell it

8:36

cheap. Protect

8:39

yourself. Reclaim your privacy. Visit

8:41

joindeleteme.com. That's what we use.

8:44

The offer code is TWIT. You're gonna get 20% off when

8:46

you do that. joindeleteme.com.

8:50

Promo code

8:52

TWIT for 20% off.

8:56

All right, I am prepared to show

8:58

you the picture of the week as

9:01

soon as you say so. It is

9:03

a fave. I haven't looked at it yet.

9:06

It's just quick,

9:09

visual, simple, fun.

9:11

Okay. Do

9:14

you want me to show it and then you describe

9:16

it? Is that how you want to? I'd really, well

9:18

okay, yes. I would be good. All

9:20

right, so I have to figure out how

9:22

to show it first. There it

9:24

is. You know, it's funny, my thing is moving around

9:27

all the time and I don't, sometimes my, now it's

9:30

this laptop so it's just gonna take me

9:32

a second. Why don't you set it up

9:34

anyway? Okay, so anyway, the

9:37

picture, and I should note Leo, that 4,330 subscribers

9:39

to the SecurityNow

9:45

mailing list received this three

9:47

hours ago. Oh, so

9:50

they're already up on it. They know

9:52

all about it. Yeah, and the email

9:54

contained a thumbnail which a bunch of

9:57

them clicked in order to see the

9:59

full-size image. So anyway, I

10:02

gave this the title correlation

10:04

is not causation. That's a

10:07

very important concept that people

10:09

need to understand. Absolutely

10:11

is. Yes. And

10:14

we have a cute little, I'm not sure what

10:16

he is, kind of a dog, but

10:19

a small dog that's been

10:21

leashed to a... Isn't

10:27

it wonderful? He wants to get away badly.

10:30

He's looking at his master saying, hey,

10:33

what about me? Why am I

10:35

stuck here? So

10:37

he's tied up to this, what

10:39

would you call that? A bollard.

10:42

A bollard, yes. However,

10:45

something in the past has

10:47

whacked this bollard off to

10:49

the right so that it's

10:51

a leaning bollard. And

10:53

if you didn't know better, you'd think

10:56

that this was mighty dog and that

10:58

in trying to join his owner,

11:01

he had tugged

11:03

at this thing and yanked it

11:05

almost out of the pavement. Anyway,

11:07

I would commend our listeners to

11:09

go find today's picture of the

11:11

week because it is, it's a

11:14

goodie. A lot of

11:16

fun. And thank you

11:18

to whomever it was who sent it to

11:20

me. Okay, so last

11:23

week we opened with the

11:25

news that the previous

11:28

week's monthly Windows Patch

11:30

Fest had quietly

11:33

closed a remarkably worrisome

11:35

flaw that

11:37

had apparently been sitting undiscovered in

11:40

every Windows native Wi-Fi network

11:42

stack since the last time

11:45

Microsoft poked at it. And

11:47

there's been no definitive statement about

11:50

this because it appears that even

11:52

Microsoft is quite freaked out by

11:54

this one. A listener

11:56

of ours, Steven CW, sent

11:58

a relevant question. He said,

12:01

Hi Steve, long time listener,

12:03

our corporate IT group vets

12:05

windows patches, thus delaying them.

12:09

In the meantime, does turning off

12:11

the Wi-Fi adapter prevent the attack

12:13

you described? Okay,

12:16

now given the havoc that

12:18

past mistakes in Windows updates have

12:20

caused for corporate IT, especially remember

12:23

a couple of years ago when

12:25

Microsoft kept wiping out all printing

12:27

capability enterprise-wide, you know about like

12:29

once a month they would do

12:31

that, I suspect that

12:33

many organizations may have adopted a wait

12:35

and test to avoid

12:37

subjecting their users to such

12:39

mistakes. And it's

12:41

typically the case that even though 50

12:44

to more than 100 flaws may be

12:46

fixed in any given month, nothing

12:49

is really happening that's highly

12:51

time sensitive. But that's not

12:53

the case with this month's revelations. What

12:57

I saw and mentioned last

12:59

week at GitHub did

13:02

not make any sense to me since

13:04

it appeared to be too high

13:06

level. Remember I mentioned in passing

13:08

that there was already an exploit

13:10

on GitHub. Well since then someone

13:12

else appears to have found a

13:15

way to overflow the

13:17

oversized 512-byte

13:20

buffer which Windows

13:22

Wi-Fi driver provides for

13:24

SSIDs. But that's

13:26

not this problem. He

13:28

wrote thinking that this was

13:30

the critical CVE-2024-30078. He

13:36

initially wrote, CVE-2024-30078 describes a vulnerability in

13:38

the way Windows handles SSIDs, you

13:45

know service set identifiers in

13:47

Wi-Fi networks. Windows has

13:49

a buffer for SSIDs up to 512

13:52

bytes long which exceeds the

13:55

Wi-Fi standard. By

13:57

sending chunked frames

13:59

to increase the SSID size

14:01

beyond 512 bytes, a buffer

14:03

overflow can be triggered. He

14:06

says this exploit leverages this

14:08

vulnerability to cause a buffer

14:11

overflow by creating and sending

14:13

Wi-Fi beacon frames with oversized

14:15

SSID fields. Okay,

14:17

so that's a problem. But

14:20

then he realized that he had

14:22

found a different flaw from what Microsoft

14:26

patched, so in an update

14:28

he subsequently added, he said,

14:31

info. This repo does

14:33

not seem to be hitting the same

14:36

bug as in the stated CVE.

14:38

New information has

14:40

come to my attention thanks to

14:42

FarmPoet. The CVE-2024-30078 vulnerability

14:48

is in the function .11 translate 802.11

14:51

to Ethernet NDIS packet, that is, Dot11Translate80211ToEthernetNdisPacket.

14:58

And I should say that absolutely based

15:00

on that function name that makes total

15:02

sense. And he said of

15:04

the native Wi-Fi Windows driver nwifi.sys

15:06

where a very

15:09

specific frame needs to be constructed

15:12

to get to the vulnerable code

15:14

path, which as he

15:16

said his code, his current code

15:18

does not. So he said

15:20

I'm working on it. I've identified

15:22

the changes in the patched function

15:24

and I'm now working on reversing

15:27

to construct the relevant frame required

15:29

to gain code flow into this

15:31

segment. Okay,

15:33

so we have this guy

15:35

publicly working on a public

15:38

exploit for this very worrisome flaw and

15:40

we're about to see why it turns

15:43

out this is a lot worse than

15:45

it first seemed. Meanwhile

15:47

it may be that anyone who has

15:49

a spare $5,000 may

15:53

be able to purchase a working

15:55

exploit without waiting for a freebie

15:57

on github. online

16:00

publication Daily Dark Web,

16:03

believe it or not, there is such a thing,

16:05

writes, a threat actor has

16:07

announced the sale of an exploit for CVE

16:09

2024-30078, a remote code execution

16:14

vulnerability in the Wi-Fi driver

16:16

affecting all Windows Vista and

16:19

later devices. In

16:21

their announcement, the threat actor details

16:23

that the exploit allows for remote

16:26

code execution over Wi-Fi. Get

16:28

a load of this though, leveraging compromised

16:31

access points or saved

16:33

Wi-Fi networks. I'll

16:36

get more to that in a

16:38

second. The exploit reportedly works by

16:40

infecting a victim through Wi-Fi router-based

16:42

malware or simply by having the

16:44

victim's device be within range of

16:46

a Wi-Fi network they've previously connected

16:48

to. The exploit code is

16:51

offered for sale at $5,000 US with

16:55

the price being negotiable. The

16:57

seller also offers to develop custom

16:59

solutions tailored to the buyer's needs.

17:02

Anastasia, the new owner of the forum,

17:05

is listed as the escrow for this

17:07

transaction. Interested parties are instructed

17:09

to send a private message to the

17:11

threat actor with a warning that time

17:13

wasters and scammers will be ignored. Okay,

17:15

now, in

17:18

the first place, while we have no way

17:20

to confirm this, from what we've seen before,

17:23

it's entirely believable that

17:26

several weeks downstream from

17:29

the release of a patch, which

17:31

will have altered the binary

17:33

of some Wi-Fi component of

17:35

Windows, that by diffing,

17:37

you know, in hacker

17:39

parlance, the pre-

17:41

and post-patched files, the

17:44

change that Microsoft made to repair

17:47

the original driver's defect can be

17:49

readily found. This is what the

17:51

guy on GitHub is already

17:54

doing.
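To make the "diffing" idea concrete, here is a minimal sketch in Python, assuming two hypothetical copies of a driver saved to disk before and after patching (the file names are placeholders, not real Microsoft artifacts). Real patch diffing is done on the disassembly with tools such as BinDiff or Diaphora; a raw byte comparison like this only shows where a binary changed, not what the change means.

    # Toy byte-level "diff" of a pre-patch and post-patch binary.
    # File names below are hypothetical placeholders.

    def changed_regions(old_path, new_path):
        """Yield (offset, old_bytes, new_bytes) for each run of differing bytes."""
        with open(old_path, "rb") as f:
            old = f.read()
        with open(new_path, "rb") as f:
            new = f.read()

        i, limit = 0, min(len(old), len(new))
        while i < limit:
            if old[i] != new[i]:
                start = i
                while i < limit and old[i] != new[i]:
                    i += 1
                yield start, old[start:i], new[start:i]
            else:
                i += 1
        if len(old) != len(new):
            # One file is longer; report the tail as one trailing change.
            yield limit, old[limit:], new[limit:]

    if __name__ == "__main__":
        for offset, before, after in changed_regions("nwifi_pre.sys", "nwifi_post.sys"):
            print(f"0x{offset:08x}: {before.hex()} -> {after.hex()}")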

17:56

But the really interesting attack

17:58

vector that occurred to me

18:00

when we first talked about this last week, but obviously

18:02

has occurred to the author of this $5,000 for sale

18:05

exploit, is the idea of infecting

18:11

vulnerable consumer routers or

18:13

corporate wireless access points,

18:16

which might well be, you know,

18:19

half the world's circumference away.

18:22

In other words, if a

18:24

vulnerable Wi-Fi router is available

18:27

anywhere in the world, it

18:29

could be infected with knowledge of

18:32

this critical Windows flaw, so

18:35

that any unpatched Windows Wi-Fi

18:37

laptop within range of that

18:40

router could be compromised,

18:42

and that would be

18:44

a very remote attack. It's

18:46

clear that the only reason Microsoft

18:49

was able to get away with

18:51

labeling this flaw as only being

18:53

important with a CVSS of

18:57

8.8 instead of critical with a CVSS of

18:59

9.8, or maybe even 10, is

19:02

that it required

19:05

a nearby attacker? At

19:09

least that was the theory, but

19:11

in reality all it requires

19:13

is a nearby hostile radio,

19:15

and thanks to

19:17

the historical vulnerability of consumer

19:20

and enterprise routers, that's not

19:22

a high bar. The observation

19:24

here is that a

19:26

maliciously infected router may

19:29

not be able to attack the

19:31

machines connected to it by wire,

19:34

because there are no known

19:36

exploitable vulnerabilities in their wired

19:38

Ethernet network stacks, but

19:41

that same router may now

19:43

be able to successfully attack

19:45

those same or other machines

19:48

within its wireless reach, thanks

19:51

to the known presence of a, by

19:54

Microsoft's own assessment, readily

19:57

exploitable, low-complexity, highly reliable

20:00

likely to succeed flaw that

20:03

exists in any Windows machine since

20:05

Vista which has not yet received

20:07

the patch that first appeared only

20:09

two weeks ago. So to

20:13

answer Steven CW's question about

20:16

whether turning off, you know, disabling

20:19

the Wi-Fi on a machine would

20:21

protect it, the answer has

20:23

to be yes. Everything we

20:25

know, although I have to say I

20:27

looked around and as I

20:30

said Microsoft is oddly mute

20:33

on this whole thing. Normally you would expect

20:35

them to say mitigation, disable

20:38

Wi-Fi, but maybe they presume

20:40

so many people are using

20:43

Wi-Fi that you can't

20:45

really call it a mitigation if

20:47

taking the machine

20:49

off the network is what it

20:51

takes to mitigate the problem. So

20:54

they're not suggesting that but yes

20:56

everything we know informs us that

20:58

turning off Windows Wi-Fi adapters will

21:00

completely protect any unpatched machine from

21:02

the exploitation of this vulnerability [a minimal sketch of that mitigation appears at the end of this segment]. Yeah, but

21:05

you could also remove the machine from the

21:07

internet entirely, air gap it

21:09

and that would be good too. Or

21:11

Leo, I had something, it just

21:13

hit me, turn it off. Yeah

21:16

that'll fix it too. What a concept. That's

21:18

right pull the plug, shut it

21:20

down, you're safe. Anyway

21:22

I wanted to conclude this week's follow-up on

21:25

this CVE by making

21:27

sure everyone understands that

21:29

the addition of this

21:31

remote router extension to

21:33

this vulnerability really

21:35

does change the game for it.

21:38

We know tens of thousands

21:41

of routers have already been

21:43

and are taken over and

21:46

are being used for a multitude

21:48

of nefarious purposes launching DDoS attacks

21:50

forwarding spam email as proxies to

21:53

probe the internet for other known

21:55

weaknesses and on and on. So

21:58

the bad guys are going to realize

22:01

that by updating the malware

22:03

that's already within their compromised

22:06

router fleets, they'll

22:08

be able to start attacking and

22:10

hijacking any Windows machines

22:12

that have not yet been

22:14

updated that have their wireless

22:16

turned on. And for

22:19

whatever reason history tells us that there will

22:21

be many such machines updating seems

22:23

to be a slow process. You

22:25

know and if for example Steven

22:27

CW acknowledged

22:30

that his corporate IT people

22:32

you know they're waiting now

22:34

because there's been too

22:37

much history of updates you

22:39

know destroying you know corporate

22:41

IT functioning so they're they're

22:43

taking a cautious process. Anyway

22:45

it's going to be interesting to see

22:48

how

22:52

long it takes bad guys to leverage

22:55

the idea of pushing

22:57

this flaw out to the routers

22:59

and seeing if they can remotely

23:01

grab wireless machines.
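As a practical footnote to Steven CW's question, here is a minimal sketch of the interim mitigation discussed above: disabling the wireless adapter on a not-yet-patched Windows machine. It assumes the adapter is named "Wi-Fi" (it often isn't; "netsh interface show interface" lists the real name) and that it runs from an elevated prompt; it simply wraps netsh's documented interface command and is not specific to this CVE.

    # Sketch: disable (or re-enable) a Windows wireless adapter via netsh.
    # Assumes an elevated prompt; ADAPTER_NAME is a placeholder that varies
    # per machine (check "netsh interface show interface").

    import subprocess

    ADAPTER_NAME = "Wi-Fi"  # placeholder interface name

    def set_wireless(enabled: bool) -> None:
        state = "enabled" if enabled else "disabled"
        subprocess.run(
            ["netsh", "interface", "set", "interface",
             f"name={ADAPTER_NAME}", f"admin={state}"],
            check=True,
        )

    if __name__ == "__main__":
        set_wireless(False)   # turn Wi-Fi off until the June patch is applied
        # set_wireless(True) restores it afterward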

23:06

I'll share this piece of

23:08

news and interject some of

23:10

my thoughts along the way and Leo I know

23:13

you reacted a little bit as I did and I'm

23:17

of two minds so it

23:19

creates for some interesting

23:22

dialogue. Last Thursday Kim

23:24

Zetter writing for Zero Day posted

23:27

the news. The US government

23:29

which did it on the same day

23:32

has expanded its ban

23:35

on Kaspersky software in

23:37

a new move aimed at

23:39

getting consumers and critical

23:42

infrastructure to stop using the Russian

23:45

company's software products, citing

23:48

of course national security

23:50

concerns. The ban

23:52

using new powers granted to the

23:54

US Commerce Department would prohibit the

23:56

sale of Kaspersky software anywhere

23:58

in the US, and

24:01

would also prevent the company

24:03

from distributing software security updates

24:06

or malware signature updates

24:09

to customers in the US. In other

24:11

words, they're being cut off. Signatures,

24:14

they explain or Kim explains,

24:16

are the part of the antivirus

24:19

software that detect malicious threats.

24:21

Antivirus vendors push new signatures to

24:23

customer machines, often on a daily

24:26

basis, to keep their customers protected

24:28

from new malware and threats as

24:30

the vendors discover them. Without

24:33

the ability to update the

24:35

signatures of customers in the

24:37

US, the ability of

24:39

Kaspersky software to detect threats on

24:41

those systems will significantly degrade over

24:44

time. The US Commerce

24:46

Department announced the ban

24:48

on Thursday after what

24:50

it said was an extremely

24:54

thorough investigation, but

24:58

did not elaborate on the

25:00

nature of the investigation

25:03

or what it may have uncovered

25:05

if anything. US

25:08

Secretary of Commerce Gina

25:10

Raimondo told reporters in a

25:12

phone call, given

25:14

the Russian government's continued

25:17

offensive cyber capabilities and

25:20

capacities to influence

25:22

Kaspersky's operations, we

25:25

have to take the significant

25:27

measure of a full prohibition

25:30

if we're going to protect Americans

25:32

and their personal data. Russia,

25:35

she said, has shown it

25:37

has the capacity and even

25:39

more than that the intent

25:41

to exploit Russian companies like

25:44

Kaspersky to collect and

25:46

weaponize the personal information of Americans

25:48

and that's why we're compelled to

25:50

take the action we're taking today.

25:55

Wow okay so in other words we

25:58

don't like their zip code So,

26:00

we're going to deny a

26:02

company against whom we have

26:05

no actionable evidence of wrongdoing

26:08

all access to the American

26:11

market because, being a

26:13

Russian company, they could be

26:15

forced to act against us. And

26:19

as I said, I'd say that I'm

26:21

evenly divided on this. Over

26:23

the years, we've covered

26:26

countless instances where Kaspersky

26:28

has been hugely beneficial

26:30

to Western software and

26:33

to Internet security globally.

26:36

Thanks to their work, for the past many

26:38

years, the world is a safer place than

26:41

it would otherwise be. So

26:43

to say, we don't like where

26:45

you live, so we cannot trust you,

26:47

is a bit brutal. But

26:50

at the same time, it is also

26:52

understandable because, being

26:54

in Russia, it's possible that

26:57

their actions may

26:59

not always reflect their values. And

27:02

it's not as if operating within a state where

27:05

we democratically elect our representatives

27:07

is all that much different,

27:10

right? After all, in the

27:12

US, we have warrant canaries.

27:15

Remember that Wikipedia explains a warrant

27:17

canary by writing, A

27:19

warrant canary is a method by

27:22

which a communications service provider aims

27:25

to implicitly inform its

27:27

users that the provider

27:30

has been served with a

27:32

government subpoena despite legal prohibitions

27:34

on revealing the existence of

27:37

the subpoena. The warrant

27:39

canary typically informs users that there

27:41

has not been a court-issued

27:44

subpoena as of a particular

27:46

date. If the canary

27:49

is not updated for the period specified

27:51

by the host, or if the warning

27:53

is removed, users might

27:55

assume the host has

27:58

been served with such a subpoena.

28:00

The intention is for a provider to

28:03

passively warn users of the existence

28:05

of a subpoena, albeit violating the

28:07

spirit of a court order not

28:09

to do so while

28:11

not violating the letter of the

28:13

order.
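Since the mechanism is simple enough to automate, here is a minimal sketch of the kind of freshness check a user could script, assuming a hypothetical canary published as plain text whose body contains an ISO-style "as of" date. The URL, format, and 95-day window are all illustrative; real canaries vary widely and are often PGP-signed, which this sketch ignores.

    # Sketch: check whether a warrant canary has been updated recently.
    # CANARY_URL and the date format are hypothetical; real canaries differ.

    import re
    import urllib.request
    from datetime import date, timedelta

    CANARY_URL = "https://example.com/canary.txt"  # placeholder URL
    MAX_AGE = timedelta(days=95)                   # e.g. a quarterly schedule

    def canary_is_fresh() -> bool:
        try:
            with urllib.request.urlopen(CANARY_URL, timeout=10) as resp:
                text = resp.read().decode("utf-8", errors="replace")
        except OSError:
            return False  # unreachable or removed: treat as a warning sign

        match = re.search(r"(\d{4})-(\d{2})-(\d{2})", text)
        if not match:
            return False  # no date found: also a warning sign

        stated = date(*map(int, match.groups()))
        return date.today() - stated <= MAX_AGE

    if __name__ == "__main__":
        print("canary looks current" if canary_is_fresh()
              else "canary stale or missing -- a subpoena may have been served")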

28:16

is what state

28:19

entities do. And

28:21

in other words, the US, our

28:24

courts are able to say, we

28:26

demand that you turn over information within

28:28

a certain scope and by the way,

28:30

you're legally forbidden from disclosing that we've

28:33

asked and that you have complied. So

28:36

it's not my intent to

28:38

pass moral judgment here. I'm just saying

28:41

that what we see is

28:43

unfortunately, you know, all

28:46

nation states will act to

28:48

protect their interests and

28:50

that their client citizens have

28:52

little choice other than to

28:54

comply. So Kim's piece

28:58

continues. Asked

29:01

what evidence the government

29:03

found to support concerns that

29:05

the Russian government is using

29:07

Kaspersky software to spy on

29:10

customers, Raimondo and

29:12

other government officials on the

29:14

call declined to provide specifics.

29:17

One senior commerce official said on

29:19

background, in terms of

29:21

specific instances of

29:24

the Russian government using Kaspersky

29:26

software to spy, we

29:29

generally know that the Russian

29:31

government uses whatever resources are

29:33

available to perpetrate various

29:36

malicious cyber activities. We

29:39

do not name any particular

29:41

actions in this final determination,

29:43

but we certainly believe that

29:46

it's more than just a theoretical

29:48

threat that we describe. And that's

29:50

right, because these days, as we know,

29:53

belief is all that's needed. Kim

29:56

writes, the ban will not go

29:58

into effect until September 29th

30:01

to give existing Kaspersky customers

30:03

in the US time

30:06

to find a replacement for

30:08

their antivirus software. The

30:11

ban on new sales of Kaspersky software in

30:13

the US, however, goes into effect on July

30:15

20th. Sellers

30:17

and resellers who violate the ban could

30:20

be subject to fines from the Commerce

30:22

Department and potentially criminal action. In

30:24

addition to the ban, the

30:26

Commerce Department also put three

30:29

Kaspersky entities on its trade

30:31

restrictions and entities list, which

30:33

would prohibit US-based suppliers from

30:36

selling to Kaspersky, though it's

30:38

unclear if Kaspersky currently has

30:40

US suppliers. A

30:42

Kaspersky spokesman in a statement

30:44

to Zero Day accused the

30:47

Commerce Department of making its

30:49

decision quote, based on the

30:51

present geopolitical climate and

30:53

theoretical concerns rather than on

30:56

a comprehensive evaluation.

30:59

Right. I mean, I feel bad for

31:01

Eugene Kaspersky. Everybody loves him. Yes.

31:04

But why not? I mean, we don't, we don't have

31:06

to have it, right? The ban

31:08

Huawei phones. I mean, right.

31:12

I mean, this is what we're beginning to

31:14

see happen, right? As

31:17

we, as we choose sides and

31:19

we, you know, pull the pull

31:21

the bridges, the, the, the draw

31:23

bridges up that you, the of,

31:25

of, of global commerce that

31:27

used to interconnect everyone. So

31:30

anyway, they apparently think they have

31:33

some legal standing. This

31:35

spokesperson said, we will continue

31:37

to defend ourselves against actions.

31:40

That's our reputation

31:42

and commercial interests. So

31:46

okay. Now as a little

31:48

bit of background, the department of Homeland

31:50

security had previously issued a directive in

31:52

2017 banning federal government

31:55

agencies and departments, not

31:57

consumers. And like everybody

31:59

now. which is what is about to

32:01

happen. So this was in 2017 just federal government agencies

32:05

and departments from installing Kaspersky

32:07

software on their systems. DHS

32:10

had also not cited any specific

32:12

justification for its ban at the

32:14

time but media reports

32:17

citing anonymous government officials back then

32:19

cited two incidents and we talked

32:21

about them on a podcast. According

32:24

to one story, an NSA contractor

32:27

developing offensive hacking

32:30

tools for the spy agency had

32:33

Kaspersky software installed on his

32:35

home computer. Yeah we reported

32:37

this story. Remember this? Yes,

32:40

yes. This was those NSA tools.

32:43

Right, he was developing these

32:45

NSA tools and the Kaspersky

32:48

software detected the source code

32:50

as malicious and extracted it

32:52

from the computer as

32:55

AV software often does. They

32:57

quarantined it. Yeah. Well they'd

32:59

actually send it to Kaspersky. And that's what

33:01

all anti viruses do. They quarantined it and

33:04

they sent it in. Exactly.

33:06

To the home office which in this case was

33:08

in Moscow. That was the Eternal

33:10

Blue leak. Yes. So

33:13

a second story claimed that Israeli spies

33:16

caught Russian government hackers using

33:18

Kaspersky software to search customer

33:20

systems for files containing US

33:22

secrets. So okay you

33:25

could install Kaspersky as

33:27

you can many other tools. You

33:29

know Mark Racinovich's PS

33:32

exec is a favorite tool

33:35

for bad guys to use. But

33:37

you know its intention is benign.

33:39

So Kaspersky for their

33:42

part denied that anyone

33:44

used its software to explicitly search

33:46

for secret information on customer machines

33:48

and said that the tools detected

33:51

on the NSA workers machine were

33:53

detected in the same way that

33:55

all AV software is designed to

33:57

detect malware on customer machines.

34:00

It really was malware. Right,

34:03

right. Exactly. They were, they

34:05

were developing NSA malware for the

34:07

NSA. And you know, it's

34:09

funny too, cause it's a little reminiscent of

34:11

the Plex breach, which of course is the

34:14

way LastPass got themselves in trouble. You have

34:16

some, you know, some third party contractor using

34:18

your tools at home on their home machine

34:21

where they've

34:23

got, you know, AV software installed. Right.

34:26

It's like, whoops, not

34:28

quite secure. So anyway,

34:31

following that 2017 DHS directive, Best

34:35

Buy and other commercial software

34:37

sellers that had contracts with

34:39

Kaspersky to sell computers

34:41

with Kaspersky AV software pre-installed on

34:43

those systems, subsequently announced they would

34:46

no longer install the software on

34:48

computers they sold. This

34:50

didn't, however, put an end to

34:52

existing customers using Kaspersky software or

34:54

prevent new customers from purchasing the

34:57

software on their own. Today's

34:59

ban is designed to convince those

35:01

customers to stop using the software as

35:06

well. And get this, commerce

35:08

secretary Raimondo told

35:11

reporters when Americans have software from companies

35:14

owned or controlled by countries

35:16

of concern, such as

35:19

Russia and China integrated into

35:21

their systems. It makes all

35:23

Americans vulnerable. Those

35:25

countries can use their authority over

35:28

those companies to abuse that software

35:31

to access and potentially exploit

35:34

sensitive US technology and data.

35:38

You know, and I'll just note that the United

35:40

States is no different in that regard. It's just

35:42

that we're here and they're there. We know

35:45

we've covered the news that China's

35:47

government is similarly urging its businesses

35:49

to stop using windows. You know,

35:51

we clearly have a new cyber

35:53

cold war heating up and

35:56

unfortunately choosing sides and

35:58

cutting ties is part of the process. So

36:02

anyway, it's

36:09

unfortunate that a

36:13

product that many people use is not

36:17

going to be available. At the same

36:19

time, I guess

36:24

I, it feels to

36:26

me like Kaspersky's employees should have

36:28

seen the writing on the wall. They've

36:31

seen the tensions between the U S

36:33

and Russia heating up, you

36:35

know, it's easy for us to say, well, you know,

36:37

they could have left Russia, but they probably love Russia

36:39

as much as we love the U S. So, you

36:42

know, for most of them, it's just, it's just

36:44

a job. And I mean,

36:47

Eugene Kaspersky was trained at

36:49

the KGB technical school and

36:52

did have a job in the ministry

36:54

of defense when he founded Kaspersky antivirus.

36:56

So there are deep connections to

36:58

the Russian government and the GRU. Um, so

37:02

I, and I would note also

37:05

that AV software in particular has

37:08

a very intimate relationship

37:10

with, with an operating system.

37:12

It is in the kernel.

37:15

If it is absolutely true, and this

37:18

is the kind of thing that, that

37:20

keeps the military mind up all night.

37:23

You know, here we have a Russian company

37:26

that has an active connection to

37:28

all of the customers

37:31

machines in

37:33

the U S and you

37:36

know, it's not an, you

37:38

know, a text editor. It's running

37:40

a driver in the kernel. This

37:43

ain't TikTok. This is exactly.

37:45

Yeah. If it did want to

37:47

get nasty in

37:50

an instant, it could take

37:52

over all of the machines

37:54

where it's installed and that's been banned

37:56

on government machines for a long time.

38:00

2017. Yes. So I mean, yeah. It

38:04

makes sense. I mean, I

38:07

feel bad for Eugene. Everybody,

38:09

you know, part of the reason people are

38:11

upset is everybody loves Eugene Kaspersky. Dvorak used

38:13

to recommend Kaspersky all the time. He

38:15

loved it. But mostly because he used to

38:18

hang with Eugene and drink vodka during Comdex.

38:23

So, but honestly, there are plenty of

38:25

antiviruses out there. One

38:28

could argue you don't even really need an

38:30

antivirus. I was going to say, and you

38:32

and I, Leo, and my wife and everyone

38:35

I have any influence over, no longer uses

38:37

any. We just use Windows Defender. And

38:39

believe me, it apparently is doing the job because

38:41

it sure is a pain for me. Oh, yes.

38:45

Let's take a break. Oh, what a

38:47

thought. My goodness. Yeah.

38:52

Okay. Good. Okay. Yeah.

38:55

Let me just pull the copy up here. I don't

38:58

know. I'm a little, I'm

39:00

a little off today. Well, I'll just

39:02

note while you're doing that, that the

39:05

Kremlin has extended the duration on its

39:07

ban on Russian government

39:09

agencies and critical organizations

39:11

using IT services

39:14

from unfriendly countries. That's exactly what's going

39:17

to happen. And that ban will enter

39:19

into effect the extension on January 1st.

39:22

So that was when the previous one was going

39:24

to expire and we still don't

39:26

like each other. So, you

39:28

can't use those nasty Windows computers. I

39:30

mean, in a perfect world, I mean,

39:34

I often think that the best way to

39:38

keep from going to war is to

39:40

have economic dependency on each other. Yes.

39:42

Yes. That's why it's like it makes

39:45

no sense for us to be upset

39:47

with China. Everything we own comes from

39:49

China. Right. And

39:51

as a result, China is less likely to,

39:53

you know, screw with us. I would think

39:55

so. But maybe not. Who knows?

39:58

It's like, why are they messing with Taiwan? Taiwan

40:00

where the chips come from that seems they're out

40:02

of their mind. But see

40:04

that's more of a that's more see this is

40:06

a problem. That's not a rational thing. That's more

40:08

of an emotional thing just

40:10

like Russia invading Ukraine because it used

40:12

to be part of Russia and

40:15

they want it back. But

40:19

that's emotional. It's not rational clearly All

40:23

right, let's talk about our sponsor

40:25

for this particular portion of the show and actually

40:27

it's very timely because if

40:30

that NSA Contractor had

40:33

been running 1Password on

40:35

his machine. He wouldn't have had this problem

40:39

Every business should be using

40:42

1Password's Extended Access Management.

40:45

Let me talk about this and I

40:47

think you're gonna recognize an old friend

40:49

here in a perfect world End

40:52

users would only work on managed devices with

40:54

IT approved apps right apps that are up-to-date

40:56

apps that are clean apps that

40:58

are secure Unfortunately doesn't

41:01

work that way as you well know in your

41:03

enterprise everyday employees are bringing

41:05

in BYOD their personal devices and

41:08

Unapproved apps that aren't protected by MDM

41:10

or IAM or any other security tool

41:13

There is a big gap between the security

41:15

tools. We have in the way they

41:18

actually work, and this is where 1Password

41:20

really can

41:22

get involved they call it the access

41:24

trust gap and they've

41:26

created the first-ever solution to fill it:

41:29

1Password Extended Access Management. It

41:31

secures every sign-in for

41:34

every app on every device Now

41:37

you of course, you know one password is

41:39

a password manager. It includes that it's the

41:41

password manager I probably close

41:44

to number one password manager now certainly

41:46

well known and well loved an enterprise

41:49

the device Trust solution

41:51

you also have heard of because it's Kolide

41:53

now you remember we were we were talking

41:56

about Kolide for a long time. And

41:58

then we said Kolide got acquired by 1Password.

42:00

So this is now the

42:02

alliance of 1Password and Kolide. This is

42:05

the resulting tool and what a great

42:07

idea, the 1Password Extended Access

42:09

Management. It cares about the user

42:11

experience and privacy, which

42:13

frankly means it can go places

42:16

other tools can't like personal devices,

42:18

contractor devices. It ensures that every

42:20

device is known and important,

42:22

healthy, and every

42:24

login is protected. So

42:27

stop trying to ban BYOD

42:29

or shadow IT. Start protecting

42:32

them with 1Password Extended Access

42:34

Management. Check it

42:37

out at 1password.com/security now.

42:40

Two names you know

42:42

and love together at last and now really

42:44

making a tool that everyone should have. That's

42:46

1-P-A-S-S-W-O-R-D 1password.com

42:52

slash security now. The one is

42:54

a number one. Thank you 1Password.

42:58

This is a perfect idea. Glad it's here.

43:02

Now on with the show. Glad you're here, Steve. Okay,

43:05

so I

43:08

saw a short blurb in the

43:10

Risky Business newsletter and all it

43:12

said was, sorry? When

43:15

you say Risky Business, I think of

43:17

Tom Cruise in Underpants. Yeah, I know.

43:19

Okay. Always. Always. Okay, so it just

43:21

was a short blurb. It said

43:24

Google Chrome complaint and

43:27

it read European Privacy

43:29

Organization, noyb, and it's

43:33

not capitalized, so noyb, I

43:35

don't know, noyb, it's

43:38

Austrian, has filed a complaint

43:41

with Austria's data protection

43:43

agency against Google for

43:45

its new privacy sandbox

43:48

technology. noyb

43:50

says Google is tricking

43:52

users to enable

43:55

privacy sandbox by

43:57

falsely advertising it as an ad

44:00

privacy feature. Of

44:02

course my reaction to that was, what?

44:06

So I dug a bit deeper. I went

44:08

over to the N O Y B E

44:10

U website and found their article with the

44:13

headline, Google Chrome: agree

44:16

to 'privacy feature' (it has it in

44:18

quotes) but get tracking.

44:20

Okay so their

44:23

piece begins with after years

44:25

of growing criticism over

44:28

invasive ad tracking Google

44:30

announced in September of 2023 that

44:32

it would phase out third party cookies

44:35

from its Chrome browser. Wait a minute

44:37

NYOB stands for none of your

44:39

business. That's

44:42

really good. In German I guess. Maybe

44:44

not. No it's English. Really? It's the

44:46

European Center for Digital Rights. N O

44:48

Y B. That is perfect.

44:51

None of your business. Okay

44:54

so this guy saying

44:56

they after years of

44:58

growing criticism over invasive ad tracking Google

45:00

announced it would phase out third party

45:03

cookies from its Chrome browser. So this

45:06

is already misleading because

45:08

while it's true that Google has been

45:10

using the same ad tracking

45:12

that the rest of the

45:14

advertising and data aggregation industry

45:17

uses the growing criticism has

45:19

been over the entire industry's

45:22

use of ad tracking not

45:24

just Google's. You know

45:26

as we've been carefully covering here what

45:29

Google is hoping to do with their privacy

45:31

sandbox is to change

45:33

the entire model of

45:35

the way advertising and its

45:38

user profiling operates by

45:40

inventing an entirely new way

45:42

for a user's browser to

45:45

intelligently select from among available

45:48

advertisements that are seen at

45:50

websites and

45:52

we've already heard from one of

45:54

our listeners whose job it is to implement the

45:57

server side technology of a major web

45:59

site, that the rest

46:02

of the non-Google industry is

46:05

massively pushing back against

46:07

Google's attempt to end

46:10

tracking. Google really is trying

46:12

to end tracking and the rest of

46:14

the community says, no, we

46:16

like tracking. We don't want you to take it

46:18

away. Okay, so the beginning

46:20

of this article, I'll just share

46:23

the beginning, it reads, after years

46:25

of growing criticism over invasive ad

46:27

tracking, Google announced in September 2023

46:30

that it would phase out third party

46:32

cookies from its Chrome browser. That part I already

46:34

read. Since then, users

46:37

have been gradually tricked into

46:39

enabling a supposed

46:42

ad privacy feature that

46:45

actually tracks people. Okay,

46:48

it doesn't. While the

46:50

so-called privacy sandbox, again, and it

46:52

quotes from him, is

46:54

advertised as an improvement over

46:57

extremely invasive third party tracking,

46:59

which it is, the tracking is

47:02

now simply done within the browser

47:04

by Google itself, which

47:06

is not true. To do

47:09

this, the company theoretically needs

47:11

the same informed consent from

47:13

users. Instead, Google

47:16

is tricking people by pretending

47:18

to turn on an ad

47:20

privacy feature. noyb

47:23

has therefore filed

47:25

a complaint with the Austrian

47:28

Data Protection Authority. Okay,

47:30

now the article goes on at length and

47:33

it never gets any more accurate. So

47:35

there's no point in dragging everyone through it. It's

47:38

full of misconceptions and an utter

47:40

lack of understanding of what Google

47:42

is trying to do. Google's

47:45

privacy sandbox system explicitly

47:47

does not track users,

47:50

which is precisely why the

47:52

rest of the well-established tracking

47:54

industry is freaking out over

47:56

it and scurrying around trying

47:58

to come up with alternative

48:00

tracking solutions. This noyb is

48:03

a privacy watchdog agency, as I

48:05

said, based in Austria. I looked

48:08

around their site and they appear

48:10

to gauge their value

48:13

to the world by

48:15

the number of complaints they're able

48:17

to file per day. They're

48:20

complaining about everyone and

48:23

everything. So, they're

48:25

kind of like a rabid version

48:28

of the IETF. You

48:30

know, like the IETF, they are

48:32

never going to be happy with

48:35

anything short of complete

48:38

and total legally and

48:40

technically enforced internet anonymity.

48:43

And in a perfect world, that would

48:45

be great. But as

48:48

we know, that's unlikely to happen.

48:51

You know, giving the author of

48:53

this the most sweeping benefit of the

48:55

doubt possible, the only thing I can

48:57

imagine is that he confuses, hopefully

49:01

not willingly, he

49:03

confuses tracking with profiling.

49:06

Those two words are different and so is

49:08

what they mean. You

49:10

know, perhaps he sees no difference.

49:13

Perhaps he doesn't consider Google's privacy

49:15

sandbox to be, you know, the

49:17

ad privacy feature that Google does.

49:20

We're told that websites

49:22

which are able to offer

49:25

identification of the

49:27

viewers of the ads they display,

49:30

or at least some reasonable assurance

49:32

of the relevance to them of

49:34

the ads, can double the revenue

49:37

from their advertising. The

49:40

problem, therefore, is not Google,

49:42

who's been working long and hard

49:45

to find a way to do

49:47

this without tracking. The problem now

49:49

is becoming websites and their advertisers

49:51

who are refusing to change their

49:54

own thinking. It's challenging. By

49:56

the way, you said IETF. Pretty

49:59

sure you didn't mean the IETF.

50:01

Oh my god. Because I don't think

50:03

the IETF cares. Of course. No. Thank

50:05

you, thank you, thank you Leo. You

50:07

just saved me from receiving a thousand

50:10

emails. I just saved the IETF, the

50:12

Internet Engineering Task Force, is not at

50:14

all cares about this at all. Thank

50:16

you so much. I just thought I'd

50:18

bet you. Yes. Not the I, I

50:20

have a couple of errata coming up.

50:22

So we've just reduced. I'm trying when

50:25

I can. Sometimes I miss it, I

50:27

don't know. Thank you. So there you

50:29

go. The IETF.

50:31

Exactly. Thank you. But

50:34

speaking of tracking, after

50:36

last week's podcast, as

50:39

planned, I finished

50:41

the implementation of GRC's subscription

50:43

management front end and

50:45

turned to the email sending

50:48

side. I designed

50:50

a layout and template

50:52

for the weekly podcast announcements

50:54

I plan to start sending

50:56

and Saturday afternoon, US

50:59

Pacific Time, I sent

51:01

the first podcast summary

51:03

email to this podcast's

51:05

4239 subscribers.

51:08

And then this morning, about

51:10

three hours before actually, we

51:13

started an hour later than usual. So

51:15

about four hours before this podcast began,

51:17

I sent a

51:20

similar summary of today's

51:22

podcast to 4330 subscribers

51:25

that list had grown by about 100

51:27

over the past week. So email

51:29

is starting to flow from GRC

51:32

and everybody who has subscribed should

51:34

have it. If you don't find

51:36

it, check your spam folder because

51:38

it may have been routed

51:40

over there. Rob

51:43

in Britain said, Hi, Steve,

51:47

as Apple broke their IMAP

51:50

message read flag a while back,

51:52

I've been using the blue mail app

51:55

to get my mail. Blue

51:57

mail includes a

52:00

tracking image detector, and

52:02

guess what? It flagged

52:04

your email message as

52:07

containing one. As

52:09

a Brit, the irony of

52:11

a security podcast tracking me

52:14

does not escape me. Okay,

52:16

now, Rob was one

52:18

of a couple of people who replied

52:20

with a, what the? When

52:23

their email clients reported that

52:25

a so-called tracking bug was

52:28

present in their email from me,

52:31

and since that's what their client

52:34

calls it, it's natural

52:36

for concern to be raised. So I

52:38

wanted to correct the record about when

52:40

an email bug is tracking someone and

52:43

when it's not. The

52:46

TLDR is, it's not tracking

52:48

you if it's a

52:50

bug you indirectly asked for

52:53

and if it's only

52:55

linked back to whom you asked

52:58

from. The

53:00

confusion arises because our email clients

53:02

have no way of knowing that

53:04

this incoming email is not unwanted

53:07

spam and

53:10

that makes all the difference in the world

53:12

about the purpose and implications of the bug.

53:15

Because if it were an unwanted

53:17

spam email as opposed to

53:19

email everyone has been clamoring

53:22

for, you would definitely

53:24

not want your opening of that

53:26

email to send a ping back

53:29

to the cretins who

53:31

are despoiling the world with spam. But

53:34

in this case, no one is being

53:36

tracked because the image

53:38

link points only back to me,

53:41

back to GRC, the source

53:43

of the email that was sent to you,

53:46

which only those who jumped through some hoops

53:48

to ask for it in the first place

53:50

would have received. Also

53:53

unlike pretty much everyone else

53:55

and against the advice of

53:58

some well-informed others, I,

54:00

GRC, am sending

54:02

the email myself, not

54:05

through any of the typical

54:07

third-party cloud providers that most

54:09

organizations have switched to now

54:11

using. As a

54:13

consequence, the email address our

54:16

subscribers have entrusted to me

54:18

will never be disclosed to

54:20

any third party. And

54:22

as I noted, that single-pixel bug

54:24

is only coming back to me

54:27

to allow me to obtain

54:29

some statistics about the percentage of email

54:31

I send that's open and viewed. And

54:34

I've learned some interesting things thanks to

54:36

that little bug. For example, half

54:38

of our listeners, well, I guess I already

54:41

knew this already, half of our listeners are

54:43

using Gmail, but I did not know that

54:45

fully one quarter of

54:47

our listeners are using

54:50

Mozilla's Thunderbird as their email

54:52

client. I thought that was

54:54

interesting. So basically

54:56

three quarters of

54:58

everybody who has signed up for email

55:02

and received and opened their email from

55:04

me the last two that I've sent,

55:07

three quarters are either half of

55:11

them are, half of the

55:13

total is Gmail and the other

55:15

one quarter is Mozilla's Thunderbird. I'll

55:20

also note that as

55:22

regards this bug, the SecurityNow

55:24

emails contain a link to

55:27

the full-size

55:30

picture of the week and

55:33

the show notes and GRC's SecurityNow

55:36

summary page, all

55:39

in the email, back to GRC. So

55:41

it's not as if anyone who receives these emails

55:44

from me and clicks any of their links is

55:47

being stealthily tracked. Also

55:49

I chose to embed a reduced

55:51

size picture of the week as

55:54

a visible, it's about 250 pixels

55:57

wide, thumbnail image so

55:59

that the email would be self-contained

56:02

and complete. I could have

56:04

linked back to GRC for

56:06

the retrieval of the thumbnail when viewed.

56:08

In that way I would have obtained

56:11

the same feedback that the

56:13

single pixel image provided and presumably

56:16

since it's like 250 by 203 pixels it will just

56:18

look like a real image

56:22

and it's visible and no email

56:25

client would say oh you

56:27

got a tracking pixel in your email.
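For anyone curious about the mechanics being described, here is a generic sketch of how such an "email bug" works; it is not GRC's code, and the hostname, path, and query parameter are placeholders. The email's HTML embeds a tiny image whose URL points back at the sender, for example <img src="https://mail.example.com/open.gif?list=securitynow" width="1" height="1" alt="">, and the sender runs a small handler that logs the fetch and returns a transparent GIF. Nothing identifies the reader beyond what the mail client sends with any image request.

    # Sketch of an open-tracking endpoint: log the hit, return a 1x1 GIF.
    # Entirely illustrative; not GRC's implementation.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime, timezone

    # A commonly used 43-byte transparent 1x1 GIF.
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
             b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
             b"\x00\x02\x02D\x01\x00;")

    class OpenTracker(BaseHTTPRequestHandler):
        def do_GET(self):
            # Log only that an open occurred, plus the client's User-Agent,
            # which is roughly how aggregate client statistics can be derived.
            print(datetime.now(timezone.utc).isoformat(),
                  self.path, self.headers.get("User-Agent", "?"))
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL)))
            self.end_headers()
            self.wfile.write(PIXEL)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), OpenTracker).serve_forever()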

56:30

Right anyway it's certainly the case

56:33

that unsolicited

56:35

commercial spam email

56:38

contains tracking bugs to

56:41

inform their senders when

56:43

their obnoxious unwanted spam

56:45

has been opened by

56:47

its recipient. Anyone

56:49

who thinks that describes the weekly

56:51

podcast summaries they signed up for

56:53

will be glad that every

56:56

one of my emails contains a

56:58

very clearly marked unsubscribe link and

57:01

of course it has immediate effect.

57:03

There's none of this please allow

57:05

two weeks for your unsubscribe to

57:08

be processed nonsense. I've seen that

57:10

from other you know major mailers

57:13

and I just think wow aren't

57:17

you using computers? Anyway

57:19

my work after today's podcast

57:21

will be to automate the

57:23

sending of these weekly podcast

57:25

summaries. At the

57:27

moment sending a new email to

57:29

a list is not difficult but

57:32

it does involve a large number

57:34

of steps and decisions which are

57:36

redundant week from week so I

57:38

want to take a bit more

57:40

time to build some infrastructure to

57:42

make it simple and mistake-proof. And

57:48

Leo I wish I had you are

57:52

you nearby? No. I'm

58:10

coming. He can see me.

58:12

I know he knows I'm coming. Yes I

58:15

do. I tweaked my knee. Oh God

58:20

I heard about like going up into the attic

58:22

or something. Yeah I don't, I'm not used to

58:24

stairs and I

58:26

fell up and it fell

58:29

down. So like hit the front of your

58:31

knee on the, oh it'll get better. And

58:36

of course stairs to an

58:38

attic are probably not padded or carpeted. Not good stairs.

58:40

Oh no. So

58:42

there are two things. We needed to take

58:44

a break because we went a long time

58:47

before our second one. And

58:50

you need to hear this next thing

58:52

because this is from Orange Tsai. Orange

58:54

Tsai. Yes. The security researcher. And you

58:57

sent me a note about this

58:59

and I uh, I know what

59:01

you're gonna say. So yeah let's

59:03

pause. All right I

59:06

am here and we will pause and then we

59:08

will talk about Orange Tsai. But

59:10

first no sighs

59:12

only pleasant smiles for

59:15

our sponsor Mylio. I love

59:17

Mylio. It's solved. It's

59:20

uh, and I suspect this is how Mylio

59:22

came about. It scratched somebody's itch but it

59:24

sure scratched mine. I

59:26

have and I bet you do too, lots of

59:28

photos. I have more than 200,000 photos.

59:32

Lots of documents. Lots of videos.

59:34

And I had no good way to organize it.

59:36

When Google bought Picasa and put them out of

59:38

their misery, it was

59:40

the end of the line for what I was using

59:42

to keep things in order. And

59:45

yes sure I used Google Photos but it

59:47

really kind of made me queasy to

59:49

put all of my stuff up in the cloud. Mylio

59:53

was the solution. Unbelievable

59:56

solution. And what I love

59:58

about it is I can organize my photos. I

1:00:00

can have them on all my devices and if I

1:00:03

want I don't need to use the cloud or

1:00:05

I can use Steve's technique of pre-internet

1:00:09

encryption. Mylio

1:00:11

uses strong encryption. It encrypts all your

1:00:13

data and then store it on your choice

1:00:15

of cloud. They even have their own Mylio

1:00:17

cloud. But between encryption and the ability

1:00:20

to put it on every device that just

1:00:22

just solved all my issues And

1:00:24

I haven't even scratched the surface of what

1:00:26

Mylio will do. Mylio automatically does

1:00:29

face recognition So if I have pictures of

1:00:31

Steve and Lisa and my family and so

1:00:33

where I will say that's Steve That's Lisa

1:00:35

and then it goes through all the rest

1:00:38

of the photos and amazingly accurately finds them

1:00:40

all in the background Not

1:00:43

going to the cloud on device does

1:00:45

the same thing? Automatically with pictures

1:00:48

of pets dogs cats even fish

1:00:51

Mice gerbils it can tell

1:00:54

Different, you know mountains rivers

1:00:56

streams it organizes it all

1:00:58

automatically Automatically lets

1:01:00

me curate it. I can

1:01:02

share it if I want to the genealogy

1:01:06

site run by the LDS

1:01:08

Church Which is

1:01:10

a great genie. It's the genealogy site.

1:01:13

So it's a great genealogy tool It

1:01:17

even could pull photos from other

1:01:19

sources that I haven't

1:01:21

had in my library Instagram Facebook

1:01:24

it even will take Google takeout from Google photos

1:01:26

download them all and here's the best part with

1:01:28

Mylio Photos Plus. It

1:01:30

eliminates all the duplicates. So when I say

1:01:32

I have 200,000 photos, I have exact I

1:01:34

have 200,000 unduplicated unique

1:01:38

photos and the best of

1:01:40

all you all the files stay on your devices

1:01:42

not on somebody else's Server it

1:01:44

supports your folder structure. So

1:01:47

it makes it easier to search curate

1:01:49

and organize your media across all devices

1:01:52

Privacy first cloud backups are

1:01:54

entirely optional But

1:01:56

if you do choose to use them as I said it

1:01:58

automatically encrypts the data for security.

1:02:00

MyLio Photos Plus is amazing and

1:02:03

all of the AI, all the

1:02:05

categorization is done entirely on device.

1:02:07

Nothing is uploaded, nothing can

1:02:10

be data mined. You can

1:02:12

tag and search for photos using

1:02:14

all of the above, you know,

1:02:16

face, object recognition, metadata. It

1:02:18

takes all the metadata from my camera.

1:02:21

It supports every feature I've

1:02:23

got in my camera. It

1:02:26

supports every file format and all

1:02:28

of its private on device. Put

1:02:31

privacy first with MyLio Photos Plus.

1:02:33

To help you get started we've

1:02:35

got a special offer on your first month free

1:02:38

that's $99 for your

1:02:41

first year of MyLio Photos

1:02:43

Plus. I paid for it immediately after

1:02:45

I tried it. Don't miss

1:02:47

this great deal though. Sign up and get your free

1:02:49

month at our

1:02:52

exclusive address mylio.com/twit.

1:02:55

It's hard for me to tell you all of

1:02:57

the things it can do because there's so many.

1:03:00

You will be impressed. You will say hey, it scratched

1:03:03

my itch

1:03:05

too. mylio.com/twit. You,

1:03:07

trust me, I know you've been looking for this. It's

1:03:10

here. mylio.com/twit.

1:03:15

Okay, now back to the show we

1:03:17

go Mr. G. So

1:03:20

I got an email with

1:03:22

the subject seeking assistance

1:03:24

for Black Hat USA

1:03:27

visa issue. And

1:03:29

when I saw that this was from

1:03:32

Orange Tsai whose

1:03:35

name should be familiar to all

1:03:37

of our longtime podcast

1:03:39

listeners I thought really?

1:03:41

That one? I mean that Orange Tsai?

1:03:44

So the email reads hello Steve Gibson

1:03:46

and Leo Laporte. My name

1:03:48

is Orange Tsai, a security

1:03:50

researcher from Taiwan. While

1:03:53

I'm not a listener of the show, Jonathan

1:03:55

Leichu, a friend

1:03:58

of mine, says you feature being

1:06:00

flagged by the US. Since

1:06:03

then, I've been unable to enter the

1:06:05

United States to present at DEF CON

1:06:07

and Black Hat USA in person. In

1:06:11

2022, I tried applying for

1:06:13

a business tourist visa at the

1:06:15

embassy. However, the consular

1:06:17

officer couldn't decide and my application

1:06:19

had to be sent to DHS

1:06:22

for further administrative processing. After

1:06:24

several months of review, I never

1:06:26

got a response and missed the

1:06:29

2022 DEF CON Black Hat USA

1:06:31

dates. This year, I submitted

1:06:33

my latest research and was accepted by

1:06:36

Black Hat USA in May of 2024.

1:06:38

To catch up

1:06:42

with the visa this time,

1:06:44

I reapplied for the B1-B2

1:06:46

visa in January and had

1:06:48

the interview on March 19th.

1:06:51

However, three months have passed and

1:06:53

there's still no update. As a

1:06:55

security researcher, I try to do

1:06:58

the most impactful research and I'm

1:07:00

keen on having my research seen

1:07:02

by the world, especially at the

1:07:04

top hacker gatherings like Black Hat

1:07:07

USA. I'm currently seeking all

1:07:09

the help I can get to break

1:07:11

through this situation. I hope

1:07:13

this gives you a better understanding

1:07:15

of the situation I'm facing. This

1:07:17

has been a long and troubling

1:07:19

issue for me. If you have

1:07:21

any advice or guidance to offer,

1:07:23

it would be greatly appreciated. Here's

1:07:25

my contact information in case anyone

1:07:28

needs it. Thank you, Orange

1:07:30

Tsai. Okay, so

1:07:33

this is great if we could help. And

1:07:36

by we, I mean everyone listening.

1:07:39

So the moment I saw his name,

1:07:41

you know, as I said, my eyes

1:07:43

open wide because of course we recognized

1:07:45

him from all the times we've talked

1:07:48

about his many significant contributions to the

1:07:50

security of this industry and its software

1:07:52

systems. I don't actively

1:07:54

maintain the sorts of contacts that he

1:07:56

needs for this like with the State

1:07:59

Department. but I'm always surprised

1:08:01

and flattered when I learn about

1:08:03

the roles of many of the

1:08:05

people who are listening to this

1:08:07

podcast and who consider it to

1:08:09

be worth their time. So

1:08:12

I'm sharing Orange Tsai's plea

1:08:15

in the sincere hope that we

1:08:17

do have listeners here who

1:08:19

may have the connections required to solve

1:08:21

this problem for him. This

1:08:24

year's DEF CON and Black Hat USA

1:08:26

conferences are being held near the start

1:08:28

of August, and today is our

1:08:30

last podcast of June, so we only have

1:08:32

a month to go. I

1:08:34

wrote back to Orange Tsai to tell him

1:08:37

that I would be honored to do anything

1:08:39

I could to help by giving his situation

1:08:41

a larger audience. I also

1:08:43

asked how someone who was in

1:08:45

a position of authority might contact

1:08:47

him if they needed further clarification.

1:08:50

He replied, Hi Steve,

1:08:52

thank you for your response. I

1:08:54

really appreciate your help. My

1:08:57

only concern comes via a

1:08:59

friend, in that the

1:09:01

US government can be very sensitive

1:09:03

to and he has

1:09:05

in quotes, media pressure. And

1:09:08

there have been cases where this has

1:09:10

led to a permanent ban on entry.

1:09:13

Although security now is not traditional

1:09:15

media, I still hope

1:09:18

that when mentioning my case, it

1:09:20

can be done in a neutral manner. When

1:09:23

seeking help, please ask listeners

1:09:25

to do so in their

1:09:28

personal capacity rather than representing

1:09:30

me, the media, or any

1:09:32

other sensitive entities. So

1:09:35

anyway, I, speaking for myself, would

1:09:37

ask anyone to heed that. If

1:09:41

you're in a position to help, please

1:09:43

understand and be gentle if you're

1:09:46

able to determine what's

1:09:52

going on and why. I

1:09:54

asked him for a link to a web

1:09:56

page of contact information, which he provided for

1:09:59

the US government. But all

1:10:01

he wrote there was, hi, I'm

1:10:03

Orange Tsai, a security researcher

1:10:05

from Taiwan. I really wanna go to the

1:10:07

US to present my latest research at Black

1:10:09

Hat USA 2024 in person. If

1:10:13

you have any suggestions, please feel

1:10:15

free to email me at orange

1:10:18

at sign chroot.org. Thank

1:10:23

you. So with

1:10:25

that, I'm leaving this in

1:10:27

the hands of our wonderful listeners. Please

1:10:30

don't do anything if you are not

1:10:32

the right person. I would hate to

1:10:34

make matters worse, but if you are

1:10:37

the right person or have a sufficiently

1:10:39

close relationship with someone who is, then

1:10:41

it would be wonderful if we

1:10:44

were able to help him. His

1:10:46

years of past work have shown

1:10:48

that he is exactly the sort

1:10:50

of security researcher whose work should

1:10:52

be encouraged. Mark

1:10:56

Zip sent me a note.

1:10:59

He said, hi, Steve. Seems to me

1:11:02

that an overlooked problem with

1:11:04

recall is, and this

1:11:06

was interesting, is third-party

1:11:08

leakage. Listeners to

1:11:11

security now may lock down

1:11:13

their machines and opt out

1:11:15

of recall, whereas the people

1:11:17

with whom we interact may not.

1:11:21

If I write an email to a friend, their

1:11:24

recall instance now knows of

1:11:26

our correspondence. We

1:11:28

can think of other leakage easily. For

1:11:31

instance, people frequently share passwords via

1:11:33

email. More examples should be

1:11:35

easy to imagine. Okay,

1:11:37

so first of all, I think Mark

1:11:39

makes a great point. Many

1:11:41

people who've been critical of

1:11:44

recall have likened it to

1:11:46

spyware or malware, you

1:11:48

know, that's now being factory installed.

1:11:52

Through our first podcast about this, you

1:11:54

know, well, I should

1:11:56

say, although our first podcast about

1:11:58

this was titled by

1:12:00

me, the 50 gigabyte privacy

1:12:03

bomb, I

1:12:05

have never characterized recall

1:12:07

as spyware or malware

1:12:10

because both of those things

1:12:12

require malicious intent. And

1:12:15

at no point have I

1:12:17

believed or do I believe

1:12:19

that Microsoft has ever had

1:12:21

a shred of

1:12:23

malicious intent for recall.

1:12:26

I've seen other commentators suggesting that

1:12:28

the entire purpose of recall is

1:12:31

to eventually send the collected data

1:12:33

back to Redmond, you know,

1:12:36

for some purpose. I think

1:12:39

that's irresponsible nonsense and

1:12:41

it's a failure of imagination. For

1:12:44

one thing, Microsoft knows that

1:12:46

in today's world they could

1:12:48

never do that without

1:12:51

being found out immediately. We

1:12:54

are all now watching them too

1:12:56

closely. And besides, why

1:12:59

would they? The details of

1:13:02

some grandmother's management of her

1:13:04

canasta group is nothing that

1:13:06

Microsoft cares about. But

1:13:09

that's not to say that there

1:13:11

would not be some value to having

1:13:14

the AI residing

1:13:16

in grandma's computer be

1:13:19

aware of her interest

1:13:21

in canasta. If

1:13:23

Windows continues to

1:13:26

evolve or maybe devolve

1:13:28

into an advertising

1:13:31

platform, which

1:13:33

would be unfortunate, but

1:13:35

seems likely based on the way it's going, Microsoft

1:13:41

would be crazy not

1:13:44

to use their recall

1:13:47

AI's digested history and

1:13:49

understanding of its machine's

1:13:51

user to improve

1:13:54

the relevance of such

1:13:56

advertising. And as

1:13:58

we know, this could all

1:14:00

be done locally on the machine,

1:14:02

much as Google's privacy sandbox will

1:14:04

be doing in the user's web

1:14:06

browser. In this case,

1:14:08

the Windows OS itself would

1:14:11

be pulling the most relevant ads

1:14:13

from the internet for display either

1:14:15

in Windows itself or in their

1:14:18

Edge web browser. So

1:14:21

we now have one declared

1:14:23

and two undeclared but obvious

1:14:25

uses for recall. And none

1:14:27

of these applications for recalls

1:14:29

data requires it to ever

1:14:31

leave its local machine environment.

1:14:34

The concern Mark raised about third party leakage,

1:14:36

I think, is a good one. It

1:14:39

probably hadn't occurred to most of us

1:14:41

that not only would our own machines

1:14:44

be recording our every move, but that

1:14:46

all of our personal interactions with any

1:14:48

others would also be captured by their

1:14:51

instances of recall. Last

1:14:54

week, we quoted Matthew Green

1:14:56

on the topic of Apple's

1:14:58

cloud compute design. He

1:15:00

wrote, TLDR,

1:15:03

it's not easy. And

1:15:05

he said, building trustworthy computers

1:15:08

is literally the hardest problem

1:15:10

in computer security. He said,

1:15:13

honestly, it's almost the only

1:15:15

problem in computer security. But

1:15:18

while it remains a challenging problem, we've

1:15:21

made a lot of advances. Apple

1:15:23

is using almost all of them.

1:15:25

So that was Matthew talking about

1:15:27

Apple's cloud compute. But the point

1:15:29

being, building trustworthy

1:15:31

computers is the hardest

1:15:33

problem we have. So

1:15:37

in Apple's case, they

1:15:39

have the comparative luxury of

1:15:42

housing their cloud compute infrastructure

1:15:44

in data center facilities surrounded

1:15:47

by strong physical security. Even

1:15:50

so, the architecture Apple has

1:15:53

designed does not require its

1:15:55

physical security to hold

1:15:58

in the presence of an infiltrating adversary,

1:16:01

but they have physical access

1:16:04

security nevertheless. That's

1:16:06

something Microsoft does not

1:16:08

have with their

1:16:10

widely distributed Windows workstations.

1:16:14

Grandma always leaves

1:16:16

her co-pilot plus PC open,

1:16:18

logged in, and unlocked just

1:16:21

like her back door. So

1:16:24

Microsoft's challenge is greater than

1:16:26

Apple's, which Matthew Green has

1:16:28

just made clear is already the hardest

1:16:30

problem in computer security. And

1:16:33

as we've seen with last

1:16:35

week's revelation of a supercritical

1:16:37

Wi-Fi proximity remote code execution

1:16:39

flaw that's apparently been present

1:16:41

in Windows forever, at least

1:16:43

since Vista, whatever

1:16:45

solution Microsoft finally implements

1:16:48

will need to be something we've

1:16:51

not yet seen them successfully

1:16:53

accomplish. Let me say that again

1:16:55

because I think it's really important and it's exactly

1:16:57

the right way to think about this. Whatever

1:17:00

solution Microsoft finally

1:17:02

implements to protect its

1:17:05

recall data will

1:17:07

need to be something we've

1:17:09

not yet seen them

1:17:11

successfully accomplish. What

1:17:14

everyone else other than

1:17:16

Microsoft clearly sees is

1:17:18

just how much having

1:17:20

recall running in a PC raises

1:17:23

the stakes for Windows security. But

1:17:26

so far we've seen zero indication

1:17:28

that Microsoft truly understands that this

1:17:30

is not something they could just

1:17:33

wave their hands around and claim

1:17:35

is now safe for them to

1:17:37

do because they said so. What's

1:17:40

not clear is whether they'll be

1:17:42

able to use the hardware that's

1:17:44

already present in those

1:17:46

co-pilot plus PCs to implement the sort

1:17:49

of super secure

1:17:51

enclave they're going to

1:17:53

need. And this is to your point Leo

1:17:55

you made a couple weeks ago about you

1:17:58

know that's really being what's now necessary.

1:18:01

And that makes it even

1:18:03

more doubtful that they'll be

1:18:05

able to securely retrofit the

1:18:07

inventory of existing Windows 11

1:18:09

hardware to provide the required

1:18:11

level of security. It may

1:18:13

take new hardware. Apple

1:18:16

has only managed to do it

1:18:18

for their iPhone handsets because their

1:18:20

hardware architecture is so tightly closed.

1:18:24

Windows has never been that closed, since

1:18:26

it's an OS designed to run

1:18:29

on third-party OEM hardware. So

1:18:31

for example, the phrase secure

1:18:34

boot is an oxymoron

1:18:36

since secure boot bypasses

1:18:38

are continually surfacing. I

1:18:42

realize that I'm spending a great deal

1:18:44

of time on recall. This is

1:18:46

now the fourth podcast where I've given it

1:18:49

some significant discussion. And

1:18:51

of course for the first two podcasts it was our

1:18:54

main topic. But given the

1:18:56

security and privacy significance of Microsoft's

1:18:58

proposal, it would be

1:19:00

difficult to give it more time than

1:19:02

it deserves. And

1:19:05

finally, I have two pieces of errata.

1:19:09

The first came from someone who

1:19:12

wanted to correct my recent

1:19:14

statement about the duration of

1:19:16

this podcast. He noted

1:19:18

that since we started in 2005, we are still in

1:19:20

our 19th year of the podcast, not

1:19:27

as I have been erroneously saying, in

1:19:29

our 20th year. So

1:19:32

in two months, we

1:19:34

will be having our 19th birthday, not

1:19:37

our 20th birthday. He

1:19:41

said, quote, the reason we listen

1:19:43

is that we know you care about getting

1:19:46

the details right. I'm

1:19:48

glad that comes through. So

1:19:50

I'm happy to correct the record. And

1:19:54

the second mistake several of

1:19:56

our astute listeners have spotted is

1:19:58

that I've been erroneously saying that

1:20:01

the big security changes in

1:20:03

Windows XP, its built-in

1:20:05

firewall being enabled by default and

1:20:08

its users access to raw sockets

1:20:10

being restricted, came about

1:20:12

with the release of XP's

1:20:14

final Service Pack 3. That's

1:20:18

wrong, it was the

1:20:20

release of XP's Service Pack

1:20:22

2 where Microsoft finally decided

1:20:24

that they needed to get

1:20:26

more serious about XP's security

1:20:29

and made those important changes.

1:20:31

So a thank you to

1:20:33

everyone who said, Steve, I

1:20:36

appreciate the feedback. Always. Wow,

1:20:38

that's a deep cut. I mean

1:20:41

have you talked about that in

1:20:43

a while? Yeah, last couple

1:20:45

weeks actually. Alright. It's

1:20:48

a long time ago you can be

1:20:50

excused for not remembering the exact details.

1:20:53

And I think the reason I'm, I

1:20:55

was getting

1:20:57

hung up on it is that I

1:20:59

have had occasion to install some Windows

1:21:01

XP's way

1:21:03

later and of course after installing

1:21:05

it I always install Service Pack

1:21:08

3 which was the last

1:21:10

Service Pack in order to bring it up, to

1:21:12

bring it current. But I do remember when they,

1:21:14

it was a big deal when

1:21:16

they built a firewall into Service Pack 2. That

1:21:18

was like, in fact, I

1:21:20

think we pretty much said don't use

1:21:22

XP until Service Pack 2 comes out,

1:21:25

basically. It was a much-needed Service

1:21:28

Pack as I remember. Yep. Alright,

1:21:30

well you're forgiven. I'm

1:21:34

always happy to correct my mistakes. That's good.

1:21:38

Let us talk about something

1:21:40

very important, our sponsor for this

1:21:42

hour, and then get to

1:21:44

the pseudo random

1:21:46

number generator or Pringle as

1:21:48

I call it. For hell.

1:21:50

For the Pringle from hell.

1:21:53

Although if I'd only used it

1:21:55

I would be in heaven. So

1:21:58

that's why it's a double-edged sword. true yeah you

1:22:01

would be a little richer yeah as

1:22:03

opposed to a richer see it it peaked

1:22:05

at sixty seven thousand dollars a

1:22:07

bitcoin. Well, thanks a

1:22:10

lot, Leo.

1:22:12

Don't do the math on his hard drive.

1:22:14

yeah 50 of them would be worth

1:22:16

what three million almost

1:22:19

four million, or north of that.

1:22:21

yep that hurts it's

1:22:24

just money, Steve. Money doesn't buy... I'll just have

1:22:26

to earn it the old-fashioned way. Yeah, there

1:22:28

you go let's

1:22:31

talk a little bit about the Thinkst

1:22:34

Canary because this is such a cool product

1:22:36

they've been advertising on this

1:22:38

show almost exclusively for almost a decade now

1:22:41

and we brought them a lot of

1:22:43

customers maybe you're one of them if

1:22:46

you are you maybe you've even tweeted your love

1:22:48

for the Thinkst Canary in which case you're at

1:22:50

the canary dot tools

1:22:52

slash love page like

1:22:55

the CTO, or is it the CISO,

1:22:57

of Slack, and so many other

1:22:59

well-known people would say you

1:23:02

know you could have perimeter defense fine all

1:23:04

fine and dandy but do you know if

1:23:07

someone's penetrated your network and how do you know

1:23:09

and more importantly do you know in

1:23:11

a timely fashion. That's what the Thinkst

1:23:14

Canary does. Thinkst

1:23:16

Canaries are honeypots. Basically they're

1:23:18

about the size of external USB hard

1:23:20

drive they're tiny little things they

1:23:22

have two connections: one for

1:23:25

Ethernet, plug it into your network, one

1:23:27

to the wall give it some power and Bing

1:23:29

bada-bing bada-boom, you've got a honeypot, and

1:23:32

man these honey pots can be anything mine's a

1:23:34

Synology NAS just cuz I you know I could

1:23:37

make it other things but I'm just lazy but it's

1:23:39

easy to do you just go to the console you

1:23:41

can choose from a Windows server a Linux

1:23:44

server an IIS server SSH

1:23:48

you can make it a SCADA device, literally

1:23:50

make it a SCADA device, if you're

1:23:52

worried about the Israeli army hacking

1:23:54

your centrifuges hey this

1:23:56

is what you need the

1:23:59

thing is once you set it up, it doesn't

1:24:01

look like a honeypot. It doesn't look vulnerable.

1:24:03

It looks valuable and that's the key.

1:24:05

You can also use your

1:24:07

Thinkst Canary to set up tripwires. They call

1:24:09

them Canary tokens. Little files, put

1:24:12

them anywhere on your network. They could

1:24:14

be PDFs or DOCX or XLSX. I have

1:24:16

a number of Excel

1:24:18

spreadsheets saying things

1:24:20

like employee passwords, employee

1:24:22

information, that kind of thing. Things that the

1:24:24

bad guys who penetrate your network go, oh,

1:24:27

and they open them and now as soon

1:24:29

as they open them, you

1:24:31

get an alert. Only the alerts that matter,

1:24:33

only the alerts that tell you someone's in

1:24:35

my network. On average, people

1:24:37

don't learn about prowlers inside

1:24:39

their network for 191 days. They could do a

1:24:42

lot of damage in that time.

1:24:44

Whether it's a malicious insider, an evil

1:24:46

maid who's in there, or it's somebody

1:24:49

who penetrated your defenses and is now

1:24:51

wandering around and they're very good at

1:24:53

hiding their tracks. They think, okay, sniff

1:24:55

here and there. They're looking for valuable

1:24:57

things. They can exfiltrate customer information, embarrassing

1:25:01

emails, that kind of thing. They're also looking for

1:25:03

where you keep your backups so

1:25:05

that when they do trigger a ransomware attack,

1:25:07

they know your backups are encrypted too. That

1:25:09

kind of thing. They're nasty, but the Thinkst

1:25:11

Canary detects them. Whether it's

1:25:13

accessing lure files or

1:25:16

brute forcing your fake internal SSH server, as

1:25:18

soon as they do that, your Thinkst Canary

1:25:20

will immediately tell you you've got a problem.

1:25:23

You could do it email, text, Slack,

1:25:26

syslog. You've got your own hosted console. They have

1:25:29

an API if you want to add it to

1:25:31

other things. Pretty much any way you

1:25:33

want to be notified. SMS,

1:25:35

of course, text. Then

1:25:38

you just wait. Push notifications, web hooks. Then

1:25:41

you just wait. Attackers who breached

1:25:43

your network, malicious insiders, and other

1:25:45

adversaries inevitably will make themselves known

1:25:47

by trying to attack your Thinkst

1:25:49

Canary or those Canary tokens. Then

1:25:52

you can root them out, remove them

1:25:54

from your network. Visit

1:25:56

canary.tools slash twit. Now

1:25:59

some big operations might have hundreds of them,

1:26:01

small places like ours might say have five.

1:26:04

Let's say five. Five Thinkst Canaries,

1:26:06

$7,500 a year. You get five of them, you get

1:26:10

your own hosted console, you get your upgrade support,

1:26:12

your maintenance, and if you use the code twit

1:26:14

in the how did you hear about us box

1:26:16

you're gonna save ten percent off

1:26:19

on your Thinkst Canaries for life. Now

1:26:23

if you're at all concerned good news you

1:26:25

can always return your Thinkst Canaries within

1:26:28

two months you get 60 days

1:26:30

for a 100% money back refund.

1:26:33

But I have to tell you during all the years Steve

1:26:35

and I have been talking about Thinkst Canaries on this show

1:26:37

no one has ever

1:26:39

asked for a refund not one not one

1:26:42

that tells you

1:26:44

something. Once you get these Thinkst Canaries in

1:26:46

your network you're gonna go oh how

1:26:49

did I live without them. Visit

1:26:52

canary.tools slash twit.

1:26:54

The offer code twit in

1:26:56

the how did you hear about us box gets you 10% off

1:26:58

for life canary.tools

1:27:03

slash twit don't forget the offer

1:27:05

code TWIT and we thank

1:27:07

Thinkst Canary for supporting the

1:27:10

good work that Steve does here. Steve

1:27:13

let's talk about perks. Yes

1:27:17

Leo you may think it was bad

1:27:20

it's worse than you could

1:27:22

have imagined. Great. So

1:27:26

yeah the mixed blessing of

1:27:28

a lousy pseudo

1:27:30

random number generator or when

1:27:32

are you very glad that your

1:27:35

old password generator used

1:27:38

a very crappy pseudo random

1:27:40

number generator. So

1:27:42

today I want to share the true story

1:27:44

of a guy named Michael who

1:27:47

after generating 43.6

1:27:51

Bitcoin lost the password

1:27:53

that was used to protect it with

1:27:56

Bitcoin currently trading at around $60,000 US Yes,

1:28:00

for each coin, that's around $2.6 million

1:28:03

worth of Bitcoin waiting for him at

1:28:06

the other side of the proper password.

1:28:09

Unlike many similar stories, this one has

1:28:11

a happy ending, but it's the

1:28:13

reason for the happy ending that

1:28:16

makes this such an interesting story for

1:28:18

this podcast and offers so many lessons

1:28:20

for us. Okay, now

1:28:23

by pure coincidence, the story was recently written

1:28:25

up by the same journalist, Kim

1:28:28

Zetter, who wrote that piece

1:28:30

about Kaspersky for Zero Day that we

1:28:32

were discussing earlier. Kim's

1:28:34

story for Wired is titled, How

1:28:37

Researchers Cracked an 11-Year-Old Password

1:28:39

to a $3 Million Crypto

1:28:43

Wallet. She wrote,

1:28:46

two years ago when Michael, and she

1:28:48

has that in air quotes, Michael wants

1:28:51

to remain anonymous, two years ago, because

1:28:53

now Michael has a lot of

1:28:55

money, and he would rather just keep it to himself.

1:28:58

Two years ago, when Michael,

1:29:00

an owner of cryptocurrency, contacted

1:29:02

Joe Grand to help him

1:29:05

recover access to about $2

1:29:08

million worth of Bitcoin he had

1:29:10

stored in an encrypted format on

1:29:12

his computer, Joe turned

1:29:14

him down. Michael, who's

1:29:16

based in Europe and asked

1:29:18

to remain anonymous, stored the

1:29:21

cryptocurrency in a password-protected digital

1:29:23

wallet. He generated a

1:29:25

password using the RoboForm password

1:29:27

manager and stored that password

1:29:29

in a file encrypted with

1:29:31

a tool called TrueCrypt. At

1:29:34

some point, that file got corrupted, and

1:29:37

Michael lost access to the 20-character password

1:29:39

he'd generated to secure his 43.6 Bitcoin,

1:29:41

worth a total of about 4,000 pounds

1:29:44

or 5,300 back

1:29:50

in 2013 when it was generated and

1:29:52

stored. That's right, Michael used the RoboForm

1:29:54

password manager to

1:30:00

generate the password, but did not store

1:30:02

it in his manager. He

1:30:04

worried that someone would hack his computer

1:30:07

to obtain the password. Reasonable

1:30:09

concern. Joe

1:30:11

Grand is a famed hardware hacker

1:30:13

who in 2022 helped

1:30:16

another crypto wallet owner recover

1:30:18

access to two million dollars

1:30:20

in cryptocurrency he thought he'd

1:30:22

lost forever after forgetting

1:30:25

the pin to his

1:30:27

Trezor wallet, which is a

1:30:29

hardware device. Since then, dozens

1:30:32

of people have contacted

1:30:34

Grand to help them recover

1:30:36

their treasure. But Grand,

1:30:39

known by the hacker handle Kingpin,

1:30:41

turns down most of them for various

1:30:44

reasons. Grand is an

1:30:46

electrical engineer who began hacking computing

1:30:48

hardware at an early age and

1:30:50

in 2008 co-hosted the

1:30:53

Discovery Channel's Prototype This

1:30:55

show. He now

1:30:57

consults with companies that build complex

1:30:59

digital systems to help them understand

1:31:02

how hardware hackers like him might

1:31:04

subvert their systems. He

1:31:06

cracked the Trezor wallet in 2022 using

1:31:10

hardware techniques that forced the

1:31:12

USB wallet to reveal its

1:31:15

password. But Michael

1:31:17

stored his cryptocurrency in a

1:31:19

software-based wallet which meant none

1:31:22

of Grand's hardware skills were

1:31:24

relevant this time. He

1:31:26

considered brute-forcing Michael's password writing

1:31:30

a script to automatically guess millions

1:31:33

of possible passwords to find the

1:31:35

correct one but determined

1:31:37

this wasn't feasible. Right,

1:31:39

you know 20 characters, upper and lower,

1:31:42

special characters, numbers and so forth, as we

1:31:44

know 20 characters that's

1:31:46

strong security. He briefly

1:31:48

considered that

1:31:55

the RoboForm password manager Michael

1:31:57

used to generate his password

1:32:00

might have a flaw in

1:32:02

the way it generated passwords which would allow

1:32:04

him to guess the password more easily. Grand,

1:32:07

however, doubted such a

1:32:09

flaw existed. Michael

1:32:16

contacted multiple people who specialize

1:32:18

in cracking cryptography. They

1:32:20

all told him there's no chance

1:32:23

of retrieving his money, and I

1:32:26

should mention they should have

1:32:28

been right. Joe

1:32:31

Grand should have been right. All these

1:32:34

crypto specialists should have been right.

1:32:38

Last June he approached Joe

1:32:41

Grand again, hoping to convince

1:32:43

him to help, and this time Grand agreed

1:32:46

to give it a try, working

1:32:48

with a friend named Bruno in

1:32:50

Germany who also hacks digital wallets.

1:32:54

Grand and Bruno spent months reverse

1:32:56

engineering the version of

1:32:59

the RoboForm program that they

1:33:01

thought Michael had probably used

1:33:04

back in 2013, and

1:33:07

found that the

1:33:09

pseudo-random number generator used to

1:33:11

generate passwords in that version

1:33:14

and subsequent versions until 2015 did

1:33:19

indeed have a

1:33:21

significant flaw. Let

1:33:23

me just say calling

1:33:25

it a significant flaw is like,

1:33:29

you know, I don't know what. It's understatement.

1:33:32

Calling noon daylight or something. I

1:33:34

mean, okay. The

1:33:38

RoboForm program unwisely tied

1:33:40

the random passwords it generated,

1:33:42

and I should explain, I've

1:33:44

dug down into the technology.

1:33:46

I'm going to go into

1:33:48

kind of detail that our

1:33:50

listeners want after I'm through

1:33:52

sharing what Kim

1:33:55

wrote. So she wrote, the

1:33:57

RoboForm program unwisely tied the random

1:34:00

passwords it generated to the date and

1:34:02

time on the user's computer. It

1:34:05

determined the computer's date and time and

1:34:07

then generated passwords that were predictable. If

1:34:10

you knew the date and time and

1:34:12

other parameters, you could compute any password

1:34:14

that would have been generated on a

1:34:16

certain date and time in the past.
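To make that concrete, here is a minimal illustrative sketch in Python. It is not RoboForm's actual code — Siber never published the algorithm — so the generator, its alphabet, and the 20-character length are stand-in assumptions; the point is only that when the sole input is a one-second timestamp, an attacker can simply replay every second in a suspected window.

```python
# Hypothetical stand-in for a clock-seeded password generator -- not RoboForm's
# real algorithm, which was never published. It shows why one-second seeding
# makes old passwords replayable.
import calendar
import random
import string

ALPHABET = string.ascii_letters + string.digits  # assumed parameters: no special characters

def clock_seeded_password(unix_seconds, length=20):
    # The seed is nothing but the wall-clock second, so the output is fully
    # determined by when the Generate button was pressed.
    rng = random.Random(unix_seconds)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def replay_window(start, end, matches):
    # Walk every second in the suspected window and regenerate each candidate.
    for ts in range(start, end + 1):
        candidate = clock_seeded_password(ts)
        if matches(candidate):
            return ts, candidate
    return None

if __name__ == "__main__":
    # Pretend the password was created sometime on May 15, 2013 (UTC):
    day_start = calendar.timegm((2013, 5, 15, 0, 0, 0))
    day_end = calendar.timegm((2013, 5, 15, 23, 59, 59))
    lost = clock_seeded_password(calendar.timegm((2013, 5, 15, 16, 10, 40)))
    print(replay_window(day_start, day_end, lambda p: p == lost))
    # Only 86,400 candidates per day -- a trivial search for any modern machine.
```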

1:34:19

If Michael knew the

1:34:21

day or general time frame in 2013

1:34:24

when he generated

1:34:26

it, as well as

1:34:29

the parameters he used to generate

1:34:31

the password, for example the number

1:34:33

of characters in the password including

1:34:35

lower and upper case characters, figures

1:34:37

and special characters, and

1:34:39

by figures I guess she means numbers and special

1:34:41

characters, this would narrow the

1:34:43

possible password guesses to a manageable

1:34:45

number. Then they

1:34:48

could hijack the RoboForm function responsible

1:34:50

for checking the date and time

1:34:52

on a computer and get

1:34:54

it to travel back in time, believing

1:34:57

the current date was a day in the 2013

1:34:59

time frame when Michael

1:35:01

generated his password. RoboForm

1:35:04

would then spit out the same

1:35:06

passwords it generated on the days

1:35:08

in 2013. There was

1:35:10

one problem, Michael could

1:35:12

not remember when he

1:35:14

created the password. According

1:35:17

to the log on his

1:35:19

software wallet, Michael moved Bitcoin

1:35:21

into his wallet for the first

1:35:23

time on April 14th 2013,

1:35:28

but he couldn't remember if he generated the

1:35:30

password the same day or sometime

1:35:32

before or after that. So

1:35:35

looking at the parameters of

1:35:37

other passwords he generated using

1:35:40

roboform, Grand and Bruno

1:35:42

configured roboform to generate 20

1:35:44

character passwords with upper and

1:35:47

lowercase letters, numbers and eight

1:35:49

special characters from March 1st

1:35:52

through April 20th 2013. It failed to generate

1:35:57

the right password. So,

1:36:00

Grand and Bruno lengthened the time frame

1:36:02

from April 20th out to June

1:36:04

1st, 2013, using the

1:36:08

same parameters. Still no

1:36:10

luck. Michael says that

1:36:12

Grand and Bruno kept coming back to him,

1:36:14

asking if he was sure about this

1:36:16

or that parameter that he'd used. He

1:36:19

stuck to his first answer. Michael said,

1:36:22

They were really annoying me, because who

1:36:24

knows what I did ten years ago.

1:36:28

Anyway, he found other passwords he generated with RoboForm in

1:36:30

2013, and two of them did not

1:36:33

use any special characters. So

1:36:35

Grand and Bruno adjusted. Last

1:36:38

November, they reached out again to

1:36:40

Michael to set up a meeting in person.

1:36:43

Michael said, I thought, oh my

1:36:46

god, they're going to ask me again

1:36:48

for the settings. Instead

1:36:51

they revealed that they had

1:36:53

finally found the correct password.

1:36:56

No special characters. And

1:36:58

it was generated on May 15th, 2013, at 4:10:40 p.m. GMT. Grand

1:37:09

wrote in an email to Wired, We

1:37:12

ultimately got lucky that our

1:37:15

parameters and time range was

1:37:17

correct. If either of those

1:37:19

were wrong, we would have continued

1:37:21

to take guesses and shots in the dark

1:37:23

and it would have taken significantly longer to

1:37:26

pre-compute all the possible passwords. Kim

1:37:28

then provides a bit of background about

1:37:31

RoboForm, writing, RoboForm,

1:37:35

made by US-based Siber, spelled

1:37:37

with an S, Systems,

1:37:40

was one of the first password

1:37:42

managers on the market, and

1:37:45

currently has more than 6 million

1:37:47

users worldwide, according to a company

1:37:49

report. In

1:37:51

2015, Siber, S-I-B-E-R, seemed

1:37:54

to fix the RoboForm

1:37:56

password manager. In

1:37:59

a cursory analysis, Grand and Bruno

1:38:01

could not find any sign that

1:38:03

the pseudo-random number generator in the

1:38:06

2015 version used

1:38:08

the computer's time, which makes

1:38:11

them think they removed it to fix

1:38:13

the flaw, though Grand says they

1:38:15

would need to examine it more thoroughly to be

1:38:17

certain. Siber Systems

1:38:19

confirmed to Wired that it did

1:38:21

fix the issue with version 7.9.14

1:38:23

of RoboForm released

1:38:29

on June 10, 2015,

1:38:32

but a spokesperson would not answer

1:38:34

questions about how it did so.

1:38:38

In a changelog on the company's website,

1:38:40

it mentions only that Siber

1:38:42

programmers made changes to quote,

1:38:45

increase randomness of generated

1:38:47

passwords unquote. But

1:38:50

it doesn't say how they did this. Siber

1:38:53

spokesman Simon Davis says that

1:38:55

RoboForm 7 was

1:38:57

discontinued in 2017. Grand

1:39:01

says that without knowing how

1:39:03

Siber fixed the issue, attackers

1:39:05

may still be able to

1:39:07

regenerate passwords generated by versions

1:39:10

of RoboForm released before

1:39:12

the fix in 2015. He's

1:39:15

also not sure if current versions

1:39:17

contain the problem. He said quote,

1:39:20

I'm still not sure I would trust

1:39:22

it without knowing how they actually improved

1:39:25

the password generation in

1:39:27

more recent versions. I'm not

1:39:29

sure if RoboForm knew how

1:39:31

bad this particular weakness was

1:39:33

unquote. Kim

1:39:36

writes customers may also still be

1:39:38

using passwords that were generated with

1:39:40

the early versions of the program

1:39:42

before the fix. It

1:39:44

doesn't appear that Siber ever notified

1:39:46

customers when it released the fixed

1:39:48

version 7.9.14

1:39:52

in 2015 that they really should

1:39:54

regenerate new passwords for critical accounts

1:39:56

or data. The company

1:39:59

did not respond to a question

1:40:01

about this. If

1:40:03

Siber did not inform customers,

1:40:05

this would mean that anyone

1:40:07

like Michael, who used RoboForm

1:40:09

to generate passwords prior to

1:40:11

2015 and are still

1:40:13

using those passwords, may

1:40:16

have vulnerable passwords that hackers

1:40:18

can regenerate. Grand

1:40:20

said, quote, we know that most

1:40:22

people don't change passwords unless they're

1:40:25

prompted to do so. He

1:40:27

added that out of 935 passwords

1:40:30

in my password manager, he

1:40:32

said, not RoboForm, 220 of

1:40:35

them are

1:40:37

from 2015 and earlier, and

1:40:40

most of them are for sites

1:40:42

I still use, unquote. Depending

1:40:45

on what the company did to

1:40:47

fix the issue in 2015, newer

1:40:49

passwords may also be vulnerable. We

1:40:51

don't know. In

1:40:53

November, Grand and Bruno, having

1:40:55

earned their reward, deducted

1:40:58

a percentage of Bitcoin

1:41:01

from Michael's account for the work

1:41:03

they did, then gave

1:41:05

him the password to access the

1:41:08

rest. The Bitcoin was worth

1:41:10

38,000 per coin at the time. Michael

1:41:15

waited until it rose to 62,000 per coin and sold some

1:41:17

of it. He

1:41:20

now has 30 Bitcoin, now worth 3

1:41:23

million and is waiting for the value to

1:41:25

rise to $100,000 per

1:41:28

coin. Michael says

1:41:30

he was lucky that he lost

1:41:33

the password years ago because

1:41:35

otherwise he would have sold off the Bitcoin

1:41:37

when it was worth 40,000 per

1:41:40

coin and missed out on a

1:41:42

greater fortune. He said,

1:41:44

quote, my losing the password was

1:41:46

financially a good thing, unquote. Yeah,

1:41:48

that's how I feel. If I ever,

1:41:50

now you can never recover yours,

1:41:52

but if I ever remember my

1:41:54

password, why it's just been

1:41:57

a long term savings account. Okay.

1:42:00

So, but a bad PRNG. Oh,

1:42:03

they're always bad. Aren't they? That's

1:42:06

what the pseudo means. Oh,

1:42:08

well, Leo, wait for this. Oh my

1:42:10

God. First of all, RoboForm

1:42:13

is probably a well-known

1:42:15

name to everyone, even those of us who

1:42:17

never had occasion to use it. They were

1:42:19

one of the first. I'm in that camp.

1:42:21

You're in that camp. I

1:42:23

think I use it back in

1:42:26

the day though. I mean, okay. Because it

1:42:28

was the only, it was the first one.

1:42:31

Yeah. Yes. Yes.

1:42:34

Okay. But since this podcast has

1:42:36

been going since 2005, we've covered the span of time

1:42:38

that Roboform was

1:42:42

apparently using a horrific

1:42:45

password generation scheme. One

1:42:49

of this podcast's early and

1:42:51

continuing focuses has been

1:42:53

on the importance of the strength

1:42:55

of pseudo random number generators used

1:42:58

in cryptographic operations. So

1:43:00

I was quite curious to

1:43:02

learn more about what exactly

1:43:04

Grand and Bruno found when

1:43:06

they peeled back the covers

1:43:09

of Roboform circa 2013. I

1:43:14

was reminded of a line

1:43:16

from the sci-fi movie Serenity

1:43:18

where our villain says to

1:43:20

Mel, it's worse than you know,

1:43:24

to which Mel replies, it usually is.

1:43:29

Believe it or not, whenever

1:43:31

the user of RoboForm used

1:43:34

version 7.9.0 and probably my theory is even

1:43:41

version one, but we'll get to that

1:43:43

in a minute, but definitely 7.9.0,

1:43:45

which was released

1:43:48

on June 26 of 2013. Whenever

1:43:52

the user pressed its generate

1:43:54

password button, RoboForm,

1:43:58

up until its repair two years

1:44:00

later, with 7.9.14, simply took

1:44:02

the Windows system's UNIX time,

1:44:05

which is the number of

1:44:11

seconds elapsed since January 1,

1:44:13

1970, and directly and deterministically

1:44:15

used that time

1:44:20

of day to produce

1:44:23

the user's password. RoboForm

1:44:27

didn't even take the

1:44:30

trouble to create

1:44:32

a unique per-system salt

1:44:35

so that differing installations

1:44:38

would produce differing bad

1:44:40

passwords. This

1:44:43

meant that if two users

1:44:45

anywhere were to press

1:44:48

the Generate Password button within

1:44:50

the same one-second

1:44:52

interval, if

1:44:54

they were using the same password

1:44:56

parameters, identical passwords would

1:44:59

be generated.
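Here is that consequence in a couple of lines, again with a stand-in generator rather than RoboForm's unpublished one: with no per-installation salt, any two installs that seed from the same second produce byte-for-byte identical output.

```python
import random
import string

def flawed_generate(seed_second, length=20):
    # Stand-in for a salt-free, clock-seeded generator: the only input is the second.
    rng = random.Random(seed_second)
    return "".join(rng.choice(string.ascii_letters + string.digits) for _ in range(length))

# Two unrelated "installations" clicking Generate within the same second
# (1368634240 is just an arbitrary example timestamp):
print(flawed_generate(1368634240) == flawed_generate(1368634240))  # True -- identical passwords
```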

1:45:02

Grand and Bruno discovered something else when

1:45:04

they opened up Roboform. The

1:45:07

designers of this password generator that should

1:45:09

really just be called a time

1:45:11

scrambler realized that

1:45:14

if a user happened to press

1:45:16

the Generate Password button a second

1:45:18

time within the same

1:45:20

second, the same password would

1:45:22

be generated. To cover

1:45:25

up this flaw, they subtracted a

1:45:27

fixed amount of time from the

1:45:29

system time for repeats. What

1:45:32

an utter disaster. One

1:45:34

thing we don't know is for

1:45:37

how long Roboform's password generator

1:45:39

was this horrific before

1:45:41

it was changed. I

1:45:44

originally wrote before it was fixed, but

1:45:46

we don't know how it was changed. But

1:45:57

I have a theory about that, which is

1:46:01

that this must have

1:46:04

been the original

1:46:06

implementation of

1:46:08

Roboform's password generator. The

1:46:11

reason I think that is

1:46:13

that by 2013 no one

1:46:17

would have ever designed such

1:46:20

a horrifically lame password

1:46:22

generation scheme. This

1:46:25

had to have been a very

1:46:28

early password generator created

1:46:30

back in the late 90s or

1:46:32

early 2000s before

1:46:34

there was much awareness of the

1:46:36

proper way to do these things.
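For contrast, here is a minimal sketch of the way this is generally done today — not any particular product's implementation — which is to draw every character from the operating system's cryptographically secure randomness rather than from the clock:

```python
import secrets
import string

def generate_password(length=20, specials="!@#$%^&*"):
    # Every character comes from the OS CSPRNG via the secrets module, so
    # knowing when the password was generated tells an attacker nothing.
    alphabet = string.ascii_letters + string.digits + specials
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```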

1:46:39

And then following the well

1:46:42

understood property of software inertia,

1:46:45

10 to 15 years went by

1:46:47

without anyone at Roboform bothering to

1:46:50

think about it again because

1:46:52

it was after all

1:46:55

producing random appearing passwords.

1:46:58

But for some reason, whatever

1:47:00

reason, eventually someone noticed

1:47:04

and apparently fixed

1:47:06

it. We don't know how, but at

1:47:08

least changed it. Grand and

1:47:10

Bruno note that something did finally change

1:47:13

in 2015 with

1:47:16

7.9.14. But since

1:47:19

Roboform is both closed

1:47:21

source and closed

1:47:24

mouthed, we have

1:47:26

no idea what may have precipitated the

1:47:28

change nor what the new

1:47:30

algorithm was changed to. So I'm

1:47:34

put in mind of Bitwarden,

1:47:36

the password generating sponsor of

1:47:38

this network where we can

1:47:40

know anything we want to

1:47:43

know about its innards. First,

1:47:46

because if we asked we'll be

1:47:48

told. Secondly, because it's

1:47:51

probably openly documented. And thirdly, because

1:47:53

the source code of the solution

1:47:55

is publicly available. None

1:47:58

of which is true for

1:48:00

Roboform. The final

1:48:02

note that's worth repeating is

1:48:05

the point that Grand highlights. Regardless

1:48:07

of their apparent complexity, we

1:48:10

now know that's an illusion.

1:48:13

It's just the scrambled

1:48:15

time of day and date, without

1:48:19

even having any per-system

1:48:21

salt, which means that all user scramblings

1:48:23

are identical for all owners of Roboform,

1:48:25

probably from the beginning, its

1:48:30

first release through 2015. Therefore, any passwords that were

1:48:33

ever generated by Roboform,

1:48:40

presumably until version 7.9.14, can be reverse

1:48:42

engineered. So, the first thing that's

1:48:46

clear is that

1:48:49

the set of possible passwords

1:48:51

can be further narrowed

1:48:55

by the degree to which their approximate

1:48:57

date of creation is known. Even

1:49:00

if the format of the password is

1:49:02

not known, there are a limited number

1:49:04

of choices available for upper and lower

1:49:06

case, special characters, numbers, and length. So,

1:49:09

if someone were determined to crack into

1:49:12

something that was being protected by a

1:49:14

password that they had reason to believe

1:49:16

had been generated by Roboform, and

1:49:19

they had some idea of when, such

1:49:21

as the date of the protected

1:49:23

account's creation, it's not a stretch

1:49:25

to imagine that it could be

1:49:27

done. Sure, I would

1:49:30

put the chances of this actually

1:49:32

happening being done as extremely remote

1:49:34

at best, but anyone

1:49:36

who was using Roboform back then,

1:49:38

who may have never had the

1:49:40

occasion to update their password since,

1:49:43

should at least be aware that

1:49:45

those passwords were simply generated by

1:49:48

scrambling the time of day, and

1:49:51

with a resolution of only one

1:49:53

second. There are not a

1:49:55

cryptographically strong number of seconds in a

1:49:57

day. While

1:50:00

I don't want to throw shade

1:50:02

on RoboForm's products of today, which

1:50:05

might be excellent. Given

1:50:08

the history that has just been revealed,

1:50:11

RoboForm is certainly not something

1:50:13

I could ever use or

1:50:15

recommend, especially when there are

1:50:17

alternatives like Bitwarden and 1Password

1:50:19

which are hiding nothing and

1:50:21

RoboForm is hiding everything. And

1:50:24

this brings me to the final and most important point

1:50:26

and lesson I want to take away from this. Way

1:50:29

back when I and

1:50:31

this podcast first endorsed

1:50:34

LastPass, I was able

1:50:36

to do so with full confidence and

1:50:39

in fact the only reason I was able to

1:50:41

do so and did was

1:50:43

because the product's original designer

1:50:45

Joe Siegrist completely

1:50:47

disclosed its detailed operation

1:50:49

to me. It

1:50:52

was the 21st century and

1:50:55

Joe understood that the value he

1:50:57

was offering was not

1:50:59

some secret crypto mumbo jumbo.

1:51:03

That was 20th century thinking.

1:51:06

Joe understood that the value

1:51:08

he was offering was a

1:51:11

proper implementation of well

1:51:13

understood crypto that was then

1:51:15

wrapped into an appealing user

1:51:18

experience. The value

1:51:20

is not in proprietary secrecy,

1:51:22

it's in implementation, maintenance and

1:51:25

service. As we know, many

1:51:27

years and ownership changes later,

1:51:30

LastPass eventually let us down.

1:51:33

I hope Joe is relaxing on a

1:51:35

beach somewhere because he earned it. So

1:51:40

the lesson we should take from

1:51:42

what can only be considered a

1:51:44

RoboForm debacle is that

1:51:47

something like the design of a

1:51:49

password generator is too

1:51:51

important for us to trust

1:51:53

without a full disclosure of

1:51:56

the system's operation and its

1:51:58

subsequent assessment by independent

1:52:00

experts. Any password

1:52:02

generator that anyone is using

1:52:05

should fully disclose its

1:52:07

algorithms. There's no

1:52:09

point in that being secret in

1:52:11

the 21st century. It doesn't necessarily

1:52:14

need to be open source, but

1:52:16

it must be open design. No

1:52:19

company should be allowed to get

1:52:21

away with producing passwords for us

1:52:23

while asking us just to assume

1:52:25

those passwords were

1:52:28

properly derived just because

1:52:30

their website looks so

1:52:32

nice. What the

1:52:34

marketing people say has

1:52:36

exactly zero bearing on

1:52:39

how the product operates. It's

1:52:41

obvious that we cannot assume that

1:52:44

just because a company is offering

1:52:46

a fancy looking crypto product that

1:52:48

they have any idea how to

1:52:51

correctly design and produce such a

1:52:53

thing. There's no reason

1:52:55

to believe that there are not

1:52:57

more RoboForms out there. What's

1:53:01

the best way? I mean, software

1:53:05

random number generators are

1:53:09

pseudo because they repeat eventually. Remember

1:53:11

that the first thing I started

1:53:14

doing, the first piece

1:53:16

of technology I designed for SQRL

1:53:18

and I talked about it on

1:53:20

the podcast was I created what

1:53:22

I called an entropy harvester. It

1:53:24

was harvesting entropy

1:53:27

from a range of

1:53:29

sources. It was pulling

1:53:31

from Windows own random

1:53:34

number generator. I fed mouse

1:53:37

clicks and network

1:53:42

received network packets, DNS,

1:53:46

transfer rates, all the noise

1:53:49

that I could was constantly

1:53:52

being poured into a hash that

1:53:55

that SQRL was churning. And the

1:53:57

idea was

1:54:00

to create something unpredictable,

1:54:03

unpredictability is the single

1:54:05

thing you want. And

1:54:08

so the idea was that, I

1:54:10

mean like almost immediately SQRL's

1:54:13

pseudo random number generator would

1:54:16

just have so much noise poured

1:54:18

into it, all of

1:54:20

that affecting its state, that there

1:54:22

would be no way for anybody

1:54:24

downstream to have ever been able

1:54:27

to predict the state that

1:54:29

SQRL's pot of entropy was in

1:54:32

at the time that it generated

1:54:34

a secret key.
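A toy sketch of that kind of entropy pool — the general pattern, not SQRL's actual code — just keeps hashing whatever noisy observations arrive into a running state and derives keys from that state, so the output depends on everything that has ever been stirred in:

```python
import hashlib
import os
import time

class EntropyPool:
    # Toy entropy harvester: every noisy event is hashed into the running state.
    def __init__(self):
        self._state = hashlib.sha256(os.urandom(32)).digest()

    def stir(self, event_bytes):
        # Fold a new observation (mouse timing, packet arrival, etc.) into the pool.
        self._state = hashlib.sha256(self._state + event_bytes).digest()

    def draw_key(self):
        # Derive output without exposing the pool state directly, then ratchet forward.
        out = hashlib.sha256(b"out" + self._state).digest()
        self.stir(b"drawn")
        return out

pool = EntropyPool()
for _ in range(1000):
    # Stand-ins for real events: high-resolution timestamps plus OS randomness.
    pool.stir(time.perf_counter_ns().to_bytes(8, "little") + os.urandom(8))
print(pool.draw_key().hex())
```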

1:54:36

Right, Galia is reminding us that CloudFlare uses a

1:54:38

wall of lava lamps to

1:54:40

generate their random numbers. But

1:54:44

it's not the seed you're generating

1:54:46

because as I remember with software

1:54:48

random number generators, if

1:54:51

you reuse the same seed, you'll

1:54:53

get the same sequence of numbers, it'll repeat

1:54:55

eventually, right? Those are

1:54:58

old pseudo random number generators. That's not how we

1:55:00

do it anymore. Right. Okay.

1:55:03

And I do remember you saying the best way to do

1:55:05

it would be use a capacitor. Was

1:55:07

that right? Actually a diode.

1:55:10

A diode, that's right. A

1:55:12

reverse bias diode where you put

1:55:14

it just

1:55:17

at the diode

1:55:19

junction's breakdown voltage and

1:55:22

what happens is you

1:55:24

get completely unpredictable electron

1:55:26

tunneling across the reverse

1:55:28

biased junction

1:55:30

to literally create hiss.

1:55:34

If you listen to it, it is hiss. Right.

1:55:37

And it is truly, it is

1:55:39

quantum level noise. Wow.

1:55:42

And that's as good as it gets. That would be the best way you think,

1:55:44

as good as it gets. That is

1:55:46

what all of the

1:55:48

true random number generators now do,

1:55:50

is a variation on that. They

1:55:53

actually do some post-processing because

1:55:55

the noise can be skewed,

1:55:57

but it is utterly unknowable.
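That post-processing he's describing can be as simple as von Neumann debiasing: read the raw bits in non-overlapping pairs, keep one bit when the pair differs, and throw the pair away when it matches. A small sketch, with the skewed hardware source simulated in software as a stand-in for the diode:

```python
import random

def biased_bits(n, p_one=0.8):
    # Simulated skewed source: emits a 1 with probability 0.8, a 0 otherwise.
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def von_neumann_debias(bits):
    # Non-overlapping pairs: (0,1) -> 0, (1,0) -> 1, equal pairs are discarded.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

raw = biased_bits(100_000)
clean = von_neumann_debias(raw)
print(sum(raw) / len(raw), sum(clean) / len(clean))  # roughly 0.8 before, 0.5 after
```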

1:56:00

This is actually a fascinating problem

1:56:03

in computer science because

1:56:06

you know, we might say well is a coin flip random?

1:56:08

Well it is with a perfect

1:56:10

coin but no coin is perfect. A

1:56:14

roulette wheel is random with a perfect roulette

1:56:16

wheel but there is no such thing. They

1:56:19

all have biases. I

1:56:21

was asked in 1974 to

1:56:26

design a little

1:56:29

machine that some people

1:56:31

would take to Las Vegas and

1:56:34

it was going to be operated with toe

1:56:36

switches because it could not.

1:56:38

That's The Eudaemonic Pie. This was in

1:56:40

Santa Cruz right? There's

1:56:43

a book about this. Actually it was close

1:56:46

to Santa Cruz. Yeah there's a famous book

1:56:48

about this. Have you read The Eudaemonic Pie?

1:56:51

No. Well they got caught. But

1:56:55

they made a lot of money. Yeah

1:56:58

and what they were doing was they

1:57:01

were recording, at least

1:57:03

in the case of the guys who asked me

1:57:05

to develop this thing, they

1:57:08

were recording roulette wheel results

1:57:10

because no roulette wheel is

1:57:13

perfect. Right. And

1:57:16

believe it or not they had

1:57:18

this thing running already and they

1:57:21

were using a wire recorder to

1:57:23

record tones that their toes

1:57:25

were generating and they wanted me to

1:57:28

do a solid state version for them.

1:57:30

They wore computers in their shoes to

1:57:34

basically solve roulette and

1:57:37

they won a lot of money. And

1:57:39

people are used to people counting cards in

1:57:41

blackjack but everybody in

1:57:43

Vegas assumes a roulette wheel can't be

1:57:46

beat. Well it can. If

1:57:48

you haven't read this book you gotta read it. I wonder if

1:57:50

it's the same guys. Very

1:57:53

interesting story. The Eudaemonic Pie.

1:57:56

And I'm pretty sure that they were in

1:57:59

the Santa Cruz area. Well, that would be

1:58:01

the right physical area because I was in Mountain

1:58:03

View, which is just... and it was a computer

1:58:06

they would... Wow,

1:58:08

yep Wow How

1:58:11

fascinating is that? So

1:58:14

yeah, I see. Maybe someday, maybe you've

1:58:17

got a little, you know, a

1:58:19

slow week. I know there's never a slow week

1:58:21

on this show. You could talk a

1:58:23

little bit about random numbers and why

1:58:26

they're pseudo and, you know, how

1:58:28

it's a challenge. It's not

1:58:30

a... it's a non-trivial thing

1:58:32

to generate those with

1:58:34

computers, and it's crucially important. It's funny because

1:58:38

we think about crypto as solving

1:58:40

all the problems, but

1:58:43

I'm not sure I can think of an

1:58:45

instance where you

1:58:47

don't need something random. When you're choosing

1:58:49

a private key for public-key

1:58:51

crypto, you need high-quality random

1:58:54

numbers and we've seen failures of that

1:58:56

where for example studies

1:58:58

of the private keys used on

1:59:00

web servers have turned

1:59:02

up a surprising number of collisions right

1:59:04

of private

1:59:07

keys, because they were all... they

1:59:10

were all getting their key shortly

1:59:12

after turning on a version of

1:59:14

Linux that hadn't yet had time

1:59:17

to develop enough entropy.
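A hedged way to see why that produces colliding keys: if the pool a machine draws from at boot can only be in a small number of states, then "independently" generated keys are really draws from a small set. The sketch below uses Python's random module as a stand-in for an entropy-starved system and 128-bit integers as stand-in "keys"; it is not how any real key generator works, just an illustration of the birthday math.

```python
import random

def key_from_weak_seed(seed: int) -> int:
    # Stand-in for key generation on a box whose entropy pool, right after
    # boot, could only have reached one of a few thousand possible states.
    rng = random.Random(seed)
    return rng.getrandbits(128)

# 10,000 "servers", each booting into one of only 2,000 possible pool states.
keys = [key_from_weak_seed(random.randrange(2000)) for _ in range(10_000)]
duplicates = len(keys) - len(set(keys))
print(f"duplicate keys among 10,000 servers: {duplicates}")  # thousands collide
# With a properly seeded CSPRNG, the odds of any two 128-bit keys colliding are negligible.
```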

1:59:19

It hadn't warmed up its pseudo

1:59:21

random number generator enough. I think

1:59:23

that you were... I can't

1:59:25

believe you haven't heard of the book. The book focuses

1:59:27

on a group of University of California, Santa Cruz physics

1:59:30

graduate students who in the

1:59:32

late 70s and early 80s

1:59:34

designed and used miniaturized computers

1:59:36

hidden in specially modified platform

1:59:38

soled shoes to predict the

1:59:41

outcome of casino roulette games.

1:59:44

I think you were on an unwitting You

1:59:48

didn't do it though, right? I didn't do it. You didn't

1:59:50

do it. They found somebody to do it

1:59:53

Wow, what a story. That

1:59:56

may also be one of the first wearable computers.

1:59:58

Steve

2:00:02

Gibson, you see, he has a

2:00:05

history in this business. He

2:00:07

knows what he's talking about. That's why we listen

2:00:09

to him with such rapt attention. Steve

2:00:12

does Security Now every Tuesday.

2:00:14

We try to start right after MacBreak Weekly around 1:30 p.m.

2:00:18

This, you know, often bleeds over

2:00:20

to about 2 p.m. Pacific. That's 5 p.m.

2:00:22

Eastern, 2100 UTC. We

2:00:25

do stream it live. You're just so impatient to

2:00:28

get your Security Now fix, you can't wait. You

2:00:31

go to youtube.com/twit/live and you

2:00:33

can catch the live stream. But

2:00:36

of course, we have on-demand

2:00:38

versions because it's a podcast. Now

2:00:40

Steve has some interesting versions that are

2:00:42

unlike anything else. He

2:00:45

has the 64-kilobit audio. I

2:00:47

would say that's the canonical version. We have that

2:00:49

at our website as well. We

2:00:52

have video, which is absolutely not canonical.

2:00:54

But you have something even weirder,

2:00:57

which is 16-kilobit audio, which

2:00:59

sounds a little bit like Thomas Edison

2:01:01

on a recording disc, but

2:01:04

it's got the virtue of being a very, very

2:01:06

small file size. Now Elaine

2:01:08

Farris, who is a court reporter and a

2:01:10

transcriptionist, then takes that file and

2:01:12

types it up. So Steve also has a full

2:01:15

human-written, not AI-generated transcript

2:01:18

of each show, of all 980 shows. Those

2:01:23

are all on his website. grc.com.

2:01:26

He also has his show notes there and the picture of the

2:01:28

day and all of that stuff. Now if

2:01:31

you go to grc.com/email, you can sign

2:01:33

up to get that stuff in

2:01:35

the mail automatically. But

2:01:38

you don't have to. In fact, the default

2:01:40

is off. It just basically approves your address

2:01:42

so that you can email Steve after that.

2:01:45

So that's a new feature that Steve's added. You may hear

2:01:47

him talk about it on the show. He

2:01:49

also has, and I think you might have heard him talk

2:01:51

about this as well, a little

2:01:53

thing called SpinRite 6.1, the world's best

2:01:56

mass storage

2:02:01

performance enhancer, maintenance, and

2:02:04

recovery utility. Did I get all

2:02:06

that? I think I did. Well done. Yes and

2:02:09

it is well worth it. It's Steve's bread and butter

2:02:11

so you support him when you buy it and

2:02:13

of course it's pretty much a lifetime license.

2:02:17

I mean he's very lenient

2:02:19

with all of that. So go on

2:02:21

in there if you already bought one you can

2:02:23

get an upgrade to 6.1. There's

2:02:26

lots of other free stuff there. ShieldsUP! and I

2:02:29

really think this ValiDrive program is

2:02:31

very important. It validates

2:02:33

that the USB key that you purchased

2:02:35

on Amazon actually has the amount of data

2:02:37

it says it does. Lots of them don't.

2:02:40

ValiDrive will let you know.

2:02:43

Our site is twit.tv/sn

2:02:45

for Security Now. That's where you can

2:02:47

download the show. There's

2:02:49

a YouTube channel with just the video. There's audio

2:02:51

too but I mean the video versions are on

2:02:54

YouTube and that's nice for sharing if you say

2:02:56

oh I gotta send the boss this clip. Oh

2:02:59

wow. Just clip

2:03:02

it on YouTube. YouTube makes it very easy. You can send

2:03:04

it to him and everybody has access to that. The

2:03:07

best way to get the show though is to

2:03:09

subscribe so you get the show automatically.
