545: 3,062 Days Later

Released Monday, 15th January 2024

Episode Transcript

0:00

This kind of goes back to the reason

0:02

for BcacheFS even existing. Going

0:04

back to Bcache, I

0:06

was at Google, and we're trying to

0:09

use these really high-performance

0:11

SSDs that Google

0:14

was building internally. And this is when

0:16

SSDs were new, so everyone was looking

0:18

for applications, and caching was the natural

0:21

solution. But it turned out to be

0:23

that just producing a Btree that

0:25

is fast enough for indexing all block

0:28

IOs was a bit of a challenge.

0:31

Binary search turns out to

0:33

really suck if you're really pushing

0:35

it really hard, because binary search

0:37

is the worst possible

0:39

algorithm for the way CPU caches work.
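As an aside for the curious: the problem Kent is describing, and the Eytzinger-array fix he gets to next, can be sketched in a few lines of Python. This is our illustrative sketch, not bcache code; `eytzinger` and `lower_bound` are names we made up.

```python
def eytzinger(sorted_vals):
    """Lay out sorted values in Eytzinger (heap) order:
    node i has children at 2*i and 2*i + 1 (1-indexed, slot 0 unused)."""
    out = [None] * (len(sorted_vals) + 1)
    it = iter(sorted_vals)

    def fill(i):
        if i < len(out):
            fill(2 * i)           # left subtree holds smaller values
            out[i] = next(it)     # in-order visit consumes the sorted input
            fill(2 * i + 1)       # right subtree holds larger values
    fill(1)
    return out

def lower_bound(eyt, x):
    """Index of the first element >= x; 0 if every element is < x."""
    n = len(eyt) - 1
    k = 1
    while k <= n:
        # Descend one level per step. A node's children and grandchildren
        # sit contiguously in memory, so hardware prefetching can pull the
        # next levels in -- unlike classic binary search, whose probes jump
        # half the array apart and miss cache at every step.
        k = 2 * k + (eyt[k] < x)
    # The trailing 1-bits of k record the final run of right turns;
    # shifting them (plus one 0-bit) off recovers the answer's index.
    k >>= ((k + 1) & ~k).bit_length()
    return k
```

For example, `eytzinger([1, 3, 5, 7, 9, 11, 13])` yields `[None, 7, 3, 11, 1, 5, 9, 13]`, and `lower_bound` on that array with `x = 6` returns index 1, the slot holding 7.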

0:43

One of the things they don't tell you back

0:45

in school. So I

0:47

came up with Eytzinger arrays

0:50

before it was even published. Eytzinger

0:53

arrays are where you build up a

0:55

binary search tree in an

0:57

array, kind of the same way you would build up

0:59

a standard binary heap. So this

1:02

means prefetching actually works. So

1:04

that plus a whole bunch of

1:06

crazy optimizations to make

1:08

the nodes as small as possible,

1:11

plus some other math so we

1:13

can convert back from Eytzinger indexes

1:15

to in-order traversals to

1:18

get back to the sorted order

1:20

representation. That made Bcache's

1:24

B-tree implementation one of the fastest

1:26

around. Actually the fastest

1:30

ordered persistent key value store as far as

1:32

I know. That was when I

1:34

knew I and the other guys working

1:36

on Bcache back then that we

1:39

were onto something. We had something that might

1:41

be a good basis for a file system. Hello

1:45

friends and welcome

1:47

back to your

1:50

weekly Linux talk

1:53

show. My

1:58

name is Chris. My name is Wes. My name

2:00

is Brent. Hello, gentlemen. Coming

2:03

up on the show today, yes, BcacheFS

2:06

has shipped and

2:08

it wants to sting like ZFS and

2:11

float like XFS. We'll

2:13

bring the value to you this week and

2:15

share insights from our in-depth conversation with Kent

2:17

Overstreet, the creator of

2:19

BcacheFS. From the problems it

2:22

is built to solve to the process of getting

2:24

it past Linus himself and a lot more. We'll

2:26

get into all of that. We'll round it out

2:29

with some great boosts and picks and

2:31

a lot more. So let's say good morning to our

2:33

friends over at Tailscale. tailscale.com/Linux Unplugged. Try

2:35

it out for free on 100 devices. tailscale.com/

2:41

Linux Unplugged. It is really the easiest way to

2:43

connect all your devices and services

2:45

to each other wherever they're at across

2:47

carrier grade NAT across the world.

2:50

And it's fast, really fast. And

2:53

it's powered and protected by WireGuard. That's

2:56

right. tailscale.com/Linux Unplugged. Go say good

2:58

morning and try it out for free on

3:00

100 devices. tailscale.com/Linux

3:03

Unplugged. And

3:05

of course, time appropriate greetings to our Mumble room.

3:07

Hello virtual lug. Hey Chris. Hey

3:10

Wes and hello guys. Hello.

3:13

Hello. Thank you very much

3:15

for joining us today, guys. Nice

3:18

to see you there. We call that Mumble room

3:20

our low latency audio feed because it is. It's

3:22

Opus. It's running in Mumble. It's a free

3:24

software running on our Linux recording system, a free software

3:26

stack, a great way to listen on Sunday while

3:29

we do the show. It's almost like you're here in

3:31

the studio. It is almost as good as it gets

3:33

really. Nix

3:36

Con North America and SCALE are just around the

3:38

corner March 14th and 17th. Just

3:42

a few episodes away, like 10, 9

3:45

episodes, something like that away. I

3:47

do it and I do it in love. That's

3:49

smart. It's just so we can think about

3:52

it from a content standpoint. And

3:54

with this trip we're trying to do something kind

3:56

of ambitious. We're in the middle of an ad

3:58

winter, and it's always kind of been a

4:00

goal of mine to go cover these Linux

4:02

events. And the longer I

4:04

do it the more I actually think they're kind

4:07

of essential to what we call the quote-unquote Linux

4:09

community. What we see online is

4:12

an echo of what actually takes place at these events

4:14

And that's where the real Linux community is and I've

4:16

always tried my best and always failed. I'm always going

4:18

to try to do better to cover

4:21

these events and convey why

4:23

that matters in free software. And I've

4:26

always felt like the most honest way to do

4:28

that is to get there either by paying for

4:30

it on our own, or by

4:32

the audience helping us get there. Because

4:35

I just wanted it to be a pure

4:37

thing, because I wanted it to be a pure contribution

4:39

back to the free software community. And that's

4:42

why this this idea of trying to get to scale

4:44

and NixCon was like, okay, this is maybe our opportunity.

4:47

It is the adpocalypse. Maybe the audience will

4:49

step up. And man, have

4:51

they because we've had a goal to raise 8 million

4:54

sats via boosts. Very ambitious,

4:56

but we're trying to cover the gas and an Airbnb

4:58

and whatnot. And it's

5:00

been going really well so far. We

5:02

have raised 92 percent of our goal:

5:07

7,324,551

5:09

sats towards our trip to

5:12

go to SCALE. Ninety-two percent

5:14

of the way there. Wow, that's so impressive.

5:16

I'm really grateful. This

5:19

has been not just a goal for

5:21

this year: doing something like this, where

5:27

JB could go cover it, with

5:27

no attachments to any

5:29

commercial entity just completely

5:32

focused on the event, not

5:36

trying to get a return for a sponsor. That

5:36

we could cover something like this Powered

5:39

by the audience has been a long-term goal of

5:41

mine and we are 92 percent

5:43

of the way to raising 8 million sats. I'd

5:46

love to tip it over even a little bit

5:49

just because it's California and it's extremely expensive. Now

5:51

listener Jeff pointed out on the live stream today

5:54

that if you have some on-chain funds

5:56

And you don't have an easy way to get them

5:58

into Lightning so you can boost, well,

6:01

there's a tool that three of

6:03

us have used in the past to

6:05

do just that to support our favorite

6:07

podcasts, called boltz.exchange. B-O-L-

6:09

T-Z dot exchange. It can go both

6:11

ways. You can go from on-chain

6:13

to Lightning, and Lightning to on-chain. If you got

6:15

no idea what I'm talking about, don't worry,

6:17

unpluggedcore.com. You can become a member and

6:20

supporter that way. They have perks there as

6:22

well. But ninety-two percent. We're almost

6:24

there. Help us close that gap and support

6:26

the show at the same time. Thank you, everybody. So,

6:31

BcacheFS. Where does it sit? I

6:33

think that's kind of everybody's question when I

6:35

tell them BcacheFS shipped

6:37

in Linux 6.7, last Sunday's release.

6:40

It came out in Linux

6:42

6.7, and why should you care? And

6:44

Brent did the math. And

6:46

I've already forgotten what you said,

6:49

Brent, but we've been talking about

6:51

BcacheFS here on this show

6:53

since, survey says, Linux Unplugged one five

6:55

eight, which was, ah, two

6:57

thousand and sixteen, in August. Now

6:59

why have we been talking about

7:02

a file system since August

7:04

of two thousand and sixteen? And

7:07

I would like to make the case, and you

7:10

guys feel free to fact-

7:12

check, interrupt, or whatever. Ah,

7:14

but I feel like Linux still hasn't

7:16

solved the problem of a completely robust,

7:19

competitive file system that is

7:22

even semi-comparable to

7:24

what an iPhone can do

7:26

and what Windows XP

7:28

can do. Ext4

7:30

is a great file system. Performant. It's

7:33

reliable, lean and mean. Modern?

7:37

It is not, as impressive as it is that it's

7:39

worked so well for so long. Yeah, and I'm

7:41

grateful. You know what else worked so well

7:44

for so long? HFS.

7:48

The extended family. FAT32.

7:50

Also very

7:52

fast file systems, low overhead, have

7:54

served us well for a very

7:56

long time. But they are inappropriate

7:59

for our modern workstation or modern

8:01

server workload. They're inappropriate. I

8:04

think ext4 is inappropriate as well. I

8:06

love what it's done for us, but

8:08

I think we as a community have fallen

8:11

behind. Some distributions,

8:13

like Fedora, offer Btrfs; OpenSUSE,

8:16

or before that, ReiserFS. Of

8:19

course Ubuntu has made strides by offering

8:21

ZFS as an installation option. But

8:24

fundamentally, ext4 lacks certain features that

8:27

an iPhone has, that every Mac that ships

8:29

has, that Windows has had since XP. Things

8:32

like shadow volume copies, in other

8:34

words snapshots and copies, and sub-volumes,

8:36

maybe even compression or encryption. Things

8:40

that protect users, protect

8:42

the hardware, like their SSD, and reduce writes.

8:45

Things that are actually useful in a workstation

8:47

environment, like sub-volumes for home directories and things

8:49

like that. Things that can

8:51

enable some of those workflows that we've talked

8:53

about when we talked about bulletproof Linux setups.

8:56

Or like on a server, you know, having

8:58

the ability to send your file system is

9:00

an extremely useful functionality. And

9:02

Btrfs and ZFS have this functionality, but

9:04

ext4 does not. And

9:06

XFS has been around a long time. In fact,

9:09

XFS is older than ext4 and ext3

9:11

and others. It's a classic. It's

9:13

been around since the 90s, but we

9:15

still as a community haven't embraced it for

9:17

whatever reason and made it the default file

9:19

system, even though it

9:21

continues to be developed. Ext4 remains

9:23

the champion. And I

9:26

think, as a result, basic

9:28

functionality is now lacking in both the

9:30

desktop experience and in the server tooling

9:33

experience. How do you build a

9:35

standardized way, say in the Dolphin File Manager or

9:37

maybe in systemd, to recover

9:39

files and restore files if

9:41

90% of the file systems that are deployed

9:44

out there by default don't

9:46

have that functionality? You can't. You're

9:49

not going to build the ability to recover files

9:51

in the Dolphin or on

9:53

GNOME files unless there's a standardized

9:55

API that maybe 60%

9:58

Or more of your user base could

10:00

potentially have access to. Systemd

10:02

isn't going to have this built in

10:04

at a level that is extremely accessible

10:06

and deployable until ninety

10:08

percent of the servers deployed have this

10:11

file system. So, like, even though Linux

10:13

has access to Btrfs for these advanced

10:15

features, because they don't have a

10:17

lot of testing, or they're not compatible, or they're not

10:19

trusted, or whatever reason, they are not being

10:21

made the distro default, and

10:23

the result is people just

10:26

don't trust that we have those features available. Nor,

10:28

I would say, would you put the time into

10:30

developing, say, ah, a file recovery mechanism

10:32

like Volume Shadow Copy, the one shipped in Windows

10:34

since Windows XP. And, damn, you know, Mac

10:36

OS. I used to, on this here

10:38

show, give them a hard time for

10:40

HFS+. Oh yeah, and then they

10:43

layered Time Machine on top of that. And

10:45

then they laughed and shamed us. And they

10:48

even revamped Time Machine, now to take advantage

10:50

of the snapshot capabilities their file system delivers,

10:52

and modernized Time Machine. They rolled out a

10:54

brand new file system to all of their

10:56

devices in production, and then rewrote the backup

10:59

tools that use it, rewrote the installer, and

11:01

fundamentally how macOS installations and updates are

11:03

done, while we've sat on

11:05

our hands and done nothing. So.

11:08

It's embarrassing from a workstation standpoint, and it's

11:10

kind of embarrassing on a server side standpoint,

11:12

and there must be some reason for it.

11:16

There must be some reason for it. But I hope, and

11:18

I don't know if it's it, but my hope

11:20

is that perhaps BcacheFS will fit into that

11:22

spot. What, do you

11:24

mean the new default Linux file system? Someday,

11:26

when distros decide it's ready. Not there

11:29

yet. BcacheFS

11:31

is what we've taken

11:33

to calling a third-generation file system.

11:36

And I think you probably explained it best, Wes.

11:38

Well, I mean, if you just think about

11:41

your cycles in technology: we had the generation-

11:43

one file systems, like ext4 and

11:45

XFS, which came from IRIX, and

11:47

then ZFS came on the scene and was

11:50

one of the first copy-on-write file

11:52

systems that was widely deployed and used and

11:54

trusted, and taught us the whole next set

11:56

of features we could expect from file systems.

11:59

but what, ZFS got

12:01

started sometime in 2004-ish? I

12:05

don't know, something like that. In the bowels of

12:07

Sun Microsystems. Right, and Btrfs, I guess it was,

12:09

I don't know exactly when development started, but maybe

12:12

2008, it was merged anyway into

12:15

the kernel in 2009. But

12:17

BcacheFS has had time to

12:20

learn from those things, right? Like, you can

12:22

take a look at how modern file systems

12:24

have been designed, the existing stuff, what's worked,

12:26

what hasn't worked, what issues they've run into.

12:28

Because, I mean, ZFS and Btrfs are both

12:30

great, but their designs have had to

12:32

make compromises, and in today's

12:35

era, maybe you want to sort of remake

12:37

those compromises or take another whack at it, and you

12:39

can have new simplifying assumptions, or at

12:41

least look to see, like, if I was going to

12:43

do this from scratch with these features not bolted on,

12:45

but designed in from the get-go, how

12:48

might you do it? And that's at least some of

12:50

what BcacheFS is trying to do. The

12:53

challenge, really, is that BcacheFS is

12:55

moving quick, and I think

12:57

the documentation can lag behind development, especially

12:59

as things are getting added, now

13:02

that it's been mainlined. When you look

13:04

at the feature set today, I think it's safe to

13:06

say it's got copy-on-write, all right, it's got extended

13:09

ACL support, it's got sub-volume support, but

13:11

there are features that are missing that

13:13

we would consider to be, when I'm

13:15

sitting here going on about standard table-stake

13:18

features, there are some still from BcacheFS

13:20

that are not yet merged in. Yeah,

13:22

right, send-receive, for instance, that's not there.

13:24

Snapshots work, snapshots are doing great, seemingly,

13:26

but not send-receive quite yet. Erasure

13:29

coding, aka, like RAID 5, 6

13:31

type things, that's not

13:33

yet in, but coming soon. I think

13:35

also, there's probably a cohort of Btrfs

13:38

users that have never been super thrilled

13:40

with Btrfs, but maybe they don't wanna

13:42

use ZFS because it doesn't have an in-tree

13:44

kernel module, they're on a Raspberry Pi,

13:46

maybe, low-end hardware. While

13:48

BcacheFS isn't yet optimized for low-end systems,

13:50

I think there is still a cohort

13:52

of ButterFS users that may wanna migrate

13:55

one day to BcacheFS. I think

13:57

it'd be pretty easy, too, just

13:59

in the sense that... There's a lot of similarity between

14:01

the file systems, right? They're both in kernel, so you don't

14:03

have to deal with anything new there. There's

14:06

also a lot of places with BcacheFS where

14:08

the sort of user-experience interface is quite similar

14:10

if you're used to Btrfs, and the

14:12

snapshot interface is sort of modeled on the way Btrfs

14:14

does it, so that'll be familiar to you. And

14:16

then I think you might just think, you know, it's

14:19

going to offer some things like better RAID if

14:21

that is useful to you that your kernel file

14:23

system can't really do. So we had

14:25

a lot of questions like, what's working, what's next? So

14:28

we asked Kent to join us

14:30

on the pod, and he was out and about,

14:32

but was kind enough to share some of his

14:34

time with us. He's

14:36

a thoughtful guy and truly an expert on the state of

14:38

modern file systems, from what I can tell. We

14:41

wanted to share the highlights of that chat

14:43

with you, and my biggest question going into

14:45

this episode was, well, given

14:47

ZFS and Btrfs and XFS, as we've

14:49

been saying, why do

14:52

you really need yet another modern

14:54

file system? Is

14:56

the feature set of Btrfs and ZFS, the performance

14:58

and reliability and scalability of XFS? If we

15:00

can do that, then we'll have something. So

15:03

his quick answer, his elevator pitch is the

15:05

feature set and

15:07

reliability of ZFS, but the scalability of

15:09

XFS and speed. Kind

15:11

of like I said, you know, he wants to have

15:14

something, and I think he's really

15:16

close, that stings like

15:18

ZFS and floats like XFS that you could

15:20

have built into your Linux kernel. Yeah,

15:23

I know there's some work. I'm not sure that's

15:25

ready either, but work on like a nocow path

15:27

that turns off the copy-on-write and is aiming to

15:30

be at least comparable to performance with XFS. I

15:32

don't think it's there. There's a lot of optimizations,

15:34

Kent talks about that are on the

15:36

table that can be done, that have not yet been done.

15:38

So, you know, we'll see when those actually land. But

15:40

it seems like there's still plenty of low-hanging

15:42

fruit out there. Yeah, I think XFS is

15:45

really, in his mind, the one file system

15:47

to beat right now. Not ZFS so much.

15:49

XFS is going to be harder to dethrone.

15:53

XFS performs and scales really well, and

15:55

it's robust. And those guys have been

15:58

real professionals in how they've been

16:00

doing it. But it's also like a

16:02

codebase from the 90s. It shows its

16:04

age in areas. And if

16:06

we can really get the performance and

16:09

the scalability to compete with XFS,

16:12

that's what's on my mind. So,

16:14

Wes, I know you've used XFS in the

16:16

past. I'm curious what your opinion is. Chris,

16:18

you've probably used pretty much every file system.

16:22

I mean, I still have XFS quite a

16:24

bit upstairs on my big old scary raid.

16:26

That's what I use. I'm actually

16:28

using LVM and then XFS

16:30

on top of that. I've used it in

16:32

production too. It is a really good file system. It

16:35

was one of the first ones that I used because

16:37

it had extended attribute support, which was necessary for Windows

16:39

shops for the clients that I was deploying at.

16:42

Oh, right. And it just has

16:44

a good recovery tool set. Like the thing that I

16:47

think I really appreciated is I have been in situations

16:49

where I've had to recover data and XFS tooling was

16:51

there for me. It came through. The file system was

16:53

solid. I think I agree

16:55

with Kent. Performance wise, feature set wise, it is

16:57

one to beat. Right. Yeah. If you're

16:59

not relying on some of these modern file

17:02

system things like snapshots or copy on write

17:04

functionality, then yeah, XFS has always been the

17:06

thing I've reached for. And now

17:08

they've been trying to add some of those advanced features

17:10

on, but it's an old

17:12

design from the 90s that you kind of

17:14

have to stretch and make work in this

17:16

paradigm, whereas BcacheFS is

17:18

just modern. It's had time to

17:21

learn from both file systems, but

17:24

also from databases, which is something Kent

17:26

spent a lot of time looking into.

17:28

And it's actually kind of like a

17:30

file system on top of a database.

17:32

Yeah, that's where I put most

17:34

of my effort in for

17:37

sure. And that was the

17:39

dream. There's been multiple

17:41

efforts to do that. Microsoft had

17:43

a big one, WinFS, but

17:46

it's hard to make a generic

17:49

database scale in all the

17:51

ways that a file system can. If a

17:53

file system needs to. And so that

17:55

was a pretty ambitious project. And

17:57

comparing it, like, technology-wise, to ZFS

18:02

or Btrfs, the other two main

18:05

contenders feature-set-wise: ZFS

18:05

was able to do snapshots by

18:07

giving up on extents. That's

18:10

the old indirect block scheme. If

18:14

you've done a file system with snapshots, you

18:16

understand why they did that. Extents

18:20

that overlap in arbitrary ways

18:22

plus snapshots introduces lots

18:25

of brain-melting problems. As

18:27

you're considering it was the first mainstream,

18:30

I mean there was WAFL before, the

18:32

first mainstream file

18:34

system with snapshots, it's

18:37

pretty understandable. But

18:39

performance-wise, it's

18:42

never going to scale as well as

18:44

something that's extent-based. I think

18:46

too when you're launching a new file system post

18:48

Btrfs, there are some natural questions Linux users are

18:51

going to have around, well have you solved

18:53

for this problem? Have you solved for this

18:55

problem? So we talked to Kent about that.

18:57

You also asked a great question about erasure

19:00

coding. I am really excited

19:02

about erasure coding in BcacheFS because the vast

19:04

majority of it is done and the design

19:06

did turn out really, really nicely. No write hole

19:09

at all. I'm excited to

19:11

put it through more benchmarking. Functionality-wise,

19:14

like all the core algorithmic problems

19:16

are done, getting

19:18

it to play nicely with copy GC was a

19:21

big endeavor. But

19:23

actually stabilizing a feature and

19:26

getting it really ready for prime time, that's

19:28

like as long as developing it was in the

19:30

first place. Now, I

19:33

need a little lesson here for those of us who are less file

19:36

system initiated. He

19:38

mentioned, well you mentioned erasure coding.

19:41

Can you give us insight into what

19:43

that is? And he mentioned also the raid hole. I

19:45

have guesses. Yeah. This is

19:47

a classic problem that Btrfs has been struggling with

19:49

for a while. Well

19:52

actually, Kent kind of goes into why it's

19:54

been a big problem for Btrfs. And it's

19:56

sort of a design issue that BcacheFS just

19:59

simply... isn't going to be subject to. Like

20:02

you mentioned, the RAID hole has always been the big problem

20:04

in RAID systems. When

20:07

you're doing RAID 5 and 6, this doesn't apply to RAID 1.

20:10

And you've got these stripes of

20:12

unrelated data, and you've got

20:14

parity blocks that let

20:16

you reconstruct if any of those

20:18

fails. Well, if you do

20:20

a partial overwrite to some

20:22

of the data in a stripe, it's fine

20:25

if you're writing a whole stripe's worth at one

20:27

time, but that doesn't usually happen because your writes

20:29

are usually not aligned so nicely. The

20:31

stripes are also quite big. Then

20:34

there's always a window in time

20:36

where the data that you updated

20:38

is inconsistent with the P and

20:40

the Q block because you can't

20:42

do writes to different drives atomically.

20:46

There's always going to be a window

20:48

where either your redundancy information got written

20:51

and the new data didn't get written,

20:53

or the new data did get written

20:55

and the new redundancy information didn't get

20:58

written. And you think, oh, that's not so

21:00

bad. And the data that

21:02

I wrote is possibly inconsistent if I

21:04

crash, but that's just the data that

21:06

I wrote. I mean,

21:08

most applications can deal with stuff

21:11

that they were in the

21:13

process of writing that wasn't, I have to

21:15

say, being corrupt. But actually, no, because

21:18

this causes... If you then crash

21:21

and lose a drive, so you

21:24

have to do reconstruct reads, it

21:26

will cause you to reconstruct incorrect

21:28

data for everything else in that

21:30

stripe that it shared the P and the

21:32

Q with. So it's a

21:34

really nasty issue. And this

21:37

is the fundamental problem that Btrfs has been struggling with.

21:39

They're close to a fix. I

21:41

think they're testing it. But

21:44

they... And it's really burned their reputation, I

21:46

would add. Yeah, right. And it's

21:48

one of the things that ZFS has had working and

21:50

is one reason to use ZFS. And Brent, to your

21:53

question, this is when you want to use something like

21:55

RAID 5, RAID 6, where you're

21:57

not doing a full mirror type setup because you want to

21:59

get more efficiency, where you store more with

22:01

less, having to use fewer disks.

22:04

And so you use these parity calculations: you

22:06

can write these parity bits instead of more

22:09

replicas. And you could do

22:11

it with Btrfs, you could do it with MD RAID,

22:13

but you're gonna run into these write-hole problems depending on

22:15

the setup, and Btrfs just didn't have an answer for

22:17

it. So Linux hasn't had a very good default

22:20

answer for that problem unless you want to go with ZFS.

22:22

Which means as far as file systems built

22:25

into Linux, we haven't had a great answer.
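To make the write hole concrete, here is a toy XOR-parity model (ours, not from the show): update one data block, crash before the parity block is rewritten, then lose a different drive, and reconstruction silently returns garbage for a block that was never touched.

```python
def parity(blocks):
    """RAID-5 style parity block: XOR of all data blocks in the stripe."""
    p = 0
    for b in blocks:
        p ^= b
    return p

def reconstruct(blocks, p, lost):
    """Rebuild the block at index `lost` from the parity and the survivors."""
    x = p
    for i, b in enumerate(blocks):
        if i != lost:
            x ^= b
    return x

stripe = [0b1010, 0b0110, 0b1111]
p = parity(stripe)                      # consistent stripe on disk

stripe[0] = 0b0001                      # new data block hits its drive...
# ...crash: the parity drive still holds the old p (the "window in time")

rebuilt = reconstruct(stripe, p, lost=2)
# rebuilt is 0b0100, not the 0b1111 we stored: an unrelated, untouched
# block in the stripe comes back corrupted after the reconstruct read
```

With a fresh parity write the reconstruction would have been exact; the damage comes purely from the non-atomic gap between the data write and the parity write.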

22:28

The reason why BcacheFS doesn't have this

22:30

problem? Well... Well, we're a

22:32

copy-on-write file system. Our whole

22:34

file system is based on not overwriting

22:37

existing data in place. So

22:40

why would we even overwrite a

22:42

stripe, a part of a stripe in

22:44

place? We just create

22:47

big stripes, same as we

22:49

normally write the buckets, and we'll

22:52

create new stripes as needed, but we

22:54

won't overwrite our existing stripes. It's kind

22:56

of integrated with our garbage collection. The

22:59

one trick to that is that as

23:01

we're building up stripes, we'll always be building

23:04

up full stripes, not small stripes like ZFS

23:06

does. Data won't be redundant

23:08

right away because we can't write out the

23:10

P and the Q until we have a

23:13

full stripe's worth of data. So we

23:15

just replicate the writes initially, and

23:18

then as soon as we've built up a stripe, then

23:20

we discard the extra replicas. And

23:22

then the cool thing about that is that if

23:24

nothing in the system fsyncs, which forces

23:29

a flush, then we can overwrite

23:29

those buckets where we wrote the extra

23:32

replicas to for the next replicated

23:35

write, and it will

23:38

only cost us bus bandwidth. There

23:40

will be a very small performance cost

23:42

to these extra replicated writes. It's

23:44

just a neat idea where you

23:47

basically start by doing an extra replication. So

23:49

you do have to have more copies than

23:52

you need, but only until you've sort of

23:54

spooled enough writes where you can write out

23:56

the P and Q parity bits, and then

23:58

you fix everything up and the extra data

24:00

goes away, and you get a really nice on-disk layout. And

24:02

if things go well in terms of fsync, you

24:04

kind of get to use the same area on

24:06

disk to just keep doing that over and over as you're

24:09

writing to the disk so it can all be very quick.
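A toy model of that scheme (our simplification with made-up names; the real bcachefs machinery is far more involved): writes are mirrored immediately for durability, and only once a full stripe's worth of blocks has accumulated is the parity computed and the extra replicas released.

```python
from functools import reduce

STRIPE_WIDTH = 4  # data blocks per stripe; an assumption for this sketch

class CowStripeWriter:
    """Copy-on-write stripe building: never overwrite part of a stripe."""

    def __init__(self):
        self.mirrored = []  # recent writes, durable via an extra replica
        self.stripes = []   # completed stripes: (data_blocks, parity)

    def write(self, block):
        self.mirrored.append(block)  # redundant right away, via replication
        if len(self.mirrored) == STRIPE_WIDTH:
            # Parity is only ever computed over a FULL stripe, so the
            # partial-overwrite window behind the write hole never exists.
            p = reduce(lambda a, b: a ^ b, self.mirrored)
            self.stripes.append((self.mirrored, p))
            self.mirrored = []  # extra replicas can now be dropped/reused

w = CowStripeWriter()
for b in [3, 5, 9, 6, 7]:
    w.write(b)
# blocks 3, 5, 9, 6 became one parity-protected stripe; 7 is still mirrored
```

The trade-off matches what Kent describes: briefly storing more copies than strictly needed, in exchange for never doing a read-modify-write of a live stripe.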

24:15

kolide.com slash unplugged. If you're

24:17

in IT, if you deal with security,

24:19

you have to hear this. You've noticed

24:21

probably a recurring pattern, especially over the

24:23

last few years as BYOD has become

24:25

more common but often by

24:27

no fault of their own employees or

24:29

their machines their devices are a

24:31

common threat vector. Phished passwords

24:33

can also be a huge problem. Stolen credentials

24:36

are just unfortunately more and more common and

24:38

of course they don't even realize it's happened

24:40

most of the time. It's not their

24:42

fault. I think in a lot of ways the technology

24:44

so far has really failed them and they have inadequate

24:47

preventative measures and perhaps you even as

24:49

a corporation have policies and procedures

24:51

in place to make sure that they have these

24:53

tools but there hasn't been a

24:55

great way to enforce that that isn't a massive

24:57

burden on IT. That's where

24:59

Kolide comes in. It is a solution to this

25:02

challenge for those in security or in IT. Kolide

25:04

ensures that only secure devices can access your

25:07

network and your apps so you say goodbye

25:09

to compromised credentials and phished credentials, because Kolide's

25:11

checking that stuff before they can connect and

25:14

don't worry about a diverse operating system fleet. Kolide

25:17

gives you one dashboard for Linux, Mac,

25:19

and Windows. And Kolide

25:22

will help end users solve problems so if they

25:24

run into something where they're out

25:26

of compliance, where they have phished credentials,

25:29

maybe they don't have the right patches, maybe they

25:31

don't have the right software installed, maybe they don't

25:33

have whatever it is, Kolide

25:35

can help guide them through the process of fixing

25:37

that without putting that burden on IT. By using

25:40

your messaging system, by using your processes and your

25:42

procedures, Kolide can help them figure it out on

25:44

their own. They're smart people. They

25:46

don't need to message IT for everything. It

25:49

doesn't burden IT. It gives you more management and it helps

25:51

end users solve their own problem. It swaps

25:53

that whole dynamic between end users

25:55

and IT as well. That's

25:58

huge. I wish I would have had this when I

26:00

was doing IT. So go check it

26:02

out and support the show. They got a

26:04

demo over there. If you watch that, I'll

26:06

kind of explain it to you a little

26:08

bit further. It's a great way to see

26:10

how it works and support the show by

26:12

going to kolide.com/unplugged. That's K-O-L-I-D-E dot

26:15

com slash unplugged. kolide.com/unplugged.

26:24

Well all that theory sounds nice, but

26:26

if you're a practical file system user

26:28

out there, you might be wondering what

26:30

types of workloads is BcacheFS actually ready

26:32

for or even meant for? Workload

26:35

wise, people are throwing

26:37

database workloads at it. There's

26:40

a guy out in China who

26:42

has been throwing really crazy MongoDB

26:44

workloads at it and pushing

26:47

snapshots really hard. Yeah, ideally

26:50

any workload. It should be

26:52

a truly general-purpose file system. Scales

26:54

just fine. I know I've

26:57

heard people say that Btrfs does not really

26:59

scale past maybe a

27:01

hundred snapshots. The fsck algorithms

27:04

do not handle the references between keys

27:07

and keys in other snapshots. That's

27:09

all very well, but BcacheFS should

27:11

scale to as many snapshots as you can take.
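That scaling claim comes from keys being versioned rather than copied: if every key carries a snapshot ID and a lookup walks the snapshot's ancestor chain, unchanged keys are never duplicated when a snapshot is taken. A toy Python sketch of the idea (all names and structure here are illustrative assumptions, not BcacheFS code):

```python
# Snapshots form a tree; a child inherits every key it hasn't
# overwritten, so taking a snapshot copies nothing.
parents = {2: 1, 3: 2}  # snapshot 3 descends from 2, which descends from 1

# Versioned store: (key, snapshot_id) -> value
store = {
    ("file_a", 1): "v1",  # written in snapshot 1
    ("file_a", 3): "v3",  # overwritten only in snapshot 3
}

def lookup(key, snap):
    """Walk the ancestor chain until a version of the key is found."""
    while snap is not None:
        if (key, snap) in store:
            return store[(key, snap)]
        snap = parents.get(snap)
    return None

print(lookup("file_a", 2))  # snapshot 2 inherits "v1" from snapshot 1
print(lookup("file_a", 3))  # snapshot 3 sees its own "v3"
```

Because unchanged keys are shared, the cost grows with what you modify, not with how many snapshots exist.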

27:15

It's got writable snapshots. The

27:17

compression is really good. People have been really happy

27:19

with that. Oh, I heard that and I said,

27:21

oh wait, wait, I'd like to know more about

27:24

the compression. But yeah, the way

27:26

BcacheFS does compression is extent-based

27:28

instead of block-based. So

27:30

we're doing compression at 64K or 128K granularity, which

27:32

is quite a bit better than

27:36

is typical. And so our

27:38

compression ratios are quite a bit better than

27:40

other file systems that do compression. We

27:43

follow the process of these projects

27:45

getting upstreamed and at times there

27:48

are moments of drama. And a

27:50

lot of times it's when the

27:53

developer has to interact with the kernel

27:55

maintainer. Well in this

27:57

case that means Kent has to interact

28:00

with Linus and he got feedback

28:02

from Linus, which he incorporated. So

28:04

we wanted to know what that experience was like. Oh,

28:07

that was, that was stressful. Um,

28:11

It's, it's stressful partly because

28:13

no one really knows what the process is

28:15

or should be. The process is

28:18

always kind of ad hoc consensus

28:20

based, but all the

28:22

people you build consensus with are kind of

28:24

angry and irritable and don't want to be

28:26

bothered. But that's, that's also what,

28:29

as it should be, the bar should be high.

28:31

We do want to have high standards and we

28:33

do want to be pushing people to write the

28:35

best code that they can. And

28:37

there's, there's questions like who's going to be on

28:39

the hook for, for maintaining this thing down

28:41

the line. All the other people in the

28:43

VFS that have to deal with file systems.

28:47

Like if, if you're hacking on VFS code, which I've

28:49

done in the past, and then you have to update

28:51

all the file systems or all the block layer drivers,

28:54

then kind of that gets added.

28:57

That's overhead for you. And if, if

29:00

that code maybe is a pain to work with,

29:02

if the tests aren't there, then those are

29:04

pretty real issues. I think another

29:07

common concern that I've heard people bring

29:09

up is, well, is this

29:11

just one guy? Right. Just

29:13

because it's merged in the kernel doesn't mean anyone else in the kernel knows

29:16

how or wants to work on it. Yes.

29:18

That's, that's at the forefront of my mind. Now

29:21

that I'm upstream, it looks like funding is going to

29:23

be getting easier. I had a

29:25

decent amount of funding for, for like six years from

29:27

a company in Germany. That was enough for me to

29:29

work on it full time. And

29:32

then the tech downturn happened. Actually, they do video

29:34

editing, and the

29:36

strikes in LA were what did it

29:38

for them, and they had to pull back.

29:42

So now I'm just on my Patreon funding, but now

29:44

that I'm upstream, I'm getting more attention from, like, the

29:46

NixOS guys and foundations in Europe. And

29:49

there might actually be money for spinning

29:51

up an actual team and getting some

29:53

younger guys involved. That would be

29:55

great. Get some young blood into the kernel, you

29:58

know, learn from somebody like Kent who really knows this stuff.

30:00

Did you catch it, Brent? Did you catch what he said

30:02

in there? I heard that special

30:04

N-word that we love around here: Nix. I

30:06

certainly did too. Yeah. I was

30:08

like, oh, now what's going on with Nix? Also,

30:11

just the NixOS people have been great to

30:13

work with. They've

30:15

been kind of at the forefront

30:17

of getting BcacheFS out there

30:19

and getting integration stuff sorted and

30:23

just doing stuff without me having to get involved at

30:25

all. They've been great to work with. And

30:28

now there's people from the foundation that funds

30:30

NixOS that are talking about funding

30:33

my work. That would be really cool

30:35

if that happens. Yeah, that would be. Wouldn't that be

30:37

great? Two great things

30:39

working together. And Nix has been

30:41

a great way to experiment with BcacheFS along the

30:43

way. Oh, yeah. It makes it really easy. So

30:47

it's like before we talked

30:49

with, before we spoke with Kent, we

30:51

decided, well, let's take a really quick look at it. We didn't

30:53

do much. We just looked at the tooling and looked at it,

30:55

but Nix makes it really, really,

30:58

really possible. He also was talking

31:00

about how he wants to rebuild his test environment

31:02

on Nix to make it a lot cleaner. He

31:05

has a lot of good things to say about the Nix folks. We

31:08

did not prod and it just came up in

31:10

the special. You know what else?

31:12

Well, actually, this one maybe had a slight

31:15

prodding by Wes Payne, but this also came

31:17

up in the conversation. What other recurring topic?

31:19

I'll definitely be an early one. Yeah. I

31:21

am all for Rust. I

31:23

dipped into Rust years ago. What

31:26

eight or so years ago? And I fell

31:28

in love with it pretty

31:30

early on. It's, in

31:33

my opinion, the biggest advance

31:36

that we've had in systems

31:38

programming for decades. The

31:41

dream to me, I hate debugging. I

31:44

want to be able to finish a chunk of code and

31:46

move on to the next thing and know that the code that

31:48

I wrote is done,

31:50

is correct, that I'm not going to have to come back to it. Rust,

31:53

with the borrow checker, is able to

31:55

prove the

31:57

correctness of huge swaths of things

32:00

that we had to analyze manually before. It

32:03

makes huge classes of bugs go away.
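One concrete class of bug being alluded to is aliased mutation: changing a collection while something else is iterating over it. A dynamic language happily runs the broken version, while Rust's borrow checker rejects the equivalent code at compile time. A hypothetical Python illustration of the hazard:

```python
# Removing elements from a list while iterating over it silently
# corrupts the traversal: after 1 is removed, the list shifts left
# and the loop's index skips right past the 3.
nums = [1, 3, 2, 4]
for n in nums:
    if n % 2 == 1:
        nums.remove(n)  # mutation during iteration

# 3 is odd but survives, because the traversal never saw it.
print(nums)  # [3, 2, 4]
```

In Rust, the loop would hold a shared borrow of the collection, so the mutating call inside it simply doesn't compile; that whole category of bug is gone before the program ever runs.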

32:06

I'd like to know if people like it when we go

32:08

this deep, if when we get this technical, maybe people are

32:10

glazing over at this point, but I really

32:12

enjoyed picking his brain about this stuff and he has

32:14

a wealth of knowledge about other file systems as well.

32:17

There's some pretty decent docs

32:19

over at bcachefs.org including

32:22

a user manual which goes into more detail

32:24

if you want it about the architecture underneath

32:26

the file system, the extents, buckets, and

32:29

how compression and encryption works. I know you just

32:31

mentioned this is feeling like a technical talk, but

32:33

I love hearing about the people working on these

32:36

technologies as well. So often we

32:39

just concentrate on the technology itself, but there's

32:42

a real human writing this stuff it turns out, and

32:44

someone who's passionate about some of the same tools we are.

32:47

Yeah, definitely. Yeah, I feel like

32:49

Kent is unplugged people. He would

32:52

so fit in at a meetup,

32:54

it's ridiculous. So when we

32:56

got connected, I mean first of all

32:58

he talked about NixOS, right? He's a

33:00

Rust fan and he's a file system developer.

33:03

So obviously a Linux user. Yeah, so like

33:05

those all check the boxes to

33:08

be, you know, great to talk to at a meetup, but

33:10

when we got connected to him he was on video and

33:13

he was hanging out, chilling,

33:15

camping across the country in the back

33:18

of his Subaru. When it

33:20

got merged I was in

33:22

Badlands National Park, way

33:25

down a dirt road, camped out

33:27

over just gorgeous scenery, middle of

33:29

nowhere. He's just doing it, you

33:31

know? It's like, it's not like

33:33

a super glorious setup, but he's

33:36

just making it work. He's

33:38

changing the Linux file system landscape from

33:40

the back of his Subaru while he's

33:42

traveling. I guess he was

33:44

in Portland not too long ago too, so we

33:46

just missed him. Maybe if he ever makes it out again we'll take

33:49

him out, show him a great time. Super,

33:51

super grateful to Kent for taking some time to talk

33:53

with us from the road. I think it was snowing

33:55

behind him too, so I hope he's staying warm out

33:57

there. We should have asked him what he's got for

33:59

heat. But it was

34:01

a great chat and it really made me excited

34:03

for the future of BcacheFS. I think it's

34:06

in the keep-an-eye-on-it and test-it

34:08

stage. Yeah, exactly. It only just got merged.

34:10

I know like 6.8, there's already some performance

34:12

improvements coming down the pipe. So there'll probably

34:15

be a few releases while things stabilize and

34:17

shake out. But yeah, try it out on

34:19

some test workloads and see what you think.

34:21

The on-disk format should be solid, is stable,

34:23

has been stable for years. I

34:26

mean, he essentially had some of the basics of it figured

34:28

out in 2015 when he started it

34:30

and some of that really hasn't changed.

34:32

Yeah, right. Kind of grew out of

34:35

Bcache underneath and then this. It's got

34:37

the B-tree layer and the transactional database

34:39

on top of the B-tree and then

34:41

you got the file system. unpluggedcore.com.
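That layering can be sketched in miniature: a sorted key-value store standing in for the B-tree, a transaction layer that batches updates, and filesystem metadata stored as ordinary keys on top. Everything below is an illustrative toy, not BcacheFS's actual interfaces:

```python
import bisect

class SortedKV:
    """Stand-in for the B-tree layer: a sorted key-value store."""
    def __init__(self):
        self.keys, self.vals = [], []

    def put(self, k, v):
        i = bisect.bisect_left(self.keys, k)
        if i < len(self.keys) and self.keys[i] == k:
            self.vals[i] = v          # update in place
        else:
            self.keys.insert(i, k)    # insert, keeping sort order
            self.vals.insert(i, v)

    def get(self, k):
        i = bisect.bisect_left(self.keys, k)
        if i < len(self.keys) and self.keys[i] == k:
            return self.vals[i]
        return None

class Transaction:
    """Stand-in for the transactional layer: buffer, then apply together."""
    def __init__(self, db):
        self.db, self.pending = db, {}

    def put(self, k, v):
        self.pending[k] = v

    def commit(self):
        for k, v in self.pending.items():
            self.db.put(k, v)

# The "filesystem" is then just keys in the tree, committed as a batch.
db = SortedKV()
tx = Transaction(db)
tx.put(("inode", 1), {"size": 4096})
tx.put(("extent", 1, 0), "disk block 42")
tx.commit()
print(db.get(("inode", 1)))
```

The point of the layering is that the filesystem code never touches disk formats directly; it only reads and writes keys through the transactional layer.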

34:46

Use promo code 2024 and

34:48

take $3 off a month forever

34:52

for your membership. You get double

34:54

the content. You could get the live stream, which as

34:57

of right now is a two hour and 16 minute-ish episode

35:00

of the show. You support

35:02

each production directly. If

35:04

you use the promo code 2024, you get that deal for a

35:06

very limited time. Now, I checked

35:09

on it this morning: two spots

35:11

are left. And I thought, what

35:13

are the chances? Somebody's

35:15

going to use that maybe on the live stream. By the time this

35:18

publishes, nobody will get to take advantage of it. Thank you everybody who

35:20

did, by the way. So I'm going to

35:22

add 10 more slots just for people that didn't get a chance to

35:24

hear it last week. 10 more slots.

35:26

If you use the promo code 2024, it'll

35:28

take $3 off a month forever. We'll put

35:30

a link directly to that. And

35:32

then you get the live stream version, the

35:35

bootleg or the ad-free version that Drew still cuts

35:37

together so you get that nice tight production. But

35:39

they're going quick. It applies to new purchases,

35:41

existing members, or if you're reactivating

35:43

an expired subscription, or if you want to upgrade to a

35:46

network membership, it'll apply to that too, which then

35:48

you support all the shows. Of course, you

35:50

can always boost in as well. The nice

35:52

thing about that, of course, is that immediately goes

35:54

directly to the people involved in the production.

35:57

We also kick back some to the podcast app developers

35:59

and the Podcast Index as well to help

36:01

support those projects and we're

36:03

counting those towards our trip to

36:05

scale. Thank you everybody who has become a member

36:07

though you can become a member at unpluggedcore.com and

36:10

of course we have links on

36:12

our website. Now

36:17

we've been doing some crazy stuff here this year and

36:19

we'd love to hear if you've been joining us say

36:21

on the 32-bit challenge or what you think of these

36:24

new file systems. linuxunplugged.com

36:27

contact for that or you can

36:30

boost in. Boosts are grand. Thank

36:33

you everybody who does boost in and we

36:35

have a manual boost this

36:37

week. Our first ever on-chain

36:39

boost gentlemen and it's

36:41

probably a good thing because of the size

36:43

of this whopper and this is helping us

36:46

get to scale that's got us a big

36:48

way to our goal that we talked about

36:50

earlier today came in via matrix from I'm

36:52

happy on January 8th for

36:55

3,225,275 sats. Officially

37:05

making I'm happy the largest booster

37:07

in the history of the unplugged

37:10

podcast. A boost comes in a

37:12

unique way. Yes

37:14

he writes first time long time and

37:16

long time Linux user as well. Well

37:19

thank you. Remember when

37:21

kernel 2.0 came out? And we're still

37:23

cool. Yes, all right. Heck yes. I

37:25

mean 2.0 come on man that's not

37:27

that long ago right. Anyways

37:30

he writes I've been a listener of your podcast

37:32

for about six years now. The insights and advice

37:34

I've gained have been invaluable as my role as

37:36

a CTO in a governmental institution and

37:38

in managing my own small business. I

37:41

love that he's a CTO in a government institution and

37:43

he's boosting us. That is a good

37:45

sign. That's special. That's making me feel good. He

37:48

says this boost is a token of appreciation for the

37:50

tremendous work you do. It would

37:52

be great to have the show focus

37:54

on crucial topics from the perspective of a

37:57

large institution, maybe central identity like FreeIPA

37:59

and administration. Enterprise things

38:01

like identity are an interesting topic for

38:03

sure. Anyways, he says keep

38:05

up the excellent work and I look forward to the

38:07

return of Linux Action News one day. If I remember

38:09

correctly, this boost will be the largest ever. Correct?

38:12

You are, sir. And I

38:14

challenge everyone to beat me. Challenge on.

38:18

Yes, you should. Thank you so much

38:21

for that tremendous support. That's

38:32

very, very, very much appreciated. And that played a

38:34

big role in our over 90 percent milestone this

38:36

week. So

38:40

very much, very much. And

38:43

then, geez, wow, very much also

38:46

appreciated, and Deleted coming in as our

38:48

second baller for 623,456 sats. Hey,

38:53

Enterprise! He says third place is just a fancy

38:57

word for losing. When I

38:59

did my 32-bit challenge, I allowed myself a, quote, unlimited budget.

39:03

If there was a problem that could be solved with money,

39:05

I didn't hesitate to spend it. I like this

39:07

approach. That meant, you

39:09

know, big upgrades like on SSDs that

39:11

use PATA and upgrading from one gig

39:14

to two gigs. The difficulty of

39:16

sourcing these upgrades was immense and I

39:18

had to buy from some very shady vendors.

39:20

One of which still hasn't delivered my

39:22

order. Oh, no. Systems

39:26

that are focused to continue to run 32-bit

39:28

only are going to become extremely difficult to

39:30

support. Systems that are forced, he says, to

39:32

continue. I know. And maybe you could become

39:34

a specialist. I felt

39:36

like, and I still feel like my takeaway was, and I'd be curious

39:38

to know if you guys feel this way, we could not

39:41

probably do this challenge at the end of this year. Like,

39:44

that was maybe the door's closing. Did you get

39:46

that sense, too? Oh, yeah. I mean, like, the

39:48

software support is limited or a hassle. Yeah,

39:51

and then the hardware support is just going to

39:53

keep degrading even if people elect to continue software

39:55

support. And it sounds like where there

39:57

was support, you often had to build it yourself.

40:00

and the hardware couldn't really handle that in a reasonable way. So

40:02

it's a lose-lose.

40:05

Chris I did notice that was

40:08

multiple boosts and one of those was a

40:10

one two three four five six. So the

40:12

combination is one two

40:14

three four five. That's

40:17

the stupidest combination I've ever heard in my life!

40:19

Thank you, thank you. Nice catch. Marauded

40:22

Mood came in with a live boost for 100,000 sats. The

40:27

shimmering cement running from here to

40:29

Pasadena. Just says

40:31

scale boost and thank you. We

40:33

appreciate the help. We do. If

40:37

you do boost in with a scale

40:39

boost too, let us know if we're gonna see you there.

40:41

I'd love to put a face to the boost name. VT52

40:43

boosted in with a total of two boosts

40:45

96,459 sats and one of those was a

40:47

row of ducks. BSD

40:54

Bake Off with Brent Lee. When? Hmm,

40:57

hmm. I mean I don't

41:00

know if the audience appreciates the scale

41:02

to which I find BSD annoying. This

41:06

would be one of the hardest challenges. I

41:08

also proposed a challenge to the boys over

41:10

the break and neither one of them

41:12

bit even a little bit. I was just curious when's

41:14

the last time you actually gave BSD like a good

41:17

run? In my nightmare last

41:19

night? In my nightmares. In my

41:21

nightmares. Reasons? Like if you had

41:23

to give top three reasons? It's

41:25

like it's like if you were

41:28

born on Earth and then you tried

41:30

to live on Mars. It would be

41:32

just possible but constant friction.

41:34

Constant friction. See I was enjoying the

41:36

I liked that challenge. I was down

41:38

to do it. I just I was

41:41

just having such an easy time with

41:43

my part of the 32-bit challenge. Yeah,

41:45

Mr. Mangio over there. I think a

41:47

Bake Off sounds great. I

41:50

mean I gotta be convinced. I don't know. I

41:52

really... You know that FreeBSD kernel has some

41:54

neat stuff in it. OpenBSD does some neat

41:56

stuff too. We'll give you better hardware. Somebody give

41:58

me something I could do in some... ideas and

42:00

maybe what to try. I can make

42:02

a router challenge. I like my idea a lot

42:04

better than the BSD challenge. But

42:08

it's fine. It's fine. Thank

42:10

you for the support. I don't know about a BSD bake-off though.

42:13

Now VT's second boost. Due

42:16

to some weather-related ISP issues, I haven't

42:18

been able to finish customizing it, but

42:21

I've got a BBS up and running

42:23

if folks are interested. You

42:26

can telnet to pebkac.lol. Just a great URL. I

42:28

don't see it. It's not responding. Is it up?

42:31

Is it up right now? Due

42:36

to some weather-related ISP issues, I

42:38

haven't been able to finish customizing

42:40

it. So maybe... Oh,

42:42

I'm getting something. Look at this. Synchronet BBS

42:45

for Linux version 3.19. All right. Awesome.

42:48

Pebkac.lol. Yeah. I

42:51

think this is our first

42:53

boosted in claim of... So

42:55

it's pebcac.laul. P-E-B-K-A-C.laul. Some lovely

42:57

ASCII art in the banner here. That's

42:59

great. Nice job. VT,

43:02

you crushed it. I

43:04

think that means we now have a

43:06

crowning official BBS for JB, don't we?

43:09

You know what's wild? VT

43:12

boosted that on the 8th, and

43:15

the episode went out on the 7th. He

43:17

did that like... Wow. It's

43:19

impressive. What's even more wild about

43:21

it is the boost. The boost came in at

43:23

least our time at 8am, and the episode had

43:26

only been out probably for like 12 hours at

43:28

most. So he got this BBS. Maybe he already

43:30

had it? I don't know, but

43:32

well done. Well done. Bullet

43:35

Parrot comes in with 60,000 sats using the

43:37

index. Here's a small

43:39

amount to get the gang a little closer to

43:41

scale, and if you end up at Texas Linux

43:43

Fest, I hope to come say hi to y'all.

43:45

Great shows. Mmm, California.

43:47

Beautiful. Thank you. We

43:51

were going to get there. Helping us get there

43:53

one episode at a time. We're

43:55

going to be showing up like bosses. People will be like,

43:57

oh yeah, how'd you get here? We'll be like sats. I

44:00

paid for that Linux limo. That's what people say, right? When

44:02

you show up at a place, they'll be like, Hey, how'd

44:04

you get here? That's a pretty good question. Well, you'll be

44:06

wearing your Bitcoin shirt. Yeah. Well, no, I, you know, it's

44:08

not great off-site, actually. Not the best. Not great. Space

44:13

Nerd Mo comes in with 55,555 sats. I

44:18

love that name! Coming in hot with the

44:20

booth! Kicking in to help with the journey

44:22

to scale. Thank you very much. Hey, thanks

44:24

for shimmering cement running from the

44:26

hills of Pasadena. MixieBeep

44:28

boosted in with 50,000 sats

44:31

from Castamatic, simply saying, scale

44:34

boost! Mmm,

44:36

California! Beautiful. That's

44:39

very much appreciated. We're getting there. We're gonna get there

44:41

and then I'm gonna miss all these boosts. It's gonna

44:44

be a thing, but still love hearing from you

44:46

guys. You have to keep boosting in, but we

44:48

really appreciate the support. Night62

44:51

came in with 50,000 sats and says

44:53

this boost is to help contribute to

44:55

getting to scale. Hey, thanks for

44:57

shimmering cement running from here to Pasadena.

45:25

I'm inspired by all the support. Got any

45:27

ideas of how we can document it for you?

45:30

Boost in. What would you like to know? Yeah, definitely let us know.

45:33

Hybrid Sarcasm boosting in with four boosts for

45:35

a total of 41,000, but

45:38

I think it's important here to note that there are three empty boosts

45:40

of 12,345, or 1, 2, 3, 4, 5, and

45:46

then a final boost for 3,965 sats that just says, they've gone

45:49

to plaid! The

45:54

entire Spaceballs boost stack in one

45:56

boost, and unlocked it! What the hell

45:58

was that?! Spaceballs

46:00

One! They've gone to plaid! Our

46:03

first gone-to-plaid boost.

46:05

Well done, Hybrid Sarcasm. That'll

46:07

help. The whole stack

46:09

right there, in one go.

46:12

Egbert boosted in again this episode, thirty

46:14

thousand and sixty-six sats. He says,

46:16

time to do my zip code boost.

46:18

Also glad Brent got the pronunciation of

46:21

my username right. Last episode triggered me when

46:23

Brent said it, and I just,

46:25

I just, it happened again.

46:28

Well,

46:30

if this is a US

46:32

zip code, it looks like the

46:34

postal code in Cobb County, Georgia,

46:36

with cities including Marietta.

46:39

Hello, Cobb County. And a

46:41

similar one in... yes, thank you for boosting in.

46:43

Let us know if we got that right or are totally off

46:45

base. For these I go to the whole

46:47

globe, and I like it because it's a big map.

46:49

It's a big map. I'm always impressed when

46:51

you can find it. The map was handed to me upside down.

46:54

You've Eaten comes in

46:56

with twelve thousand, one hundred and ninety-one

46:56

sats and said: okay, I've been waiting for

46:58

my time to do a postcode boost. We

47:00

also have letters here; the first

47:02

two numbers are a letter and the

47:04

second two numbers are a letter.

47:10

So the first half of my postcode

47:10

is there as the boost amount. It sounds like

47:12

a simple thing, but it took

47:15

considerable effort to work out. Understandable. Watched and

47:17

really enjoyed the 32-bit

47:19

challenge. Keep up

47:21

the great work. Well, that

47:23

has made me laugh. So, Wes,

47:27

I see you're doing the math over there. Wow.

47:29

Lucky to have you. OK, is this right?

47:31

Check my math, boys. All right, if

47:33

it's like he said, twelve is a letter,

47:35

nineteen is a letter, and then we just

47:37

got this one here, the one, from

47:39

twelve, nineteen, one.

47:41

Ah, OK. Well, twelve could be L, and

47:43

then that would make nineteen S. Yes, so

47:45

we'd have LS1, which seems

47:48

to be a postal code in Leeds, England. But then,

47:50

I don't know, is that the full code? That could

47:52

be wrong. Remember,

47:55

he did say it's half the postcode. Southern

47:58

Fried Sassafras comes in with one, two,

48:00

three, four, five. Spaceballs boost! One, two, three,

48:02

four, five. Yes, that's amazing. I've got the

48:04

same combination on my luggage! Using the Podcast

48:06

Index, and they write: one negative of catching

48:09

up while driving is that I almost forget

48:11

the topic I wanted to boost in on.

48:13

Barring that, keep up the good work. And.

48:15

May the Schwartz be with you. Yeah, I

48:18

second that. Later, alligator. Our

48:20

dear friend says he might have a trick

48:22

for us for Christmas. Oh, I do

48:24

love this, and I don't know, I

48:26

haven't actually adopted it yet. But when

48:28

Cessna Mike is flying, and I imagine other

48:30

pilots as well, he gets a notepad, and

48:33

it's got a strap, and it's

48:35

just wrapped around his thigh. It's

48:37

right there on his leg while he's flying,

48:39

ready to go, and he can just note stuff down.

48:41

I imagine the pen attaches in some clever way.

48:43

I didn't really analyze his gear up close, but that's

48:45

how he wears it. And

48:48

I'm thinking Southern Fried Sassafras and myself would both

48:50

benefit from such a thing, indeed. If we remember,

48:52

we'll look it up later. I might

48:54

get one. But thank you for the

48:56

boost. Maybe. A to thanks for What

48:59

else would disrupt the other side? Where.

49:01

I'm a voice gotta have my gun.

49:04

He. announced his maybe an Odi a.

49:08

Severe in a gun case? Oh

49:10

right. Yeah, so care costs gasket we

49:12

call. Otter Brain boosts in

49:15

with 2,000 sats. First time booster.

49:17

Thanks for the show and the community.

49:20

My first Linux install was a dual

49:22

boot of Yellow Dog Linux with

49:24

Mac OS Nine, ages ago, on my

49:26

wife's Apple PowerPC graphic design

49:28

production machine. He says she graciously let

49:30

me play on the work hardware.

49:32

Wow, I sympathize with that.

49:34

Otter Brain, well done.

49:37

Well, that does feel a little risky,

49:39

too. And you know what I'd

49:41

like to think, because it makes

49:43

me feel good, is that users

49:45

like you and I that were

49:47

pioneering Yellow Dog Linux

49:49

on these machines, we validated

49:52

Yum, which was then

49:54

brought to Fedora and then later became,

49:56

you know, DNF, or inspired DNF.

49:58

So, like, thanks to basically Otter Brain

50:00

and me, we have DNF,

50:02

is what I'm saying. Yeah, yeah,

50:05

right, that checks out.

50:08

Checks out. The next booster comes

50:10

in with twelve thousand, three hundred

50:12

and forty-five sats. The Spaceballs combination:

50:14

one, two, three, four, five. Stupidest

50:18

combination I've ever heard in my life! And

50:20

ah, this is a great suggestion. They write: I'd

50:23

say for the scale trip, sounds like maybe Lady

50:25

Jupes. Accelerating, she

50:28

has the quietest engine for

50:30

something that big. And if

50:32

you get down, if you get down to like the

50:34

pipe, you can hear the lug, lug, lug, lug, lug,

50:36

like there's like an old-school lug, lug,

50:38

lug, lug, lug, lug, lug, lug,

50:40

just sitting there, purring.

50:42

They have it muffled, and rightfully

50:44

so. It's very quiet, and

50:46

so you can, like, crawl through the

50:48

campground and not make a noise, unlike

50:50

the diesel pushers, which

50:53

wake up the whole neighborhood. But I would love to

50:55

hear the sound of ours roaring, like when

50:57

that V10 gets going, and that thing's

50:59

designed to rev, so it gets up to like five

51:01

thousand, six thousand RPM, you know, just to

51:03

get on the freeway, and the whole thing roars.

51:06

Sounds like a beast. It must.

51:08

Now, Swat came in with a row of ducks.

51:12

He writes: since you like Winamp...

51:14

Wait, Chris, you like Winamp, right? I do,

51:17

I feel like, I like when I

51:19

feel like it's, you know, taking care of

51:21

certain things.

51:23

Love

51:25

it, you get it.

51:27

Ah, yes, I do, you know,

51:29

I've always dug Winamp, which

51:31

also brings back fond memories for me as

51:34

well. He says: I wanted to share a find during

51:36

the holidays: someone made a web

51:38

reimplementation of Winamp, so we can enjoy

51:40

the original version on all OSes, preferably

51:42

in Firefox, of course, although

51:44

other browsers work too.

51:47

That's pretty nice, and it's up

51:49

on GitHub if you want to check it

51:51

out, and it even has like a windowed mode

51:53

and a little desktop environment. It kind of looks like

51:56

old-school Windows. Very impressed.

51:59

I love the name, too: WebAmp. WebAmp, right. Yeah, that's

52:01

probably, you probably could not actually use the real

52:03

name. They've also got some themes right here. There's

52:05

a Winamp skin museum. There are skins right on the

52:07

desktop that you can go with. This

52:10

is nice. Yeah, this is well done. I

52:13

would say indeed that is taking care of

52:15

the llama's behind. DexBot

52:18

comes in with some Spaceballs.

52:21

One, two, three, four, five. Yes.

52:24

That's amazing. I've got the same combination in

52:26

my luggage. They're using Podverse, the GPL cross-platform

52:28

Podcasting 2.0 app. And they

52:30

say, I had to change my luggage because of

52:33

that sound bite. I know, right?

52:35

It's embarrassing. It happens. Just go

52:37

the opposite way. Yeah. You'll

52:39

never, no, never guess it. For real

52:41

though, thanks for the membership discount code. I've been

52:44

occasional booster, but I wanted to get the ad-free

52:46

feeds and occasionally the wacky unedited live feed. Okay,

52:48

the wacky feed. That's the

52:50

new branding. That's the wacky one this week. It

52:52

is. Thank you again, DexBot. And

52:55

I hope you enjoy the live feed this

52:57

episode. I definitely recommend checking it out. Thank

52:59

you, everybody who boosts in. We

53:02

had 16 boosters this

53:05

week. Now, across the standard

53:07

boost system, we still brought in a

53:09

remarkable 1.1 million sats.

53:12

Thank you, everybody. That's great by any standard. It

53:14

is absolutely just a huge, huge remarkable thing that

53:16

we're actually going to get to this milestone. I

53:18

just didn't know if we could do it and

53:20

it is really great to see it. So thank

53:22

you, everybody who does boost in or has been

53:24

streaming in. I see someone in there

53:26

streaming right now. Really? That's great.

53:28

Somebody's listening. I don't even know we're talking about

53:30

them, dude. And they're streaming sats. That's so

53:32

cool. So thank you, everybody who supports us either

53:34

with a boost or by membership or by streaming

53:36

those sats. Now, we did get that

53:39

on-chain boost. So before

53:41

we get there, I just want to say thank you to all

53:43

the traditional boosters. You're doing a good job. And then thanks

53:46

to I'm Happy, we have that 3 million

53:49

sat boost that put us to

53:51

a grand total this week of

53:54

4,186,259 sats raised. That

54:01

is really remarkable and it does indeed. Winner.

54:04

It really whips the llama's

54:06

ass. Thank you everyone. I

54:09

think we're, you know, at this point I think it's a lock. I

54:11

would love to get to our milestone just

54:13

in case the price slides a bit, just

54:16

in case gas goes up a bit. I would love

54:19

to make it to that milestone, but I think at

54:21

this point, one way or another, like, we're going to

54:23

make it happen. Even if we're personally thrown in or

54:25

whatever, like, we're going to get there at that point.

54:27

I would love to complete that milestone so that way

54:29

we have that insurance policy, we have that safety, and

54:31

we're not taking that risk. But,

54:34

man, this has been really great to see. And we

54:36

are coming up with another solution for Texas Linux Fest.

54:38

We don't want to come and ask for every single

54:40

thing. We're trying to be really respectful about that. And

54:43

this is a value for value production. It's a fantastic

54:45

way. Not only are you supporting the show, but you're

54:47

kind of helping us get to

54:49

that next big content thing that we can then turn around

54:51

and make a show out of for you. It's

54:54

kind of a great investment in future entertainment

54:56

and, hopefully, information, if you will. Now,

54:59

we have all kinds of links in the

55:01

show notes, so go over to linuxunplugged.com. No

55:03

special pick for you this week, but we

55:05

do have a question. Either write this in

55:07

on the contact form or please boost in. Do

55:10

you agree? Is it time to replace ext4?

55:12

Practically everywhere. Not

55:14

necessarily everywhere in every use case.

55:17

But is it time for ext4 to

55:20

retire and for something else to come in its

55:22

place? Or, maybe

55:24

I'm wrong. I have a sense I could be way off on this

55:27

just because I don't hear a lot of people

55:29

talking about this issue. So I'd love to

55:31

know your feedback either via Boost or via the contact

55:33

page. Let us know, and that will help shape

55:35

our future. Are you ready to move on? No,

55:37

you know, we are live. We do this

55:39

here show on Sundays at noon Pacific, 3

55:41

p.m. Eastern. See you next week. Same

55:44

bad time, same bad station. And

55:47

we now have the new Jupiter Station feed. Go

55:49

search in a Podcasting 2.0 app. We have a lit feed, Jupiter Station.

55:52

And you can listen live in

55:54

your podcast app. You don't have to switch apps. I've

55:56

always thought that was so weird. Right? Why can't we

55:58

just be live right there where you are? Looking at

56:00

the podcast media catch-up partners,

56:03

and if you have some recording

56:05

conflict, from the live stream into the feed

56:07

when we're not live, the system puts stuff

56:09

in the Jupiter Station feed, brand new. Just

56:11

want to say, I think they're going

56:13

to talk about the website. Appreciate you very much for

56:15

listening, for sharing the show, and of course for

56:18

all of these. Thank you to our Mumble

56:20

room for helping us as well. That is great, and I

56:22

like a lot of it here.

56:24

Right back here.
