Episode Transcript
0:02
Two and a half admins episode 200. I'm Joe. I'm
0:05
Jim. And I'm Alan. We did it
0:07
boys. We got to 200. And
0:09
to celebrate, we're going to do a free consulting
0:12
special. So it'll be all of your questions all
0:14
the time. A reminder that you
0:16
can send in your questions for future episodes
0:18
to show at 2.5admins.com. And also a thank
0:20
you to everyone who supports us with PayPal
0:22
and Patreon. We really do appreciate that. You
0:24
can learn more at 2.5admins.com slash
0:27
support, as well as getting
0:29
advert free RSS feeds and some
0:32
episodes early. You also get to
0:34
skip the queue for free consulting, which
0:36
is what a bunch of people did this time along
0:38
with a bunch of people who aren't patrons. But let's
0:40
start with all the Patreon questions. So
0:43
Scott writes, I'm curious if Jim and
0:45
Alan have noticed that hard drive models seem
0:47
to be rotated out of production much more
0:49
quickly as of late. It
0:51
feels like every time I need to buy a new hard
0:53
drive, I can never get the capacity that I'm looking for, even
0:56
if I'm not in need of an upgrade. For
0:58
example, several months ago, I bought a couple
1:00
of refurbished 12 terabyte IronWolves
1:03
for my home backup server. Just
1:05
this morning, one of them faulted. So I went
1:07
online to see about getting a replacement. I
1:09
looked on all my usual websites and
1:11
couldn't find any 12 terabyte models for
1:13
sale, refurbished or otherwise, except for a
1:15
few third party sellers on Amazon at
1:17
a ridiculous markup. I wound up
1:20
just picking up an 18 terabyte disk to fill
1:22
the gap for now, since I don't want
1:24
to wait for the warranty replacement to show up. But
1:26
I feel like this wasn't an issue a few years ago.
1:29
Is this because technologies like HAMR are becoming
1:31
more mature, and we're seeing the
1:33
density shoot up like crazy? I guess
1:35
it makes sense that they wouldn't want to keep an
1:37
excessive number of product lines going. Nope, that's not the
1:39
reason. Actually, hard drive models, as far
1:42
as I can tell, aren't rotating through any
1:44
more frequently than they ever have. The
1:46
big difference is normal people
1:48
are starting to not care about increased
1:50
capacity per drive on rust disks because
1:52
they've gotten so huge. Ten
1:55
years ago, when you had a transition from
1:57
4 terabytes to 8 terabytes, almost anybody who
1:59
saw eight terabyte drives for almost the
2:01
same price as four terabyte drives was immediately
2:03
going to go for the eight terabyte because
2:06
they were probably getting crowded. But
2:08
we finally got to the point now, there
2:10
are lots and lots of people who don't
2:12
actually need more than about 12 terabytes of
2:14
storage. And even amongst folks who
2:16
are building RAID arrays, you know, the folks
2:18
who built arrays out of, you know,
2:21
say, eight to ten 12 terabyte drives,
2:23
very frequently don't actually need more storage than
2:25
that. So I don't think it's so much
2:27
that the models are updating more quickly as
2:30
that you care a lot less than you
2:32
used to. His complaint mostly is
2:34
that he can't find a 12 terabyte drive for sale
2:36
anymore. Only the bigger sizes. And
2:38
I think part of that is that yes,
2:41
there's some rotation happening faster. Although part of that
2:43
is also just the vendors like the stores don't
2:45
want to be caught with a whole bunch of
2:47
hard drives that nobody wants or that they're going
2:50
to have to sell for less than they paid
2:52
for them because, you know, the price
2:54
per terabyte is coming down. So there's a chunk
2:56
of that. And to some degree
2:58
it does feel that way to me. I've tried to
3:00
quickly find a graph of hard drive size over
3:02
time, but most of them are over such
3:04
a long time span, you can't really tell. But
3:07
I do feel like we settled in this area
3:09
where like the three and four terabyte drives were
3:11
pretty popular for a number of years and I
3:13
bought a whole bunch of them and then suddenly
3:16
couldn't anymore. But at the
3:18
same time, I also realized that I have
3:20
a bunch of 12 terabyte drives that are
3:22
failing now. And I looked at it
3:24
and they're more than five years old. So that's
3:26
out of warranty. A 12 terabyte drive is not as
3:29
new of a thing as it is in my
3:31
head. And so it makes
3:33
sense that hard drive manufacturers aren't still
3:36
producing 12 terabyte models because
3:38
that was more than five years ago.
3:40
And every bit of manufacturing capacity
3:42
they have is making 18, 20 and 24 terabyte drives
3:44
now, not the 12 terabyte drives. Seven
3:49
years. Seagate introduced 12 terabyte drives to
3:51
consumers in the mass market seven years
3:53
ago in 2017. Yeah. So
3:56
it is not that odd for those
3:58
models to stop being available now,
4:00
seven years later. You saw
4:02
similar transition times between four terabyte and
4:04
eight terabyte, two terabyte and four terabyte.
4:07
It's just how it goes. Yeah. I think
4:09
part of that is my perception was once we
4:11
got to three terabyte, that was big enough that
4:14
I bought all of them when
4:16
they first came out, but I just
4:18
kept using them until they weren't available
4:20
anymore basically. Because to Jim's point, once
4:22
they got much bigger than that, like
4:24
a decent sized array, you didn't generally
4:26
need more than that. I had an anecdote of
4:28
this literally come up today where a friend asked
4:30
me, I want to build a NAS, I need
4:34
a single digit number of terabytes. I'm
4:36
like, you can't buy a
4:38
hard drive in a single digit number of terabytes
4:40
anymore. I guess you're getting a
4:42
mirror of the smallest hard drives you can buy. Or
4:45
go solid state. Yeah. If you only need a
4:47
small number, yeah, we could look at that. Probably for
4:49
the same budget, you would get a lot less
4:52
space. But if it's a couple
4:54
of terabytes of NVMe versus tens of
4:56
terabytes of hard drive, if you only
4:58
need a couple of terabytes, then your
5:01
money's better spent on the higher performance
5:03
drives. But yeah, I've been in
5:05
the same position as Scott. And yeah, I
5:07
replaced a bunch of six,
5:09
eight and 12 terabyte drives with 16
5:11
and 18 terabyte drives that
5:13
I got from Server Part Deals. And
5:16
when it's time to replace drives, I go
5:18
there and find the lowest cost per terabyte
5:20
in the right interface, SAS versus SATA, depending on what
5:22
I'm replacing, and just get a
5:24
couple of those. Yeah, that's the other thing that's
5:26
probably worth addressing here is if you've got a
5:28
RAID array and a disk fails and you need
5:31
to buy a new one and replace it, or
5:33
you're looking to expand it, whatever. Generally
5:36
speaking, you're not actually looking specifically for
5:38
the same capacity as all the other
5:40
drives in the array, you're looking for
5:42
that capacity or larger. And as Alan
5:45
said, you're looking for a cost per
5:47
terabyte. Because ideally, if you're
5:49
keeping this array alive for multiple generations
5:51
of drives, you want to be able
5:54
to auto expand it, you know, once
5:56
the smallest drive remaining in the
5:58
array gets replaced with a larger size, then
6:01
you can increase the capacity. But
6:03
until then, yeah, you don't specifically
6:05
want to match an 8
6:07
terabyte drive to an array built
6:09
out of 8 terabyte drives, not
6:12
when, you know, 12 terabyte drives
6:14
and 14 and 18 are literally
6:16
the same cost and newer and
6:18
faster. You know, that's what you do.
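For reference, the behavior being described here is ZFS's autoexpand property; a minimal sketch, with hypothetical pool and disk names:

    # Let the pool grow once every disk in a vdev has been
    # replaced with a larger one:
    zpool set autoexpand=on tank
    # Or manually expand a single already-replaced disk in place:
    zpool online -e tank sda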
6:20
do. Harold, who's a
6:22
patron, writes, I know there's a
6:24
lot of fanfare about USB-C, but I've had a
6:26
lot of trouble with it, which
6:29
I never had with previous ports or connectors.
6:31
The Essential Phone was the first device I
6:33
owned with USB-C. I had no problems with
6:35
it. Since then, I've
6:37
had two phones. First was a Moto G
6:40
Power. Two years later, it won't stay connected
6:42
to its USB-C cable to charge. Then I
6:44
got a Pixel 5, which is a flagship
6:46
phone and it is in the same state
6:48
as the Moto G, but at least it
6:50
has wireless charging and KDE Connect, otherwise it
6:52
would be garbage. Obviously
6:55
it is a cheap plug, but why is it
6:57
so cheaply made? Is it more
6:59
prone to wear and tear than other ports or
7:01
is manufacturing going downhill? My ThinkPad,
7:03
which I've had for many more years, is
7:06
fine, but then again, I'm not unplugging it
7:08
nearly as often. I
7:10
can't replicate this person's results,
7:13
unfortunately, so I don't know how
7:15
to answer that. In my family,
7:18
we have had markedly fewer issues
7:20
with failing ports since the transition
7:22
to USB-C. Basically, the
7:24
only one in my extended family
7:26
who has managed to damage USB-C
7:29
ports is my
7:31
wife's little brother who has
7:33
some mental and physical issues
7:35
and he's extremely rough on
7:37
hardware. He destroys everything, which
7:40
includes USB-C. My own kids
7:42
routinely destroyed USB-A ports and
7:44
they haven't yet broken anything
7:47
USB-C. Yeah, micro USB was prone
7:49
to it because people would jam it in the
7:51
wrong way. Yeah, and USB-C being universal helps. The
7:53
one thing I would give the
7:55
Lightning Connector over USB-C is that they made the
7:58
part that would break be beyond the case,
8:00
not the device end and
8:02
USB-C maybe should have inverted
8:04
that as well, but I've not managed to break any
8:06
of my devices, but my use case is a bit
8:10
different and I do worry about like my
8:12
Lenovo X270 that I bought had USB-C charging,
8:15
but also still had
8:17
the rectangular Lenovo charging port, but
8:19
my new T14 that came yesterday
8:21
has USB-C only, which is mostly a pain because
8:23
I don't have that many USB-C chargers and I
8:25
need to get some more. But
8:28
if it were to break that port, that
8:30
would be an issue for me,
8:32
but I've not had that problem with any of
8:34
my devices. Well, the T14 that I got for
8:37
my wife's niece recently, I think it was a
8:39
Gen 3, that has got at least two USB-C
8:41
ports that are capable of charging it. The one
8:43
that says it's for charging, obviously
8:45
will charge it, but the other ones will as well. So
8:47
don't worry too much. I'm actually at
8:50
the point now where I'm a little annoyed
8:52
that everybody is still doing the full on
8:54
wired USB-C rather than doing
8:56
MagSafe style charging. I know in an
8:58
earlier episode, Alan and I both came
9:01
across as very skeptical of the inexpensive
9:03
plug-in USB-C devices that allow you
9:05
to have a magnetically coupled connection.
9:08
But since several of our listeners said, no,
9:10
those things are great. You guys are idiots.
9:13
I bought some to test and yeah, I can confirm those
9:15
things are freaking great. I use them all over the place.
9:17
And as a matter of fact, that's
9:19
what's currently on my little
9:21
brother-in-law's laptop, you know, the one who keeps
9:24
destroying everything, he destroyed the charging port and
9:26
half of the other hardware on his latest
9:28
laptop a couple of months ago. And I
9:31
made certain that the next laptop that I
9:33
got for him had USB-C power delivery. And
9:36
I bought, you know, the little right
9:38
angle adapters to plug in. So everything
9:40
is just magnetic connected. And
9:43
my little brother-in-law kind of hates
9:45
it because, you know, he unplugs it
9:47
all the time. But like, that's a good
9:49
thing. He's actually unplugging it as opposed to
9:52
destroying his freaking laptop when he bangs
9:54
that stuff around. So it's great. I
9:57
should mention also when I say that
9:59
we're talking about magnetic coupled and USB
10:02
power delivery, his
10:04
laptop, it draws a lot
10:06
of current across that magnetic coupling
10:08
and it works fine.
10:10
Like there's no issue even with, you
10:12
know, high voltage, high current USB power
10:14
delivery over these magnetic coupled connections. So
10:16
I'm to the point now of thinking
10:18
like those should actually be standard. And
10:21
if you want something to be more resistant
10:23
to the cable, you know, coming unplugged, well,
10:25
then maybe you should have like a retainer
10:28
for that magnetically coupled connection for
10:30
that device, rather than just
10:33
relying on, you know, the mechanical crankiness
10:35
of pins to keep it in. Yeah.
10:39
To Harold's point, I wonder if
10:41
maybe he has bad USB-C cables rather
10:43
than the device being the problem, I
10:45
don't know. That is entirely possible. Okay,
10:48
this episode is sponsored by
10:50
Tailscale. Go to tailscale.com/two five
10:53
A. Tailscale is
10:55
an intuitive programmable way to manage a
10:57
private network. It's zero trust
10:59
network access that every organization can use.
11:02
And with Tailscale's ACL policies, you
11:04
can securely control access to devices
11:07
and services with next gen network
11:09
access controls. Loads of
11:11
the Late Night Linux family hosts
11:13
use Tailscale for all sorts, including
11:15
controlling 3D printers, remoting into their
11:17
relatives' systems for support, controlling
11:19
Home Assistant, and sending
11:21
ZFS snapshots to offsite backup locations.
11:24
I got it set up in minutes and you can too.
11:27
So support the show and check out Tailscale
11:29
for yourself. Go to tailscale.com/two
11:32
five A and try out
11:34
Tailscale for free for
11:36
up to a hundred devices and three users
11:38
with no credit card required. That's
11:41
tailscale.com/two five A.
11:45
Toby, who's a patron, writes, I'm
11:47
now running ZFS on my VPS. It
11:49
works a treat and I enabled compression
11:51
and encryption. However, one thing
11:53
that took me by surprise is how slow
11:55
and CPU intensive it is to do a
11:58
du -hsc on
12:00
a folder with many subfolders and files. As
12:03
a comparison, I tested this by downloading
12:05
a Nextcloud release zip and then
12:07
unzipped it and then ran
12:10
time du -hsc on various partitions, on
12:12
the same machine using the
12:14
same drive, with the following
12:16
results: LVM and XFS, 21 seconds.
12:20
ZFS, no compression, no encryption, 2
12:22
minutes 3 seconds. ZFS with
12:25
compression and encryption, 7 minutes
12:27
7 seconds. As you
12:29
can see, even without encryption and compression,
12:31
ZFS is way slower. So how does
12:33
one get directory usage on ZFS in
12:35
a reasonable time? Okay,
12:37
so the first thing here is you
12:39
said Nextcloud, and since you said Nextcloud,
12:42
that means you're using ACLs. You
12:45
didn't specify an operating system directly,
12:47
but you said LVM, so that
12:49
means Linux. That leaves
12:51
us with something that you almost certainly didn't
12:53
do, which is ZFS set xattr
12:56
equals sa on
12:58
the directory containing your Nextcloud
13:00
files before you unzip the files
13:02
into it. If you
13:05
don't set xattr equals sa, then ZFS
13:07
has to store the metadata for each
13:09
file in a completely separate block. Whereas
13:12
if you set xattr equals sa, the
13:15
metadata actually gets stored in the
13:17
leading block on smaller files with
13:19
a tremendous increase in performance when
13:22
you're just stat-ing every file out of
13:24
a huge list, which is exactly what
13:26
you're doing here. So the first
13:28
thing is going to be make sure you set xattr
13:31
equals sa anywhere that you're doing your
13:33
Nextcloud stuff. Do that before you
13:35
actually set Nextcloud up in that data
13:37
set and then try your tests again.
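A minimal sketch of that, assuming Linux OpenZFS and a hypothetical dataset name; the property only affects files written after it is set, hence setting it before unzipping:

    # Store extended attributes (which Nextcloud's ACLs use) as
    # system attributes in the dnode instead of in separate blocks:
    zfs set xattr=sa tank/nextcloud
    # Then repeat the timing test:
    time du -hsc /tank/nextcloud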
13:39
I would also say
13:41
I would expect your results with
13:43
encryption to remain considerably slower for
13:45
this workload. Compression however, on its
13:48
own, should not affect it. Also,
13:50
the results seem oddly slow for even
13:52
LVM there. The number of
13:55
files in the Nextcloud source is
13:57
probably not tens of thousands
13:59
or anything. And especially if
14:01
you just unzipped it, in ZFS those
14:03
files should be in the cache, at
14:05
least the metadata for them anyway, mostly.
14:07
So I would expect a result of
14:09
two minutes to run du to
14:12
be kind of a little out
14:14
there. Although I guess the VPS probably has
14:16
very little amount of RAM. There we go.
14:18
Yeah, you said this is on a VPS,
14:20
not on a host in a data center
14:22
that Toby is setting VMs up on. Sure,
14:24
but like if it has a gig of
14:26
RAM, it should have some cache for the
14:28
size of this tarball or zip file. Even
14:30
if it has two gigs of RAM, you're
14:32
not going to be left with a whole
14:34
lot of workable cache. Have you
14:36
ever tried to set ZFS up on Linode
14:39
or DigitalOcean? It's painful. Yeah, I have it
14:41
on all my DigitalOcean boxes and they only
14:43
have a gig of RAM and they use
14:45
at least like half of it for cache.
14:47
All I can say is I can absolutely
14:50
confirm the result that he's seeing. Like,
14:52
there is a definite performance penalty going
14:54
from, you know, EXT4 or
14:57
XFS to especially an untuned
14:59
ZFS on something as itty
15:01
bitty as the typical VPS.
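As an aside, the usual first tuning step on a RAM-starved VPS is capping the ARC; a sketch, assuming Linux OpenZFS (the 512 MiB value is just an example):

    # Cap the ZFS ARC at 512 MiB, persistent across reboots:
    echo "options zfs zfs_arc_max=536870912" > /etc/modprobe.d/zfs.conf
    # Or apply immediately at runtime:
    echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max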
15:04
Yeah, in general, ZFS is always going to be slower because it's doing
15:06
a lot more work. XFS isn't
15:08
generating a checksum to see if the data
15:10
is corrupt or not before returning it to
15:12
you, even when you're just looking at the
15:14
metadata to run du. And
15:16
it's obviously not doing compression and encryption and so
15:18
on. But as Jim said, especially with
15:20
the xattrs thing, that's causing it to take
15:22
twice as many IOPS and on a slow VPS,
15:25
that's a lot of extra work. To
15:27
your other question about getting the directory size in a
15:29
reasonable amount of time, du is not
15:31
bad. If it's a separate
15:34
data set, then zfs list will just
15:36
know these numbers ahead of time and
15:38
you're fine. There's also another mechanism in
15:40
ZFS where for
15:42
each data set, you can tell how
15:44
much space was used by each username.
15:46
So the zfs userspace command on
15:48
a data set can instantly tell you
15:51
the Jim user is using eight gigabytes and
15:53
the Alan user is using four gigabytes in
15:55
this data set because it tracks
15:57
it as you write the data and
15:59
keeps that information up to date so
16:01
that you don't have to go and
16:03
calculate it at runtime with something like
16:05
du. Today I learned, what was the
16:07
command again, Alan? zfs, space,
16:10
userspace, all one word, space, the
16:12
dataset. And it will say the
16:15
POSIX user Alan is using 1.5 terabytes and
16:17
the POSIX user Jim is using 350
16:20
megabytes. And it can also tell you how
16:22
many files that is. Just randomly
16:24
looking at my podcast directory, I have 5.5
16:26
thousand files and
16:29
another user has 22 files and the third
16:31
user has 57 files. And that
16:33
gives you that data instantly on any dataset.
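For reference, a sketch of the commands Alan is describing, with a hypothetical dataset name:

    # Per-dataset usage, tracked continuously, so no tree walk needed:
    zfs list -o name,used,avail tank/podcasts
    # Per-user space and object (roughly, file) counts:
    zfs userspace -o type,name,used,objused tank/podcasts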
16:35
Well we've got a huge list of questions
16:37
so we better move on. But if you
16:39
want to learn more about ZFS stuff then
16:41
I recommend joining Jim's Practical ZFS Discourse Forum
16:44
where you can ask questions and discuss it
16:46
to your heart's content. Brian
16:48
who's a patron writes, I have
16:50
a small amount of data to back up by today's
16:52
standards, one and a half terabytes, and
16:55
I currently do a full backup of all
16:57
my data to a series of external USB
16:59
hard drives that I rotate on a scheduled
17:01
basis. Some of these backup
17:03
drives are stored off-site. Jim
17:05
and Alan have mentioned several times not
17:08
to use external USB drives since the
17:10
drives are not high quality. What
17:12
would you recommend that I use instead for cold
17:14
storage backups? Purchase a 2.5 inch
17:17
server hard drive and put it in a USB caddy?
17:19
Ideally, yes, but I do not think
17:21
that you should just rush
17:24
out and throw away everything you have to replace it
17:26
with that. Will you get better
17:28
results on average with NAS
17:31
or server grade drives in a
17:33
caddy than with the USB portable
17:35
hard drives that the manufacturers sell?
17:37
Yes, but you've already said that you have
17:40
a whole series of them and you're rotating
17:42
them regularly. If you're accurately describing your backup
17:44
routine and you actually have the discipline to
17:46
keep it up, you're basically
17:48
good to go even with garbage
17:50
drives. Feel good about what
17:53
you have and what you're doing. As you
17:55
replace them, I would advise Exos or IronWolf
17:57
drives in a generic caddy. Yeah, although you mentioned
17:59
two and a half inch, probably because you're
18:01
wanting to not need the separate power that
18:03
three and a half inch drives normally do.
18:06
In which case, I'd suggest a
18:08
two terabyte SATA SSD will give
18:10
you what you're looking for and
18:12
not be overly expensive. That's probably
18:14
only $100 or so for
18:16
the SSD and that still gives you the
18:19
capacity you need. But yeah, like Jim said,
18:21
as long as you're using multiple of them,
18:23
external USB drive isn't bad. We were mostly
18:25
talking about people trying to build the main
18:27
array out of USB drives and just don't
18:29
do that. And also don't
18:31
depend on one external USB drive as your
18:33
backup because yes, there's a good chance that
18:35
it'll go bad. But if you have multiple, all
18:37
of them going bad at once seems less likely.
18:40
And finally, don't depend on even one USB external
18:42
drive hooked up 24/7/365. That is not
18:44
what that connection
18:46
type is for. You will have bad
18:49
results. Yeah. What I've learned from 200
18:51
episodes of this show is that you
18:53
should treat every single hard drive as
18:55
if it is about to die. If
18:58
you do that, then you'll be fine because
19:00
you'll have enough copies of your data in enough different
19:02
places that when they do
19:04
die, you'll be fine. That is correct.
19:06
Yeah. It's like the opening line of
19:08
the ZFS book I wrote was literally
19:10
like, your hard drives are lying to
19:13
you. They're just going to die. ZFS
19:15
at least warns you and helps you stop them.
19:17
Well, all we are is dust in the wind,
19:19
dude, and hard drives are no different. Yeah. When
19:22
you think about how a hard drive works where
19:24
there's this head floating less than
19:26
the width of a hair over
19:28
a platter spinning at 7,200 RPM,
19:30
there's a good chance something's going
19:33
to go wrong. You did mention
19:35
USB SSDs. It's worth mentioning,
19:37
don't get those little Samsung ones that
19:40
are an NVMe drive in
19:42
an enclosure that you can't open
19:44
easily because they are garbage.
19:46
Get yourself a SATA one and an enclosure
19:48
for it, a caddy for it. Mostly just
19:51
because the SATA ones don't get as hot
19:53
and don't need separate cooling and external
19:56
enclosures never had good cooling. Yeah,
19:58
I would agree. The SATA form factor is
20:01
far safer for this than NVMe. NVMe
20:03
drives tend to eat themselves a lot,
20:05
kind of no matter what. I mean,
20:07
even inside a chassis, an
20:10
NVMe drive is more likely
20:12
to have overheating type issues than a SATA
20:14
drive. But when you're talking about
20:16
putting it in a cheap enclosure, yeah,
20:18
you definitely don't want NVMe. We should
20:20
be more specific that when Jim says
20:22
NVMe there, he's talking about M.2. There
20:25
are NVMe drives that look like a two
20:27
and a half inch SATA drive, that's the
20:29
U.2 and U.3 format, and they have
20:32
lots of metal and they're not the thing that looks like a
20:34
stick of gum that gets really hot. Okay,
20:37
this episode is sponsored by
20:39
1Password. In a perfect world,
20:41
end users would only work on managed
20:44
devices with IT approved apps. But
20:46
every day, employees use personal devices
20:48
and unapproved apps that aren't protected
20:50
by MDM, IAM, or
20:53
any other security tool. There's
20:55
a giant gap between the security tools we have
20:57
and the way we actually work.
20:59
1Password calls it the access trust gap, and
21:02
they've also created the first ever solution to fill
21:04
it. 1Password Extended Access
21:07
Management secures every sign in for
21:09
every app on every device. It
21:11
includes the password manager you know and love, and
21:14
the device trust solution you've probably heard of
21:16
on this podcast back when it was called
21:18
Kolide. 1Password Extended Access Management
21:21
cares about user experience and privacy,
21:23
which means it can go places
21:25
other tools can't, like personal and
21:27
contractor devices. It ensures that
21:29
every device is known and healthy, and
21:31
every login is protected. So
21:33
stop trying to ban BYOD or Shadow IT,
21:36
and start protecting them with
21:38
1Password Extended Access Management. Support
21:41
the show and check it
21:43
out at onepassword.com/two five A.
21:46
Tony, who's a patron, writes, how did you
21:49
get to your level of knowledge? Is
21:51
it only experience or are there learning paths
21:53
that you would recommend? How
21:55
would you recommend someone like me getting to
21:58
your knowledge level? Well... First
22:00
you find something that you want to do and then
22:02
you do it. You know, it's how I learned. I'm
22:04
pretty sure that's the majority of how Alan learned. And
22:07
for that matter, although it's a different skill set,
22:09
I'm pretty sure that's how Joe learned too. Ultimately,
22:12
you know, you just reach for something that's a little
22:14
bit out of your grasp. You say, Hey, I know
22:16
what this thing is. I don't know how to do
22:18
this thing yet, but I have a
22:20
general big picture idea of what it does and
22:23
how it would probably have to work for the
22:25
most part. And then you just
22:27
buckle down implementing it. And you know, you figure
22:29
it out as you go along, but the
22:31
thing that really keeps you invested and keeps
22:33
you learning and keeps you retaining that knowledge
22:36
is doing things that you wanted to
22:38
do in the first place. If
22:41
you can't find a goal that you want to
22:43
hit, then you may have trouble acquiring the knowledge.
22:45
Yeah, that was definitely it for me. I got
22:48
into Unix because I had
22:50
been into IRC for a little while and wanted to run
22:52
my own IRC server and I
22:54
eventually learned that, you know, I wanted a Unix
22:56
machine to run it on. And
22:58
so then I had to learn that and
23:01
how to download a tarball and extract it and
23:03
compile software and run it and edit the config
23:05
files and all that from there,
23:07
and then I decided I don't like
23:09
all these hosting providers. They're too janky. I'm going to
23:11
be my own hosting provider. And I learned how to
23:14
do all that. And then when you're renting out access
23:16
to your machine and people are trying to
23:18
break it, you learn how to fix it and be smarter than them.
23:21
Another big thing is take
23:23
copious notes, document what you're
23:25
doing. Not just half-assed,
23:27
like take it seriously. My
23:29
favorite process when I'm learning something new is
23:31
I will first figure out how to do
23:33
it and I will try to take
23:36
notes as I go along. And then as soon as I've got
23:38
it working, I throw the whole damn thing away. And I try
23:40
to do it again, following my notes. I
23:42
will invariably find something that I didn't document
23:44
well enough in my notes. And if
23:47
I did a really good job the first time
23:49
around, I may know what it is that I
23:51
didn't write immediately and be able to just fix
23:53
it that way, or I may
23:55
have to actually figure out the things that
23:57
I failed to document properly. But either way,
24:00
if I had to think at any point during
24:02
this process when I'm trying to follow my own
24:04
notes that I've already written, well, once
24:07
I get done and get it working, I throw
24:09
it away again and start over. Until I can
24:11
get to the point where I can literally just
24:13
follow my own notes blindly without really having to
24:16
think about it and everything works at the end,
24:18
I'm not done yet. And what
24:20
that does for you is, in addition to giving
24:22
you awesome documentation to follow later on when you've
24:24
forgotten how to do this thing because it's been
24:26
two or three years, but now you need to
24:28
do it again and you'd really rather not have
24:30
to spend another 20 hours figuring it out. In
24:33
addition to that, you'll learn so much
24:35
more about it because in that process of
24:37
throwing it away and having to do it
24:40
again until you can literally just follow your
24:42
notes, you're not only producing
24:44
that structure on the page, you're
24:46
also creating that structure in your
24:48
head. You understand what it is
24:51
that you did so much better
24:53
once you've actually gotten through the
24:55
full process top to bottom in
24:58
a coherent, logical, easy to follow
25:00
format. Steven, who's a
25:02
patron, writes, I have several
25:04
Sabrent external hard drive enclosures, but I
25:06
cannot always swap drives between them. I
25:09
have narrowed it down to two types
25:11
where all drives formatted with one enclosure
25:13
work in this set of enclosures, and
25:16
all the ones formatted with the second work with
25:18
the second. I also remember a
25:20
previous episode where Jim gave a reason for this,
25:22
but I cannot remember what or why this can
25:24
occur. Do you guys know why
25:27
this may be the case? Is there
25:29
a way to detect or correct for using
25:31
drives formatted with the other type of enclosure?
25:33
I think I know the answer to this. It's
25:36
something to do with certain enclosures reserving a
25:38
few blocks at the beginning or something. Usually the end,
25:40
I think, but yes, it can be that and
25:43
just what sector size they use, but
25:45
a lot of times it is that
25:47
little bit of reserve space for their
25:49
own metadata, and whether that
25:51
aligns with the GPT table or not. Although usually
25:53
it really depends on what you mean by formatted
25:55
here, and oftentimes I would expect you
25:57
to be able to make it work. If
26:00
you don't care about the data that's on the drive when you're swapping enclosures,
26:02
you can just put down a new partition table and it'll
26:04
be fine. In most of the
26:07
other cases, operating systems have different ways
26:09
of dealing with a drive that says
26:11
it's this big, but actually the hardware
26:13
reports not being that big. Some
26:15
of them just won't show you the partitions. Some of them will
26:17
tell you there's an error and sometimes it's
26:19
easy to fix and sometimes it's not. The
26:22
Caddies that I use are typically, they're
26:24
not trying to be that clever. The caddy is
26:26
not inserting any metadata of its own. It's literally
26:28
just a USB-to-SATA bridge. How
26:31
you make certain that the caddy that you're buying
26:33
is a simple USB-to-SATA bridge and
26:35
is not doing something more
26:37
advanced and/or foolhardy on its own, I
26:40
don't know the answer to that one, unfortunately. I wish I did. Some
26:43
of it can also just be like the sector
26:45
size. If it always tries to
26:48
show 4K sectors and the drive has an odd
26:50
number of 512e sectors or something, it
26:52
can get weird. Other than
26:55
checking that it's not something with the GPT partition
26:57
table, I don't know what else
26:59
to say. Yeah, I kind of wonder if
27:01
this isn't an issue with some of the
27:03
drives being a different capacity than the others
27:05
or some being 512 versus 4K and
27:08
one of the caddy models being older and
27:10
not understanding either too large a capacity that
27:12
didn't exist when that caddy was made or
27:15
possibly not understanding a separate sector
27:17
size. Because again, ultimately, you're really
27:19
not expecting these things to be
27:21
doing much with the
27:24
drive. We're not talking about a
27:26
RAID enclosure. Now, when you're talking about something
27:28
that sets up a RAID array, well, then
27:30
yeah, you've necessarily got some pretty proprietary metadata
27:32
that's going on. You don't
27:34
know where it is on those drives; it can be the
27:36
beginning, can be the end, can be both, and
27:38
you can have some serious issues with that. But
27:40
a single USB enclosure sold
27:42
without a drive really shouldn't be doing anything but
27:45
bridging USB to SATA. Yeah, and I think
27:47
that's generally the biggest thing is if you
27:49
can buy it without a drive, that's more
27:51
likely to be universal than when it comes
27:53
with a drive. But yeah, the
27:55
other thing to remember is the point of these
27:58
USB things is actually to take the often SATA
28:00
drive and make it look like a SCSI drive,
28:02
which is not the same thing. But
28:04
it shouldn't really need metadata, but they do all
28:06
kinds of weird things. And depends
28:09
how well written the firmware is that is doing
28:11
the conversion from SATA to SCSI. I
28:14
think the big takeaway that I'm getting here is
28:16
maybe avoid Sabrent enclosures. I recognize that brand. I've
28:18
used a couple of those in the past, but
28:20
that's always been kind
28:23
of like a second choice for me. Like
28:25
it works, but I don't know, just
28:27
something doesn't really leave me with the best
28:30
taste in my mouth about that brand. I've
28:33
more commonly used them for like completely
28:35
dumb 2.5 to 3.5 adapters to
28:38
stick into cases. Kind of hard to mess that up.
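If you want to compare what two enclosures actually report for the same disk, something like this on Linux can help (the device name is an example):

    # Capacity plus logical and physical sector sizes as seen
    # through the USB bridge:
    lsblk -o NAME,SIZE,PHY-SEC,LOG-SEC /dev/sdX
    # Identify data via SMART; many USB-SATA bridges need SAT passthrough:
    smartctl -d sat -i /dev/sdX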
28:41
Michael writes, is there a
28:43
program or setting that will send an
28:46
email or text notification when a Windows
28:48
server's CPU temperature exceeds some threshold? I
28:51
know of programs that will display temperatures
28:53
on request, but nothing that supports
28:55
remote monitoring. When you occasionally log
28:58
onto the server administratively, does
29:00
it have to be done manually via a scheduled
29:02
task? I no longer run Windows.
29:04
So truly asking for a friend. Yeah. We
29:06
believe you, Michael. Yeah. The request
29:08
seems a little weird. Like normally what you
29:10
would do for this type of thing is
29:13
have monitoring that is going to go and
29:15
check on all the servers and do it.
29:17
And that mostly doesn't happen that
29:19
much on the server. Maybe there's an agent on the
29:21
server that provides the answers to what is your temperature,
29:23
but you want the stuff that's checking
29:25
it to be on a different machine because
29:28
if the CPU temperature is getting that high
29:30
on the machine, the machine's having trouble and
29:32
you don't trust it to alert you anymore.
29:34
Also, I've tried to remember the last
29:37
time I had a server overheat, like outside
29:39
of a hardware failure, that's not
29:41
likely to be a thing. And I
29:43
also caution about deciding on a
29:46
threshold there because most modern CPUs
29:48
will keep underclocking themselves to stay
29:50
under the temperature at
29:52
which they have a problem. And so
29:54
the server might never get to an
29:56
extreme temperature. It'll just keep getting slower
29:58
until it's not overheated. Yeah, but maybe
30:01
you want to know why your server is so slow.
30:03
Maybe it would be nice to see at a glance,
30:06
oh, it's been running at 90 degrees
30:08
for three hours straight now, and it's slower
30:10
than it was three hours ago. How about
30:12
that? But yeah, to Alan's
30:14
point, I'm the same way. Generally, if I
30:16
want to monitor something like that on a
30:19
Windows box, I will just
30:21
use Nagios and I'll write custom plug-ins for
30:23
it, just like I would on a Linux machine.
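A Nagios-style plugin is just a script that prints one status line and exits with the standard codes (0 OK, 1 warning, 2 critical); a minimal sketch, where get_cpu_temp is a hypothetical stand-in for however you actually fetch the reading from the Windows host:

    #!/bin/sh
    # get_cpu_temp is a placeholder, not a real tool: swap in your
    # agent query, SNMP get, or similar. It should print an integer.
    TEMP=$(get_cpu_temp)
    WARN=85; CRIT=95
    if [ "$TEMP" -ge "$CRIT" ]; then
        echo "CRITICAL - CPU temperature ${TEMP}C"; exit 2
    elif [ "$TEMP" -ge "$WARN" ]; then
        echo "WARNING - CPU temperature ${TEMP}C"; exit 1
    fi
    echo "OK - CPU temperature ${TEMP}C"; exit 0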
30:25
Now, with that said, there's a
30:27
tool that we used extensively at Ars Technica when
30:29
we were doing hardware reviews
30:32
of new systems called HWiNFO
30:34
that gives you extremely low level
30:36
information about everything from temperatures
30:38
to voltage levels to you name it, you
30:40
know, on all kinds of pieces of your
30:43
machine. And although we didn't use
30:45
it for what you're looking for,
30:47
HWiNFO, I just checked, absolutely can
30:49
be set up to do email
30:51
notifications, actually several other types of
30:53
notifications. It's got plug-ins to connect
30:55
to various kinds of external monitoring
30:57
systems. So that's probably going to be
31:00
the answer that you're looking for is look
31:02
for HWiNFO, install it and find out what
31:04
it can do for you. Yeah, but I guess
31:06
my earlier point was more if you
31:08
set the threshold to 95, it might never get there
31:10
because it'll stop at 90. But yes,
31:12
having monitoring and history can be
31:15
really helpful. On our servers,
31:17
we log the temperature of every core constantly and
31:19
make a graph out of it, and
31:21
it can be really interesting to see when the
31:23
server wasn't any busier, but suddenly got a lot
31:25
hotter. Oh, maybe the air conditioning at the data
31:27
center had a problem, or even
31:30
just my house is hotter today than
31:32
it normally is because we're under a
31:34
heat dome here in Canada today, and
31:37
how that affects everything else that's going on.
31:39
But yeah, if you're going to monitor CPU
31:41
temperature, you might as well be monitoring everything.
31:43
And so yeah, a remote monitoring system, maybe
31:45
fed with HWiNFO, sounds like the
31:47
right solution. Right, well, we
31:49
better get out of here then. Thank you, everyone
31:52
who sent in your questions. Clearly, we did not
31:54
get to anywhere near all of them. We
31:56
will try and answer them as soon as we
31:58
possibly can. So stay tuned for
32:01
that. But in the meantime, if
32:03
you want to send in any more questions, you
32:05
can do so: show at 2.5admins.com. In
32:08
particular, any of you folks out there who, you
32:10
know, live day in, day out in the Windows
32:12
world, Alan and I do work with Windows every
32:14
single day, but it's not
32:16
our favorite platform. And some of y'all may know
32:18
some tools that we're not familiar with. But
32:21
for now, you can find me at
32:23
joeress.com/mastodon. You can find me at mercenarysysadmin.com.
32:26
And I'm at Allan Jude. We'll see
32:28
you next week.