Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
What is the status of serverless computing and
0:02
Python in 2024? What
0:04
are some of the new tools and best practices? Well,
0:07
we're lucky to have Tony Sherman, who
0:09
has a lot of practical experience with
0:12
serverless programming on the show. This
0:14
is Talk Python to Me, Episode 458 recorded January 25th, 2024.
0:20
Welcome to Talk Python to Me,
0:22
a weekly podcast on Python. This
0:38
is your host, Michael Kennedy. Follow me
0:40
on Mastodon where I'm at M Kennedy
0:42
and follow the podcast using at Talk
0:45
Python, both on fosstodon.org. Keep
0:47
up with the show and listen to over
0:49
seven years of past episodes at TalkPython.fm. We've
0:52
started streaming most of our episodes
0:54
live on YouTube. Subscribe to our
0:56
YouTube channel over at TalkPython.fm slash
0:58
YouTube to get notified about upcoming
1:00
shows and be part of that
1:02
episode. This
1:05
episode is brought to you by Sentry.
1:07
Don't let those errors go unnoticed. Use
1:09
Sentry like we do here at TalkPython.
1:11
Sign up at TalkPython.fm slash Sentry. It's
1:14
brought to you by Mail Trap, an
1:17
email delivery platform that developers love. Try
1:19
for free at mailtrap.io. Tony,
1:24
welcome to Talk Python to Me. Thank you.
1:26
Thanks for having me. Fantastic to have you
1:28
here. It's going to be really fun to
1:30
talk about serverless. The joke with the cloud
1:32
is, I know you call it the cloud, but
1:34
it's really just somebody else's computer, but we're not
1:36
even talking about computers, we're just talking about functions.
1:39
Maybe someone else's functions. I don't know, we're going
1:41
to find out. Yeah, I actually saw a recent
1:43
article about server free. Somebody
1:45
trying to move completely.
1:49
Because as you might know, serverless
1:51
doesn't mean actually no servers. Of
1:53
course, server free. We
1:56
could just get the thing to run on
1:58
the BitTorrent network. Got it. Okay.
2:02
I don't know. I don't know. We'll figure it out.
2:04
But it's going to be super fun. We're going to
2:06
talk about your experience working with serverless.
2:08
We'll talk about some of the choices people
2:10
have out there and also some of the
2:13
tools that we can use to do things
2:15
like observe and test our serverless code. Before
2:17
that though, tell us a
2:19
bit about yourself. Sure. So I'm
2:21
actually a career changer. So I
2:23
worked in the cable industry for
2:26
about 10 years and
2:28
doing a lot of different things
2:30
from installing, you know, knock-on-the-
0:32
door cable guy to working on
2:34
more of the outside plant. But
2:36
at some point, I was
2:38
seeing the limits of the career path there. And
2:41
so my brother-in-law is a software engineer
2:43
and I had already started going
2:45
back to school, finishing my degree and I was
2:47
like, okay, maybe I should look into this. And
2:49
so I did, I took an intro to programming
2:51
class. It was in Python. And that
2:54
just, yeah, led me down
2:56
this path. So now for the past four
2:58
years or so, I've been working professionally in the
3:00
software world, started out in a QA role
3:03
at an IoT company. And
3:05
now I'm doing a lot of serverless
3:07
programming in Python these days. Second company
3:09
now, one that makes
3:11
school bus safety products. So a
3:13
lot of Python, a lot of serverless.
3:16
Well, serverless and IoT feel like they
3:18
go pretty hand in hand. Yeah. Another
3:20
thing with serverless is
3:22
when you have very like spiky traffic,
3:24
like if you think about school buses,
3:26
we have a lot coming on
3:28
twice a day. Bimodal, exactly. Like
3:30
the 8am shift and then the 2.30 to 3
3:33
shift. Yeah. Yeah. That's
3:35
a really good use case for serverless. Something
3:37
like that. Are you
3:39
enjoying doing the programming stuff versus the
3:42
cable stuff? Absolutely. I live in
3:44
Michigan. So I look outside and look
3:46
at the snow coming down or these
3:49
storms. And yeah, some
3:51
people ask, don't you miss being outside? Maybe every
3:53
once in a while, but I can go walk
3:55
outside on a nice day. You
3:58
can choose to go outside, not be made
4:00
to go out in sleet
4:02
or rain. Yeah, absolutely. We just had
4:04
a mega storm here and just the
4:06
huge tall trees here in Oregon just
4:08
fell left and right. And
4:11
there's in every direction that I look, there's
4:13
a large tree on top of one of
4:15
the houses of my neighbors. I know
4:17
maybe a house or two over, but it just took out
4:19
everything that was a cable in the air
4:21
was taken out. So it's just been a swarm of
4:23
people who are out in 13 degree
4:26
Fahrenheit negative nine Celsius weather.
4:29
And I'm thinking, I'm not really choosing to
4:31
be out there today probably. Right. A lot
4:33
of people probably know what serverless is, but
4:35
I'm sure there's a lot who are not
4:37
even really aware of what serverless programming is,
4:40
right? Let's talk about
4:42
what's the idea. So what's the Zen of
4:44
this? Yeah, I made the joke that serverless
4:46
doesn't mean there are no servers, and
4:48
hopefully I don't butcher
4:50
it too much, but it's more functions
4:53
as a service. There's other things that
4:55
can be serverless too. There's serverless databases
4:57
or a lot of different services
5:00
that can be serverless, meaning you don't have
5:02
to think about like how to operate them,
5:04
how to think about scaling them up. You
5:07
don't have to spin up VMs or
5:09
Kubernetes clusters or anything. You don't have
5:11
to think about that part. It's just
5:13
your code that goes into it. Yeah.
5:17
Serverless functions are probably what people are most
5:19
familiar with. And that's, I'm sure what we'll
5:21
talk about most today, but that's really
5:23
the idea. You don't have to manage the server.
5:26
Sure. And that's a huge barrier. I
5:29
remember when I first started getting
5:31
into web apps and programming and
5:33
then another level when I got
5:35
into Python because I had not
5:37
done that much Linux work, getting
5:39
stuff up running, it was
5:41
really tricky. And then having the
5:43
concern of is it secure? How
5:46
do I patch it? How do I back it up?
5:48
How do I keep it going? All of those things
5:51
are non-trivial, right? There's a lot to think about.
5:53
And if you work at an
5:56
organization, it's probably different everywhere you go
5:58
you know, how they
6:00
manage their servers and things. So putting some
6:02
stuff in the cloud kind of brings some
6:04
commonality to it too. Like you can learn
6:07
how the Azure cloud or Google cloud or
6:09
AWS, how those things work and have
6:11
some common ground too. I also
6:13
feel it's more accessible to the developers in a
6:16
larger group, in the sense that there's not a
6:18
DevOps team that kind of takes care of
6:20
the servers or a production engineers where you
6:23
hand them your code. It's a little closer
6:25
to just, I have a function and then
6:27
I get it up there and it continues
6:29
to be the function. And that is a
6:32
different mindset too. You see it all the
6:34
way through from writing your code to deploying
6:36
it without maybe an entire DevOps team that
6:39
you just say, here you go
6:41
deploy this. In my world, I
6:43
mostly have virtual machines. I've moved
6:46
over to a Docker
6:48
cluster. I think I've
6:50
got 17 different things running in the
6:52
Docker cluster at the moment, but both
6:54
of those are really different than serverless.
6:57
So it's been working well for me, but
6:59
when I think about serverless, let me know
7:01
if this is true. It feels like you
7:04
don't need to have as much of a
7:06
Linux or server or
7:08
sort of an ops experience to
7:11
create these things. You could probably get away
7:13
with like almost none, right? Like at the
7:16
simplest form, like with
7:18
AWS for instance, their
7:20
Lambda functions, you can,
7:23
and that's the one I'm most familiar with.
7:25
Forgive me for using them as an example
7:27
for everything. There's a lot of different serverless
7:29
options, but you could go into
7:32
the AWS console and you could
7:34
actually write your Python code right
7:36
in the console, deploy that. They
7:38
have function URLs now. So you
7:40
could actually have, I mean,
7:43
within a matter of minutes, you could have
7:45
a serverless function set up. AWS Lambda, right?
7:47
That's the one. Lambda being, I
7:50
guess a simple function, right? We have Lambdas in
7:52
Python. They can be one line. Sure
7:54
you can have more than one line in
7:57
the AWS Lambda, but there are limitations
7:59
with Lambda that are
8:01
definitely some pain points that I ran into.
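For readers new to Lambda, the handler being described is just a Python function that receives an event and a context; a minimal sketch of what you could paste into the console (the greeting logic here is made up):

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes; `event` is a dict, `context` holds runtime metadata."""
    # Pull a value out of the (assumed) incoming event, defaulting if absent.
    name = (event or {}).get("name", "world")
    # A function-URL / API-style response: status code plus a JSON body.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Invoking it with an event like `{"name": "Tony"}` returns a 200 response whose body greets that name.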
8:03
Oh really? Okay, what are some of the
8:05
limitations? Package size is
8:07
one. So if you start thinking
8:10
about all these amazing packages on
8:12
PyPI, you do have to start
8:14
thinking about how many you're going to bring in. And I
8:17
don't know the exact limits off the
8:19
top of my head but it's pretty
8:21
quick Google search on their package size.
8:23
It might be like 50 megabytes zipped
8:26
but 250 when you decompress it
8:28
for a zip-based deploy. Then they do
8:30
have containerized lambda functions that
8:32
go up to a 10 gig limit
8:34
so that helps but interesting. Okay, those
8:36
ones used to be less performant but
8:38
they're catching up; that
8:40
was really on something called cold starts.
8:42
But they're getting, I think, pretty close
8:44
to it not being a very
8:46
big difference whether you dockerize or zip
8:48
these functions. But yeah, so when
8:50
you start just like pip installing everything,
8:52
you got to think about how to
8:55
get that code into your function and
8:57
how much it's going to
8:59
bring in. So yeah, that definitely was a limitation
9:02
that I had to quickly
9:04
learn. Yeah, I guess it's
9:06
probably trying to do pip install
9:08
-r effectively to stage
9:11
the dependencies, and you can't go overboard with this.
9:13
Yeah, when you start bringing in packages, maybe
9:15
like some of the scientific packages, you're definitely
9:17
going to be hitting some size limits. And
9:20
with the containerized ones, basically, you probably
9:22
give it a Dockerfile and a command
9:24
to run in it. And it can build
9:26
those images beforehand and then just execute,
9:28
just do a docker run, more or less.
9:31
How those ones work is you
9:33
store an image on
9:35
their container registry, Amazon's is ECR,
9:37
I think. And so then you point
9:39
at it, and yeah, it'll execute
9:42
your handler function
9:45
when the lambda gets called. Yeah,
9:47
excellent. Out in the audience, Kim says
9:49
AWS does make a few packages
9:51
available directly just by default in
9:53
Lambda. That's nice. Yeah, boto3, which
9:56
if you're dealing with AWS and
9:58
Python, you're using the boto3
10:00
package, and yeah, that's included for you.
10:02
So that's definitely helpful, and any of
10:04
its transitive dependencies would be there. I
10:07
think boto used to even include requests,
10:10
but then I think they eventually dropped
10:12
that with some SSL stuff. But
10:15
yeah, you definitely, you
10:17
can't just like pip install anything and not think of
10:19
it unless, depending on how you package
10:22
these up. So sure that makes sense. Yeah, of
10:24
course, they would include their own Python libraries, right?
10:27
Yeah, it's not exactly small. I think
10:29
like botocore used to be like 60 megabytes,
10:32
but I think they've done some work to
10:34
really get that down. Yeah, that's not too
10:36
bad. I feel like botocore,
10:38
boto3, those are constantly
10:40
changing, like constantly. Yeah, as
10:43
fast as AWS adds services,
10:46
they'll probably keep changing quickly. So I
10:48
feel like those are auto generated, maybe
10:50
just from looking at the way the
10:52
API looks and the way they're
10:54
written. Yeah, that's probably the case. Yeah.
10:57
And then you can do that with like their, their infrastructure
10:59
as code CDK, it's all like TypeScript originally,
11:01
and then you have your Python bindings for
11:04
it. And so when you see a change,
11:06
it doesn't necessarily mean, oh, there's a new
11:08
important aspect added. It's probably just, I don't
11:10
know, people have actually pulled up the console
11:12
for AWS, but just the amount of services
11:15
that are there. And then think each one
11:17
of those has its own full API, so
11:19
a little change in one of
11:21
them means they regenerated it. But it might
11:23
be for some part that you never call,
11:25
right? Like you might only work with S3
11:28
and it's only changed. I don't know, EC2
11:30
stuff. Right? Exactly.
11:33
Indeed. All right. Let's talk
11:35
real quickly about some of the places
11:37
where we could do serverless, right? We've
11:39
mentioned AWS Lambda. Yep. Maybe
11:41
touch on just 1 million requests free per
11:44
month. That's pretty cool. Jumping into
11:46
AWS sometimes sounds scary, but they have
11:48
a pretty generous free tier. Definitely
11:50
do your research on some of the security
11:52
of this. But yeah, you can, a million
11:54
requests free per month. You probably have to
11:57
look into that a little bit because
11:59
you have your memory configurations too.
12:01
So there's probably, I don't
12:04
know exactly how that works within their free
12:06
tier, but you're charged, with Lambda at
12:08
least, by invocation
12:10
time and memory and also the number
12:13
of requests. So yeah. I'm
12:15
always confused when I look at that and go, okay,
12:18
with all of those variables, is that
12:20
a lot? I know it's a lot,
12:22
but it's hard for me to conceptualize. I use a little
12:24
more memory than I thought. So it costs, wait a minute,
12:26
how do I know how much memory I use? What
12:29
does this mean in practice? Yeah, it's billed
12:31
by how you configure it too. So if
12:33
you say I need a Lambda with 10
12:35
gigs of memory, you're being billed at that
12:37
like 10 gigabyte price
12:41
threshold. There is a really cool tool
12:43
called AWS Lambda Power Tuner. Yeah,
12:49
what that'll do is you can, it
12:51
creates a state machine in AWS. Yeah, I
12:53
think I did send you a link to
12:56
that one. So the Power Tuner will create
12:58
a state machine that invokes your Lambda
13:00
with several different memory configurations. And you
13:02
can say, I want either the best
13:05
cost optimized version or the best performance
13:07
optimized version. And that'll tell you, like
13:09
it'll say, okay, yeah, you're best with
13:11
a Lambda configured at 256 megabytes for
13:14
memory. Sorry, yeah, for the link, it's,
13:16
this is Power Tools. This is a
13:18
different, amazing package. Yeah, I got it,
13:20
I got it, yeah. Maybe I didn't
13:23
send you the Power Tuner. Okay,
13:25
sorry, that's a little bit. It's news to me. I'll look
13:27
at the date. Okay, sorry, yeah. And they
13:30
have similar names. There's only so many ways
13:32
to describe stuff. Right, yeah, okay, they have
13:34
it right in their AWS. This one probably.
13:36
Yep, and it is an open source package.
13:38
So there's probably a GitHub link in there,
13:40
but yeah. And this will tell you like
13:43
the best way to optimize your Lambda
13:45
function, at least as far as memory is
13:47
concerned. Yeah, really good tool. It gives
13:49
you a visualization, it gives you a
13:51
graph that will say, okay, here's where
13:53
cost and performance meet. Yeah, that's cool.
13:55
It's really excellent for figuring that out.
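To make the memory-versus-duration trade-off the Power Tuner explores concrete, here's a rough cost sketch. The rates are assumptions based on AWS's published per-GB-second and per-request Lambda pricing; verify current numbers before relying on them:

```python
PRICE_PER_GB_SECOND = 0.0000166667  # assumed x86 Lambda rate; check current AWS pricing
PRICE_PER_REQUEST = 0.0000002       # assumed $0.20 per million requests

def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Estimate the cost of one Lambda invocation at a given memory setting."""
    # Lambda bills compute as GB-seconds: configured memory times billed duration.
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST

# More memory can be cheaper if the function finishes proportionally faster:
slow = invocation_cost(256, 1000)   # 0.25 GB-seconds
fast = invocation_cost(1024, 200)   # 0.20 GB-seconds despite 4x the memory
```

Here `fast` comes out below `slow` because the larger configuration burns fewer GB-seconds overall, which is exactly the effect the Power Tuner measures empirically.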
13:57
Yeah, at least in AWS
14:00
land. I don't know if some of the
14:02
other cloud providers have something similar to this,
14:04
but yeah, it's definitely a really
14:07
helpful tool. I'm confused and I've been doing cloud
14:09
stuff for a long time when I look at
14:11
it. There's some interesting things here. So you can
14:13
actually have a lambda invocation that
14:15
costs less with a higher memory
14:17
configuration because it'll run faster. So
14:19
I think Lambda bills
14:21
by the millisecond now, so
14:24
because it runs faster, it
14:26
can actually be cheaper to run. That
14:28
explains all the Rust that's been
14:30
getting written. Yeah. Yeah. There's
14:33
a real number behind this. I mean, we
14:35
need it to go faster, right? Yeah. I
14:37
think maybe AWS lambda is one of the
14:39
very first ones as well to come on
14:41
with this concept of serverless. I don't know
14:44
for sure, but it probably is. And then
14:46
yeah, your other big cloud providers have them
14:48
and now you're actually even seeing them come
14:50
up with a lot of like Vercel has
14:53
some type of serverless function. I don't know
14:55
what they're using behind it, but it's almost
14:57
like they just put a nicer UI around
15:00
AWS lambda or whichever cloud provider
15:02
that's potentially backing this up.
15:05
But they're just reselling their flavor
15:07
of somebody else's cloud. Yeah. It
15:09
could be, because Vercel obviously
15:11
they have a really nice suite
15:13
of products with a good UI, very
15:16
usable. So this
15:18
portion of Talk Python to Me is
15:20
brought to you by multi-platform error monitoring
15:23
at Sentry. Code breaks, it's
15:25
a fact of life. With Sentry, you
15:27
can fix it faster. Does your
15:29
team or company work on multiple
15:31
platforms that collaborate? Chances are
15:33
extremely high that they do. It
15:35
might be a React or Vue front
15:38
end JavaScript app that talks to
15:40
your FastAPI backend services. It could
15:42
be a go microservice talking to
15:44
your Python microservice or even native
15:46
mobile apps talking to your Python
15:48
backend APIs. Now let
15:50
me ask you a question. Whatever combination
15:52
of these that applies to you, how
15:55
tightly do these teams work together, especially
15:57
if there are errors originating at one
15:59
layer but becoming visible at the
16:01
other. It can be super
16:03
hard to track these errors across platforms and
16:05
devices, but Sentry has you
16:07
covered. They support many JavaScript front-end frameworks,
16:10
obviously, Python backend, such as FastAPI and
16:12
Django, and they even support
16:14
native mobile apps. For example,
16:16
at TalkPython, we have Sentry integrated into
16:18
our mobile apps for our courses. Those
16:21
apps are built and compiled in native code with
16:23
Flutter. With Sentry, it's literally a few
16:25
lines of code to start tracking those errors. Don't
16:28
fly blind. Fix code faster with
16:30
Sentry. Create your Sentry account at talkpython.fm
16:32
slash Sentry, and if you sign up
16:35
with the code TalkPython, one word, all
16:37
caps. It's good for two free
16:39
months of Sentry's business plan, which will give you
16:41
up to 20 times as many monthly events, as
16:43
well as some other cool features. My
16:45
thanks to Sentry for supporting TalkPython to me.
16:50
So, Vercel, some of them people can try, and
16:53
then we've got the
16:55
two other hyperscale clouds, I guess you call
16:57
them. Google Cloud has serverless, right? Not sure
16:59
which ones, they might just be
17:02
called Cloud Functions, and Azure
17:04
also has- Cloud Run, they got Cloud
17:06
Run and Cloud Functions. I have no
17:08
idea what the difference is, though. Azure
17:10
also has a serverless product, and I'd
17:12
imagine there's probably even more that we're
17:14
not aware of, but yeah, it's nice
17:16
to not think about setting up servers
17:18
for something, so. I think maybe,
17:21
is it FaaS? Yeah, Function as a Service, let's see.
17:23
Yeah. Search for F-A-A-S
17:25
as in instead of PaaS
17:27
or IaaS, right? There's, we've
17:29
got Almeda, Intel, I
17:32
saw that IBM had some. Oh,
17:34
there's also the, we've got DigitalOcean.
17:36
I'm a big fan of DigitalOcean, because I feel
17:39
like their pricing is really fair, and
17:41
they've got good documentation stuff. So they've
17:43
got, sorry, that you can, I
17:46
don't use these, but I suppose you could. Yeah,
17:49
I haven't used these either, but yeah. And
17:52
yeah, as far as cost, especially for
17:54
small, personal projects and things where you
17:56
don't need to have a server on
17:58
all the time, Yeah,
18:00
pretty pretty nice if you have yeah a
18:02
website that you need something server side where
18:04
you gotta have some Python But you don't
18:06
need a server going all the time. Okay,
18:08
like maybe I have a static site But
18:11
then I want this one thing to happen
18:13
if somebody clicks a button something like that
18:15
Yep, you could be completely static but have
18:17
something that is Yeah,
18:19
yeah that one function call that you do need
18:22
Exactly. And then you also pointed out
18:24
that cloudflare has some form of serverless
18:26
I haven't used these either. But yeah,
18:28
I do know that they have some
18:31
type of functions as a service
18:33
as well I don't know what frameworks
18:35
for languages. They let you write them
18:37
in there. I use bunny.net for
18:39
my CDN. It's just an absolutely awesome platform.
18:41
I really love it. And one of
18:43
the things that they've started offering, if I
18:46
can get this stupid, completely useless cookie
18:48
banner to go away, is
18:50
what they call edge compute, where,
18:52
I don't know
18:54
where to find it, somewhere maybe, but
18:57
basically the CDN has a hundred
18:59
and fifteen, a hundred and twenty points
19:02
of presence all over the world where
19:04
you know This one's close to Brazil.
19:06
This one's close to Australia, whatever,
19:08
but you can actually run serverless functions
19:11
on those things So you deploy them
19:13
so the code actually executes in a hundred
19:16
and fifteen locations. Yes, probably Cloudflare does
19:18
something like that as well. I don't know
19:20
one way or the other. AWS has, they
19:23
have Lambda@Edge,
19:25
so that goes hand in hand with
19:27
their CDN, CloudFront,
19:30
I believe yeah, so they have something
19:32
similar like that where you have a
19:34
lambda that's going to be Performant because
19:36
it's yeah distributed across their CDNs. Yeah
19:38
CDNs. That's a whole nother world. They're
19:40
getting really advanced. Yeah, maybe that's a
19:42
different show. It's not the show today,
19:44
but it's just the idea of like
19:46
you distribute the compute on the CDN
19:49
It's pretty nice. The drawback is it's just
19:51
JavaScript, which is okay, but it's not the
19:53
same as Python. Yeah, I wonder
19:56
if you could do PyScript, interesting
19:58
thought. Yeah. Yeah, we're getting closer and
20:00
closer to Python in the browser. So my
20:02
JavaScript includes this little bit of WebAssembly. I
20:04
don't like semicolons, but go ahead and run
20:06
it anyway. Yeah,
20:08
out in the audience, it looks
20:10
like Cloudflare probably does support Python,
20:13
which is interesting. There's so many
20:15
options out there for serverless functions
20:17
that are, yeah, especially if you're
20:19
already in if you're maybe deploying
20:21
some static stuff over Cloudflare or
20:24
Vercel. Yeah, it's sometimes nice
20:26
just to be all in on one
20:28
service. Yeah, it really is. Let's
20:30
talk about choosing serverless over
20:33
other things, right? You actually laid
20:35
out two really good examples, or
20:37
maybe three even with the static
20:39
site example, but bursts of activity
20:41
and really focused time, but really
20:43
low, incredibly low usage other times.
20:45
Your Black Friday traffic, right? Like
20:47
you do not have to
20:50
think of like how many servers to
20:52
be provisioned for something like that. Or
20:54
if you don't know, I think
20:57
there's probably some, I actually know there's been
20:59
like some pretty popular articles about people
21:01
like leaving the cloud. And yeah, if
21:03
you know your scale and exactly what
21:06
you need, you probably can
21:08
save money by just having your own
21:10
infrastructure set up. But yeah, if
21:12
you don't know, or it's very like
21:14
spiky, you don't need to have a
21:16
server that's consuming a lot of power
21:18
running 24 hours a day, you
21:22
can just invoke a function as you need.
21:24
So yeah, there's a super
21:26
interesting series by David Heinemeier
21:29
Hansson of Ruby on Rails
21:31
fame and from Basecamp about how
21:33
Basecamp has left the cloud and how they're saving $7
21:36
million and getting better performance over five
21:39
years. Yeah, that's a big investment, right?
21:41
They paid $600,000 for hardware,
21:43
right? Only so many people can
21:48
do that. You got to have that
21:50
running somewhere with backup power. So
21:52
what they ended up doing for this
21:54
one is they went with some service
21:57
called Deft for hosting, which is like
21:59
white glove. So white labeled
22:01
is the word I'm looking for where it just looks like
22:04
it's your hardware, but they put it into a mega
22:06
data center and they'll have the hardware shipped to
22:08
them and somebody will just come out and install
22:11
it into racks and go, here's your IP address.
22:14
Like a virtual
22:16
VM in
22:18
a cloud, but it takes three weeks to boot.
22:23
Which is worth diving into
22:25
because it's almost the exact opposite
22:27
of the serverless benefits.
22:29
It's insane stability. I'll have
22:31
this thing for five years. We
22:35
have 4,000 CPUs we've installed and we're using
22:37
them for the next five years rather than
22:39
how many milliseconds am I going to run
22:41
this code for? Yeah, it's definitely the far
22:43
opposite. And yeah, maybe serverless
22:45
isn't for every use case, but it's
22:47
definitely a nice tool to have in
22:49
the toolbox. And yeah, you definitely
22:51
even working in serverless, if you're, yeah,
22:54
eventually you're going to need like maybe
22:56
to interact with the database that's got
22:58
to be on all the time. But
23:00
yeah, there's a lot of, it's a good
23:02
tool, but it's definitely not the one
23:05
size fits all solution. So let's talk databases
23:07
in a second. But for when
23:10
does it make sense to say we're going
23:12
to put this... suppose I have
23:14
an API, right? An API
23:16
is a real similar equivalent to what a
23:18
serverless thing is. Like, I call this API,
23:20
a thing is going to happen. I'm going to
23:22
call this function, a thing is going to happen. Suppose
23:24
I have an API and it has eight endpoints
23:26
written in FastAPI or whatever it is.
23:28
It might make sense to have that as serverless,
23:30
right? You don't want to run a server and all that kind
23:32
of thing. But what if I have an API with 200 endpoints?
23:35
Like, where is the point where there are so many
23:37
little serverless things? I don't even know where to look.
23:40
They're everywhere. Which version is this one? You know what
23:42
I mean? Like, where's that trade off and how do
23:44
you and the people you work with think
23:46
about that? As you start like getting into
23:48
these like microservices, how small do you want
23:51
to break these up? Yeah, there is
23:53
some different thoughts on that. Even like
23:55
a lambda function, for instance, if
23:57
you put this behind an API, you can
23:59
use a single Lambda function for
24:02
your entire REST API, even if it is
24:04
200 endpoints. The whole app
24:07
is there, and then when a request comes in,
24:09
it routes to whatever part of your app
24:11
that you might need. There's a package called
24:13
Power Tools for Py, AWS Power Tools. I
24:15
don't know if it's written Power Tools for
24:17
Py, though. Yeah,
24:19
I know the similar name. So they
24:22
have a really good like event resolver.
24:24
So you can actually, it almost
24:26
looks like Flask or some of
24:28
the other Python web frameworks.
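That Flask-like, single-function routing idea can be illustrated with a toy dispatcher. This is not the Power Tools API, just a stdlib sketch of how one handler can serve many routes:

```python
# Toy single-Lambda router: one handler, many routes (illustrative only;
# the real resolver handles path parameters, methods, errors, and more).
routes = {}

def route(method, path):
    """Register a function for a (method, path) pair, decorator-style."""
    def register(fn):
        routes[(method, path)] = fn
        return fn
    return register

@route("GET", "/users")
def list_users():
    return {"statusCode": 200, "body": '["alice", "bob"]'}

def lambda_handler(event, context):
    """Dispatch an API-Gateway-style event dict to the matching route."""
    fn = routes.get((event.get("httpMethod"), event.get("path")))
    if fn is None:
        return {"statusCode": 404, "body": "not found"}
    return fn()
```

The same pattern is why a single Lambda can back an entire REST API: every request arrives at one handler, and the routing table decides which code runs.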
24:30
And so you have this resolver,
24:33
whether it's API Gateway in AWS
24:36
or others, they have a few
24:38
different options for the API itself.
24:40
But yeah, you in theory, you
24:42
could have your entire API
24:45
behind a single Lambda function, but
24:47
then that's probably not optimal, right?
24:49
So you're, that's where you have
24:51
to figure out how to break that up. Yeah,
24:54
they do that same, the
24:56
decorators app.post or
24:58
yeah, and your endpoints
25:00
and you can do the, have variables
25:03
in there where maybe you have ID
25:05
as your lookup, and slash
25:05
user slash ID is going to find
25:07
a single user. In their
25:12
documentation, they actually address this a little
25:14
bit. Do you want to do, they
25:17
call it either like a micro
25:19
function pattern where maybe every single endpoint
25:21
has its own Lambda function. But yeah,
25:23
that's a lot of overhead to maintain.
25:25
If you had like you said, 200
25:27
endpoints, you have 200 Lambdas. You
25:30
got to upgrade them all at the same time.
25:32
So they have the right consistent data models and
25:35
all that. Yeah, that's gnarly. There's definitely some even
25:37
conflicting views on this. How micro
25:40
do you want to go? And
25:42
I was able to go to
25:44
AWS re:Invent in November, and they
25:46
actually pitched this, this hybrid, maybe
25:48
if you take your like CRUD
25:51
operations, and maybe you have your
25:53
create, update, and delete all on
25:55
one Lambda that's with its configuration
25:57
for those, but your read is
26:00
on another Lambda. So maybe your
26:02
CRUD operations, they all interact with
26:04
a relational database, but your reader
26:06
just does reads from a Dynamo
26:09
database where you sync that data
26:11
up. And so you could have
26:13
your permissions separated for each of
26:15
those lambda functions and people
26:18
reading from an API don't always need the
26:20
same permissions as updating, deleting. And so yeah,
26:22
there's a lot of different ways to break
26:24
that up. And how micro do you go
26:26
with this? That's it, how micro
26:28
can you go? Yeah. Yeah. Because it sounds
26:31
to me, like if you had many of them,
26:33
then all of a sudden you're back to, wait,
26:35
I did this because I didn't
26:37
want to be in DevOps. And now I'm a
26:39
different kind of DevOps. That package,
26:41
the Power Tools, does a lot
26:44
of like heavy lifting for you. At
26:46
PyCon, there is a talk on serverless
26:48
where the way they described the
26:50
Power Tools package was, they
26:53
said it codified your serverless best practices.
26:55
And it's really true. They give a
26:57
lot, there's so many different tools in
27:00
there. And there's a, a logger, like
27:02
a structured logger that works really well
27:04
with Lambda. You don't even have to
27:06
use like the AWS logging services. If
27:08
you want to use like Datadog or
27:11
Splunk or something else, you it's just a
27:13
structured logger and how you aggregate them is
27:15
like up to you. And you can even
27:17
customize how you format them, but it works
27:20
really well with Lambda. Yeah. You probably
27:22
could actually capture exceptions and stuff with
27:25
something like Sentry even, right? Oh yeah. Python
27:27
code, there's no reason you couldn't. Some of
27:29
that comes into packaging up those libraries
27:31
for that. Yeah. You do have
27:33
to think of some of that
27:35
stuff, but Datadog, yeah,
27:38
Datadog, for instance, they provide something called like a
27:40
Lambda layer or a Lambda extension, which is another
27:42
way to package code up. That just makes it
27:45
a little bit easier. Yeah. There's a lot of
27:47
different ways to attack some of these problems. A
27:49
lot of that stuff, even though they have nice
27:51
libraries for them, it's really just calling a HTTP
27:53
endpoint and you could go, okay, we need something
27:56
really light. I don't know if requests is already
27:58
included, or it may be, but there's
28:00
gotta be some kind of HTTP thing
28:02
already included. We're just going to directly call
28:04
it, not pull in all these packages. Yeah. This
28:07
code looks nice. This Powertools code, it
28:09
looks like well-written Python code.
28:11
They do some really amazing stuff,
28:13
and they bring in Pydantic,
28:16
too. Being mostly in serverless,
28:18
I've never really gotten to use
28:20
FastAPI and leverage Pydantic as much,
28:22
but with Powertools you really can.
28:25
So they'll package up
28:27
Pydantic for you. And so you can
28:29
actually, yeah, you can have
28:31
Pydantic models for validation on these.
28:33
A Lambda function, for
28:35
instance, always receives an event.
28:37
There are always two arguments to the
28:39
handler function: it's
28:41
event and context, and the event
28:43
is always
28:46
a dictionary in Python. And so
28:48
they can always look different. If
28:51
you look in the Powertools GitHub repo,
28:53
in their tests, they have an API gateway
28:56
proxy event.json or whatever, right?
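As a sketch of what those event helpers do, here is a hand-rolled, stdlib-only stand-in for the real Powertools event classes. The field subset and the sample event below are just the common API Gateway proxy keys, trimmed way down for illustration:

```python
from __future__ import annotations

import json
from dataclasses import dataclass, field
from typing import Any

@dataclass
class APIGatewayProxyEvent:
    """Hand-rolled stand-in for the Powertools event wrapper:
    pulls the interesting fields out of the raw event dictionary."""
    http_method: str
    path: str
    headers: dict
    query_string_parameters: dict = field(default_factory=dict)
    body: str | None = None

    @classmethod
    def from_event(cls, event: dict) -> "APIGatewayProxyEvent":
        return cls(
            http_method=event["httpMethod"],
            path=event["path"],
            headers=event.get("headers") or {},
            query_string_parameters=event.get("queryStringParameters") or {},
            body=event.get("body"),
        )

    def json_body(self) -> Any:
        # The body arrives as a string; decode it on demand.
        return json.loads(self.body) if self.body else None

# A heavily truncated API Gateway proxy event:
raw = {
    "httpMethod": "POST",
    "path": "/orders",
    "headers": {"content-type": "application/json"},
    "queryStringParameters": None,
    "body": '{"sku": "abc-123", "qty": 2}',
}
parsed = APIGatewayProxyEvent.from_event(raw)
print(parsed.json_body()["sku"])  # the handler usually only cares about the body
```

The real library's event classes and Pydantic envelopes do much more (nested records, base64 bodies, and so on), but this is the shape of the convenience they provide.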
28:58
You don't want to parse that
29:00
out by yourself. They have
29:02
Pydantic models, or they might actually
29:05
just be Python data classes, but
29:07
you can say,
29:09
okay, yeah, this function is going to
29:11
be for an API gateway proxy event,
29:14
or it's going to be an S3
29:16
event or whatever it is. There's so many
29:18
different ways to receive events from
29:20
different AWS services. So yeah, power tools
29:23
gives you some nice validation and yeah,
29:25
you might just say, okay, yeah, the
29:27
body of this event, even though I
29:30
don't care about all this other stuff
29:32
that they include, the path, headers,
29:35
query string parameters, but I just need like
29:37
the body of this. So you just say,
29:39
okay, event.body, and you can
29:41
even use, you can validate that further: the
29:43
event body is going to be a Pydantic
29:46
model that you created. Yeah, there's a lot
29:48
of different pieces in here. If I was
29:50
working on this and it didn't already have
29:52
Pydantic models, I would take this and go
29:54
to JSON to Pydantic. Oh, I didn't even know
29:56
this existed. That's very nice. Okay. Boom. Put
29:58
that right in there. And there
30:01
you go, it parses it into a nested
30:04
object tree of the model. But
30:06
if they already give it to you,
30:08
they already give it to you, then
30:10
just take what they give you. Yeah. Those
30:12
specific events might be data classes instead
30:14
of Pydantic, just so that
30:16
way you don't have to package Pydantic up
30:18
in your Lambda. But yeah, if you're already
30:21
figuring out a way to package Powertools
30:23
there, you're close enough that you'd probably
30:25
just include Pydantic too. Yeah. I think they
30:27
just added this feature where it'll
30:29
actually generate an OpenAPI schema for you.
30:31
Okay. FastAPI does that as well,
30:33
right? That's something you can leverage Powertools
30:36
to do now as well. Oh,
30:38
excellent. And then you can actually take
30:40
the open API schema and generate a
30:42
Python client for it on top of
30:44
that, I think with automation. So you
30:46
just say robots all the way down,
30:48
automated turtles all the way down. I
30:50
haven't used those open API generated clients
30:52
very much. I was always
30:56
skeptical of them. They just feel heartless and
30:58
soulless, I guess, is the word I'm looking for. And
31:01
it's just like, okay, here's another *args,
31:03
**kwargs thing. Couldn't you
31:05
just write it, take some reasonable defaults
31:08
and give me some keywords? That's just
31:10
how I feel. But if it's better
31:10
than nothing, that's better than nothing. You can see
31:12
with Powertools, they took a lot of influence
31:14
from FastAPI. It does seem
31:16
like it. Yeah, for sure. It's definitely really powerful.
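That FastAPI influence mostly shows up as decorator-based routing. Here's a pure-stdlib sketch of that pattern; this is just the shape of it, not the actual Powertools API, and the route and todo items are made up:

```python
from typing import Any, Callable

class MiniResolver:
    """Toy version of a decorator-based router, the pattern behind
    Powertools' APIGatewayRestResolver and FastAPI's app object."""

    def __init__(self) -> None:
        self._routes: dict = {}

    def get(self, path: str) -> Callable:
        # @app.get("/todos") registers the function for GET on that path.
        def register(func: Callable) -> Callable:
            self._routes[("GET", path)] = func
            return func
        return register

    def resolve(self, event: dict) -> dict:
        # Dispatch on method + path pulled from the Lambda event.
        func = self._routes.get((event["httpMethod"], event["path"]))
        if func is None:
            return {"statusCode": 404, "body": "Not found"}
        return {"statusCode": 200, "body": func()}

app = MiniResolver()

@app.get("/todos")
def list_todos() -> list:
    return ["write talk", "record episode"]

def handler(event: dict, context: Any) -> dict:
    # The Lambda handler just hands the raw event to the resolver.
    return app.resolve(event)

response = handler({"httpMethod": "GET", "path": "/todos"}, None)
```

The payoff is the same as with FastAPI: your route functions stay small and declarative, and the event plumbing lives in one place.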
31:18
And you get some of those same benefits. This
31:21
is new to me. It looks quite nice.
31:23
Another comment by chem is, I need to
31:25
use serverless functions for either things that run
31:27
briefly, like once a month on a schedule
31:30
or the code that processes stuff coming
31:32
in on an AWS SQS,
31:34
simple queuing service queue of
31:37
unknown schedule. So maybe
31:39
that's an interesting segue into how
31:42
do you call your serverless code? As
31:44
we touched on, there's a lot of
31:46
different ways from AWS, for instance, to
31:49
do it. Yeah, like AWS Lambda has
31:51
Lambda function URLs. I haven't used those as
31:53
much, but if you just look at like
31:55
the different options in, like, Powertools, for
31:58
instance, you can have a load
32:00
balancer where you set
32:02
the endpoint to invoke a Lambda,
32:04
you can have API Gateway, which
32:06
is another service they have. So
32:09
there's a lot of different ways,
32:11
yeah, SQS, that's almost getting into
32:13
a way of streaming or an
32:15
asynchronous way of processing data. Yeah,
32:18
maybe in AWS you're using
32:20
a queue, right? That's filling
32:22
up and you say, okay, yeah, every time this
32:24
queue is at this size or
32:26
this timeframe, invoke this Lambda and
32:28
process all these messages. So there's
32:30
a lot of different ways to
32:33
invoke a Lambda function. I mean, it's
32:35
really as simple as, you can invoke them
32:38
from the AWS CLI, but yeah,
32:41
most people probably have some kind of
32:43
API around it. Yeah, almost make them look
32:45
like just HTTP endpoints. Right. Mark out
32:48
there says, not heard talk of
32:50
ECS, I don't think, but I've
32:52
been running web services using Fargate
32:54
serverless tasks on ECS for years
32:56
now. Are you
32:58
familiar with this? I haven't done anything with this. I'm like vaguely familiar with
33:00
it, but yeah, this is like a serverless,
33:03
yeah, serverless compute for
33:05
containers. I haven't used this personally,
33:08
but yeah, very like similar concept
33:10
where it scales up for you.
33:13
And yeah, you don't have to
33:15
have things running all the time, but yeah, it
33:17
can be dockerized applications. Now, in fact,
33:19
the company I work for now, they do
33:21
this with their Ruby on Rails applications. They
33:24
dockerize them and run with Fargate. This
33:27
portion of Talk Python to Me is brought to
33:30
you by Mailtrap, an email delivery
33:32
platform that developers love. An
33:35
email sending solution with industry best
33:37
analytics, SMTP and email
33:39
API SDKs for major programming
33:42
languages and 24/7 human
33:44
support. Try for free
33:47
at mailtrap.io. Creating
33:51
Docker containers of these things, the
33:53
less familiar you are with running that tech
33:55
stack, the better it is in Docker, you
33:57
know what I mean? Yeah, I could run
33:59
straight Python. But if it's Ruby on Rails
34:01
or PHP, maybe it's going into a container. That
34:04
would make me feel a little bit better about
34:06
it. Especially if you're in that workflow of handing
34:08
something over to a DevOps team, right? Like you
34:11
can say, here's an image or a container
34:13
or a Dockerfile that, you know, that will
34:15
work for you. That's maybe a
34:17
little bit easier than trying
34:19
to explain how to set up an environment or
34:21
something. Yeah. Yeah. Fargate's a really good
34:24
serverless option too. Excellent. What
34:27
about performance? You talked about having
34:29
whole API apps like FastAPI, Flask,
34:31
or whatever. The startup of those apps
34:34
can be non-trivial, basically.
34:36
And so then on the other side,
34:38
we've got databases and stuff. And one
34:40
of the bits of magic of databases
34:43
is the connection pooling that happens, right?
34:45
So the first connection might take 500
34:47
milliseconds, but the next one takes one.
34:50
It's already open effectively. That's
34:52
definitely something you really have to take into consideration,
34:55
like how much you can do. That's where
34:57
some of that observability,
34:59
the tracing that you can do, and
35:01
profiling is really powerful. AWS
35:04
Lambda, for instance, they have
35:06
something called cold starts. The
35:09
first time like a Lambda gets
35:11
invoked, or maybe you have 10
35:14
Lambdas that get called at the same time,
35:16
that's going to invoke 10
35:18
separate Lambda functions. So that's like
35:20
great for the scale, right? That's
35:22
really nice. But on a cold
35:24
start, it's usually a little bit
35:26
slower invocation because it has to
35:28
initialize. I think what's happening behind
35:33
the scenes is they're moving
35:35
your code over that's going to get
35:35
executed. And anything that happens
35:37
like outside of your handler
35:39
functions, so importing libraries, sometimes
35:41
you're establishing a database
35:43
connection, maybe you're loading some
35:45
environment variables or some secrets.
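That split between init-time and invoke-time work is exactly why the standard advice is to do expensive setup at module level. A minimal sketch of the pattern; the connect function here is a made-up stand-in for a real database client:

```python
import os
import time

def connect_to_database() -> dict:
    """Stand-in for an expensive client setup, e.g. a real DB connect."""
    time.sleep(0.01)  # pretend this costs real time
    return {"dsn": os.environ.get("DB_DSN", "example"), "created_at": time.time()}

# Runs once per cold start, while the execution environment initializes.
CONNECTION = connect_to_database()

def handler(event: dict, context: object) -> dict:
    # Runs on every invocation; reuses the module-level connection
    # for as long as this execution environment stays warm.
    return {"connection_id": id(CONNECTION), "ok": True}

first = handler({}, None)
second = handler({}, None)
# Both invocations saw the same connection object.
print(first["connection_id"] == second["connection_id"])  # → True
```

Anything at module level pays its cost once on the cold start; everything inside the handler pays it on every call.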
35:48
And yeah, there's definitely performance
35:50
is something to consider. You
35:52
mentioned Rust. Yeah, there's probably
35:54
some more performant run times
35:56
for some of these serverless
35:58
functions. Yeah. And I've heard some
36:00
people say, okay, for
36:02
like client facing things, we're not going to
36:05
use serverless, like we just want that performance.
36:07
That can have an impact on you. Yeah,
36:09
on both ends that I've pointed out, right?
36:11
Like the App Store, but also the service,
36:14
the database stuff with the connection pooling and...
36:17
Yeah, relational databases too. That's an interesting
36:19
thing. Yeah, what do you guys do?
36:21
You mentioned Dynamo already. Yeah, so Dynamo
36:23
really performant for a lot
36:25
of connections, right? So Dynamo
36:28
is a serverless database that can
36:30
scale. You can query it over and over, and that's not
36:32
going to... it doesn't reuse a
36:35
connection in the same way that
36:37
a SQL database would. So that's an
36:39
excellent option. But if you
36:41
do have to connect to a relational database
36:43
and you have a lot of invocations, you
36:46
can use a like a proxy if
36:49
you're all in on AWS. And so
36:51
again, sorry if this is really AWS
36:53
heavy, but if you're using their
36:55
relational database service, RDS, you can use
36:58
RDS Proxy, which will maintain a
37:00
pool of connections for your Lambda function.
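What RDS Proxy is doing on your behalf is roughly connection pooling: many Lambda invocations share a small, fixed set of real database connections instead of each opening their own. A stdlib-only sketch of the idea, not the proxy's actual implementation:

```python
import queue

class ConnectionPool:
    """Hands out a fixed set of 'connections' and takes them back,
    the way a proxy shields the database from thousands of Lambdas."""

    def __init__(self, size: int) -> None:
        self._pool: queue.Queue = queue.Queue()
        for n in range(size):
            self._pool.put({"conn_id": n})  # stand-in for a real connection

    def acquire(self) -> dict:
        return self._pool.get()  # blocks if every connection is in use

    def release(self, conn: dict) -> None:
        self._pool.put(conn)

pool = ConnectionPool(size=2)

seen = []
for _ in range(5):           # five "invocations"...
    conn = pool.acquire()
    seen.append(conn["conn_id"])
    pool.release(conn)       # ...but only two real connections ever exist

print(sorted(set(seen)))     # never more than two distinct connection ids
```

The database only ever sees two connections, no matter how many invocations come through, which is the whole point of putting the proxy in front.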
37:02
Oh, interesting. Yeah, that can give you
37:05
a lot of performance or at least
37:07
you won't be running out
37:09
of connections to your database. Another
37:11
thing too is just how you structure
37:14
that connection. I mentioned cold Lambdas, you
37:16
obviously have warm Lambdas too. A Lambda
37:18
has its handler function. And so anything
37:21
outside of the handler function can get
37:23
reused on a warm Lambda. So you
37:25
can establish the connection to a database
37:27
and it'll get reused on every invocation
37:30
that it can. That's cool. Do you have to
37:32
do anything explicit to make it do that or
37:34
is that just a... It just has to be
37:36
outside of that handler function, at the
37:38
top level of your file. So it makes
37:40
me think one thing you would consider
37:43
is profiling the import
37:45
statements, right? And that's what
37:47
we normally do. But there's a
37:49
library called import-profiler that actually
37:51
lets you time how long different
37:53
things take to import. It could
37:55
take a while, especially if you
37:57
come from a non-native
38:00
Python way of thinking, in C#
38:03
or C++ or something, where
38:05
you say hash include or using
38:07
such and such. That's a compiler-type thing
38:10
that really has no cost. But
38:12
there's code execution when you import something in
38:14
Python, and some of these can take a
38:16
while. Right, there's a lot of tools for
38:18
that. There's some, I think, even maybe specific
38:20
for Lambda. I don't know, like Datadog has
38:22
a profiler that gives you this... I
38:25
forget what the graphic is called. And like a
38:27
flame graph. Yeah, that will give you like a
38:29
flame graph and show, okay, yeah, it
38:31
took this long to make
38:33
your database connection, this long to
38:35
import Pydantic, and it took this
38:37
long to make a call
38:40
to DynamoDB. So you can actually break
38:42
that up. AWS has X-Ray,
38:44
I think, which does something similar.
38:46
So yeah, it's definitely something to
38:49
consider. Another thing, just what your packaging
38:51
is, is definitely something to watch
38:53
for. And okay, yeah, I
38:55
mentioned using Pants to package Lambdas.
38:58
And they do. Hopefully
39:00
I don't butcher how this works behind
39:03
the scenes. But they're using Rust, and
39:05
they'll actually infer your dependencies for you.
39:07
And so they have an integration
39:10
with AWS Lambda. They also have it
39:12
for Google Cloud Functions. So yeah, it'll
39:14
go through you say here's like my
39:16
AWS lambda function, this is the file
39:18
for it and the function that needs
39:20
to be called and it's going to
39:22
create a zip file for you that
39:25
has your lambda code in it and
39:27
it's going to find all those dependencies
39:29
you need. So by default,
39:31
it's going to include the boto3 that you
39:33
need if you're using boto3; if you're
39:36
going to use PyMySQL
39:38
or whatever library, it's going to
39:40
pull all those in and zip that
39:42
up for you. And so you just open
39:44
up that zip and you see, especially if
39:46
you're sharing code across your code base, maybe
39:49
you have a shared function to make some
39:51
of these database connections or calls like you
39:53
see everything that's going to go in there.
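The setup being described looks roughly like this in a BUILD file. Hedging here: the target and field names follow recent Pants 2.x docs, and the paths and names are hypothetical:

```python
# BUILD file next to the handler source (a sketch, not a verified config)
python_aws_lambda_function(
    name="api_lambda",
    handler="my_project/api/lambda_handler.py:handler",
    runtime="python3.11",
)
```

Running `pants package` against that target then produces a zip containing the handler plus only the dependencies Pants inferred from the imports, which is the dependency inference being discussed.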
39:55
Like Pants says,
39:57
it's file-based. Sometimes, just for
40:00
ease of imports, you might throw a
40:02
lot of stuff in your __init__.py
40:04
file and kind of
40:06
bubble up all the things
40:09
that you want to import in there. If
40:11
one of those imports is also
40:13
using OpenCV and you don't need that, then pants
40:18
is going to say, oh, he's importing
40:20
this, and because it's file-based, now this
40:23
Lambda needs OpenCV, which is a massive
40:26
package, and that's going
40:28
to impact your performance, especially in
40:30
those cold starts, because that code
40:32
has to be moved over. So that's
40:34
pretty interesting. So an alternative
40:37
to saying, here's my requirements
40:39
or my pyproject.toml lock
40:41
file or whatever, that just lists everything the
40:43
entire program might use. This could say you
40:45
can import this function and to do that,
40:47
it imports these things, which import those things.
40:50
And then it just says, okay, that means
40:52
here's what you need. It's definitely one of
40:54
the best ways that I
40:56
found to package up Lambda functions.
40:59
I think some of the other tooling might do some
41:01
of this too, but yeah, a lot
41:03
of times it would require a
41:05
requirements.txt. But if you have a
41:07
large code base too, where maybe you
41:09
do have this shared module for that,
41:12
maybe you have 30 different Lambda functions
41:14
that are all going to use some kind
41:16
of helper function. It's just going to
41:18
go and grab that. And it doesn't have to be
41:20
pip-installable; Pants is smart enough to just be
41:22
like, okay, it needs this code. But
41:24
yeah, there's so many other
41:27
cool things that Pants is doing. They
41:29
have some really nice stuff for testing
41:31
and linting and formatting. There's
41:33
a lot of really good stuff
41:35
that they're doing. So yeah, I had Benjy on
41:37
the show to talk about pants. That was fun.
41:39
Yeah. Let me go back to this picture. It's
41:41
just the picture. I have a lot of things
41:43
open on my screen now.
41:47
So on my server setup that I described,
41:49
which is a bunch of Docker containers running
41:51
on one big machine, I can
41:53
go in there and I can say tail this
41:56
log and see all the traffic to all the
41:58
different containers. I can tail another log. You
42:00
can just see the logging,
42:02
Logbook, Loguru, whatever output of that,
42:04
or just the web traffic. There's
42:07
different ways to just go, I'm just going
42:09
to sit back and look at it for
42:11
a minute. Make sure it's chilling, right? If
42:13
everything's so transient, not so easy in the
42:16
same way. So what do you do? Powertools
42:18
does help; they have their structured logger that
42:20
helps a lot. But yeah, you have to
42:22
aggregate these logs somewhere because yeah, you can't,
42:24
a Lambda function you can't like SSH into.
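Since all you really get back is whatever hits standard out, the trick is making each line machine-parseable. A stdlib-only sketch of the structured-JSON approach; the real Powertools Logger adds Lambda context, correlation IDs, and more on top of this, and the service and key names here are made up:

```python
import json
import sys
import time

class StructuredLogger:
    """Emits one JSON object per line so a log aggregator
    (CloudWatch, Datadog, Splunk, ...) can index the fields."""

    def __init__(self, service: str, stream=sys.stdout) -> None:
        self.service = service
        self.extra_keys: dict = {}
        self.stream = stream

    def append_keys(self, **keys) -> None:
        # Keys added here show up on every subsequent log line.
        self.extra_keys.update(keys)

    def info(self, message: str, **fields) -> str:
        record = {
            "level": "INFO",
            "service": self.service,
            "timestamp": time.time(),
            "message": message,
            **self.extra_keys,
            **fields,
        }
        line = json.dumps(record)
        print(line, file=self.stream)
        return line

logger = StructuredLogger(service="orders")      # service name is made up
logger.append_keys(request_id="req-42")          # e.g. pulled from the Lambda context
line = logger.info("order created", order_id=7)  # one parseable JSON row
```

Each invocation then contributes self-describing rows that any aggregator can filter by `service`, `request_id`, and so on.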
42:27
Yeah, you can't, thank you very much. Yeah,
42:30
yeah. You need to have
42:33
some way to aggregate these. So AWS
42:35
has CloudWatch, which will,
42:37
by default, kind of log all
42:39
of your standard out. So
42:41
even like a print statement would
42:43
go to CloudWatch just by default.
42:45
But you probably want to structure
42:47
these better, most likely in
42:49
JSON format, since most tooling around
42:51
logs is going to help you there.
42:53
Yeah, the Power Tools structured logger
42:56
is really good. And you can
42:58
have a single log statement, but
43:00
you can append different keys to it. And
43:02
it's pretty powerful, especially because,
43:04
I think, if
43:06
you just printed something in a Lambda function,
43:09
for instance, that's going to be a
43:11
different row for each line in
43:14
CloudWatch by default. How
43:16
it breaks the logs up is really odd unless you
43:19
have some kind of structure to them. Okay. So
43:21
definitely something to consider. Something
43:24
else you can do is, yeah, there's
43:26
metrics. So
43:28
the way it works with CloudWatch,
43:30
they have a specific format, and if
43:32
you use that format, it'll
43:35
automatically pull that in as a metric.
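That specific CloudWatch format is the Embedded Metric Format (EMF): a JSON log line with an `_aws` envelope that CloudWatch turns into real metrics. A hand-rolled sketch of one EMF record; the namespace, metric name, and dimension values here are hypothetical, and Powertools' Metrics utility writes this for you:

```python
import json
import time

def emf_record(namespace: str, metric: str, value: float, unit: str = "Count") -> str:
    """Build a single Embedded Metric Format log line."""
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # milliseconds since epoch
            "CloudWatchMetrics": [
                {
                    "Namespace": namespace,
                    "Dimensions": [["service"]],
                    "Metrics": [{"Name": metric, "Unit": unit}],
                }
            ],
        },
        "service": "orders",  # dimension value, made up for the example
        metric: value,        # the metric value lives at the top level
    }
    return json.dumps(record)

# Printing this from a Lambda is enough; CloudWatch extracts the metric.
line = emf_record("MyApp", "OrdersCreated", 1)
print(line)
```

No API calls needed: the log pipeline itself carries the metric, which is why it works so naturally from a short-lived function.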
43:37
And like Datadog has something similar where
43:39
you can actually go in there. You
43:41
can look at your logs and say, find a value
43:44
and be like, I want this to be a metric
43:46
now. And so that's
43:48
really powerful. The metrics
43:50
sounds cool. So I see logging and
43:52
tracing. What's the difference between those
43:54
things? Is tracing
43:56
just a high level of logging? Tracing
43:58
does have a lot more to do with your
44:00
performance or maybe even closer
44:02
to tracking some of these
44:05
metrics, right? I've used the
44:07
Datadog tracer a lot and
44:09
I've used the AWS like x-ray, their
44:11
tracing utility a little bit too and
44:13
so like those will show you. So
44:15
like maybe you are reaching out to
44:17
a database, writing into S3. Almost
44:20
like a APM application performance monitoring
44:22
where it says, yes, you spent
44:24
this much time in a SQL
44:26
query and this much time in
44:30
Pydantic serialization, whereas the logging would say a
44:32
user has been sent a message. Yep, tracing
44:34
definitely is probably more around your performance and
44:36
things like that. I've seen things that
44:39
can do that. You see it in
44:41
the Django debug toolbar or in the Pyramid
44:43
debug toolbar, where they'll be like, here's
44:45
your code and here's all your SQL queries
44:47
and here's how long they took, and you're
44:50
like, wow, that thing is reaching deep down
44:52
in there. The Datadog one is very interesting
44:54
because it just knows that this
44:56
is a SQL connection and it tells you
44:58
like, oh, okay, this SQL connection took this
45:01
long and it's like I didn't tell it to
45:03
even trace that. Like it just like it knows
45:05
really well. Yeah, the instrumentation. It's one
45:07
thing to know a SQL connection is open; it's
45:10
another to say, and here's what it sent over
45:12
SSL, by the way. How do you get in
45:14
there? Yeah, yeah, it's definitely impressive. It's
45:16
in process so it can do a lot
45:18
but it is impressive to see those things
45:20
at work. All right, so that's probably what the
45:22
tracing is about, right? Definitely probably more around
45:24
performance. You can put some different things in
45:26
tracing too. I've used it, we
45:29
talked about those database connections, to say,
45:31
oh yeah, this is reusing a connection
45:33
here, because I was trying to debug some stuff
45:35
on, am I creating a
45:37
connection too many times, which I don't
45:39
want. So yeah, you can
45:41
put some other useful things in tracing as
45:43
well. Yeah, and Pat out in the
45:45
audience says: we're using many
45:47
microservices, like a single execution involves many
45:49
services, basically. It's hard to follow the
45:52
logs between the services and tracing helps
45:54
tie that together. Yeah, that's for sure.
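A toy version of what a tracer records, just to make the logging-versus-tracing distinction concrete. The real X-Ray or Datadog instrumentation patches libraries for you and ships spans to a backend; this sketch only times explicitly wrapped functions, and the span names are made up:

```python
import functools
import time

SPANS: list = []  # a real tracer ships these to a backend

def traced(name: str):
    """Decorator that records a timed span for each call."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                SPANS.append({
                    "name": name,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                })
        return inner
    return wrap

@traced("sql.query")       # hypothetical span name
def fetch_user():
    time.sleep(0.005)      # stand-in for a real query
    return {"id": 1}

@traced("handler")
def handler(event, context):
    return fetch_user()

handler({}, None)
# SPANS now holds one timed entry per traced call, innermost first.
print([span["name"] for span in SPANS])  # → ['sql.query', 'handler']
```

A log line tells you that something happened; these spans tell you where the time went, which is the difference being described.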
45:56
Alright, let's close this out Tony with
45:58
one more thing that I'm not sure
46:00
how constructive it can be. There probably
46:02
is some ways, but testing,
46:04
right? Yeah. You
46:07
can't set up your own, if you could set
46:09
up your own Lambda cluster, you might just
46:11
run that for yourself, right? How are you
46:13
going to do this, right? Yeah, to some
46:15
extent you can, right? There's a Lambda Docker
46:17
image that you could run locally and
46:20
you can do that. But if your
46:22
Lambda is reaching out to DynamoDB, I
46:24
guess there's technically a DynamoDB container as
46:27
well. It's a
46:29
lot of overhead to set this up, compared
46:31
to just doing flask run or whatever the
46:33
command is to spin up a Flask app. I
46:35
press the go button in my IDE and
46:38
now it's running... That's
46:40
definitely... And there's more and more tooling
46:42
coming out for this kind of
46:45
stuff. But if you can unit test,
46:47
there's no reason you can't just run
46:50
unit tests locally. But
46:52
when you start getting into the integration test,
46:55
you probably get to the point where maybe
46:58
you just deploy to
47:00
actual services and it's always trade-offs,
47:02
right? There's costs associated with it.
47:04
There's the overhead of, okay, how
47:06
can I deploy to an
47:09
isolated environment? But maybe it interacts
47:11
with another microservice. There's definitely trade-offs,
47:13
but testing is... I could see
47:15
that you might come up with
47:17
a QA environment almost, like a
47:19
mirror image that doesn't share any
47:21
data but is sufficiently close. But
47:23
then you're running... I mean, that's
47:25
a pretty big commitment because you're
47:27
running a whole replica of whatever
47:29
you have. Yeah, QA environments are
47:31
great, but you might even want
47:33
lower than QA. You might want
47:35
to have a dev or like
47:37
a one place I worked at,
47:39
we would spin up an entire
47:41
environment for every PR. You
47:44
could actually... Like when you created a
47:46
PR, that environment got spun up and
47:48
it ran your integration tests and system
47:50
tests against that environment, which
47:52
simulated your prod environment a little
47:54
bit better than running locally
47:56
on your machine. Certainly a challenge
47:59
to test this. And yeah, I can
48:01
imagine that it is. Yeah, and there's always these
48:03
one-off things too, right? You can't
48:06
really simulate the memory limitation
48:08
of a Lambda locally as
48:10
much as when you deploy it, and things
48:12
like that. So that would be much, much
48:14
harder. Maybe you could run a Docker
48:17
container and put a memory limit on it,
48:19
that might work, but you're back into
48:21
more and more DevOps to avoid DevOps.
48:24
So there it goes. But interesting. All right,
48:26
anything else you want to add to this
48:28
conversation before we wrap it up? We're about out
48:31
of time here. I think
48:33
hopefully we covered enough. Yeah, there's a lot
48:35
of good resources. The tooling that I've mentioned,
48:37
like Powertools and Pants, just
48:39
amazing communities. Powertools has a
48:42
Discord; go on there and ask
48:44
for help and they're super helpful. Pants
48:46
has a Slack channel; you can join
48:48
their Slack and ask about things. And
48:50
so those two communities have been really
48:52
good and really helpful in this. A
48:54
lot of good talks are available on
48:57
YouTube too. So yeah, there's definitely resources
48:59
out there, and a lot of people have
49:01
been at this for a while. So excellent. You
49:03
know, just start from, just create a function and
49:06
start typing. Yeah, cool. All right, before you
49:08
get out of here though let's get your
49:10
recommendation for a PyPI package, something
49:13
notable, something fun. We've talked a lot about
49:16
it, but Powertools. It's definitely one that
49:18
is getting used every day for me.
49:20
So that, yeah, Powertools
49:22
for AWS Lambda and Python. They actually support
49:25
other languages too, so
49:27
they have the same functionality
49:29
for Node.js, for TypeScript, and
49:31
.NET. And so sure, this
49:33
one. Definitely, leveraging Powertools
49:35
and Pydantic together just really
49:38
made serverless a lot of fun
49:41
to write. Yeah, definitely doing great things there.
49:43
Excellent. I'll put all those things in the
49:45
show notes. Yeah, it's been great to talk
49:47
to you. Thanks for sharing your journey down
49:50
the serverless path. Yeah,
49:52
thanks for having me. Yeah, enjoyed chatting.
49:54
Say bye. Yeah, this has been
49:56
another episode of Talk Python to Me. Thank
49:59
you to our sponsors. Be sure to check out what
50:01
they're offering. It really helps support the show. Take
50:05
some stress out of your life: get
50:07
notified immediately about errors and performance
50:09
issues in your web or mobile
50:11
applications with Sentry. Just visit
50:13
talkpython.fm/sentry and get
50:15
started for free. And be sure to
50:18
use the promo code talkpython, all
50:20
one word. Mailtrap,
50:22
an email delivery
50:24
platform that developers love. Try
50:26
for free at mailtrap.io.
50:30
Want to level up your Python? We have one
50:32
of the largest catalogs of Python video courses
50:34
over at Talk Python. Our content ranges
50:37
from true beginners to deeply advanced topics like
50:39
memory and async. And best of all,
50:41
there's not a subscription in sight. Check it
50:43
out for yourself at training.talkpython.fm.
50:45
Be
50:48
sure to subscribe to the show, open your favorite
50:50
podcast app, and search for Python. We should be
50:52
right at the top. You can also
50:54
find the iTunes feed at slash iTunes,
50:56
the Google Play feed at slash Play,
50:58
and the Direct RSS feed at
51:00
slash RSS on talkpython.fm. We're live
51:02
streaming most of our recordings these days. If
51:05
you wanna be part of the show and
51:07
have your comments featured on the air, be
51:09
sure to subscribe to our YouTube channel at
51:11
talkpython.fm/youtube. This
51:13
has been your host, Michael Kennedy. Thanks so much for listening.
51:15
I really appreciate it. Now get out there and
51:18
write some Python code.