Parallel Python Apps with Sub Interpreters

Released Saturday, 3rd February 2024

Episode Transcript


0:00

It's an exciting time for the capabilities

0:02

of Python. We have the faster CPython

0:04

initiative going strong, the recent async

0:07

work, the adoption of typing.

0:09

And on this episode, we discuss

0:11

a new isolation and parallelization capability

0:13

coming to Python through sub interpreters.

0:16

We have Eric Snow, who spearheaded the work to

0:19

get them added to Python 3.12, and

0:21

is working on the Python API for them

0:23

in 3.13, along with Anthony

0:25

Shaw, who's been pushing the boundaries of

0:27

what you can already do with sub

0:29

interpreters. This is Talk Python to

0:31

Me, episode 446, recorded

0:34

December 5th, 2023. Welcome

0:51

to Talk Python to Me, a

0:53

weekly podcast on Python. This is

0:55

your host, Michael Kennedy. Follow me

0:57

on Mastodon, where I'm at mkennedy,

0:59

and follow the podcast using at

1:01

TalkPython, both on fosstodon.org. Keep

1:03

up with the show and listen to

1:06

over seven years of past episodes at

1:08

talkpython.fm. We've started streaming most

1:10

of our episodes live on YouTube. Subscribe

1:12

to our YouTube channel over at

1:14

talkpython.fm slash YouTube to get notified about

1:17

upcoming shows and be part of that

1:19

episode. This episode is

1:21

sponsored by the PyBites Developer Mindset Program.

1:23

PyBites' core mission is to help

1:26

you break the vicious cycle of

1:28

tutorial paralysis through developing real-world applications.

1:31

The PyBites Developer Mindset Program will help

1:33

you build the confidence you need to

1:35

become a highly effective developer. And

1:38

it's brought to you by Sentry. Don't

1:40

let those errors go unnoticed. Use

1:42

Sentry. Get started at talkpython.fm slash

1:45

Sentry. Anthony, Eric, hello,

1:47

and welcome to Talk Python. Hey. Hey,

1:49

guys. It's really good to have you

1:51

both here. You both have been on

1:53

the show before, which is awesome. And

1:56

Eric, we've talked about sub-interpreters before, but

1:58

they were kind of a dream

2:00

almost at the time. That's right. Now,

2:03

they feel pretty real. That's right. Yeah,

2:05

it's been a long time coming. And

2:07

I think the last time we talked, I've always

2:09

been hopeful that it seems like it was

2:11

getting closer. So with 312, we

2:13

were able to land the per-interpreter

2:15

GIL, which kind of was the last piece

2:18

of the foundational part I wanted to do. A lot of

2:20

cleanup, a lot of work that had to get done, but

2:23

that last piece got in for 312. Excellent.

2:26

Excellent. So good. So

2:28

let's do a quick check-in with you all. It's

2:30

been a while. Anthony, I'll start

2:32

with you. I got a quick intro for people who

2:35

don't know you, although I don't know how that's possible.

2:37

And then just what you've been up to. Yeah,

2:40

I'm Anthony Shaw. I work at

2:42

Microsoft. I lead the Python advocacy team. And

2:46

I do lots of Python

2:48

stuff, open source, testing things,

2:50

building tools, blogging, building

2:54

projects, sharing things. You have a book? Something

2:56

about the end of Python? I have a book as well. I

2:59

forgot about that. Yeah, there's a

3:01

book called CPython Internals, which is a book all

3:03

about the Python compiler and how it works.

3:06

You suppressed the memory of writing it. Like it

3:08

was too traumatic. It's down there. Yeah, I'm not

3:10

used to getting. Yeah,

3:13

that book was for 3.9. And

3:16

people keep asking me if I'm going to

3:18

update it for 3.13 maybe,

3:20

because things keep changing. Things

3:24

have been changing at a more rapid pace than they

3:26

were a few years ago as well. So that maybe

3:28

makes it more challenging. Yeah, recently

3:31

I've been doing some more research as

3:34

well. So I just finished my master's

3:37

a few months ago and I

3:39

started my PhD and I'm looking

3:41

at parallelism and Python as one of

3:43

the topics. So I've been quite

3:45

involved in sub-interpreters and the free-threading

3:48

project and some other stuff as

3:50

well. Awesome. Congratulations on the

3:52

master's degree. That's really great. And I

3:54

didn't realize you were going further. So

3:56

Eric. Eric Snow. So

3:58

I've been working on Python. as a core

4:00

developer for over 10 years now, but

4:03

I've been participating for even longer

4:06

than that. And it's been

4:08

good. I've worked on a

4:10

variety of things, a lot of

4:12

stuff down in the core runtime

4:14

of CPython. And I've been working

4:16

on this, trying to find a

4:18

solution for a multi-core Python since,

4:20

really since 2014. Yeah.

4:23

So, I've been slowly, ever so

4:25

slowly working towards that

4:28

goal, and we've made it with 3.12, and

4:31

there's more work to do. But that's kind of

4:33

a lot of the stuff that I've been working

4:35

on. I'm at Microsoft, but don't work with Anthony

4:37

a whole lot. I work

4:39

on the Python performance team

4:42

with Guido and Brandt Bucher and

4:44

Mark Shannon, Irit Katriel, and we're

4:46

just working generally to make Python

4:49

faster. So my part of that

4:51

has involved subinterpreters. Interestingly

4:53

enough, it's only

4:56

really this year that I've been able

4:58

to work on all this sub-interpreter stuff

5:00

full-time. Before that I was working

5:02

mostly on other stuff. So

5:04

it's kind of a, this year's been a good

5:07

year for me. Yeah, I would say that must

5:09

be really exciting to get the like, you know

5:11

what? Why don't you just keep, just do that.

5:13

That'd be awesome for us. Yeah, it's been awesome.

5:15

Well, maybe since you're on the team, it's

5:18

a quick check-in on faster CPython. It's

5:20

made a mega difference over the last

5:22

couple of releases. Yeah, it's kind of

5:24

interesting. Mark Shannon

5:26

definitely has a vision. He's

5:28

developed a plan like years ago,

5:31

but we finally were able

5:33

to put him in a

5:35

position where he could do something

5:37

about it. And we've all been kind of pitching

5:39

in. A lot of it

5:41

has to do with just applying some

5:43

of the general ideas that are out

5:45

there regarding dynamic languages and optimization. Things

5:48

have been applied to other things like

5:50

HHVM or

5:52

various JavaScript runtimes.

5:55

So a lot of

5:57

specialization, adaptive specialization. A

6:01

few other techniques. But right now,

6:03

a lot of that stuff

6:05

we were able to get in first

6:08

in 3.11 and in 3.12.

6:10

There was an increase in more

6:12

of that stuff, and we're gearing up

6:15

to effectively add a JIT into

6:17

CPython, and that's required a

6:19

lot of kind of behind-the-

6:22

scenes work to get things in

6:24

the right places, and so we're

6:26

sort of targeting 3.13 for

6:29

that. So right now, I

6:31

think where things are at,

6:33

it's kind of break-even performance-

6:35

wise, but there's a lot of

6:38

stuff that we can do, a

6:40

lot of optimization work that really hasn't

6:42

even been done yet, and that'll

6:44

take that performance improvement up pretty

6:46

drastically. It's kind of hard to

6:49

say where we're gonna be, but

6:51

for CPython 3.13 it's looking pretty

6:53

good for at least some

6:55

performance improvements because of the

6:57

new optimization work.

7:00

That's exciting. Now, we have no real

7:02

JIT at the moment, right? But not

7:04

in CPython. I mean, PyPy

7:06

has had one for ages.

7:08

And you know

7:10

what? That's actually super exciting because

7:14

I feel like that can be another

7:17

big boost potentially. Yeah, with the JIT

7:19

you can do things like

7:21

inlining of small methods and

7:23

optimization based on type information and

7:25

all that. So one of the

7:27

exciting parts for me is that

7:30

a lot of this work, not long

7:30

after I joined the team, so

7:32

about two years ago, two

7:34

and a half years ago, somewhere

7:36

in there, pretty early on, we

7:38

started... I reached out to other

7:41

folks, other projects that were interested

7:43

in performance, the performance of

7:45

Python code, and we've worked

7:47

pretty hard to cooperate with them.

7:50

So, Cinder, the team over

7:52

at Meta, they have a

7:54

lot of interest in making sure

7:56

Python is fast in this space, and

7:59

so we've actually worked very closely with

8:01

them and they're able to take advantage of

8:03

all the work that we've done, which is

8:05

great. Yeah, there seems to be some synergy

8:07

between the Cinder team and the faster CPython

8:09

team. So awesome. But let's

8:12

focus on a part that is

8:15

there, but not really utilized very

8:17

much yet, which is the sub-interpreters.

8:19

So back in, what is this,

8:22

2019? Eric,

8:24

I had you on and we talked

8:26

about can sub-interpreters free us from Python's

8:28

GIL, and since then

8:31

this has been accepted,

8:33

but it's Anthony's fault that we're here.

8:36

Because Anthony posted over on Mastodon, hey,

8:38

here's a new blog post, me running

8:40

Python parallel applications with sub-interpreters. How

8:42

about we use Flask and FastAPI

8:45

and sub-interpreters and make that

8:47

go fast. That sounded more

8:51

usable at the Python level than I

8:53

realized the sub-interpreter stuff was. So that's

8:55

super exciting both of you. Yeah, it's

8:57

been fun to play with it and

8:59

try and build applications on it and

9:02

stuff like that. Working

9:04

with Eric probably over the last couple

9:06

of months on things that we've discovered

9:08

in that process. Especially

9:12

with the extension stuff. Datetime? Yeah,

9:14

that's one. With

9:17

C extensions and I think that some of those

9:19

challenges are going to be the

9:22

same with free threading as well. It's

9:25

how C extensions have state where they put

9:28

it, whether that's thread safe.

9:31

As soon as you open up

9:33

the possibility of having multiple GILs

9:35

in one process then what

9:38

challenges does that create? Absolutely.

9:41

Well, I guess maybe some

9:43

nomenclature first: no-GIL Python

9:45

or sub-interpreter, free threaded. Is

9:47

that what we're calling it?

9:49

How do we speak about this? It's

9:53

not quite settled but I think

9:55

a lot of people have taken to referring to

9:57

it as free threaded. I can go with that.

10:00

People still talk about no-GIL, but free-threaded

10:02

is probably the best bet. Are

10:05

you describing what it does

10:07

and why you care, or are you describing the

10:09

implementation? The implementation is it has no GIL, so it

10:11

can be free-threaded, or it has subinterpreters, so it

10:13

can be free-threaded. But really, what you want is

10:16

the free-threaded part. You don't care actually about the

10:18

GIL too much, right? It's

10:21

interesting. With subinterpreters, it really

10:23

isn't necessarily a free-threaded model.

10:27

It's kind of free-threaded only in

10:29

the part at which you're moving

10:32

between interpreters. So you only have to care about

10:34

it when you're interacting between interpreters. The rest of

10:36

the time, you don't have to worry about it.

10:40

With the no-GIL, it's more what

10:42

we think of as free-threading,

10:44

where everything is unsafe. Right.

10:48

For people who don't know, the no-GIL stuff is what's coming out of the

10:50

Cinder team and from Sam Gross. That was also

10:52

approved, but was the biggest caveat I've ever seen

10:54

on an approved PEP. Like,

10:57

we approved this, but we also reserved the

10:59

right to completely undo it and not approve it

11:01

anymore. But it's also a

11:03

compiler flag that is an optional

11:05

off-by-default situation, so it should

11:08

be interesting. Yeah, we can maybe compare and

11:10

contrast them a bit later as well. Yeah,

11:12

absolutely. Well, let's start with what is an

11:15

interpreter. So then how

11:17

do we get to subinterpreters, and then what work did

11:19

you have to do? I heard there was a few

11:21

global variables that are being shared. Oh, my gosh. Yeah.

11:24

So let's give people a quick rundown of

11:26

what is this and how is it, this

11:29

new feature in 3.12, changing

11:31

things. Yeah, subinterpreters, in

11:34

a Python process, when you

11:36

run Python, everything that happens,

11:38

all the machinery that's running

11:40

your Python code is running

11:42

with a certain amount of

11:44

global state. And

11:46

historically, you can think of

11:48

it as across the whole process. You've got

11:50

a bunch of global state. If

11:53

you look at all the stuff

11:55

like in the sys module, sys.modules

11:57

or sys.whatever, All those things are

11:59

shared. Third, across the whole the

12:01

whole runtime. So if you have

12:03

different threads, for instance, running,

12:05

they'll share that stuff even though

12:07

you have different code run in

12:10

each thread. So that runtime

12:12

state, essentially, is everything that

12:14

Python needs in order to run.

12:16

But what's interesting is that the

12:18

vast majority of it you can

12:20

think of as actually per-interpreter,

12:22

and so that state if we

12:24

treat as isolated and we're

12:26

very careful about it then we

12:28

can have multiple of them. That

12:30

means that when your Python code

12:32

runs, it can run with

12:34

a different set of this global

12:36

state: different modules imported, different things

12:38

going on, different threads that are

12:41

unrelated and really don't affect each

12:43

other at all. And then,

12:45

with that in mind, you can

12:47

take it one step further and

12:49

say, well, let's completely isolate those,

12:51

and, like, how deep can we go? In fact,

12:53

each can have its own GIL, and

12:55

then at that point, that's

12:57

where the magic can happen. So

12:59

that's kind of been my

13:01

goal in this whole project,

13:03

with isolated state. Because once

13:05

you get there, then, it

13:07

opens up a lot of possibilities

13:09

when it comes to concurrency.

13:12

And parallelism, how about that, Anthony,

13:14

running with this in one process, showing

13:16

it off? Yeah, absolutely. This

13:20

portion of Talk Python to Me is brought to you

13:22

by the PyBites Python Developer

13:25

Mindset program. It's

13:27

run by my two friends and frequent guests,

13:29

Bob Belderbos and Julian Sequeira. Instead

13:32

of me telling you about it, let's hear

13:34

them describe their program.

13:36

In a world where AI, machine learning,

13:38

and large language models are

13:41

revolutionizing how we live and

13:43

work, Python stands at

13:45

the forefront. Don't get left

13:47

behind in this technological

13:49

evolution. Tutorial paralysis?

13:52

It's a thing of the past. With

13:55

PyBites coaching, you move beyond endless

13:57

tutorials to become an efficient, skilled

13:59

Python developer. We focus on

14:01

practical, real-world skills that prepare you for

14:03

the future of tech. Join

14:06

us at PyBites and step into

14:08

a world where Python isn't just

14:10

a language, but a key to

14:12

unlocking endless possibilities in the tech

14:14

landscape. Check out our

14:16

12-week PDM program and embark on a

14:19

journey to Python Mastery. The

14:21

future is Python, and with PyBites, you're

14:23

one step ahead. Apply

14:26

for the Python developer mindset today. It's

14:29

quick and free to apply. The

14:31

link is in your podcast player show notes. Thanks

14:34

to PyBites for sponsoring the show. One

14:37

thing I don't know the answer to, but it

14:39

might be interesting, is Python has a memory

14:43

management story in front of the

14:45

operating system, virtual memory that's assigned

14:47

to the process with tools, arenas,

14:50

blocks, those kinds of things. What's

14:52

that look like with regard to subinterpreters? Each

14:55

subinterpreter have its own chunk or set

14:57

of those for the memory it allocates,

14:59

or is it still a shared one

15:01

thing per process? It's

15:04

per interpreter. This is something that was

15:06

very global. Like

15:09

you pointed out earlier, this whole project was

15:11

all about taking all sorts of global

15:13

state that was actually stored in C

15:15

global variables all over the place, right?

15:18

taking those together into one place

15:21

and moving those down from the

15:25

process global state down into

15:27

each interpreter. One

15:29

of those things was all of

15:31

the allocator state that we have

15:33

for objects. Python

15:35

has this idea of different levels

15:38

of allocators. The object allocator is

15:40

what's used heavily for Python objects,

15:42

of course, but some other state

15:44

as well. Object

15:47

allocator is the part that has all

15:49

the arenas and everything like you were

15:51

saying. Part

15:53

of what I did before we could

15:55

make the GIL per-interpreter, we had

15:57

to make the allocator state per interpreter.

16:00

Well, the reason I think that is interesting asking

16:02

about it, one because of the

16:04

GIL, obviously, but the other one is, it seems

16:06

to me like these sub interpreters could be used

16:08

for a little bit of stability

16:10

or isolation, or run some

16:12

kind of code. And when that line exits, I

16:15

want the memory freed, I want models unloaded, I

16:17

want it to go back to the way it

16:19

was. You know what I mean? There

16:21

is normally in Python, even if the

16:23

memory becomes free, right, it's still got some of

16:25

that like, well, we allocated the stuff now we're

16:28

holding it to refill it. And then you don't

16:30

unimport modules, but modules can

16:32

be pretty intense, actually, if

16:35

they start allocating a bunch of stuff themselves and

16:37

so on. What do you guys think about this

16:39

as an idea, as an aspect of it? Yeah,

16:41

there's one example, it's been coming across

16:43

recently, and this is a pattern, I think it's

16:47

a bit of an anti-pattern, actually,

16:49

but some Python packages, they

16:52

store some state information

16:54

at the module level. So

16:57

an example is a SDK that I've been

16:59

working with, which has just been

17:01

rewritten to stop it from doing

17:03

this, but you would put the

17:05

API key of the SDK, you

17:08

would import it, so you'd import

17:10

x, and then do like x.apiKey

17:12

equals. So it basically

17:14

stores the API key in

17:16

the module object, which is

17:19

fine if you've imported the

17:22

module once and you're using it once. What

17:25

you see is that if you put that

17:28

in a web application, it just assumes that

17:30

everyone uses the same key. So

17:32

you can't import

17:35

that module and then connect to it

17:37

with different API keys, like you'd have

17:39

different users or something. Right. So

17:41

you have to have a multi-tenancy, right,

17:43

where they would say, enter their chat

17:46

GPT, open AI key, and then they

17:48

could work on behalf of that, right?

17:50

That potentially is something like that, right?
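
To make the anti-pattern concrete, here's a minimal sketch of what's being described; the sdk module here is a hypothetical stand-in, loosely modeled on the old OpenAI-style SDKs Anthony mentions:

```python
import types

# Stand-in for a hypothetical SDK that keeps its config on the module object:
sdk = types.ModuleType("sdk")
sdk.api_key = None

def call_service() -> str:
    return f"calling with {sdk.api_key}"

sdk.api_key = "tenant-a-key"   # tenant A configures "the" SDK...
sdk.api_key = "tenant-b-key"   # ...then tenant B silently clobbers it, because
                               # there is only one module object per interpreter
print(call_service())          # tenant A's request now uses tenant B's key
```

Per-interpreter isolation gives each subinterpreter its own copy of that module state, which is exactly the escape hatch being discussed here.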

17:52

Yeah, exactly. So that's kind of like

17:54

an API example, but there are other

17:56

examples where, let's say you're loading data

17:58

or something, and it's... and it

18:00

stores some temporary information somewhere in like

18:03

a class attribute or even like a

18:05

module attribute like that then if

18:08

you've got one piece of code loading

18:10

data and then in another thread in

18:12

a web app or just in another

18:14

thread generally, you're reading another piece of

18:16

data and they're sharing state somehow and

18:18

you've got no isolation. Some of that

18:20

is due to the way that people

18:23

have written the Python code or the

18:25

extension code has

18:27

kind of been built around, oh,

18:29

we'll just put this information here and they

18:31

haven't really thought about the isolation. Sometimes

18:34

it's because on the C

18:36

level especially, because the GIL was

18:38

always there, they've never had to worry

18:41

about it. So, you know, that's you

18:43

could just have a counter for example,

18:45

or there's an object which is a

18:47

dictionary that is a cache of something

18:50

and you just put that as a static

18:52

variable and you just read and write from

18:54

it. You've never had to worry about thread

18:56

safety because the GIL was there to kind

18:58

of protect you. You probably shouldn't

19:00

have built it that way, but it didn't really

19:03

matter because it worked. What

19:05

about this Anthony? What if we can write out

19:07

one line? It'll probably be safe, right? If we

19:10

can fit it just one line of Python code,

19:12

it'll be okay. Yeah. dict.add,

19:16

what's wrong there? dict.get, fine. Yeah.

19:18

So yeah, what we're seeing within

19:21

subinterpreters, I think what's the

19:23

concept that people will need

19:25

to kind of understand is

19:27

where the isolation is because

19:30

there are different models for writing parallel

19:33

code and at the moment we've got

19:35

coroutines which is

19:38

asynchronous so it can run concurrently. So that's

19:40

if you do async and await,

19:42

or if you use the old coroutine decorator.

19:45

You've also got things like generators which

19:47

are kind of like a concurrent pattern.

19:50

You've got threads that you can create. All

19:53

of those live within the same

19:56

interpreter and they share the same information.

19:58

So you don't have to... if

20:00

you create a thread inside that

20:02

thread, you can read a variable from

20:04

outside of that thread and

20:06

it doesn't complain. You don't

20:09

need to create a lock at the moment, although

20:11

in some situations, you probably should. And

20:15

you don't need to re-import modules and

20:17

stuff like that, which can

20:19

be fine. And then at the other

20:21

extreme, you've got multi-processing, which is a

20:24

module in the standard library that allows

20:26

you to create extra Python processes and

20:28

then gives you an API to talk

20:31

to them and share information between them.

20:33

And that's the other extreme, which is

20:35

the ultimate level

20:39

of isolation. You've got a whole separate

20:41

Python process. But

20:43

instead of interacting with it via the command

20:45

line, you've got this nice API where you

20:48

can almost treat it like it's in the

20:50

same process as the one you're running from.

20:52

You get a return value from a process, for example. But

21:00

the thing is, if you peel back the

21:02

covers a little bit, then how it sends

21:04

information to the other Python process involves a

21:07

lot of pickles and

21:09

it's not particularly efficient. And also,

21:12

a Python process has a lot

21:14

of extra stuff that you maybe

21:16

necessarily didn't even need. You

21:18

get all this isolation from having it, but you

21:21

have to import all the modules again. You have

21:23

to create the arenas again or the memory allocation.

21:25

You have to do all the startup process again,

21:27

which takes a lot of time. It's like at

21:30

least 200 milliseconds. You're re-reading the Python code again,

21:32

right? At least the PYC. Yeah,

21:34

exactly. So you basically

21:36

created a whole separate Python. And

21:38

if you do that just to

21:40

run a small chunk of code,

21:42

then it's not probably the best

21:44

model at all. You have a

21:46

nice graph that shows the rate

21:48

as you add more work and

21:50

you need more parallelism. One

21:55

thing that struck me coming to Python from other

21:57

languages like C++ and C#, there's

22:00

very little locks and events threading,

22:02

coordinating stuff in Python. And I

22:04

think that there's probably a ton

22:06

of Python code that actually is

22:08

not actually thread safe, but people

22:10

kind of get away with it

22:12

because the context switching is so

22:14

coarse-screened, right? Like you say, well,

22:16

the GIL's there, so you only

22:19

run one instruction at a time,

22:21

but like this temporary invalid

22:23

state you went into as part of like, does

22:25

your code running like, took money out of this

22:27

account, and then I'm gonna put it into that

22:29

account. Those are multiple Python lines, and there's

22:31

nothing saying they couldn't get interrupted between

22:33

one to the other, and then things

22:35

are busted, right? I feel there's some

22:38

concern about adding this concurrency, like, oh, we're

22:40

having to worry about it, like you probably

22:42

should be worrying about it now. Not as

22:44

much necessarily, but it's, I feel

22:46

like people are getting away with it because

22:48

it's so rare, but it's not, it's a

22:50

non-zero possibility. What do you guys think?
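
The two-line bank-transfer scenario above is the classic version of this; a minimal sketch showing a plain read-modify-write losing updates between threads:

```python
import threading

balance = 0

def deposit(times: int) -> None:
    global balance
    for _ in range(times):
        balance += 1   # read-modify-write: three steps a thread switch can split

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 400000: updates can be lost between the read and the
# write even with the GIL, since the GIL only makes individual bytecodes atomic.
print(balance)
```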

22:52

Yeah, and those are real concerns. It's

22:55

as, there's been lots of

22:57

discussion with the no-gill work about

22:59

really what matters, what

23:01

we need to care about, really

23:04

what impact it's gonna have, and

23:06

I mean, it's probably gonna have

23:08

some impact on people with Python

23:11

code, but it'll especially have impact

23:13

on people that maintain extension modules.

23:16

But it really is all

23:19

the pain that comes with free threading.

23:23

That's what it introduces, the

23:26

benefits as well, of course. But

23:29

what's interesting, I'd like to think of, subinterpreters

23:32

kind of provide the same facility,

23:34

but they force you to be

23:36

explicit about what gets shared, and

23:39

they force you to do it

23:41

in a thread-safe way. So it's,

23:44

you can't do it without thread safety,

23:46

and so it's not an issue. And

23:48

it doesn't hurt that people

23:51

really haven't used subinterpreters

23:53

extensively up till now, whereas threads are

23:55

kinda something that's been around for quite

23:57

a while. Yeah, it has been. Well,

24:00

it's subinterpreters have traditionally just been a thing

24:02

you can do from C extensions or the

24:04

C API, which really limits them from being

24:07

used in just a standard, like, I'm working

24:09

on my web app, so let's just throw

24:11

in a couple of subinterpreters, you know? But

24:15

in 3.13, is that when we're looking

24:17

at having a Python-level API for creating

24:19

interacting with? Yeah, I've

24:21

been working on a PEP for that,

24:24

PEP 554, but recently created a new PEP

24:26

to replace that one, which is PEP

24:28

734, that's the one. So

24:32

that's the one that I'm targeting for 3.13.

24:35

And it's pretty straightforward,

24:37

create interpreters and

24:40

kind of look at them and,

24:42

with an interpreter, run some code,

24:44

you know, pretty basic stuff. And

24:46

then also, because subinterpreters aren't quite

24:49

so useful if you can't cooperate

24:51

between them, but there's also a

24:54

queue type, you know, you push

24:56

stuff on and you pop stuff

24:58

off and it's pretty basic. So

25:00

you could write something like, oh wait,

25:03

queue.put or queue.pop, something like that.
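
Roughly, the API being described in the PEP 734 draft looks like this; the module wasn't in the standard library yet at recording time, so treat the import and names as provisional:

```python
import interpreters  # provisional module name from the PEP 734 draft

interp = interpreters.create()          # create a subinterpreter
interp.exec("x = 6 * 7")                # run a script in it, in this thread
interp.exec("print('x is', x)")         # state persists between calls

queue = interpreters.create_queue()     # the queue type mentioned here:
queue.put(42)                           # push on one side, pop on the other
```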

25:05

Yeah. So, yeah, this is

25:07

really cool. And the other thing that I want to talk

25:09

about here, looks like you

25:12

already have it in the PEP, which

25:14

is excellent, somehow I missed that before,

25:16

is that there's a, we have thread

25:18

pool executors, we have multiprocessing pool executors,

25:20

and this would be an interpreter pool

25:22

executor. What's the thing in there? People

25:25

are already familiar with using concurrent of futures.

25:28

So if we can present the same

25:31

API for subinterpreters, it

25:33

makes it really easy because you can set

25:35

it up with multiprocessing or threads and switch

25:38

it over to one of the other pool

25:40

types without a lot of fuss. Right, basically

25:42

with a clever import statement, you're good to

25:44

go, right? From whatever import,

25:46

like multiprocessing pool executor as pool executor

25:48

or interpreter pool executor as pool executor,

25:50

and then the rest of the code

25:53

could stay potentially. Yeah. That's what I expected.
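
Concretely, the "clever import statement" idea looks like this; ThreadPoolExecutor and ProcessPoolExecutor exist today, while InterpreterPoolExecutor is the proposed addition and didn't ship yet at recording time:

```python
# Swap which pool is used by changing one import; the rest stays the same.
from concurrent.futures import ThreadPoolExecutor as PoolExecutor
# from concurrent.futures import ProcessPoolExecutor as PoolExecutor
# from concurrent.futures import InterpreterPoolExecutor as PoolExecutor  # proposed

def work(n: int) -> int:
    return n * n

if __name__ == "__main__":
    with PoolExecutor(max_workers=4) as pool:
        print(list(pool.map(work, range(10))))
```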

25:55

The communication, like you gotta, it's gotta

25:57

kind of be a basic situation. Yeah.

26:00

because there are assumptions. Yeah,

26:02

and it should work mostly the

26:04

same way that you already use

26:06

it with threads and multi-processing. We'll

26:09

see, there's some limitations with subinterpreters

26:12

currently that I'm sure we'll work

26:14

on solving as we

26:16

can. So we'll see,

26:18

it may not be quite as efficient

26:21

as I'd like at first with the

26:23

interpreter pool executor, because we'll probably end

26:25

up doing some pickling stuff kind of

26:27

like multi-processing guys. Although I

26:29

expect it'll be a little more efficient. This

26:32

portion of TalkPython to me is brought to you

26:34

by Sentry. You know Sentry for the air monitoring

26:36

service, the one that we use right here at

26:38

TalkPython, but this time I want to

26:40

tell you about a new and free workshop, Taming

26:43

the Kraken, managing a Python

26:45

monorepo with Sentry. Join

26:48

Salma Alam-Naylor, Senior Developer Advocate

26:50

at Sentry and David Winterbottom,

26:52

Head of Engineering at Kraken

26:54

Technologies, for an inside look

26:56

into how he and his team develop,

26:58

deploy, and maintain a rapidly

27:01

evolving Python monorepo with over four

27:03

million lines of code that powers

27:05

the Kraken utility platform. In

27:08

this workshop, David will share how his department

27:10

of 500 developers who deploy around 200 times

27:12

a day, use

27:15

Sentry to reduce noise, prioritize issues, and

27:17

maintain code quality without relying on a

27:19

dedicated QA team. You'll learn

27:22

how to find and fix root causes

27:24

of crashes, ways to prioritize the most

27:26

urgent crashes and errors, and tips

27:28

to streamline your workflow. Join them

27:30

for free on Tuesday, February 27th, 2024

27:32

at 2 a.m. Pacific time. Just

27:36

visit talkpython.fm slash sentry-monorepo, that

27:38

link is in your podcast

27:40

player show notes. 2

27:43

a.m. might be a little early here in the

27:45

U.S., but go ahead and sign up anyway if

27:47

you're a U.S. listener, because I'm sure they'll email

27:49

you about a follow-up recording as well. Thank

27:52

you to Sentry for supporting this episode. I

27:55

was gonna save this for later, but I think maybe

27:57

it's worth talking about now. So first of all, Anthony.

28:00

you wrote a lot about and have

28:02

actually had some recent influence on what

28:04

you can pass across say the starting

28:06

code and then the running interpreter that's

28:08

kind of like the sub interpreter doing

28:10

extra work. Want to talk about like

28:12

what data exchange there is? Yeah, so

28:14

when you're using any of these models,

28:17

multi-processing, sub-interpreters,

28:19

or threading, I guess you've got three

28:23

things to worry about. One is how do you create

28:25

it in the first place? So how do you create

28:27

a process? How do you create an interpreter? How do

28:29

you create a thread? The second thing

28:31

is how do you send data to it? Because

28:33

normally the reason you've created them is because you

28:35

need it to do some work. So

28:38

you've got the code which is when you

28:40

spawn it, when you create it. The

28:43

code that you want it to run but

28:45

that code needs some sort of input and

28:47

that's probably going to be Python objects. It

28:49

might be reading files for example or listening

28:52

to a network socket. So it might be

28:54

getting the input from somewhere

28:56

else but typically you need to give

28:58

it parameters. Now

29:00

the way that works in multi-processing

29:03

is mostly reliant on pickle. So

29:05

if you start

29:08

a process and you give it some

29:10

data either as a parameter or you

29:12

create a queue and

29:15

you send data down the queue or

29:17

the pipe for example, it pickles

29:19

the data. So you can put a Python

29:21

object in that uses the pickle module, it

29:23

converts that into a byte string and then

29:25

it basically converts the byte string on the

29:28

other end back into objects.
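
That pickling round trip is what multiprocessing does under the covers whenever you pass an object through one of its queues:

```python
import multiprocessing as mp

def worker(q: mp.Queue) -> None:
    item = q.get()                     # unpickled here, in the child process
    print("child got:", item)

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    q.put({"user": "anthony", "scores": (1, 2, 3)})   # pickled on the way in
    p.join()
```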

29:30

That's got its limitations because not everything can

29:33

be pickled and

29:35

also some objects especially if you've

29:37

got like an object which has

29:39

got objects in it and it's

29:42

deeply nested or you've got a

29:44

big complicated dictionary or something that's got

29:46

all these strange types in it which

29:48

can't necessarily be rehydrated just

29:51

from a byte string. An

29:53

alternative actually I do want to point out because

29:56

for people who come across this issue quite a

29:58

lot, there's another package called dill on

30:00

PyPI. So if you think of

30:03

pickle, think of dill. Dill

30:06

is very similar to Pickle. It has

30:08

the same interface, but

30:10

it can pickle slightly more exotic

30:12

objects than Pickle can. So

30:15

often if you find that you've tried to

30:18

pickle something, you try to share it with

30:20

a process or a sub-interpreter, and it comes

30:22

back and says, this can't be pickled, you

30:25

can try dill and see if that works.
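
A quick sketch of the pickle-versus-dill difference (dill is the third-party package on PyPI that Anthony mentions):

```python
import pickle
import dill  # third-party: pip install dill

f = lambda x: x + 1          # lambdas are one of the "exotic" cases

try:
    pickle.dumps(f)
except Exception as exc:     # stdlib pickle refuses to serialize lambdas
    print("pickle failed:", exc)

data = dill.dumps(f)         # dill serializes it fine
print(dill.loads(data)(41))  # 42
```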

30:28

So yeah, the typical way of

30:30

doing it is that you would pickle

30:32

an object and then on the other

30:35

end, you would basically unpickle it back

30:37

into another object. The downside of that

30:39

is that it's pretty slow. It's equivalent.

30:42

Like if you use the JSON module

30:44

in Python, it's kind of similar, I

30:46

guess, to converting something into JSON and

30:48

then converting it from JSON back into

30:50

a dictionary on the other end. Like

30:53

it's not a super efficient way

30:55

of doing it. So sub-interpreters have

30:57

another mechanism. I haven't read pep

30:59

734 yet. So I

31:03

don't know how much of this

31:05

is in the new pep Eric or if

31:07

it's in the queue. But there's

31:09

a much the same. Okay, it's

31:11

much the same. So there's another

31:14

mechanism with sub-interpreters because

31:16

they share the same process,

31:19

whereas multiprocessing doesn't, they're separate processes.

31:21

because they share the same process,

31:23

you can basically put some data

31:25

in a memory space, which can

31:27

be read from a separate interpreter.

31:29

Now you need to be, well, Python needs

31:31

to be really careful. You don't need to worry too

31:33

much about it, because that complexity

31:35

is done for you. But

31:38

there are certain types of objects

31:40

that you can put in as

31:42

parameters. You can either send

31:44

startup variables for your sub-interpreter, or

31:46

you can send via a

31:48

pipe basically backwards and forwards between

31:50

the interpreters. These are

31:52

essentially all the immutable types for

31:55

Python, which is like

31:57

strings, Unicode strings and byte strings, bool,

32:01

None, integer, float, and

32:04

tuples. And you can

32:06

do tuples of tuples as well. And it

32:08

seems like the tuple part was

32:11

something that you added recently, right?

32:13

Yeah, I implemented tuple sharing just last

32:15

week. Yeah, that was, that's in now.

32:17

I really wanted to use it. So I

32:20

thought, well, instead of keep, I kept complaining

32:22

that it wasn't there. So I thought instead

32:24

of complaining, I might as well see if

32:26

Eric can work out how to implement it.

32:28

Yeah, it's awesome. Hey, yeah, you got your

32:30

dictionary. So that's one thing. Yeah, exactly.

32:32

So one thing that I thought that

32:35

might be awesome, are you familiar with

32:37

msgspec? You guys seen msgspec?

32:39

It's like Pydantic in the sense that

32:41

you create a class with types, but

32:43

the, the parsing performance is quite a

32:45

bit like much, much faster, 80 times

32:47

faster than Pydantic, um, 10 times

32:50

faster than marshmallow and cattrs

32:52

and so on, and faster still

32:54

even than, say, JSON or

32:57

ujson. So would it make sense

32:59

to use this, turn it into its serialization format,

33:01

the bytes, send the bytes over, and then

33:03

pull it back, I don't know, might give

33:05

you a nice... You can share byte strings.

33:07

So you can stick something into pickle or

33:09

you can use, um, like, uh,

33:11

msgspec or something like that to serialize

33:13

something into a byte string and then receive

33:15

it on the other end and rehydrate it.
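
For reference, that serialize-then-rehydrate round trip with msgspec looks roughly like this (msgspec.Struct and the json encode/decode functions are its documented API):

```python
import msgspec

class Point(msgspec.Struct):
    x: int
    y: int

data = msgspec.json.encode(Point(1, 2))        # -> b'{"x":1,"y":2}'
point = msgspec.json.decode(data, type=Point)  # rehydrated on the other end
print(point)
```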

33:18

Pydantic, like, I think it's awesome as well. Just, this

33:20

is meant to be super fast with a little bit

33:23

less behavior, right? Yeah. So this

33:25

is a kind of a design thing. I

33:27

think people need to consider when they're

33:29

like, great, I can run everything in parallel now. Um,

33:32

but you have to kind of unwind

33:34

and think about how you designed your

33:36

application, like at which point do you

33:38

fork off the work and how do

33:40

you, how do you split the data?

33:42

Um, you can't just kind of go

33:44

into it assuming, Oh, we'll just have

33:46

a pool of workers and we've kind

33:48

of got this shared area of data.

33:51

Everybody just reads. Yeah. I'll pass

33:54

it a pointer to a million-entry list and

33:56

I'll just run with it. Yeah. Cause I mean, in

33:58

any language, you're going to get issues. you

34:00

do that even if you've got shared memory

34:02

and it's easier to read and write to

34:04

the different spaces, you're going to get issues

34:07

with locking. And I think it's also important

34:09

with free threading if you read the spec

34:12

or kind of follow what's happening with free

34:14

threading. It's not like the GIL's

34:16

disappeared. The GIL's

34:19

been replaced with other locks. So

34:21

there are still going to be locks, you

34:24

can't just have no locks if you've got

34:26

things running. Right, like it

34:28

moves some of the reference counting stuff into

34:30

like well, it's fast on the default thread,

34:32

the same thread, but if it goes to

34:34

another, it has to kick in another more

34:36

thread safe case that potentially is slower and

34:38

so on. Yeah. So yeah, the really

34:41

important thing with subinterpreters is that they have their own,

34:44

well, have their own GIL. So

34:46

each one has its own lock,

34:48

so they can run fully in

34:51

parallel just as they could with

34:53

multiprocessing. So I feel like a

34:55

closer comparison with subinterpreters is multiprocessing.

34:57

Yeah, because they basically run fully

35:00

in parallel. If you start four of them

35:02

and you have four cores, each core is

35:04

going to be busy doing work. You start

35:06

them, you give them data, you

35:08

can interact with them whilst they're running.

35:12

And then when they're finished, they can

35:14

close and they can be destroyed and

35:16

cleaned up. So it's much closer to

35:19

multiprocessing. But the big difference

35:22

is that the overhead, both on the

35:25

memory and CPU side of

35:27

things is much smaller. Separate

35:29

processes with multiprocessing are pretty heavy

35:31

weight, they're big workers. And

35:33

then the other thing that's pretty significant is

35:36

the time it takes to start one. So

35:39

starting a process with multiprocessing takes quite

35:41

a lot of time and it's significantly, I

35:43

think it's like 20 or 30 times

35:46

faster to start a subinterpreter. You have

35:48

a bunch of graphs for it somewhere.

35:50

There we go. So

35:52

I scrolled past it, there we go. It's not

35:54

exactly the same, but kind of captures

35:56

a lot of it there. So

35:59

one thing that I think is exciting,

36:01

Eric, is the interpreter pool,

36:03

sub-interpreter pool, because a lot

36:05

of the difference between

36:07

the threading and the sub-interpreter

36:09

performance is that startup of

36:12

the new arenas and importing the standard library, all

36:14

that kind of stuff that still is going to

36:16

happen. But once those things are loaded up in

36:18

the process, they could be handed work easily, right?

36:21

And so if you've got a pool of, like

36:23

say, you have 10 cores, you've got 10 of

36:25

them just chilling, or however many you've sort of

36:28

done enough work to do in parallel, then you

36:30

could have them laying around and just send,

36:32

like, okay, now I want you to run

36:34

this function, and now I want you to

36:36

run this, and that one means go call

36:38

that API and then process it. And I

36:40

think you could get the difference between threading

36:42

and sub-interpreters a lot lower by having them

36:44

kind of reuse, basically. Yep, absolutely. There's

36:48

some of the, the

36:50

key difference, I think, is mostly that

36:52

when you have mutable data, whereas

36:55

with threads, you can share it, so threads

36:57

can kind of talk to each other through

37:00

the data that they share with

37:02

each other. Whereas with sub-interpreters, there

37:04

are a lot of restrictions, and I

37:06

expect we'll work on that to an extent,

37:08

but it's also part of the programming model.

37:11

And like Anthony was saying, if

37:13

you really want to take advantage of parallelism,

37:15

you need to think about it. You need

37:17

to actually be careful about your data and

37:19

how you're splitting up your work. I think

37:21

there's going to be design patterns that we

37:24

come to know, or conventions we

37:26

come to know, let's suppose

37:28

I need some calculation and I'm

37:30

going to use it in a for loop. You don't run

37:32

the calculation if it's the same over and over every time

37:34

through the loop. You run it and then you use the

37:36

result, right? So in this similar thing here

37:38

would be like, well, if you're going to process

37:41

a bunch of data and the data comes from,

37:43

say, a database, don't do the query and hand

37:45

it all the records. Just tell it, go get

37:47

that data from the database. That way it's already

37:49

serialized in the right process and there's not this

37:52

cross serialization either pickling or

37:54

whatever mechanism you come up with, right?

37:56

But try to think about when you

37:59

get the data. Can you delay

38:01

it until it's in the sub process and

38:03

or sub-interpreter rather, and so on, right?

38:05

Mm-hmm. Yeah, definitely.
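
A sketch of that pattern with today's tools; fetch_rows and process are hypothetical stand-ins for the real database call and the real work:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_rows(query: str) -> list[int]:
    return [1, 2, 3]            # stand-in for a real database query

def process(rows: list[int]) -> int:
    return sum(rows)

def job(query: str) -> int:
    rows = fetch_rows(query)    # the worker fetches its own data, so only
    return process(rows)        # the small query string crosses the boundary

if __name__ == "__main__":
    with ThreadPoolExecutor() as pool:
        print(pool.submit(job, "SELECT ...").result())   # 6
```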

38:07

One interesting thing is that in PEP 734 I've

38:11

included memory view as one of

38:13

the types as supported So basically

38:16

you can take a memory view

38:18

of any kind of object that

38:21

implements the buffer protocol so like

38:23

numpy arrays and stuff like that

38:25

and Pass that memory

38:28

view through to another interpreter and you can

38:30

use it and doesn't make a copy

38:32

or anything It actually uses the same

38:34

underlying data. They actually get shared. Oh,

38:36

that's interesting Yeah, so there's

38:38

and I think there's even

38:40

more room for that with other

38:42

types, but we're starting small.

38:44

But the key thing there is

38:46

that, like

38:49

you're saying, it means coming

38:52

up with different models and patterns and

38:55

libraries. I'm sure they'll come up

38:57

as people feel out, really,

39:00

What's the easiest way to take advantage of

39:02

these features and that's that's the sort of

39:04

thing that will apply not just to general

39:07

free-threaded no-GIL but also

39:09

subinterpreters. Mm-hmm. Definitely. It's gonna be

39:11

exciting so I guess I want

39:14

to move on to talk about working with this in Python

39:16

and The stuff that you've done

39:18

Anthony, but maybe a quick comment from the audience

39:20

is, Jazzy asked: is this built on top of

39:23

a queue, which is built on top of a linked list?

39:25

Because I'm building this, and my research led

39:27

me to these data structures. I guess that's the

39:30

communication across sub-interpreters, cross-

39:32

interpreter communication. Yeah, with sub-

39:34

interpreters, like in PEP 734,

39:36

it's a queue that

39:38

implements the same interface as

39:41

the queue from the queue module. But

39:43

there's no reason why people couldn't

39:45

implement whatever data structure they

39:47

want for communicating between sub interpreters and

39:50

then that data structure is in charge of

39:53

preserving thread safety and so forth. Yep.

39:55

Excellent. Yeah, it's not a standard queue, it's

39:57

like a concurrent queue or something along those

39:59

lines All right,

40:01

so all of this we've been talking about

40:03

here is, we're looking

40:06

at this cool interpretable executor stuff.

40:08

That's in draft format, Anthony, for

40:11

3.13. And

40:13

somehow I'm looking at this running

40:15

Python parallel applications and subinterpreters that

40:17

you're writing. What's

40:19

going on here? How do you do this magic? You

40:21

need to know the secret password. In

40:25

Python 3.12, it's a very simple thing.

40:30

The C API

40:32

for creating subinterpreters

40:35

was included. And a lot

40:37

of the mechanism for

40:40

creating subinterpreters was included.

40:42

So there's also

40:44

a... In CPython,

40:46

there's a standard library which I think everybody

40:48

kind of knows. And

40:51

then there are some hidden modules

40:53

which are mostly used for testing. So

40:57

not all of them get bundled, I think, in the

40:59

distribution. I think a lot of the

41:01

test modules get taken out. But

41:04

there are some hidden modules you can use for testing.

41:07

Because for a lot of the tests, you see,

41:09

Python has to test C APIs. And nobody

41:11

really wants to write unit tests in C.

41:13

So they write the test in Python, and

41:16

then they kind of create these modules that

41:18

basically just call the C functions. And so

41:20

you can get the test coverage and do

41:22

the testing from Python code. So

41:25

I guess what was... PEP 5-something?

41:29

I can't remember, I look at too many PEPs. Eric

41:33

will probably know. What

41:35

is now PEP 734. But

41:38

the Python interface to create subinterpreters.

41:41

A version of that was included in

41:43

3.12. So you

41:45

can import this module called underscore

41:48

XX subinterpreters. And

41:50

it's called underscore XX because it kind of

41:52

indicates that it's experimental and it's underscore because

41:55

you probably shouldn't be using it.
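
On Python 3.12 that experiment looks like this (this is the private, experimental module being described, so the interface can change between releases):

```python
import _xxsubinterpreters as interpreters  # experimental, Python 3.12

interp_id = interpreters.create()          # returns an interpreter id
interpreters.run_string(interp_id, "print('hello from a subinterpreter')")
interpreters.destroy(interp_id)            # clean it up when done
```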

41:58

It's not safe for work, to me. Yeah,

42:00

I don't know. But

42:03

it provides a good way of

42:06

people actually testing this stuff and

42:08

seeing what happens if I import

42:10

my C extension from a sub-interpreter.

42:14

So that's kind of some of what I've been doing

42:16

is looking at, okay,

42:18

what can we try and do in parallel?

42:21

And this blog post,

42:23

I wanted to try

42:26

a WSGI or an ASGI web app. And

42:30

the typical pattern that you have at the

42:32

moment, and I guess how a lot of

42:34

people would be using parallel code but without

42:36

really realizing it, is

42:38

when you deploy a web app for

42:40

Django, Flask, or FastAPI, you

42:43

can't have one GIL per

42:45

web server because if you've got one

42:47

GIL per web server, you can only

42:50

have one user per website, which is

42:52

not great. So the

42:54

way that most web servers implement this

42:56

is that they have a pool of

42:58

workers. Gunicorn

43:02

does that by spawning Python processes

43:04

and then using the multiprocessing module.

43:06

So it basically creates multiple Python

43:08

processes all listening to the same

43:11

socket. And then

43:13

when a web request comes in, one of

43:15

them takes that request. It also then inside

43:17

that has a thread pool. So

43:20

even basically a thread pool is

43:23

better for concurrent code. So

43:26

Gunicorn normally is used in

43:28

a multi-worker, multi-thread model. That's

43:30

how we kind of talk about it. So

43:32

you'd have the number of workers that you

43:34

have CPU cores and then inside that you'd

43:37

have multiple threads. So

43:40

it kind of means you can handle more requests at

43:42

a time. If you've

43:44

got eight cores, you can handle at

43:46

least eight requests at a time. However,

43:48

because most web code can be concurrent

43:50

on the back end, like you're making

43:53

a database query or you're reading some

43:55

stuff from a file like that, that

43:57

doesn't necessarily need to hold the

44:00

GIL, so you can run it concurrently,

44:02

which is why you have multiple threads.

44:05

So even if you've only got

44:07

8 CPU calls, you can actually

44:09

handle 16 or 32 web requests

44:11

at once because some of

44:13

them will be waiting for the database server

44:15

to finish running a SQL query or the

44:18

API that it called to actually reply.
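
That multi-worker, multi-thread deployment model is usually just configuration; a minimal sketch as a gunicorn.conf.py (workers, threads, and bind are standard Gunicorn settings):

```python
# gunicorn.conf.py
import multiprocessing

workers = multiprocessing.cpu_count()  # one process per CPU core
threads = 4                            # each worker serves 4 requests concurrently
bind = "0.0.0.0:8000"
```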

44:21

So what I wanted to do

44:23

with this experiment was to look

44:25

at the multi-worker, multi-thread model for

44:27

web apps and say, okay, could

44:29

the worker be a sub-interpreter

44:31

and what difference would that

44:33

make? So instead of using

44:35

multi-processing for the workers, could

44:37

I use sub-interpreters for the

44:39

workers? So

44:41

even though the Python interface in

44:44

3.12 is experimental, they basically

44:46

wanted to adapt Hypercorn, which is

44:48

a web server for

44:51

ASGI and WSGI apps in Python. I

44:54

wanted to adapt Hypercorn and basically

44:56

start Hypercorn workers from a sub-interpreter

44:58

pool and then seeing if I

45:01

can run Django, Flask and FastAPI

45:03

in a sub-interpreter. So a single

45:05

process, single Python process but running

45:07

across multiple cores and

45:10

listening to web requests and basically

45:12

running and serving web requests with

45:14

multiple GILs. So that was

45:16

the task. So in the article you

45:18

said you had started with Gunicorn

45:20

and they just made too many assumptions

45:22

about the web workers being truly

45:24

sub-processes. But Hypercorn was a better

45:27

fit you said from... Yeah, it

45:29

was easier to implement this experiment

45:31

in Hypercorn. It had

45:33

like a single entry point because when

45:35

you start a sub-interpreter, you need to

45:37

import the

45:40

modules that you want to use. You

45:43

can't just say, run this

45:45

function over here. You can but

45:47

if that function relies on

45:49

something else that you've imported, you need to import that

45:51

from the new sub-interpreter. So

45:54

what I did with this experiment

45:56

was basically start a sub-interpreter that

45:59

imports Hypercorn, listens to the

46:01

sockets and then is ready to

46:03

serve web requests. Interesting. Okay. And

46:05

at minimum, you got it working,

46:07

right? Yeah, it did a hello

46:10

world. So we got that working.

46:12

So I was pleased with that.

46:16

And then kind of started doing some more

46:18

testing of it. So, you know, how many

46:20

concurrent requests can I make at once? How

46:22

does it handle that? What does my CPU

46:24

core load look like? Is it distributing it

46:26

well? And then

46:28

kind of some of the questions are, you

46:31

know, how do you share

46:33

data between the sub-

46:35

interpreters? So the minimum I

46:37

had to do was each interpreter needs

46:39

to know which web socket should I

46:41

be listening to? So like which network

46:43

socket once I started, what port is

46:45

it running on? And is

46:47

it running on multiple ports? And which one should I

46:50

listen to? So yeah, that's the first thing I had

46:52

to do. Nice. Yeah, maybe just tell people real quick

46:54

about just like, what are the commands

46:56

like at the Python level that you look

46:59

at in order to create an interpreter, run

47:01

some code on it and so on? What's

47:03

this weird world look like? There, do you

47:05

want to cover that? Yeah, there's a whole

47:07

lot. If we talk about PEP 734, you have

47:09

an interpreters module, there's

47:14

a create function and it returns

47:16

an interpreter object. And then once you

47:18

have the interpreter object, it

47:20

has a method called run.

47:25

The interpreter object also has a method

47:27

called exec. I'm turning it

47:29

over. It's exec_sync, because

47:31

it's synchronous with the current thread.

47:34

Whereas the run method will create

47:36

a new thread for you and run

47:38

things in that thread. So there's kind

47:40

of different use cases. But it's basically

47:42

the same thing. You have some code

47:44

Currently, it supports, you just

47:47

give it a string

47:50

with all your code on it, like you

47:52

load it from a file or something, you

47:54

know. Basically, it's a script that's going to

47:56

run in that sub-interpreter. And alternately, you

47:58

can give it a function. And,

48:00

as long as that function isn't a

48:02

closure, doesn't have any arguments and stuff

48:04

like that, so it's just, like,

48:06

basically a script, right? You've

48:08

got something like that. You can also

48:11

pass that through and then it runs

48:13

it, and that's just how it is.

48:15

If you want to get some results back,

48:17

you're gonna have to manually pass them back,

48:19

like you do with threads. For

48:22

that, I expect people would use

48:24

one of those channels, and

48:26

then just write to it, exit, and then

48:28

read from the channel, something like that.

48:30

So there's a way to say things

48:33

like just run this, and there's also a

48:35

way to create an interpreter and then

48:37

you can use the interpreter to do

48:39

things, and that way you only pay the

48:42

process, like, startup costs once, right?

48:44

Yeah, yeah. Also, you can

48:46

call the run multiple times,

48:48

and each time it kind of adds on

48:50

to what ran before. So if you

48:53

run some code that modifies things or

48:55

imports modules or whatnot, those will

48:57

still be there the next time you run

48:59

code in an interpreter, which is nice,

49:02

because then if you've got some startup

49:04

stuff that you need to do one time,

49:06

you can do that one time, right

49:08

after you create the interpreter, but then when

49:11

you get into your main loop in your workers, you

49:13

run again and all that stuff is ready

49:15

to go.
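
Sketching that startup-once pattern with the provisional PEP 734-style names from earlier (again, the module and method names were still in flux at the time):

```python
import interpreters  # provisional PEP 734-style module

interp = interpreters.create()
interp.exec("import json; CONFIG = {'retries': 3}")  # one-time startup work

for payload in ('{"a": 1}', '{"b": 2}'):
    # json and CONFIG survive from the earlier call, so each unit of work
    # pays no import cost:
    interp.exec(f"print(json.loads('{payload}'), CONFIG['retries'])")
```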

49:17

think about, say, one of my web apps wanting

49:20

to talk to MongoDB. I use Beanie,

49:22

and you go to Beanie and tell it, like,

49:24

create our connection, or a MongoDB

49:26

client pool, and it does all that

49:29

setup. And then to talk to it, there's

49:31

this nice async library: you go to the

49:33

class and you query on it. You could

49:36

run that startup code like once, potentially,

49:38

and have that pool hanging around for

49:40

subsequent work. Nice.
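That pattern might look something like this sketch, where the database helper is a made-up stand-in (not Beanie's actual API) and the interpreter calls assume the draft PEP 734 names.

    # One-time startup in a reused interpreter. my_db_layer and connect()
    # are hypothetical stand-ins; create()/exec_sync() assume draft PEP 734.
    import interpreters

    interp = interpreters.create()

    # Paid once: the import and the connection pool persist in the
    # interpreter across subsequent runs.
    interp.exec_sync(
        "import my_db_layer\n"
        "pool = my_db_layer.connect('mongodb://localhost:27017')\n"
    )

    # Later work in the same interpreter sees the state set up above.
    for _ in range(3):
        interp.exec_sync("pool.ping()")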

49:42

All right, let's see some more stuff. So you said you got

49:45

it working pretty well, Anthony, and this was

49:47

one of the challenges: trying to get

49:49

it to shut down, right?

49:52

Yeah, so in Python, when

49:55

you start a Python process, you

49:57

can press Ctrl-C

49:59

to quit, which is a keyboard

50:01

interrupt, and that kind of sends an

50:03

interrupt to that process. And

50:05

all these web servers have got

50:07

like a mechanism for cleanly shutting

50:09

down, because you may not want it, if

50:11

you press Ctrl-C, to just

50:13

terminate the process. Because when

50:15

you write an ASGI app

50:17

in particular, you can have like

50:19

events that you can hook, so

50:21

people who've done FastAPI

50:23

probably know, like, the

50:25

on-event decorator that you

50:27

can declare and say, when my app

50:29

starts up, create a database connection

50:31

pool, and when it shuts down, it's going

50:34

to clean up all this stuff. So

50:36

if the web server decided to

50:38

shut down for whatever reason, whether you

50:40

pressed Ctrl-C or it just decided

50:43

to close for whatever reason, it's

50:45

nice to tell all the workers

50:47

to do the shutdown cleanly. So

50:50

signals, and like the signals module,

50:52

don't work between sub-interpreters, because

50:54

that kind of reaches in

50:57

and taps the state from one

50:59

to another. So what I

51:01

did was to use a channel,

51:03

so that the main worker, like

51:06

the coordinator, when it had a shutdown

51:08

request, it would send a message to

51:10

all the sub-interpreters to say, okay,

51:13

can you stop now? And it

51:15

would kick off a job to

51:17

basically tell Hypercorn in this case

51:19

to shut down cleanly, call any

51:22

shutdown functions the app might have, and

51:24

then log a message to say that

51:26

it's shutting down as well. Because the worst thing

51:29

with web services is, if it

51:31

just terminated immediately, and then you

51:33

look at your logs and you're like, okay,

51:35

why did the website suddenly stop working,

51:37

and there were no log entries? It

51:39

just went from handling

51:42

requests to just, you know, absolute silence,

51:44

and that wouldn't be very

51:46

helpful. So it's nice to write log messages

51:48

and to call the shutdown functions

51:50

and stuff. So what I did

51:53

was, and this is, I guess,

51:55

kind of a bit of

51:57

turtles all the way down, but

51:59

inside the sub-interpreter I start another thread

52:03

because if you have a poller

52:05

which listens to a signal on

52:07

a channel, that's a blocking operation.

52:10

So at the bottom

52:12

of my subinterpreter code, I've got, okay,

52:14

run Hypercon. So it's going to run,

52:17

it's going to listen to web

52:19

requests. But I need to also be

52:21

able to run concurrently in the subinterpreter

52:23

a loop which listens to the communication

52:26

channel and sees if a shutdown

52:28

request has been sent. So

52:31

this is kind of maybe an

52:33

implementation detail of how interpreters work

52:35

in Python, but interpreters

52:37

have threads as well. So

52:39

you can start threads inside

52:41

interpreters. So

52:43

similar to what I said with Gunicorn

52:46

and Hypercorn, how you've got multi-worker, multi-thread, each

52:48

worker has its own threads. In

52:50

Python, interpreters have the threads. So

52:52

you can start a

52:55

subinterpreter and then inside that subinterpreter,

52:57

you can also start multiple threads.

53:00

And you can do coroutines and all that kind of

53:02

stuff as well. So basically,

53:04

what I did is to start a

53:06

subinterpreter which also starts a thread and

53:08

that thread listens to the communication channel

53:10

and then waits for a shutdown request.

53:12

Right. Tells Hypercorn, all right, you're

53:15

done. We're out of here. Yeah. Okay.
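A hedged sketch of that arrangement, with the server loop stubbed out and the channel object assumed to come from the draft channels API:

    # Inside one sub-interpreter: a side thread blocks on a channel waiting
    # for a shutdown message, then flips an event the serving loop checks.
    # recv_channel and handle_one_request() are hypothetical stand-ins.
    import threading

    shutdown = threading.Event()

    def watch_for_shutdown(recv_channel):
        # recv() blocks, which is exactly why this runs in its own thread.
        if recv_channel.recv() == "shutdown":
            shutdown.set()

    def serve(recv_channel):
        threading.Thread(
            target=watch_for_shutdown, args=(recv_channel,), daemon=True
        ).start()
        while not shutdown.is_set():
            handle_one_request()  # stand-in for Hypercorn's serving loop
        print("worker shutting down cleanly")  # leave a trace in the logs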

53:18

Interesting. Here's an interesting question from the

53:20

audience from Chris. Well, it says, when

53:23

you... We talked about the global kind of startup,

53:25

like if you run that once, it'll already be set.

53:27

And does that make

53:29

code somewhat non-deterministic in the subinterpreter?

53:31

And if you explicitly work with it, no.

53:33

But if you're doing the pool, which one

53:35

do you get? Is it initialized or not? Eric,

53:37

you have an idea of a

53:39

startup function that runs in the interpreter

53:42

pool executor type thing? Or is it

53:44

just they get doled out and they

53:46

run what they run? With

53:50

concurrent.futures, it's already kind of

53:52

a pattern. You have an initializer

53:54

function that you can call that'll

53:56

do the right thing. And then

54:00

you have your task

54:02

that the worker's actually running.
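For reference, the concurrent.futures pattern being referred to looks roughly like this, shown with ProcessPoolExecutor since an interpreter pool executor wasn't in the standard library yet:

    # initializer= runs once per worker before any tasks, so expensive
    # per-worker setup happens exactly one time and tasks then reuse it.
    from concurrent.futures import ProcessPoolExecutor

    conn = None

    def init_worker():
        global conn
        conn = object()  # stand-in for a real per-worker resource

    def task(x):
        return x * x  # a real task would use conn here

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=2, initializer=init_worker) as pool:
            print(list(pool.map(task, range(4))))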

54:05

So with this, I don't

54:08

know, I wouldn't say it's

54:10

non-deterministic unless you

54:12

have no control over it. I mean, if

54:15

you wanna make sure that state progresses

54:17

in an expected way, then you're gonna

54:19

run your own subinterpreters, right? But if

54:22

you have no control over the subinterpreters,

54:24

you're just like handing off to some

54:26

library that uses subinterpreters, I

54:29

would think it'd be somewhat not

54:31

quite so important about whether

54:33

it's deterministic or not. I mean,

54:36

each time it runs, there

54:39

are a variety of things. The

54:41

whole thing could be kind of reset or

54:45

you could make sure that anything that

54:47

runs it, any part of your code

54:49

that runs is careful

54:51

to keep its state

54:54

self-contained and therefore you

54:56

preserve deterministic behavior that

54:58

way. I do a lot as I'll

55:00

write code that'll say, if this

55:02

is already initialized, don't do it again.

55:04

So I talked about the database connection

55:06

thing. If somebody were to call it

55:08

twice, it'll say, well, looks like the

55:10

connection's already not none, so we're good.

55:13

You could just always run the startup code

55:15

with one of these short circuit things that

55:18

says, hey, it looks like this interpreter

55:20

is already done, we're good. That

55:23

would probably handle a good chunk of

55:25

it right there.
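That short circuit looks something like this sketch, where connect() is a hypothetical placeholder:

    # Idempotent startup: safe to run every time because it only does the
    # expensive work the first time in a given interpreter.
    connection = None

    def ensure_started():
        global connection
        if connection is None:  # already initialized? then skip the work
            connection = connect("db://localhost")  # hypothetical helper
        return connection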

55:27

But we're back to this thing that Anthony said, right? We're

55:29

gonna learn some new programming patterns, potentially. Yeah,

55:31

quite interesting. So we talked at the

55:33

beginning about how subinterpreters have their own

55:36

memory and their own module loads and

55:38

all those kinds of things, and that

55:40

might be potentially interesting for isolation. Also

55:43

kind of tying back to Chris's comment

55:45

here, this isolation is pretty interesting for

55:47

testing, right, Anthony, like

55:50

PyTest? So another thing

55:52

you've been up to is working with

55:54

trying to run PyTest sessions in subinterpreters.

55:56

Tell people about that. Yeah, so I

55:58

started a... for the web worker.

56:01

One of the things I hit with a

56:03

web worker was that I couldn't start Django

56:05

applications and

56:08

realized the reason was the datetime

56:10

module. So the Python standard library,

56:16

some of the modules are implemented in Python,

56:18

some of them are implemented in C, some

56:21

of them are a combination of

56:23

both. So some modules you import

56:25

in the standard library have a

56:27

C part that's implemented in C

56:29

for performance reasons typically or because

56:31

it needs some special operating system

56:33

API that you can't access from

56:35

Python. And then the front end

56:37

is Python. So

56:39

there is a list basically

56:42

of standard library modules

56:44

that are written in C that

56:46

have some sort of global state.

56:48

And then the core developers have

56:50

been going down that list and

56:52

fixing them up so that they

56:54

can be imported from a subinterpreter

56:57

or just marking them as

56:59

not compatible with subinterpreters. One

57:01

such example was the readline module

57:04

that Eric and I were

57:06

kind of working on last week and the week

57:08

before. Readline

57:10

is used for I guess listening to user input.

57:12

So if you run the input built-in, like

57:16

readline is one of the utilities

57:18

it uses to listen to keyboard input.

57:20

Let's say you started five

57:23

subinterpreters at the same time and all of

57:25

them did a read line listen to input,

57:27

like what would you expect the behavior to

57:30

be? Which when you type in the keyboard, where would

57:32

you expect the

57:34

letters to come out? So it kind of poses an

57:36

interesting question. So readline

57:38

is not compatible with subinterpreters

57:41

but we discovered that it

57:43

was actually sharing a global

57:45

state. So when it initialized,

57:47

it would install like

57:49

a callback. And what

57:51

that meant was that even though it said it's

57:53

not compatible, if you started multiple subinterpreters

57:56

that imported readline, it would

57:58

crash Python itself. The

58:01

datetime module is another one

58:03

that needs fixing. It

58:06

installs a bunch of global state. So

58:08

yeah, datetime was another one. So what

58:10

I wanted to do was to try and

58:12

test some other C extensions that

58:15

I had and just basically write a

58:18

PyTest extension, a PyTest plugin,

58:20

I guess, where you've

58:23

got an existing PyTest suite, but you want

58:26

to run all of that in a sub-interpreter.

58:29

And the goal of this is really that you're

58:32

developing a C extension, you've written

58:34

a test suite already for PyTest,

58:36

and you want to run that

58:38

inside a sub-interpreter. So I'm looking

58:40

at this from a couple of

58:42

different angles, but I want to

58:44

really try and use sub-interpreters

58:46

in other ways, import some C extensions

58:48

that have never even considered the idea

58:50

of sub-interpreters and just see how they

58:53

respond to it. Like,

58:55

readline was a good example. I think it

58:57

was a, this won't

59:00

work, but the fact that it crashed is bad. How

59:02

is it going to crash, right? What's

59:04

happening there? Yeah, so it

59:07

should have just said, this is not

59:09

compatible. And that's uncovered a... And

59:12

this is all super experimental

59:15

as well. So this is

59:17

not... You've had to import

59:19

the underscore xxsubinterpreters module to

59:21

even try this. So yeah,

59:23

readline; datetime was another one.

59:27

And so I put this sort of PyTest extension together

59:29

so that I could run some

59:31

existing test suites inside sub-interpreters.
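The core of that idea fits in a few lines; this sketch assumes the draft PEP 734 names, and the real plugin is certainly more involved than this.

    # Run an existing pytest suite inside a sub-interpreter so that C
    # extensions under test get exercised in that environment.
    import interpreters  # draft PEP 734 module, names assumed

    def run_suite_in_subinterpreter(path="tests/"):
        interp = interpreters.create()
        # pytest.main() prints its own report from inside the interpreter.
        interp.exec_sync(f"import pytest; pytest.main([{path!r}])")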

59:34

And then the next thing that I looked at

59:36

doing was, CPython has

59:39

a huge test suite. So

59:42

basically how all of Python itself

59:45

is tested, the parser,

59:47

the compiler, the evaluation loop,

59:49

all the standard library modules

59:51

have got pretty good test

59:53

coverage. So when you compile

59:56

Python from source or you

59:58

make changes on GitHub, like

1:00:00

it runs the test suite to make sure that your

1:00:02

changes didn't break anything. Now,

1:00:05

the next thing I kind of wanted

1:00:07

to look at was, okay, to try

1:00:09

and kind of get ahead of the

1:00:11

curve really on subinterpreter adoption.

1:00:14

So in 3.13, when PEP 734 lands, can we try

1:00:16

and test all of the

1:00:20

standard library inside a subinterpreter and see

1:00:23

if it has any other

1:00:25

weird behaviors. And

1:00:27

this test will probably apply

1:00:29

to free threading as well, to be

1:00:31

honest, because I think anything

1:00:34

that you're doing like this, you're importing these

1:00:36

C extensions, which always assumed that there was

1:00:39

a big GIL in place. If

1:00:41

you take away that assumption, then you get

1:00:43

these strange behaviors. So yeah,

1:00:45

the next thing I've been working on

1:00:47

is basically running the C Python test

1:00:50

suite inside subinterpreters and then seeing what

1:00:53

kind of weird behaviors pop up. I think it's

1:00:55

a great idea because obviously C Python is going

1:00:57

to need to run code in a subinterpreter, run

1:00:59

our code, right? So at a

1:01:01

minimum, the framework, the interpreter, all

1:01:04

the runtime bits that you hang together,

1:01:06

right? Yeah, there are some modules that

1:01:08

don't make sense to run in subinterpreters.

1:01:11

Readline was an example. Yeah, possibly. Maybe

1:01:16

not. If you think about

1:01:18

like, if you got, when

1:01:21

you're doing GUI programming, right, you're going to have kind

1:01:24

of your core stuff running the

1:01:26

main thread, right? And then you

1:01:28

hand off, you may have sub

1:01:30

threads doing some other work, but the core

1:01:32

of the application, think of it as running

1:01:34

in the main thread. I think

1:01:37

of the applications in that way.

1:01:39

And there's certain things that you

1:01:41

do in Python, standard library modules

1:01:44

that really only make sense with

1:01:46

that main thread. So supporting those

1:01:49

and subinterpreters isn't quite as meaningful.

1:01:51

Yeah, I can't remember

1:01:53

all the details, but I feel like there

1:01:55

are some parts of Windows itself, some UI

1:01:58

frameworks there that required access on

1:02:00

the main program thread, not on some

1:02:02

background thread as well because it would

1:02:04

freak things out. So it seems like

1:02:06

not unusable. Yeah, same is true. Like

1:02:08

the signal module, I remember, atexit,

1:02:11

a few others. Excellent. All right. Well, I

1:02:13

guess let's, we're getting short on time. Let's

1:02:16

wrap it up with this. So the big

1:02:18

thing to keep an eye on really here

1:02:20

is PEP 734, because that's when this

1:02:23

would land. You're no

1:02:26

longer working with the underscore xxsubinterpreters

1:02:28

module. You're just working with

1:02:31

the interpreters module. Yeah, in 3.13. Yeah.

1:02:33

So right now it's in draft.

1:02:36

What's it looking like? If it'll be in

1:02:38

3.13, it'll be in 3.13 alpha something, or

1:02:40

beta something. Like when is this

1:02:43

going to start looking like a thing that is ready

1:02:45

for people to play with? So

1:02:47

yeah, this path, I went

1:02:50

through and did a massive cleanup of PEP 554,

1:02:52

which is why I made a

1:02:54

new PEP for it and simplified

1:02:56

a lot of things, clarified a lot

1:02:58

of points, had lots of good feedback from

1:03:00

people and ended up with what I

1:03:03

think is a good API, but it

1:03:05

was a little different in some ways.

1:03:07

So I've had the implementation for PEP 554

1:03:09

mostly done and ready to go for

1:03:12

years. And so it was

1:03:14

a matter, it's been a matter

1:03:16

of now that I have this

1:03:18

updated PEP up, going back to

1:03:20

the implementation, tweaking it to match,

1:03:22

and then making sure everything still

1:03:24

feels right, try and use

1:03:27

it in a few cases. And if everything

1:03:29

looks good, then go ahead and I'll

1:03:31

start a discussion on that. I'm hoping within

1:03:33

the next week or two to start up

1:03:36

a round of discussion about this PEP and

1:03:38

hopefully we won't have a whole lot of

1:03:40

back and forth so I can get this

1:03:42

over to the steering council in

1:03:45

the near future. Well, the hard work

1:03:47

has been done already, right? The C

1:03:50

layer is there and it's accepted and it's in there

1:03:52

now. It's just a matter of what's

1:03:54

the right way to look at it from Python,

1:03:56

right? And One thing to keep in mind is

1:03:58

that I'm planning on

1:04:01

backporting the module to Python

1:04:03

3.12 too, so that, since we have

1:04:05

per-interpreter GIL in 3.12, it would

1:04:07

be nice if people could really take

1:04:09

advantage of it. For

1:04:12

that one, we'd have to pip install

1:04:14

it, or will it be in as

1:04:16

just, as a built-in? I guess it was

1:04:18

there before 3.12? Subinterpreters

1:04:20

have been around for decades, but only in

1:04:23

the C API. But the plan is,

1:04:25

I'll publish a backport

1:04:27

of this module on PyPI for

1:04:29

3.12. And it ended up being more

1:04:31

involved than I expected. Makes sense. All right,

1:04:34

final thoughts, guys? Are

1:04:36

there things you want to tell people about this stuff?

1:04:38

Personally, I'm excited for where

1:04:40

everything's going. It's taking a while,

1:04:42

but I think we're getting to a place

1:04:44

where it's going to be seamless. Also, we discussed

1:04:46

no-GIL. It's easy to think, oh,

1:04:49

why might we even have subinterpreters, or,

1:04:51

if we have subinterpreters,

1:04:53

why do we need no-GIL? But

1:04:55

they're kind of different needs that they're meeting.

1:04:57

The most interesting thing for me is

1:04:59

that what's good for no-

1:05:02

GIL is good for subinterpreters,

1:05:04

and vice versa. No-GIL

1:05:06

probably really wouldn't be possible without

1:05:08

a lot of the work that we've

1:05:10

done to make per-interpreter GIL

1:05:12

possible. So I think that's

1:05:15

one of the neat things: the

1:05:17

future is looking bright for Python multicore,

1:05:19

and I'm excited to see where

1:05:21

people go with all these things,

1:05:23

including seeing what

1:05:26

subinterpreter programming design patterns

1:05:28

end up coming out of this.

1:05:31

Yeah, my thoughts

1:05:33

are, I originally wrote about

1:05:35

subinterpreters in my book back

1:05:37

when it was like Python

1:05:39

3.9, I think,

1:05:41

because it was possible then,

1:05:44

but things have moved on quite a lot

1:05:46

since. I guess the thought I

1:05:48

can leave

1:05:50

people with is, I think if you're

1:05:52

a maintainer of a Python

1:05:54

package, or of a C extension module

1:05:56

in a Python package, there's

1:05:58

gonna be a lot more scenarios

1:06:00

for you to test coming in

1:06:02

the next year or so. And

1:06:05

some of those will uncover things

1:06:08

that you might have done or just kind

1:06:10

of relied on the GIL with global state,

1:06:12

where that's not really desirable

1:06:14

anymore and you're going to get bugs down the

1:06:17

line. So I think with any of that stuff

1:06:19

as a package maintainer, you want to test as

1:06:21

many scenarios as you can so that you can

1:06:23

catch bugs and fix them before your users find

1:06:26

them. So if you are a

1:06:28

package maintainer, there's definitely some things that you can start

1:06:30

to look at now to test.

1:06:33

That's available in 3.13 alpha 2, which is

1:06:37

at least the one I've tried, to be

1:06:39

honest. And if

1:06:42

you're a developer, not necessarily a

1:06:44

maintainer, then I think this is

1:06:46

a good time to start reading

1:06:48

up on parallel programming

1:06:50

and how you need to design parallel

1:06:54

programs. And those

1:06:56

kind of concepts are the same across

1:06:58

all languages and Python would

1:07:00

be no different. We just have different

1:07:02

mechanisms for starting parallel work and joining

1:07:04

it back together. But

1:07:06

if you're interested in this and you

1:07:08

want to run more code in parallel,

1:07:10

there's definitely some stuff to read

1:07:13

and some stuff to learn about in terms

1:07:15

of signals,

1:07:17

pipes, queues, sharing

1:07:20

data, how you have locks and where

1:07:22

you should put them, how

1:07:24

deadlocks can occur, things like

1:07:27

that. So all of that stuff is the same in Python as

1:07:29

anywhere else. We just have different mechanisms for doing it.

1:07:31

All right. Well, people have some research work

1:07:33

and I guess a really, really quick final

1:07:35

question, Eric, and then we'll wrap this up.

1:07:37

Following up on what Anthony said, like test

1:07:39

your stuff, make sure it works in a

1:07:41

sub interpreter. If for some reason you're like,

1:07:43

my code will not work in a sub

1:07:45

interpreter and I'm not ready yet, is there

1:07:47

a way to determine that your code is

1:07:50

being run in a sub interpreter rather than

1:07:52

regularly from your Python code?

1:07:54

Yeah, if you have an

1:07:56

extension module that supports sub

1:07:58

interpreters, then you will have

1:08:01

updated your module to use

1:08:03

what's called multi-phase init. And

1:08:06

that's something that shouldn't be too hard to look

1:08:08

up. I think I talked about it in the

1:08:10

PEP. If you implement

1:08:12

multi-phase init, then you've already done most

1:08:14

of the work to support a subinterpreter.

1:08:19

If you haven't, then your module can't

1:08:21

be imported in a subinterpreter. It'll actually

1:08:23

fail with an import error if you're

1:08:25

trying to import it in a subinterpreter

1:08:28

or at least a subinterpreter that has its

1:08:30

own GIL. There are ways to create subinterpreters

1:08:32

that still share a GIL and that sort

1:08:35

of thing, but you

1:08:37

just won't be able to import it

1:08:39

at all. So like the readline module

1:08:41

can't be imported in subinterpreters. The

1:08:45

issue that Anthony ran into is

1:08:47

kind of a subtle side

1:08:49

effect of the check that we're doing.

1:08:52

So, but really

1:08:54

it boils down to if you

1:08:56

don't implement multi-phase init, then you

1:08:58

won't be able to import the module.

1:09:01

You'll just get an import error. So that's, I mean,

1:09:03

it makes it kind of straightforward. Yeah,

1:09:05

it sounds good. More opt-in than opt-out. Yep.
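In practice, you can probe that from Python with something like this sketch (exec_sync and its error behavior assume the draft PEP 734 API):

    # Probe whether a module can be imported in a sub-interpreter with its
    # own GIL; modules without multi-phase init should fail to import there.
    import interpreters  # draft PEP 734 module, names assumed

    def importable_in_subinterpreter(module_name):
        interp = interpreters.create()
        try:
            interp.exec_sync(f"import {module_name}")
            return True
        except Exception:
            # The draft surfaces failures from the target interpreter as an
            # exception here; the exact exception type was still in flux.
            return False

    print(importable_in_subinterpreter("readline"))  # expected: False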

1:09:08

Right on. All right, guys. Thank you both for

1:09:10

coming back on the show and awesome work. It

1:09:13

is looking close to the finish line and

1:09:15

exciting. Thanks, Michael. Yep. See ya. This

1:09:19

has been another episode of Talk Python to Me.

1:09:22

Thank you to our sponsors. Be sure to check out

1:09:24

what they're offering. It really helps support the show. Are

1:09:27

you ready to level up your Python

1:09:29

career? And could you use a little

1:09:31

bit of personal and individualized guidance to

1:09:33

do so? Check out the

1:09:36

PyBytes Python Developer Mindset

1:09:38

Program at talkpython.fm slash

1:09:40

PDM. Take some

1:09:42

stress out of your life. Get notified

1:09:44

immediately about errors and performance issues in

1:09:47

your web or mobile applications with Sentry.

1:09:49

Just visit talkpython.fm slash Sentry

1:09:52

and get started for free.

1:09:54

And be sure to use the promo code

1:09:56

talkpython, all one word. Want

1:09:58

to level up your Python? We have one

1:10:01

of the largest catalogs of Python video

1:10:03

courses over at TalkPython. Our content ranges

1:10:05

from true beginners to deeply advanced topics

1:10:07

like memory and async. And best of

1:10:09

all, there's not a subscription in sight.

1:10:11

Check it out for yourself at training.talkpython.fm.

1:10:15

Be sure to subscribe to the show, open

1:10:17

your favorite podcast app, and search for Python.

1:10:20

We should be right at the top. You can also find

1:10:22

the iTunes feed at slash iTunes, the

1:10:24

Google Play feed at slash Play, and

1:10:26

the direct RSS feed at slash RSS

1:10:28

on talkpython.fm. We're live

1:10:31

streaming most of our recordings these days. If

1:10:33

you want to be part of the show

1:10:35

and have your comments featured on the air,

1:10:37

be sure to subscribe to our YouTube channel

1:10:39

at talkpython.fm slash YouTube. This

1:10:41

is your host, Michael Kennedy. Thanks so much for listening.

1:10:44

I really appreciate it. Now get out there

1:10:46

and write some Python code. And

1:10:54

we'll see you next time. Bye-bye.
