Seeing code flows and generating tests with Kolo

Released Wednesday, 29th May 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:00

Do you want to look inside your Django requests? How

0:02

about all of your requests in development

0:04

and see where they overlap? If

0:07

that sounds useful, you should definitely check

0:09

out Kolo. It's a pretty

0:11

incredible extension for your editor. VSCode at

0:13

the moment. More editors to come

0:15

most likely. We have Wilhelm Klopp

0:18

on to tell us all about it. This

0:20

is Talk Python To Me, episode 464, recorded May 9th, 2024. Are

0:26

you ready for your host? Here he is! You're

0:28

listening to Michael Kennedy on Talk Python To

0:30

me. Live from Portland, Oregon,

0:32

and this segment was made with Python.

0:38

Welcome to Talk Python To Me, a

0:40

weekly podcast on Python. This is

0:42

your host, Michael Kennedy. Follow me

0:44

on Mastodon, where I'm at M

0:46

Kennedy and follow the podcast using

0:48

at talkpython, both on fosstodon.org. Keep

0:51

up with the show and listen to over

0:53

seven years of past episodes at talkpython.fm. We've

0:56

started streaming most of our episodes

0:58

live on YouTube. Subscribe to our

1:00

YouTube channel over at talkpython.fm slash

1:02

YouTube to get notified about upcoming

1:04

shows and be part of that

1:06

episode. This episode is sponsored

1:08

by Sentry. Don't let those

1:11

errors go unnoticed. Use Sentry. Get

1:13

started at talkpython.fm/sentry. And

1:16

it's also brought to you by us

1:18

over at Talk Python Training. Did you

1:20

know that we have over two hundred

1:22

and fifty hours of Python courses? Yeah,

1:25

that's right. Check them out at talkpython.fm

1:27

slash courses. Well,

1:29

welcome to Talk Python To Me. Hello. Yeah, excited to

1:32

be here, Michael. I've been listening to Talk Python like

1:34

for, I can't even remember how long, but I'm

1:36

pretty sure it was before I had a first

1:38

Python job. So yeah, a long, long time. That's

1:41

amazing. Well, now you're helping create

1:44

it. Yeah, exactly. We're going to

1:46

talk about Kolo, your visual studio

1:48

code, Django. I don't know

1:50

what to call it. It's pretty advanced, pretty

1:52

in depth extension seems to be not quite

1:54

enough. In any event, people are going

1:56

to really dig that, people who do Django and...

2:00

We'll see what the future plans are. We

2:03

could talk you into other ones, but for

2:05

now, Django plus... Yeah,

2:07

Django plus VS code is going to be super interesting.

2:09

When we get to that, of course, you

2:12

must know the drill. Tell us a bit

2:14

about yourself. Yeah, for sure. So my name

2:16

is Will. I've been using Django since, well,

2:18

I guess I've been using Python since about

2:21

2013, I want to say, so a little over 10

2:23

years. And yeah, just kind of

2:25

like fell in love with it, wanted to

2:27

make websites, started using Django. And

2:30

yeah, I guess never really looked back. That

2:32

was in school back then, but kind of

2:34

always had a love for like tinkering and

2:36

building side projects. I actually studied, I did

2:38

a management degree in university, but I really

2:40

loved hanging out with all the computer science

2:43

kids, all the computer science students. And

2:45

I think a part of me really wanted to

2:47

impress them. So I was always building side projects.

2:49

And one of them was actually a Slack app

2:52

called Simple Poll. And yeah, we were trying to

2:54

like, you know, organize something in Slack and really

2:56

felt like the need for polls. So built this

2:58

little side project just like during university.

3:00

And then it became really, really popular. And

3:03

a few years later, it became

3:05

my full time job. So for the past

3:07

like four years, I've been running Simple Poll,

3:09

a Slack app building up the team up

3:11

to like seven, eight of us. And

3:14

I had a great time doing that in the

3:16

middle, actually worked at GitHub for two years, working

3:18

on Ruby on Rails. And that was super fun.

3:21

Like a great company, great people, huge code base,

3:23

learned a lot there. That was really fun. But

3:25

yeah, I left after about two years to work

3:27

full time on Simple Poll. So Simple Poll had

3:29

been running as a side project kind of in

3:31

the background. And actually it's interesting, thinking back

3:34

on the order of events: Microsoft

3:36

acquired GitHub while

3:39

I was there. And then suddenly

3:41

all of my colleagues started talking about

3:43

buying boats and leaving the

3:45

company. And I thought, hmm,

3:48

I don't quite have boat money,

3:51

but how can I, what's an ace I might have

3:53

up my sleeve? And it was Simple Poll, which had

3:56

got like tons of users, but I never monetized

3:58

it. So I set out to monetize it, and

4:00

then a year later, it was actually bringing

4:02

in more revenue than my salary at GitHub.

4:04

So I decided to quit. So

4:06

that's kind of the Simple Poll backstory. The

4:08

simple poll is a Django app, reasonably sized

4:11

now, a bunch of people working on it.

4:13

And then yeah, at some point in the

4:15

journey of building simple poll, I kind of

4:17

started playing around with Kolo. So Kolo also

4:19

kind of just like simple poll started as

4:21

a side project. But now not to make

4:23

polls in Slack, but instead to improve my

4:25

own developer experience building simple poll, so kind

4:27

of built it as my own

4:29

tool for making Django working with

4:31

Django more fun, give me more insight, give

4:34

me access to some of the data that

4:36

I felt was so close, but that I

4:38

had to just like manually get in there

4:40

and print out. So the reason Kolo started

4:43

out as supporting just Django and VS code

4:45

is because that's what I was using. And it was

4:47

an internal side project. And now

4:50

actually handed over simple poll to a

4:52

new CEO, I'm no longer involved day

4:54

to day and I'm working full time

4:56

on Kolo. Congratulations

5:00

on like multiple levels. That's awesome. Thank you. Yeah.

5:02

I want to talk to you a bit about

5:04

simple poll for just a minute. But before then,

5:06

you pointed out like, look, I made this side

5:08

project. And how many hours are

5:10

you spending on it? A week, maybe? Oh, honestly,

5:13

like, so this was like right at the beginning,

5:15

like when it was first started. I yeah, it's

5:17

a good topic. It's a good question. That I

5:20

always joke that the best thing about my management

5:22

degree was that I had a lot of free

5:24

time to like, do build side projects. Honestly, I

5:26

think it could have been like 20, 30, 40 hours

5:28

a week. Yeah, yeah, that was that. Yeah, I

5:30

think yeah, it definitely varied week to week and then later

5:32

on. Yeah. And then while I was working, but I had

5:35

a full time job as a software engineer. Yeah, that was

5:37

a lot tougher. It was like nights and weekends, rarely

5:39

had energy during the week to work on it. And

5:42

Then honestly, like since it was a real

5:44

project with real users, I ended up spending

5:46

a lot of the weekend doing like support

5:49

like support stuff. Yeah, absolutely. Support and like,

5:51

then you charge and then now you have

5:53

finance stuff and like legal stuff to do

5:55

so that wasn't super fun. Really

5:57

slows down the features and the creation of new stuff.

6:00

Luckily, for me, I would say

6:02

I probably spend fully fifty percent of

6:05

my full time job doing email support

6:07

back and forth. Sorry, there's like,

6:09

there's tons of people taking courses plus

6:11

I guess, sending in questions and

6:14

thoughts, and, you know, it is

6:16

awesome, but it also is

6:18

really tricky. So the reason I ask

6:20

is I always find it fascinating. You'll see

6:23

like news articles or a clickbait

6:25

video where this person makes three

6:27

times their job working ten hours a

6:29

week on this other thing. If you make

6:32

three times what you make at your job, why

6:34

are you doing your job, right? But the

6:36

ability to say you can make that step

6:38

where you go from, I put in nights,

6:40

extra time, and squeezed-in weekends, to

6:42

full time, full energy. The thing you're

6:44

doing well on like a very thin

6:46

life support of time you've given it, is it

6:48

going to do better with full time and energy? Is it just,

6:50

of course it will, or is it a more uncertain

6:52

thing? I actually have a lot of thoughts

6:54

about this. Maybe I should write something about

6:56

this at some point, but yeah I actually

6:58

think running like a bootstrapped side project and

7:00

or business while you have a job

7:03

can be really good because it really

7:05

forces you to prioritize and build the

7:07

most important things. There's still... I haven't

7:09

let go of the idea of, maybe,

7:11

that someday. You'll be

7:13

real tired, I tell ya. Yeah. And like I said, I

7:15

think it really forces you to prioritize.

7:17

So I actually don't recommend, when, for

7:19

example, people ask me for advice, like, should I quit

7:22

my job to go all in or not I

7:24

actually mostly think there's a lot of nice

7:26

stability that comes from having a

7:28

job. Plus it's actually really nice to have coworkers,

7:30

it's nice to have structure like you actually

7:32

need to create all of that when you work

7:34

on your own thing yourself. Like,

7:36

you have to make your own structure like

7:38

if you're building your own thing, and that can

7:40

actually be a bit tricky. Like I really struggled

7:43

with that at the beginning.

7:45

There's something to be said for, yes, we're spending

7:47

like limited time on something basically and and

7:49

prioritizing just the most interesting angle. And

7:51

I don't at all disagree with that.

7:53

So for me it was interesting. like in

7:55

terms of, like, how much, like, the

7:57

life support energy you put in versus

7:59

like full time. It was

8:01

growing decently like while I was still at

8:03

GitHub and I thought, okay, I'm going,

8:05

I'm going all in full time and if

8:07

I go from like ten hours a week

8:10

or less to like forty hours a week

8:12

that would probably 4x the growth rate

8:14

as well. That's how it works, right? Like,

8:16

I was counting on it. Well, it

8:18

didn't work. In fact like the month after

8:21

I left I had like my first down

8:23

months where like the revenue decreased and I

8:25

was like wait a minute what's going on

8:27

here? How? This doesn't make any sense. That's

8:29

not fair. So I think that also

8:31

points to the fact that, yeah, you can

8:33

definitely spend more hours on something and it

8:36

can be like the wrong things or not

8:38

doubling down on something that's really working So

8:40

but overall obviously you at some point like

8:42

just being able to like test out more

8:44

ideas is like really valuable and for that

8:46

like if you only have time to do

8:48

support on your product that's really working well

8:50

and your fulltime job is the rest of

8:53

the hours you spend in your week, then yeah, it

8:55

feels like you should give yourself some time

8:57

to build features, and maybe quit the job.

8:59

Yeah. You also have an interesting point about

9:01

the discipline, because not everyone is gonna get

9:03

up at eight o'clock sit at their desk

9:05

and think, okay, I can just

9:08

do whatever. And it's its own discipline,

9:10

its own learned skill, in a sense.

9:12

I remember, like, one of the first

9:14

weeks after I was full time on Simple

9:16

Poll, I woke up in the morning and thought,

9:18

well, the money is coming in. I don't need

9:21

to work, I don't have a boss, and I

9:23

just sat in bed and watched YouTube

9:25

videos all day and then I just felt

9:27

miserable at the end of the day like

9:29

this. I was like this is supposed to feel

9:31

great. Why? All this freedom I wanted and dreamed

9:33

about for so long, why does it

9:35

not feel great? It

9:38

also feels like it brings more different

9:40

kinds of responsibility, I guess. So Simple Poll,

9:42

the reason I thought it'd be worth

9:44

talking about a little bit is, Slack's the

9:47

popular platform, and this is based on Django.

9:49

Simple Poll is a full-on Django

9:51

app. And it's funny, a

9:53

little joke, I don't know if you've

9:56

gone through the official Django tutorial, but

9:58

in there you actually make polls in

10:00

the browser. Sometimes people joke, wait,

10:03

did you just turn this into like a

10:05

Slack app? The polls app from the

10:07

getting started tutorial. Yeah, exactly. But

10:09

yeah, like it turned out that like

10:11

polls and then yeah, getting, you know,

10:13

your team more connected and slack and

10:15

more engaged are like things people really

10:18

care about. So it came

10:20

to be that Simple Poll joined the Slack platform, like

10:22

at the perfect time, and has

10:25

just been growing super well since

10:27

then tell people a little bit about what

10:29

it takes technically to make a Slack app.

10:32

I mean, yeah, Slack is not built in

10:34

Python as far as I know. And it's,

10:36

it's probably JavaScript and electron mostly the people

10:38

interact with, right? So what is the deal

10:40

here? It's actually super interesting. So the

10:43

way you build like a Slack app,

10:45

it's actually all backend based. So when

10:47

a user interacts in Slack, Slack sends

10:49

your app, your backend, like a JSON

10:51

payload, saying like this user clicked this

10:53

button, and then you can just send

10:55

a JSON payload back saying,

10:57

all right, now show this message. Now

11:00

show this modal. And they have their own

11:02

JSON based block kit framework where you can

11:04

render different types of content. So you don't

11:06

actually have to think about JavaScript or React

11:09

or any of their stack at all. It's

11:11

basically all sending JSON payloads around and calling

11:13

various parts of the Slack API. So you

11:15

can build a Slack app in your favorite

11:18

language, any kind of exotic language if you

11:20

wanted to. But yeah, I love

11:22

Python. So decided to build it in Python

11:24

and Django. So yeah, actually, building Slack app

11:26

is a really like pleasant experience.
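To make the JSON-in, JSON-out flow Will describes concrete, here is a minimal sketch of a Django view handling a Slack interaction. The URL, field names, and reply text are illustrative only; real apps usually verify Slack's request signature and often reply via the response_url instead.

```python
# Sketch of the backend-only Slack app flow described above.
# Names and the reply shape are illustrative, not Simple Poll's code.
import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def slack_interactivity(request):
    # Slack POSTs interactions as form data with a JSON "payload" field
    payload = json.loads(request.POST["payload"])
    action_id = payload["actions"][0]["action_id"]

    # Reply with Block Kit JSON telling Slack what to render next
    return JsonResponse({
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": f"You clicked *{action_id}*"},
            }
        ]
    })
```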

11:29

What's the deployment back in

11:31

story look like? Is it a, is

11:33

it a pass sort of thing, serverless,

11:36

VMs? At the time, it was Heroku,

11:38

some of the roles running on Heroku.

11:41

And then I think a few

11:43

years ago, we migrated it to AWS.

11:45

You know, it's running on AWS and

11:47

ECS. Nice. Okay. So Docker for the

11:49

win, right on. How does it work

11:51

in Talk Python? And I'm curious,

11:53

what's where are you deployed? It's all

11:55

digital ocean and I have one big,

11:58

like eight, eight CPUs, server

12:00

running, I think, 16 different

12:03

Django apps. Not Django, sorry, Docker

12:05

apps. No, sorry, Docker apps that are

12:07

all doing like, you know, some of

12:10

them share a database that's in Docker.

12:12

And some of them do sort

12:15

of have their own self contained

12:17

pair of like web app

12:19

and database and so on. But it's

12:21

all all Docker on one big server,

12:24

which is fairly new for me. And

12:26

it's glorious, glorious. That's awesome. Very cool.

12:29

All right. So, again,

12:31

congrats on this very, very neat. Let's

12:34

talk colo. Let's do it. I

12:36

first came across this, come

12:38

across it independently twice. Once,

12:43

when the Django chat guys recommended that

12:45

I talked to you because they're like,

12:47

Will's doing cool stuff, you should definitely talk

12:49

to him. Saying a thing for VS

12:51

code is super cool. But also, I

12:53

can't remember there's somebody on your team

12:55

whose social media profile I came across and

12:58

I saw this and like, oh, this

13:00

is this is pretty neat. I

13:02

think we even covered it on the Python bytes

13:04

podcast. Oh, no way. Let's see. Yeah, sure. In January,

13:07

we did. So that's that we talked

13:09

about a little bit. But this just

13:11

looks like such anything. And it's I

13:13

encourage people to who may be interested

13:15

in this to visit kolo.app. Because it's

13:17

a super visual sort of experience of

13:19

understanding your code, right? Would you agree?

13:22

Yeah, I mean, 100%. Yeah, funny thought

13:24

I hadn't really thought that a podcast

13:26

is going to be a hard way

13:28

to describe the visual beauty and magic that

13:30

that Kolo can bring to your code. But yeah, 100%.

13:33

Yeah. So Kolo, like very much started as

13:35

like the idea of, hey, like, I should

13:37

be able to see like how my code

13:39

actually flows. I think like all of us

13:41

as we build software, as we write our

13:43

Python code, we have this kind of like

13:45

mental model of how all the different functions

13:47

like fit together, how like a bit of

13:49

data ends up from like the beginning, like

13:51

to the end, like it passes through maybe

13:53

a bunch of functions, it passes through a

13:55

bunch of like classes, a bunch

13:58

of loops, all the state gets like modified. And

14:00

we have this kind of like mental picture of all

14:02

of that in our head and the

14:05

kind of very beginning of Kolo

14:07

the question I asked myself was like is there

14:09

a way we can just like Visualize that is

14:11

there a way we can just actually print that

14:13

out onto a screen So if you go to

14:16

Kolo app It kind of looks like this funny

14:18

sun chart with like lots of it kind of

14:20

a sunny tree chart with lots of nodes, nodes

14:23

going from the center and like going off

14:25

into the distance Which I think is like

14:27

yeah similar to like what folks kind of

14:29

might already have in their head about like

14:31

how the code flows maybe another way to

14:33

describe it is imagine like

14:36

you enable a debugger

14:39

at the beginning of every function and

14:41

at the end of every function in

14:43

your code and you print out like

14:46

What was the function name? What were the input

14:48

arguments? What was the return value and then you

14:50

arrange all of that in a graph that then

14:52

shows which function called which other function? It almost

14:54

looks like what you get out of profilers, right?

14:56

You know where you say like, okay this function

14:59

took 20% but if you expand it out I'll

15:02

say well really spend 5% there 10% there and

15:04

then a bunch of it and you go kind of

15:07

Converse that 100% you know, I'm guessing you're

15:09

not really interested in how long it took

15:11

although Maybe you can probably get that out

15:13

of it. It's the important is more. What

15:15

is the dependency? What are the variables being

15:18

passed and like understanding individual behavior, right or

15:20

maybe yeah, what do you think? Yeah, 100%

15:22

I think like it's interesting because Kolo actually

15:24

uses under the hood like a bunch of

15:27

the Python profiling API's and people

15:29

often think of Kolo as a profiler. We do actually

15:31

have a traditional profiling based

15:33

chart which puts the timing at the center

15:36

But you're absolutely right that the focus of

15:38

our like main chart the one that we're

15:40

both looking at that Has

15:43

like this idea of the function overview and

15:45

like which function calls which the idea there

15:47

is like absolutely the hierarchy and seeing Like

15:50

giving yourself that same mental model that someone

15:52

who's worked on a code base for three

15:54

months has in their head Immediately

15:56

like yourself by just looking at it This

16:00

portion of TalkPython to me is brought to you

16:02

by Sentry. Code breaks. It's a

16:04

fact of life. With Sentry, you can fix

16:07

it faster. As I've

16:09

told you all before, we use Sentry

16:11

on many of our apps and APIs

16:13

here at TalkPython. I recently used Sentry

16:15

to help me track down one of

16:17

the weirdest bugs I've run into in

16:20

a long time. Here's what happened. When

16:22

signing up for our mailing list, it

16:24

would crash under a non-common execution path,

16:26

like situations where someone was already subscribed

16:28

or entered an invalid email address or

16:30

something like this. The bizarre part

16:33

was that our logging of

16:35

that unusual condition itself was

16:37

crashing. How is it possible

16:39

for a log to crash? It's

16:42

basically a glorified print statement. Well,

16:44

Sentry to the rescue. I'm looking at the

16:46

crash report right now and I see way

16:48

more information than you would expect to find

16:50

in any log statement. And because it's production,

16:53

debuggers are out of the question. I

16:56

see the trace back of course,

16:58

but also the browser version, client

17:00

OS, server OS, server OS version,

17:02

whether it's production or QA, the email and

17:04

name of the person signing up. That's the

17:06

person who actually experienced the crash, dictionaries of

17:09

data on the call stack, and so much

17:11

more. What was the problem? I initialize

17:13

the logger with the string

17:16

info for the level rather

17:18

than the enumeration.info, which

17:20

was an integer-based enum. So the logging

17:22

statement would crash, saying that I could

17:25

not use less than or equal to

17:27

between strings and ints. Crazy

17:29

town. But with Sentry, I

17:32

captured it, fixed it, and I even helped

17:34

the user who experienced that crash. Don't

17:37

fly blind, fix code faster with

17:39

Sentry. Create your Sentry account now

17:41

at talkpython.fm slash Sentry. And if

17:43

you sign up with the code

17:46

talkpython, all capital no spaces,

17:49

it's good for two free months of Sentry's

17:51

business plan, which will give you up to

17:53

20 times as many monthly events, as well

17:55

as other features. Usually,

17:58

in the way these charts turn out, you

18:00

can notice that there's like points of interest. Like

18:02

there's one function that has a lot of children.

18:04

So that clearly is coordinating like a bunch of

18:06

the work where you can see kind

18:09

of similarities in the structure of some

18:11

of the sub trees. So you know, okay, maybe that's

18:13

like a loop and it's the same thing happening a

18:15

couple of times. So you can essentially,

18:17

I get this overview and then it's

18:20

fully interactive and you can dive in to

18:22

like what exactly is happening. Yeah, is it

18:24

interact? So I can like click on these

18:26

pieces and it'll pull them up. We actually,

18:28

and this is what's, it'll be live by

18:30

the time this podcast goes live. We

18:32

actually have a playground in the browser. This is

18:34

also super fun. We can talk about this one.

18:36

Let me drop you a link real quick. This

18:38

will be at play.kolo.app. So

18:40

with this, yeah, this is super fun

18:43

because this is fully Python just running

18:45

in the browser using Pyodide and like

18:47

WebAssembly. Oh nice, okay. But yeah, so

18:49

this is the fully visual version where

18:51

you can, yeah, it defaults to loading

18:53

like a simple Fibonacci algorithm. And

18:56

you can see like what the

18:58

colo visualization of Fibonacci looks like.

19:00

And you can actually edit the code and see

19:02

how it changes with your edits and all of

19:04

that. We have a couple other examples. Wow, the

19:06

pandas one and the whack-a-mole one are pretty intense.

19:09

They're pretty wild pictures. They look like sort of

19:11

Japanese fans or whatever. You know, the whole paper

19:13

ones. We once had a competition at a conference

19:15

to see who could make like the most fun

19:17

looking algorithm and visualize with

19:20

Kolo. But yeah, like the, it's fun. Like

19:22

visualizing code is really great. That's

19:24

awesome. So this is super cool.

19:27

It's just all from scratch. It's

19:29

besides Pyodide here, not like

19:31

VS Code in the browser or

19:33

anything like that. I think it's using Monaco in

19:36

this case or CodeMirror. But otherwise this is all,

19:38

it's Pyodide and a little bit of React

19:40

to like pull kind of the data together. But

19:43

yeah, we're really, yeah. It's otherwise homemade.

19:45

This is kind of like the kind

19:47

of what Kolo has been for like

19:49

the past like two years or so

19:51

has been this kind of side project

19:53

for our simple poll to help

19:55

like just visualize and understand code better. The

19:58

Simple Poll code base, to be honest, has grown so large that

20:00

like there's parts of it that I wrote like

20:02

five years ago that I don't understand anymore. And

20:05

it's like annoying to get back to that

20:07

and having to spend like a day to

20:09

refamiliarize myself with everything. It's a lot nicer

20:11

to just like to actually kind of explain

20:13

like end to end how it works. You

20:15

install like in a Django project, you install

20:18

Kolo as a middleware. And

20:20

then as you just browse

20:22

and use your Django app and

20:24

make requests, traces get saved. So

20:26

Kolo records these traces, and they can

20:28

actually get saved in a local SQLite

20:30

database, then you can view the traces, which

20:33

includes the visualization, but also like lots of

20:35

other data like you can actually see in

20:37

the version you have there, like we show

20:40

every single function call like the inputs and

20:42

outputs for each function call. So

20:44

that main idea of code is like really show you

20:47

everything that happened in your code. So

20:49

in a Django app, that would be like the

20:51

request, the response, like all the headers, every

20:53

single function call input and output, outbound

20:56

requests, SQL queries as well. So they're

20:58

really the goal is to show you everything.
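For anyone who wants to try that, the Django setup is roughly the following. This is a sketch based on Kolo's documented middleware integration; check the current Kolo docs for the exact names and storage location for your version.

```python
# settings.py -- sketch of enabling Kolo in local development only.
# Install first with: pip install kolo
MIDDLEWARE = [
    "kolo.middleware.KoloMiddleware",  # near the top so it sees the whole request
    "django.middleware.security.SecurityMiddleware",
    # ... the rest of your middleware ...
]
# After browsing your app, traces (requests, responses, function calls,
# SQL queries) are saved locally in a SQLite database under a .kolo/
# directory and can be opened from the VS Code extension.
```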

21:00

You can view these stored traces either through

21:02

VS code. And this is also will be

21:05

live by the time this episode goes live

21:07

through like a web middleware version, which is

21:09

a bit similar to Django debug toolbar. Not

21:11

sure if you've played around much with Django

21:14

debug. Yeah, a little bit. Yeah. And those

21:16

things are actually pretty impressive. Right? I played

21:18

around with that one and the Pyramid one.

21:20

And yeah, you can see more than I

21:22

think you would reasonably expect from just

21:24

a little thing on the side of your

21:26

web app. Yeah, exactly. And that's very much

21:28

our goal to like very kind of deep

21:30

insight. In our minds, this is almost like

21:32

a bit like old news, like we've been

21:34

using this for like a few years, basically.

21:37

And then at some point, like last year,

21:39

we started playing around with this idea of

21:41

like, okay, so we have this trace that

21:44

has information about like pretty much everything that

21:46

happened in like a request. Is there any

21:48

way we could use that to solve this

21:50

like reasonably large pain point for us, which

21:52

is like writing tests? I'm actually curious.

21:54

Do you enjoy writing tests? I'll tell you

21:57

what I used to actually, I used

21:59

to really I used to enjoy writing tests and I used

22:01

to enjoy thinking a lot about it. And

22:04

then as the projects would get bigger, I'm like, you know,

22:06

this is, these tests don't

22:08

really cover what I need them to cover anymore.

22:11

And they're kind of dragging it down. And then,

22:13

you know, the thing that really kind of knocked

22:15

it out for me is I'd have like teammates,

22:17

they wouldn't care about the test at all. So

22:19

they would break the test or just write a

22:22

bunch of code without tests. And I felt kind

22:24

of like a parent cleaning up after kids. You're

22:27

like, why is it so, can

22:29

we just pick up like, why are there dishes here? You know,

22:31

I was just going around and like, this is

22:33

not what I want to do. Like I want

22:35

to just write software and like, I understand the

22:37

value of tests, of course. A hundred percent. Yeah.

22:40

At the same time, I feel like

22:42

maybe higher order integration tests often, for

22:44

me at least, serve more

22:47

value because it's like, I could write 20

22:49

little unit tests or I could write two

22:51

integration tests and it's probably gonna work. I'm

22:53

actually completely with you on that. Okay. So

22:56

the bang for the buck of integration tests

22:58

are like great, like really, really useful.

23:01

You can almost think of tests as having

23:03

like two purposes, one being like, well, actually,

23:05

I think this would be too simple an

23:08

expression. Let me not make grand

23:10

claims about all the uses of tests.

23:12

I think the use of it that

23:14

most people are after is this idea

23:16

of like, what I've built isn't

23:18

going to break by accident. Yeah. Like

23:21

you want confidence that any future change

23:23

you make doesn't impact a bunch of

23:25

unrelated stuff that it's not supposed to

23:27

impact. I think that's what most people

23:29

are after with tests. And

23:31

I think for that specific desired result,

23:34

like integration tests are the way to

23:36

go. And there's some cool

23:38

writing about this from, I wrote a

23:40

little blog post about Kolo's test generation

23:42

abilities. And in there, I

23:44

linked to a post from Kent C. Dodds from

23:47

the JavaScript community who has a great post about,

23:49

I think it's called Write Tests, Not

23:51

Too Many, Mostly Integration, kind of after this

23:53

idea of like... nice. Eat food,

23:56

not too much, mostly vegetables. I think that's

23:58

the... Yeah, exactly. Exactly. Yeah. I'm a big

24:01

fan of that. And actually, it's interesting. I've

24:03

been speaking to a bunch of folks over

24:05

the past year about tests. A lot of

24:07

engineers think about writing tests as vegetables.

24:10

And obviously, some people love vegetables. And

24:13

some of us love writing tests. But

24:15

it seems like for a lot of

24:17

folks, it's kind of like a obviously

24:19

necessary part of creating great software. But

24:21

it's maybe not like the most fun

24:24

part of our job. Before you pick

24:26

up some project, you're a consultant, or

24:28

you're taking over some open source project,

24:30

you're like, this has no tests. Right. It's

24:32

kind of like running a linter. And it says there's a thousand

24:34

errors. You're like, well, we're not going to do that. Yeah,

24:37

we're just not going to run the linter against it.

24:39

Because it's just too messed up at this point, right?

24:41

It's interesting you mentioned the picking up like a project

24:43

with no tests, because I think within

24:46

the next three months, we're not quite there yet.

24:48

But I think in the next three months with

24:50

Kolo's test generation abilities, we'll have a

24:52

thing where all we need is a Python

24:54

code base to get started. And then we

24:56

can bring that to like a really respectable

24:58

level of code coverage, just by

25:00

using Kolo. Okay, how? I was

25:04

kind of describing a second ago how like

25:06

Simple Poll has tons of integration tests. Simple Poll

25:08

actually is about 80,000 lines

25:10

of application code, not including migrations

25:12

and and like config files. And

25:15

then it's about 100,000 lines of tests. And

25:18

most of that is integration tests. So Simple Poll is

25:20

very well, very well tested, with lots of really

25:23

mostly integration tests. But it is always

25:25

a bit of a chore to like write them.

25:28

So we started thinking about like, Hmm, this like

25:30

Kolo tracing we're doing, can that help

25:32

us with making tests somehow? And

25:35

then we started experimenting with it. And like

25:37

to our surprise, it's actually a Yeah, I'm

25:39

still sometimes surprised that it actually works. But

25:41

basically, the idea is that if you have

25:43

a trace that has that captures everything

25:46

in the request, you

25:48

can kind of invert it to

25:51

build a integration test. So let

25:53

me give an example of what

25:55

that means. The biggest challenge we

25:57

found with creating integration tests is

25:59

actually the test data setup.

26:02

So getting your application into the right

26:04

shape before you can send a request

26:06

to it or before you can call

26:09

a certain function. That's like kind of

26:11

the hardest part. Writing the asserts is

26:13

almost like easy or even like fun.

26:15

Right, there's the three A's of unit

26:18

testing. Arrange, Assert, Act... wait: Arrange, Act,

26:20

Assert. Exactly. Yeah, the first and the

26:22

third one that you kind of have

26:24

data on, right? Exactly. Yeah. So we're

26:27

like, wait a second, we actually can

26:29

like kind of extract this like

26:31

the act, so like the setting, sorry, the

26:33

Arrange, setting up the data, the act, like

26:35

actually making the HTTP request and then

26:38

the assert like to ensure the status change or

26:41

that the request returned a 200 or something. We

26:43

actually have the data for this. It's

26:45

reasonably straightforward. Like if you capture in, you

26:48

know, just your like normal, like imagine you have

26:50

a local to-do app and you browse like a

26:53

to-do kind of demo, simple to-do app and you

26:55

browse to the homepage and the homepage maybe lists

26:57

the to-dos. And if you've got Kolo enabled, then

26:59

Kolo will have captured the request, right? So like

27:01

the request went to the homepage and it returned

27:04

to 200. So that's already

27:06

like two things we can now turn into

27:08

code in our integration test. So first step

27:10

being, well, I guess this is the act

27:12

and the assert in the sense that the

27:14

assert is the 200 and then the act

27:16

is firing off a request to the homepage.

27:18

Now the tricky bit, and this is where

27:20

it gets the most fun, is the range.

27:23

So if we just put those two

27:25

things into our test in our to

27:27

imaginary test, there wouldn't have been any

27:29

to-dos there, right? So it's actually not

27:31

an interesting test yet. But in your

27:34

local version where the trace was recorded,

27:36

you actually had maybe like three to-dos

27:38

already in your database. Does that make

27:40

sense so far? Yeah, yeah, absolutely. On

27:42

the homepage, like your to-do app might

27:44

make a SQL query to like select

27:46

all the to-dos or all the to-dos

27:49

for the currently logged in user. And

27:51

then Kolo would store that SQL query,

27:53

would store that select and would also

27:55

store actually what data the database returned.

27:57

This is actually something where Kolo

27:59

goes... beyond a lot of the existing

28:01

kind of like debugging tooling that might

28:04

exist, like actually showing exactly what data

28:06

the database returned in a given SQL

28:08

query. But imagine we get like a

28:10

single to do returned, right? We now

28:12

know that to replicate this like trace

28:14

in our test, we need to

28:17

start by feeding that to do into

28:19

the database. That's where like the trace

28:22

inversion comes in. If like a request

28:24

starts with a select of like

28:26

the to do table, then the first thing that needs

28:29

to happen in the integration test is actually a

28:32

like creating like an insert into the database

28:34

for that to do. And now when you

28:36

fire off the request of the homepage, it

28:38

actually goes through your real code path where

28:40

like an actual to do gets loaded and

28:42

gets printed out onto the page. So that's

28:44

like the most basic kind of

28:46

example of like, how can you turn

28:49

like a locally captured trace of a

28:51

request that like made a SQL query

28:53

and return 200 into

28:55

like an integration test? Yeah, that's awesome.
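As a rough illustration of that inversion, a generated test for the to-do homepage example might look something like this. The model, field, and URL names are hypothetical, and this is hand-written to show the shape, not literal Kolo output.

```python
# Sketch of the arrange/act/assert structure recovered from a trace.
# Todo, its fields, and the "/" URL are made-up example names.
from django.test import TestCase
from todos.models import Todo

class TestTodoHomepage(TestCase):
    def test_homepage_lists_todos(self):
        # Arrange: recreate the row the captured SELECT returned
        Todo.objects.create(title="Buy milk", done=False)

        # Act: replay the captured request through the full Django stack
        response = self.client.get("/")

        # Assert: the trace recorded a 200 and the to-do appearing on the page
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "Buy milk")
```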

28:57

One of the things that makes me want to

28:59

write fewer unit tests or

29:01

not write a unit test in a certain case is

29:04

I can test given using

29:06

mocking given my let's say

29:09

SQL alchemy or beanie or

29:11

whatever Django RM model theoretically

29:14

matches the database, I

29:16

can do some stuff, set some values and

29:18

check and that that's all good. But in

29:20

practice, if the shape, the schema,

29:22

in the database doesn't match the shape of

29:24

my object, right, the system freaks out and

29:26

crashes and says, Well, that's not going to

29:28

work, right? There's no way. And so it

29:30

doesn't matter how good I mock it out. It

29:32

has to go kind of end

29:34

to end before I feel very good about it.

29:37

Oh, yeah, okay, it's gonna really, really work, right?

29:39

Exactly. It's an interesting story, like you're saying to

29:41

like, like, let's actually see if we can just

29:43

create the data. But like,

29:45

let it run all the way through, right? I'm

29:47

totally with you. And and I think I've often

29:49

seen like unit tests pass and, I mean,

29:51

there's like lots of memes about this, right? How

29:53

like unit tests say everything is good, but the

29:55

server is down. Like, how's that possible? I think

29:58

in Django world, it's reasonably common to write

30:00

integration tests like this, as in like

30:02

the actual database gets hit, you have

30:04

this idea of like the Django test

30:06

client, which sends like a real

30:09

in air quotes HTTP request through

30:11

the entire Django stack, as opposed to doing

30:13

the more unit test approach. So it hits

30:16

the routes, it hits like all the that

30:18

sort of stuff all the way in and

30:20

the template. Yeah. Yep. And then at the

30:22

end, you can assert based on like the

30:24

content of the response, or you can check

30:27

like imagine if we go back to the

30:29

to do example, if we're testing like the

30:31

add to-do endpoint or form submission, then

30:34

you could make a database query at the end.

30:36

And Kolo actually does this as well. Because like,

30:39

again, we know like that you inserted a

30:41

to do in your request. So we can

30:43

actually make an assert this is a different

30:45

example of the trace inversion. If there's an

30:48

insert in your request that you've captured, then

30:50

we know at the end of

30:52

the integration test, we want to assert that

30:54

this row now exists in the database. So

30:56

you can assert at the very end to

30:58

say, does this row actually exist in the

31:00

database now. So it's a very nice kind

31:02

of reasonably end to end, but still integration

31:04

test. It's not like a brittle click around

31:07

in the browser and kind of hope for

31:09

the best kind of thing. It's like, as

31:11

we said, I think like integration tests just

31:13

get you great back for your buck. They

31:15

really do. It's like the the

31:18

80/20 rule of unit testing for sure. Yeah.
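The form-submission case runs the inversion the other way: a captured INSERT becomes an assertion that the row exists after the request. Again a hypothetical sketch rather than literal Kolo output:

```python
# Sketch of "an insert in the trace becomes an assert in the test".
# Model and URL names are made up for illustration.
from django.test import TestCase
from todos.models import Todo

class TestAddTodo(TestCase):
    def test_add_todo_creates_row(self):
        # Act: replay the captured form POST via the Django test client
        response = self.client.post("/todos/add/", {"title": "Write tests"})

        # Assert: the status seen in the trace, plus the row the INSERT created
        self.assertEqual(response.status_code, 302)
        self.assertTrue(Todo.objects.filter(title="Write tests").exists())
```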

31:21

Talk Python to me is partially supported by

31:23

our training courses. If you're a regular listener

31:26

of the podcast, you surely heard about Talk

31:28

Pythons online courses. But have you had a

31:30

chance to try them out? No

31:32

matter the level you're looking for, we have a course for

31:34

you. Our Python for absolute beginners

31:36

is like an introduction to Python plus that

31:38

first year computer science course that you never

31:40

took our data driven web

31:42

app courses build a full

31:44

pypi.org clone along with you right

31:47

on the screen. And we even

31:49

have a few courses to dip your toe in with. See

31:51

what we have to offer at training.talkpython.fm or

31:54

just click the link in your podcast player. So

31:58

is this all algorithmic? Yep. Great

32:00

question. Is it LLMs? Like how much VC

32:02

funding are you looking for? Like, you know,

32:04

like if you got LLMs in there, like

32:07

coming out of the woodwork. No, I just

32:09

kidding. No, how do you, how does this

32:11

happen? It's actually all algorithmic and rule-based at

32:13

the moment. So this idea of a

32:16

select becomes like a, like an

32:18

insert and an insert becomes like

32:20

a select assert. We were

32:22

surprised how far we could get with just rules.

32:25

The benefit we have is that we kind of

32:27

have this like full size, simple poll Django code

32:29

base to play around with. And yeah,

32:31

like generating integration tests in simple poll

32:33

just like fully works. There's a bunch

32:35

of tweaks we like had to make

32:38

to, as soon as I guess you

32:40

work in kind of like outside of

32:42

a demo example, you want like time

32:44

mocking and HTTP mocking and you want

32:46

to use your like factory boy factories.

32:48

And maybe you have a custom unit

32:50

test, like base class and all of

32:52

this. But yeah, it like, it

32:55

actually works now. I gave a talk at DjangoCon Europe

32:57

last year. It's kind of like a bit of a

32:59

wow moment in the audience where yeah, you

33:01

just click generate test and

33:03

it generates you like a hundred line

33:05

integration test. And the test actually passes.

33:07

So that was like people started, just

33:09

started clapping, which was a great feeling.

33:12

I'm still a bit surprised that it works on it,

33:14

but yeah, no LLM at all. I do think like

33:16

LLMs could probably make these tests like even better. Or

33:19

you know how I was saying a second ago, like

33:21

in three months, we could go take

33:23

a code base from like zero test coverage to

33:25

maybe like 60%, 80%. I

33:29

imagine if we made use of LLMs

33:31

that would help make that

33:33

happen. Yeah. Yeah. You could talk to

33:35

it about like, well, these things aren't

33:37

covered. What can we do to

33:39

cover them? Yeah. I don't know if you

33:41

maybe could do fully, fully automated. Just push

33:43

the button and let it generate it. But

33:45

you know, it could also be like a

33:47

conversational, not a conversation, sort of a guided,

33:50

let's get the rest of the test. You don't like, okay, we're

33:52

down to 80. We got 80%, but there's

33:54

the last bit or a little tricky. Like what ones are

33:57

missing? All right. So how do you think we could do

33:59

this? Is that valid? No, no, you need to, that's

34:01

not really the kind of data we're going to

34:03

pass. You know, I don't know. It seems something

34:05

like that, right? I really liked that. I'd not

34:07

thought about like a conversation as a way to

34:09

generate tests, but that makes so much sense. Right.

34:11

It kind of bringing the developer along with them

34:13

where it's gotten too hard or something, you know?

34:16

Yeah. There's something cool about just clicking a button

34:18

and see how much code coverage you could get

34:20

to, but chatting to it, I think also honestly,

34:22

like so far, like our

34:24

test generation logic is a bit of

34:26

a black box. It just

34:28

kind of like works. Yeah. Until the point

34:30

where like it doesn't. So we're actually kind

34:32

of in the, in the process of like

34:35

shining a bit more of a light into

34:37

like, like essentially the like internal data model

34:39

that Kolo keeps track of to know what

34:41

the database state should be like in this

34:44

arrange part of the integration test. And

34:46

yeah, we're actually like in the process of

34:48

like, yeah, talking to a bunch of users

34:50

who are already using it and also finding

34:52

like companies who want to increase

34:55

their, increase their test coverage or who have

34:57

problems with their testing and want to improve

34:59

that and kind of working closely with them

35:01

to make that happen. That's kind of

35:03

a huge focus for us as we figure out like,

35:05

how do we want to monetize Kolo? Like so

35:07

far, Kolo has been kind of supported by Simple Poll

35:10

as a, as a side project, but we're kind of

35:12

making it real, making it its own business. So,

35:15

and we think the test generation is going

35:17

to play a big part in that. Right.

35:19

Like that could be a, certainly a premium

35:21

team feature sort of thing. Exactly. Yeah. Yeah.

35:23

Yeah. Enterprise and priority support version comes with auto

35:26

testing. Yeah, exactly. Something

35:28

like that. Yeah. Yeah. If there's anyone listening and

35:30

like they're keen to increase their code coverage, please email

35:32

me. Maybe we can leave my email in the, in

35:34

the notes or something like that. Yeah. I'll put your

35:36

contact info on the show notes for sure. It's actually

35:38

really nice. It's just w at kolo dot app. Oh,

35:41

very nice. So yeah, if anyone's listening and wants to

35:43

kind of like increase their code

35:45

coverage or has a lot of code bases that

35:47

have zero coverage that would benefit from getting to

35:49

like some level of coverage, we'd love to help

35:51

you and talk to you. Even if the solution

35:53

doesn't like involve using Kolo, just really, really keen

35:55

to talk to anyone about like Python tests and,

35:58

and what can be done there. So. Yeah

36:00

please hit me up awesome i'll definitely

36:02

put some details in the show notes

36:04

for that i have some questions as

36:06

well yes right here i'm looking at

36:08

the webpage and the angle bracket title

36:11

is Kolo for Django, but in

36:13

the playground thing you sent me

36:15

it was on plain Python code,

36:17

it was on algorithms it was on pandas

36:19

which i thought was pretty interesting how much

36:21

you could see inside pandas. makes

36:23

me wonder you know if you look at the

36:26

web frameworks, there's two, three more that are pretty popular

36:28

out there and they all support middleware. Yeah hundred

36:30

percent. So Kolo kind of started as like this

36:32

like side project for our Django app, and I

36:35

think that that's why we kind of went there

36:37

first kind of the audience we know best. Lily

36:40

as well? Yeah, exactly, dogfooded. Lily,

36:43

who's who's an engineer on the team

36:45

who is and has been building a

36:47

lot of yeah a lot of the

36:49

Python side of Kolo, is like a

36:51

core contributor to Django. So Django is

36:54

like really where we're at home. And I

36:57

think when building a new product it's

36:59

kind of nice to keep the audience

37:01

somewhat small initially keep like building for

37:03

very specific needs as opposed to going

37:05

like very wide very early that was

37:07

kind of very much very much the

37:09

intention. But there's no reason why Kolo

37:11

couldn't support FastAPI. The

37:14

scientific python stack as you can

37:16

see in the playground it does

37:18

totally work on plain Python;

37:20

it's really just a matter of

37:22

honestly, like, FastAPI support would

37:24

probably be like a forty line config

37:26

file, exactly, in like our

37:28

code and there's actually yeah we're

37:30

thinking of ways to make that actually a

37:32

bit more pluggable as well. There's only like

37:35

so many things we can reasonably support well

37:37

ourselves. I was gonna say, somebody else out

37:39

there has an open source project they want

37:41

it to have good support for this right

37:43

like, hey, exactly, I run HTTPX or I

37:45

run Litestar or whatever, and I want

37:48

mine to look good here too, right? Totally.

37:50

So the thing you can do already today

37:52

is a little bit of config you can

37:54

pass in and actually if you look back

37:56

on the pandas example you'll see this by

37:59

default Kolo actually doesn't show you library code,

38:01

if you use it in your own code base.

38:03

But you can tell it show me everything that

38:05

happened, like literally everything. And then it will it

38:07

will do that for you. So in this example,

38:10

you're looking at or if anyone's looking at the

38:12

playground, if you look at the pandas example, it

38:14

will say like include everything in pandas. And that'll

38:16

give you like a lot more, a lot more

38:18

context. The thinking there is that most people don't

38:20

really need like the issues you're going to be

38:23

looking at will be in your own code or

38:25

you're in your own company's code base. You don't

38:27

really need to look at the abstractions, but you

38:29

totally can. But yeah, to answer

38:31

the question, like we have this like

38:33

internal version of a plugin system where

38:36

yeah, like anyone could add fast API

38:38

support, or like a great insight into

38:40

PyTorch or what have you. The way

38:42

it all works technically really is it's

38:45

totally built on top of this Python API called

38:47

set profile. I'm not sure. Have you used have

38:49

you come across this before? It's a bit similar

38:51

to set trace actually. Yeah, I think so. I

38:53

think I've done it for some, some

38:56

C profile things before. I'm not

38:58

totally sure. Yeah, yeah, it's a

39:00

really neat API, to be honest, because

39:02

Python calls back to your like the

39:04

callback that you register on every function,

39:07

enter and exit. And then Kolo essentially

39:09

looks at all of these functions, enters

39:11

and exits and decides which ones are

39:13

interesting. So the matter of like supporting

39:15

say FastAPI is basically just telling

39:18

Kolo, these are the FastAPI functions

39:20

that are interesting. This is the fast

39:22

API function for for like an HTTP

39:24

request that was served. This is the

39:26

HTTP response. Or similarly for SQL alchemy.

39:28

This is the function where the

39:31

query was actually executed and sent to the database.

39:33

This is the variable which has the query result.

39:35

Like there's a little bit more to it.
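For the curious, the hook being described is sys.setprofile, which invokes a callback on every Python function call and return. A toy version of the idea looks like this; Kolo's real tracer is far more selective and efficient, so this is only to show the shape of the API.

```python
# Toy sys.setprofile example: record function entries/exits with arguments
# and return values. Not Kolo's implementation.
import sys

events = []

def profiler(frame, event, arg):
    if event == "call":
        events.append(("enter", frame.f_code.co_name, dict(frame.f_locals)))
    elif event == "return":
        events.append(("exit", frame.f_code.co_name, arg))

def add(a, b):
    return a + b

sys.setprofile(profiler)
add(2, 3)
sys.setprofile(None)

print(events)
# e.g. [('enter', 'add', {'a': 2, 'b': 3}), ('exit', 'add', 5)]
```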

39:37

And I'm definitely like, yeah, generalizing, but it's

39:40

kind of like in principle, it's as simple

39:42

as that. It's like telling Kolo, here's the

39:44

bits of code in a given library that

39:46

are interesting. Now just kind of like display

39:48

that and make that available for the for

39:50

the test generation. Excellent. Yeah, I totally agree

39:52

with you that getting focused probably

39:54

gets you some more full

39:56

attention from the Django audience and the Django

39:58

audience is quite a large and

40:01

influential group in the Python web space. So

40:03

that makes a ton of sense, especially since

40:05

you're using it. By the way, it was

40:07

Lily's mastodon profile, I believe, that I ran

40:10

across that I first discovered Kolo from. So

40:12

of all the places, yeah, or a

40:15

post from her or something like that. That's awesome. Cool,

40:17

all right. So let's talk about a couple other things here. For

40:20

people who haven't seen it yet, like you get

40:23

quite a bit of information. So if you see

40:25

like the get request, you actually

40:27

see the JSON response that

40:30

was returned out of that request and it

40:32

integrates kind of into your editor directly, right?

40:34

If you've seen CodeLens before, it's kind of

40:36

like CodeLens, right? Yeah, this is another thing

40:39

which I think is pretty novel with Kolo.

40:41

Like I think it's reasonably common

40:43

for existing debugging tools to show you

40:45

like, oh yeah, this is the headers

40:47

for the request, or this is like

40:49

the response status code. But especially working

40:51

with the Slack API in Simple Poll,

40:54

you're constantly looking at payloads and what were

40:56

the values for things and what are you

40:58

returning in production. You don't directly get to

41:01

even make those or receive those requests, right?

41:03

There's some like system in Slack

41:05

who was like chatting with your thing. You're like,

41:07

well, what is happening here, right? Not that you

41:09

would actually run this in there, but you know.

41:12

I mean, it's funny you mentioned this because there

41:14

is one experiment we wanna

41:16

run of kind of actually enabling these

41:18

extremely deep and detailed Kolo traces in

41:21

production. We haven't explored this too much

41:23

yet and I think we're gonna focus a little bit

41:25

more on the test generation, but you

41:27

could imagine like a user who's using,

41:30

who's on the talk Python site and they've

41:32

got some incredibly niche error

41:34

that no one else is

41:36

like encountering and you've tried to reproduce

41:39

it, but you can't reproduce it. Maybe

41:41

there's a little bit of information in like your

41:43

logging system, but it's just not enough and you

41:45

keep adding more logging and you keep adding more logging

41:47

and it's just not helping. Like

41:49

imagine a world where you can say, just

41:52

for that user, like enable Kolo and enable

41:54

like these really deep traces and

41:56

then you can see whenever the user

41:58

next interacts. the value

42:00

for every single variable for every single

42:02

code path that executed for that user.

42:04

That's just like, yeah, I think one

42:06

of our users discovered is like a

42:08

debugger on steroids. Yeah, it's pretty interesting.

42:11

Sounds a little bit like what you get

42:13

with Sentry and some of those things, but

42:16

maybe also a little bit different. So,

42:18

you know, you could do something like

42:21

here's a dear user with problem. Here's

42:23

a URL. If you click this, it'll

42:25

set a cookie in your browser and

42:27

then all subsequent behavior, it just keeps tabs

42:30

on it. You know what I mean? It's

42:32

like recording it. Yeah, that'd be pretty interesting. Yeah,

42:34

I think it makes sense in the case, like

42:36

if a user, it could even be an

42:38

automated support thing, right? Like if a couple

42:40

of sites have this where you can like do

42:43

like a debug dump before you submit your support

42:45

ticket, this is almost like that. And then

42:48

as an engineer who's tasked with digging into

42:50

that user's bug, you don't have to start

42:52

with like piecing together. What was

42:54

this variable at this time when they made that

42:56

request three days ago, you like you can just

42:59

see it. If a user ever encounters an exception

43:01

on your site, you just set the cookie, right?

43:03

And everything else they do, it goes ahead and records

43:05

it until you turn it back off. Oh my

43:07

God, you're giving me so many good ideas. That'd

43:10

be fun, right? Let me start writing this stuff

43:12

down. Hey, let's record it. It'll be fine. That's

43:14

awesome. Yeah, there's a bunch of stuff that's interesting.

43:16

People can check it on the site. It's all

43:18

good. However, we talked

43:21

a little bit about the production thing. Like

43:23

another thing you could do for production, if

43:25

this requires both a decent amount of traffic

43:27

and maybe you could actually pull this off

43:29

on just a single server, but you could

43:31

do like, let's just run this for 1%

43:34

of the traffic so that you don't kill the

43:36

system. But you get, you know, if

43:38

you have enough traffic, like, a statistically

43:41

significant sampling of what people

43:43

do without actually recording a

43:46

million requests a day or something insane. 100%.
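
A rough sketch of that sampling idea, again with a hypothetical tracer object standing in for whatever would record the deep trace; the 1% rate is just the figure from the conversation.

    # Hypothetical sketch only: trace roughly 1% of production requests.
    import random

    SAMPLE_RATE = 0.01  # the "1% of traffic" mentioned above

    class SampledTraceMiddleware:
        def __init__(self, get_response, tracer=None):
            self.get_response = get_response
            self.tracer = tracer  # placeholder for whatever captures the deep trace

        def __call__(self, request):
            if self.tracer is not None and random.random() < SAMPLE_RATE:
                with self.tracer.trace(request):  # hypothetical context manager
                    return self.get_response(request)
            return self.get_response(request)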

43:49

I think there's really something there or like I

43:51

could go on about this whole idea of like

43:53

runtime data and like improving software understanding for days,

43:55

because I just think like it's really just like

43:58

missing layer, right? Like, all of us constantly

44:00

play computer, looking

44:02

at our code, imagining what the values could

44:05

be. But like, yeah, so you're looking

44:07

at some complex function in production and you want

44:09

to understand how it works. Like, how useful would

44:11

it be if you could see

44:13

the last ten times it was called, like what

44:16

were the values going into it and what were

44:18

the values coming out of it? That would

44:20

be... I just think, like, why do we not

44:22

have this already? Like, why does your editor not

44:25

show you, for every single function in the code

44:27

base, examples of like how it's actually used

44:29

in production? Yeah, and then use those to

44:31

generate unit tests, and if there's an error, use

44:33

that to generate a case, like the negative case,

44:35

not the positive case, unit test. Right, there you

44:38

go, exactly. It's all like kind of

44:40

hanging together. Yeah, yeah,

44:42

once you have the data, you have interesting options.
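
As a small illustration of that "last ten calls" idea, here is a hypothetical decorator (not part of Kolo) that remembers recent inputs and outputs so they could later be turned into test cases, including the failing ones. The function name and numbers are invented for the example.

    # Hypothetical sketch: remember the last ten calls to a function so you can
    # inspect real inputs and outputs later, or turn them into test cases.
    import functools
    from collections import deque

    def remember_calls(maxlen=10):
        def decorator(func):
            calls = deque(maxlen=maxlen)

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    result = func(*args, **kwargs)
                except Exception as exc:
                    # Failed calls become candidates for the "negative case" test.
                    calls.append({"args": args, "kwargs": kwargs, "error": repr(exc)})
                    raise
                calls.append({"args": args, "kwargs": kwargs, "result": result})
                return result

            wrapper.recent_calls = calls
            return wrapper
        return decorator

    @remember_calls()
    def price_with_tax(amount, rate=0.2):
        return round(amount * (1 + rate), 2)

    price_with_tax(10)
    price_with_tax(99.99, rate=0.07)
    print(price_with_tax.recent_calls)  # real inputs and outputs, ready to turn into asserts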

44:44

Yeah, business model. I maybe

44:46

should have started sooner with this, but it's not

44:48

entirely open source. There may be little

44:50

bits and pieces of it, but in general

44:52

it's not open source. That's correct, yeah. Yeah, no,

44:54

I'm not putting that out there as a negative, right? This

44:56

looks like a super powerful tool that people can

44:59

use that you wrote and coded, and that's fine.

45:01

Yeah, I think the open source question is

45:03

super interesting. Like, it's always been

45:05

something we thought about or considered. I think,

45:08

yeah, with developer tools I think

45:10

business models are always super interesting, and we

45:12

want to make sure that we can have

45:14

a business model for Kolo and, like, run

45:16

it as a sustainable thing, as opposed

45:19

to it just being like a Simple Poll

45:21

side project kind of indefinitely. It'd be great if

45:23

Kolo could support itself and,

45:25

yeah, have a business model. I think that's how it

45:27

can really fulfill its potential, in a way. But

45:30

that's not to say that Kolo

45:32

won't ever be open source. Like, I

45:34

think there's a lot to be said

45:36

for open sourcing it. I think especially

45:38

the capturing of the traces

45:40

is maybe something I could see

45:42

us open sourcing. I think the open

45:44

source community is fantastic. I do also

45:46

think it's not, like, a thing you

45:48

get for free, right? Like, as soon

45:50

as you say, hey, we're open source,

45:52

you open yourself up to contributions, right,

45:54

and to the community actually getting

45:56

involved, and that's great, but it also

45:59

takes time. And I think that's the

46:01

path I would like to go down when

46:03

we're a little bit clearer on what Kolo

46:05

actually is and where it's

46:07

valuable, if that makes sense. Yeah, sure. If

46:10

it turns out that no one cares about

46:13

how to visualize code, then that's

46:15

a great learning

46:17

for us to have made. But I'd rather

46:19

get there without a lot of work in

46:21

the middle that we could have kind of

46:23

avoided, if that makes sense. For sure. It

46:25

feels like once we have a better sense

46:27

of the shape of Kolo and what the

46:30

business model actually looks like, then we can

46:32

be a bit more, yeah, we can invest

46:35

into open source a little bit more. But

46:37

to be honest, based on how everything's looking

46:39

right now, I would not be surprised at

46:41

all if Kolo becomes open core or

46:44

big chunks of it are open source. It

46:46

makes sense to me. It is fully free

46:48

at the moment. So that's worth calling out.

46:50

There's no cost or anything. You

46:53

can also download the Python package. And guess

46:55

what? You can look at all of the code. It

46:58

actually is all there. It is all kind of

47:00

visible. That kind of leads into the next question, which is,

47:03

I've never used GitHub Copilot and a

47:05

few of those other things because it's

47:08

like, here, check this box to

47:10

allow us to upload all of your code and

47:13

maybe your access keys and everything else that's

47:15

interesting. So we can, one, train our

47:17

models, and two, give you some

47:19

answers. And that just always felt a

47:21

little bit off to me. What's the

47:23

story with the data? At the moment,

47:25

Kolo is an entirely local product. So

47:28

it's all local. You don't have to. You

47:31

can get all of the visualization and

47:33

everything just by using local Kolo in

47:35

VS Code. We do have a

47:37

way to upload traces and share them

47:39

with a colleague. This is actually also something I

47:41

think about. I'm kind of playing with the idea of

47:44

writing a little Kolo manifesto. What

47:46

are the things that we believe in? One of them

47:48

that I believe in, and this goes back to the

47:50

whole runtime layer on top

47:52

of code. And there is this whole

47:54

dimension, this third dimension to code

47:56

that we're all simulating in our heads. I

47:59

think it should be

48:01

totally possible to not just, like, link

48:03

to a snippet of code like on GitHub,

48:05

but it should be possible to have a

48:07

link, like a URL, to a specific

48:09

execution of code, like a specific function, and

48:11

actually talk about that. It's kind of wild

48:13

to me that we don't have this at

48:15

the moment, like you can't send a link

48:17

to a colleague saying, Hey, look at this

48:19

execution. That looks a bit weird. We ran

48:22

this in continuous integration, and it crashed. But

48:24

I don't understand it. Let's look at the exact run,

48:26

the whole deal. You can link to like

48:28

the CI run, you can link to, like, Sentry

48:30

errors. But like if you're just seeing something

48:32

slightly weird locally, or like even something slightly

48:35

like weird in production where there's no error,

48:37

you can't really link to that.

48:39

Anyway, like this is kind of a roundabout

48:41

way of me saying that like, I think

48:43

that totally should be a thing like you

48:45

should be able to link like generically to

48:47

like an execution of a function or an

48:49

execution of a request. And

48:51

like, that would totally have

48:53

to live somewhere, right? So this is where there's

48:55

some idea of, like, a Kolo Cloud coming in. And

48:57

this is where you could, like, connect your

49:00

repository. And then Kolo would, as part

49:02

of that, you know, just like GitHub does

49:04

have access to your code and like show

49:07

you the code in, like, the Kolo Cloud.

49:09

So I think there's definitely like useful things

49:12

that are possible there. But at the

49:14

moment, it's a fully local experience, like

49:16

your code doesn't ever leave

49:18

your system, you can if you want

49:20

to, like, upload traces, and then Kolo

49:23

stores the trace data, not

49:25

the code, just the trace data, but

49:27

a very local experience right now. Yeah,

49:29

little SQLite database. Exactly. Yeah, SQLite is

49:31

pretty awesome. It's an incredible piece of

49:33

software. Yeah, it really really is. Let's

49:35

close out our conversation here with a

49:37

little bit of a request from Michael.

49:39

Right now it's VS Code only. Any

49:41

chance for some PyCharm in there. This

49:43

is our top request like PyCharm support.

49:45

Yeah. And, you know, super small team,

49:47

like we want to kind of support

49:49

everyone. But we've been working very heavily

49:51

actually the past few months on a

49:54

web-based version, which I'm happy

49:57

to say is very much nearing completion.

49:59

And there's a few bits and pieces where,

50:01

like, it's really nice to be integrated super

50:03

deeply into the editor, like the code lenses

50:05

and all of that, and I think

50:07

there's a chance we'll have that for

50:09

PyCharm eventually as well. But we actually found

50:11

that, like, filling out this web version does

50:13

a few things that are actually much nicer

50:15

when you have full control over the

50:18

UI, in terms of like browsing around

50:20

a trace, highlighting little bits of code. So

50:22

for example, in Kolo a given function

50:24

call becomes a frame, and you can look at

50:26

a given frame both in VS Code

50:28

but also in the web version, and see

50:30

the code and see all of the data

50:32

that passed through the code. But something we

50:34

can do in the web version we can't

50:36

do in VS Code is actually show where

50:38

the current function was called from and actually

50:40

show like a preview of that code. In VS

50:42

Code you can't really show, like, a view

50:44

where multiple files come together, or different stacks.

50:46

There's actually a load of, I

50:48

was surprised by how many different novel,

50:50

like, kind of, ways we had

50:53

in the web that we just never

50:55

even considered with like a direct editor

50:57

integration, in terms of displaying this

50:59

runtime data. So, like, long story short,

51:01

you want a PyCharm

51:03

integration? Let me give you something even

51:05

better. Yeah, love it. And so how does that work?

51:07

Like, you run a certain command or something

51:09

when you run your web app, and

51:11

then it just generates a SQLite

51:13

file, and then you can just explore it

51:15

with a web viewer, or what do you do?

51:17

Yeah, it's actually kind of cooler than

51:19

that. So if you're using Django, or

51:21

in the future like other things with

51:23

a typical middleware, you would just

51:25

go to your, you would just

51:27

go to a URL on localhost 8000,

51:29

kind of like you do for,

51:31

say, OpenAPI docs,

51:33

and the whole experience is

51:35

just there. If you're not using a

51:37

middleware, it will have a command

51:39

like kolo serve or something like that,

51:41

and that'll, yeah, host the same

51:43

experience for you.
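
For Django users, the setup being described looks roughly like this; the middleware path is from memory of the Kolo docs, so double-check the current instructions at kolo.app.

    # settings.py sketch; middleware path recalled from the Kolo docs, so
    # verify against the current setup guide at kolo.app.
    DEBUG = True  # Kolo is a development tool, not something to ship to production

    MIDDLEWARE = [
        "kolo.middleware.KoloMiddleware",  # records traces locally as you click around
        "django.middleware.security.SecurityMiddleware",
        "django.contrib.sessions.middleware.SessionMiddleware",
        # ... the rest of your usual middleware ...
    ]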

51:45

Is this off by default, or

51:47

does it only respond on localhost or something

51:49

like that, you know? Yeah, exactly,

51:51

so that people don't ship it on accident. That

51:53

would be bad. Yeah, there's no production use

51:55

of this. Yeah, I mean, people already

51:57

know about the Django DEBUG

51:59

setting. But I guess you could sort

52:01

of layer onto that, right? Probably. Yeah. I think

52:03

we actually do that at the moment, but yeah,

52:05

it's worth remembering. No, I

52:08

just think, you know, like, Oh, this is

52:10

really cool, let's explore it. A hundred

52:12

percent. cnn.com is awesome. Look what it's doing.

52:15

Requests and all this. Yeah, exactly.

52:18

A hundred percent. Yeah. Yeah. Oh,

52:20

and API keys. So interesting. Anyway,

52:22

a side, a side

52:25

conversation. Let's just, um, let's wrap it

52:27

up with a final call to action. People are interested.

52:30

What do they do? Yeah. Kolo.app, and check

52:32

it out. We have a playground link there,

52:34

play.kolo.app, the easiest way to kind of see what

52:36

Kolo is and what Kolo does, but

52:38

we'll say the most powerful way

52:40

to actually see Kolo in action is

52:42

to use it on your own code

52:44

base. So seeing the visualization and the

52:47

test generation capabilities is just like,

52:49

yeah, the most useful when you use it on

52:51

your code base. So hopefully the playground can entice

52:53

that a little bit. And yeah, really the main,

52:55

most important thing for us right now is yeah.

52:57

Chatting to folks who want to increase their test

52:59

coverage, want to like build automated testing

53:01

as part of their workflow and yeah,

53:04

work very closely with you to make

53:06

that happen. So if that's you, please

53:08

email me at w@kolo.app. You

53:10

need that, the W. That's right. Awesome.

53:14

Well, thanks for being on the show. Congrats on

53:16

both of your projects. They look really neat. Thanks

53:18

so much for having me. Yeah. So excited to

53:20

have been on. Yeah, you bet. Bye. Bye. This

53:24

has been another episode of Talk Python to me.

53:27

Thank you to our sponsors. Be sure to check out

53:29

what they're offering. It really helps support the show. Take

53:32

some stress out of your life. Get

53:34

notified immediately about errors and performance issues

53:36

in your web or mobile applications. Just

53:40

visit talkpython.fm slash Sentry

53:43

and get started for free and be

53:45

sure to use the promo code talkpython, all

53:47

one word. Want to level

53:49

up your Python? We have one of the largest

53:51

catalogs of Python video courses over at Talk Python.

53:54

Our content ranges from true beginners to

53:56

deeply advanced topics like memory and async

53:58

and best of all, there's not

54:00

a subscription in sight. Check it out

54:03

for yourself at training.talkpython.fm. Be

54:06

sure to subscribe to the show, open your favorite

54:08

podcast app, and search for Python. We should be

54:10

right at the top. You can also

54:12

find the iTunes feed at slash iTunes,

54:14

the Google Play feed at slash Play,

54:16

and the Direct RSS feed at

54:18

slash RSS on talkpython.fm. We're

54:21

live streaming most of our recordings these days. If

54:23

you want to be part of the show and

54:25

have your comments featured on the air, be sure

54:27

to subscribe to our YouTube channel at talkpython.fm slash

54:30

YouTube. This is your host,

54:32

Michael Kennedy. Thanks so much for listening. I really

54:34

appreciate it. Now get out there and write some

54:36

Python code.
