Performance story time and a new to me tool, 10/10 video
Every once in a while youtube starts suggesting your videos to me and it's always a joy to learn more.
Interesting, looking at the 3.11 docs it doesn't say anything changed in inspect.stack except the return type. Curious that it's so much slower.
The changed return type might just be what's documented 'for the user'. Internally a lot may have changed to produce that new return type.
if you watch to the end or look closely at the profile, it's not actually inspect.stack where the self time happens and the slowdown occurs, but getmodule
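For context on why getmodule dominates: inspect.stack() builds FrameInfo objects and, along the way, can end up scanning sys.modules and reading source files. If all you need is the chain of function names, walking raw frames avoids that entirely. A minimal sketch (the helper name here is made up for illustration):

```python
import sys


def stack_function_names() -> list:
    """Collect caller function names without inspect.stack().

    Walking frames via sys._getframe only touches code objects already
    in memory -- no getmodule / findsource / file reads involved.
    """
    names = []
    frame = sys._getframe(1)  # skip this helper's own frame
    while frame is not None:
        names.append(frame.f_code.co_name)
        frame = frame.f_back
    return names


def outer():
    return inner()


def inner():
    return stack_function_names()
```

Calling `outer()` returns the names starting from the frame that called the helper, so `inner` comes first, then `outer`, then the module frame.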
Thank you for sharing the case!👍
love this stuff. always fascinating how strangely these bugs manifest in production.
12:00 - why do you have `Generator[str, None, None]` instead of `Iterable[str]`?
because it is a generator
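Both annotations would type-check for a generator function; the difference is precision. A small sketch of the two options (function names invented for illustration):

```python
from typing import Generator, Iterable


def lines_gen() -> Generator[str, None, None]:
    # precise: says this is a generator yielding str, accepting None
    # via send(), and returning None -- callers may use send()/close()
    yield "a"
    yield "b"


def lines_iter() -> Iterable[str]:
    # looser: callers only know they can iterate over str values;
    # generator-specific methods like send() are not part of the type
    yield "a"
    yield "b"
```

`Iterable[str]` (or `Iterator[str]`) is fine if callers only iterate; `Generator[str, None, None]` is simply the most specific type for what the function actually returns.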
Great video!! I learn so much from your videos!!
Another Gem. Great work and useful tip. :)
Sampling profilers are also useful for languages that use exceptions as some trace profilers can lose track and end up with either incorrect trees or tree nodes bundling the gap with a name that may as well be ¯\_(ツ)_/¯. I've also seen trace profilers just hang or return zero data as well as nodes in completely the wrong place.
Not capturing all possible call stacks isn't a problem if you are debugging a high CPU problem. A few hundred samples is usually enough.
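The core idea behind a sampling profiler is small enough to show in a toy sketch (assuming a Unix platform with signal timers; real tools like py-spy sample from a separate process instead):

```python
import collections
import signal

samples = collections.Counter()


def _sample(signum, frame):
    # 'frame' is wherever the program happened to be when the timer
    # fired -- count the function name at the top of the stack
    samples[frame.f_code.co_name] += 1


def busy():
    # CPU-bound work for the sampler to catch
    total = 0
    for i in range(3_000_000):
        total += i
    return total


# interrupt the process every 5ms of CPU time and record where it was
signal.signal(signal.SIGPROF, _sample)
signal.setitimer(signal.ITIMER_PROF, 0.005, 0.005)
busy()
signal.setitimer(signal.ITIMER_PROF, 0, 0)  # stop sampling
```

A few hundred samples like these are usually enough to see where CPU time goes, which is exactly why missed call stacks rarely matter for high-CPU debugging.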
It's kinda wild to me that I am still using 3.8 for most projects that need the strongest compute.
Kinda funny how things went the other way with deep learning and HPC
Great video. I would love to see a video on Celery.
I do not recommend using celery so I probably won't be making a video on it
@@anthonywritescode I would still like a video on why celery might not be the best choice. I have used celery quite extensively in production and I too would pick something lighter like dramatiq just because of the number of regressions across three or more separate libraries (kombu, billiard, celery) every version upgrade. Can't really blame the already small team of maintainers, it is a huge project that everyone uses differently.
What do you think about using a continuous profiler for your applications in Production?
I wouldn't do it all the time. a profiler adds significant overhead and most of the time you don't care about performance data
@@anthonywritescode sorry, I meant using a sampling profiler continuously. There is still some slight overhead, but just as with logging you get to see what happens most of the time, as it happens.
at scale "slight overhead" is significant
Any opinions on 'scalene'? It's another profiler I have heard of but not used. Perhaps it is the other profiler type you mentioned?
haven't used it but it is another sampling profiler
I’m not even sure we haven’t worked together, you remind me of my favorite python CS athletes and colleagues.
But I definitely clicked on this because of your shocked face in the thumbnail. Haha, you don’t do that often. Thanks for being authentic, sir!
(I heard the tech details too, just had to compliment)
Any thoughts on this compared to pyinstrument?
nope, haven't used it
I like these real-world problem-solving vids.
love these kind of videos
If Anthony has paid tutorial courses, I will buy every single one of them even if I go broke.
we don't even know what Sentry is or what it does. A little bit of context would have been very helpful
use your favorite search engine -- though if you're a developer I'm surprised you haven't heard of it