What a weird AI-generated blog.
What's the point of hosting a blog with a series of superficial posts? There's no promotion of anything, no personal brand, no advertising, just mediocre writing and AI graphics with no actual benchmarks or code.
Why even bother with sampling profilers in Python? You can do full function traces for literally all of your code in production at ~1-10% overhead with efficient instrumentation.
That depends on the code you're profiling. Even good line profilers can add 2-5x overhead on programs not optimized for them, and you're in a bit of a pickle because the programs least optimized for line profiling are those which are already "optimized" (fast results for a given task when written in Python).
It does not; those are just very inefficient tracing profilers. You can literally trace C programs at 10-30% overhead. For Python you should only accept low single-digit overhead on average, with 10% overhead only in degenerate cases with large numbers of tiny functions [1]. Anything more means your tracer is inefficient.
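To make the tracing-vs-overhead point concrete, here is a deliberately naive function tracer in pure Python using `sys.setprofile` (a sketch only; the names `tracer` and `work` are mine, and real low-overhead tracers hook the interpreter from native code — a Python-level callback like this is exactly the kind of inefficient instrumentation that produces multi-x slowdowns):

```python
import sys
import time
from collections import defaultdict

# Naive function tracer: count calls and accumulate inclusive wall time
# per function name. Every call and return pays the cost of re-entering
# Python, which is why this approach is slow compared to native hooks.
calls = defaultdict(int)
inclusive = defaultdict(float)
stack = []

def tracer(frame, event, arg):
    if event == "call":
        calls[frame.f_code.co_name] += 1
        stack.append((frame.f_code.co_name, time.perf_counter()))
    elif event == "return" and stack:
        name, start = stack.pop()
        inclusive[name] += time.perf_counter() - start

def work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.setprofile(tracer)
work(1000)
work(1000)
sys.setprofile(None)

print(calls["work"])  # 2
```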
[1] https://functiontrace.com/
Interesting, but "FunctionTrace is opensourced under the Prosperity Public License 3.0 license."
"This license allows you to use and share this software for noncommercial purposes for free and to try this software for commercial purposes for thirty days."
This is not an open source license. "Open Source" is a term of art, defined by the Open Source Initiative, that excludes restrictions of this kind; it is not a generic term meaning "source available".
You can also just use perf, but it does require an extra package from the Python build (which uv frustratingly doesn't supply).
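For reference, the usual invocation looks something like this (a CLI sketch, not a tested recipe: it assumes Linux, CPython >= 3.12 with its native perf trampoline support, frame pointers in the interpreter build, and a hypothetical `my_script.py`):

```shell
# -X perf enables CPython's perf trampoline so Python frames show up
# in perf's call stacks; distro/uv builds may lack the needed support.
perf record -F 999 -g -- python -X perf my_script.py
perf report --stdio
```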
perf is a sampling profiler, not a function tracing profiler, so that fails the criteria I presented.
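For contrast with the tracing approach, here is a minimal sampling-profiler sketch (Unix-only, via `SIGPROF`; `busy` is just a stand-in workload I made up): it interrupts the program at a fixed interval and records where it happens to be, so its cost is bounded by the sampling rate rather than by call volume — which is exactly why it can't satisfy a full-function-trace requirement:

```python
import collections
import signal

# Minimal sampling profiler: a SIGPROF timer fires on CPU time and the
# handler records the function the main thread was interrupted in.
# It never observes individual calls, only periodic snapshots.
samples = collections.Counter()

def sample(signum, frame):
    samples[frame.f_code.co_name] += 1

signal.signal(signal.SIGPROF, sample)
signal.setitimer(signal.ITIMER_PROF, 0.005, 0.005)  # ~200 Hz of CPU time

def busy(iters):
    total = 0
    for i in range(iters):
        total += i * i
    return total

busy(5_000_000)
signal.setitimer(signal.ITIMER_PROF, 0)  # stop sampling

print(samples.most_common(1))  # most samples should land in "busy"
```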
I used FunctionTrace as an example and as evidence for my position that tracing Python is low overhead with proper design, to preempt claims like: “You can not make it that low overhead or someone would have done it already, thus proving the negative.” I am not the author or in any way related to it, so you can bring that up with them.
That's... intriguing. I just tested out functiontrace and saw 20-30% overhead. I didn't expect it to be anywhere near that low.
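Overhead figures like that are easy to sanity-check yourself. A rough harness along these lines (all names here are mine; a pure-Python `sys.setprofile` hook is far slower than a native tracer, and a workload of many tiny calls is the degenerate worst case mentioned above):

```python
import sys
import time

def tiny():
    return 1

def workload(n=200_000):
    # Worst case for a tracer: a huge number of tiny function calls.
    s = 0
    for _ in range(n):
        s += tiny()
    return s

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

baseline = timed(workload)

events = 0
def hook(frame, event, arg):
    global events
    events += 1  # even an empty Python-level hook is expensive

sys.setprofile(hook)
traced = timed(workload)
sys.setprofile(None)

# With a native tracer you'd hope for ~1.2-1.3x here; a pure-Python
# hook will be dramatically worse on this call-heavy workload.
print(f"overhead: {traced / baseline:.1f}x")
```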
> The trading system reports an average latency of 10ms
Python is a bad choice for a system with such latency requirements. Isn't C++ or Rust the preferred language at algorithmic trading shops?
Why is Python automatically a bad choice? We've built some turbodbc + pandas code which beats Go when dealing with a logical Azure SQL server for massive analytic flows. I'm not sure if it's faster or slower than Rust, but it's basically C++ fast, though it obviously uses a lot more memory. Fortunately we live in a world where "a lot more" is like $5 a year.
I don't disagree that Python might be the wrong choice for algorithmic trading, but I do think it depends. We did our stuff with turbodbc rather than pyodbc, which is used everywhere else, specifically because we analysed our bottlenecks.
Depends... not all kinds of trading are THAT latency sensitive.
This looks awfully AI-generated.