I remember having to explicitly murder a library's own logging config, because they were logging as plain text and we wanted everything as JSON. It was very annoying until I did some involved introspection on the logging system.
Great video! Can you do a video on "try except" code blocks? I believe it's blurry for a lot of people who aren't working in the industry as far as HOW to use them in a coding project. For example, do you apply them to all of your code logic, or conservatively? An in-depth explanation with examples would help: when you're working in industry and writing code, how exactly would you apply them? What's your thinking framework with it? Thank you very much for the quality content.
One of the most basic concepts of the Python logging module that people don't understand is that configuration of logging must be done by the end user, not by frameworks or libraries. All they are supposed to do is import logging, get a logger, and make logging calls.
Great and concise content, I did not know about dictConfig yet. Thanks! Around 08:40 you mention the weird format string syntax. As an alternative there is the `style` keyword, which accepts "{"; then the format string can be much more like f-strings.
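For readers who haven't seen it, a minimal sketch of that `style` keyword (the logger name here is made up):

```python
import logging

# The `style` parameter selects the placeholder syntax:
# "%" (the default), "{" (str.format), or "$" (string.Template).
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("{asctime} [{levelname}] {name}: {message}", style="{")
)

logger = logging.getLogger("style_demo")  # hypothetical name
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("much more like an f-string")
```

In dictConfig, the same thing is expressed with a `"style": "{"` key on the formatter entry.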
Painful memories of when I wrote an API wrapper and decided to log every POST, PUT and DELETE call by... hand-rolling something that would append objects to a .json file. I'm not sure what I was thinking. I'm sure I'd heard about structured logs before embarking on that...
Yeah, as many mentioned, I just use loguru too. It makes things really simple, without all the boilerplate, which I couldn't remember even after years of using logging.
One important thing you forgot to mention: logging.getLogger(logger_name) returns a singleton for each logger_name. The logic is kind of like:

    if logger_name not in dict_of_all_existing_loggers:
        dict_of_all_existing_loggers[logger_name] = LoggerClass()
    return dict_of_all_existing_loggers[logger_name]

Or in other words: it's memoized. This means that if you use multithreading or multiprocessing, you don't need to pass the logger instance to the worker threads/processes; rather, have each worker fetch the singleton object by invoking logging.getLogger(logger_name).
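A quick runnable check of that claim (logger name is made up; one caveat not in the parent comment: with multiprocessing each spawned process still builds its own instance, so per-process configuration is still needed):

```python
import logging

# getLogger is effectively memoized per name, process-wide.
a = logging.getLogger("myapp")  # hypothetical name
b = logging.getLogger("myapp")
assert a is b  # the very same Logger object

def worker() -> None:
    # Worker threads can simply re-fetch the shared instance by name
    # instead of having the logger passed in.
    log = logging.getLogger("myapp")
    log.debug("same logger, no plumbing required")

worker()
```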
Your video set me on the right course for logging, but the logging system is far cleverer than what you showed. PS: I do watch your videos on most things, as they are extremely helpful.
Typically, if I have a major class that might be used from different contexts, I'll include a formal `logger` kwarg init parameter defaulted to something like logging.getLogger("my_class"), to enable a caller to DI some other logger if they want. Then, in each major function I'll spawn a child as `logger = self.logger.getChild("my_function")`. (This is similar to my philosophy about always passing around a `now` kwarg any time I'm writing time-dependent code, to avoid whole classes of bugs linked to conditions unexpectedly becoming true in the middle of a function by upholding a sort of "fiction" that the entire code execution between two sleeps happens in a single instant.)
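That pattern might look something like this (the class and names are hypothetical; the default is resolved at call time rather than in the signature so it isn't frozen at import):

```python
import logging
from typing import Optional

class Fetcher:
    def __init__(self, logger: Optional[logging.Logger] = None) -> None:
        # Caller can inject (DI) any logger; otherwise use a sensible default.
        self.logger = logger if logger is not None else logging.getLogger("fetcher")

    def fetch(self, url: str) -> None:
        # Per-function child logger, named "fetcher.fetch"; records still
        # propagate up through "fetcher" and its handlers.
        logger = self.logger.getChild("fetch")
        logger.debug("fetching %s", url)

Fetcher().fetch("https://example.com")
```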
You suggest only using one logger (or a small number) rather than __name__, but is it really a bad cost to have one per file? Using __name__ does have perks, such as "automatically" providing a nested structure for easy control without any actual major downside that I can see. It also avoids you ever having to actually maintain your loggers name, which is an incredibly minor thing, but I'm always a fan of removing stuff like that. If it's secretly a massive performance hit though? Then yeah, maybe I'll stop doing it! But great video as always! Side note: Your website looks kind of weird on an ultrawide. Not bad, just a little weird. The footer is much wider than the content.
Great questions! You are still welcome to use `__name__`, although it mostly only makes sense if you are defining your logger within an `__init__.py` or if your code is just a single module. For larger applications, your logging setup is probably in some kind of config or logging module, and it would be a bit weird to name your logger `mypackage.logging` or `mypackage.config` instead of `mypackage`. But for a single file it does make a lot of sense to use `__name__`. The memory hit is honestly not that bad, unless you are really constrained or have 1000s of files, although what gets you with `__name__` is that propagation can start to add up on all those middle loggers that do nothing.
Proper timestamps are probably the most underrated element of logging, until everyone realizes: oh, we are trying to debug a multi-threaded real-time service.
Thanks for putting this out. Python logging has to be one of my least favorite packages. It's overcomplex, explanations of it are invariably simplistic, and it is just amusing how even seasoned Python developers run into the weeds with it. We are however stuck with it - I remember swapping in an alternative and having Django 3.x choke because it was expecting some logging internals to be around. Some least-problematic guidance on how to use the sorry beast is most welcome.
@5:44 I think you misunderstand something or at least are being ambiguous in the following statement: "Once again, if a record is dropped by a handler it will continue moving on, to include propagating up to the parent. But if it's dropped by a Logger, then it stops and doesn't propagate." It would be more accurate to say that Loggers cannot "drop" messages, they only can "generate" or "not generate" them. In other words, a Logger may only drop messages it created: if a message survives its "source" Logger then it will be propagated up to the root (unless `propagate` is set to False). For example, if Root Logger has level `ERROR` and Logger A has level `DEBUG`, then debug messages originating at A will be handled by the Root Logger, despite it having a higher threshold. This is why you might want to add permissive-level handlers to restrictive-level loggers. Python's "Logging HOWTO" says: "In addition to any handlers directly associated with a logger, all handlers associated with all ancestors of the logger are called to dispatch the message (unless the propagate flag for a logger is set to a false value, at which point the passing to ancestor handlers stops)."
Thanks for the great explanation! However, it's not quite clear to me why you shouldn't use the root logger directly 6:49 when you're not using some sort of logger tree. Could anyone explain this?
I always disliked how much boilerplate you need if all you're interested in is logging to stdout but selecting different levels for different paths. In an ideal world there'd be some middle ground between the bare basicConfig and the super detailed dictConfig.
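For what it's worth, there is a rough middle ground, sketched below: basicConfig for the single stdout handler, then set individual logger levels imperatively (the module names here are made up).

```python
import logging
import sys

# One stdout handler with one format for everything...
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

# ...then dial individual subtrees up or down, no dictConfig needed.
logging.getLogger("noisy.thirdparty").setLevel(logging.WARNING)
logging.getLogger("myapp.payments").setLevel(logging.DEBUG)
```

This works because propagated records are only checked against handler levels, so the DEBUG subtree still reaches the root handler.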
I'm there with you. Thankfully logging is typically a once-per-project setup thing that doesn't change much, so it gets lumped in with all my other one-time setups.
Or, if you have multiple projects, create your own logging lib that pre-configures what you need. That's what we did with structlog, so now all our applications output ECS-compatible JSON logging. ECS is the Elastic Common Schema, used across the Elastic (ELK) stack, enabling us to find our logs in Kibana, search for keys or values, and create dashboards.
Thanks sincerely for your brilliant video. One question: PyCharm seems not to have good support for color display of the different levels from the logging module.
Jeans down, this is the best python logging explanation I've ever seen. Brilliant work.
I will start saying jeans down instead of hands down from this day on.
It is, jeans down, the better of the two!
Jeans Down?!
pants down
Agreed, we all say "jeans down" now.
the best ever
That "include the timezone, *trust me*" has some history... xD
"If you trust your users to do that kind of thing... *silent head shake*"
Working with time is the worst :D *trust me*.
Having users in another timezone... I'd rather write them a whole separate app :D
Don’t use timezones, put everything in UTC
While I never had this issue before, the first time I hand-rolled a logging system for a small service _I was later glad I had the foresight to include the offset from UTC._
@@hemerythrintrust users? Buddy, I don't trust myself.
I’ve always found Python logging super opaque. After hours of reading and playing around, I still never really understood it. Somehow, in 20 minutes, you covered way more than I even knew was possible and it all makes so much sense now! Thank you!
PS: Still waiting on your C++ value category video 🤞
That's why I love the old-style programming youtubers, unlike the new programming influencers who react to articles and tech news.
Mad respect for you James. ❤
My only hesitation with value categories video is that I have to choose whether it will be slightly incorrect or slightly incomprehensible.
@@mCoding I'll accept either 😂. It's just another topic I struggle with, and you always have a way for making things understandable 🙏
@@maxrinehart4177 Or straight up say nonsense and are closer to instagram influencers than actual devs, like a certain "blond" one. The problem is that there are some juniors listening to them.
@@heroe1486 It's a disease that spilled over from instathots to wannabe programmers. The problem is the huge number of followers who watch this nonsense that brings no value; it's a crime that they are so popular while the actually competent tech youtubers are not. Not only the blonde one, but also that other guy called prime - he deleted my comments and banned me for saying his react videos bring no actual value.
This is my favorite format of your videos so far. No memey images, just solid information with dry humor and guiding text peppered helpfully throughout. Great content as usual.
This channel is dangerously underrated
Expert-level developers don't often create a lot of YouTube content. It's like Stack Overflow, where most solutions aren't production-level good.
18:04 Leaving a note here that QueueHandler config is only available in Python 3.12, according to the logging.config docs
I realised my "meh" feeling about dictConfig was due to ignorance and laziness! Thanks for a great, professional tutorial!
The "complete logging picture" @02:44 is what I've been missing. Thank you!
This breakdown on the logger is truly amazing.
One comment regarding the queue handler implementation 18:11 . This implementation only works, as is, on Python 3.12 as prior versions don't have that available for dictConfig.
Thank you, and you are right. Users not yet on the current stable Python (3.12) may want to subclass QueueHandler, both to create the QueueListener as well as to start its thread automatically.
@@mCoding I was looking forward to using this, but I am stuck at 3.9 for reasons. Now to figure out how to do this.
@@jrat104 you could do something like:
```python
import logging.config
import logging.handlers


class BackportedQueueHandler(logging.handlers.QueueHandler):
    def __init__(
        self,
        handlers: logging.config.ConvertingList[str],
        queue=None,
        respect_handler_level: bool = False,
    ) -> None:
        if queue is None:
            import queue as q

            queue = q.Queue()
        assert isinstance(handlers, logging.config.ConvertingList)
        self.listener = logging.handlers.QueueListener(
            queue,
            # NOTE: `logging.config.ConvertingList` converts elements by
            # calling `convert_with_key` when accessed via `__getitem__`,
            # so indexing yields the actual handler objects.
            *[handlers[i] for i in range(len(handlers))],
            respect_handler_level=respect_handler_level,
        )
        super().__init__(queue)
        # Don't forget to start (and eventually stop) the listener thread,
        # e.g. self.listener.start() here plus atexit.register(self.listener.stop)
```
Wow... every python video from you is pure Gold. I'm only aware of one "recent" async video, but maybe you could do a more detailed video on async and await in python. I and certainly thousands of others would appreciate this.
Keep up the awesome work!
Thank you for this tutorial. Nothing new for me, but finally a nice comprehensive and modern approach.
I totally agree with the statement "the tutorials are from people that are not using it". Sadly that's true for many other tutorials.
The reason I love this channel and have learned a lot from it is that James is one of the few programming content creators who have real experience. Him saying "You know what works best for you" is an ironclad example of that.
Great video on standard library logging! One thing to add: I can recommend a library called structlog (for structured logs, as the name suggests), which is very useful especially for bigger applications where we want to aggregate, store, parse, and analyze logs at the system level. Structured logs are way easier to parse, and thus analyze, so I think it's nice to have this library in your Python dev toolkit.
As someone who's been using print for logging, I will have to watch this a couple more times to fully understand everything 😅 My first impression is that this seems very useful, and that although it is comprehensive and complex to set up the first time, some parts should be reusable, at least for smaller projects. Most of my projects are about 500 lines of code, so this would be a substantial portion of the work to set up the first time 😅 Thank you for setting me on the path of proper logging 🎉
I've literally been trying to understand python logging for the past few days. This is easily the best video I've come across!
As someone who owns a library, and rolled my own logger (in JavaScript), I appreciate the deep dive in proper logger design and methodology. I may not be able to use the examples here, but the lesson is well-received. As a self-taught developer, I missed a lot of the history in why things are written the way they are.
You own a library? Nice!
Do you have one of those rolling ladders to get to the high-up books?
This is a great condensed tutorial, respect! But I do need to add that I don't really agree with the "one logger only" rule. In a lot of cases I have learned it's very valuable to separate logging of user activity from technical logging. In a lot of applications you want to log user activities for legal reasons and have the technical stuff for development reasons. I'm a big proponent of separating the two. In most applications I've worked on we have a technical logger and an audit logger. They both log VERY different stuff and have different handlers and formatters. Technical stuff goes into files and sometimes email inboxes, while the audit logs go into an Elasticsearch cluster through a Kafka bus just to make sure we have the persistence.
This is a great deal better than other Python logging videos and tutorials that I have seen. They might cover the basics, but miss out pretty much entirely on the information you need to know when you really want to get into logging with packages, subpackages, and user libraries.
Thanks.
I appreciate that you emphasized that the built-in logging library is the standard. Since there are some other competitors out there, it can be a bit confusing trying to evaluate whether to use the built-in or not.
Thank you.
Sometimes I don't understand a word you say, not because of your videos, but because of my current level of knowledge. I still enjoy your videos 😃
Finally I understand Python logging. I really hope one day they actually create a better logging package. Every project needs logging and it should not be this difficult and outdated, especially when it comes to Python.
Have you seen the most popular packages on pypi? It is full of libraries built with bad decisions, bad API design, bad programming practices, bad documentation. And recently it has gotten even worse, because now all code is either type hinted or not, and is written for asyncio or not. And these new async packages are going through all the same problems that were solved decades ago for the older packages. It’s a huge mess.
It's Java-inspired. Difficulty, unobviousness and fussiness are a feature, not a bug.
Animations are nice, but knowing that it's his hand dragging/dropping the graphics makes this intuitive to follow. Great subtlety of presentation.
This is incredibly valuable information. Since this is only applicable to applications and not libraries, it is not openly discussed anywhere nor available on GitHub. Thanks for sharing.
For anyone interested in quick and dirty logging for debug purposes, there's also watchpoints, which lets you monitor any variable for changes.
keep talking!
@@wchen2340 rich.inspect()
@@wchen2340 Use rich inspect() instead of print().
If you are experiencing ValueError: Unable to configure handler 'file' after adding the file handler, ensure that the log folder exists in your current directory. Great video btw
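Put differently, logging's FileHandler won't create missing parent directories, so create them before calling dictConfig (the directory name below is whatever your config's `filename` points at):

```python
from pathlib import Path

# Must exist before dictConfig tries to open e.g. "logs/app.log.jsonl"
Path("logs").mkdir(parents=True, exist_ok=True)
```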
I used a mix of print and logging in my application just to take a quick look at what happened at a glance. I wasn't aware of JSON Lines or the queue handler. I had switched from the standard json library to orjson to get a considerable speedup when handling a mixture of celery task UUIDs, objects, dates in different formats, etc.
Thanks for sharing
Very, very good. The explanation of the mental model is gold! Finally a video that makes it simpler by showing the whole picture and what to skip, instead of just showing the bare minimum, which would leave you too ignorant to see both the forest and the trees.
When I started my current professional work, I looked into Python logging, found many people frustrated, and wrote the logger myself. Normally I am not a fan of the DIY mentality in software, but I think I saved time this way and have what I need.
When I started working with the logging module it seemed difficult to understand. But actually, after 2-3 hours I admired how powerful yet simple it turns out to be. Really, you plug it in and it just works.
A video like this was much needed on YouTube, thanks so much! I've learned some new things I can now work on
I cannot overstate the value of this video. I was postponing a deep dive into logging; I wish I had had this video sooner!
This is a genius explanation of how to use the rarely used logging module
Epic walkthrough. The standard logger has always been a bit of a black box and logs have always been a bit of an issue as my apps have grown. This is exactly what I needed.
My team and I have recently been working on getting multiple loggers set up and deleting the old logs locally after some time. I'd say this is incredible timing, but I'd be lying; having this massively useful video would have been even better last week! But still, thanks so much for sharing your expertise!
Sounds like you need to hire us to do a code review! You're very welcome regardless.
Code & architecture review I'd say! I'll definitely be seeing whether we can
Just introduced myself to the logging module a week back, nice coincidence you making this video
Loguru is straight up the best
I recently found this as well. Great library.
Also thread safe. There isnt any reason why you shouldn't use loguru everywhere
lol my comment got deleted but this thread reads like those crypto scams😂
@@karserasl Actually there is: loguru does not support multiple loggers. There is only one.
Not me watching the video and configuring my library logging bit by bit and then seeing the note at the end for libraries LMAO. Oh well, time to delete all that code. Great video though, definitely will use these tips in my Python apps.
Whoops lol. 🤭
I love you. I didn't google for a logging tutorial; I just watch your videos whenever there's a new one on my main page
This is amazing content. Very concise and thorough, and you cover the concept in much greater depth than any other sources I've come across.
This is very nice. Extremely useful for out-of-the-box best practices for Python logging. I personally use "pip install loguru" for any projects I set up for logging, since it handles a lot of things automatically and removes so much boilerplate.
Added to my reference library. This is invaluable. Thank you @mCoding.
Yesterday I finished a week-long logging project. If only I had watched your video before that...
Thx for a very good explanation of python logging. I've done this whole thing so many times at different companies, in different ways. The queue handler was new to me however; I guess performance wasn't that important.
One thing I would add is that while the JSON format is great for parsing programmatically, I wouldn't say it's as human readable as colorized text. That is what I would use locally when developing, and then use JSON when deployed.
I like to also set up probabilistic trace logging, where some (usually small) portion of the work is logged end to end, with the portion changeable at runtime. If something doesn't seem right, being able to trace 1 of 10000 events can be really handy. That takes a bit more setup in the code, but it's super useful.
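A minimal sketch of the per-record version of this idea (true end-to-end tracing would instead make the keep/drop decision once per event and tag all of that event's records, but the runtime-tunable knob looks the same; all names below are made up):

```python
import logging
import random

class SampleFilter(logging.Filter):
    """Let through roughly `rate` of the records it sees."""

    def __init__(self, rate: float = 1 / 10_000) -> None:
        super().__init__()
        self.rate = rate  # a plain attribute, so it is changeable at runtime

    def filter(self, record: logging.LogRecord) -> bool:
        return random.random() < self.rate

trace_handler = logging.StreamHandler()
sampler = SampleFilter()
trace_handler.addFilter(sampler)

sampler.rate = 1.0  # e.g. temporarily trace everything during an incident
```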
Great video, I've been using logging for a while, but I might tune my setup with what I learned just now!
I agree that yaml is very error prone (look up "Norway problem"), but I've also come to the conclusion that JSON is good for data exchange, but not for config files.
For config, I've started using TOML. The syntax is a mix of JSON and .ini, but it's far less error prone than yaml, it has trailing commas, comments and can be read and written programmatically without losing formatting.
Like yaml, it wasn't originally included in the stdlib, but Python 3.11+ now ships a (read-only) parser, tomllib.
There's also JSON5 as an extension for human-writable JSON that allows such things as trailing commas in objects and lists, comments, single quote strings, hexadecimal numbers etc. For example, it's used for the configuration file of the Windows 11 terminal app.
The JSON module in the Python library is only meant for serialization, however.
This is, by far, the most clear and helpful explanation of Python logging I've ever seen. Thank you!
Really great video, thank you!
Around 20:00, it's worth mentioning that the default behavior, if I'm not mistaken, is for debug/info logs to go to stderr as well, which I've always found unintuitive.
Yeah, I've found that to be an unintuitive default as well. It goes against the principle of not using stderr as just an extra stream. It should really be just for errors as intended.
Pure gold! I've been wanting to master the logging concepts for a long time now!
The acting on this video is impeccable
Thanks. One thing you could have mentioned is that a library developer could create a filter for their own logger(s).
You really have great content, and the detail you go into when explaining a concept is really worthy of appreciation. Thanks for these videos; they are really informative and helpful.
structured logging gold. Thank you, Mr. Murphy!
You had me cracking up. This is very relevant in what I’m working on right now. Gonna create a new boilerplate
Dude... I've been using the logger for... _years_ , and I had no notion of dictConfig's existence! Fair enough, I'm a DevOps/SysOps guy, so I pretty much only use the same handlers, stdout, stderr, and systemd-journald, which makes things easy enough to knock out, but still.
Thanks! 🙂
One of the best Python channels on RUclips, great tutorial
I remember having to explicitly murder a library's own logging config, because they were logging as plain text and we wanted everything as JSON. It was very annoying until I did some involved introspection on the logging system.
Haha, what a vivid depiction, I've felt that pain, too.
Best video on logging I’ve seen yet. Learned a lot, thanks!
Great video! Could you do a video on "try/except" blocks? I believe it's blurry for a lot of people who aren't working in the industry as far as HOW to use them in a coding project. For example, do you apply them to all of your code logic, or conservatively? An in-depth explanation with examples of how exactly you'd apply them when writing code in industry, and what your thinking framework is, would be great. Thank you very much for the quality content.
This is some invaluable information. Thanks for the video.
Glad you enjoyed it!
One of the most basic concepts of the Python logging module that people don't understand is that configuration of logging must be done by the end user, not by frameworks or libraries; all they are supposed to do is import logging, get a logger, and make logging calls.
Python logging is both brilliant and frighteningly complex.
This video is a gift from heaven 🎉 Thanks a lot for making such high-quality content, helping devs around the world write better code.
I appreciate your very kind praise 😅
Great and concise content, did not know about dictConfig yet. Thanks!
Around 08:40 you mention the weird format string syntax. As an alternative, there is the style keyword, which accepts "{"; then the format string can look much more like f-strings.
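For reference, a small sketch of the style keyword in action:

```python
import logging

# style="{" switches the format string from %-style to str.format-style fields.
formatter = logging.Formatter(
    fmt="{asctime} [{levelname}] {name}: {message}",
    style="{",
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("hello")
```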
Great tip! I do tend to stick to the json approach, though! 😉
Painful memories of when I wrote an API wrapper and decided to log every POST, PUT and DELETE call by... hand-rolling something that would append objects to a .json file. I'm not sure what I was thinking. I'm sure I'd heard about structured logs before embarking on that...
Nice tips on the queue handlers! Btw, be careful with thread-safe logging if you are using multiple threads inside your application.
Woah thanks for this amazing video. I never used to know what I was doing with python logging
Yeah, as many mentioned, I just use loguru too. It makes things really simple without all the boilerplate, which I couldn't remember even after years of using logging.
Excellent logging summary/best practices! Thank you so much!
Best python logging video ever.
One important thing you forgot to mention:
logging.getLogger(logger_name) returns a singleton for each logger_name.
The logic is kind of like:
```python
if logger_name not in dict_of_all_existing_loggers:
    dict_of_all_existing_loggers[logger_name] = LoggerClass()
return dict_of_all_existing_loggers[logger_name]
```
Or in other words: it's memoized.
This means, if you use multithreading or multiprocessing, you don't need to pass the logger instance to the worker threads/processes; rather, for each worker, have them fetch the singleton object by invoking logging.getLogger(logger_name).
I salute you mister for enlightening us with this video. Thank you very much!
Thank you, I finally understand it now. I have been struggling with Python logging since the start of my CS journey 😂
Your video did set me on the right course for logging, but it is far cleverer than what you showed. PS: I do watch your videos on most things, as they are extremely helpful.
You always manage to deep dive into the simplest thing and, voilà, I feel like I'm learning Python all over again.
Great explanation!
I felt the same way when python 3 came out.... Mostly just the part about learning python all over again
Typically, if I have a major class that might be used from different contexts, I'll include a formal `logger` kwarg init parameter defaulted to something like logging.getLogger("my_class"), to enable a caller to DI some other logger if they want. Then, in each major function I'll spawn a child as `logger = self.logger.getChild("my_function")`.
(This is similar to my philosophy about always passing around a `now` kwarg any time I'm writing time-dependent code, to avoid whole classes of bugs linked to conditions unexpectedly becoming true in the middle of a function by upholding a sort of "fiction" that the entire code execution between two sleeps happens in a single instant.)
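That dependency-injection pattern might look something like this (the class and logger names are illustrative):

```python
import logging
from typing import Optional

class Worker:
    def __init__(self, logger: Optional[logging.Logger] = None) -> None:
        # Caller may inject its own logger; otherwise fall back to a default.
        self.logger = logger if logger is not None else logging.getLogger("worker")

    def process(self) -> None:
        # Child logger inherits handlers/levels via propagation to its parent.
        logger = self.logger.getChild("process")
        logger.info("processing")

Worker().process()                                   # logs as "worker.process"
Worker(logging.getLogger("myapp.worker")).process()  # logs as "myapp.worker.process"
```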
Your video taught me a lot about logging in Python 😊. Thank you for this video!!
Will you ever make a video on how to do logging in a library?
You suggest only using one logger (or a small number) rather than __name__, but is it really a bad cost to have one per file? Using __name__ does have perks, such as "automatically" providing a nested structure for easy control without any actual major downside that I can see. It also avoids you ever having to actually maintain your loggers name, which is an incredibly minor thing, but I'm always a fan of removing stuff like that.
If it's secretly a massive performance hit though? Then yeah, maybe I'll stop doing it! But great video as always!
Side note: Your website looks kind of weird on an ultrawide. Not bad, just a little weird. The footer is much wider than the content.
Great questions! You are still welcome to use `__name__`, although it mostly only makes sense if you are defining your logger within an `__init__.py` or if your code is just a single module. For larger applications, your logging setup is probably in some kind of config or logging module, and it would be a bit weird to name your logger `mypackage.logging` or `mypackage.config` instead of `mypackage`. But for a single file it does make a lot of sense to use `__name__`. The memory hit is honestly not that bad, unless you are really constrained or have 1000s of files, although what gets you with `__name__` is that propagation can start to add up on all those middle loggers that do nothing.
I have learned a ton from this video and channel. Such high quality! Thank you!
You're welcome!
Wow, this is super useful. I was looking for something like this for my setup
Amazing, comprehensive, and clear. Thank you.
Superb 👍 This is pure value addition, a class apart from the countless videos endlessly showing the ABCs of Python.
Proper timestamps are probably the most underrated element of logging, until everyone realizes "oh, we are trying to debug a multi-threaded real-time service."
Thanks for putting this out. Python logging has to be one of my least favorite packages. It's overcomplex, explanations of it are invariably simplistic, and it is just amusing how even seasoned Python developers run into the weeds with it. We are, however, stuck with it - I remember swapping in an alternative and having Django 3.x choke because it was expecting some logging internals to be around. Some least-problematic guidance on how to use the sorry beast is most welcome.
Very thorough! Thank you for putting this together! Well done!
Wowwwwww, I'm immediately cloning and implementing in my app. THX🙏
What are you really cloning?
Hi! Fantastic video.
What is your opinion on structlog, and how it could help avoid writing our own formatters?
Very comprehensive tutorial, thanks for sharing.
That deadpan headshake at 10:31 was great 😂
Is there anything that would be done differently for logging with pytest?
Wow, this was an amazing tutorial... Thank you so much for making this!
And thank you for watching!
@5:44 I think you misunderstand something or at least are being ambiguous in the following statement: "Once again, if a record is dropped by a handler it will continue moving on, to include propagating up to the parent. But if it's dropped by a Logger, then it stops and doesn't propagate." It would be more accurate to say that Loggers cannot "drop" messages, they only can "generate" or "not generate" them. In other words, a Logger may only drop messages it created: if a message survives its "source" Logger then it will be propagated up to the root (unless `propagate` is set to False). For example, if Root Logger has level `ERROR` and Logger A has level `DEBUG`, then debug messages originating at A will be handled by the Root Logger, despite it having a higher threshold. This is why you might want to add permissive-level handlers to restrictive-level loggers. Python's "Logging HOWTO" says: "In addition to any handlers directly associated with a logger, all handlers associated with all ancestors of the logger are called to dispatch the message (unless the propagate flag for a logger is set to a false value, at which point the passing to ancestor handlers stops)."
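The distinction is easy to demonstrate. In this sketch, a list-collecting handler stands in for a real one purely for illustration:

```python
import logging

root = logging.getLogger()
root.setLevel(logging.ERROR)

a = logging.getLogger("a")
a.setLevel(logging.DEBUG)

records = []

class ListHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        records.append(record)

root.addHandler(ListHandler(level=logging.DEBUG))

a.debug("from a")        # passes a's DEBUG check, then reaches root's HANDLER,
                         # whose DEBUG level lets it through despite root's ERROR level
root.debug("from root")  # dropped at the root LOGGER's own ERROR check

# records now contains only the message from logger a
```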
Thanks for the great explanation! However, it's not quite clear to me why you shouldn't use the root logger directly 6:49 when you're not using some sort of logger tree. Could anyone explain this?
Just discovering this, I learnt a lot from your concise video.
@mCoding What about using loguru instead of the default logging ?
Great quality videos, as always :) Thank you for the learning (and the entertainment, to be fair)
Hahaha, I don't know why, but the pause and face at 1:57 just cracked me up.
What about 10:30 :D
We were already doing it this way, except for the non-blocking queue part. Thanks!
I always disliked how much boilerplate you need if all you're interested in is logging to stdout but selecting different levels for different paths. In an ideal world there'd be some middle ground between just basicConfig and a super detailed dictConfig.
I'm there with you. Thankfully logging is typically a once-per-project setup thing that doesn't change much, so it gets lumped in with all my other one-time setups.
Or, if you have multiple projects, create your own logging lib that pre-configures what you need.
That's what we did with structlog, so now all our applications output ECS-compatible JSON logging.
ECS is the Elastic Common Schema, used across the Elastic (ELK) stack, enabling us to find our logs in Kibana, search for keys or values, and create dashboards.
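On the boilerplate point upthread: for the common "everything to stdout, different levels per path" case, the dictConfig can actually stay fairly compact (logger names here are hypothetical):

```python
import logging.config

logging.config.dictConfig({
    "version": 1,
    "handlers": {
        "stdout": {"class": "logging.StreamHandler", "stream": "ext://sys.stdout"},
    },
    "root": {"level": "WARNING", "handlers": ["stdout"]},
    "loggers": {
        "myapp.db": {"level": "DEBUG"},    # chatty path
        "myapp.http": {"level": "ERROR"},  # quiet path
    },
})
```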
Best. Explanation. Ever.
Thank you.
You're very welcome!
Thanks sincerely for your brilliant video. I have a question: PyCharm doesn't seem to have good support for colored display of the different levels from the logging module.
What's your opinion on loguru vs stdlib logging?