How do I log a Python error with debug information?

I am printing Python exception messages to a log file with logging.error:

import logging
try:
    1/0
except ZeroDivisionError as e:
    logging.error(e)  # ERROR:root:division by zero

Is it possible to print more detailed information about the exception and the code that generated it than just the exception string? Things like line numbers or stack traces would be great.


asked Mar 4, 2011 at 9:21

probably at the beach

logger.exception will output a stack trace alongside the error message.

For example:

import logging
try:
    1/0
except ZeroDivisionError:
    logging.exception("message")

Output:

ERROR:root:message
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ZeroDivisionError: integer division or modulo by zero

As @Paulo Cheque notes: "Be aware that in Python 3 you must call the logging.exception method just inside the except part. If you call this method in an arbitrary place you may get a bizarre exception. The docs alert about that."
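A minimal illustration of that caveat (behaviour as I recall it in Python 3; the exact trailing text may vary):

import logging

try:
    1 / 0
except ZeroDivisionError:
    logging.exception("inside the except block")  # message plus the full traceback

# Outside any handler there is no active exception to report, so the record
# ends with something like "NoneType: None" instead of a traceback.
logging.exception("outside the except block")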


answered Mar 4, 2011 at 9:25

SiggyF

Using the exc_info option may be better, since it lets you choose the log level (if you use exception, records are always logged at the ERROR level):

try:
    # do something here
except Exception as e:
    logging.critical(e, exc_info=True)  # log exception info at CRITICAL log level


answered Apr 10, 2015 at 8:01

flycee

One nice thing about logging.exception that SiggyF’s answer doesn’t show is that you can pass in an arbitrary message, and logging will still show the full traceback with all the exception details:

import logging
try:
    1/0
except ZeroDivisionError:
    logging.exception("Deliberate divide by zero traceback")

With the default (in recent versions) logging behaviour of just printing errors to sys.stderr, it looks like this:

>>> import logging
>>> try:
...     1/0
... except ZeroDivisionError:
...     logging.exception("Deliberate divide by zero traceback")
... 
ERROR:root:Deliberate divide by zero traceback
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ZeroDivisionError: integer division or modulo by zero


answered Jul 1, 2013 at 4:34

ncoghlan

Quoting:

What if your application does logging some other way – not using the logging module?

In that case, the traceback module can be used instead.

import logging
import sys
import traceback

exception_logger = logging.getLogger(__name__)

def log_traceback(ex, ex_traceback=None):
    if ex_traceback is None:
        ex_traceback = ex.__traceback__
    tb_lines = [ line.rstrip('\n') for line in
                 traceback.format_exception(ex.__class__, ex, ex_traceback)]
    exception_logger.error('\n'.join(tb_lines))
  • Use it in Python 2:

    try:
        # your function call is here
    except Exception as ex:
        _, _, ex_traceback = sys.exc_info()
        log_traceback(ex, ex_traceback)
    
  • Use it in Python 3:

    try:
        x = get_number()
    except Exception as ex:
        log_traceback(ex)
    

answered Oct 19, 2015 at 10:21

zangw

You can log the stack trace without an exception.

https://docs.python.org/3/library/logging.html#logging.Logger.debug

The second optional keyword argument is stack_info, which defaults to False. If true, stack information is added to the logging message, including the actual logging call. Note that this is not the same stack information as that displayed through specifying exc_info: The former is stack frames from the bottom of the stack up to the logging call in the current thread, whereas the latter is information about stack frames which have been unwound, following an exception, while searching for exception handlers.

Example:

>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.getLogger().info('This prints the stack', stack_info=True)
INFO:root:This prints the stack
Stack (most recent call last):
  File "<stdin>", line 1, in <module>
>>>

answered Dec 3, 2019 at 10:01

Baczek

If you use plain logs, all your log records should follow this rule: one record = one line. Following this rule, you can use grep and other tools to process your log files.

But traceback information is multi-line, so my answer is an extended version of the solution proposed by zangw above in this thread. The problem is that traceback lines can have '\n' inside, so we need to do some extra work to get rid of those line endings:

import logging
import traceback


logger = logging.getLogger('your_logger_here')

def log_app_error(e: BaseException, level=logging.ERROR) -> None:
    e_traceback = traceback.format_exception(e.__class__, e, e.__traceback__)
    traceback_lines = []
    for line in [line.rstrip('\n') for line in e_traceback]:
        traceback_lines.extend(line.splitlines())
    logger.log(level, str(traceback_lines))

Later, when you are analyzing your logs, you can copy/paste the required traceback lines from the log file and do this:

ex_traceback = ['line 1', 'line 2', ...]
for line in ex_traceback:
    print(line)

Profit!


answered Nov 4, 2016 at 17:32

doomatel

This answer builds on the excellent ones above.

In most applications, you won't be calling logging.exception(e) directly. Most likely you have defined a custom logger specific to your application or module, like this:

# Set the name of the app or module
my_logger = logging.getLogger('NEM Sequencer')
# Set the log level
my_logger.setLevel(logging.INFO)

# Let's say we want to be fancy and log to a graylog2 log server
graylog_handler = graypy.GELFHandler('some_server_ip', 12201)
graylog_handler.setLevel(logging.INFO)
my_logger.addHandler(graylog_handler)

In this case, just call exception(e) on that logger:

try:
    1/0
except ZeroDivisionError as e:
    my_logger.exception(e)


answered Apr 16, 2015 at 13:38

Will

If "debugging information" means the values that were present when the exception was raised, then logging.exception(...) won't help. You'll need a tool that automatically logs all variable values along with the traceback lines.

Out of the box, you'll get a log like:

2020-03-30 18:24:31 main ERROR   File "./temp.py", line 13, in get_ratio
2020-03-30 18:24:31 main ERROR     return height / width
2020-03-30 18:24:31 main ERROR       height = 300
2020-03-30 18:24:31 main ERROR       width = 0
2020-03-30 18:24:31 main ERROR builtins.ZeroDivisionError: division by zero

Have a look at some pypi tools, I’d name:

  • tbvaccine
  • traceback-with-variables
  • better-exceptions

Some of them give you pretty crash messages, and you might find more such tools on PyPI.
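If you prefer to stay with the standard library, here is a rough sketch (the helper name and output layout are my own, not taken from any of the packages above) that walks the traceback and logs each frame's local variables:

import logging
import traceback

def log_exception_with_locals(exc: BaseException, logger=logging.getLogger(__name__)) -> None:
    """Log the traceback plus the local variables of every frame in it."""
    tb = exc.__traceback__
    lines = traceback.format_exception(type(exc), exc, tb)
    while tb is not None:
        frame = tb.tb_frame
        lines.append(f'  locals in {frame.f_code.co_name}:\n')
        for name, value in frame.f_locals.items():
            lines.append(f'    {name} = {value!r}\n')
        tb = tb.tb_next
    logger.error(''.join(lines))

try:
    height, width = 300, 0
    ratio = height / width
except ZeroDivisionError as e:
    log_exception_with_locals(e)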

answered Nov 4, 2020 at 22:22

Kroshka Kartoshka

A little bit of decorator treatment (very loosely inspired by the Maybe monad and lifting). You can safely remove Python 3.6 type annotations and use an older message formatting style.

fallible.py

from functools import wraps
from typing import Callable, TypeVar, Optional
import logging


A = TypeVar('A')


def fallible(*exceptions, logger=None) \
        -> Callable[[Callable[..., A]], Callable[..., Optional[A]]]:
    """
    :param exceptions: a list of exceptions to catch
    :param logger: pass a custom logger; None means the default logger, 
                   False disables logging altogether.
    """
    def fwrap(f: Callable[..., A]) -> Callable[..., Optional[A]]:

        @wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except exceptions:
                message = f'called {f} with *args={args} and **kwargs={kwargs}'
                if logger:
                    logger.exception(message)
                if logger is None:
                    logging.exception(message)
                return None

        return wrapped

    return fwrap

Demo:

In [1]: from fallible import fallible

In [2]: @fallible(ArithmeticError)
    ...: def div(a, b):
    ...:     return a / b
    ...: 
    ...: 

In [3]: div(1, 2)
Out[3]: 0.5

In [4]: res = div(1, 0)
ERROR:root:called <function div at 0x10d3c6ae8> with *args=(1, 0) and **kwargs={}
Traceback (most recent call last):
  File "/Users/user/fallible.py", line 17, in wrapped
    return f(*args, **kwargs)
  File "<ipython-input-17-e056bd886b5c>", line 3, in div
    return a / b

In [5]: repr(res)
'None'

You can also modify this solution to return something a bit more meaningful than None from the except part (or even make the solution generic, by specifying this return value in fallible's arguments), as sketched below.
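For instance, a hedged variation (the default parameter is my own addition, not part of the original decorator) that returns a caller-chosen value instead of None:

from functools import wraps
import logging


def fallible(*exceptions, logger=None, default=None):
    """Like the decorator above, but returns `default` when a listed exception is caught."""
    def fwrap(f):
        @wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except exceptions:
                message = f'called {f} with *args={args} and **kwargs={kwargs}'
                if logger:
                    logger.exception(message)
                elif logger is None:
                    logging.exception(message)
                return default
        return wrapped
    return fwrap


@fallible(ZeroDivisionError, default=float('nan'))
def div(a, b):
    return a / b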

answered Jul 12, 2018 at 16:55

Eli Korvigo

In your logging module (if it is a custom module), just enable stack_info:

api_logger.exceptionLog("*Input your Custom error message*",stack_info=True)

answered May 20, 2020 at 10:40

Dunggeon

If you look at this code example (which works for Python 2 and 3), you'll see the function definition below, which can extract

  • method
  • line number
  • code context
  • file path

for an entire stack trace, whether or not there has been an exception:

def sentry_friendly_trace(get_last_exception=True):
    try:
        current_call = list(map(frame_trans, traceback.extract_stack()))
        alert_frame = current_call[-4]
        before_call = current_call[:-4]

        err_type, err, tb = sys.exc_info() if get_last_exception else (None, None, None)
        after_call = [alert_frame] if err_type is None else extract_all_sentry_frames_from_exception(tb)

        return before_call + after_call, err, alert_frame
    except:
        return None, None, None

Of course, this function depends on the entire gist linked above, and in particular on extract_all_sentry_frames_from_exception() and frame_trans(), but the exception-info extraction comes to roughly 60 lines.

Hope that helps!

answered Jun 28, 2020 at 3:26

Zephaniah Grunschlag

I wrap all my functions with a custom-designed logger:

import json
import timeit
import traceback
import sys
import unidecode

def main_writer(f,argument):
  try:
    f.write(str(argument))
  except UnicodeEncodeError:
    f.write(unidecode.unidecode(argument))


def logger(*argv,logfile="log.txt",singleLine = False):
  """
  Writes Logs to LogFile
  """
  with open(logfile, 'a+') as f:
    for arg in argv:
      if arg == "{}":
        continue
      if type(arg) == dict and len(arg)!=0:
        json_object = json.dumps(arg, indent=4, default=str)
        f.write(str(json_object))
        f.flush()
        """
        for key,val in arg.items():
          f.write(str(key) + " : "+ str(val))
          f.flush()
        """
      elif type(arg) == list and len(arg)!=0:
        for each in arg:
          main_writer(f,each)
          f.write("n")
          f.flush()
      else:
        main_writer(f,arg)
        f.flush()
      if singleLine==False:
        f.write("\n")
    if singleLine==True:
      f.write("\n")

def tryFunc(func, func_name=None, *args, **kwargs):
  """
  Time for Successful Runs
  Exception Traceback for Unsuccessful Runs
  """
  stack = traceback.extract_stack()
  filename, codeline, funcName, text = stack[-2]
  func_name = func.__name__ if func_name is None else func_name # sys._getframe().f_code.co_name # func.__name__
  start = timeit.default_timer()
  x = None
  try:
    x = func(*args, **kwargs)
    stop = timeit.default_timer()
    # logger("Time to Run {} : {}".format(func_name, stop - start))
  except Exception as e:
    logger("Exception Occurred for {} :".format(func_name))
    logger("Basic Error Info :",e)
    logger("Full Error TraceBack :")
    # logger(e.message, e.args)
    logger(traceback.format_exc())
  return x

def bad_func():
  return 'a'+ 7

if __name__ == '__main__':
    logger(234)
    logger([1,2,3])
    logger(['a','b','c'])
    logger({'a':7,'b':8,'c':9})
    tryFunc(bad_func)

answered Oct 13, 2021 at 9:17

Farhan Hai Khan

My approach was to create a context manager that logs and re-raises exceptions:

import logging
from contextlib import AbstractContextManager


class LogError(AbstractContextManager):

    def __init__(self, logger=None):
        self.logger = logger.name if isinstance(logger, logging.Logger) else logger

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is not None:
            logging.getLogger(self.logger).exception(exc_value)


with LogError():
    1/0

You can either pass a logger name or a logger instance to LogError(). By default it will use the base logger (by passing None to logging.getLogger).
One could also add a switch for re-raising the error or just logging it, as sketched below.
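A sketch of that switch (the reraise flag is my own naming; returning True from __exit__ is what suppresses the exception):

import logging
from contextlib import AbstractContextManager


class LogError(AbstractContextManager):

    def __init__(self, logger=None, reraise=True):
        self.logger = logger.name if isinstance(logger, logging.Logger) else logger
        self.reraise = reraise

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is not None:
            logging.getLogger(self.logger).exception(exc_value)
        # Returning True suppresses the exception, False lets it propagate
        return not self.reraise and exc_value is not None


with LogError(reraise=False):
    1/0
print("still running")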

answered Aug 23, 2022 at 12:01

MuellerSeb

If you can cope with the extra dependency, use twisted.log: you don't have to explicitly log errors, and it also writes the entire traceback and a timestamp to the file or stream.
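A minimal sketch, assuming Twisted is installed and using the classic twisted.python.log API as I remember it (the newer twisted.logger API differs):

import sys
from twisted.python import log

log.startLogging(sys.stdout)  # or an open file object

try:
    1 / 0
except ZeroDivisionError:
    log.err()  # logs the current exception with its traceback and a timestamp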


answered Mar 4, 2011 at 9:26

Jakob Bowyer

A clean way to do it is to use format_exc() and then parse the output to get the relevant part:

from traceback import format_exc

try:
    1/0
except Exception:
    print('the relevant part is: ' + format_exc().split('\n')[-2])

Regards


answered Feb 20, 2013 at 16:32

caraconan

If you are even a little familiar with programming and have tried running something "in production", a dialogue like this has probably made you wince:

"Vasya, our application went down over there. Can you look into what happened?"

"Umm... How would I even do that?.."

Yes, judging by everything, Vasily has no logging set up. And that is terrible, for at least a few reasons:

  1. He will never find out why his application crashed.
  2. He won't be able to trace what led to the error (even if the application didn't crash).
  3. He won't be able to inspect the state of his system at moment N.
  4. He won't be able to glance at the logs preventively to keep an eye on the application's health.
  5. He won't be able to show off his... (ahem).

Granted, the last point is probably unnecessary. Still, one thing we understood for sure:

Logging is an extremely important thing in programming.

In Python, the main tool for logging is the logging library. So let's take a closer look at it together with IT Resume.

What is logging?

The logging module in Python is a set of functions and classes that let you record events that happen while your code is running. The module is part of the standard library, so to use it you only need to write a single line:

import logging

The main function you will need for working with this module is basicConfig(). That is where you specify all the main settings (at least at a basic level).

basicConfig() has three main parameters:

  1. level — the logging level;
  2. filename — where we send the logs;
  3. format — the form in which we save the result.

Let's look at each parameter in more detail.

It is probably obvious to everyone that the events our code generates can differ radically in importance. Catching critical errors (FatalError) is one thing; informational messages (for example, the moment a user logs in to the site) are quite another.

Accordingly, so as not to clutter the logs with unnecessary information, in basicConfig() you can specify the minimum level of events to be recorded.

By default, only warnings (WARNING) and events with higher priority are recorded: errors (ERROR) and critical errors (CRITICAL).

logging.basicConfig(level=logging.DEBUG)

Then, to record an informational message (or print it to the console — we'll talk about that a bit later), it is enough to write code like this:

logging.debug('debug message')
logging.info('info message')

And so on. Now let's discuss where our messages end up.

Displaying the log and writing it to a file

The filename parameter in basicConfig controls where the logs go. By default, all your logs go to the console.

In other words, if you simply run this code:

import logging
logging.error('WOW')

then the message WOW will arrive in your console. Obviously, nobody needs these messages in the console. So how do you direct the log records to a file instead? Very simple:

logging.basicConfig(filename = "mylog.log")

OK, writing to a file and choosing the logging level is more or less clear now. But how do you set up your own template? Let's figure it out.

By the way, we have put together a condensed cheat sheet on Python logging for you in the form of flashcards. We have plenty of other useful material as well — you won't regret it 🙂

Formatting the log

So, the last thing we need to sort out is log formatting. This option lets you enrich the log with useful information: the date, the name of the file containing the error, the line number, the method name, and so on.

As everyone has already guessed, this is done with the format parameter.

For example, if inside basicConfig you specify:

format = "%(asctime)s - %(levelname)s - %(funcName)s: %(lineno)d - %(message)s"

then the error output will look like this:

2019-01-16 10:35:12,468 - ERROR - <module>:1 - Hello, world!

You can choose for yourself which information to include in the log and which to leave out. The default format is:

<LEVEL>:<LOGGER_NAME>:<MESSAGE>
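For example, with no configuration at all, a single warning call produces a record in exactly that default shape:

import logging

logging.warning('disk is almost full')
# WARNING:root:disk is almost full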

It is important to remember that all logging.basicConfig parameters must be passed before the first call to any of the logging functions, as the snippet below illustrates.
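A small illustration of that pitfall (the force=True escape hatch exists in Python 3.8+):

import logging

logging.warning('first message')           # this call configures the root logger implicitly
logging.basicConfig(filename='mylog.log')  # too late: silently does nothing now

# In Python 3.8+ you can pass force=True to replace the existing handlers anyway:
logging.basicConfig(filename='mylog.log', force=True)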

Epilogue

Well then, we have covered all the main parameters of the logging module and the basicConfig function, which will let you set up basic logging in your project. From here on, it's all practice and learning from your own bumps 🙂

Instead of a conclusion, we will simply leave a working piece of code here that you can use 🙂

import logging

logging.basicConfig(
    level=logging.DEBUG, 
    filename = "mylog.log", 
    format = "%(asctime)s - %(module)s - %(levelname)s - %(funcName)s: %(lineno)d - %(message)s", 
    datefmt='%H:%M:%S',
    )

logging.info('Hello')

If you want to understand the parameters in more detail, the official documentation will help (it is quite good, by the way).

In this post, we will see how to log an error in Python with debug information. Logging is important in Python for debugging errors.

Let's see how we can do that.

(Also in the same context, must-read earlier posts: How to Handle Errors and Exceptions in Python? and How to Code Custom Exception Handling in Python?)

Option 1 – Using logging.exception –

import logging

try:
  <SOME_OPERATION>

except <STANDARD_PYTHON_ERRORNAME>:  # any standard error
  logging.exception("message")

Option 2 – Using sys.excepthook – 

To handle all types of uncaught exceptions, we can use the "try-except" block (read our detailed post here) or we can use sys.excepthook.

Note that excepthook is invoked whenever an exception is raised and left uncaught. So you can override the default behavior of sys.excepthook to do whatever you like (including using logging.exception).



When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments:

  • Exception class,
  • Exception instance,
  • A traceback object

But we can customize the handling by assigning another three-argument function to sys.excepthook.

Let's see an example of how to use it:

import logging
import sys

logger = logging.getLogger('<CUSTOM_LOGGER>')
# Write a custom logger to write to a text file

def custom_handle(exc_type, exc_value, exc_traceback):
  # Pass the full exception triple so the traceback ends up in the log
  logger.error("Uncaught exception: {0}".format(str(exc_value)),
               exc_info=(exc_type, exc_value, exc_traceback))

# Install the exception handler
sys.excepthook = custom_handle


if __name__ == '__main__':
  main()  # your application entry point

Another way of using it:

import sys
import logging

logger = logging.getLogger(__name__)
handleIt = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handleIt)

def custom_handler(exc_type, exc_value, exc_traceback):
  if issubclass(exc_type, <ANY_STANDARD_EXCEPTION>):
    # Fall back to the default behaviour for this exception type
    sys.__excepthook__(exc_type, exc_value, exc_traceback)
    return

  logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = custom_handler

if __name__ == "__main__":
  raise RuntimeError("Test The Logic")

Option 3 – Writing to a log file directly –

You can write errors to an external log file yourself.



logFile = open("logoutput.log", "w")

try: 
  <SOME_OPERATION>

except Exception as e:
  logFile.write("Exception - {0}n".format(str(e)))

Option 4 – Using exc_info –

sys.exc_info() returns a tuple of three values that describe the exception currently being handled.

The information returned is specific both to the current thread and to the current stack frame.

import sys

print(sys.exc_info())

try: 
  <SOME_OPERATION>

except <ANY_STANDARD_EXCEPTION>: 
  print(sys.exc_info())

Or you can use it as below:

try: 
  <SOME_OPERATION>

except Exception as e:
  logging.critical(e, exc_info=True)



Option 5 – Using the traceback module –

import traceback

try:
 raise Exception()

except Exception as e:
 print(traceback.extract_tb(e.__traceback__))

Other Interesting Reads –

  • How To Fix – fatal error: Python.h: No such file or directory ?

  • How to Send Large Messages in Kafka ?

  • How to Handle Errors and Exceptions in Python ?

  • How To Fix – Indentation Errors in Python






if( aicp_can_see_ads() ) {

}

In the vast computing world, there are different programming languages that include facilities for logging. From our previous posts, you can learn best practices about Node logging, Java logging, and Ruby logging. As part of the ongoing logging series, this post describes what you need to discover about Python logging best practices.

Considering that “log” has the double meaning of a (single) log-record and a log-file, this post assumes that “log” refers to a log-file.

Advantages of Python logging

So, why learn about logging in Python? One of Python’s striking features is its capacity to handle high-traffic sites, with an emphasis on code readability. Other advantages of logging in Python include its dedicated library for this purpose, the variety of outputs to which log records can be directed (console, file, rotating file, Syslog, remote server, email, etc.), and the large number of extensions and plugins it supports. In this post, you’ll find examples of different outputs.

Python logging description

The Python standard library provides a logging module as a solution to log events from applications and libraries. Once the logging module is configured, it becomes part of the Python interpreter process that is running the code. In other words, it is global. You can also configure the Python logging subsystem using an external configuration file. The specifications for the logging configuration format are found in the Python standard library documentation.

The logging library is based on a modular approach and includes categories of components: loggers, handlers, filters, and formatters. Basically:

  • Loggers expose the interface that application code directly uses.
  • Handlers send the log records (created by loggers) to the appropriate destination.
  • Filters provide a finer grained facility for determining which log records to output.
  • Formatters specify the layout of log records in the final output.

These multiple logger objects are organized into a tree that represents various parts of your system and different third-party libraries that you have installed. When you send a message into one of the loggers, the message gets output on all of that logger’s handlers, using a formatter that’s attached to each handler. The message then propagates up the logger tree until it hits the root logger, or a logger up in the tree that is configured with propagate=False.

Python logging platforms

This is an example of a basic logger in Python:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='/tmp/myapp.log',
                    filemode='w')

logging.debug("Debug message")

logging.info("Informative message")

logging.error("Error message")

Line 1: import the logging module.

Line 2: call the basicConfig function and pass some arguments to create the log file. In this case, we indicate the severity level, the message format, the filename, and the file mode to have the function overwrite the log file.

Lines 3 to 5: messages for each logging level.

The default format for log records is SEVERITY:LOGGER:MESSAGE, but since the example above supplies its own format string, running the code as is produces this output:

2021-07-02 13:00:08,743 DEBUG Debug message
2021-07-02 13:00:08,743 INFO Informative message
2021-07-02 13:00:08,743 ERROR Error message

Regarding the output, you can set the destination of the log messages. As a first step, you can print messages to the screen using this sample code:

import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.')

If your goals are aimed at the cloud, you can take advantage of Python’s set of logging handlers to redirect content. At the time of writing this integration is in beta release: you can write logs to Stackdriver Logging from Python applications by using Google’s Python logging handler included with the Stackdriver Logging client library, or by using the client library to access the API directly. When developing your logger, take into account that the root logger doesn’t use your log handler. Since the Python Client for Stackdriver Logging library also does logging, you may get a recursive loop if the root logger uses your Python log handler.

The possibilities with Python logging are endless and you can customize them to your needs. The following are some tips for web application logging best practices, so you can get the most out of Python logging:

Setting level names: This helps you maintain your own dictionary of log messages and reduces the possibility of typos.

LogWithLevelName = logging.getLogger('myLoggerSample')
level = logging.getLevelName('INFO')
LogWithLevelName.setLevel(level)

logging.getLevelName(logging_level) returns the textual representation of the severity called logging_level. The predefined values include, from highest to lowest severity:

  1. CRITICAL
  2. ERROR
  3. WARNING
  4. INFO
  5. DEBUG

Logging from multiple modules: if you have various modules, and you have to perform the initialization in every module before logging messages, you can use cascaded logger naming:

logging.getLogger("coralogix")
logging.getLogger("coralogix.database")
logging.getLogger("coralogix.client")

Making coralogix.client and coralogix.database descendants of the coralogix logger, and propagating their messages to it, enables easy multi-module logging. This is one of the positive side effects of naming loggers after their modules when the library structure of the modules reflects the software architecture.

Logging with Django and uWSGI: To deploy web applications, you can use a StreamHandler as the logger, which sends all logs to stderr and leaves the handling of the output to uWSGI. For Django you have:

  'handlers': {
    'stderr': {
        'level': 'INFO',
        'class': 'logging.StreamHandler',
        'formatter': 'your_formatter',
    },
  },

Next, uWSGI forwards all of the app output, including prints and possible tracebacks, to syslog with the app name attached:

    $ uwsgi --log-syslog=yourapp 

Logging with Nginx: If you need additional features not supported by uWSGI (for example, improved handling of static resources via any combination of Expires or E-Tag headers, gzip compression, pre-compressed gzip, etc.), access logs and their format can be customized in the Nginx conf. You can use the combined format, as in this example for a Linux system:

access_log /var/log/nginx/access.log;

This line is similar to explicitly specifying the combined format (extended here with the server port) like this:

# note that the log_format directive below is a single line
log_format mycombinedplus '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $server_port';
access_log /var/log/nginx/access.log mycombinedplus;

Log analysis and filtering: after writing proper logs, you might want to analyze them and obtain useful insights. First, open files using with blocks, so you won’t have to worry about closing them. Moreover, avoid reading everything into memory at once; instead, read a line at a time and use it to update the cumulative statistics. The use of the combined log format can be practical if you are thinking of using log analysis tools, because they have pre-built filters for consuming these logs.

If you need to parse your log output for analysis you might want to use the code below:

    import csv  # at the top of the module

    with open(logfile, "rt", newline="") as f:  # text mode: csv expects str lines in Python 3
        for line in csv.reader(f, delimiter=" "):
            self._update(**self._parse(line))

Python’s csv module contains code to read CSV files and other files with a similar format. In this way, you can combine Python’s logging library to register the logs and the csv module to parse them.

And of course, there is the Coralogix way for Python logging: the Coralogix Python appender allows sending all Python-written logs directly to Coralogix for search, live tail, alerting, and of course, machine learning powered insights such as new error detection and flow anomaly detection.

Python Logging Deep Dive

The rest of this guide is focused on how to log in Python using the built-in support for logging. It introduces various concepts that are relevant to understanding Python logging, discusses the corresponding logging APIs in Python and how to use them, and presents best practices and performance considerations for using these APIs.

This Python tutorial assumes the reader has a good grasp of programming in Python; specifically, concepts and constructs pertaining to general programming and object-oriented programming. The information and Python logging examples in this article are based on Python version 3.8.

Python has offered built-in support for logging since version 2.3. This support includes library APIs for common concepts and tasks that are specific to logging and language-agnostic. This article introduces these concepts and tasks as realized and supported in Python’s logging library.

Basic Python Logging Concepts

When we use a logging library, we perform/trigger the following common tasks while using the associated concepts (highlighted in bold).

  1. A client issues a log request by executing a logging statement. Often, such logging statements invoke a function/method in the logging (library) API by providing the log data and the logging level as arguments. The logging level specifies the importance of the log request. Log data is often a log message, which is a string, along with some extra data to be logged. Often, the logging API is exposed via logger objects.
  2. To enable the processing of a request as it threads through the logging library, the logging library creates a log record that represents the log request and captures the corresponding log data.
  3. Based on how the logging library is configured (via a logging configuration), the logging library filters the log requests/records. This filtering involves comparing the requested logging level to the threshold logging level and passing the log records through user-provided filters.
  4. Handlers process the filtered log records to either store the log data (e.g., write the log data into a file) or perform other actions involving the log data (e.g., send an email with the log data). In some logging libraries, before processing log records, a handler may again filter the log records based on the handler’s logging level and user-provided handler-specific filters. Also, when needed, handlers often rely on user-provided formatters to format log records into strings, i.e., log entries.

Independent of the logging library, the above tasks are performed in an order similar to that shown in Figure 1.

Figure 1: The flow of tasks when logging via a logging library

Python Logging Module

Python’s standard library offers support for logging via logging, logging.config, and logging.handlers modules.

  • logging module provides the primary client-facing API.
  • logging.config module provides the API to configure logging in a client.
  • logging.handlers module provides different handlers that cover common ways of processing and storing log records.

We collectively refer to these Python log modules as Python’s logging library.

These Python log modules realize the concepts introduced in the previous section as classes, a set of module-level functions, or a set of constants. Figure 2 shows these classes and the associations between them.

Figure 2: Python classes and constants representing various logging concepts

Python Logging Levels

Out of the box, the Python logging library supports five logging levels: critical, error, warning, info, and debug. These levels are denoted by constants with the same name in the logging module, i.e., logging.CRITICAL, logging.ERROR, logging.WARNING, logging.INFO, and logging.DEBUG. The values of these constants are 50, 40, 30, 20, and 10, respectively.

At runtime, the numeric value of a logging level determines its meaning. Consequently, clients can introduce new logging levels by using numeric values that are greater than 0 and different from the pre-defined logging levels.

Logging levels can have names. When names are available, logging levels appear by their names in log entries. Every pre-defined logging level has the same name as the corresponding constant; hence, it appears by its name in log entries, e.g., requests at level logging.WARNING (numeric value 30) appear as ‘WARNING’. In contrast, custom logging levels are unnamed by default. So, an unnamed custom logging level with numeric value n appears as ‘Level n’ in log entries, which results in inconsistent and human-unfriendly log entries. To address this, clients can name a custom logging level using the module-level function logging.addLevelName(level, levelName). For example, by using logging.addLevelName(33, 'CUSTOM1'), level 33 will be recorded as ‘CUSTOM1’.
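
For illustration, here is a minimal sketch of this behavior; the NOTICE level and its value of 25 are made-up examples, not part of the library:

import logging

NOTICE = 25  # hypothetical custom level between INFO (20) and WARNING (30)
logging.addLevelName(NOTICE, 'NOTICE')

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('app')

# Without the addLevelName call above, this entry would read "Level 25"
logger.log(NOTICE, 'routine maintenance finished')  # NOTICE:app:routine maintenance finished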

The Python logging library adopts the community-wide applicability rules for logging levels, i.e., when should logging level X be used?

  1. Debug: Use logging.DEBUG to log detailed information, typically of interest only when diagnosing problems, e.g., when the app starts.
  2. Info: Use logging.INFO to confirm the software is working as expected, e.g., when the app initializes successfully.
  3. Warning: Use logging.WARNING to report behaviors that are unexpected or are indicative of future problems but do not affect the current functioning of the software, e.g., when the app detects low memory, and this could affect the future performance of the app.
  4. Error: Use logging.ERROR to report the software has failed to perform some function, e.g., when the app fails to save the data due to insufficient permission.
  5. Critical: Use logging.CRITICAL to report serious errors that may prevent the continued execution of the software, e.g., when the app fails to allocate memory.

Python Loggers

The logging.Logger objects offer the primary interface to the logging library. These objects provide the logging methods to issue log requests along with the methods to query and modify their state. From here on out, we will refer to Logger objects as loggers.

Creation

The factory function logging.getLogger(name) is typically used to create loggers. By using the factory function, clients can rely on the library to manage loggers and to access loggers via their names instead of storing and passing references to loggers.

The name argument in the factory function is typically a dot-separated hierarchical name, e.g., a.b.c. This naming convention enables the library to maintain a hierarchy of loggers. Specifically, when the factory function creates a logger, the library ensures a logger exists for each level of the hierarchy specified by the name, and every logger in the hierarchy is linked to its parent and child loggers.
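
As a small sketch (the names app and app.io are placeholders), the factory function returns the same logger for repeated calls with the same name and wires up the parent-child links:

import logging

# getLogger returns the same object for repeated calls with the same name
assert logging.getLogger('app') is logging.getLogger('app')

# Dotted names build a hierarchy: 'app.io' becomes a child of 'app',
# and its log requests propagate to handlers attached to 'app' by default.
parent = logging.getLogger('app')
child = logging.getLogger('app.io')
print(child.parent is parent)  # True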

Threshold Logging Level

Each logger has a threshold logging level that determines if a log request should be processed. A logger processes a log request if the numeric value of the requested logging level is greater than or equal to the numeric value of the logger’s threshold logging level. Clients can retrieve and change the threshold logging level of a logger via Logger.getEffectiveLevel() and Logger.setLevel(level) methods, respectively.

When the factory function is used to create a logger, the function sets a logger’s threshold logging level to the threshold logging level of its parent logger as determined by its name.
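
The following sketch (again with the hypothetical app and app.io loggers) shows the difference between a logger’s own level and its effective level:

import logging

parent = logging.getLogger('app')
child = logging.getLogger('app.io')

parent.setLevel(logging.WARNING)

# The child has no level of its own (NOTSET), so its effective level
# is inherited from the nearest ancestor with a level set.
print(child.getEffectiveLevel() == logging.WARNING)  # True

child.setLevel(logging.DEBUG)
print(child.getEffectiveLevel() == logging.DEBUG)    # True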

Python Logging Methods

Every logger offers the following logging methods to issue log requests.

  • Logger.critical(msg, *args, **kwargs)
  • Logger.error(msg, *args, **kwargs)
  • Logger.debug(msg, *args, **kwargs)
  • Logger.info(msg, *args, **kwargs)
  • Logger.warning(msg, *args, **kwargs) (the older Logger.warn is a deprecated alias)

Each of these methods is a shorthand to issue log requests with corresponding pre-defined logging levels as the requested logging level.

In addition to the above methods, loggers also offer the following two methods:

  • Logger.log(level, msg, *args, **kwargs) issues log requests with explicitly specified logging levels. This method is useful when using custom logging levels.
  • Logger.exception(msg, *args, **kwargs) issues log requests with the logging level ERROR and that capture the current exception as part of the log entries. Consequently, clients should invoke this method only from an exception handler.

msg and args arguments in the above methods are combined to create log messages captured by log entries. All of the above methods support the keyword argument exc_info to add exception information to log entries and stack_info and stacklevel to add call stack information to log entries. Also, they support the keyword argument extra, which is a dictionary, to pass values relevant to filters, handlers, and formatters.
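
For example, a short sketch of these keyword arguments in use (the logger name and the extra key user_id are arbitrary):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('app')

# extra adds custom attributes to the log record for filters/formatters
logger.info('user logged in', extra={'user_id': 42})

try:
    1 / 0
except ZeroDivisionError:
    # exc_info=True attaches the active exception and its traceback
    logger.error('calculation failed', exc_info=True)

# stack_info=True records the call stack even without an exception
logger.debug('checkpoint reached', stack_info=True)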

When executed, the above methods perform/trigger all of the tasks shown in Figure 1 and the following two tasks:

  1. After deciding to process a log request based on its logging level and the threshold logging level, the logger creates a LogRecord object to represent the log request in the downstream processing of the request. LogRecord objects capture the msg and args arguments of logging methods and the exception and call stack information along with source code information. They also capture the keys and values in the extra argument of the logging method as fields.
  2. After every handler of a logger has processed a log request, the handlers of its ancestor loggers process the request (in the order they are encountered walking up the logger hierarchy). The Logger.propagate field controls this aspect, which is True by default.

Beyond logging levels, filters provide a finer means to filter log requests based on the information in a log record, e.g., ignore log requests issued in a specific class. Clients can add and remove filters to/from loggers using Logger.addFilter(filter) and Logger.removeFilter(filter) methods, respectively.

Python Logging Filters

Any function or callable that accepts a log record argument and returns zero to reject the record and a non-zero value to admit the record can serve as a filter. Any object that offers a method with the signature filter(record: LogRecord) -> int can also serve as a filter.

A subclass of logging.Filter(name: str) that optionally overrides the logging.Filter.filter(record) method can also serve as a filter. Without overriding the filter method, such a filter admits records emitted by the logger whose name matches the filter’s name or by that logger’s descendants (based on the names of the loggers and the filter). If the name of the filter is empty, then the filter admits all records. If the method is overridden, then it should return a zero value to reject the record and a non-zero value to admit the record.
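
Both flavors of filter can be sketched as follows; the logger names app.vendor and app.io are placeholders:

import logging

# Callable filter: reject DEBUG records from a noisy sub-tree of loggers
def drop_noisy_debug(record: logging.LogRecord) -> int:
    if record.name.startswith('app.vendor') and record.levelno == logging.DEBUG:
        return 0  # reject
    return 1      # admit

# Name-based filter: admits records from 'app.io' and its descendants only
io_only = logging.Filter('app.io')

handler = logging.StreamHandler()
handler.addFilter(drop_noisy_debug)
handler.addFilter(io_only)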

Python Logging Handler

The logging.Handler objects perform the final processing of log records, i.e., logging the log requests. This final processing often translates into storing the log record, e.g., writing it into system logs or files. It can also translate into communicating the log record data to specific entities (e.g., sending an email), or passing the log record to other entities for further processing (e.g., providing the log record to a log collection process or a log collection service).

Like loggers, handlers have a threshold logging level, which can be set via the Handler.setLevel(level) method. They also support filters via the Handler.addFilter(filter) and Handler.removeFilter(filter) methods.

The handlers use their threshold logging level and filters to filter log records for processing. This additional filtering allows context-specific control over logging, e.g., a notifying handler should only process log requests that are critical or from a flaky module.

While processing the log records, handlers format log records into log entries using their formatters. Clients can set the formatter for a handler via Handler.setFormatter(formatter) method. If a handler does not have a formatter, then it uses the default formatter provided by the library.

The logging.handlers module provides a rich collection of 15 useful handlers that cover many common use cases (including the ones mentioned above). So, instantiating and configuring these handlers suffices in many situations.

In situations that warrant custom handlers, developers can extend the Handler class or one of the pre-defined Handler classes by implementing the Handler.emit(record) method to log the provided log record.
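
As a sketch of such a custom handler, the in-memory ListHandler below (a made-up class, not part of the library) only implements emit:

import logging

class ListHandler(logging.Handler):
    """Collects formatted log entries in a list; useful, e.g., in tests."""
    def __init__(self):
        super().__init__()
        self.entries = []

    def emit(self, record: logging.LogRecord) -> None:
        try:
            self.entries.append(self.format(record))
        except Exception:
            self.handleError(record)

logger = logging.getLogger('app')
logger.addHandler(ListHandler())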

Python Logging Formatter

The handlers use logging.Formatter objects to format a log record into a string-based log entry.

Note: Formatters do not control the creation of log messages.

A formatter works by combining the fields/data in a log record with the user-specified format string.

Unlike handlers, the logging library only provides a basic formatter that logs the requested logging level, the logger’s name, and the log message. So, beyond simple use cases, clients need to create new formatters by creating logging.Formatter objects with the necessary format strings.

Formatters support three styles of format strings:

  • printf style, e.g., '%(levelname)s:%(name)s:%(message)s'
  • str.format(), e.g., '{levelname}:{name}:{message}'
  • string.Template, e.g., '$levelname:$name:$message'

The format string of a formatter can refer to any field of LogRecord objects, including the fields based on the keys of the extra argument of the logging method.

Before formatting a log record, the formatter uses the LogRecord.getMessage() method to construct the log message by combining the msg and args arguments of the logging method (stored in the log record) using the string formatting operator (%). The formatter then combines the resulting log message with the data in the log record using the specified format string to create the log entry.
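
For example, the three styles listed above can be written as follows; all three produce equivalent log entries:

import logging

printf_fmt = logging.Formatter('%(levelname)s:%(name)s:%(message)s')
brace_fmt  = logging.Formatter('{levelname}:{name}:{message}', style='{')
dollar_fmt = logging.Formatter('$levelname:$name:$message', style='$')

handler = logging.StreamHandler()
handler.setFormatter(brace_fmt)  # any one of the three can be attached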

The Root Logger and Module-Level Functions

To maintain a hierarchy of loggers, when a client uses the logging library, the library creates a root logger that serves as the root of the hierarchy of loggers. The default threshold logging level of the root logger is logging.WARNING.

The module offers all of the logging methods offered by the Logger class as module-level functions with identical names and signature, e.g., logging.debug(msg, *args, **kwargs). Clients can use these functions to issue log requests without creating loggers, and the root logger services these requests. If the root logger has no handlers when servicing log requests issued via these methods, then the logging library adds a logging.StreamHandler instance based on the sys.stderr stream as a handler to the root logger.

When loggers without handlers receive log requests, the logging library directs such log requests to the last resort handler, which is a logging.StreamHandler instance based on sys.stderr stream. This handler is accessible via the logging.lastResort attribute.
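
A minimal sketch of this behavior using only the module-level functions:

import logging

# No configuration yet: the call below makes the library attach a
# sys.stderr-based StreamHandler to the root logger on first use.
logging.warning('disk space low')   # WARNING:root:disk space low

# DEBUG is below the root logger's default WARNING threshold, so it is dropped.
logging.debug('not shown')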

Python Logging Examples

Here are a few code snippets illustrating how to use the Python logging library.

Snippet 1: Creating a logger with a handler and a formatter

# main.py
import logging, sys

def _init_logger():
    logger = logging.getLogger('app')  #1
    logger.setLevel(logging.INFO)  #2
    handler = logging.StreamHandler(sys.stderr)  #3
    handler.setLevel(logging.INFO)  #4
    formatter = logging.Formatter(  
           '%(created)f:%(levelname)s:%(name)s:%(module)s:%(message)s') #5
    handler.setFormatter(formatter)  #6
    logger.addHandler(handler)  #7

_init_logger()
_logger = logging.getLogger('app')

This snippet does the following.

  1. Create a logger named ‘app’.
  2. Set the threshold logging level of the logger to INFO.
  3. Create a stream-based handler that writes the log entries into the standard error stream.
  4. Set the threshold logging level of the handler to INFO.
  5. Create a formatter to capture
    • the time of the log request as the number of seconds since epoch,
    • the logging level of the request,
    • the logger’s name,
    • the name of the module issuing the log request, and
    • the log message.
  6. Set the created formatter as the formatter of the handler.
  7. Add the created handler to this logger.

By changing the handler created in step 3, we can redirect the log entries to different locations or processors.

Snippet 2: Issuing log requests

# main.py
import os

_logger.info('App started in %s', os.getcwd())

This snippet logs informational messages stating the app has started.

When the app is started in the folder /home/kali with the logger created using snippet 1, this snippet will generate the log entry 1586147623.484407:INFO:app:main:App started in /home/kali in the standard error stream.

Snippet 3: Issuing log requests

# app/io.py
import logging

def _init_logger():
    logger = logging.getLogger('app.io')
    logger.setLevel(logging.INFO)  

_init_logger()
_logger = logging.getLogger('app.io')

def write_data(file_name, data):
    try:
        # write data
        _logger.info('Successfully wrote %d bytes into %s', len(data), file_name)
    except FileNotFoundError:
        _logger.exception('Failed to write data into %s', file_name)

This snippet logs an informational message every time data is written successfully via write_data. If a write fails, then the snippet logs an error message that includes the stack trace in which the exception occurred.

With the logger created using snippet 1, if the execution of app.write_data('/tmp/tmp_data.txt', image_data) succeeds, then this snippet will generate a log entry similar to 1586149091.005398:INFO:app.io:io:Successfully wrote 134 bytes into /tmp/tmp_data.txt. If the execution of app.write_data('/tmp/tmp_data.txt', image_data) fails, then this snippet will generate the following log entry:

1586149219.893821:ERROR:app.io:io:Failed to write data into /tmp1/tmp_data.txt
Traceback (most recent call last):
  File "/home/kali/program/app/io.py", line 12, in write_data
    print(open(file_name), data)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp1/tmp_data.txt'

Instead of using positional arguments in the format string in the logging method, we could achieve the same output by using the arguments via their names as follows:

_logger.info('Successfully wrote %(data_size)s bytes into %(file_name)s',
    {'data_size': len(data), 'file_name': file_name})

Snippet 4: Filtering log requests

# main.py
import logging, os, sys
import app.io

def _init_logger():
    logger = logging.getLogger('app')
    logger.setLevel(logging.INFO)  
    formatter = logging.Formatter(  
        '%(created)f:%(levelname)s:%(name)s:%(module)s:%(message)s')
    handler = logging.StreamHandler(sys.stderr)
    handler.setLevel(logging.INFO)  
    handler.setFormatter(formatter) 
    handler.addFilter(lambda record: record.version > 5 or #1
            record.levelno >= logging.ERROR)               #1
    logger.addHandler(handler) 

_init_logger()
_logger = logging.LoggerAdapter(logging.getLogger('app'), {'version': 6})  #2

This snippet modifies Snippet 1 as follows.

  1. Lines marked #1 add a filter to the handler. This filter admits log records only if their logging level is greater than or equal to logging.ERROR or they are from a component whose version is higher than 5.
  2. Line marked #2 wraps the logger in a logging.LoggerAdapter object to inject version information into log records.

The logging.LoggerAdapter class provides a mechanism to inject contextual information into log records. We discuss other mechanisms to inject contextual information in the Good Practices and Gotchas section.

# app/io.py
import logging

def _init_logger():
    logger = logging.getLogger('app.io')
    logger.setLevel(logging.INFO)  

_init_logger()
_logger = logging.LoggerAdapter(logging.getLogger('app.io'), {'version': 3})  # 1

def write_data(file_name, data):
    try:
        # write data
        _logger.info('Successfully wrote %d bytes into %s', len(data),
            file_name)
    except FileNotFoundError:
        _logger.exception('Failed to write data into %s', file_name)

This snippet modifies Snippet 3 by wrapping the logger in a LoggerAdapter object to inject version information.

All of the above changes affect the logging behavior of the app described in Snippet 2 and Snippet 3 as follows.

  1. The request to log the informational message about the start of the app is processed as the version info supplied by the module satisfies the filter.
  2. The request to log the informational message about the successful write is ignored as the version info supplied by the module fails to satisfy the filter.
  3. The request to log the error message about the failure to write data is processed as the logging level of the message satisfies the filter.

What do you suppose would have happened if the filter was added to the logger instead of the handler? See Gotchas for the answer.

Python Logging Configuration

The logging classes introduced in the previous section provide methods to configure their instances and, consequently, customize the use of the logging library. Snippet 1 demonstrates how to use configuration methods. These methods are best used in simple single-file programs.

When involved programs (e.g., apps, libraries) use the logging library, a better option is to externalize the configuration of the logging library. Such externalization allows users to customize certain facets of logging in a program (e.g., specify the location of log files, use custom loggers/handlers/formatters/filters) and, hence, eases the deployment and use of the program. We refer to this approach to configuration as the data-based approach.

Configuring the Library

Clients can configure the logging library by invoking logging.config.dictConfig(config: Dict) function. The config argument is a dictionary and the following optional keys can be used to specify a configuration.

filters key maps to a dictionary of strings and dictionaries. The strings serve as filter ids used to refer to filters in the configuration (e.g., adding a filter to a logger) while the mapped dictionaries serve as filter configurations. The string value of the name key in filter configurations is used to construct logging.Filter instances.

"filters": {
  "io_filter": {
    "name": "app.io"
  }
}

This configuration snippet results in the creation of a filter that admits all records created by the logger named ‘app.io’ or its descendants.

formatters key maps to a dictionary of strings and dictionaries. The strings serve as formatter ids used to refer to formatters in the configuration (e.g., adding a formatter to a handler) while the mapped dictionaries serve as formatter configurations. The string values of the datefmt and format keys in formatter configurations are used as the date and log entry formatting strings, respectively, to construct logging.Formatter instances. The boolean value of the (optional) validate key controls the validation of the format strings during the construction of a formatter.

"formatters": {
  "simple": {
    "format": "%(asctime)s - %(message)s",
    "datefmt": "%y%j-%H%M%S"

  },
  "detailed": {
    "format": "%(asctime)s - %(pathname):%(lineno) - %(message)s"
  }
}

This configuration snippet results in the creation of two formatters: a simple formatter with the specified log entry and date formatting strings, and a detailed formatter with the specified log entry formatting string and the default date formatting string.

handlers key maps to a dictionary of strings and dictionaries. The strings serve as handler ids used to refer to handlers in the configuration (e.g., adding a handler to a logger) while the mapped dictionaries serve as handler configurations. The string value of the class key in a handler configuration names the class to instantiate to construct a handler. The string value of the (optional) level key specifies the logging level of the instantiated handler. The string value of the (optional) formatter key specifies the id of the formatter of the handler. Likewise, the list of values of the (optional) filters key specifies the ids of the filters of the handler. The remaining keys are passed as keyword arguments to the handler’s constructor.

"handlers": {
  "stderr": {
    "class": "logging.StreamHandler",
    "level": "INFO",
    "filters": ["io_filter"],
    "formatter": "simple",
    "stream": "ext://sys.stderr"
  },
  "alert": {
    "class": "logging.handlers.SMTPHandler",
    "level": "ERROR",
    "formatter": "detailed",
    "mailhost": "smtp.skynet.com",
    "fromaddr": "[email protected]",
    "toaddrs": [ "[email protected]", "[email protected]" ],
    "subject": "System Alert"
  }
}

This configuration snippet results in the creation of two handlers:

  • A stderr handler that formats log requests with INFO and higher logging levels via the simple formatter and emits the resulting log entries into the standard error stream. The stream key is passed as a keyword argument to the logging.StreamHandler constructor.
    The value of the stream key illustrates how to access objects external to the configuration. The ext:// prefixed string refers to the object that is accessible when the string without the ext:// prefix (i.e., sys.stderr) is processed via the normal importing mechanism. Refer to Access to external objects for more details. Refer to Access to internal objects for details about a similar mechanism based on cfg:// prefix to refer to objects internal to a configuration.
  • An alert handler that formats ERROR and CRITICAL log requests via the detailed formatter and emails the resulting log entries to the given email addresses. The keys mailhost, fromaddr, toaddrs, and subject are passed as keyword arguments to the logging.handlers.SMTPHandler constructor.

loggers key maps to a dictionary of strings that serve as logger names and dictionaries that serve as logger configurations. The string value of the (optional) level key specifies the logging level of the logger. The boolean value of the (optional) propagate key specifies the propagation setting of the logger. The list of values of the (optional) filters key specifies the ids of the filters of the logger. Likewise, the list of values of the (optional) handlers key specifies the ids of the handlers of the logger.

"loggers": {
  "app": {
    "handlers": ["stderr", "alert"],
    "level": "WARNING"
  },
  "app.io": {
    "level": "INFO"
  }
}

This configuration snippet results in the creation of two loggers. The first logger is named app, its threshold logging level is set to WARNING, and it is configured to forward log requests to stderr and alert handlers. The second logger is named app.io, and its threshold logging level is set to INFO. Since a log request is propagated to the handlers associated with every ascendant logger, every log request with INFO or a higher logging level made via the app.io logger will be propagated to and handled by both stderr and alert handlers.

root key maps to a dictionary of configuration for the root logger. The format of the mapped dictionary is the same as the mapped dictionary for a logger.

incremental key maps to either True or False (default). If True, then only the logging levels and propagate options of loggers, handlers, and root loggers are processed, and all other bits of the configuration are ignored. This key is useful for altering an existing logging configuration. Refer to Incremental Configuration for more details.

disable_existing_loggers key maps to either True (default) or False. If True, then all existing non-root loggers are disabled as a result of processing this configuration.

Also, the config argument should map the version key to 1.

Here’s the complete configuration composed of the above snippets.

{
  "version": 1,
  "filters": {
    "io_filter": {
      "name": "app.io"
    }
  },
  "formatters": {
    "simple": {
      "format": "%(asctime)s - %(message)s",
      "datefmt": "%y%j-%H%M%S"

    },
    "detailed": {
      "format": "%(asctime)s - %(pathname):%(lineno) - %(message)s"
    }
  },
  "handlers": {
    "stderr": {
      "class": "logging.StreamHandler",
      "level": "INFO",
      "filters": ["io_filter"],
      "formatter": "simple",
      "stream": "ext://sys.stderr"
    },
    "alert": {
      "class": "logging.handlers.SMTPHandler",
      "level": "ERROR",
      "formatter": "detailed",
      "mailhost": "smtp.skynet.com",
      "fromaddr": "[email protected]",
      "toaddrs": [ "[email protected]", "[email protected]" ],
      "subject": "System Alert"
    }
  },
  "loggers": {
    "app": {
      "handlers": ["stderr", "alert"],
      "level": "WARNING"
    },
    "app.io": {
      "level": "INFO"
    }
  }
}

Customizing via Factory Functions

The configuration schema for filters supports a pattern to specify a factory function to create a filter. In this pattern, a filter configuration maps the () key to the fully qualified name of a filter creating factory function along with a set of keys and values to be passed as keyword arguments to the factory function. In addition, attributes and values can be added to custom filters by mapping the . key to a dictionary of attribute names and values.

For example, the below configuration will cause the invocation of app.logging.customFilterFactory(startTime='6PM', endTime='6AM') to create a custom filter and the addition of local attribute with the value True to this filter.

  "filters": {
    "time_filter": {
      "()": "app.logging.create_custom_factory",
      "startTime": "6PM",
      "endTime": "6PM",
      ".": {
        "local": true
      }
    }
  }

Configuration schemas for formatters, handlers, and loggers also support the above pattern. In the case of handlers/loggers, if this pattern and the class key occur in the configuration dictionary, then this pattern is used to create handlers/loggers. Refer to User-defined Objects for more details.

Configuring Using Configparser-Format Files

The logging library also supports loading configuration from a configparser-format file via the logging.config.fileConfig() function. Since this is an older API that does not provide all of the functionalities offered by the dictionary-based configuration scheme, the use of the dictConfig() function is recommended; hence, we’re not discussing the fileConfig() function and the configparser file format in this tutorial.

Configuring Over The Wire

While the above APIs can be used to update the logging configuration when the client is running (e.g., web services), programming such update mechanisms from scratch can be cumbersome. The logging.config.listen() function alleviates this issue. This function starts a socket server that accepts new configurations over the wire and loads them via dictConfig() or fileConfig() functions. Refer to logging.config.listen() for more details.

Loading and Storing Configuration

Since the configuration provided to dictConfig() is nothing but a collection of nested dictionaries, a logging configuration can be easily represented in JSON and YAML format. Consequently, programs can use the json module in Python’s standard library or external YAML processing libraries to read and write logging configurations from files.

For example, the following snippet suffices to load the logging configuration stored in JSON format.

import json, logging.config

with open('logging-config.json', 'rt') as f:
  config = json.load(f)
  logging.config.dictConfig(config)

Limitations

In the supported configuration scheme, we cannot configure filters to filter beyond simple name-based filtering. For example, we cannot create a filter that admits only log requests created between 6 PM and 6 AM. We need to program such filters in Python and add them to loggers and handlers via factory functions or the addFilter() method.
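
Such a filter has to be written in code; here is a sketch of the 6 PM to 6 AM example mentioned above, using the record’s created timestamp and local time:

import logging
from datetime import datetime

def night_only(record: logging.LogRecord) -> int:
    hour = datetime.fromtimestamp(record.created).hour
    return 1 if (hour >= 18 or hour < 6) else 0  # admit only between 6 PM and 6 AM

handler = logging.StreamHandler()
handler.addFilter(night_only)
logging.getLogger('app').addHandler(handler)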

Python Logging Good Practices and Gotchas

In this section, we will list a few good practices and gotchas related to the logging library. This list stems from our experience, and we intend it to complement the extensive information available in the Logging HOWTO and Logging Cookbook sections of Python’s documentation.

Since there are no silver bullets, all good practices and gotchas have exceptions that are almost always contextual. So, before using the following good practices and gotchas, consider their applicability in the context of your application and ascertain whether they are appropriate in your application’s context.

Best Practices

Create Loggers Using the getLogger Function

The logging.getLogger() factory function helps the library manage the mapping from logger names to logger instances and maintain a hierarchy of loggers. In turn, this mapping and hierarchy offer the following benefits:

  1. Clients can use the factory function to access the same logger in different parts of the program by merely retrieving the logger by its name.
  2. Only a finite number of loggers are created at runtime (under normal circumstances).
  3. Log requests can be propagated up the logger hierarchy.
  4. When unspecified, the threshold logging level of a logger can be inferred from its ascendants.
  5. The configuration of the logging library can be updated at runtime by merely relying on the logger names.

Use Logging Level Function

Use the logging.<logging level>() functions or the Logger.<logging level>() methods to log at pre-defined logging levels.

Besides making the code a bit shorter, the use of these functions/methods helps partition the logging statements in a program into two sets:

  1. Those that issue log requests with pre-defined logging levels
  2. Those that issue log requests with custom logging levels.

Use Pre-defined Logging Levels

As described in the Logging Level section in the Concepts and API chapter, the pre-defined logging levels offered by the library capture almost all logging scenarios that occur in programs. Further, since most developers are familiar with pre-defined logging levels (as most logging libraries across different programming languages offer very similar levels), the use of pre-defined levels can help lower deployment, configuration, and maintenance burden. So, unless required, use pre-defined logging levels.

Create module-level loggers

While creating loggers, we can create a logger for each class or create a logger for each module. While the first option enables fine-grained configuration, it leads to more loggers in a program, i.e., one per class. In contrast, the second option can help reduce the number of loggers in a program. So, unless such fine-grained configuration is necessary, create module-level loggers.

Name module-level loggers with the name of the corresponding modules.

Since the logger names are string values that are not part of the namespace of a Python program, they will not clash with module names. Hence, use the name of a module as the name of the corresponding module-level logger. With this naming, logger naming piggybacks on the dot notation based module naming and, consequently, simplifies referring to loggers.
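
In practice, this boils down to one line near the top of each module, shown here for a hypothetical app/io.py:

# app/io.py
import logging

# __name__ evaluates to 'app.io', so the logger hierarchy mirrors the
# package structure without any extra bookkeeping.
_logger = logging.getLogger(__name__)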

Use logging.LoggerAdapter to inject local contextual information

As demonstrated in Snippet 4, we can use logging.LoggerAdapter to inject contextual information into log records. LoggerAdapter can also be used to modify the log message and the log data provided as part of a log request.

Since the logging library does not manage these adapters, they cannot be accessed via common names. For this reason, use them to inject contextual information that is local to a module or a class.

Use filters or logging.setLogRecordFactory() to inject global contextual information

There are two options to seamlessly inject global contextual information (that is common across an app) into log records.

The first option is to use the filter support to modify the log record arguments provided to filters. For example, the following filter injects version information into incoming log records.

def version_injecting_filter(logRecord):
    logRecord.version = '3'
    return True

There are two downsides to this option. First, if filters depend on the data in log records, then filters that inject data into log records should be executed before filters that use the injected data. Hence, the order of filters added to loggers and handlers becomes crucial. Second, the option “abuses” the support to filter log records to extend log records.

The second option is to initialize the logging library with a log record creating factory function via logging.setLogRecordFactory(). Since the injected contextual information is global, it can be injected into log records when they are created in the factory function and be sure the data will be available to every filter, formatter, logger, and handler in the program.

The downside of this option is that we have to ensure factory functions contributed by different components in a program play nicely with each other. While log record factory functions could be chained, such chaining increases the complexity of programs.
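
A sketch of the second option, chaining a new factory onto the existing one and injecting a made-up version field:

import logging

_old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = _old_factory(*args, **kwargs)
    record.version = '3'  # hypothetical global context value
    return record

logging.setLogRecordFactory(record_factory)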

Use the data-based approach to configure the logging library

If your program involves multiple modules and possibly third-party components, then use the data-based approach described in the Configuration chapter to configure the logging library.

Attach common handlers to the loggers higher up the logger hierarchy

If a handler is common to two loggers of which one is the descendant of the other, then attach the handler to the ascendant logger and rely on the logging library to propagate the log requests from the descendant logger to the handlers of the ascendant logger. If the propagate attribute of loggers has not been modified, this pattern helps avoid duplicate messages.

Use logging.disable() function to inhibit the processing of log requests below a certain logging level across all loggers

A logger processes a log request if the logging level of the log request is at least as high as the logger’s effective logging level. A logger’s effective logging level is the higher of two logging levels: the logger’s threshold logging level and the library-wide logging level. We can set the library-wide logging level via the logging.disable(level) function. By default, the library-wide logging level is 0, i.e., log requests of every logging level will be processed.

Using this function, we can throttle the logging output of an app by increasing the logging level across the entire app.
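
For example, a quick sketch of throttling and then restoring the library-wide level:

import logging

# Discard every log request at INFO and below, across all loggers
logging.disable(logging.INFO)

# Restore the default behavior (library-wide level back to 0)
logging.disable(logging.NOTSET)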

What about caching references to loggers?

Before moving on to gotchas, let’s consider the goodness of the common practice of caching references to loggers and accessing loggers via cached references, e.g., this is how the _logger attribute was used in the previous code snippets.

This coding pattern avoids repeated invocations of the logging.getLogger() function to retrieve the same module-level logger; hence, it helps eliminate redundant retrievals. However, such eliminations can lead to lost log requests if the retrievals are not redundant. For example, suppose the logging library configuration in a long-running web service is updated with the disable_existing_loggers option left at its default of True. Since such an update would disable cached loggers, none of the logging statements that use cached loggers would log any requests. While we can remedy this situation by updating cached references to loggers, a simpler solution would be to use the logging.getLogger() function instead of caching references.

In short, caching references to loggers is not always a good practice. So, consider the context of the program while deciding to cache references to loggers.

Troubleshooting

Filters Fail

When the logging library invokes the filters associated with handlers and loggers, the library assumes the filters will always execute to completion, i.e., not fail on errors. So, there is no error handling logic in the library to deal with failing filters. Consequently, when a filter fails (to execute to completion), the corresponding log request will not be logged.

Ensure filters will execute to completion. More so, when using custom filters and using additional data in filtering.

Formatters Fail

The logging library makes a similar assumption about formatters, i.e., formatters will always execute to completion. Consequently, when a formatter fails to execute to completion, the corresponding log request will not be logged.

Ensure formatters will execute to completion.

Required keys are missing in the extra argument

If the filters/formatters refer to keys of the extra argument provided as part of logging methods, then the filters/formatters can fail when the extra argument does not provide a referred key.

Ensure every key of the extra argument used in a filter or a formatter is available in every triggering logging statement.

Keys in the extra argument clash with required attributes

The logging library adds the keys of the extra argument (to various logging methods) as attributes to log records. However, if asctime and message occur as keys in the extra argument, then the creation of log records will fail, and the corresponding log request will not be logged.

A similar failure occurs if args, exc_info, lineno, msg, name, or pathname occur as keys in the extra argument; these are attributes of the LogRecord class.

Ensure asctime, message, and certain attributes of LogRecord do not appear as keys in the extra argument of logging methods.

Libraries using custom logging levels are combined

When a program and its dependent libraries use the logging library, their logging requirements are combined by the underlying logging library that services these requirements. In this case, if the components of a program use custom logging levels that are mutually inconsistent, then the logging outcome can be unpredictable.

Don’t use custom logging levels, specifically, in libraries.

Filters of ancestor loggers do not fire

By default, log requests are propagated up the logger hierarchy to be processed by the handlers of ancestor loggers. While the filters of the handlers process such log requests, the filters of the corresponding loggers do not process such log requests.

To apply a filter to all log requests submitted to a logger, add the filter to the logger.

Ids of handlers/filters/formatters clash

If multiple handlers share the same handler id in a configuration, then the handler id refers to the handler that is created last when the configuration is processed. The same happens amongst filters and formatters that share ids.

When a client terminates, the logging library will execute the cleanup logic of the handler associated with each handler id. So, if multiple handlers have the same id in a configuration, then the cleanup logic of all but the handler created last will not be executed and, hence, result in resource leaks.

Use unique ids for objects of a kind in a configuration.

Python Logging Performance

While logging statements help capture information at locations in a program, they contribute to the cost of the program in terms of execution time (e.g., logging statements in loops) and storage (e.g., logging lots of data). Although cost-free yet useful logging is impossible, we can reduce the cost of logging by making choices that are informed by performance considerations.

Configuration-Based Considerations

After adding logging statements to a program, we can use the support to configure logging (described earlier) to control the execution of logging statements and the associated execution time. In particular, consider the following configuration capabilities when making decisions about logging-related performance.

  1. Change logging levels of loggers: This change helps suppress log messages below a certain log level. This helps reduce the execution cost associated with unnecessary creation of log records.
  2. Change handlers: This change helps replace slower handlers with faster handlers (e.g., during testing, use a transient handler instead of a persistent handler) and even remove context-irrelevant handlers. This reduces the execution cost associated with unnecessary handling of log records.
  3. Change format: This change helps exclude unnecessary parts of a log record from the log (e.g., exclude IP addresses when executing in a single node setting). This reduces the execution cost associated with unnecessary handling of parts of log records.

The above changes range from coarser to finer aspects of logging support in Python.

Code-Based Considerations

While the support to configure logging is powerful, it cannot help control the performance impact of implementation choices baked into the source code. Here are a few such logging-related implementation choices and the reasons why you should consider them when making decisions about logging-related performance.

Do not execute inactive logging statements

When the logging module was added to Python’s standard library, there were concerns about the execution cost associated with inactive logging statements, i.e., logging statements that issue log requests with a logging level lower than the threshold logging level of the target logger. For example, how much extra time will a logging statement that invokes logger.debug(...) add to a program’s execution time when the threshold logging level of logger is logging.WARN? This concern led to client-side coding patterns (as shown below) that used the threshold logging level of the target logger to control the execution of the logging statement.

# client code
...
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(msg)
...

Today, this concern is not valid because the logging methods in the logging.Logger class perform similar checks and process the log requests only if the checks pass. For example, as shown below, the above check is performed in the logging.Logger.debug method.

# client code
...
logger.debug(msg)
...

# logging library code

class Logger:
    ...
    def debug(self, msg, *args, **kwargs):
        if self.isEnabledFor(DEBUG):
            self._log(DEBUG, msg, args, **kwargs)

Consequently, inactive logging statements effectively turn into no-op statements and do not contribute to the execution cost of the program.

Even so, one should consider the following two aspects when adding logging statements.

  1. Each invocation of a logging method incurs a small overhead associated with the invocation of the logging method and the check to determine if the logging request should proceed, e.g., a million invocations of logger.debug(...) when threshold logging level of logger was logging.WARN took half a second on a typical laptop. So, while the cost of an inactive logging statement is trivial, the total execution cost of numerous inactive logging statements can quickly add up to be non-trivial.
  2. While disabling a logging statement inhibits the processing of log requests, it does not inhibit the calculation/creation of arguments to the logging statement. So, if such calculations/creations are expensive, then they can contribute non-trivially to the execution cost of the program even when the corresponding logging statement is inactive.
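
When the arguments themselves are expensive to build, a guard like the following sketch avoids that cost; expensive_summary is a hypothetical helper standing in for such a calculation:

import logging

logger = logging.getLogger('app')

def expensive_summary():
    # hypothetical stand-in for a costly computation
    return 'large state dump'

# The guard skips the costly call when DEBUG records would be dropped anyway
if logger.isEnabledFor(logging.DEBUG):
    logger.debug('state: %s', expensive_summary())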

Do not construct log messages eagerly

Clients can construct log messages in two ways: eagerly and lazily.

  1. The client constructs the log message and passes it on to the logging method, e.g., logger.debug(f'Entering method Foo: {x=}, {y=}').
    This approach offers formatting flexibility via f-strings and the format() method, but it involves the eager construction of log messages, i.e., before the logging statements are deemed as active.
  2. The client provides a printf-style message format string (as the msg argument) and the values (as the args arguments) to the logging method, e.g., logger.debug('Entering method %s: x=%d, y=%f', 'Foo', x, y). After the logging statement is deemed active, the logger constructs the log message using the string formatting operator %.
    This approach relies on an older and quirky string formatting feature of Python but it involves the lazy construction of log messages.

While both approaches result in the same outcome, they exhibit different performance characteristics due to the eagerness and laziness of message construction.

For example, on a typical laptop, a million inactive invocations of logger.debug('Test message {0}'.format(t)) take 2197ms while a million inactive invocations of logger.debug('Test message %s', t) take 1111ms when t is a list of four integers. In the case of a million active invocations, the first approach takes 11061ms and the second approach takes 10149ms, a savings of 9–50% of the time taken for logging!

So, the second (lazy) approach is more performant than the first (eager) approach in cases of both inactive and active logging statements. Further, the gains would be larger when the message construction is non-trivial, e.g., use of many arguments, conversion of complex arguments to strings.
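
The timings above are machine-specific. As a rough sketch of how such a comparison might be reproduced with timeit (exact numbers will differ on your hardware):

import logging
import timeit

logging.basicConfig(level=logging.WARNING)  # DEBUG calls below stay inactive
logger = logging.getLogger(__name__)
t = [1, 2, 3, 4]

eager = timeit.timeit(lambda: logger.debug('Test message {0}'.format(t)), number=1_000_000)
lazy = timeit.timeit(lambda: logger.debug('Test message %s', t), number=1_000_000)

print(f'eager: {eager:.2f}s, lazy: {lazy:.2f}s')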

Do not gather unnecessary under-the-hood information

By default, when a log record is created, the following data is captured in the log record:

  1. Identifier of the current process
  2. Identifier and name of the current thread
  3. Name of the current process in the multiprocessing framework
  4. Filename, line number, function name, and call stack info of the logging statement

Gathering these bits of data when they are never logged needlessly increases the execution cost. So, if they will not be logged, configure the logging framework not to gather them by setting the following flags; a combined sketch appears after the list.

  1. logging.logProcesses = False
  2. logging.logThreads = False
  3. logging.logMultiprocessing = False
  4. logging._srcFile = None
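
A combined sketch of how these flags might be set, before any logging calls are made; note that _srcFile is an internal attribute of the logging module, so relying on it is an implementation detail:

import logging

# Skip collecting process, thread, and multiprocessing info for each LogRecord.
logging.logProcesses = False
logging.logThreads = False
logging.logMultiprocessing = False

# Skip collecting the caller's filename, line number, and function name
# (internal attribute, implementation detail).
logging._srcFile = None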

Do not block the main thread of execution

There are situations where we may want to log data in the main thread of execution while spending almost no time on logging it. Such situations are common in web services, e.g., a request-processing thread needs to log incoming web requests without significantly increasing its response time. We can tackle these situations by separating concerns across threads: a client/main thread creates a log record while a logging thread logs the record. Since the task of logging is often slower as it involves slower resources (e.g., secondary storage) or other services (e.g., logging services such as Coralogix, pub-sub systems such as Kafka), this separation of concerns helps minimize the impact of logging on the execution time of the main/client thread.

The Python logging library helps handle such situations via the QueueHandler and QueueListener classes as follows.

  1. A pair of QueueHandler and QueueListener instances are initialized with a queue.
  2. When the QueueHandler instance receives a log record from the client, it merely places the log request in its queue while executing in the client’s thread. Given the simplicity of the task performed by the QueueHandler, the client thread hardly pauses.
  3. When a log record is available in the QueueListener queue, the listener retrieves the log record and executes the handlers registered with the listener to handle the log record. In terms of execution, the listener and the registered handlers execute in a dedicated thread that is different from the client thread.

Note: While QueueListener comes with a default threading strategy, developers are not required to use it in order to use QueueHandler; they can plug in alternative threading strategies that meet their needs.
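
A minimal sketch of this setup, assuming a console destination for the handler registered with the listener:

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue()

# The client/main thread only enqueues log records via the QueueHandler.
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))

# The listener drains the queue in its own thread and runs the real handler(s).
listener = QueueListener(log_queue, logging.StreamHandler())
listener.start()

logger.info('Handled off the main thread')

listener.stop()  # flush remaining records and stop the listener thread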

That about wraps it up for this Python logging guide.

The logging package is a very useful tool in a programmer’s toolkit. It can help you develop a better understanding of the flow of a program and discover scenarios that you might not even have thought of while developing.

Logs provide developers with an extra set of eyes that are constantly looking at the flow an application is going through. They can store information such as which user or IP accessed the application. If an error occurs, they can provide more insight than a stack trace by telling you what state the program was in before it reached the line of code where the error occurred.

By logging useful data from the right places, you can not only debug errors easily but also use the data to analyze the performance of the application, plan for scaling, or look at usage patterns to plan for marketing.

In this article, you will learn why using the logging module is the best way to add logging to your application, and how to get started with it quickly.

The Logging Module

The logging module in Python is a ready-to-use and powerful module designed to meet the needs of beginners as well as enterprise teams. It is used by most third-party Python libraries, so you can integrate your log messages with the ones from those libraries to produce a homogeneous log for your application.

Adding logging to your Python program is as easy as this line:

import logging

With the logging module imported, you can use something called a "logger" to log the messages you want to see. By default, there are 5 standard levels of severity indicating the importance of events. Each has a corresponding method that can be used to log events at that level of severity. The defined levels, in order of increasing severity, are:

  • DEBUG
  • INFO
  • WARNING
  • ERROR
  • CRITICAL

The logging module provides you with a default logger that allows you to get started without much configuration. The corresponding methods for each level can be called as shown in the following example:

import logging

logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')

The output of the above program would look like this:

WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message

The output shows the severity level before each message along with root, which is the name the logging module gives to its default logger. This format, which shows the level, the name, and the message separated by a colon (:), is the default output format and can be configured to include things like a timestamp, the line number, and other details.

Notice that the debug() and info() messages didn’t get logged. This is because, by default, the logging module logs messages with a severity level of WARNING or above. You can change that by configuring the logging module to log events of all levels. You can also define your own severity levels by changing configurations, but this is generally not recommended, as it can cause confusion with the logs of some third-party libraries that you might be using.

Basic Configuration

You can use the basicConfig(**kwargs) method to configure logging:

"You will notice that the logging module breaks the PEP8 style guide and uses camelCase naming conventions. This is because it was adopted from Log4j, a logging utility in Java. It is a known issue in the package, but by the time it was decided to add it to the standard library, it had already been adopted by users, and changing it to meet PEP8 requirements would cause backwards-compatibility issues." (Source)

Here are some of the commonly used parameters for basicConfig():

  • level: The root logger will be set to the specified severity level.
  • filename: This specifies the file to log to.
  • filemode: If filename is given, the file is opened in this mode. The default is a, which means append.
  • format: This is the format of the log message.

By using the level parameter, you can set what level of log messages you want to record. This can be done by passing one of the constants available in the class, and it will enable all logging calls at or above that level to be logged. Here’s an example:

import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug('This will get logged')
DEBUG:root:This will get logged

All events at or above the DEBUG level will now get logged.

Similarly, for logging to a file rather than the console, filename and filemode can be used, and you can decide the format of the message using format. The following example shows the usage of all three:

import logging

logging.basicConfig(filename='app.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s')
logging.warning('This will get logged to a file')
root - WARNING - This will get logged to a file

The message will get logged to the file named app.log instead of the console. The filemode is set to w, which means the log file is opened in "write mode" each time basicConfig() is called, and each run of the program will rewrite the file. The default configuration for filemode is a, which is append.

You can customize the root logger even further by using more parameters for basicConfig(), which can be found here.

It should be noted that calling basicConfig() to configure the root logger works only if the root logger has not been configured before. Basically, this function can only be called once.

debug(), info(), warning(), error(), and critical() also call basicConfig() without arguments automatically if it has not been called before. This means that after the first time one of the above functions is called, you can no longer configure the root logger.
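
A small sketch illustrating this behavior; the force=True escape hatch mentioned in the comment exists only in Python 3.8+:

import logging

logging.warning('First call')             # implicitly calls basicConfig()
logging.basicConfig(level=logging.DEBUG)  # too late: the root logger already has a handler
logging.debug('Still not logged')         # DEBUG stays below the default WARNING threshold

# On Python 3.8+, basicConfig(..., force=True) removes the existing handlers
# and reconfigures the root logger; without it, the second call is a no-op.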

Formatting the Output

While you can pass any variable that can be represented as a string from your program as a message to your logs, there are some basic elements that are already part of the LogRecord and can be easily added to the output format. If you want to log the process ID along with the level and message, you can do something like this:

import logging

logging.basicConfig(format='%(process)d-%(levelname)s-%(message)s')
logging.warning('This is a Warning')
18472-WARNING-This is a Warning

format can take a string with LogRecord attributes in any arrangement you like. The entire list of available attributes can be found here.

Here’s another example where you can add the date and time info:

import logging

logging.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)
logging.info('Admin logged in')
2018-07-11 20:12:06,288 - Admin logged in

%(asctime)s adds the time of creation of the LogRecord. The format can be changed using the datefmt attribute, which uses the same formatting language as the formatting functions in the time module, such as time.strftime():

import logging

logging.basicConfig(format='%(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
logging.warning('Admin logged out')
12-Jul-18 20:53:19 - Admin logged out

You can find more information about the date and time format in this guide.

Logging Variables

In most cases, you will want to include dynamic information from your application in the logs. You have seen that the logging methods take a string as an argument, and it might seem natural to format a string with variables in a separate line and pass it to the log method. But this can actually be done directly by using a format string for the message and appending the variable data as arguments. Here’s an example:

import logging

name = 'John'

logging.error('%s raised an error', name)
ERROR:root:John raised an error

The arguments passed to the method will be included as variable data in the message.

While you can use any formatting style, the f-strings introduced in Python 3.6 are a great way to format strings, as they can help keep the formatting short and easy to read:

import logging

name = 'John'

logging.error(f'{name} raised an error')
ERROR:root:John raised an error

Capturing Stack Traces

The logging module also allows you to capture the full stack trace in an application. Exception information can be captured if the exc_info parameter is passed as True, and the logging functions are called like this:

import logging

a = 5
b = 0

try:
  c = a / b
except Exception as e:
  logging.error("Exception occurred", exc_info=True)
ERROR:root:Exception occurred
Traceback (most recent call last):
  File "exceptions.py", line 6, in <module>
    c = a / b
ZeroDivisionError: division by zero
[Finished in 0.2s]

If exc_info is not set to True, the output of the above program would not tell us anything about the exception, which, in a real-world scenario, might not be as simple as a ZeroDivisionError. Imagine trying to debug an error in a complicated codebase with a log that shows only this:

ERROR:root:Exception occurred

Here’s a quick tip: if you’re logging from an exception handler (try...except), use the logging.exception() method, which logs a message with level ERROR and adds the exception information to the message. To put it more simply, calling logging.exception() is like calling logging.error(exc_info=True). But since this method always dumps exception information, it should only be called from an exception handler. Take a look at this example:

import logging

a = 5
b = 0
try:
  c = a / b
except Exception as e:
  logging.exception("Exception occurred")
ERROR:root:Exception occurred
Traceback (most recent call last):
  File "exceptions.py", line 6, in <module>
    c = a / b
ZeroDivisionError: division by zero
[Finished in 0.2s]

Using logging.exception() would show a log at the level of ERROR. If you don’t want that, you can call any of the other logging methods, from debug() to critical(), and pass the exc_info parameter as True.
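
For instance, a brief sketch of logging the same traceback at the CRITICAL level instead:

import logging

try:
    1 / 0
except ZeroDivisionError:
    # Same traceback as logging.exception(), but logged at CRITICAL level.
    logging.critical('Exception occurred', exc_info=True)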

Classes and Functions

So far, we have seen the default logger named root, which is used by the logging module whenever its functions are called directly, like this: logging.debug(). You can (and should) define your own logger by creating an object of the Logger class, especially if your application has multiple modules. Let’s have a look at some of the classes and functions in the module.

The most commonly used classes defined in the logging module are the following:

  • Logger: This is the class whose objects will be used in the application code directly to call the functions.
  • LogRecord: Loggers automatically create LogRecord objects that have all the information related to the event being logged, such as the name of the logger, the function, the line number, the message, and more.
  • Handler: Handlers send the LogRecord to the required output destination, such as the console or a file. Handler is a base for subclasses like StreamHandler, FileHandler, SMTPHandler, HTTPHandler, and more. These subclasses send the logging outputs to corresponding destinations, like sys.stdout or a disk file.
  • Formatter: This is where you specify the format of the output by providing a string format that lists the attributes that the output should contain.

Out of these, we mostly deal with objects of the Logger class, which are instantiated using the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will return a reference to the same Logger object, which saves us from passing the logger objects to every part where they are needed. Here’s an example:

import logging

logger = logging.getLogger('example_logger')
logger.warning('This is a warning')
This is a warning

This creates a custom logger named example_logger, but unlike the root logger, the name of a custom logger is not part of the default output format and has to be added to the configuration. Configuring it with a format that shows the name of the logger would give an output like this:

WARNING:example_logger:This is a warning

Again, unlike the root logger, a custom logger can’t be configured using basicConfig(). You have to configure it using Handlers and Formatters:

Using Handlers

Handlers come into the picture when you want to configure your own loggers; they send the log messages to their configured destinations, such as the standard output stream, a file, HTTP, or your email via SMTP.

A logger that you create can have more than one handler, which means you can set it up to be saved to a log file and also sent over email, for example.

Like loggers, you can also set the severity level in handlers. This is useful if you want to set multiple handlers for the same logger but want different severity levels for each of them. For example, you may want logs with level WARNING and above to be logged to the console, but everything with level ERROR and above should also be saved to a file. Here’s a program that does that:

# logging_example.py

import logging

# Create a custom logger
logger = logging.getLogger(__name__)

# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler('file.log')
c_handler.setLevel(logging.WARNING)
f_handler.setLevel(logging.ERROR)

# Create formatters and add it to handlers
c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
c_handler.setFormatter(c_format)
f_handler.setFormatter(f_format)

# Add handlers to the logger
logger.addHandler(c_handler)
logger.addHandler(f_handler)

logger.warning('This is a warning')
logger.error('This is an error')
__main__ - WARNING - This is a warning
__main__ - ERROR - This is an error

Here, logger.warning() creates a LogRecord that holds all the information of the event and passes it to all the handlers that it has: c_handler and f_handler.

c_handler is a StreamHandler with level WARNING; it takes the info from the LogRecord to generate output in the specified format and prints it to the console. f_handler is a FileHandler with level ERROR, and it ignores this LogRecord because the record’s level, WARNING, is below ERROR.

When logger.error() is called, c_handler behaves exactly as before, and f_handler gets a LogRecord at the level of ERROR, so it proceeds to generate an output just like c_handler, but instead of printing it to the console, it writes it to the specified file in this format:

2018-08-03 16:12:21,723 - __main__ - ERROR - This is an error

The name of the logger corresponding to the __name__ variable is logged as __main__, which is the name Python assigns to the module where execution starts. If this file is imported by some other module, then the __name__ variable would correspond to its name, logging_example. Here’s how that would look:

# run.py

import logging_example
logging_example - WARNING - This is a warning
logging_example - ERROR - This is an error

Other Configuration Methods

You can configure logging as shown above using the module and class functions, or by creating a config file or a dictionary and loading it using fileConfig() or dictConfig(), respectively. These are useful in case you want to change your logging configuration in a running application.

Here’s an example file configuration:

[loggers]
keys=root,sampleLogger

[handlers]
keys=consoleHandler

[formatters]
keys=sampleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_sampleLogger]
level=DEBUG
handlers=consoleHandler
qualname=sampleLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=sampleFormatter
args=(sys.stdout,)

[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s

In the above file, there are two loggers, one handler, and one formatter. After their names are defined, they are configured by adding the words logger, handler, and formatter before their names, separated by an underscore.

To load this config file, you have to use fileConfig():

import logging
import logging.config

logging.config.fileConfig(fname='file.conf', disable_existing_loggers=False)

# Get the logger specified in the file
logger = logging.getLogger(__name__)

logger.debug('This is a debug message')
2018-07-13 13:57:45,467 - __main__ - DEBUG - This is a debug message

The path of the config file is passed as a parameter to the fileConfig() method, and the disable_existing_loggers parameter is used to keep or disable the loggers that are present when the function is called. It defaults to True if not mentioned.

Here’s the same configuration in a YAML format for the dictionary approach:

version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
loggers:
  sampleLogger:
    level: DEBUG
    handlers: [console]
    propagate: no
root:
  level: DEBUG
  handlers: [console]

Here’s an example that shows how to load the configuration from a YAML file:

import logging
import logging.config
import yaml

with open('config.yaml', 'r') as f:
    config = yaml.safe_load(f.read())
    logging.config.dictConfig(config)

logger = logging.getLogger(__name__)

logger.debug('This is a debug message')
2018-07-13 14:05:03,766 - __main__ - DEBUG - This is a debug message

Conclusion

The logging module is considered to be very flexible. Its design is very practical and should fit your use case out of the box. You can add basic logging to a small project, or you can go as far as creating your own custom log levels, handler classes, and more if you are working on a big project.

If you haven’t been using logging in your applications, now is a good time to start. When done right, logging will surely remove a lot of friction both during development and in production, and help you find opportunities to take your application to the next level.

Original article: Logging in Python by Abhinav Ajitsaria
