Solving Airflow - ImportError: Unable to load custom logging from log_config.DEFAULT_LOGGING_CONFIG
2021 Jan 25
I was working in Airflow and, the moment I tried to configure custom logging, I got the following error:
ImportError: Unable to load custom logging from
airflow.config.log_config.LOGGING_CONFIG due to
section/key [logging/logging_level] not found in config
Like 99% of normal people, I went to Stack Overflow and checked the answer given by Meny Issakov.
Solution
Based on that response, I arrived at the following (working) solution by doing these steps:
1) I opened the file airflow.cfg
2) I included a new section in the file, below the [core] section,
called [logging], with the following content:
[logging]
logging_config_class = log_config.DEFAULT_LOGGING_CONFIG
3) I restarted the scheduler
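For the value log_config.DEFAULT_LOGGING_CONFIG to be resolvable, the scheduler must also be able to import a module called log_config — typically a log_config.py (next to an __init__.py) placed in a folder that is on the PYTHONPATH, such as $AIRFLOW_HOME/config. If you don't want to paste the whole file shown in PS 2, a minimal sketch of such a file (illustrative only: it just copies Airflow's built-in config and tweaks one logger level) could be:

# log_config.py - minimal custom logging config (illustrative sketch)
from copy import deepcopy

from airflow.config_templates.airflow_local_settings import (
    DEFAULT_LOGGING_CONFIG as AIRFLOW_DEFAULT_LOGGING_CONFIG,
)

# Start from the dictConfig that Airflow ships with and override only what you need.
DEFAULT_LOGGING_CONFIG = deepcopy(AIRFLOW_DEFAULT_LOGGING_CONFIG)
DEFAULT_LOGGING_CONFIG['loggers']['airflow.task']['level'] = 'INFO'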
However, digging a bit into the root cause of the problem, I reached a (non-definitive) conclusion.
ELI5: the reason for the problem
The file airflow.cfg is missing a section called [logging].
Why did the problem happen?
When the scheduler starts, it reads the [core] section of airflow.cfg
and looks up the logging configuration. The class that defines where
and how the logs will be stored is given by the logging_config_class parameter.
However, even if we put logging_config_class = log_config.DEFAULT_LOGGING_CONFIG
in the [core] section, the scheduler still won't start.
Why? Because there's a mismatch between the logging handlers
in log_config.py and what airflow.cfg provides.
The logging handlers in log_config.py use the following
conf.get calls to fetch the logging configuration:
LOG_LEVEL: str = conf.get('logging', 'LOGGING_LEVEL').upper()
LOG_FORMAT: str = conf.get('logging', 'LOG_FORMAT')
The first parameter is the section that will be looked up
in the airflow.cfg file. By default, however, there is
no section called [logging] in the airflow.cfg file,
and this causes the following error during scheduler initialization:
ImportError: Unable to load custom logging from log_config.DEFAULT_LOGGING_CONFIG due to section/key [logging/fab_logging_level] not found in config
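If you want to see the failure in isolation, the lookup that log_config.py performs at import time can be reproduced directly (a hedged sketch; run it in the same environment and with the same AIRFLOW_HOME as the scheduler):

# Reproduce the lookup that blows up during scheduler start-up.
from airflow.configuration import conf

# With no [logging] section in airflow.cfg (and no AIRFLOW__LOGGING__* environment
# variables set), each of these raises AirflowConfigException
# ("section/key [logging/...] not found in config"), which the scheduler then
# surfaces as the ImportError above.
print(conf.get('logging', 'logging_level'))
print(conf.get('logging', 'fab_logging_level'))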
I hope it helps.
PS 1: By the way, I got the same error while taking the excellent Udemy course by Marc Lamberti, specifically in Section 8: Monitoring Apache Airflow, Lecture: Practice - Setting up custom logging.
PS 2: The log config has the following format in the version of Airflow
that I'm using (1.10.14):
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Airflow logging settings"""
import os
from pathlib import Path
from typing import Any, Dict, Union
from urllib.parse import urlparse

from airflow.configuration import conf
from airflow.exceptions import AirflowException

# TODO: Logging format and level should be configured
# in this file instead of from airflow.cfg. Currently
# there are other log format and level configurations in
# settings.py and cli.py. Please see AIRFLOW-1455.
LOG_LEVEL: str = conf.get('logging', 'LOGGING_LEVEL').upper()

# Flask appbuilder's info level log is very verbose,
# so it's set to 'WARN' by default.
FAB_LOG_LEVEL: str = conf.get('logging', 'FAB_LOGGING_LEVEL').upper()

LOG_FORMAT: str = conf.get('logging', 'LOG_FORMAT')

COLORED_LOG_FORMAT: str = conf.get('logging', 'COLORED_LOG_FORMAT')

COLORED_LOG: bool = conf.getboolean('logging', 'COLORED_CONSOLE_LOG')

COLORED_FORMATTER_CLASS: str = conf.get('logging', 'COLORED_FORMATTER_CLASS')

BASE_LOG_FOLDER: str = conf.get('logging', 'BASE_LOG_FOLDER')

PROCESSOR_LOG_FOLDER: str = conf.get('scheduler', 'CHILD_PROCESS_LOG_DIRECTORY')

DAG_PROCESSOR_MANAGER_LOG_LOCATION: str = conf.get('logging', 'DAG_PROCESSOR_MANAGER_LOG_LOCATION')

FILENAME_TEMPLATE: str = conf.get('logging', 'LOG_FILENAME_TEMPLATE')

PROCESSOR_FILENAME_TEMPLATE: str = conf.get('logging', 'LOG_PROCESSOR_FILENAME_TEMPLATE')

DEFAULT_LOGGING_CONFIG: Dict[str, Any] = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'airflow': {'format': LOG_FORMAT},
        'airflow_coloured': {
            'format': COLORED_LOG_FORMAT if COLORED_LOG else LOG_FORMAT,
            'class': COLORED_FORMATTER_CLASS if COLORED_LOG else 'logging.Formatter',
        },
    },
    'handlers': {
        'console': {
            'class': 'airflow.utils.log.logging_mixin.RedirectStdHandler',
            'formatter': 'airflow_coloured',
            'stream': 'sys.stdout',
        },
        'task': {
            'class': 'airflow.utils.log.file_task_handler.FileTaskHandler',
            'formatter': 'airflow',
            'base_log_folder': os.path.expanduser(BASE_LOG_FOLDER),
            'filename_template': FILENAME_TEMPLATE,
        },
        'processor': {
            'class': 'airflow.utils.log.file_processor_handler.FileProcessorHandler',
            'formatter': 'airflow',
            'base_log_folder': os.path.expanduser(PROCESSOR_LOG_FOLDER),
            'filename_template': PROCESSOR_FILENAME_TEMPLATE,
        },
    },
    'loggers': {
        'airflow.processor': {
            'handlers': ['processor'],
            'level': LOG_LEVEL,
            'propagate': False,
        },
        'airflow.task': {
            'handlers': ['task'],
            'level': LOG_LEVEL,
            'propagate': False,
        },
        'flask_appbuilder': {
            'handler': ['console'],
            'level': FAB_LOG_LEVEL,
            'propagate': True,
        },
    },
    'root': {
        'handlers': ['console'],
        'level': LOG_LEVEL,
    },
}

EXTRA_LOGGER_NAMES: str = conf.get('logging', 'EXTRA_LOGGER_NAMES', fallback=None)
if EXTRA_LOGGER_NAMES:
    new_loggers = {
        logger_name.strip(): {
            'handler': ['console'],
            'level': LOG_LEVEL,
            'propagate': True,
        }
        for logger_name in EXTRA_LOGGER_NAMES.split(",")
    }
    DEFAULT_LOGGING_CONFIG['loggers'].update(new_loggers)

DEFAULT_DAG_PARSING_LOGGING_CONFIG: Dict[str, Dict[str, Dict[str, Any]]] = {
    'handlers': {
        'processor_manager': {
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'airflow',
            'filename': DAG_PROCESSOR_MANAGER_LOG_LOCATION,
            'mode': 'a',
            'maxBytes': 104857600,  # 100MB
            'backupCount': 5,
        }
    },
    'loggers': {
        'airflow.processor_manager': {
            'handlers': ['processor_manager'],
            'level': LOG_LEVEL,
            'propagate': False,
        }
    },
}

# Only update the handlers and loggers when CONFIG_PROCESSOR_MANAGER_LOGGER is set.
# This is to avoid exceptions when initializing RotatingFileHandler multiple times
# in multiple processes.
if os.environ.get('CONFIG_PROCESSOR_MANAGER_LOGGER') == 'True':
    DEFAULT_LOGGING_CONFIG['handlers'].update(DEFAULT_DAG_PARSING_LOGGING_CONFIG['handlers'])
    DEFAULT_LOGGING_CONFIG['loggers'].update(DEFAULT_DAG_PARSING_LOGGING_CONFIG['loggers'])

    # Manually create log directory for processor_manager handler as RotatingFileHandler
    # will only create file but not the directory.
    processor_manager_handler_config: Dict[str, Any] = DEFAULT_DAG_PARSING_LOGGING_CONFIG['handlers'][
        'processor_manager'
    ]
    directory: str = os.path.dirname(processor_manager_handler_config['filename'])
    Path(directory).mkdir(parents=True, exist_ok=True, mode=0o755)

##################
# Remote logging #
##################

REMOTE_LOGGING: bool = conf.getboolean('logging', 'remote_logging')

if REMOTE_LOGGING:

    ELASTICSEARCH_HOST: str = conf.get('elasticsearch', 'HOST')

    # Storage bucket URL for remote logging
    # S3 buckets should start with "s3://"
    # Cloudwatch log groups should start with "cloudwatch://"
    # GCS buckets should start with "gs://"
    # WASB buckets should start with "wasb"
    # just to help Airflow select correct handler
    REMOTE_BASE_LOG_FOLDER: str = conf.get('logging', 'REMOTE_BASE_LOG_FOLDER')

    if REMOTE_BASE_LOG_FOLDER.startswith('s3://'):
        S3_REMOTE_HANDLERS: Dict[str, Dict[str, str]] = {
            'task': {
                'class': 'airflow.providers.amazon.aws.log.s3_task_handler.S3TaskHandler',
                'formatter': 'airflow',
                'base_log_folder': str(os.path.expanduser(BASE_LOG_FOLDER)),
                's3_log_folder': REMOTE_BASE_LOG_FOLDER,
                'filename_template': FILENAME_TEMPLATE,
            },
        }

        DEFAULT_LOGGING_CONFIG['handlers'].update(S3_REMOTE_HANDLERS)
    elif REMOTE_BASE_LOG_FOLDER.startswith('cloudwatch://'):
        CLOUDWATCH_REMOTE_HANDLERS: Dict[str, Dict[str, str]] = {
            'task': {
                'class': 'airflow.providers.amazon.aws.log.cloudwatch_task_handler.CloudwatchTaskHandler',
                'formatter': 'airflow',
                'base_log_folder': str(os.path.expanduser(BASE_LOG_FOLDER)),
                'log_group_arn': urlparse(REMOTE_BASE_LOG_FOLDER).netloc,
                'filename_template': FILENAME_TEMPLATE,
            },
        }

        DEFAULT_LOGGING_CONFIG['handlers'].update(CLOUDWATCH_REMOTE_HANDLERS)
    elif REMOTE_BASE_LOG_FOLDER.startswith('gs://'):
        key_path = conf.get('logging', 'GOOGLE_KEY_PATH', fallback=None)
        GCS_REMOTE_HANDLERS: Dict[str, Dict[str, str]] = {
            'task': {
                'class': 'airflow.providers.google.cloud.log.gcs_task_handler.GCSTaskHandler',
                'formatter': 'airflow',
                'base_log_folder': str(os.path.expanduser(BASE_LOG_FOLDER)),
                'gcs_log_folder': REMOTE_BASE_LOG_FOLDER,
                'filename_template': FILENAME_TEMPLATE,
                'gcp_key_path': key_path,
            },
        }

        DEFAULT_LOGGING_CONFIG['handlers'].update(GCS_REMOTE_HANDLERS)
    elif REMOTE_BASE_LOG_FOLDER.startswith('wasb'):
        WASB_REMOTE_HANDLERS: Dict[str, Dict[str, Union[str, bool]]] = {
            'task': {
                'class': 'airflow.providers.microsoft.azure.log.wasb_task_handler.WasbTaskHandler',
                'formatter': 'airflow',
                'base_log_folder': str(os.path.expanduser(BASE_LOG_FOLDER)),
                'wasb_log_folder': REMOTE_BASE_LOG_FOLDER,
                'wasb_container': 'airflow-logs',
                'filename_template': FILENAME_TEMPLATE,
                'delete_local_copy': False,
            },
        }

        DEFAULT_LOGGING_CONFIG['handlers'].update(WASB_REMOTE_HANDLERS)
    elif REMOTE_BASE_LOG_FOLDER.startswith('stackdriver://'):
        key_path = conf.get('logging', 'GOOGLE_KEY_PATH', fallback=None)
        # stackdriver:///airflow-tasks => airflow-tasks
        log_name = urlparse(REMOTE_BASE_LOG_FOLDER).path[1:]
        STACKDRIVER_REMOTE_HANDLERS = {
            'task': {
                'class': 'airflow.providers.google.cloud.log.stackdriver_task_handler.StackdriverTaskHandler',
                'formatter': 'airflow',
                'name': log_name,
                'gcp_key_path': key_path,
            }
        }

        DEFAULT_LOGGING_CONFIG['handlers'].update(STACKDRIVER_REMOTE_HANDLERS)
    elif ELASTICSEARCH_HOST:
        ELASTICSEARCH_LOG_ID_TEMPLATE: str = conf.get('elasticsearch', 'LOG_ID_TEMPLATE')
        ELASTICSEARCH_END_OF_LOG_MARK: str = conf.get('elasticsearch', 'END_OF_LOG_MARK')
        ELASTICSEARCH_FRONTEND: str = conf.get('elasticsearch', 'frontend')
        ELASTICSEARCH_WRITE_STDOUT: bool = conf.getboolean('elasticsearch', 'WRITE_STDOUT')
        ELASTICSEARCH_JSON_FORMAT: bool = conf.getboolean('elasticsearch', 'JSON_FORMAT')
        ELASTICSEARCH_JSON_FIELDS: str = conf.get('elasticsearch', 'JSON_FIELDS')

        ELASTIC_REMOTE_HANDLERS: Dict[str, Dict[str, Union[str, bool]]] = {
            'task': {
                'class': 'airflow.providers.elasticsearch.log.es_task_handler.ElasticsearchTaskHandler',
                'formatter': 'airflow',
                'base_log_folder': str(os.path.expanduser(BASE_LOG_FOLDER)),
                'log_id_template': ELASTICSEARCH_LOG_ID_TEMPLATE,
                'filename_template': FILENAME_TEMPLATE,
                'end_of_log_mark': ELASTICSEARCH_END_OF_LOG_MARK,
                'host': ELASTICSEARCH_HOST,
                'frontend': ELASTICSEARCH_FRONTEND,
                'write_stdout': ELASTICSEARCH_WRITE_STDOUT,
                'json_format': ELASTICSEARCH_JSON_FORMAT,
                'json_fields': ELASTICSEARCH_JSON_FIELDS,
            },
        }

        DEFAULT_LOGGING_CONFIG['handlers'].update(ELASTIC_REMOTE_HANDLERS)
    else:
        raise AirflowException(
            "Incorrect remote log configuration. Please check the configuration of option 'host' in "
            "section 'elasticsearch' if you are using Elasticsearch. In the other case, "
            "'remote_base_log_folder' option in 'core' section."
        )
PS 3: The modified version of the airflow.cfg file, with the [logging] section added, is:
[core]
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository
# This path must be absolute
dags_folder = /usr/local/airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /usr/local/airflow/logs
# Airflow can store logs remotely in AWS S3, Google Cloud Storage or Elastic Search.
# Users must supply an Airflow connection id that provides access to the storage
# location. If remote_logging is set to true, see UPDATING.md for additional
# configuration requirements.
remote_logging = False
remote_log_conn_id =
remote_base_log_folder =
encrypt_s3_logs = False
# Logging level
logging_level = INFO
fab_logging_level = WARN
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
# logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
logging_config_class =
# Log format
# Colour the logs when the controlling terminal is a TTY.
colored_console_log = True
colored_log_format = [%%(blue)s%%(asctime)s%%(reset)s] {%%(blue)s%%(filename)s:%%(reset)s%%(lineno)d} %%(log_color)s%%(levelname)s%%(reset)s - %%(log_color)s%%(message)s%%(reset)s
colored_formatter_class = airflow.utils.log.colored_log.CustomTTYColoredFormatter
#log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
log_format = [%%(asctime)s] [ %%(process)s - %%(name)s ] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# Log filename format
log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log
log_processor_filename_template = {{ filename }}.log
dag_processor_manager_log_location = /usr/local/airflow/logs/dag_processor_manager/dag_processor_manager.log
# Hostname by providing a path to a callable, which will resolve the hostname
# The format is "package:function". For example,
# default value "socket:getfqdn" means that result from getfqdn() of "socket" package will be used as hostname
# No argument should be required in the function specified.
# If using IP address as hostname is preferred, use value "airflow.utils.net:get_host_ip_address"
hostname_callable = socket:getfqdn
# Default timezone in case supplied date times are naive
# can be utc (default), system, or any IANA timezone string (e.g. Europe/Amsterdam)
default_timezone = utc
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor, KubernetesExecutor
executor = CeleryExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@postgres:5432/airflow
# The encoding for the databases
sql_engine_encoding = utf-8
# If SqlAlchemy should pool database connections.
sql_alchemy_pool_enabled = True
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool. 0 indicates no limit.
sql_alchemy_pool_size = 5
# The maximum overflow size of the pool.
# When the number of checked-out connections reaches the size set in pool_size,
# additional connections will be returned up to this limit.
# When those additional connections are returned to the pool, they are disconnected and discarded.
# It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow,
# and the total number of "sleeping" connections the pool will allow is pool_size.
# max_overflow can be set to -1 to indicate no overflow limit;
# no limit will be placed on the total number of concurrent connections. Defaults to 10.
sql_alchemy_max_overflow = 10
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite. If the number of DB connections is ever exceeded,
# a lower config value will allow the system to recover faster.
sql_alchemy_pool_recycle = 1800
# How many seconds to retry re-establishing a DB connection after
# disconnects. Setting this to 0 disables retries.
sql_alchemy_reconnect_timeout = 300
# The schema to use for the metadata database
# SqlAlchemy supports databases with the concept of multiple schemas.
sql_alchemy_schema =
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 4
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 4
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 1
# Whether to load the examples that ship with Airflow. It's good to
# get started, but you probably want to set this to False in a production
# environment
load_examples = False
# Where your Airflow plugins are stored
plugins_folder = /usr/local/airflow/plugins
# Secret key to save connection passwords in the db
fernet_key = l-OhyQHu1gNyu7rFmr1amZZfsp2qhpnfp8GwuR-zyw8=
# Whether to disable pickling dags
donot_pickle = False
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 30
# The class to use for running task instances in a subprocess
task_runner = StandardTaskRunner
# If set, tasks without a `run_as_user` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
# What security module to use (for example kerberos):
security =
# If set to False enables some unsecure features like Charts and Ad Hoc Queries.
# In 2.0 will default to True.
secure_mode = True
# Turn unit test mode on (overwrites many configuration options with test
# values at runtime)
unit_test_mode = False
# Name of handler to read task instance logs.
# Default to use task handler.
task_log_reader = task
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
# Whether to override params with dag_run.conf. If you pass some key-value pairs through `airflow backfill -c` or
# `airflow trigger_dag -c`, the key-value pairs will override the existing ones in params.
dag_run_conf_overrides_params = False
# Worker initialisation check to validate Metadata Database connection
worker_precheck = False
# When discovering DAGs, ignore any files that don't contain the strings `DAG` and `airflow`.
dag_discovery_safe_mode = True

[logging]
logging_config_class = log_config.DEFAULT_LOGGING_CONFIG

[cli]
# In what way should the cli access the API. The LocalClient will use the
# database directly, while the json_client will use the api running on the
# webserver
api_client = airflow.api.client.local_client
# If you set web_server_url_prefix, do NOT forget to append it here, ex:
# endpoint_url = http://localhost:8080/myroot
# So api will look like: http://localhost:8080/myroot/api/experimental/...
endpoint_url = http://localhost:8080

[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default

[lineage]
# what lineage backend to use
backend =

[atlas]
sasl_enabled = False
host =
port = 21000
username =
password =

[operators]
# The default owner assigned to each new operator, unless
# provided explicitly or passed via `default_args`
default_owner = airflow
default_cpus = 1
default_ram = 512
default_disk = 512
default_gpus = 0

[hive]
# Default mapreduce queue for HiveOperator tasks
default_hive_mapred_queue =

[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
base_url = http://localhost:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the webserver waits before killing gunicorn master that doesn't respond
web_server_master_timeout = 120
# Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = temporary_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
error_logfile = -
# Expose the configuration file in the web server
# This is only applicable for the flask-admin based web UI (non FAB-based).
# In the FAB-based web UI with RBAC feature,
# access to configuration is controlled by role permissions.
expose_config = False
# Set to true to turn on authentication:
# https://airflow.apache.org/security.html#web-authentication
authenticate = False
# Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
# Filtering mode. Choices include user (default) and ldapgroup.
# Ldap group filtering requires using the ldap backend
#
# Note that the ldap server needs the "memberOf" overlay to be set up
# in order to user the ldapgroup mode.
owner_mode = user
# Default DAG view. Valid values are:
# tree, graph, duration, gantt, landing_times
dag_default_view = tree
# Default DAG orientation. Valid values are:
# LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
dag_orientation = LR
# Puts the webserver in demonstration mode; blurs the names of Operators for
# privacy.
demo_mode = False
# The amount of time (in secs) webserver will wait for initial handshake
# while fetching logs from other worker machine
log_fetch_timeout_sec = 5
# By default, the webserver shows paused DAGs. Flip this to hide paused
# DAGs by default
hide_paused_dags_by_default = False
# Consistent page size across all listing views in the UI
page_size = 100
# Use FAB-based webserver with RBAC feature
rbac = False
# Define the color of navigation bar
navbar_color = #007A87
# Default dagrun to show in UI
default_dag_run_display_number = 25
# Enable werkzeug `ProxyFix` middleware
enable_proxy_fix = False
# Set secure flag on session cookie
cookie_secure = False
# Set samesite policy on session cookie
cookie_samesite =
# Default setting for wrap toggle on DAG code and TI log views.
default_wrap = False
# Send anonymous user activity to your analytics tool
# analytics_tool = # choose from google_analytics, segment, or metarouter
# analytics_id = XXXXXXXXXXX

[email]
email_backend = airflow.utils.email.send_email_smtp

[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow
# smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com

[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
worker_concurrency = 4
# The maximum and minimum concurrency that will be used when starting workers with the
# "airflow worker" command (always keep minimum processes, but grow to maximum if necessary).
# Note the value should be "max_concurrency,min_concurrency"
# Pick these numbers based on resources on worker box and the nature of the task.
# If autoscale option is available, worker_concurrency will be ignored.
# http://docs.celeryproject.org/en/latest/reference/celery.bin.worker.html#cmdoption-celery-worker-autoscale
# worker_autoscale = 16,12
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#broker-settings
broker_url = redis://:redispass@redis:6379/1
# The Celery result_backend. When a job finishes, it needs to update the
# metadata of the job. Therefore it will post a message on a message bus,
# or insert it into a database (depending of the backend)
# This status is used by the scheduler to update the state of the task
# The use of a database is highly recommended
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-result-backend-settings
result_backend = db+postgresql://airflow:airflow@postgres:5432/airflow
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# The root URL for Flower
# Ex: flower_url_prefix = /flower
flower_url_prefix =
# This defines the port that Celery Flower runs on
flower_port = 5555
# Securing Flower with Basic Authentication
# Accepts user:password pairs separated by a comma
# Example: flower_basic_auth = user1:password1,user2:password2
flower_basic_auth =
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
# How many processes CeleryExecutor uses to sync task state.
# 0 means to use max(1, number of cores - 1) processes.
sync_parallelism = 0
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
# In case of using SSL
ssl_active = False
ssl_key =
ssl_cert =
ssl_cacert =
# Celery Pool implementation.
# Choices include: prefork (default), eventlet, gevent or solo.
# See:
# https://docs.celeryproject.org/en/latest/userguide/workers.html#concurrency
# https://docs.celeryproject.org/en/latest/userguide/concurrency/eventlet.html
pool = prefork

[celery_broker_transport_options]
# This section is for specifying options which can be passed to the
# underlying celery broker transport. See:
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-broker_transport_options
# The visibility timeout defines the number of seconds to wait for the worker
# to acknowledge the task before the message is redelivered to another worker.
# Make sure to increase the visibility timeout to match the time of the longest
# ETA you're planning to use.
#
# visibility_timeout is only supported for Redis and SQS celery brokers.
# See:
# http://docs.celeryproject.org/en/master/userguide/configuration.html#std:setting-broker_transport_options
#
#visibility_timeout = 21600

[dask]
# This section only applies if you are using the DaskExecutor in
# [core] section above
# The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
# TLS/ SSL settings to access a secured Dask scheduler.
tls_ca =
tls_cert =
tls_key =

[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 5
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 5
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time (seconds) a new DAGs should be picked up from the filesystem
min_file_process_interval = 0
# How often (in seconds) to scan the DAGs directory for new files. Default to 5 minutes.
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
# If the last scheduler heartbeat happened more than scheduler_health_check_threshold ago (in seconds),
# scheduler is considered unhealthy.
# This is used by the health check in the "/health" endpoint
scheduler_health_check_threshold = 30
child_process_log_directory = /usr/local/airflow/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = True
# This changes the batch size of queries in the scheduling main loop.
# If this is too high, SQL query performance may be impacted by one
# or more of the following:
# - reversion to full table scan
# - complexity of query predicate
# - excessive locking
#
# Additionally, you may hit the maximum allowable query length for your db.
#
# Set this to 0 for no limit (not advised)
max_tis_per_query = 512
# Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
max_threads = 2
authenticate = False
# Turn off scheduler use of cron intervals by setting this to False.
# DAGs submitted manually in the web UI or with trigger_dag will still run.
use_job_schedule = True

[ldap]
# set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
# This setting allows the use of LDAP servers that either return a
# broken schema, or do not return a schema.
ignore_malformed_schema = False

[mesos]
# Mesos master address which MesosExecutor will connect to.
master = localhost:5050
# The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
# Number of cpu cores required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_cpu = 1
# Memory in MB required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_memory = 256
# Enable framework checkpointing for mesos
# See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
# Failover timeout in milliseconds.
# When checkpointing is enabled and this option is set, Mesos waits
# until the configured timeout for
# the MesosExecutor framework to re-register after a failover. Mesos
# shuts down running tasks if the
# MesosExecutor framework fails to re-register within this timeframe.
# failover_timeout = 604800
# Enable framework authentication for mesos
# See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
# Mesos credentials, if authentication is enabled
# default_principal = admin
# default_secret = admin
# Optional Docker Image to run on slave before running the command
# This image should be accessible from mesos slave i.e mesos slave
# should be able to pull this docker image before executing the command.
# docker_image_slave = puckel/docker-airflow

[kerberos]
ccache = /tmp/airflow_krb5_ccache
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab

[github_enterprise]
api_rev = v3

[admin]
# UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True

[elasticsearch]
# Elasticsearch host
host =
# Format of the log_id, which is used to query for a given tasks logs
log_id_template = {dag_id}-{task_id}-{execution_date}-{try_number}
# Used to mark the end of a log stream for a task
end_of_log_mark = end_of_log
# Qualified URL for an elasticsearch frontend (like Kibana) with a template argument for log_id
# Code will construct log_id using the log_id template from the argument above.
# NOTE: The code will prefix the https:// automatically, don't include that here.
frontend =
# Write the task logs to the stdout of the worker, rather than the default files
write_stdout = False
# Instead of the default log formatter, write the log lines as JSON
json_format = False
# Log fields to also attach to the json output, if enabled
json_fields = asctime, filename, lineno, levelname, message

[elasticsearch_configs]
use_ssl = False
verify_certs = True

[kubernetes]
# The repository, tag and imagePullPolicy of the Kubernetes Image for the Worker to Run
worker_container_repository =
worker_container_tag =
worker_container_image_pull_policy = IfNotPresent
# If True (default), worker pods will be deleted upon termination
delete_worker_pods = True
# Number of Kubernetes Worker Pod creation calls per scheduler loop
worker_pods_creation_batch_size = 1
# The Kubernetes namespace where airflow workers should be created. Defaults to `default`
namespace = default
# The name of the Kubernetes ConfigMap Containing the Airflow Configuration (this file)
airflow_configmap =
# For docker image already contains DAGs, this is set to `True`, and the worker will search for dags in dags_folder,
# otherwise use git sync or dags volume claim to mount DAGs
dags_in_image = False
# For either git sync or volume mounted DAGs, the worker will look in this subpath for DAGs
dags_volume_subpath =
# For DAGs mounted via a volume claim (mutually exclusive with git-sync and host path)
dags_volume_claim =
# For volume mounted logs, the worker will look in this subpath for logs
logs_volume_subpath =
# A shared volume claim for the logs
logs_volume_claim =
# For DAGs mounted via a hostPath volume (mutually exclusive with volume claim and git-sync)
# Useful in local environment, discouraged in production
dags_volume_host =
# A hostPath volume for the logs
# Useful in local environment, discouraged in production
logs_volume_host =
# A list of configMapsRefs to envFrom. If more than one configMap is
# specified, provide a comma separated list: configmap_a,configmap_b
env_from_configmap_ref =
# A list of secretRefs to envFrom. If more than one secret is
# specified, provide a comma separated list: secret_a,secret_b
env_from_secret_ref =
# Git credentials and repository for DAGs mounted via Git (mutually exclusive with volume claim)
git_repo =
git_branch =
git_subpath =
# Use git_user and git_password for user authentication or git_ssh_key_secret_name and git_ssh_key_secret_key
# for SSH authentication
git_user =
git_password =
git_sync_root = /git
git_sync_dest = repo
# Mount point of the volume if git-sync is being used.
# i.e. /usr/local/airflow/dags
git_dags_folder_mount_point =
# To get Git-sync SSH authentication set up follow this format
#
# airflow-secrets.yaml:
# ---
# apiVersion: v1
# kind: Secret
# metadata:
#   name: airflow-secrets
# data:
#   # key needs to be gitSshKey
#   gitSshKey: <base64_encoded_data>
# ---
# airflow-configmap.yaml:
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: airflow-configmap
# data:
#   known_hosts: |
#       github.com ssh-rsa <...>
#   airflow.cfg: |
#       ...
#
# git_ssh_key_secret_name = airflow-secrets
# git_ssh_known_hosts_configmap_name = airflow-configmap
git_ssh_key_secret_name =
git_ssh_known_hosts_configmap_name =
# To give the git_sync init container credentials via a secret, create a secret
# with two fields: GIT_SYNC_USERNAME and GIT_SYNC_PASSWORD (example below) and
# add `git_sync_credentials_secret = <secret_name>` to your airflow config under the kubernetes section
#
# Secret Example:
# apiVersion: v1
# kind: Secret
# metadata:
#   name: git-credentials
# data:
#   GIT_SYNC_USERNAME: <base64_encoded_git_username>
#   GIT_SYNC_PASSWORD: <base64_encoded_git_password>
git_sync_credentials_secret =
# For cloning DAGs from git repositories into volumes: https://github.com/kubernetes/git-sync
git_sync_container_repository = k8s.gcr.io/git-sync
git_sync_container_tag = v3.1.1
git_sync_init_container_name = git-sync-clone
git_sync_run_as_user = 65533
# The name of the Kubernetes service account to be associated with airflow workers, if any.
# Service accounts are required for workers that require access to secrets or cluster resources.
# See the Kubernetes RBAC documentation for more:
# https://kubernetes.io/docs/admin/authorization/rbac/
worker_service_account_name =
# Any image pull secrets to be given to worker pods, If more than one secret is
# required, provide a comma separated list: secret_a,secret_b
image_pull_secrets =
# GCP Service Account Keys to be provided to tasks run on Kubernetes Executors
# Should be supplied in the format: key-name-1:key-path-1,key-name-2:key-path-2
gcp_service_account_keys =
# Use the service account kubernetes gives to pods to connect to kubernetes cluster.
# It's intended for clients that expect to be running inside a pod running on kubernetes.
# It will raise an exception if called from a process not running in a kubernetes environment.
in_cluster = True
# When running with in_cluster=False change the default cluster_context or config_file
# options to Kubernetes client. Leave blank these to use default behaviour like `kubectl` has.
# cluster_context =
# config_file =
# Affinity configuration as a single line formatted JSON object.
# See the affinity model for top-level key names (e.g. `nodeAffinity`, etc.):
# https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#affinity-v1-core
affinity =
# A list of toleration objects as a single line formatted JSON array
# See:
# https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#toleration-v1-core
tolerations =
# **kwargs parameters to pass while calling a kubernetes client core_v1_api methods from Kubernetes Executor
# provided as a single line formatted JSON dictionary string.
# List of supported params in **kwargs are similar for all core_v1_apis, hence a single config variable for all apis
# See:
# https://raw.githubusercontent.com/kubernetes-client/python/master/kubernetes/client/apis/core_v1_api.py
# Note that if no _request_timeout is specified, the kubernetes client will wait indefinitely for kubernetes
# api responses, which will cause the scheduler to hang. The timeout is specified as [connect timeout, read timeout]
kube_client_request_args = {"_request_timeout" : [60,60] }
# Worker pods security context options
# See:
# https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
# Specifies the uid to run the first process of the worker pods containers as
run_as_user =
# Specifies a gid to associate with all containers in the worker pods
# if using a git_ssh_key_secret_name use an fs_group
# that allows for the key to be read, e.g. 65533
fs_group =

[kubernetes_node_selectors]
# The Key-value pairs to be given to worker pods.
# The worker pods will be scheduled to the nodes of the specified key-value pairs.
# Should be supplied in the format: key = value

[kubernetes_annotations]
# The Key-value annotations pairs to be given to worker pods.
# Should be supplied in the format: key = value

[kubernetes_environment_variables]
# The scheduler sets the following environment variables into your workers. You may define as
# many environment variables as needed and the kubernetes launcher will set them in the launched workers.
# Environment variables in this section are defined as follows
# <environment_variable_key> = <environment_variable_value>
#
# For example if you wanted to set an environment variable with value `prod` and key
# `ENVIRONMENT` you would follow the following format:
# ENVIRONMENT = prod
#
# Additionally you may override worker airflow settings with the AIRFLOW__<SECTION>__<KEY>
# formatting as supported by airflow normally.

[kubernetes_secrets]
# The scheduler mounts the following secrets into your workers as they are launched by the
# scheduler. You may define as many secrets as needed and the kubernetes launcher will parse the
# defined secrets and mount them as secret environment variables in the launched workers.
# Secrets in this section are defined as follows
# <environment_variable_mount> = <kubernetes_secret_object>=<kubernetes_secret_key>
#
# For example if you wanted to mount a kubernetes secret key named `postgres_password` from the
# kubernetes secret object `airflow-secret` as the environment variable `POSTGRES_PASSWORD` into
# your workers you would follow the following format:
# POSTGRES_PASSWORD = airflow-secret=postgres_credentials
#
# Additionally you may override worker airflow settings with the AIRFLOW__<SECTION>__<KEY>
# formatting as supported by airflow normally.

[kubernetes_labels]
# The Key-value pairs to be given to worker pods.
# The worker pods will be given these static labels, as well as some additional dynamic labels
# to identify the task.
# Should be supplied in the format: key = value