PUBLIC INTERFACE FOR AIRFLOW 3.0+¶
Warning
This documentation covers the Public Interface for Airflow 3.0+.
If you are using Airflow 2.x, please refer to the Airflow 2.11 Public Interface Documentation for the legacy interface.
Public Interface of Airflow¶
The Public Interface of Apache Airflow is the collection of interfaces and behaviors in Apache Airflow whose changes are governed by semantic versioning. A user interacts with Airflow’s public interface by creating and managing dags, managing tasks and dependencies, and extending Airflow capabilities by writing new executors, plugins, operators and providers. The Public Interface can be useful for building custom tools and integrations with other systems, and for automating certain aspects of the Airflow workflow.
The Airflow Task SDK, exposed through the airflow.sdk namespace, is the primary public interface for DAG Authors and for task execution. Direct access to the metadata database from task code is no longer allowed. Instead, use the Stable REST API, the Python Client, or Task Context methods.
For comprehensive Task SDK documentation, see the Task SDK Reference.
Using Airflow Public Interfaces¶
Note
As of Airflow 3.0, users should use the airflow.sdk
namespace as the official Public Interface, as defined in AIP-72.
Direct interaction with internal modules or the metadata database is not possible. For stable, production-safe integration, it is recommended to use:
The official REST API
The Python Client SDK (airflow-client-python)
The new Task SDK (airflow.sdk)
Related docs: Release Notes 3.0, Task SDK Overview
The following are some examples of the public interface of Airflow:
When you are writing your own operators or hooks. This is commonly done when no hook or operator exists for your use case, or when one exists but you need to customize its behavior.
When writing new Plugins that extend Airflow’s functionality beyond DAG building blocks. Secrets, Timetables, Triggers, Listeners are all examples of such functionality. This is usually done by users who manage Airflow instances.
Bundling custom Operators, Hooks, Plugins and releasing them together via providers - this is usually done by those who intend to provide a reusable set of functionality for external services or applications Airflow integrates with.
Using the TaskFlow API to write tasks
Relying on the consistent behavior of Airflow objects
One aspect of the “public interface” is extending or using Airflow Python classes and functions. The classes and functions mentioned below can be relied on to maintain backwards-compatible signatures and behaviours within a MAJOR version of Airflow. On the other hand, classes and methods starting with _ (also known as protected Python methods) and __ (also known as private Python methods) are not part of the Public Airflow Interface and might change at any time.
You can also use Airflow’s Public Interface via the Stable REST API (based on the OpenAPI specification). For specific needs you can also use the Airflow Command Line Interface (CLI), though its behaviour might change in details (such as output format and available flags). If you want to rely on Airflow programmatically, the Stable REST API is recommended.
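For example, a minimal sketch of listing dags via the Stable REST API; the base URL, API version path, and token acquisition here are assumptions that depend on your deployment and auth manager:

import requests

# Hypothetical deployment details; obtain a token as your auth manager requires
BASE_URL = "http://localhost:8080/api/v2"
TOKEN = "<access token>"

response = requests.get(
    f"{BASE_URL}/dags",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
for dag_info in response.json()["dags"]:
    print(dag_info["dag_id"])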
Dags¶
The DAG is Airflow’s core entity that represents a recurring workflow. You can create a DAG by instantiating the DAG class in your DAG file. Dags can also have parameters specified via the Param class. The recommended way to create DAGs is using the dag() decorator from the airflow.sdk namespace.
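For illustration, a minimal sketch of a DAG file using the dag() decorator with a Param (the dag id and param name are hypothetical):

from datetime import datetime

from airflow.sdk import Param, dag


@dag(
    dag_id="example_greeting",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    params={"greeting": Param("hello", type="string")},
)
def example_greeting():
    # Define tasks here, e.g. with the @task decorator
    ...


example_greeting()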
Airflow has a set of example dags that you can use to learn how to write dags.
You can read more about dags in Dags.
References for the modules used in dags are here:
Note
The airflow.sdk namespace provides the primary interface for DAG Authors. For detailed API documentation, see the Task SDK Reference.
Note
The DagBag class is used internally by Airflow for loading DAGs from files and folders. DAG Authors should use the DAG class from the airflow.sdk namespace instead.
Note
The DagRun class is used internally by Airflow for DAG run management. DAG Authors should access DAG run information through the Task Context via get_current_context() or use the DagRunProtocol interface.
Operators¶
The base classes BaseOperator and BaseSensorOperator are public and may be extended to make new operators. The base class for new operators is BaseOperator from the airflow.sdk namespace.
Subclasses of BaseOperator which are published in Apache Airflow are public in behavior but not in structure. That is to say, the Operator’s parameters and behavior are governed by semver, but the methods are subject to change at any time.
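As an illustration, a minimal sketch of a custom operator (the class name and parameter are hypothetical):

from airflow.sdk import BaseOperator


class HelloOperator(BaseOperator):
    def __init__(self, name: str, **kwargs):
        super().__init__(**kwargs)
        self.name = name

    def execute(self, context):
        # execute() receives the Task Context; its return value is pushed to XCom
        message = f"Hello, {self.name}!"
        print(message)
        return message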
Task Instances¶
Task instances are the individual runs of a single task in a DAG (in a DAG Run). Task instances are accessed through the Task Context via get_current_context(). Direct database access is not possible.
Note
Task Context is part of the airflow.sdk namespace. For detailed API documentation, see the Task SDK Reference.
Task Instance Keys¶
Task instance keys are unique identifiers of task instances in a DAG (in a DAG Run). A key is a tuple that consists of dag_id, task_id, run_id, try_number, and map_index.
Direct access to task instance keys via the TaskInstance model is no longer allowed from task code. Instead, use the Task Context via get_current_context() to access task instance information.
Example of accessing task instance information through Task Context:
from airflow.sdk import get_current_context


def my_task():
    context = get_current_context()
    ti = context["ti"]
    dag_id = ti.dag_id
    task_id = ti.task_id
    run_id = ti.run_id
    try_number = ti.try_number
    map_index = ti.map_index
    print(f"Task: {dag_id}.{task_id}, Run: {run_id}, Try: {try_number}, Map Index: {map_index}")
Note
The TaskInstanceKey class is used internally by Airflow for identifying task instances. DAG Authors should access task instance information through the Task Context via get_current_context() instead.
Hooks¶
Hooks are interfaces to external platforms and databases, implementing a common interface when possible and acting as building blocks for operators. All hooks are derived from BaseHook.
Airflow has a set of Hooks that are considered public. You are free to extend their functionality by extending them:
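As an illustration, a minimal sketch of a custom hook (the service, class name, and connection id are hypothetical):

from airflow.sdk import BaseHook


class MyServiceHook(BaseHook):
    def __init__(self, my_conn_id: str = "my_service_default"):
        super().__init__()
        self.my_conn_id = my_conn_id

    def get_conn(self):
        # Resolve credentials from the Airflow Connection with this id
        conn = self.get_connection(self.my_conn_id)
        # Build and return a client for the external service (hypothetical)
        return {"host": conn.host, "login": conn.login}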
Public Airflow utilities¶
When writing or extending Hooks and Operators, DAG Authors and developers can use the following classes:
The Connection, which provides access to external service credentials and configuration.
The Variable, which provides access to Airflow configuration variables.
The XCom, which is used to access inter-task communication data.
Connection and Variable operations should be performed through the Task Context using get_current_context() and the task instance’s methods, or through the airflow.sdk namespace. Direct database access to Connection and Variable models is no longer allowed from task code.
Example of accessing Connections and Variables through Task Context:
from airflow.sdk import get_current_context


def my_task():
    context = get_current_context()
    conn = context["conn"]
    my_connection = conn.get("my_connection_id")
    var = context["var"]
    my_variable = var.value.get("my_variable_name")
Example of using airflow.sdk namespace directly:
from airflow.sdk import Connection, Variable
conn = Connection.get("my_connection_id")
var = Variable.get("my_variable_name")
You can read more about the public Airflow utilities in Managing Connections, Variables, XComs. References for the classes used for the utilities are here:
Note
Connection, Variable, and XCom classes are now part of the airflow.sdk namespace. For detailed API documentation, see the Task SDK Reference.
Public Exceptions¶
When writing custom Operators and Hooks, you can handle and raise the public exceptions that Airflow exposes:
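For example, a task can raise AirflowSkipException to mark itself as skipped, or AirflowException to fail with a descriptive error (a minimal sketch; the task and field names are hypothetical):

from airflow.exceptions import AirflowException, AirflowSkipException
from airflow.sdk import task


@task
def validate(payload: dict | None = None):
    if not payload:
        # Mark this task instance as skipped rather than failed
        raise AirflowSkipException("No payload to process; skipping.")
    if "id" not in payload:
        # Fail the task with a descriptive error
        raise AirflowException("Payload is missing the required 'id' field.")
    return payload["id"]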
Public Utility classes¶
Using Public Interface to extend Airflow capabilities¶
Airflow uses the Plugin mechanism to extend the platform’s capabilities. Plugins allow you to extend the Airflow UI, and they are also the way to expose the customizations below (Triggers, Timetables, Listeners, etc.). Providers can also implement plugin endpoints and customize the Airflow UI.
You can read more about plugins in Plugins. You can read how to extend Airflow UI in Customize view of Apache from Airflow web UI. Note that there are some simple customizations of the UI that do not require plugins - you can read more about them in Customizing the UI.
Here are the ways how Plugins can be used to extend Airflow:
Triggers¶
Airflow uses Triggers to implement asyncio-compatible Deferrable Operators. All Triggers derive from BaseTrigger.
Airflow has a set of Triggers that are considered public. You are free to extend their functionality by extending them:
You can read more about Triggers in Deferrable Operators & Triggers.
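As an illustration, a minimal sketch of a custom trigger (the class, module path, and wait logic are hypothetical):

import asyncio

from airflow.triggers.base import BaseTrigger, TriggerEvent


class WaitSecondsTrigger(BaseTrigger):
    def __init__(self, seconds: int):
        super().__init__()
        self.seconds = seconds

    def serialize(self):
        # Triggers must serialize to (classpath, kwargs) so the triggerer can re-create them
        return ("my_package.triggers.WaitSecondsTrigger", {"seconds": self.seconds})

    async def run(self):
        # Run asynchronously and yield an event when the condition is met
        await asyncio.sleep(self.seconds)
        yield TriggerEvent({"status": "done"})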
Timetables¶
Custom timetable implementations provide Airflow’s scheduler additional logic to schedule DAG runs in ways not possible with built-in schedule expressions. All Timetables derive from Timetable.
Airflow has a set of Timetables that are considered public. You are free to extend their functionality by extending them:
You can read more about Timetables in Customizing DAG Scheduling with Timetables.
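As an illustration, a minimal sketch of a custom timetable that schedules one run per day (a simplification of the pattern in the timetable tutorial; the class name is hypothetical, and custom timetables must be registered through a plugin):

from airflow.timetables.base import DagRunInfo, DataInterval, TimeRestriction, Timetable
from pendulum import DateTime


class DailyTimetable(Timetable):
    def infer_manual_data_interval(self, *, run_after: DateTime) -> DataInterval:
        # Manual runs get an exact, zero-length data interval
        return DataInterval.exact(run_after)

    def next_dagrun_info(self, *, last_automated_data_interval, restriction: TimeRestriction):
        if last_automated_data_interval is not None:
            next_start = last_automated_data_interval.end
        elif restriction.earliest is not None:
            next_start = restriction.earliest
        else:
            return None  # no start_date, so never schedule automatically
        if restriction.latest is not None and next_start > restriction.latest:
            return None
        return DagRunInfo.interval(start=next_start, end=next_start.add(days=1))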
Listeners¶
Listeners enable you to respond to DAG/Task lifecycle events. This is implemented via the ListenerManager class, which provides hooks you can implement to react to those events.
Added in version 2.5: Listener public interface has been added in version 2.5.
You can read more about Listeners in Listeners.
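As an illustration, a minimal sketch of a listener module (hook names come from the listener spec; exact signatures can vary between Airflow versions):

from airflow.listeners import hookimpl


@hookimpl
def on_task_instance_success(previous_state, task_instance):
    # Called by the ListenerManager when a task instance succeeds
    print(f"Task {task_instance.task_id} succeeded")

Listener modules are registered via the plugin mechanism described above.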
Extra Links¶
Extra links are dynamic links that can be added to Airflow independently of custom Operators. Normally they are defined by Operators, but plugins allow you to override the links on a global level.
You can read more about the Extra Links in Define an operator extra link.
Using Public Interface to integrate with external services and applications¶
Tasks in Airflow can orchestrate external services via Hooks and Operators. The core functionality of Airflow (such as authentication) can also be extended to leverage external services. You can read more about providers and the core extensions they can provide in Providers.
Executors¶
Executors are the mechanism by which task instances get run. All executors are derived from BaseExecutor. There are several executor implementations built into Airflow, each with its own unique characteristics and capabilities.
The executor interface itself (the BaseExecutor class) is public, but the built-in executors are not (e.g. KubernetesExecutor, LocalExecutor, etc.). This means that, to use KubernetesExecutor as an example, we may make changes to KubernetesExecutor in minor or patch Airflow releases which could break an executor that subclasses KubernetesExecutor. This is necessary to allow Airflow developers sufficient freedom to continue to improve the executors we offer. Accordingly, if you want to modify or extend a built-in executor, you should incorporate the full executor code into your project so that such changes will not break your derivative executor.
You can read more about executors and how to write your own in Executor.
Added in version 2.6: The executor interface has been present in Airflow for quite some time but prior to 2.6, there was executor-specific code elsewhere in the codebase. As of version 2.6 executors are fully decoupled, in the sense that Airflow core no longer needs to know about the behavior of specific executors. You could have succeeded with implementing a custom executor before Airflow 2.6, and a number of people did, but there were some hard-coded behaviours that preferred in-built executors, and custom executors could not provide full functionality that built-in executors had.
Secrets Backends¶
Airflow can be configured to rely on secrets backends to retrieve Connections and Variables. All secrets backends derive from BaseSecretsBackend.
All Secrets Backend implementations are public. You can extend their functionality:
You can read more about Secret Backends in Secrets Backend. You can also find all the available Secrets Backends implemented in community providers in Secret backends.
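As an illustration, a minimal sketch of a custom secrets backend (an in-memory backend, for demonstration only; the class name is hypothetical):

from airflow.secrets import BaseSecretsBackend


class DictSecretsBackend(BaseSecretsBackend):
    def __init__(self, connections=None, variables=None):
        super().__init__()
        self._connections = connections or {}
        self._variables = variables or {}

    def get_conn_value(self, conn_id: str):
        # Return the stored connection representation, or None if unknown
        return self._connections.get(conn_id)

    def get_variable(self, key: str):
        # Return the variable value, or None to fall through to the next backend
        return self._variables.get(key)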
Auth managers¶
Auth managers are responsible for user authentication and user authorization in Airflow. All auth managers are derived from BaseAuthManager.
The auth manager interface itself (the BaseAuthManager class) is public, but the different implementations of auth managers are not (e.g. FabAuthManager).
You can read more about auth managers and how to write your own in Auth manager.
Connections¶
When creating Hooks, you can add custom Connections. You can read more about connections in Connections, including the available connections implemented in the community providers.
Extra Links¶
When creating Hooks, you can add custom Extra Links that are displayed when the tasks are run. You can find out more about extra links in Extra Links, which also shows the available extra links implemented in the community providers.
Logging and Monitoring¶
You can extend the way Airflow writes logs. You can find out more about log writing in Logging & Monitoring. Writing logs also shows the available log writers implemented in the community providers.
Decorators¶
DAG Authors can use decorators to author dags using the TaskFlow concept.
All Decorators derive from TaskDecorator.
The primary decorators for DAG Authors are now in the airflow.sdk namespace: dag(), task(), asset(), setup(), task_group(), teardown(), chain(), chain_linear(), cross_downstream(), get_current_context() and get_parsing_context().
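As an illustration, a minimal sketch of a TaskFlow dag using these decorators (the dag and task names are hypothetical):

from datetime import datetime

from airflow.sdk import dag, task


@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False)
def taskflow_example():
    @task
    def extract() -> list[int]:
        return [1, 2, 3]

    @task
    def total(values: list[int]) -> int:
        # Values flow between TaskFlow tasks via XCom, implicitly
        return sum(values)

    total(extract())


taskflow_example()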
Airflow has a set of Decorators that are considered public. You are free to extend their functionality by extending them:
Note
Decorators are now part of the airflow.sdk namespace. For detailed API documentation, see the Task SDK Reference.
You can read more about creating custom Decorators in Creating Custom @task Decorators.
Email notifications¶
Airflow has a built-in way of sending email notifications, and it can be extended by adding custom email notification classes. You can read more about email notifications in Email Configuration.
Notifications¶
Airflow has a built-in extensible way of sending notifications using the various on_*_callback hooks. You can read more about notifications in Creating a notifier.
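As an illustration, a minimal sketch of a custom notifier (the class name and message are hypothetical; BaseNotifier is also available from airflow.notifications.basenotifier):

from airflow.sdk import BaseNotifier


class MyNotifier(BaseNotifier):
    template_fields = ("message",)

    def __init__(self, message: str):
        super().__init__()
        self.message = message

    def notify(self, context):
        # Send the notification; here we simply print it
        print(f"Notification: {self.message} for task {context['ti'].task_id}")

Such a notifier can then be passed to the callbacks, for example on_failure_callback=MyNotifier(message="task failed").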
Cluster Policies¶
Cluster Policies are the way to dynamically apply cluster-wide policies to the dags being parsed or tasks being executed. You can read more about Cluster Policies in Cluster Policies.
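As an illustration, a minimal sketch of a task policy that rejects tasks without an explicit owner (defined in airflow_local_settings.py; the rule itself is hypothetical):

from airflow.exceptions import AirflowClusterPolicyViolation


def task_policy(task):
    # Reject tasks that keep the default owner
    if task.owner == "airflow":
        raise AirflowClusterPolicyViolation(
            f"Task {task.task_id} must set an explicit owner."
        )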
Lineage¶
Airflow can help track origins of data, what happens to it and where it moves over time. You can read more about lineage in Lineage.
What is not part of the Public Interface of Apache Airflow?¶
Everything not mentioned in this document should be considered as non-Public Interface.
Sometimes in other applications those components could be relied on to keep backwards compatibility, but in Airflow they are not part of the Public Interface and might change at any time:
Database structure is considered to be an internal implementation detail and you should not assume the structure is going to be maintained in a backwards-compatible way.
Web UI is continuously evolving and there are no backwards compatibility guarantees on HTML elements.
Python classes, except those explicitly mentioned in this document, are considered an internal implementation detail and you should not assume they will be maintained in a backwards-compatible way.
Direct metadata database access from task code is no longer allowed. Task code cannot directly access the metadata database to query DAG state, task history, or DAG runs. Instead, use one of the following alternatives:
Task Context: Use get_current_context() to access task instance information and methods like get_dr_count(), get_dagrun_state(), and get_task_states().
REST API: Use the Stable REST API for programmatic access to Airflow metadata.
Python Client: Use the Python Client for Python-based interactions with Airflow.
This change improves architectural separation and enables remote execution capabilities.
Example of using Task Context instead of direct database access:
from datetime import datetime

from airflow.sdk import dag, get_current_context, task


@dag(
    dag_id="example_dag",
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    tags=["misc"],
    catchup=False,
)
def example_dag():
    @task(task_id="check_dagrun_state")
    def check_state():
        context = get_current_context()
        ti = context["ti"]
        dag_run = context["dag_run"]

        # Use Task Context methods instead of direct DB access
        dr_count = ti.get_dr_count(dag_id="example_dag")
        dagrun_state = ti.get_dagrun_state(dag_id="example_dag", run_id=dag_run.run_id)

        return f"DAG run count: {dr_count}, current state: {dagrun_state}"

    check_state()


example_dag()