airflow.triggers.base
Attributes
Classes
- StartTriggerArgs: Arguments required to start task execution from the triggerer.
- BaseTrigger: Base class for all triggers.
- BaseEventTrigger: Base class for triggers used to schedule DAGs based on external events.
- TriggerEvent: Something that a trigger can fire when its conditions are met.
- TaskSuccessEvent: Yield this event in order to end the task successfully.
- TaskFailedEvent: Yield this event in order to end the task with failure.
- TaskSkippedEvent: Yield this event in order to end the task with status 'skipped'.
Functions
Module Contents
- airflow.triggers.base.log[source]
- type airflow.triggers.base.Operator = MappedOperator | SerializedBaseOperator[source]
- class airflow.triggers.base.StartTriggerArgs[source]
Arguments required to start task execution from the triggerer.
- timeout: datetime.timedelta | None = None[source]
- class airflow.triggers.base.BaseTrigger(**kwargs)[source]
Bases: abc.ABC, airflow.sdk.definitions._internal.templater.Templater, airflow.utils.log.logging_mixin.LoggingMixin

Base class for all triggers.
A trigger has two contexts it can exist in:
- Inside an Operator, when it's passed to TaskDeferred
- Actively running in a trigger worker
We use the same class for both situations, and rely on all Trigger classes to be able to return the arguments (possible to encode with Airflow-JSON) that will let them be re-instantiated elsewhere.
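The serialize/re-instantiate contract can be sketched as follows. This is a simplified stand-in, not the Airflow implementation: the class name, the registry dict, and the `moment` kwarg are all illustrative (Airflow resolves the classpath via its import machinery rather than a dict).

```python
# Sketch of the serialize/re-instantiate contract, assuming a
# simplified stand-in for a BaseTrigger subclass (no Airflow import;
# all names here are illustrative).

class DateTimeTrigger:
    """Hypothetical trigger that fires at a fixed moment."""

    def __init__(self, moment: str):
        self.moment = moment

    def serialize(self) -> tuple[str, dict]:
        # Return the classpath plus JSON-encodable kwargs so the
        # triggerer can rebuild an equivalent instance elsewhere.
        return ("example.triggers.DateTimeTrigger", {"moment": self.moment})


# A registry maps classpaths back to classes. Airflow imports the
# classpath dynamically; a dict keeps this sketch self-contained.
REGISTRY = {"example.triggers.DateTimeTrigger": DateTimeTrigger}

classpath, kwargs = DateTimeTrigger("2024-01-01T00:00:00").serialize()
rebuilt = REGISTRY[classpath](**kwargs)
print(rebuilt.moment)  # → 2024-01-01T00:00:00
```

The key property is that the kwargs round-trip through JSON, so the rebuilt instance is equivalent to the original wherever it is re-instantiated.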
- trigger_id = None[source]
- template_fields = ()[source]
- template_ext = ()[source]
- task_id = None[source]
- property task_instance: airflow.models.taskinstance.TaskInstance[source]
- render_template_fields(context, jinja_env=None)[source]
Template all attributes listed in self.template_fields.
This mutates the attributes in-place and is irreversible.
- Parameters:
context (airflow.sdk.definitions.context.Context) – Context dict with values to apply on content.
jinja_env (jinja2.Environment | None) – Jinja’s environment to use for rendering.
- abstract serialize()[source]
Return the information needed to reconstruct this Trigger.
- abstract run()[source]
- Async:
Run the trigger in an asynchronous context.
The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.
If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).
In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
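The single-event shape of run() described above can be sketched with a plain async generator. This stand-in models TriggerEvent as a local class to stay self-contained; the real one lives in airflow.triggers.base, and a real trigger would subclass BaseTrigger.

```python
import asyncio

# Minimal stand-in for the run() contract: yield one event, then
# return. TriggerEvent is modeled as a plain class here; the real one
# is airflow.triggers.base.TriggerEvent.

class TriggerEvent:
    def __init__(self, payload):
        self.payload = payload

class OneShotTrigger:
    def __init__(self, payload):
        self.payload = payload

    async def run(self):
        # Single-event trigger: yield exactly once, then fall off the
        # end of the generator (equivalent to returning).
        yield TriggerEvent(self.payload)

async def collect():
    # The triggerer consumes run() as an async iterator.
    events = []
    async for event in OneShotTrigger("done").run():
        events.append(event.payload)
    return events

print(asyncio.run(collect()))  # → ['done']
```

A multi-event trigger would simply yield repeatedly inside a loop instead of returning after the first yield.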
- async cleanup()[source]
Clean up the trigger.
Called when the trigger is no longer needed, and it’s being removed from the active triggerer process.
This method follows the async/await pattern so the cleanup can run in the triggerer's main event loop. Exceptions raised by the cleanup method are ignored, so if you want to debug failures or be notified of them, wrap your code in a try/except block and handle the error appropriately (in an async-compatible way).
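Because cleanup exceptions are swallowed, a defensive pattern is to catch and log inside cleanup() itself. A minimal sketch, assuming an illustrative trigger holding one local resource (the class and logger names are not from Airflow):

```python
import asyncio
import logging

log = logging.getLogger(__name__)

# Sketch of a cleanup() that logs its own errors, since exceptions
# raised from cleanup are otherwise ignored by the triggerer.

class ResourceTrigger:
    def __init__(self):
        self.closed = False

    async def cleanup(self):
        try:
            await self._close_connection()
        except Exception:
            # Without this handler the failure would vanish silently.
            log.exception("cleanup failed for %r", self)

    async def _close_connection(self):
        # Stand-in for releasing a local resource (socket, file, ...).
        self.closed = True

t = ResourceTrigger()
asyncio.run(t.cleanup())
print(t.closed)  # → True
```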
- async on_kill()[source]
Kill the external job managed by this trigger when the task is killed by a user.
Symmetric with BaseOperator.on_kill() on the worker side: override this method to stop external work (e.g. cancel a BigQuery job, terminate a Databricks run) when a user explicitly acts on the deferred task via mark-failed, clear, or mark-succeeded.

Distinction from cleanup():

- cleanup() runs on every trigger exit — success, timeout, shutdown, and user kill. It is meant for releasing local resources held by this trigger instance. Putting external job cancellation in cleanup() would cancel in-flight work on every triggerer restart or rolling deploy.
- on_kill() runs only when a user explicitly kills the task. It is the right place to cancel external work you do not want to keep running after the user performs an action.
This only fires when a user acts on the task. It does not fire on:
- Triggerer shutdown or restart — the trigger is redistributed, not cancelled.
- Triggerer redistribution to another triggerer process.
- Trigger timeout — the trigger is killed, but not by user action.
- Normal trigger completion (the trigger fired an event).
This method runs in the triggerer's asyncio event loop, so it must be async-safe. Use await for any I/O; do not block the event loop.

Exceptions raised here are logged as warnings and do not propagate — they will not affect the task state or the triggerer. Implement your own retry or error handling inside this method if needed.

on_kill() is given a bounded time to complete. Implementations that call slow external APIs should apply their own timeouts rather than relying on the framework bound.
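An on_kill() that applies its own timeout around a slow cancel call could look like the sketch below. cancel_external_job and the 5-second bound are hypothetical stand-ins for a real external API call and a deadline you would choose yourself.

```python
import asyncio

# Sketch of an on_kill() that bounds its own slow external cancel
# call, per the guidance above. cancel_external_job is a hypothetical
# stand-in for e.g. a cloud provider's job-cancellation API.

async def cancel_external_job(job_id: str) -> None:
    await asyncio.sleep(0.01)  # pretend network round-trip

class ExternalJobTrigger:
    def __init__(self, job_id: str):
        self.job_id = job_id
        self.cancelled = False

    async def on_kill(self):
        try:
            # Apply our own timeout rather than relying on the
            # framework's overall bound.
            await asyncio.wait_for(
                cancel_external_job(self.job_id), timeout=5.0
            )
            self.cancelled = True
        except asyncio.TimeoutError:
            # Exceptions from on_kill are logged and ignored anyway,
            # but handling them here keeps intent explicit.
            pass

t = ExternalJobTrigger("job-123")
asyncio.run(t.on_kill())
print(t.cancelled)  # → True
```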
- static repr(classpath, kwargs)[source]
- __repr__()[source]
- class airflow.triggers.base.BaseEventTrigger(**kwargs)[source]
Bases: BaseTrigger

Base class for triggers used to schedule DAGs based on external events.

BaseEventTrigger is a subclass of BaseTrigger designed to identify triggers compatible with event-driven scheduling.

- static hash(classpath, kwargs)[source]
Return the hash of the trigger classpath and kwargs. This is used to uniquely identify a trigger.
We do not want this logic in BaseTrigger because, when triggers are used to defer tasks, two triggers can have the same classpath and kwargs. This is not the case for event-driven scheduling, where the classpath and kwargs uniquely identify a trigger.
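One way such an identity hash can work is a digest over the classpath and canonicalized kwargs. This is a sketch of the general idea only; Airflow's actual hash() implementation may use a different encoding or digest.

```python
import hashlib
import json

# Sketch of a deterministic classpath+kwargs identity hash for
# event-driven scheduling. The exact Airflow implementation may
# differ; this only illustrates the determinism requirement.

def trigger_hash(classpath: str, kwargs: dict) -> str:
    # sort_keys makes the encoding independent of dict insertion
    # order, so equal triggers always hash equal.
    blob = json.dumps([classpath, kwargs], sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

h1 = trigger_hash("pkg.MyEventTrigger", {"topic": "orders", "group": "g1"})
h2 = trigger_hash("pkg.MyEventTrigger", {"group": "g1", "topic": "orders"})
print(h1 == h2)  # → True
```

Determinism across key order and across processes is what lets the scheduler treat two registrations of the same trigger as one.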
- class airflow.triggers.base.TriggerEvent(payload, **kwargs)[source]
Bases: pydantic.BaseModel

Something that a trigger can fire when its conditions are met.
Events must have a uniquely identifying value that would be the same wherever the trigger is run; this is to ensure that if the same trigger is being run in two locations (for HA reasons) that we can deduplicate its events.
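The deduplication requirement means a payload must be derived only from stable inputs, so two HA triggerer processes running the same trigger emit equal payloads. A minimal sketch (the function and field names are illustrative):

```python
# Sketch of the deterministic-payload requirement for HA
# deduplication: build the payload only from stable inputs, never
# from timestamps, hostnames, or random values. Names are
# illustrative, not Airflow API.

def make_event_payload(run_id: str) -> dict:
    return {"run_id": run_id, "status": "found"}

payload_a = make_event_payload("run-1")  # from triggerer A
payload_b = make_event_payload("run-1")  # from triggerer B (HA replica)
print(payload_a == payload_b)  # → True
```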
- payload: Any = None[source]
The payload for the event to send back to the task.
Must be natively JSON-serializable, or registered with the airflow serialization code.
- __repr__()[source]
- class airflow.triggers.base.TaskSuccessEvent(*, xcoms=None, **kwargs)[source]
Bases:
BaseTaskEndEventYield this event in order to end the task successfully.
- task_instance_state: airflow.utils.state.TaskInstanceState[source]
- class airflow.triggers.base.TaskFailedEvent(*, xcoms=None, **kwargs)[source]
Bases:
BaseTaskEndEventYield this event in order to end the task with failure.
- task_instance_state: airflow.utils.state.TaskInstanceState[source]
- class airflow.triggers.base.TaskSkippedEvent(*, xcoms=None, **kwargs)[source]
Bases:
BaseTaskEndEventYield this event in order to end the task with status ‘skipped’.
- task_instance_state: airflow.utils.state.TaskInstanceState[source]
- airflow.triggers.base.trigger_event_discriminator(v)[source]
- airflow.triggers.base.DiscrimatedTriggerEvent[source]