airflow.providers.amazon.aws.sensors.batch
Module Contents

Classes

- BatchSensor – Poll the state of the Batch Job until it reaches a terminal state; fails if the job fails.
- BatchComputeEnvironmentSensor – Poll the state of the Batch environment until it reaches a terminal state; fails if the environment fails.
- BatchJobQueueSensor – Poll the state of the Batch job queue until it reaches a terminal state; fails if the queue fails.
- class airflow.providers.amazon.aws.sensors.batch.BatchSensor(*, job_id, aws_conn_id='aws_default', region_name=None, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), poke_interval=30, max_retries=4200, **kwargs)
Bases: airflow.sensors.base.BaseSensorOperator
Poll the state of the Batch Job until it reaches a terminal state; fails if the job fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait on an AWS Batch job state
- Parameters
job_id (str) – Batch job_id to check the state for.
aws_conn_id (str | None) – AWS connection to use; defaults to 'aws_default'. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner with aws_conn_id set to None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name (str | None) – AWS region name associated with the client.
deferrable (bool) – Run the sensor in deferrable mode.
poke_interval (float) – Polling period in seconds to check for the status of the job.
max_retries (int) – Number of times to poll for job state before returning the current state.
- template_fields: collections.abc.Sequence[str] = ('job_id',)
- template_ext: collections.abc.Sequence[str] = ()
- execute(context)
This is the main method to derive when creating an operator. Context is the same dictionary used when rendering jinja templates. Refer to get_template_context for more context.
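A minimal usage sketch, assuming an upstream task that submits the job. The DAG id, task ids, and the submit_batch_job task referenced in the Jinja template are illustrative placeholders, not part of this module:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.sensors.batch import BatchSensor

with DAG(dag_id="example_batch_sensor", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    # Wait for the Batch job submitted by a (hypothetical) upstream task;
    # job_id is a template field, so it can be pulled from XCom at runtime.
    wait_for_batch_job = BatchSensor(
        task_id="wait_for_batch_job",
        job_id="{{ ti.xcom_pull(task_ids='submit_batch_job') }}",
        aws_conn_id="aws_default",
        poke_interval=30,
        deferrable=True,  # free the worker slot while waiting; polling is handed to the triggerer
    )
```

Since the sensor fails when the job fails, downstream tasks only run once the job has finished successfully.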
- class airflow.providers.amazon.aws.sensors.batch.BatchComputeEnvironmentSensor(compute_environment, aws_conn_id='aws_default', region_name=None, **kwargs)
Bases: airflow.sensors.base.BaseSensorOperator
Poll the state of the Batch environment until it reaches a terminal state; fails if the environment fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait on an AWS Batch compute environment status
- Parameters
compute_environment (str) – Batch compute environment name.
aws_conn_id (str | None) – AWS connection to use; defaults to 'aws_default'. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner with aws_conn_id set to None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name (str | None) – AWS region name associated with the client.
- template_fields: collections.abc.Sequence[str] = ('compute_environment',)
- template_ext: collections.abc.Sequence[str] = ()
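A short sketch of polling a compute environment; the environment name and region are placeholders:

```python
from airflow.providers.amazon.aws.sensors.batch import BatchComputeEnvironmentSensor

# Inside a DAG context: block until the named compute environment
# reaches a terminal state, failing the task if the environment fails.
wait_for_compute_env = BatchComputeEnvironmentSensor(
    task_id="wait_for_compute_env",
    compute_environment="my-compute-env",  # placeholder name
    aws_conn_id="aws_default",
    region_name="us-east-1",  # optional; omit to use the connection/boto3 default
)
```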
- class airflow.providers.amazon.aws.sensors.batch.BatchJobQueueSensor(job_queue, treat_non_existing_as_deleted=False, aws_conn_id='aws_default', region_name=None, **kwargs)
Bases: airflow.sensors.base.BaseSensorOperator
Poll the state of the Batch job queue until it reaches a terminal state; fails if the queue fails.
See also
For more information on how to use this sensor, take a look at the guide: Wait on an AWS Batch job queue status
- Parameters
job_queue (str) – Batch job queue name.
treat_non_existing_as_deleted (bool) – If True, a non-existing Batch job queue is treated as a deleted queue, and therefore as a valid (successful) case.
aws_conn_id (str | None) – AWS connection to use; defaults to 'aws_default'. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner with aws_conn_id set to None or empty, the default boto3 configuration is used (and must be maintained on each worker node).
region_name (str | None) – AWS region name associated with the client.
- template_fields: collections.abc.Sequence[str] = ('job_queue',)
- template_ext: collections.abc.Sequence[str] = ()
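A sketch of waiting for a queue deletion to complete; the queue name is a placeholder. With treat_non_existing_as_deleted=True, the sensor also succeeds if the queue no longer exists at all:

```python
from airflow.providers.amazon.aws.sensors.batch import BatchJobQueueSensor

# Inside a DAG context: poll the job queue until it reaches a terminal state.
wait_for_queue_deletion = BatchJobQueueSensor(
    task_id="wait_for_queue_deletion",
    job_queue="my-job-queue",  # placeholder name
    treat_non_existing_as_deleted=True,  # a missing queue counts as deleted, i.e. success
    aws_conn_id="aws_default",
)
```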