Creating Custom @task Decorators

As of Airflow 2.2 it is possible to add custom decorators to the TaskFlow interface from within a provider package and have those decorators appear natively as part of the @task.____ design.

For example, let's say you were trying to create an easier mechanism to run Python functions as "foo" tasks. The steps to create and register @task.foo are:

  1. Create a FooDecoratedOperator

    In this case, we are assuming that you have an existing FooOperator that takes a Python function as an argument. By creating a FooDecoratedOperator that inherits from FooOperator and airflow.decorators.base.DecoratedOperator, Airflow will supply much of the functionality required to treat your new class as a TaskFlow-native class.

    You should also override the custom_operator_name attribute to provide a custom name for the task. For example, _DockerDecoratedOperator in the apache-airflow-providers-docker provider sets this to @task.docker to indicate the decorator name it implements.
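
    A minimal sketch of what this class might look like, modeled on Airflow's built-in decorated operators (FooOperator and its constructor arguments are hypothetical stand-ins for your real operator):

    from airflow.decorators.base import DecoratedOperator


    class FooDecoratedOperator(DecoratedOperator, FooOperator):
        # The decorator name this class implements, used e.g. in error messages.
        custom_operator_name = "@task.foo"

        def __init__(self, *, python_callable, op_args=None, op_kwargs=None, **kwargs):
            # Mirrors what Airflow's built-in decorated operators pass up to
            # DecoratedOperator.
            kwargs_to_upstream = {
                "python_callable": python_callable,
                "op_args": op_args,
                "op_kwargs": op_kwargs,
            }
            super().__init__(
                kwargs_to_upstream=kwargs_to_upstream,
                python_callable=python_callable,
                op_args=op_args,
                op_kwargs=op_kwargs,
                **kwargs,
            )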

  2. Create a foo_task function

    Once you have your decorated class, create a function like this to convert the new FooDecoratedOperator into a TaskFlow function decorator:

    from __future__ import annotations

    from typing import TYPE_CHECKING

    from airflow.decorators.base import task_decorator_factory

    if TYPE_CHECKING:
        from collections.abc import Callable

        from airflow.decorators.base import TaskDecorator
    
    
    def foo_task(
        python_callable: Callable | None = None,
        multiple_outputs: bool | None = None,
        **kwargs,
    ) -> "TaskDecorator":
        return task_decorator_factory(
            python_callable=python_callable,
            multiple_outputs=multiple_outputs,
            decorated_operator_class=FooDecoratedOperator,
            **kwargs,
        )
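
    Note that foo_task itself can already be used as a decorator at this point; registration in the next step is what additionally exposes it as @task.foo. For example, assuming the module above is importable as my_provider.decorators (a hypothetical path):

    from my_provider.decorators import foo_task


    @foo_task
    def print_hello():
        print("hello")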
    
  3. Register your new decorator in get_provider_info of your provider

    Finally, add a task-decorators key to the dict returned from the provider entrypoint, as described in How to create your own provider. Its value should be a list in which each item contains name and class-name keys. When Airflow starts, the ProviderManager class will automatically import this value, and task.foo will work as a new decorator!

    def get_provider_info():
        return {
            "package-name": "foo-provider-airflow",
            "name": "Foo",
            "task-decorators": [
                {
                    "name": "foo",
                    # "Import path" and function name of the `foo_task`
                    "class-name": "name.of.python.package.foo_task",
                }
            ],
            # ...
        }
    

    Please note that the name must be a valid Python identifier.
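
    Once the provider is installed, the new decorator behaves like any built-in TaskFlow decorator. A minimal usage sketch, assuming the registration above:

    from airflow.decorators import task


    @task.foo
    def hello():
        return "hello from a foo task"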

(Optional) Adding IDE auto-completion support

Note

This section mostly applies to the apache-airflow managed providers. We have not decided if we will allow third-party providers to register auto-completion in this way.

For better or worse, Python IDEs cannot auto-complete dynamically generated methods (see JetBrains' write-up on the subject).

To hack around this problem, a type stub airflow/decorators/__init__.pyi is provided to statically declare the type signature of each task decorator. A newly added task decorator should declare its signature stub like this:

airflow/decorators/__init__.pyi

    def docker(
        self,
        *,
        multiple_outputs: bool | None = None,
        python_command: str = "python3",
        serializer: Literal["pickle", "cloudpickle", "dill"] | None = None,
        use_dill: bool = False,  # Added by _DockerDecoratedOperator.
        # 'command', 'retrieve_output', and 'retrieve_output_path' are filled by
        # _DockerDecoratedOperator.
        image: str,
        api_version: str | None = None,
        container_name: str | None = None,
        cpus: float = 1.0,
        docker_url: str | None = None,
        environment: dict[str, str] | None = None,
        private_environment: dict[str, str] | None = None,
        env_file: str | None = None,
        force_pull: bool = False,
        mem_limit: float | str | None = None,
        host_tmp_dir: str | None = None,
        network_mode: str | None = None,
        tls_ca_cert: str | None = None,
        tls_client_cert: str | None = None,
        tls_client_key: str | None = None,
        tls_verify: bool = True,
        tls_hostname: str | bool | None = None,
        tls_ssl_version: str | None = None,
        mount_tmp_dir: bool = True,
        tmp_dir: str = "/tmp/airflow",
        user: str | int | None = None,
        mounts: list[Mount] | None = None,
        entrypoint: str | list[str] | None = None,
        working_dir: str | None = None,
        xcom_all: bool = False,
        docker_conn_id: str | None = None,
        dns: list[str] | None = None,
        dns_search: list[str] | None = None,
        auto_remove: Literal["never", "success", "force"] = "never",
        shm_size: int | None = None,
        tty: bool = False,
        hostname: str | None = None,
        privileged: bool = False,
        cap_add: str | None = None,
        extra_hosts: dict[str, str] | None = None,
        timeout: int = 60,
        device_requests: list[dict] | None = None,
        log_opts_max_size: str | None = None,
        log_opts_max_file: str | None = None,
        ipc_mode: str | None = None,
        skip_on_exit_code: int | Container[int] | None = None,
        port_bindings: dict | None = None,
        ulimits: list[dict] | None = None,
        **kwargs,
    ) -> TaskDecorator:
        """Create a decorator to convert the decorated callable to a Docker task.

        :param multiple_outputs: If set, function return value will be unrolled to multiple XCom values.
            Dict will unroll to XCom values with keys as XCom keys. Defaults to False.
        :param python_command: Python command used to execute the function. Default: ``python3``.
        :param serializer: Which serializer to use to serialize the args and result. It can be one of the following:

            - ``"pickle"``: (default) Use pickle for serialization. Included in the Python Standard Library.
            - ``"cloudpickle"``: Use cloudpickle to serialize more complex types;
              this requires cloudpickle in your requirements.
            - ``"dill"``: Use dill to serialize more complex types;
              this requires dill in your requirements.
        :param use_dill: Deprecated, use ``serializer`` instead. Whether to use dill to serialize
            the args and result (pickle is default). This allows more complex types
            but requires you to include dill in your requirements.
        :param image: Docker image from which to create the container.
            If image tag is omitted, "latest" will be used.
        :param api_version: Remote API version. Set to ``auto`` to automatically
            detect the server's version.
        :param container_name: Name of the container. Optional (templated)
        :param cpus: Number of CPUs to assign to the container.
            This value gets multiplied with 1024. See
            https://docs.docker.com/engine/reference/run/#cpu-share-constraint
        :param docker_url: URL of the host running the docker daemon.
            Default is the value of the ``DOCKER_HOST`` environment variable or unix://var/run/docker.sock
            if it is unset.
        :param environment: Environment variables to set in the container. (templated)
        :param private_environment: Private environment variables to set in the container.
            These are not templated, and hidden from the website.
        :param env_file: Relative path to the ``.env`` file with environment variables to set in the container.
            Overridden by variables in the environment parameter.
        :param force_pull: Pull the docker image on every run. Default is False.
        :param mem_limit: Maximum amount of memory the container can use.
            Either a float value, which represents the limit in bytes,
            or a string like ``128m`` or ``1g``.
        :param host_tmp_dir: Specify the location of the temporary directory on the host which will
            be mapped to tmp_dir. If not provided defaults to using the standard system temp directory.
        :param network_mode: Network mode for the container. It can be one of the following:

            - ``"bridge"``: Create new network stack for the container with default docker bridge network
            - ``"none"``: No networking for this container
            - ``"container:<name|id>"``: Use the network stack of another container specified via <name|id>
            - ``"host"``: Use the host network stack. Incompatible with `port_bindings`
            - ``"<network-name>|<network-id>"``: Connects the container to user created network
              (using ``docker network create`` command)
        :param tls_ca_cert: Path to a PEM-encoded certificate authority
            to secure the docker connection.
        :param tls_client_cert: Path to the PEM-encoded certificate
            used to authenticate docker client.
        :param tls_client_key: Path to the PEM-encoded key used to authenticate docker client.
        :param tls_verify: Set ``True`` to verify the validity of the provided certificate.
        :param tls_hostname: Hostname to match against
            the docker server certificate or False to disable the check.
        :param tls_ssl_version: Version of SSL to use when communicating with docker daemon.
        :param mount_tmp_dir: Specify whether the temporary directory should be bind-mounted
            from the host to the container. Defaults to True
        :param tmp_dir: Mount point inside the container to
            a temporary directory created on the host by the operator.
            The path is also made available via the environment variable
            ``AIRFLOW_TMP_DIR`` inside the container.
        :param user: Default user inside the docker container.
        :param mounts: List of mounts to mount into the container, e.g.
            ``['/host/path:/container/path', '/host/path2:/container/path2:ro']``.
        :param entrypoint: Overwrite the default ENTRYPOINT of the image
        :param working_dir: Working directory to
            set on the container (equivalent to the ``-w`` switch of the docker client)
        :param xcom_all: Push all the stdout or just the last line.
            The default is False (last line).
        :param docker_conn_id: The :ref:`Docker connection id <howto/connection:docker>`
        :param dns: Docker custom DNS servers
        :param dns_search: Docker custom DNS search domain
        :param auto_remove: Enable removal of the container when the container's process exits. Possible values:

            - ``never``: (default) do not remove container
            - ``success``: remove on success
            - ``force``: always remove container
        :param shm_size: Size of ``/dev/shm`` in bytes. The size must be
            greater than 0. If omitted uses system default.
        :param tty: Allocate a pseudo-TTY to the container.
            This needs to be set to see logs of the Docker container.
        :param hostname: Optional hostname for the container.
        :param privileged: Give extended privileges to this container.
        :param cap_add: Include container capabilities
        :param extra_hosts: Additional hostnames to resolve inside the container,
            as a mapping of hostname to IP address.
        :param device_requests: Expose host resources such as GPUs to the container.
        :param log_opts_max_size: The maximum size of the log before it is rolled.
            A positive integer plus a modifier representing the unit of measure (``k``, ``m``, or ``g``),
            e.g. ``10m`` or ``1g``. Defaults to -1 (unlimited).
        :param log_opts_max_file: The maximum number of log files that can be present.
            If rolling the logs creates excess files, the oldest file is removed.
            Only effective when max-size is also set. A positive integer. Defaults to 1.
        :param ipc_mode: Set the IPC mode for the container.
        :param skip_on_exit_code: If task exits with this exit code, leave the task
            in ``skipped`` state (default: None). If set to ``None``, any non-zero
            exit code will be treated as a failure.
        :param port_bindings: Publish a container's port(s) to the host. It is a
            dictionary of value where the key indicates the port to open inside the container
            and value indicates the host port that binds to the container port.
            Incompatible with ``"host"`` in ``network_mode``.
        :param ulimits: List of ulimit options to set for the container. Each item should
            be a :py:class:`docker.types.Ulimit` instance.
        """

The signature should allow only keyword-only arguments, including one named multiple_outputs that's automatically provided by default. All other arguments should be copied directly from the real FooOperator, and we recommend adding a comment to explain which arguments are filled automatically by FooDecoratedOperator and thus not included.
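
Following the same pattern, a stub for the hypothetical @task.foo decorator might look like this (foo_option stands in for an argument copied from the real FooOperator):

    def foo(
        self,
        *,
        multiple_outputs: bool | None = None,
        # 'python_callable', 'op_args', and 'op_kwargs' are filled by
        # FooDecoratedOperator and thus not included here.
        foo_option: str | None = None,  # Copied from the hypothetical FooOperator.
        **kwargs,
    ) -> TaskDecorator: ...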

If the new decorator can be used without arguments (e.g. @task.python instead of @task.python()), you should also add an overload that takes a single callable immediately after the "real" definition so mypy can recognize the function as a "bare decorator":

airflow/decorators/__init__.pyi

    @overload
    def python(self, python_callable: Callable[FParams, FReturn]) -> Task[FParams, FReturn]: ...
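
With this overload in place, both invocation styles type-check:

    @task.python
    def bare() -> None: ...


    @task.python(multiple_outputs=True)
    def called() -> dict[str, int]: ...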

Once the change is merged and the next Airflow (minor or patch) release comes out, users will be able to see your decorator in IDE auto-complete. This auto-complete will change based on the version of the provider that the user has installed.

Please note that this step is not required to create a working decorator, but does create a better experience for users of the provider.
