airflow.providers.common.ai.operators.llm

Operator for general-purpose LLM calls.

Classes

LLMOperator

Call an LLM with a prompt and return the output.

Module Contents

class airflow.providers.common.ai.operators.llm.LLMOperator(*, prompt, llm_conn_id, model_id=None, system_prompt='', output_type=str, agent_params=None, require_approval=False, approval_timeout=None, allow_modifications=False, **kwargs)[source]

Bases: airflow.providers.common.compat.sdk.BaseOperator, airflow.providers.common.ai.mixins.approval.LLMApprovalMixin

Call an LLM with a prompt and return the output.

Uses a PydanticAIHook for LLM access. Supports plain string output (default) and structured output via a Pydantic BaseModel. When output_type is a BaseModel subclass, the result is serialized via model_dump() for XCom.

Parameters:
  • prompt (str) – The prompt to send to the LLM.

  • llm_conn_id (str) – Connection ID for the LLM provider.

  • model_id (str | None) – Model identifier (e.g. "openai:gpt-5"). Overrides the model stored in the connection’s extra field.

  • system_prompt (str) – System-level instructions for the LLM agent.

  • output_type (type) – Expected output type. Default str. Set to a Pydantic BaseModel subclass for structured output.

  • agent_params (dict[str, Any] | None) – Additional keyword arguments passed to the pydantic-ai Agent constructor (e.g. retries, model_settings, tools). See pydantic-ai Agent docs for the full list.

  • require_approval (bool) – If True, the task defers after generating output and waits for a human reviewer to approve or reject via the HITL interface. Default False.

  • approval_timeout (datetime.timedelta | None) – Maximum time to wait for a review. When exceeded, the task fails with TimeoutError.

  • allow_modifications (bool) – If True, the reviewer can edit the output before approving. The modified value is returned as the task result. Default False.

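The structured-output serialization described above can be sketched with a plain Pydantic model (the Sentiment model is hypothetical; the model_dump() call mirrors what the operator does before pushing to XCom):

```python
from pydantic import BaseModel

class Sentiment(BaseModel):
    label: str
    confidence: float

# When output_type is a BaseModel subclass, the operator serializes the
# result with model_dump() so the XCom value is plain JSON-safe data.
result = Sentiment(label="positive", confidence=0.92)
xcom_value = result.model_dump()
print(xcom_value)  # {'label': 'positive', 'confidence': 0.92}
```

Passing `Sentiment` as `output_type` would make downstream tasks receive this dict rather than a raw string.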
template_fields: collections.abc.Sequence[str] = ('prompt', 'llm_conn_id', 'model_id', 'system_prompt', 'agent_params')[source]
prompt[source]
llm_conn_id[source]
model_id = None[source]
system_prompt = ''[source]
output_type[source]
agent_params[source]
require_approval = False[source]
approval_timeout = None[source]
allow_modifications = False[source]
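Because `prompt` (along with the other fields above) is listed in `template_fields`, Airflow renders Jinja in it before execution. A standalone sketch of that rendering, using plain jinja2 with a hypothetical prompt:

```python
from jinja2 import Template

# Airflow substitutes template variables such as {{ ds }} (the logical
# date) into templated fields before execute() runs. Illustrated here
# with jinja2 directly rather than a running DAG:
prompt = "Summarize the sales report for {{ ds }}."
rendered = Template(prompt).render(ds="2024-01-01")
print(rendered)  # Summarize the sales report for 2024-01-01.
```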
property llm_hook: airflow.providers.common.ai.hooks.pydantic_ai.PydanticAIHook[source]

Return PydanticAIHook for the configured LLM connection.

execute(context)[source]

The main method to execute the task: call the LLM with the prompt and return the output. Context is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more details.
