airflow.providers.alibaba.cloud.hooks.analyticdb_spark

Module Contents

Classes

AppState

AnalyticDB Spark application states.

AnalyticDBSparkHook

Hook for AnalyticDB MySQL Spark through the REST API.

class airflow.providers.alibaba.cloud.hooks.analyticdb_spark.AppState[source]

Bases: enum.Enum

AnalyticDB Spark application states.

See: https://www.alibabacloud.com/help/en/analyticdb-for-mysql/latest/api-doc-adb-2021-12-01-api-struct-sparkappinfo.

SUBMITTED = 'SUBMITTED'[source]
STARTING = 'STARTING'[source]
RUNNING = 'RUNNING'[source]
FAILING = 'FAILING'[source]
FAILED = 'FAILED'[source]
KILLING = 'KILLING'[source]
KILLED = 'KILLED'[source]
SUCCEEDING = 'SUCCEEDING'[source]
COMPLETED = 'COMPLETED'[source]
FATAL = 'FATAL'[source]
UNKNOWN = 'UNKNOWN'[source]
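
For illustration, a raw state string returned by the service can be parsed back into this enum; a minimal sketch (the state value is a placeholder):

    from airflow.providers.alibaba.cloud.hooks.analyticdb_spark import AppState

    state = AppState("RUNNING")  # parse a raw state string from the API
    if state in (AppState.COMPLETED, AppState.FAILED, AppState.FATAL, AppState.KILLED):
        print(f"application finished in state {state.value}")
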
class airflow.providers.alibaba.cloud.hooks.analyticdb_spark.AnalyticDBSparkHook(adb_spark_conn_id='adb_spark_default', region=None, *args, **kwargs)[source]

Bases: airflow.hooks.base.BaseHook, airflow.utils.log.logging_mixin.LoggingMixin

Hook for AnalyticDB MySQL Spark through the REST API.

Parameters
  • adb_spark_conn_id (str) – The Airflow connection used for AnalyticDB MySQL Spark credentials.

  • region (str | None) – The AnalyticDB MySQL region in which to submit the Spark application.

TERMINAL_STATES[source]
conn_name_attr = 'alibabacloud_conn_id'[source]
default_conn_name = 'adb_spark_default'[source]
conn_type = 'adb_spark'[source]
hook_name = 'AnalyticDB Spark'[source]
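
A minimal instantiation sketch; the connection ID and region below are placeholders, and the connection itself must already be configured in Airflow:

    from airflow.providers.alibaba.cloud.hooks.analyticdb_spark import AnalyticDBSparkHook

    hook = AnalyticDBSparkHook(
        adb_spark_conn_id="adb_spark_default",  # an Airflow connection of type adb_spark
        region="cn-hangzhou",                   # placeholder region
    )
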
submit_spark_app(cluster_id, rg_name, *args, **kwargs)[source]

Perform a request to submit a Spark application. See the usage sketch below the parameters.

Parameters
  • cluster_id (str) – The cluster ID of AnalyticDB MySQL 3.0 Data Lakehouse.

  • rg_name (str) – The name of the resource group in the AnalyticDB MySQL 3.0 Data Lakehouse cluster.
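
A usage sketch, assuming extra keyword arguments are forwarded to build_submit_app_data (documented below); the cluster ID, resource group, and OSS path are placeholders:

    response = hook.submit_spark_app(
        cluster_id="amv-example",                   # placeholder cluster ID
        rg_name="spark",                            # placeholder resource group name
        file="oss://example-bucket/spark/app.jar",  # placeholder application file
        class_name="com.example.SparkPi",
    )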

submit_spark_sql(cluster_id, rg_name, *args, **kwargs)[source]

Perform a request to submit Spark SQL. See the usage sketch below the parameters.

Parameters
  • cluster_id (str) – The cluster ID of AnalyticDB MySQL 3.0 Data Lakehouse.

  • rg_name (str) – The name of the resource group in the AnalyticDB MySQL 3.0 Data Lakehouse cluster.
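
A usage sketch, assuming extra keyword arguments are forwarded to build_submit_sql_data (documented below); the identifiers are placeholders:

    response = hook.submit_spark_sql(
        cluster_id="amv-example",  # placeholder cluster ID
        rg_name="spark",           # placeholder resource group name
        sql="SELECT 1",
    )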

get_spark_state(app_id)[source]

Fetch the state of the specified Spark application.

Parameters

app_id (str) – Identifier of the Spark application.
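
A polling sketch, continuing the instantiation example above and assuming the method returns a raw state string that matches the AppState values:

    import time

    # app_id is the application identifier obtained from the submission response.
    while AppState(hook.get_spark_state(app_id)) not in AnalyticDBSparkHook.TERMINAL_STATES:
        time.sleep(10)  # poll until the application reaches a terminal state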

get_spark_web_ui_address(app_id)[source]

Fetch the web UI address of the specified Spark application.

Parameters

app_id (str) – Identifier of the Spark application.
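
A one-line sketch, assuming the address comes back as a plain string:

    web_ui_address = hook.get_spark_web_ui_address(app_id)
    print(f"Spark Web UI: {web_ui_address}")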

get_spark_log(app_id)[source]

Get the logs for a specified Spark application.

Parameters

app_id (str) – Identifier of the Spark application.
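
A sketch, assuming the logs come back as a single string:

    log_text = hook.get_spark_log(app_id)
    for line in log_text.splitlines():
        print(line)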

kill_spark_app(app_id)[source]

Kill the specified Spark application.

Parameters

app_id (str) – Identifier of the Spark application.
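
A cleanup sketch; wait_for_result is a hypothetical caller-side helper used only to show kill_spark_app in a finally block:

    try:
        wait_for_result(app_id)      # hypothetical monitoring helper, not part of this hook
    finally:
        hook.kill_spark_app(app_id)  # ensure the application is terminated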

static build_submit_app_data(file=None, class_name=None, args=None, conf=None, jars=None, py_files=None, files=None, driver_resource_spec=None, executor_resource_spec=None, num_executors=None, archives=None, name=None)[source]

Build the submit application request data.

Parameters
  • file (str | None) – Path of the file containing the application to execute.

  • class_name (str | None) – Name of the application's Java/Spark main class.

  • args (collections.abc.Sequence[str | int | float] | None) – Application command-line arguments.

  • conf (dict[Any, Any] | None) – Spark configuration properties.

  • jars (collections.abc.Sequence[str] | None) – JARs to be used in this application.

  • py_files (collections.abc.Sequence[str] | None) – Python files to be used in this application.

  • files (collections.abc.Sequence[str] | None) – Files to be used in this application.

  • driver_resource_spec (str | None) – The resource specifications of the Spark driver.

  • executor_resource_spec (str | None) – The resource specifications of each Spark executor.

  • num_executors (int | str | None) – Number of executors to launch for this application.

  • archives (collections.abc.Sequence[str] | None) – Archives to be used in this application.

  • name (str | None) – Name of this application.
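
A sketch of building a request payload; all values are placeholders, and the resource spec names are an assumption about AnalyticDB's conventions:

    data = AnalyticDBSparkHook.build_submit_app_data(
        file="oss://example-bucket/spark/app.jar",  # placeholder OSS path
        class_name="com.example.SparkPi",
        args=[1000],
        conf={"spark.executor.instances": "2"},  # standard Spark property
        driver_resource_spec="medium",    # assumed resource spec name
        executor_resource_spec="medium",  # assumed resource spec name
        name="example-app",
    )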

static build_submit_sql_data(sql=None, conf=None, driver_resource_spec=None, executor_resource_spec=None, num_executors=None, name=None)[source]

Build the submit Spark SQL request data.

Parameters
  • sql (str | None) – The SQL query to execute. (templated)

  • conf (dict[Any, Any] | None) – Spark configuration properties.

  • driver_resource_spec (str | None) – The resource specifications of the Spark driver.

  • executor_resource_spec (str | None) – The resource specifications of each Spark executor.

  • num_executors (int | str | None) – Number of executors to launch for this application.

  • name (str | None) – Name of this application.
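
A sketch with placeholder values:

    data = AnalyticDBSparkHook.build_submit_sql_data(
        sql="SELECT 1",
        driver_resource_spec="small",    # assumed resource spec name
        executor_resource_spec="small",  # assumed resource spec name
        num_executors=2,
        name="example-sql",
    )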

get_adb_spark_client()[source]

Get valid AnalyticDB MySQL Spark client.

get_default_region()[source]

Get default region from connection.
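
Both getters are typically used internally by the other methods; a minimal sketch:

    client = hook.get_adb_spark_client()  # authenticated AnalyticDB MySQL Spark client
    region = hook.get_default_region()    # region stored in the Airflow connection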
