airflow.providers.alibaba.cloud.hooks.analyticdb_spark
Module Contents

Classes

AppState – AnalyticDB Spark application states.

AnalyticDBSparkHook – Hook for AnalyticDB MySQL Spark through the REST API.
- class airflow.providers.alibaba.cloud.hooks.analyticdb_spark.AppState
Bases: enum.Enum
AnalyticDB Spark application states.
See: https://www.alibabacloud.com/help/en/analyticdb-for-mysql/latest/api-doc-adb-2021-12-01-api-struct-sparkappinfo
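For orientation, a minimal sketch of distinguishing terminal states when polling. The member names below (COMPLETED, FAILED, KILLED) follow the linked SparkAppInfo structure but are an assumption here, since this page does not list the enum members.

    from airflow.providers.alibaba.cloud.hooks.analyticdb_spark import AppState

    # Assumed terminal members; verify against the enum and the linked API doc.
    TERMINAL_STATES = {AppState.COMPLETED, AppState.FAILED, AppState.KILLED}

    def is_terminal(state: AppState) -> bool:
        """True once the application has finished, successfully or not."""
        return state in TERMINAL_STATES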
- class airflow.providers.alibaba.cloud.hooks.analyticdb_spark.AnalyticDBSparkHook(adb_spark_conn_id='adb_spark_default', region=None, *args, **kwargs)
Bases: airflow.hooks.base.BaseHook, airflow.utils.log.logging_mixin.LoggingMixin
Hook for AnalyticDB MySQL Spark through the REST API.
- Parameters
adb_spark_conn_id (str) – The Airflow connection used for AnalyticDB MySQL Spark credentials. Defaults to adb_spark_default.
region (str | None) – The AnalyticDB MySQL region in which Spark applications are submitted.
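A minimal construction sketch; the connection ID and region are placeholders, and the named connection must already hold valid Alibaba Cloud credentials.

    from airflow.providers.alibaba.cloud.hooks.analyticdb_spark import AnalyticDBSparkHook

    # "adb_spark_demo" is a placeholder Airflow connection ID and
    # "cn-hangzhou" an example region, not a recommendation.
    hook = AnalyticDBSparkHook(adb_spark_conn_id="adb_spark_demo", region="cn-hangzhou")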
- submit_spark_app(cluster_id, rg_name, *args, **kwargs)
Perform a request to submit a Spark application.
- Parameters
cluster_id (str) – ID of the AnalyticDB MySQL cluster to submit to.
rg_name (str) – name of the resource group within the cluster.
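A submission sketch. It assumes the extra keyword arguments are forwarded to build_submit_app_data below (file, conf, num_executors, and so on); this page shows matching parameter names but does not state the forwarding explicitly.

    # Placeholder cluster and resource-group identifiers.
    response = hook.submit_spark_app(
        cluster_id="amv-example",
        rg_name="spark-rg",
        file="oss://my-bucket/jobs/pi.py",  # assumed to reach build_submit_app_data
        num_executors=2,
    )
    # The response shape is SDK-specific; extract the application ID from it
    # before calling the get_spark_* methods below.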
- submit_spark_sql(cluster_id, rg_name, *args, **kwargs)
Perform a request to submit Spark SQL.
- Parameters
cluster_id (str) – ID of the AnalyticDB MySQL cluster to submit to.
rg_name (str) – name of the resource group within the cluster.
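The Spark SQL variant, under the same assumption that the extra keyword arguments mirror build_submit_sql_data below:

    response = hook.submit_spark_sql(
        cluster_id="amv-example",  # placeholder cluster ID
        rg_name="spark-rg",        # placeholder resource group
        sql="SELECT 1",
    )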
- get_spark_state(app_id)
Fetch the state of the specified Spark application.
- Parameters
app_id (str) – identifier of the Spark application
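A polling sketch built on get_spark_state. It assumes the method returns the state name as a string that maps onto AppState member values, which this page implies but does not state.

    import time

    def wait_for_completion(hook: AnalyticDBSparkHook, app_id: str, poll_seconds: int = 10) -> AppState:
        """Poll until the application reaches an assumed terminal state."""
        while True:
            # Assumes a plain state string is returned and maps onto AppState.
            state = AppState(hook.get_spark_state(app_id))
            if state in TERMINAL_STATES:  # from the AppState sketch above
                return state
            time.sleep(poll_seconds)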
- get_spark_web_ui_address(app_id)
Fetch the web UI address of the specified Spark application.
- Parameters
app_id (str) – identifier of the Spark application
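For example, surfacing the address in task logs (the application ID is a placeholder from an earlier submission; hook.log is available via LoggingMixin):

    web_ui = hook.get_spark_web_ui_address("s202401-example")  # placeholder app ID
    hook.log.info("Spark web UI available at %s", web_ui)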
- get_spark_log(app_id)
Get the logs for a specified Spark application.
- Parameters
app_id (str) – identifier of the Spark application
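A sketch that pulls logs when a run ends badly, reusing wait_for_completion from above; it assumes get_spark_log returns printable log text, which this page does not specify.

    final_state = wait_for_completion(hook, "s202401-example")  # placeholder app ID
    if final_state is not AppState.COMPLETED:
        # Assumed to return the driver log as text.
        hook.log.error("Spark application failed:\n%s", hook.get_spark_log("s202401-example"))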
- kill_spark_app(app_id)
Kill the specified Spark application.
- Parameters
app_id (str) – identifier of the Spark application
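Typically used for cleanup when a task is cancelled or times out; a best-effort sketch:

    def cancel_app(hook: AnalyticDBSparkHook, app_id: str) -> None:
        """Best-effort kill; failures are logged rather than raised."""
        try:
            hook.kill_spark_app(app_id)
        except Exception:
            hook.log.warning("Could not kill Spark application %s", app_id, exc_info=True)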
- static build_submit_app_data(file=None, class_name=None, args=None, conf=None, jars=None, py_files=None, files=None, driver_resource_spec=None, executor_resource_spec=None, num_executors=None, archives=None, name=None)
Build the submit application request data.
- Parameters
file (str | None) – path of the file containing the application to execute.
class_name (str | None) – name of the application Java/Spark main class.
args (collections.abc.Sequence[str | int | float] | None) – application command line arguments.
conf (dict[Any, Any] | None) – Spark configuration properties.
jars (collections.abc.Sequence[str] | None) – jars to be used in this application.
py_files (collections.abc.Sequence[str] | None) – Python files to be used in this application.
files (collections.abc.Sequence[str] | None) – files to be used in this application.
driver_resource_spec (str | None) – The resource specifications of the Spark driver.
executor_resource_spec (str | None) – The resource specifications of each Spark executor.
num_executors (int | str | None) – number of executors to launch for this application.
archives (collections.abc.Sequence[str] | None) – archives to be used in this application.
name (str | None) – name of this application.
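Because this helper is static, the request body can be built and inspected without a hook instance or credentials; a sketch with placeholder values (the resource spec names are assumptions, check the Alibaba Cloud documentation for valid values):

    data = AnalyticDBSparkHook.build_submit_app_data(
        file="oss://my-bucket/jobs/etl.jar",  # placeholder OSS path
        class_name="com.example.EtlJob",      # placeholder main class
        args=["2024-01-01", 3],
        conf={"spark.executor.instances": "2"},
        driver_resource_spec="medium",        # assumed spec name
        executor_resource_spec="medium",      # assumed spec name
        name="etl-job",
    )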
- static build_submit_sql_data(sql=None, conf=None, driver_resource_spec=None, executor_resource_spec=None, num_executors=None, name=None)
Build the submit Spark SQL request data.
- Parameters
sql (str | None) – The SQL query to execute. (templated)
conf (dict[Any, Any] | None) – Spark configuration properties.
driver_resource_spec (str | None) – The resource specifications of the Spark driver.
executor_resource_spec (str | None) – The resource specifications of each Spark executor.
num_executors (int | str | None) – number of executors to launch for this application.
name (str | None) – name of this application.
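And the SQL counterpart, again with placeholder values:

    data = AnalyticDBSparkHook.build_submit_sql_data(
        sql="SELECT COUNT(*) FROM example_table",  # placeholder query
        conf={"spark.driver.memory": "4g"},
        num_executors=2,
        name="count-rows",
    )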