airflow.providers.google.cloud.openlineage.utils

Module Contents

Classes

BigQueryJobRunFacet

Facet that represents relevant statistics of a BigQuery run.

Functions

extract_ds_name_from_gcs_path(path)

Extract and process the dataset name from a given path.

get_facets_from_bq_table(table)

Get facets from BigQuery table object.

get_identity_column_lineage_facet(dest_field_names, ...)

Get column lineage facet for identity transformations.

get_from_nullable_chain(source, chain)

Get object from nested structure of objects, where it's not guaranteed that all keys in the nested structure exist.

inject_openlineage_properties_into_dataproc_job(job, ...)

Inject OpenLineage properties into Spark job definition.

Attributes

log

BIGQUERY_NAMESPACE

BIGQUERY_URI

WILDCARD

airflow.providers.google.cloud.openlineage.utils.log[source]
airflow.providers.google.cloud.openlineage.utils.BIGQUERY_NAMESPACE = 'bigquery'[source]
airflow.providers.google.cloud.openlineage.utils.BIGQUERY_URI = 'bigquery'[source]
airflow.providers.google.cloud.openlineage.utils.WILDCARD = '*'[source]
airflow.providers.google.cloud.openlineage.utils.extract_ds_name_from_gcs_path(path)[source]

Extract and process the dataset name from a given path.

Args:

path: The path to process, e.g. the path of a GCS file.

Returns:

The processed dataset name.

Examples:
>>> extract_ds_name_from_gcs_path("/dir/file.*")
'dir'
>>> extract_ds_name_from_gcs_path("/dir/pre_")
'dir'
>>> extract_ds_name_from_gcs_path("/dir/file.txt")
'dir/file.txt'
>>> extract_ds_name_from_gcs_path("/dir/file.")
'dir'
>>> extract_ds_name_from_gcs_path("/dir/")
'dir'
>>> extract_ds_name_from_gcs_path("")
'/'
>>> extract_ds_name_from_gcs_path("/")
'/'
>>> extract_ds_name_from_gcs_path(".")
'/'
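
For illustration, a minimal re-implementation consistent with the doctests above. This is a sketch only: the function name is hypothetical and the provider's actual logic may handle additional edge cases.

import os

WILDCARD = "*"  # mirrors airflow.providers.google.cloud.openlineage.utils.WILDCARD

def extract_ds_name_from_gcs_path_sketch(path: str) -> str:
    # Keep only the part before the first wildcard: "/dir/file.*" -> "/dir/file."
    if WILDCARD in path:
        path = path.split(WILDCARD, 1)[0]
    # Drop a trailing segment that looks like an unfinished name, i.e. one
    # ending in "." or "_": "/dir/file." -> "/dir", "/dir/pre_" -> "/dir"
    if os.path.basename(path).endswith((".", "_")):
        path = os.path.dirname(path)
    # Strip surrounding slashes; an empty result maps to "/"
    return path.strip("/") or "/"
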
airflow.providers.google.cloud.openlineage.utils.get_facets_from_bq_table(table)[source]

Get facets from BigQuery table object.
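
A usage sketch, assuming a reachable BigQuery table; the table ID is hypothetical and the exact facet keys returned depend on the provider version.

from google.cloud import bigquery

from airflow.providers.google.cloud.openlineage.utils import get_facets_from_bq_table

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.my_table")  # hypothetical table ID
facets = get_facets_from_bq_table(table)
# Typically includes schema and documentation facets built from the
# table's fields and description, keyed by facet name.
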

airflow.providers.google.cloud.openlineage.utils.get_identity_column_lineage_facet(dest_field_names, input_datasets)[source]

Get column lineage facet for identity transformations.

This function generates a simple column lineage facet, where each destination column consists of source columns of the same name from all input datasets that have that column. The lineage assumes there are no transformations applied, meaning the columns retain their identity between the source and destination datasets.

Args:

dest_field_names: A list of destination column names for which lineage should be determined.

input_datasets: A list of input datasets with schema facets.

Returns:
A dictionary containing a single key, columnLineage, mapped to a ColumnLineageDatasetFacet.

If no column lineage can be determined, an empty dictionary is returned; see Notes below.

Notes:
  • If any input dataset lacks a schema facet, the function immediately returns an empty dictionary.

  • If any field in the source dataset’s schema is not present in the destination table, the function returns an empty dictionary. The destination table can contain extra fields, but all source columns should be present in the destination table.

  • If none of the destination columns can be matched to input dataset columns, an empty dictionary is returned.

  • Extra columns in the destination table that do not exist in the input datasets are ignored and skipped in the lineage facet, as they cannot be traced back to a source column.

  • The function assumes there are no transformations applied, meaning the columns retain their identity between the source and destination datasets.
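
A construction sketch using the facet classes from the common compat module; the dataset namespace and names are hypothetical.

from airflow.providers.common.compat.openlineage.facet import (
    Dataset,
    SchemaDatasetFacet,
    SchemaDatasetFacetFields,
)
from airflow.providers.google.cloud.openlineage.utils import (
    get_identity_column_lineage_facet,
)

source = Dataset(
    namespace="gs://my-bucket",  # hypothetical source dataset
    name="data/source",
    facets={
        "schema": SchemaDatasetFacet(
            fields=[
                SchemaDatasetFacetFields(name="id", type="INTEGER"),
                SchemaDatasetFacetFields(name="name", type="STRING"),
            ]
        )
    },
)

facets = get_identity_column_lineage_facet(
    dest_field_names=["id", "name"], input_datasets=[source]
)
# facets == {"columnLineage": ColumnLineageDatasetFacet(...)}, mapping each
# destination column to the same-named column in every input dataset.
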

class airflow.providers.google.cloud.openlineage.utils.BigQueryJobRunFacet[source]

Bases: airflow.providers.common.compat.openlineage.facet.RunFacet

Facet that represents relevant statistics of a BigQuery run.

This facet is used to provide statistics about the BigQuery run.

Parameters
  • cached – BigQuery caches query results; the rest of the statistics are not provided for cached queries.

  • billedBytes – How many bytes BigQuery bills for.

  • properties – Full property tree of the BigQuery run.

cached: bool[source]
billedBytes: int | None[source]
properties: str | None[source]
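
As a plain facet class it can be constructed directly, for example as below; the run-facet key shown is hypothetical.

facet = BigQueryJobRunFacet(
    cached=False,
    billedBytes=10485760,  # bytes billed for the query
    properties=None,       # optionally a JSON string of the full property tree
)
run_facets = {"bigQueryJob": facet}  # hypothetical run-facet key
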
airflow.providers.google.cloud.openlineage.utils.get_from_nullable_chain(source, chain)[source]

Get object from nested structure of objects, where it’s not guaranteed that all keys in the nested structure exist.

Intended to replace a chain of dict.get() calls.

Example usage:

if (
    not job._properties.get("statistics")
    or not job._properties.get("statistics").get("query")
    or not job._properties.get("statistics").get("query").get("referencedTables")
):
    return None
result = job._properties.get("statistics").get("query").get("referencedTables")

becomes:

result = get_from_nullable_chain(job._properties, ["statistics", "query", "referencedTables"])
if not result:
    return None
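
The same helper works on any nested mapping; a minimal self-contained example:

props = {"statistics": {"query": {"referencedTables": [{"tableId": "t1"}]}}}

get_from_nullable_chain(props, ["statistics", "query", "referencedTables"])
# -> [{"tableId": "t1"}]
get_from_nullable_chain(props, ["statistics", "missing", "key"])
# -> None
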
airflow.providers.google.cloud.openlineage.utils.inject_openlineage_properties_into_dataproc_job(job, context, inject_parent_job_info)[source]

Inject OpenLineage properties into Spark job definition.

The function does not remove any configuration or otherwise modify the job; it only adds the desired OpenLineage properties to the Dataproc job definition if they are not already present.

Note:
Any modification to job will be skipped if:
  • OpenLineage provider is not accessible.

  • The job type is not supported.

  • Automatic parent job information injection is disabled.

  • Any OpenLineage properties with parent job information are already present in the Spark job definition.

Args:

job: The original Dataproc job definition.

context: The Airflow context in which the job is running.

inject_parent_job_info: Flag indicating whether to inject parent job information.

Returns:

The modified job definition with OpenLineage properties injected, if applicable.
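
A usage sketch from within an operator's execute(); the job body is illustrative and the exact Spark properties injected depend on the provider version.

from airflow.providers.google.cloud.openlineage.utils import (
    inject_openlineage_properties_into_dataproc_job,
)

job = {
    "placement": {"cluster_name": "my-cluster"},  # hypothetical cluster
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/app.py"},
}
# `context` is the Airflow task context available inside execute()
job = inject_openlineage_properties_into_dataproc_job(
    job, context=context, inject_parent_job_info=True
)
# If injection applies, the returned definition gains parent-job properties
# (e.g. "spark.openlineage.parentJobName"-style keys) under the job's
# "properties" mapping; if any skip condition above applies, the job is
# returned unchanged.
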
