Metadata-Version: 2.4
Name: dq-suite-amsterdam
Version: 0.13.5
Summary: Wrapper for Great Expectations to fit the requirements of the Gemeente Amsterdam.
Author-email: Arthur Kordes <a.kordes@amsterdam.nl>, Aysegul Cayir Aydar <a.cayiraydar@amsterdam.nl>, Rajesh Chellaswamy <r.chellaswamy@amsterdam.nl>, Bas Schotten <b.schotten@amsterdam.nl>
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: great_expectations==1.1.3
Requires-Dist: pandas==2.1.4
Requires-Dist: pyspark==3.5.2
Requires-Dist: pyhumps==3.8.0
Requires-Dist: pyyaml==6.0.3
Requires-Dist: delta-spark~=3.2.0
Requires-Dist: validators==0.34.0
Requires-Dist: typing_extensions>=4.7.1
Requires-Dist: ydata-profiling==4.18.0
Provides-Extra: dev
Requires-Dist: bandit~=1.7; extra == "dev"
Requires-Dist: black~=23.1; extra == "dev"
Requires-Dist: pytest~=7.2; extra == "dev"
Requires-Dist: mypy~=1.4.1; extra == "dev"
Requires-Dist: pylint~=2.16; extra == "dev"
Requires-Dist: autoflake~=2.0.1; extra == "dev"
Requires-Dist: coverage~=7.6.1; extra == "dev"
Requires-Dist: chispa~=0.10.1; extra == "dev"
Dynamic: license-file

# About dq-suite-amsterdam
This repository aims to be an easy-to-use wrapper for the data quality library [Great Expectations](https://github.com/great-expectations/great_expectations) (GX). All that is needed to get started is an in-memory Spark dataframe and a set of data quality rules - specified in a JSON file [of particular formatting](dq_rules_example.json). 

By default, all validation results are written to the `data_quality` schema in Unity Catalog, which has to be created once per catalog via [this notebook](scripts/data_quality_tables.sql). Alternatively, writing to this schema can be disabled. Additionally, users can choose to get notified via Slack or Microsoft Teams.

<img src="docs/wip_computer.jpg" width="20%" height="auto">

DISCLAIMER: The package is in MVP phase, so watch your step. 


## How to contribute
Want to help out? Great! Feel free to create a pull request addressing one of the open [issues](https://github.com/Amsterdam/dq-suite-amsterdam/issues). Some notes for developers are located [here](docs/Readme-dev.md).

Found a bug, or need a new feature? Add a new issue describing what you need. 


# Getting started
Following GX, we recommend installing `dq-suite-amsterdam` in a virtual environment. This could be done locally via your IDE, on your compute via a notebook in Databricks, or as part of a workflow. 

1. Run the following command:
```
pip install dq-suite-amsterdam
```

2. Create the `data_quality` schema (and the tables that all results will be written to) by running the SQL notebook located [here](scripts/data_quality_tables.sql). All it needs is the name of the catalog - and the rights to create a schema within that catalog :)


3. Get ready to validate your first table. To do so, define
- `dq_rule_json_path` as a path to a JSON file, formatted in [this](dq_rules_example.json) way
- `df` as a Spark dataframe containing the table that needs to be validated (e.g. via `spark.read.csv` or `spark.read.table`)
- `spark` as a SparkSession object (in Databricks notebooks, this is by default called `spark`)
- `catalog_name` as the name of your catalog ('dpxx_dev' or 'dpxx_prd')
- `table_name` as the name of the table for which a data quality check is required. This name should also occur in the JSON file at `dq_rule_json_path`
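
For illustration, a minimal setup could look like the sketch below. All names and paths are placeholders; adapt them to your own environment.

```python
# Illustrative setup only; all names and paths below are placeholders.
dq_rule_json_path = "/Workspace/path/to/dq_rules.json"  # formatted as in dq_rules_example.json
catalog_name = "dpxx_dev"
table_name = "my_table"  # must also occur in the JSON file at dq_rule_json_path

# `spark` is the SparkSession that Databricks notebooks provide by default.
# The schema name "bronze" is only an example of where the source table might live.
df = spark.read.table(f"{catalog_name}.bronze.{table_name}")  # or spark.read.csv(...), etc.
```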



4. Finally, perform the validation by running (*note*: the library is imported as `dq_suite`, not as `dq_suite_amsterdam`!)

```python
from dq_suite.validation import run_validation

run_validation(
    json_path=dq_rule_json_path,
    df=df, 
    spark_session=spark,
    catalog_name=catalog_name,
    table_name=table_name,
)
```
Note: `run_validation` returns a tuple `(validation_result, highest_severity_level)`:
- `validation_result`: Boolean flag indicating overall success (`True` if all checks pass, `False` otherwise).
- `highest_severity_level`: String indicating the highest severity among failed checks (one of `'fatal'`, `'error'`, `'warning'`, or `'ok'`).
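
For example, the returned tuple can be used to fail a workflow run when checks do not pass. This is a minimal sketch; the control flow is illustrative and not part of the library.

```python
validation_result, highest_severity_level = run_validation(
    json_path=dq_rule_json_path,
    df=df,
    spark_session=spark,
    catalog_name=catalog_name,
    table_name=table_name,
)

# Illustrative handling of the result; adapt the severity handling to your own needs.
if not validation_result and highest_severity_level in ("fatal", "error"):
    raise RuntimeError(
        f"Data quality validation failed with severity '{highest_severity_level}'"
    )
```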

See the documentation of `dq_suite.validation.run_validation` for what other parameters can be passed.


**Geo Validation**

Geo validation enables geometric checks using Databricks ST geospatial functions. It is fully integrated into the existing validation flow, allowing generic and geo rules to be applied together on the same table.

Geo validation can be used to validate, among others:

- Whether geometry values are present and non-empty
- Whether geometries are structurally valid (e.g. no invalid polygons)
- Whether geometry values are of a specific geometry type (e.g. POINT, POLYGON)

1. Your Databricks cluster must run Databricks Runtime 17.1 or above, as ST geospatial functions are only fully supported from this version onwards. For more details, see the [Databricks ST geospatial functions documentation](https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-st-geospatial-functions).

2. When defining rules in Getting started → Step 3, you can enable geo validation by adding the parameter `"rule_type": "geo"` to a rule in your JSON file. An example is available [here](geo_dq_rules_example.json); a rough sketch of the kind of check this enables is shown after this list.

3. Results of geo validation are written to the same `data_quality` schema as generic validation results. If a table includes both generic and geo rules, all results are combined in the output tables.
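
As a rough illustration of the kind of check a geo rule performs, the query below applies a Databricks ST function directly, assuming a GEOMETRY-typed column named `geometry` (the table and column names are placeholders, and exact function availability depends on your Databricks Runtime; dq-suite runs such checks for you as part of the normal validation flow).

```python
# Illustrative only: count rows with a missing or structurally invalid geometry.
# Table and column names are placeholders; ST functions require DBR 17.1+.
invalid_geometries = spark.sql(
    """
    SELECT count(*) AS n_invalid
    FROM dpxx_dev.bronze.my_geo_table
    WHERE geometry IS NULL OR NOT st_isvalid(geometry)
    """
)
invalid_geometries.show()
```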


**Profiling**

Profiling is the process of analyzing a dataset to understand its structure, patterns, and data quality characteristics (such as completeness, uniqueness, or value distributions). 

The profiling functionality in dq_suite generates profiling results and automatically produces a rules JSON file, which can be used as input for validation, making it easier to gain insights and validate data quality.
1. Run the following command:
```
pip install dq-suite-amsterdam
```
2. Create the `data_quality` schema (and the profiling tables that store profiling results) by running the SQL notebook located [here](scripts/data_quality_tables.sql). 
All it needs is the name of the catalog and the rights to create a schema within that catalog. The catalog allows flexible usage across environments (e.g. dev, test, prod).
This step will create the required profiling tables, including:
- `profilingtabel` (table-level profiling results)
- `profilingattribuut` (attribute-level profiling results)
3. Get ready to profile your first table. To do so, define
- `df` as a pandas dataframe containing the table that needs to be profiled (e.g. via `pd.read_csv`)
- `generate_rules` as a Boolean flag that controls whether a rules JSON file is generated. Set it to `False` if you only want profiling without rule generation
- `spark` as a SparkSession object (in Databricks notebooks, this is by default called `spark`)
- `dq_rule_json_path` as the path where the generated rules JSON file will be written; after running the profiling function, it will be formatted in [this](src/dq_suite/profile/dq_rules_example_from_profiling.json) way
- `dataset_name` as the name of the dataset the table belongs to. This name will be placed in the JSON file at `dq_rule_json_path`
- `table_name` as the name of the table for which a data quality check is required. This name will be placed in the JSON file at `dq_rule_json_path`
- `catalog_name` as the name of your catalog ('dpxx_dev' or 'dpxx_prd')
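
For illustration, a minimal setup could look like this (all values are placeholders):

```python
import pandas as pd

# Illustrative values only; replace the paths and names with your own.
df = pd.read_csv("my_table.csv")  # pandas dataframe to be profiled
generate_rules = True  # also generate a rules JSON file
dq_rule_json_path = "/Workspace/path/to/generated_dq_rules.json"
dataset_name = "my_dataset"
table_name = "my_table"
catalog_name = "dpxx_dev"
```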
4. Finally, perform the profiling by running 
```python
from dq_suite.profile.profile import profile_and_create_rules

profile_and_create_rules(
    df=df,
    dataset_name=dataset_name,
    table_name=table_name,
    catalog_name=catalog_name,
    spark_session=spark,
    generate_rules=True,
    rule_path=dq_rule_json_path
)
```

**Result of profiling**

- Profiling results are created as an HTML view.
- The rules JSON file is created at the specified path (if `generate_rules=True`). This file can be edited to refine the rules according to your data validation needs, and can then be used as input for dq_suite validation.
- Profiling tables are created at the table level and include the attributes of each table.
- Geo rules, as described in the Geo Validation section, are automatically generated for geometry columns.
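
Once the generated rules file has been reviewed and edited, it can be fed back into validation. This is a sketch reusing the names defined above; `spark_df` is a hypothetical Spark dataframe of the same table.

```python
from dq_suite.validation import run_validation

# `spark_df` is a Spark dataframe of the profiled table, e.g. via spark.read.table(...)
validation_result, highest_severity_level = run_validation(
    json_path=dq_rule_json_path,  # the rules file written by profile_and_create_rules
    df=spark_df,
    spark_session=spark,
    catalog_name=catalog_name,
    table_name=table_name,
)
```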

For further documentation, see:
- [other functionalities](docs/Readme-other.md)
- [notes for developers](docs/Readme-dev.md)
- [notes for data engineers at Gemeente Amsterdam](https://dev.azure.com/CloudCompetenceCenter/Dataplatform%20en%20Data%20organisatie/_git/vakgroep_data_engineering?path=/docs/03_knowledge_bank/topics/data_quality/data_quality.md&_a=preview) (in Dutch, employees only)


# Known exceptions / issues
- The functions can run on Databricks using a Personal Compute Cluster or using a Job Cluster. 
Using a Shared Compute Cluster will result in an error, as it does not have the permissions that Great Expectations requires.

- Since this project requires Python >= 3.10, the use of Databricks Runtime (DBR) >= 13.3 is needed 
([click](https://docs.databricks.com/en/release-notes/runtime/13.3lts.html#system-environment)). 
Older versions of DBR will result in errors upon install of the `dq-suite-amsterdam` library.

- At the time of writing (late Aug 2024), Great Expectations v1.0.0 has just been released, and is not (yet) compatible with Python 3.12. Hence, make sure you are using the correct version of Python as the interpreter for your project.

- The `run_time` value is defined separately from Great Expectations in `validation.py`. We plan on fixing this when Great Expectations has documented how to access it from the RunIdentifier object.

- Profiling rule condition logic: the current profiling-based rule conditions are placeholders and should be defined and validated by the data teams to ensure they are generic and reusable.

- When using Great Expectations with `ResultFormat.COMPLETE`, the `unexpected_list` is limited to a maximum of 200 values per expectation. This is a limitation imposed by Great Expectations.
