Metadata-Version: 2.4
Name: zipline-ai
Version: 1.9.1
Summary: CLI tool for the Zipline AI platform
Author-email: Zipline AI <hello@zipline.ai>
License: Apache License 2.0
Project-URL: homepage, https://zipline.ai
Project-URL: documentation, https://docs.zipline.ai
Project-URL: github, https://github.com/zipline-ai/chronon/
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: azure-core==1.38.0
Requires-Dist: azure-identity==1.25.1
Requires-Dist: azure-storage-blob==12.25.1
Requires-Dist: boto3==1.42.34
Requires-Dist: botocore==1.42.34
Requires-Dist: certifi==2026.1.4
Requires-Dist: cffi==2.0.0
Requires-Dist: charset-normalizer==3.4.4
Requires-Dist: click==8.3.1
Requires-Dist: crcmod==1.7
Requires-Dist: croniter==6.0.0
Requires-Dist: cryptography==46.0.3
Requires-Dist: gitdb==4.0.12
Requires-Dist: gitpython==3.1.46
Requires-Dist: google-api-core[grpc]==2.27.0
Requires-Dist: google-auth==2.48.0
Requires-Dist: google-cloud-bigquery-storage==2.36.0
Requires-Dist: google-cloud-core==2.5.0
Requires-Dist: google-cloud-iam==2.21.0
Requires-Dist: google-cloud-storage==2.19.0
Requires-Dist: google-crc32c==1.8.0
Requires-Dist: google-resumable-media==2.8.0
Requires-Dist: googleapis-common-protos[grpc]==1.72.0
Requires-Dist: grpc-google-iam-v1==0.14.3
Requires-Dist: grpcio==1.76.0
Requires-Dist: grpcio-status==1.76.0
Requires-Dist: idna==3.11
Requires-Dist: isodate==0.7.2
Requires-Dist: importlib-resources==6.5.2
Requires-Dist: jmespath==1.1.0
Requires-Dist: markdown-it-py==4.0.0
Requires-Dist: mdurl==0.1.2
Requires-Dist: msal==1.34.0
Requires-Dist: msal-extensions==1.3.1
Requires-Dist: proto-plus==1.27.0
Requires-Dist: protobuf==6.33.4
Requires-Dist: py4j==0.10.9.7
Requires-Dist: pyasn1==0.6.2
Requires-Dist: pyasn1-modules==0.4.2
Requires-Dist: pycparser==3.0
Requires-Dist: pygments==2.19.2
Requires-Dist: pyjwt[crypto]==2.10.1
Requires-Dist: pyspark==3.5.4
Requires-Dist: python-dateutil==2.9.0.post0
Requires-Dist: pytz==2025.2
Requires-Dist: requests==2.32.5
Requires-Dist: rich==14.3.1
Requires-Dist: rsa==4.9.1
Requires-Dist: s3transfer==0.16.0
Requires-Dist: six==1.17.0
Requires-Dist: smmap==5.0.2
Requires-Dist: sqlglot==28.6.0
Requires-Dist: thrift==0.21.0
Requires-Dist: typing-extensions==4.15.0
Requires-Dist: urllib3==2.6.3

### Chronon Python API


#### Overview

The Chronon Python API materializes configs to be run by the Chronon engine. It contains Python helpers for managing a repo of feature and join definitions to be executed by the Chronon Scala engine.


#### User API Overview

##### Sources

Most fields are self-explanatory. Time columns are expected to be in milliseconds (Unix epoch time).

```python
# File <repo>/sources/sample_sources.py
from ai.chronon.query import (
  Query,
  select,
)
from ai.chronon.source import EventSource, EntitySource

# Sample query
Query(
  selects=select(
      user="user_id",
      created_at="created_at",
  ),
  wheres=["has_availability = 1"],
  start_partition="2021-01-01",  # Defines the beginning of time for computations related to the source.
  setups=["...UDF..."],
  time_column="ts",
  end_partition=None,
  mutation_time_column="mutation_timestamp",
  reversal_column="CASE WHEN mutation_type IN ('DELETE', 'UPDATE_BEFORE') THEN true ELSE false END"
)

user_activity = EntitySource(
  snapshot_table="db_exports.table",
  mutation_table="mutations_namespace.table_mutations",
  mutation_topic="mutationsKafkaTopic",
  query=Query(...)
)

website__views = EventSource(
  table="namespace.table",
  topic="kafkaTopicForEvents",
  query=Query(...),
)
```
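Since Chronon expects the `time_column` in epoch milliseconds, source tables often need an explicit conversion upstream. A minimal stdlib sketch of the conversion (the helper name is illustrative, not part of the Chronon API):

```python
from datetime import datetime, timezone

def to_epoch_millis(dt: datetime) -> int:
    """Convert a timezone-aware datetime to Unix epoch milliseconds."""
    return int(dt.timestamp() * 1000)

# Midnight UTC on 2021-01-01, the sample start_partition above.
print(to_epoch_millis(datetime(2021, 1, 1, tzinfo=timezone.utc)))  # 1609459200000
```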


##### Group By (Features)

Group Bys are aggregations over sources that define features. For example:

```python
# File <repo>/group_bys/example_team/example_group_by.py
from ai.chronon.group_by import (
  GroupBy,
  Window,
  TimeUnit,
  Accuracy,
  Operation,
  Aggregations,
  Aggregation,
  DefaultAggregation,
)
from sources import sample_sources

sum_cols = [f"active_{x}_days" for x in [30, 90, 120]]


v0 = GroupBy(
  sources=sample_sources.user_activity,
  keys=["user"],
  aggregations=Aggregations(
    user_active_1_day=Aggregation(operation=Operation.LAST),
    second_feature=Aggregation(
      input_column="active_7_days",
      operation=Operation.SUM,
      windows=[
        Window(n, TimeUnit.DAYS) for n in [3, 5, 9]
      ]
    ),
  ) + [
    Aggregation(
      input_column=col,
      operation=Operation.SUM
    ) for col in sum_cols              # Alternative syntax for defining aggregations.
  ] + [
    Aggregation(
      input_column="device",
      operation=Operation.LAST_K(10)
    )
  ],
  dependencies=[
    "db_exports.table/ds={{ ds }}"      # If not defined will be derived from the Source info.
  ],
  accuracy=Accuracy.SNAPSHOT,          # This could be TEMPORAL for point in time correctness.
  env={
    "backfill": {                      # Execution environment variables for each of the modes for `run.py`
      "EXECUTOR_MEMORY": "4G"
     },
  },
  online=True,                         # True if this group by needs to be uploaded to a KV Store.
  production=False                     # True if this group by is production level.
)
```
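To make the windowed aggregation semantics concrete, here is a plain-Python sketch (not Chronon code) of what a SUM over a trailing N-day window computes for a single key, assuming event timestamps in epoch milliseconds:

```python
DAY_MS = 24 * 60 * 60 * 1000

def windowed_sum(events, query_ts, window_days):
    """Sum values of events whose timestamp falls in the trailing
    window (query_ts - window, query_ts]."""
    start = query_ts - window_days * DAY_MS
    return sum(value for ts, value in events if start < ts <= query_ts)

# Three events at days 1, 4, and 9 (ms since an arbitrary epoch).
events = [(1 * DAY_MS, 10), (4 * DAY_MS, 20), (9 * DAY_MS, 5)]

# A 7-day window ending at day 10 covers days (3, 10]: only the
# day-4 and day-9 events count.
print(windowed_sum(events, query_ts=10 * DAY_MS, window_days=7))  # 25
```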

##### Join

A Join is a collection of feature values for the keys (and times, if applicable) defined on the left (source). Example:

```python
# File <repo>/joins/example_team/example_join.py
from ai.chronon.join import Join, JoinPart
from sources import sample_sources
from group_bys.example_team import example_group_by

v1 = Join(
    left=sample_sources.website__views,
    right_parts=[
        JoinPart(group_by=example_group_by.v0),
    ],
    online=True,       # True if this join will be fetched in production.
    production=False,  # True if this join is production level; production joins cannot use non-production group bys.
    env={"backfill": {"PARALLELISM": "10"}, "streaming": {"STREAMING_ENV_VAR": "VALUE"}},
)
```
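Conceptually, a Join enriches each left-side row with the feature values the group bys computed for its keys, keeping every left row. A simplified pure-Python analogue (the key and feature names are illustrative, not the Chronon runtime):

```python
# Precomputed group-by features keyed by user (illustrative values).
group_by_features = {
    "u1": {"active_7_days_sum": 3},
    "u2": {"active_7_days_sum": 1},
}

left_rows = [
    {"user": "u1", "ts": 1_609_459_200_000},
    {"user": "u3", "ts": 1_609_459_260_000},  # no features for this key
]

# Left-join semantics: every left row survives; unknown keys get nulls.
joined = [
    {**row, **group_by_features.get(row["user"], {"active_7_days_sum": None})}
    for row in left_rows
]
print(joined[0]["active_7_days_sum"])  # 3
print(joined[1]["active_7_days_sum"])  # None
```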

##### Pre-commit Setup

1. Install pre-commit and other dev libraries:
```shell
pip install -r requirements/dev.txt
```
2. Run the following command under `api/python` to install the git hook scripts:
```shell
pre-commit install
```

To support more pre-commit hooks, add them to the `.pre-commit-config.yaml` file.
