Metadata-Version: 2.4
Name: seamstress-world-models
Version: 1.0.3
Summary: For investigating the manifold geometry of complex dynamical systems and using this information to learn improved world models.
Requires-Python: <=3.13,>=3.11
Requires-Dist: av>=13.0.0
Requires-Dist: glfw>=2.6.2
Requires-Dist: h5py>=3.8
Requires-Dist: hydra-core>=1.3
Requires-Dist: jax
Requires-Dist: matplotlib
Requires-Dist: moviepy>=1.0.3
Requires-Dist: numpy
Requires-Dist: pandas>=2.0.0
Requires-Dist: pyarrow>=11.0.0
Requires-Dist: pygfx==0.15.2
Requires-Dist: scikit-learn
Requires-Dist: scipy
Requires-Dist: tqdm
Requires-Dist: transformers==5.2.0
Requires-Dist: trimesh>=4.11.1
Requires-Dist: turbojpeg==0.0.2
Requires-Dist: wandb>=0.17
Provides-Extra: flash-attn
Requires-Dist: flash-attn; extra == 'flash-attn'
Requires-Dist: ninja; extra == 'flash-attn'
Requires-Dist: packaging; extra == 'flash-attn'
Requires-Dist: psutil; extra == 'flash-attn'
Provides-Extra: jax-cuda
Requires-Dist: jax-cuda12-pjrt>=0.8.1; extra == 'jax-cuda'
Requires-Dist: jax-cuda12-plugin[with-cuda]>=0.8.1; extra == 'jax-cuda'
Requires-Dist: jax[cuda12]>=0.8.1; extra == 'jax-cuda'
Provides-Extra: publish
Requires-Dist: build>=1.2.0; extra == 'publish'
Requires-Dist: twine>=5.0.0; extra == 'publish'
Provides-Extra: torch
Requires-Dist: huggingface-hub>=0.24; extra == 'torch'
Requires-Dist: lpips>=0.1.4; extra == 'torch'
Requires-Dist: pytorch-lightning<3,>=2.2; extra == 'torch'
Requires-Dist: torch; extra == 'torch'
Requires-Dist: torchaudio; extra == 'torch'
Requires-Dist: torchvision; extra == 'torch'
Description-Content-Type: text/markdown

# Seamstress

It's this easy:

```python
# Import the pip-installed package
from seamstress import WorldModel

# Load pretrained weights from Hugging Face
model = WorldModel.from_pretrained("isaac-ronald-ward/seamstress-rotorcraft").eval()

# Pass in the context data, and the intended action sequence
# you want to imagine the outcome of
pred = model.imagine(
    # (context_len, |states_numeric|)
    past_states_numeric=...,  
    # (context_len, image_channels, image_height, image_width)
    past_states_image=..., 
    # (context_len, |actions|)
    past_actions=...,
    future_actions=...
)

# Inspect the imagined outputs
# (future_len, |states_numeric|)
pred["pred_future_states_numeric"]  
# (future_len, image_channels, image_height, image_width)
pred["pred_future_states_image"]    
```
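The `...` placeholders must be arrays matching the annotated shapes. As a minimal sketch with dummy data (the dimension values here are hypothetical for illustration; the real ones depend on the pretrained model's configuration):

```python
import numpy as np

# Hypothetical dimensions, chosen only to illustrate the expected shapes.
context_len, future_len = 16, 8
n_numeric, n_actions = 12, 4
channels, height, width = 3, 64, 64

past_states_numeric = np.zeros((context_len, n_numeric), dtype=np.float32)
past_states_image = np.zeros((context_len, channels, height, width), dtype=np.float32)
past_actions = np.zeros((context_len, n_actions), dtype=np.float32)
future_actions = np.zeros((future_len, n_actions), dtype=np.float32)

# Each array matches the shape annotated in the snippet above.
assert past_states_numeric.shape == (context_len, n_numeric)
assert past_states_image.shape == (context_len, channels, height, width)
```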

Complete the instructions under *Installation* below before proceeding.

For the full inference walkthrough, including loading the shipped demo trajectory and writing predicted videos, see [docs/README.inference.md](docs/README.inference.md).

## Installation

### Git LFS

Git Large File Storage (LFS) is used to manage large files in this repository. After cloning the repository, make sure to install Git LFS and pull the large files:

```bash
git lfs install
git lfs pull
```
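If `git lfs pull` is skipped, the large files remain small text pointers rather than real content. A quick sanity check (a sketch, not part of the package) relies on the fact that an LFS pointer file begins with a `version https://git-lfs.github.com/spec/...` header:

```python
def is_lfs_pointer(path: str) -> bool:
    """Return True if the file looks like an un-pulled Git LFS pointer."""
    with open(path, "rb") as f:
        head = f.read(64)
    return head.startswith(b"version https://git-lfs.github.com/spec/")

# Example: flag any suspiciously un-pulled files (file pattern is illustrative).
# from pathlib import Path
# stale = [p for p in Path(".").rglob("*.h5") if is_lfs_pointer(p)]
```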

### Docker

[Docker](https://docs.docker.com/engine/install/) is used for reproducibility and ease of setup, and should be installed if you don't have it already.

We provide a development container style setup that bind-mounts the repo into the container, so edits on the host are reflected immediately inside the container, and vice versa.

To build and run the GPU container (requires NVIDIA drivers + nvidia-container-toolkit):

```bash
# The build step takes a while the first time round. Subsequent usage only requires the run command. For now, go grab a cuppa!
docker compose -f docker/compose.seamstress.yml build --no-cache seamstress-gpu
docker compose -f docker/compose.seamstress.yml run --rm -it seamstress-gpu bash

# Ensure that your GPUs are visible inside the container with:
nvidia-smi
```
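You can also run the same visibility check from Python inside the container; this helper (a sketch, not part of the package) just wraps `nvidia-smi`:

```python
import shutil
import subprocess

def gpus_visible() -> bool:
    """Return True if nvidia-smi is on PATH and lists at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return result.returncode == 0 and "GPU" in result.stdout

print("GPUs visible:", gpus_visible())
```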

This repository concerns the training of deep learning-based world models, so we recommend using a GPU. That said, we also provide a CPU container for development and testing:

```bash
# The build step takes a while the first time round. Subsequent usage only requires the run command. For now, go grab a cuppa!
docker compose -f docker/compose.seamstress.yml build --no-cache seamstress-cpu
docker compose -f docker/compose.seamstress.yml run --rm -it seamstress-cpu bash
```

Standard Docker commands apply:

```bash
# Leave the container
exit
# List all the containers and find the desired container id
docker ps
# Use the container id to start a new shell in the running container
docker exec -it ae8c43356adf bash
# Stop a running container with the container id
docker stop ae8c43356adf
# Stop all containers
docker stop $(docker ps -q)
```

### Environment variables

Finally, copy `.env.example` to `.env` and fill in the required fields (Weights & Biases and Hugging Face tokens, etc.). Note that nothing under the following blocks:

```bash
# ----------------------------------------------------------------------
# Optional FiGS sidecar integration (GPU-only)
# ----------------------------------------------------------------------
```

```bash
# ----------------------------------------------------------------------
# The following can remain unchanged unless you are tinkering
# ----------------------------------------------------------------------
```

needs changing for standard usage, but feel free to edit those values for tinkering and ablations.
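If you want to inspect the resulting `.env` from Python without extra dependencies, a minimal `KEY=VALUE` parser is enough (a sketch; tools such as `python-dotenv` handle the format more robustly):

```python
from pathlib import Path

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env
```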

## Getting started

We provide the following guides covering different aspects of the project, and recommend always starting with guide 1.

1. [Running the full workflow on a toy environment](docs/README.toy.md)
2. [Loading a pretrained Seamstress model and using it for inference](docs/README.inference.md)
3. [Using 'Flying in Gaussian Splats' for high fidelity quadrotor simulations](docs/README.figs.md)
4. [Training on real world data](docs/README.real_world.md)
5. [Using your own data](docs/README.own_data.md)

## Convenience commands

To delete all the logs and training artefacts in the repo, use:

```bash
rm -rf ./logs/*
rm -rf ./wandb/*
rm -rf ./outputs/*
```
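If you'd rather preview what would be deleted first, the same cleanup can be sketched in Python with a dry-run flag (the directory names match the commands above; this helper is illustrative, not part of the package):

```python
import shutil
from pathlib import Path

def clean_artifacts(root: str = ".", dry_run: bool = True) -> list[str]:
    """List (and, when dry_run=False, delete) the contents of the artifact dirs."""
    removed = []
    for name in ("logs", "wandb", "outputs"):
        base = Path(root) / name
        if not base.is_dir():
            continue
        for entry in base.iterdir():
            removed.append(str(entry))
            if not dry_run:
                if entry.is_dir():
                    shutil.rmtree(entry)
                else:
                    entry.unlink()
    return removed
```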

To delete all runs on Weights & Biases whose names do not include '*', use:

```bash
uv run -- python -m seamstress.utils.wandb_cleanup
```
