Metadata-Version: 2.4
Name: face-rhythm
Version: 0.3.2
Summary: A Python package for extracting and decomposing rhythmic facial movements from video.
Author: Rich Hakim
License: MIT License
        
        Copyright (c) 2020-2026 Rich Hakim
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/RichieHakim/face-rhythm
Project-URL: Documentation, https://face-rhythm.readthedocs.io
Project-URL: Repository, https://github.com/RichieHakim/face-rhythm
Project-URL: Issues, https://github.com/RichieHakim/face-rhythm/issues
Project-URL: Changelog, https://github.com/RichieHakim/face-rhythm/releases
Keywords: neuroscience,neuroimaging,machine learning,facial behavior,optical flow
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy<3,>=2
Requires-Dist: tensorly<1.0,>=0.8.1
Requires-Dist: opencv_contrib_python<5,>=4.10.0.84
Requires-Dist: matplotlib>=3.8
Requires-Dist: scikit_image>=0.24
Requires-Dist: pyyaml
Requires-Dist: tqdm
Requires-Dist: h5py>=3.11
Requires-Dist: Pillow>=10.3
Requires-Dist: torchcodec; sys_platform != "win32"
Requires-Dist: decord2; sys_platform != "win32"
Requires-Dist: eva_decord; sys_platform == "win32"
Requires-Dist: natsort
Requires-Dist: pandas>=2.2.2
Requires-Dist: tables>=3.10.1
Requires-Dist: einops>=0.8
Requires-Dist: torch>=2.3
Requires-Dist: nvidia_ml_py3
Requires-Dist: py_cpuinfo
Requires-Dist: GPUtil
Requires-Dist: psutil
Requires-Dist: requests
Requires-Dist: ffmpeg-python
Requires-Dist: vqt
Requires-Dist: plotly>=6
Requires-Dist: anywidget
Requires-Dist: jupyter
Requires-Dist: notebook<7
Requires-Dist: ipykernel
Requires-Dist: ipywidgets>=8
Requires-Dist: jupyterlab_widgets>=3
Requires-Dist: ipympl
Provides-Extra: gui
Provides-Extra: notebooks
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: xxhash; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx; extra == "docs"
Requires-Dist: sphinx_rtd_theme<3.0,>=2.0; extra == "docs"
Requires-Dist: myst-parser; extra == "docs"
Requires-Dist: sphinx_copybutton; extra == "docs"
Provides-Extra: multisession
Requires-Dist: romatch-roicat>=0.1.2.post1; extra == "multisession"
Provides-Extra: all
Requires-Dist: pytest; extra == "all"
Requires-Dist: pytest-cov; extra == "all"
Requires-Dist: xxhash; extra == "all"
Requires-Dist: sphinx; extra == "all"
Requires-Dist: sphinx_rtd_theme<3.0,>=2.0; extra == "all"
Requires-Dist: myst-parser; extra == "all"
Requires-Dist: sphinx_copybutton; extra == "all"
Requires-Dist: romatch-roicat>=0.1.2.post1; extra == "all"
Dynamic: license-file

# Welcome to face-rhythm

[![PyPI version](https://badge.fury.io/py/face-rhythm.svg)](https://badge.fury.io/py/face-rhythm)
[![Downloads](https://pepy.tech/badge/face-rhythm)](https://pepy.tech/project/face-rhythm)
[![Python versions](https://img.shields.io/pypi/pyversions/face-rhythm.svg)](https://pypi.org/project/face-rhythm/)
[![build](https://github.com/RichieHakim/face-rhythm/actions/workflows/build.yml/badge.svg)](https://github.com/RichieHakim/face-rhythm/actions/workflows/build.yml)
[![Documentation Status](https://readthedocs.org/projects/face-rhythm/badge/?version=latest)](https://face-rhythm.readthedocs.io/en/latest/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

- **Documentation:** [https://face-rhythm.readthedocs.io](https://face-rhythm.readthedocs.io)
- **Preprint:** [Hakim et al. (2025), *bioRxiv*](https://doi.org/10.1101/2025.09.10.675423)
- **Issues / support:** [GitHub Issues](https://github.com/RichieHakim/face-rhythm/issues)

## What is face-rhythm?

face-rhythm is a Python package that turns videos of facial (or other) behavior into a small set of interpretable behavioral components.

**Why use face-rhythm?**
- **Unsupervised.** No labels, no model zoo.
- **Interpretable.** Each component is a (space × frequency × time) factor
  you can plot and read off directly.

## How to use it

**Interactive notebooks:**

- [`demo_pipeline.ipynb`](https://github.com/RichieHakim/face-rhythm/blob/release/notebooks/demo_pipeline.ipynb)
  — end-to-end demo on a single session. Start here.
- [`demo_set_rois_multisession.ipynb`](https://github.com/RichieHakim/face-rhythm/blob/release/notebooks/demo_set_rois_multisession.ipynb)
  — draw and align ROIs across multiple sessions of the same subject.
- [`demo_event_alignment.ipynb`](https://github.com/RichieHakim/face-rhythm/blob/release/notebooks/demo_event_alignment.ipynb)
  — align extracted factors to event timestamps and view trial-averaged
  traces.

**Command line** for batch runs across many sessions:
```shell
python scripts/run_pipeline_basic.py --path_params params.json --directory_save /path/to/project/
```
`scripts/params_pipeline_basic.json` is a ready-to-edit template.
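
The same batch runs can also be driven from Python. Here is a minimal sketch, assuming one subdirectory of videos per session; the directory layout and paths are hypothetical, while `fr.pipelines.pipeline_basic` and the params keys are the same ones used in the [Quick start](#quick-start) below:
```python
import copy
import json
from pathlib import Path

import face_rhythm as fr

# Load the ready-to-edit template shipped in the repo.
with open("scripts/params_pipeline_basic.json", "r") as f:
    params_template = json.load(f)

dir_sessions = Path("/path/to/sessions/")  # hypothetical: one subdir of videos per session
dir_projects = Path("/path/to/projects/")  # hypothetical: one output project per session

for dir_videos in sorted(p for p in dir_sessions.iterdir() if p.is_dir()):
    params = copy.deepcopy(params_template)
    params["project"]["directory_project"] = str(dir_projects / dir_videos.name)
    params["paths_videos"]["directory_videos"] = str(dir_videos)
    params["ROIs"]["initialize"]["path_file"] = "/path/to/ROIs.h5"
    fr.pipelines.pipeline_basic(params)
```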

**Python API:** see [Quick start](#quick-start) below, or the
[full API reference](https://face-rhythm.readthedocs.io/en/latest/api.html).

## Installation

### 0. Requirements

- [Anaconda](https://www.anaconda.com/distribution/) or
  [Miniconda](https://docs.conda.io/en/latest/miniconda.html) or [Mamba](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html)

### 1. Create a conda environment

```shell
conda create -n face_rhythm python=3.12
conda activate face_rhythm
python -m pip install --upgrade pip
```

Activate the env (`conda activate face_rhythm`) each time you use
face-rhythm.

### 2. Install video packages

**Linux:**
```shell
conda install -c conda-forge 'torchcodec=*=cpu*' ffmpeg libstdcxx-ng
```

**macOS:**
```shell
conda install -c conda-forge 'torchcodec=*=cpu*' ffmpeg
```

**Windows:** skip this step. `torchcodec` does not officially support Windows; installing it often works, but is not guaranteed. Unless you need very fast GPU decoding, use the `'decord'` backend instead.
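
To pick the backend explicitly from Python, something along these lines should work. The argument names in this sketch are assumptions (check the [API reference](https://face-rhythm.readthedocs.io/en/latest/api.html)); only the `BufferedVideoReader` class and the `'decord'` backend string come from this README:
```python
import face_rhythm as fr

# Hypothetical sketch: argument names are assumptions, not the confirmed API.
reader = fr.helpers.BufferedVideoReader(
    paths_videos=[r"C:\path\to\video.avi"],  # hypothetical path
    backend="decord",  # assumption: the reader exposes a backend kwarg
)
```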

### 3. Install face-rhythm

```shell
pip install face-rhythm
```

For headless servers, GPU acceleration, and installation troubleshooting,
see the [installation docs](https://face-rhythm.readthedocs.io/en/latest/installation.html).

### 4. Clone the repo to get the notebooks

```shell
git clone https://github.com/RichieHakim/face-rhythm.git
```
<!-- end-install -->

## Quick start

<!-- start-quickstart -->
```python
import json
import face_rhythm as fr

with open("params_pipeline_basic.json", "r") as f:
    params = json.load(f)

params["project"]["directory_project"] = "/path/to/new/project/"
params["paths_videos"]["directory_videos"] = "/path/to/videos/"
params["ROIs"]["initialize"]["path_file"] = "/path/to/ROIs.h5"

results = fr.pipelines.pipeline_basic(params)
```

Copy [`scripts/params_pipeline_basic.json`](scripts/params_pipeline_basic.json)
as a template, edit the three paths, and run. Results land in the project
directory as HDF5 files plus summary plots.
<!-- end-quickstart -->
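
The exact file names and dataset layout inside the project directory aren't specified here, so treat the path below as a placeholder; inspecting any of the saved HDF5 files is plain `h5py`:
```python
import h5py

# Placeholder path: point this at one of the HDF5 files the pipeline
# writes into your project directory.
with h5py.File("/path/to/new/project/results.h5", "r") as f:
    # Print every group/dataset name, plus shapes for datasets.
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```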

## Upgrading

```shell
pip install --upgrade face-rhythm
```

To update the cloned notebooks/scripts: `cd face-rhythm && git pull`.

## Pipeline at a glance

1. Read the video frames ([`face_rhythm.helpers.BufferedVideoReader`](https://face-rhythm.readthedocs.io/en/latest/api.html)).
2. Draw ROIs that pick (a) where to track and (b) what region to crop
   ([`face_rhythm.rois`](https://face-rhythm.readthedocs.io/en/latest/api.html)).
3. Track a dense grid of points via optical flow
   ([`face_rhythm.point_tracking`](https://face-rhythm.readthedocs.io/en/latest/api.html)).
4. Compute a spectrogram for each point's trajectory
   ([`face_rhythm.spectral_analysis`](https://face-rhythm.readthedocs.io/en/latest/api.html)).
5. Factorize the (points × frequency × time) tensor with non-negative TCA
   ([`face_rhythm.decomposition`](https://face-rhythm.readthedocs.io/en/latest/api.html));
   a toy version of this step is sketched below.
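
To make step 5 concrete, here is a self-contained toy version of the final factorization using `tensorly` (already a face-rhythm dependency). The tensor is random and the rank is arbitrary; face-rhythm's own `decomposition` module performs this on the real stacked spectrograms:
```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Toy (points × frequency × time) tensor standing in for the stacked spectrograms.
tensor = tl.tensor(np.random.rand(100, 36, 500))

# Rank-8 non-negative CP/TCA decomposition (rank chosen arbitrarily here).
weights, factors = non_negative_parafac(tensor, rank=8, n_iter_max=100)

points, freqs, times = factors  # shapes: (100, 8), (36, 8), (500, 8)
print([f.shape for f in factors])
```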

## GPU acceleration (optional)

face-rhythm runs on the CPU by default; complete the CPU installation above before adding any of the GPU options below.

**PyTorch compute:** set `project.use_GPU: true` in your params. Check CUDA
with:
```shell
python -c "import torch; print(torch.cuda.is_available())"
```
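
Or, continuing from the `params` dict in the [Quick start](#quick-start) above (the `project.use_GPU` key is the one named here; the rest is plain PyTorch):
```python
import torch

# Turn on GPU compute only when CUDA is actually usable in this environment.
params["project"]["use_GPU"] = torch.cuda.is_available()
```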

**OpenCV CUDA:** build OpenCV plus `opencv_contrib` with CUDA enabled, then
make sure that build is the `cv2` imported in this environment (see the check
below). Useful links:
[OpenCV CUDA build options](https://docs.opencv.org/4.x/db/d05/tutorial_config_reference.html#cuda-support)
and [opencv_contrib](https://github.com/opencv/opencv_contrib).
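
To confirm which `cv2` you are importing and whether it was built with CUDA (both calls exist in any standard OpenCV 4.x build; a stock pip wheel reports zero devices):
```python
import cv2

print(cv2.__file__)  # should point at your CUDA-enabled build, not a pip wheel
# Returns 0 when OpenCV was built without CUDA or no NVIDIA GPU is visible.
print(cv2.cuda.getCudaEnabledDeviceCount())
```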

**NVDEC video decoding** (experimental): on Linux systems with an NVIDIA GPU, install a CUDA build of `torchcodec`, then pass `device='cuda'` when constructing video readers:
```shell
conda install -c conda-forge 'torchcodec=*=cuda130*' ffmpeg libstdcxx-ng
```
Use `cuda126*`, `cuda129*`, or `cuda130*` to match your driver. Useful
links: [TorchCodec CUDA decoding](https://meta-pytorch.org/torchcodec/stable/generated_examples/decoding/basic_cuda_example.html)
and [NVIDIA Video Codec SDK](https://developer.nvidia.com/video-codec-sdk).
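
To sanity-check NVDEC decoding on its own before running it through face-rhythm, `torchcodec`'s documented `VideoDecoder` takes a `device` argument (see the CUDA decoding link above; the video path is a placeholder):
```python
from torchcodec.decoders import VideoDecoder

decoder = VideoDecoder("/path/to/video.mp4", device="cuda")  # placeholder path

frame = decoder[0]                # first frame, decoded on the GPU
print(frame.shape, frame.device)  # e.g. torch.Size([3, H, W]), cuda:0
```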

## Citation

If you use face-rhythm in your research, please cite our preprint:

> Hakim et al. (2025). Spectral envelopes of facial movements predict
> intention, cortical representations, and neural prosthetic control.
> *bioRxiv*. https://doi.org/10.1101/2025.09.10.675423

BibTeX and a machine-readable `CITATION.cff` are at the root of the repo.

## Contributing

Bug reports, feature requests, and pull requests are welcome. Please open
an [issue](https://github.com/RichieHakim/face-rhythm/issues) before
submitting substantial changes.

## License

MIT — see [LICENSE](LICENSE).
