Metadata-Version: 2.4
Name: sing4me
Version: 2.0.0
Summary: Python package for online and laboratory singing experiments
Project-URL: Homepage, https://gitlab.com/computational-audition/sing4me
Project-URL: Repository, https://gitlab.com/computational-audition/sing4me
Project-URL: Issues, https://gitlab.com/computational-audition/sing4me/-/work_items
Author-email: Manuel Anglada-Tort <manel.anglada.tort@gmail.com>, Nori Jacoby <nori.jacoby@ae.mpg.de>
Maintainer-email: Manuel Anglada-Tort <manel.anglada.tort@gmail.com>, Frank Höger <fh337@cornell.edu>
License: MIT License
        
        Copyright (c) 2026, Manuel Anglada-Tort, Peter Harrison, and Nori Jacoby
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
License-File: LICENSE
Keywords: audio,experiments,music,pitch,psychology,singing
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Multimedia :: Sound/Audio :: Analysis
Classifier: Topic :: Scientific/Engineering
Requires-Python: >=3.9
Requires-Dist: click
Requires-Dist: matplotlib
Requires-Dist: numpy
Requires-Dist: praat-parselmouth
Requires-Dist: scipy
Requires-Dist: textgrid
Provides-Extra: dev
Requires-Dist: black; extra == 'dev'
Requires-Dist: flake8; extra == 'dev'
Provides-Extra: notebook
Requires-Dist: jupyter; extra == 'notebook'
Requires-Dist: sounddevice; extra == 'notebook'
Requires-Dist: soundfile; extra == 'notebook'
Provides-Extra: publish
Requires-Dist: build; extra == 'publish'
Requires-Dist: twine; extra == 'publish'
Description-Content-Type: text/markdown

# sing4me — Singing Experiments

![Logo](https://gitlab.com/computational-audition/sing4me/-/raw/main/logo.png)

**Manuel Anglada-Tort, Peter Harrison, and Nori Jacoby**\
[Computational Auditory Perception Group](https://www.aesthetics.mpg.de/en/research/research-group-computational-auditory-perception.html)\
Max Planck Institute for Empirical Aesthetics

_sing4me_ is a Python package for singing extraction, optimized for laboratory and online singing experiments. It provides signal-processing code for sung-pitch extraction, plus a bundled Praat `syllable_extract.praat` script for syllable / rhythm extraction.
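For intuition, the core of sung-pitch extraction is estimating a fundamental frequency for each frame of audio. sing4me delegates the real work to Praat via `praat-parselmouth`, but the underlying idea can be sketched in a few lines of standalone NumPy using autocorrelation peak picking. This is an illustrative toy, not the package's actual algorithm:

```python
import numpy as np

def estimate_f0_autocorr(frame, sr, fmin=80.0, fmax=600.0):
    """Estimate the fundamental frequency (Hz) of one audio frame.

    Illustrative only: real extractors such as Praat add voicing
    decisions, octave-error correction, and peak interpolation.
    """
    frame = frame - frame.mean()
    # Autocorrelation, keeping only non-negative lags.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)  # shortest period considered
    lag_max = int(sr / fmin)  # longest period considered
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / best_lag

# A 220 Hz test tone (roughly A3, well inside the singing range).
sr = 22050
t = np.arange(int(0.5 * sr)) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
print(round(estimate_f0_autocorr(tone, sr), 1))
```

The estimate is quantized to whole-sample lags (here 22050 / 100 ≈ 220.5 Hz), which is one reason production extractors interpolate around the autocorrelation peak.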

## Quick Links

- [Source code](https://gitlab.com/computational-audition/sing4me)
- [Issues](https://gitlab.com/computational-audition/sing4me/-/work_items)

## Citation

Please cite this package if you use it:

> Anglada-Tort, M., Harrison, P. M., Lee, H., & Jacoby, N. (2023). Large-scale iterated singing experiments reveal oral transmission mechanisms underlying music evolution. *Current Biology*. <https://doi.org/10.1016/j.cub.2023.02.070>

**Analysis code and datasets** supporting the paper (2023): <https://doi.org/10.17605/OSF.IO/UANGD>

## Installation

### Prerequisites

- Python 3.9 or newer (tested with 3.9–3.13)
- macOS (primary testing platform)

### Setting up a Virtual Environment

```bash
pip3 install virtualenv virtualenvwrapper

export WORKON_HOME=$HOME/.virtualenvs
mkdir -p $WORKON_HOME
export VIRTUALENVWRAPPER_PYTHON=$(which python3)
source $(which virtualenvwrapper.sh)

mkvirtualenv sing4me --python $(which python3)
```

The virtual environment is now active. To activate it later:

```bash
workon sing4me
```

### Installing sing4me

To install the published package from PyPI:

```bash
pip install sing4me
```

To install from a source checkout in editable mode (this also installs the runtime dependencies declared in `pyproject.toml`):

```bash
git clone git@gitlab.com:computational-audition/sing4me.git
cd sing4me
pip install -e .
```

Optional extras:

- `dev` — flake8, black
- `notebook` — jupyter, sounddevice, soundfile (needed for the demo notebooks)
- `publish` — build, twine (only needed if you intend to build and upload a release to PyPI)

Install an extra with, e.g.:

```bash
pip install -e ".[notebook]"
```

### Verify installation

```bash
sing4me --version
```
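If the CLI entry point is not on your `PATH` (as can happen in some notebook environments), you can also check the installation from Python using only the standard library. `installed_version` is just a hypothetical helper name for this sketch:

```python
import importlib.metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return importlib.metadata.version(dist_name)
    except importlib.metadata.PackageNotFoundError:
        return None

print(installed_version("sing4me"))  # prints the version string, or None
```

This queries the same distribution metadata that `pip show sing4me` reads, so it works regardless of how the package was installed.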

## Running Demos

The demo notebooks in `demos/` rely on the local `input/` directory of sample audio. Those audio assets are **not shipped with the PyPI distribution** — clone the repository if you want to run the demos:

```bash
git clone git@gitlab.com:computational-audition/sing4me.git
cd sing4me
pip install -e ".[notebook]"
jupyter notebook
```

Then open one of:

- `demos/singing_demo.ipynb`
- `demos/debug_singing_extract.ipynb`

## Sample Audio

The directory `sing4me/tests/` in the source checkout contains a large collection of sample `.wav`, `.png`, and `.txt` files used for manual and pilot inspection. These samples are **kept in the repository for development but are not shipped with the PyPI distribution** (they would exceed PyPI's per-file limits). To use them, work from a source checkout.
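To point your own scripts at those samples, a small standard-library helper can enumerate them from a checkout. The layout below (`sing4me/tests/`) matches the path mentioned above, and `list_sample_wavs` is a hypothetical name for illustration:

```python
from pathlib import Path

def list_sample_wavs(checkout_root):
    """Return sample recording filenames under sing4me/tests/ in a checkout."""
    sample_dir = Path(checkout_root) / "sing4me" / "tests"
    # Recursive glob picks up samples in subdirectories as well.
    return sorted(p.name for p in sample_dir.glob("**/*.wav"))
```

For example, `list_sample_wavs(".")` run from the repository root lists every `.wav` sample, including those in subdirectories such as `good_2int/`.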

A simple helper called `generate_pilot_analysis_suite` will generate plots and analysis for a directory of recordings:

```python
import os

from sing4me import generate_pilot_analysis_suite  # adjust if the helper lives in a submodule

here = os.path.abspath(__file__)  # this script's path, inside the source checkout

generate_pilot_analysis_suite(
    audio_dir=os.path.dirname(here) + "/tests/good_2int/",
)
```

## License

MIT License

## Future Improvements

- [ ] Rename / clean up `sing4me/tests/test_sing4me.py` — it is a misleadingly named manual debug script, not a real pytest test
- [ ] Add a tag-triggered PyPI publish job to the GitLab CI pipeline (using PyPI Trusted Publishing) so releases don't need to be uploaded from a developer's local venv
- [ ] Automate creation of a GitLab Release (with changelog and built artifacts) on every tagged version
- [ ] Add type hints and a `py.typed` marker
- [ ] Add code-formatting and linting configuration (black/ruff/flake8)
- [ ] Add `CONTRIBUTING.md` and `CODE_OF_CONDUCT.md`
