Metadata-Version: 2.4
Name: refmatch
Version: 0.2.0
Summary: AI-powered reference track suggestions for music producers
Project-URL: Homepage, https://github.com/AdelElo13/refmatch
Project-URL: Repository, https://github.com/AdelElo13/refmatch
Project-URL: Issues, https://github.com/AdelElo13/refmatch/issues
Author-email: Adel <adel@refmatch.dev>
License-Expression: MIT
License-File: LICENSE
Keywords: audio,mastering,mixing,music,production,reference
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: End Users/Desktop
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Multimedia :: Sound/Audio :: Analysis
Requires-Python: >=3.10
Requires-Dist: click>=8.0.0
Requires-Dist: flask>=3.0.0
Requires-Dist: librosa>=0.10.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: pyloudnorm>=0.1.1
Requires-Dist: rich>=13.0.0
Requires-Dist: soundfile>=0.12.0
Provides-Extra: all
Requires-Dist: faiss-cpu>=1.7.0; extra == 'all'
Requires-Dist: torch>=2.0.0; extra == 'all'
Requires-Dist: transformers>=4.30.0; extra == 'all'
Provides-Extra: clap
Requires-Dist: torch>=2.0.0; extra == 'clap'
Requires-Dist: transformers>=4.30.0; extra == 'clap'
Provides-Extra: dev
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: ruff>=0.4.0; extra == 'dev'
Provides-Extra: faiss
Requires-Dist: faiss-cpu>=1.7.0; extra == 'faiss'
Description-Content-Type: text/markdown

# RefMatch

**AI-powered reference track suggestions for music producers.**

Stop searching for reference tracks. Drop in your audio, get instant matches based on how it actually *sounds* — not just metadata.

[![PyPI](https://img.shields.io/pypi/v/refmatch)](https://pypi.org/project/refmatch/)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://python.org)

## Why RefMatch?

- **Spotify suggests by vibe. RefMatch matches by mix characteristics.** Spectral balance, loudness, dynamics, rhythm — the things that matter when you're mixing.
- **CLAP neural embeddings** understand how your track *sounds*, not just its frequency stats. Hybrid matching combines perceptual + technical similarity.
- **Works offline.** No API calls, no cloud. Your audio stays on your machine.
- **Open source.** The engine is free. Bring your own library or use the built-in seed database.

## Install

```bash
# Core (DSP matching only)
pip install refmatch

# With neural embeddings (recommended)
pip install "refmatch[clap]"   # quoted so shells like zsh don't expand the brackets
```

## Quick Start

```bash
# Analyze your track
refmatch analyze my_track.wav

# Find reference tracks
refmatch match my_track.wav

# Match on specific dimensions
refmatch match my_track.wav --dimension low-end
refmatch match my_track.wav --dimension loudness
refmatch match my_track.wav --dimension brightness
```

## How It Works

### Two matching engines

**DSP Features (43 dimensions)** — always available:
- MFCCs (timbral fingerprint)
- Spectral features (centroid, bandwidth, contrast, rolloff)
- Loudness (integrated LUFS, dynamic range)
- Rhythm (tempo detection)
- Harmony (chroma features, key estimation)
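
Most of these features are standard signal-processing quantities. As a rough illustration (RefMatch itself uses librosa; this numpy-only sketch is not its actual code), here is a spectral centroid, the statistic behind "brightness": the magnitude-weighted mean frequency of the spectrum.

```python
import numpy as np

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    """Magnitude-weighted mean frequency -- the 'brightness' of a signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

sr = 22_050
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 110 * t)    # pure 110 Hz tone
high = np.sin(2 * np.pi * 3520 * t)  # pure 3520 Hz tone

# A pure tone's centroid sits at its own frequency,
# so the brighter signal scores higher.
print(round(spectral_centroid(low, sr)))   # → 110
print(round(spectral_centroid(high, sr)))  # → 3520
```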

**CLAP Neural Embeddings (512 dimensions)** — with `refmatch[clap]`:
- Pretrained audio model ([laion/clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused))
- Captures perceptual similarity — tracks that *sound alike* even with different spectral stats
- Segment-based averaging for full-track analysis
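
The segment-averaging step can be sketched as follows. The `embed_segment` stub below is a stand-in for the real CLAP forward pass, and the segment length is an illustrative choice, not RefMatch's actual parameter:

```python
import numpy as np

def embed_segment(segment: np.ndarray) -> np.ndarray:
    """Stand-in for a CLAP forward pass; returns a deterministic fake
    512-dim embedding so the averaging logic can be demonstrated."""
    seed = abs(int(segment.sum() * 1e6)) % 2**32
    return np.random.default_rng(seed).normal(size=512)

def track_embedding(audio: np.ndarray, sr: int, seg_seconds: int = 10) -> np.ndarray:
    """Embed fixed-length segments, then mean-pool and L2-normalize."""
    hop = seg_seconds * sr
    segments = [audio[i:i + hop] for i in range(0, len(audio), hop)
                if len(audio[i:i + hop]) == hop]   # drop the short tail
    mean = np.stack([embed_segment(s) for s in segments]).mean(axis=0)
    return mean / np.linalg.norm(mean)

sr = 1_000  # toy sample rate to keep the demo fast
audio = np.random.default_rng(0).normal(size=sr * 25)
emb = track_embedding(audio, sr)
print(emb.shape)  # → (512,)
```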

**Hybrid scoring**: 60% CLAP + 40% DSP when both are available. Falls back to DSP-only gracefully.
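
The hybrid rule above can be written in a few lines. The 60/40 weights come from the text; the use of cosine similarity is an assumption for this sketch:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_score(dsp_a, dsp_b, clap_a=None, clap_b=None) -> float:
    """60% CLAP + 40% DSP when embeddings exist; DSP-only otherwise."""
    dsp_sim = cosine(dsp_a, dsp_b)
    if clap_a is None or clap_b is None:
        return dsp_sim  # graceful fallback when CLAP extras aren't installed
    return 0.6 * cosine(clap_a, clap_b) + 0.4 * dsp_sim

rng = np.random.default_rng(0)
dsp_a, dsp_b = rng.normal(size=43), rng.normal(size=43)
clap_a = rng.normal(size=512)
clap_b = clap_a + 0.1 * rng.normal(size=512)  # nearly identical embedding

print(hybrid_score(dsp_a, dsp_b))                  # DSP-only fallback
print(hybrid_score(dsp_a, dsp_b, clap_a, clap_b))  # blended score
```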

### Dimension-specific matching

Focus on what matters for your mix:

| Dimension | What it matches on |
|-----------|-------------------|
| `low-end` | Sub/bass energy, spectral contrast in low bands |
| `loudness` | LUFS, dynamic range, RMS energy |
| `brightness` | Spectral centroid, rolloff, high-frequency contrast |
| `rhythm` | Tempo, transient density |
| `harmony` | Chroma features, key similarity |
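
Conceptually, dimension-specific matching just restricts the distance computation to the feature subset behind one dimension. The index layout below is hypothetical (RefMatch's internal ordering may differ); only the pattern matters:

```python
import numpy as np

# Hypothetical layout of the 43-dim DSP feature vector.
DIMENSION_SLICES = {
    "low-end": slice(0, 6),
    "loudness": slice(6, 9),
    "brightness": slice(9, 14),
    "rhythm": slice(14, 16),
    "harmony": slice(16, 43),
}

def dimension_distance(a: np.ndarray, b: np.ndarray, dimension: str) -> float:
    """Euclidean distance restricted to one dimension's features."""
    idx = DIMENSION_SLICES[dimension]
    return float(np.linalg.norm(a[idx] - b[idx]))

track = np.random.default_rng(1).normal(size=43)
ref = track.copy()
ref[9:14] += 2.0  # perturb only the 'brightness' features

# Only the perturbed dimension registers a distance.
print(dimension_distance(track, ref, "brightness"))  # > 0
print(dimension_distance(track, ref, "loudness"))    # → 0.0
```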

## Manage Your Library

```bash
# Add tracks (CLAP embeddings extracted automatically)
refmatch library add my_reference.wav
refmatch library add ~/Music/References/ --artist "Various" --genre "Hip-Hop"

# List your library
refmatch library list

# Match against your own library only
refmatch match my_track.wav --library

# Remove a track
refmatch library remove 42
```

## Seed Database

RefMatch ships with 500+ CC-licensed tracks across 25 genres from [Jamendo](https://jamendo.com). The seed database loads automatically on first use — no setup needed.

## Supported Formats

WAV, MP3, FLAC, OGG, AIFF, M4A

## For Developers

```bash
# Install with dev dependencies
pip install -e ".[dev,clap]"

# Run tests
pytest

# Lint
ruff check src/ tests/
```

## Roadmap

- [ ] VST3/CLAP plugin for in-DAW matching
- [ ] Natural language queries ("find something with heavy sub-bass and airy vocals")
- [ ] User feedback loop for personalized ranking
- [ ] Segment-based matching (intro/verse/chorus/drop)

## License

MIT
