Metadata-Version: 2.4
Name: qce
Version: 0.1.0
Summary: Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors.
Author: Benedikt Fesl
License-Expression: BSD-3-Clause
Project-URL: Homepage, https://github.com/benediktfesl/quantized-channel-estimation
Project-URL: Repository, https://github.com/benediktfesl/quantized-channel-estimation
Project-URL: Issues, https://github.com/benediktfesl/quantized-channel-estimation/issues
Keywords: channel estimation,quantization,Bussgang,GMM,MFA,wireless communications,signal processing,complex-valued
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: Scientific/Engineering :: Mathematics
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.23
Requires-Dist: scipy>=1.9
Requires-Dist: scikit-learn>=1.3
Requires-Dist: gmm-estimator>=0.1.0
Requires-Dist: mfa-estimator>=0.1.1
Dynamic: license-file

# qce

[![Python](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://www.python.org/)
[![License: BSD-3-Clause](https://img.shields.io/badge/License-BSD--3--Clause-blue.svg)](LICENSE)
[![Package](https://img.shields.io/badge/package-PyPI-informational.svg)](https://pypi.org/project/qce/)

Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors.

`qce` provides reusable Python building blocks for complex-valued channel estimation with quantized observations. It contains scalar quantizers, Bussgang and quantized-covariance utilities, covariance recovery routines, covariance generators, and estimators for quantized linear observation models.

The package is a clean, reusable implementation inspired by the research code accompanying

> B. Fesl, N. Turan, B. Böck, and W. Utschick, “Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models,” in *IEEE Transactions on Signal Processing*, vol. 72, pp. 1475–1490, 2024.

[[IEEE](https://ieeexplore.ieee.org/abstract/document/10454252)] [[arXiv](https://arxiv.org/abs/2309.04014)] [[Legacy code](https://github.com/benediktfesl/Quantized_Channel_Estimation)]

## ✨ Highlights

- Quantized channel estimation for complex-valued linear observation models
- Uniform midrise and Lloyd-Max scalar quantizers
- One-bit arcsine-law covariance utilities
- Multi-bit Bussgang covariance approximations with exact scalar quantized variances on the diagonal
- Covariance recovery from quantized complex-valued samples
- Linear baselines: least-squares and Bussgang-LMMSE estimators
- Mixture-prior estimators: Bussgang-GMM and Bussgang-MFA
- Oracle and trained-prior examples with SNR-vs-MSE plots
- Random covariance generators for simulation and testing
- Integration with [cplx-gmm](https://pypi.org/project/cplx-gmm/), [cplx-mfa](https://pypi.org/project/cplx-mfa/), [gmm-estimator](https://pypi.org/project/gmm-estimator/), and [mfa-estimator](https://pypi.org/project/mfa-estimator/)
- Modern Python package layout with `pyproject.toml`, `uv`, `pytest`, and `ruff`

## 📦 Installation

Install from PyPI:

```bash
pip install qce
```

or with `uv`:

```bash
uv add qce
```

For development, clone the repository and install the development environment:

```bash
git clone https://github.com/benediktfesl/quantized-channel-estimation.git
cd quantized-channel-estimation
uv sync --group dev
```

Run tests and checks:

```bash
uv run ruff check .
uv run pytest
```

## 🚀 Quick Start

### Bussgang-LMMSE from quantized observations

```python
import numpy as np

from qce.estimators import BussgangLMMSEEstimator
from qce.quantizers import (
    bussgang_matrix,
    quantized_covariance,
    uniform_midrise_quantizer,
    uniform_quantization_step,
)

rng = np.random.default_rng(0)

n_dim = 8
snr_db = 10.0
n_bits = 3

A = np.eye(n_dim, dtype=complex)
C_h = np.eye(n_dim, dtype=complex)
noise_variance = 10.0 ** (-snr_db / 10.0)
C_y = A @ C_h @ A.conj().T + noise_variance * np.eye(n_dim)

quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

B = bussgang_matrix(C_y, n_bits=n_bits, snr_db=snr_db)
C_r = quantized_covariance(C_y, n_bits=n_bits, snr_db=snr_db, quantizer=quantizer)

estimator = BussgangLMMSEEstimator.from_bussgang(
    measurement_matrix=A,
    channel_covariance=C_h,
    bussgang_matrix=B,
    quantized_observation_covariance=C_r,
)

y = (
    rng.standard_normal((4, n_dim)) + 1j * rng.standard_normal((4, n_dim))
) / np.sqrt(2.0)
r = quantizer.quantize(np.real(y)) + 1j * quantizer.quantize(np.imag(y))

h_hat = estimator.estimate(r)
```

### Bussgang-GMM with a fitted complex-valued GMM prior

```python
import numpy as np

from qce.estimators import BussgangGmmEstimator
from qce.quantizers import uniform_midrise_quantizer, uniform_quantization_step

rng = np.random.default_rng(0)

snr_db = 10.0
n_bits = 3
n_dim = 8

h_train = (
    rng.standard_normal((2_000, n_dim)) + 1j * rng.standard_normal((2_000, n_dim))
) / np.sqrt(2.0)

estimator = BussgangGmmEstimator(
    n_components=4,
    covariance_type="full",
    zero_mean=True,
    random_state=0,
    n_init=1,
    max_iter=100,
)
estimator.fit(h_train)

noise_covariance = 10.0 ** (-snr_db / 10.0) * np.eye(n_dim, dtype=complex)
quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

r = quantizer.quantize(np.real(h_train[:4])) + 1j * quantizer.quantize(
    np.imag(h_train[:4])
)

h_hat = estimator.estimate_quantized(
    y=r,
    noise_covariance=noise_covariance,
    observation_matrix=np.eye(n_dim, dtype=complex),
    n_bits=n_bits,
    quantizer=quantizer,
    quantizer_kind="uniform",
    snr_db=snr_db,
)
```

### Bussgang-MFA with a fitted complex-valued MFA prior

```python
import numpy as np

from qce.estimators import BussgangMfaEstimator
from qce.quantizers import uniform_midrise_quantizer, uniform_quantization_step

rng = np.random.default_rng(0)

snr_db = 10.0
n_bits = 3
n_dim = 8

h_train = (
    rng.standard_normal((2_000, n_dim)) + 1j * rng.standard_normal((2_000, n_dim))
) / np.sqrt(2.0)

estimator = BussgangMfaEstimator(
    n_components=4,
    latent_dim=2,
    zero_mean=True,
    random_state=0,
    max_iter=100,
    verbose=False,
)
estimator.fit(h_train)

Cn = 10.0 ** (-snr_db / 10.0) * np.eye(n_dim, dtype=complex)
quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

r = quantizer.quantize(np.real(h_train[:4])) + 1j * quantizer.quantize(
    np.imag(h_train[:4])
)

h_hat = estimator.estimate_quantized(
    y=r,
    Cn=Cn,
    A=np.eye(n_dim, dtype=complex),
    n_bits=n_bits,
    quantizer=quantizer,
    quantizer_kind="uniform",
    snr_db=snr_db,
)
```

## 🧩 Estimation Model

The package considers a quantized complex-valued linear observation model

```text
r = Q(y) = Q(A h + n)
```

where:

| Symbol | Description |
|---|---|
| `h` | Unknown complex-valued channel or signal vector. |
| `A` | Known linear observation matrix. |
| `n` | Zero-mean complex Gaussian observation noise. |
| `y` | Unquantized observation. |
| `r = Q(y)` | Quantized observation. |

For one-bit quantization, `qce` uses the complex arcsine relation to model the covariance of the quantized observations. For multi-bit quantization, `qce` uses Bussgang-linearized observation models.
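The one-bit case can be illustrated with plain NumPy, independent of the `qce` API. The sketch below applies the complex arcsine law to a normalized covariance and checks it against one-bit quantized samples; the AR(1)-style covariance is a hypothetical example chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim = 4

# Hypothetical complex AR(1) covariance for illustration.
delta = np.arange(n_dim)[:, None] - np.arange(n_dim)[None, :]
C_y = 0.7 ** np.abs(delta) * np.exp(0.3j * delta)

# Normalize so the diagonal equals one.
d = np.sqrt(np.real(np.diag(C_y)))
C_bar = C_y / np.outer(d, d)

# Complex arcsine law for r = (sign(Re y) + 1j sign(Im y)) / sqrt(2).
C_r = (2.0 / np.pi) * (np.arcsin(np.real(C_bar)) + 1j * np.arcsin(np.imag(C_bar)))

# Monte Carlo check: draw y ~ CN(0, C_y) and quantize each real dimension.
L = np.linalg.cholesky(C_y)
w = (
    rng.standard_normal((200_000, n_dim)) + 1j * rng.standard_normal((200_000, n_dim))
) / np.sqrt(2.0)
y = w @ L.T  # rows have covariance C_y
r = (np.sign(np.real(y)) + 1j * np.sign(np.imag(y))) / np.sqrt(2.0)
C_r_mc = r.T @ r.conj() / r.shape[0]  # empirical E[r r^H]
```

With this normalization the arcsine-law covariance has a unit diagonal, and the empirical quantized covariance matches it entrywise.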

For a Gaussian mixture prior

```text
p(h) = sum_k pi_k CN(h; mu_k, C_k),
```

`qce` builds component-wise quantized observation models:

```text
C_y,k = A C_k A^H + C_n
B_k   = Bussgang(C_y,k)
C_r,k = Cov(Q(y) | k)
```

The component-wise estimate has the LMMSE form

```text
h_hat_k = mu_k + C_hr,k C_r,k^{-1} (r - E[r | k]),
```

and the final estimate combines the component-wise estimates using posterior component probabilities in the quantized observation domain.
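The combination step can be sketched in plain NumPy. The toy below uses a hypothetical two-component zero-mean mixture prior and the unquantized special case `A = I`, `r = y`, so that `C_r,k = C_k + C_n` and `C_hr,k = C_k`; the quantized path replaces these quantities with their Bussgang counterparts:

```python
import numpy as np

rng = np.random.default_rng(1)
n_dim, n_comp = 4, 2

# Hypothetical two-component zero-mean mixture prior (illustration only).
weights = np.array([0.6, 0.4])
C_k = np.stack([np.eye(n_dim), 3.0 * np.eye(n_dim)]).astype(complex)
noise_var = 0.1

# Toy observation (stands in for r in the model above).
r = (rng.standard_normal(n_dim) + 1j * rng.standard_normal(n_dim)) / np.sqrt(2.0)

h_hat_k = np.zeros((n_comp, n_dim), dtype=complex)
log_lik = np.zeros(n_comp)
for k in range(n_comp):
    C_r_k = C_k[k] + noise_var * np.eye(n_dim)  # A = I, unquantized special case
    h_hat_k[k] = C_k[k] @ np.linalg.solve(C_r_k, r)  # LMMSE with mu_k = 0
    # Complex Gaussian log-likelihood of r under component k (constants dropped).
    _, logdet = np.linalg.slogdet(C_r_k)
    log_lik[k] = -logdet - np.real(r.conj() @ np.linalg.solve(C_r_k, r))

# Posterior component probabilities in the observation domain.
log_post = np.log(weights) + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

h_hat = post @ h_hat_k  # final estimate: responsibility-weighted combination
```

Subtracting the maximum before exponentiating keeps the responsibility computation numerically stable.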

## 🧠 Estimator API

### Linear estimators

| Class | Description |
|---|---|
| `LeastSquaresEstimator` | Least-squares estimator for linear observation models. |
| `BussgangLMMSEEstimator` | LMMSE estimator for unquantized or Bussgang-linearized quantized observations. |

### Mixture-prior estimators

| Class | Base package | Description |
|---|---|---|
| `BussgangGmmEstimator` | [`gmm-estimator`](https://pypi.org/project/gmm-estimator/) / [`cplx-gmm`](https://pypi.org/project/cplx-gmm/) | Bussgang-based quantized estimator with a complex-valued GMM prior. |
| `BussgangMfaEstimator` | [`mfa-estimator`](https://pypi.org/project/mfa-estimator/) / [`cplx-mfa`](https://pypi.org/project/cplx-mfa/) | Bussgang-based quantized estimator with a complex-valued MFA prior. |

`BussgangGmmEstimator` inherits its fitting API from `gmm-estimator`, which builds on `cplx-gmm`; likewise, `BussgangMfaEstimator` inherits its fitting API from `mfa-estimator`, which builds on `cplx-mfa`.

The inherited `estimate(...)` methods remain the high-resolution continuous-observation estimators. The additional `estimate_quantized(...)` methods handle quantized observations.

## 🔢 Quantizers

`qce.quantizers` contains scalar quantizers and Bussgang utilities.

| Utility | Description |
|---|---|
| `ScalarQuantizer` | Frozen scalar quantizer object with thresholds, labels, validation, and `.quantize(...)`. |
| `uniform_midrise_quantizer(...)` | Symmetric uniform midrise scalar quantizer. |
| `uniform_quantization_step(...)` | Standard-Gaussian uniform quantizer step utility. |
| `uniform_distortion_factor(...)` | Approximate uniform quantization distortion factor. |
| `lloyd_max_quantizer(...)` | Lloyd-Max scalar quantizer for Gaussian scalar inputs. |
| `bussgang_matrix(...)` | Bussgang matrix for uniform scalar quantization. |
| `lloyd_max_bussgang_matrix(...)` | Bussgang matrix for Lloyd-Max scalar quantization. |
| `quantized_covariance(...)` | Quantized covariance for one-bit and uniform multi-bit quantization. |
| `quantized_variance(...)` | Exact scalar quantized variances for a given scalar quantizer. |
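For intuition, a symmetric uniform midrise quantizer can be sketched in a few lines of plain NumPy (a conceptual illustration, not the `ScalarQuantizer` implementation):

```python
import numpy as np

def midrise_quantize(x, step, n_bits):
    """Plain-NumPy sketch of a symmetric uniform midrise scalar quantizer."""
    n_levels = 2 ** n_bits
    # Index of the quantization cell, clipped to the finite output range.
    idx = np.floor(x / step)
    idx = np.clip(idx, -n_levels // 2, n_levels // 2 - 1)
    # Midrise reconstruction levels sit at cell midpoints.
    return (idx + 0.5) * step

x = np.linspace(-3.0, 3.0, 7)
q = midrise_quantize(x, step=0.5, n_bits=3)
```

Midrise means there is no zero output level: reconstruction points sit at cell midpoints `(k + 0.5) * step`, so even a tiny positive input maps to `step / 2`, and out-of-range inputs saturate at the outermost levels.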

### Uniform vs Lloyd-Max

Uniform multi-bit estimators use `quantizer_kind="uniform"` and require `snr_db` for the Bussgang gain.

Lloyd-Max multi-bit estimators use `quantizer_kind="lloyd_max"` and expect a `ScalarQuantizer` returned by the Lloyd-Max result object:

```python
from qce.quantizers import lloyd_max_quantizer

result = lloyd_max_quantizer(snr_db=10.0, n_bits=3)
quantizer = result.quantizer
```

The Lloyd-Max mixture-estimator path uses a Bussgang covariance approximation with the supplied Lloyd-Max Bussgang matrix and exact scalar quantized variances on the diagonal.
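The underlying Lloyd-Max fixed point alternates between a nearest-neighbor condition (thresholds at midpoints between adjacent levels) and a centroid condition (levels at the conditional means of their cells). A plain-NumPy sketch for a standard Gaussian input, assuming nothing about the `qce` internals:

```python
import numpy as np
from math import erf, sqrt

def _phi(x):
    # Standard normal pdf (vectorized).
    return np.exp(-0.5 * np.square(x)) / np.sqrt(2.0 * np.pi)

def _Phi(x):
    # Standard normal cdf via the error function.
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])

def lloyd_max_gaussian(n_bits, n_iter=200):
    """Sketch of the Lloyd-Max iteration for a standard Gaussian input."""
    n_levels = 2 ** n_bits
    levels = np.linspace(-2.0, 2.0, n_levels)  # symmetric initial levels
    for _ in range(n_iter):
        # Nearest-neighbor condition: thresholds are midpoints between levels.
        thresholds = 0.5 * (levels[:-1] + levels[1:])
        t = np.concatenate(([-np.inf], thresholds, [np.inf]))
        # Centroid condition: each level is the conditional mean of its cell.
        levels = (_phi(t[:-1]) - _phi(t[1:])) / (_Phi(t[1:]) - _Phi(t[:-1]))
    return thresholds, levels

thresholds, levels = lloyd_max_gaussian(n_bits=2)
```

For 2 bits this converges to the classical values for a unit-variance Gaussian: thresholds near `{-0.9816, 0, 0.9816}` and levels near `{±0.4528, ±1.5104}`.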

## 📈 Covariance Utilities

`qce.covariance` contains covariance recovery, positive-definite matrix helpers, and simulation-oriented covariance generators.

### Covariance recovery

```python
from qce.covariance import estimate_covariance_from_quantized_samples

C_hat = estimate_covariance_from_quantized_samples(
    quantized_samples,
    n_bits=3,
    quantizer=quantizer,
)
```

The recovery method estimates the covariance of the unquantized signal that was observed through scalar quantization. It follows the legacy covariance-recovery setup: normalized correlations are recovered using one-bit signs, while marginal variances are recovered from threshold-hit probabilities.
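The sign-based part of this recovery can be illustrated for a single real correlation coefficient: by the arcsine law, the sign correlation equals `(2/pi) * arcsin(rho)`, so applying a sine inverts it. A plain-NumPy toy (not the `qce` routine):

```python
import numpy as np

rng = np.random.default_rng(2)
rho_true = 0.6
n = 100_000

# Correlated real Gaussian pair (toy stand-in for one real part of the channel).
L = np.linalg.cholesky(np.array([[1.0, rho_true], [rho_true, 1.0]]))
x = rng.standard_normal((n, 2)) @ L.T

# Arcsine-law inversion: sign correlation (2/pi) arcsin(rho) -> rho.
sign_corr = np.mean(np.sign(x[:, 0]) * np.sign(x[:, 1]))
rho_hat = np.sin(0.5 * np.pi * sign_corr)
```

This recovers the normalized correlation only; the marginal variances needed to rescale it come from the threshold-hit probabilities mentioned above.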

### Covariance generators

| Generator | Description |
|---|---|
| `RandomSPDCovarianceGenerator` | Random unstructured Hermitian positive-definite covariance matrices. |
| `RandomExponentialCovarianceGenerator` | Structured exponential correlation covariances with random phase and marginal variances. |
| `RandomLowRankCovarianceGenerator` | Low-rank plus diagonal covariance matrices of the form `U diag(lambda) U^H + sigma^2 I`. |

All generators support:

```python
covariance = generator.sample_covariance()
covariances = generator.sample_covariance(n_draws=100)
samples = generator.sample_observations(n_samples=10_000)
```

Observation sampling supports configurable factorization modes:

```text
"auto", "none", "cholesky", "eigh"
```

## 📊 Examples

Run examples with `uv`:

```bash
uv run python examples/covariance_recovery_from_quantized_samples.py
uv run python examples/snr_vs_mse_oracle_estimators.py
uv run python examples/snr_vs_mse_trained_priors.py
uv run python examples/compare_quantizer_variants.py
```

Generated figures are written to `results/`:

```text
results/covariance_recovery/
results/snr_vs_mse_oracle_estimators/
results/snr_vs_mse_trained_priors/
results/quantizer_comparison/
```

The `results/` directory is intended for local runtime outputs and should usually not be committed.

## 🧪 Development

The project uses:

- `uv` for dependency and environment management
- `pytest` for tests
- `ruff` for linting
- `src/` package layout
- `setuptools` build backend

Useful commands:

```bash
uv sync --group dev
uv run ruff check . --fix
uv run pytest
uv build
```

## 🔗 Related Packages

`qce` is designed to sit on top of reusable complex-valued prior and estimator packages.

| Package | Role | Links |
|---|---|---|
| `cplx-gmm` | Complex-valued GMM fitting | [PyPI](https://pypi.org/project/cplx-gmm/) · [GitHub](https://github.com/benediktfesl/GMM_cplx) |
| `cplx-mfa` | Complex-valued MFA fitting | [PyPI](https://pypi.org/project/cplx-mfa/) · [GitHub](https://github.com/benediktfesl/cplx-mfa) |
| `gmm-estimator` | High-resolution GMM estimator for linear inverse problems | [PyPI](https://pypi.org/project/gmm-estimator/) · [GitHub](https://github.com/michael-koller-91/gmm-estimator) |
| `mfa-estimator` | High-resolution MFA estimator for linear inverse problems | [PyPI](https://pypi.org/project/mfa-estimator/) · [GitHub](https://github.com/benediktfesl/MFA_estimator) |
| Legacy quantized-estimation code | Original research scripts for the TSP 2024 paper | [GitHub](https://github.com/benediktfesl/Quantized_Channel_Estimation) |

## 📌 Citation

If you use `qce` in academic work, please cite the package directly:

```bibtex
@software{fesl_qce,
  author = {Fesl, Benedikt},
  title = {{qce}: Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors},
  year = {2026},
  url = {https://github.com/benediktfesl/quantized-channel-estimation},
  version = {0.1.0}
}
```

Plain-text citation:

> B. Fesl, `qce`: Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors, version 0.1.0. Available: https://github.com/benediktfesl/quantized-channel-estimation

If you use the package in the context of quantized channel estimation, please also cite the corresponding research paper.

## 📚 Research Background

This package is related to the following works on generative priors, channel estimation, quantized systems, and structured covariance models.

### Main reference

- B. Fesl, N. Turan, B. Böck, and W. Utschick, “Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models,” in *IEEE Transactions on Signal Processing*, vol. 72, pp. 1475–1490, 2024.  
  [[IEEE](https://ieeexplore.ieee.org/abstract/document/10454252)] [[arXiv](https://arxiv.org/abs/2309.04014)]

### Additional related works

- B. Fesl, “Generative Model-Aided Channel Estimation Design and Optimality Analysis,” *Ph.D. dissertation, Technical University of Munich*, 2025.  
  [[TUM](https://mediatum.ub.tum.de/?id=1748775)]

- M. Koller, B. Fesl, N. Turan, and W. Utschick, “An Asymptotically MSE-Optimal Estimator Based on Gaussian Mixture Models,” *IEEE Transactions on Signal Processing*, vol. 70, pp. 4109–4123, 2022.  
  [[IEEE](https://ieeexplore.ieee.org/abstract/document/9842343)] [[arXiv](https://arxiv.org/abs/2112.12499v2)]

- B. Fesl, M. Joham, S. Hu, M. Koller, N. Turan, and W. Utschick, “Channel Estimation based on Gaussian Mixture Models with Structured Covariances,” in *56th Asilomar Conference on Signals, Systems, and Computers*, 2022, pp. 533–537.  
  [[IEEE](https://ieeexplore.ieee.org/abstract/document/10051921)] [[arXiv](https://arxiv.org/abs/2205.03634)]

- B. Fesl, N. Turan, M. Joham, and W. Utschick, “Learning a Gaussian Mixture Model from Imperfect Training Data for Robust Channel Estimation,” *IEEE Wireless Communications Letters*, 2023.  
  [[IEEE](https://ieeexplore.ieee.org/abstract/document/10078293)] [[arXiv](https://arxiv.org/abs/2301.06488)]

- M. Baur, B. Fesl, and W. Utschick, “Leveraging Variational Autoencoders for Parameterized MMSE Estimation,” *IEEE Transactions on Signal Processing*, vol. 72, pp. 3731–3744, 2024.  
  [[IEEE](https://ieeexplore.ieee.org/document/10629241)] [[arXiv](https://arxiv.org/abs/2307.05352)]

- B. Fesl, A. Banna, and W. Utschick, “Enhancing Channel Estimation in Quantized Systems with a Generative Prior,” in *IEEE 25th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)*, 2024, pp. 681–685.  
  [[IEEE](https://ieeexplore.ieee.org/document/10694129)] [[arXiv](https://arxiv.org/abs/2405.03542)]

- B. Fesl, M. Koller, and W. Utschick, “On the Mean Square Error Optimal Estimator in One-Bit Quantized Systems,” in *IEEE Transactions on Signal Processing*, vol. 71, pp. 1968–1980, 2023.  
  [[IEEE](https://ieeexplore.ieee.org/abstract/document/10141882)] [[arXiv](https://arxiv.org/abs/2212.04470)]

- B. Fesl and W. Utschick, “Linear and Nonlinear MMSE Estimation in One-Bit Quantized Systems Under a Gaussian Mixture Prior,” in *IEEE Signal Processing Letters*, vol. 32, pp. 361–365, 2025.  
  [[IEEE](https://ieeexplore.ieee.org/abstract/document/10745751)] [[arXiv](https://arxiv.org/abs/2407.01305)]

## 📄 License

This repository is distributed under the BSD 3-Clause License.

See [LICENSE](LICENSE) for details.
