Metadata-Version: 2.4
Name: sparse-kappa
Version: 0.0.1
Summary: GPU-accelerated sparse matrix condition number estimation using CuPy
Home-page: https://github.com/chenxinye/sparse-kappa
Author: Xinye Chen
Author-email: Xinye Chen <xinyechenai@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/chenxinye/sparse-kappa
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: cupy>=10.0.0
Requires-Dist: numpy>=1.20.0
Provides-Extra: dev
Requires-Dist: pytest>=6.0; extra == "dev"
Requires-Dist: pytest-cov>=2.0; extra == "dev"
Requires-Dist: black>=22.0; extra == "dev"
Requires-Dist: flake8>=4.0; extra == "dev"
Dynamic: author
Dynamic: home-page
Dynamic: requires-python

# CuPy Sparse Condition Number Estimation

A GPU-accelerated library for estimating condition numbers of sparse matrices using CuPy.

## Features

- **GPU-Accelerated**: All computations run on NVIDIA GPUs via CuPy
- **Multiple Norms**: Support for 1-norm and 2-norm condition numbers
- **Rich Algorithm Suite**:
  - **1-norm**: Hager-Higham algorithm
  - **2-norm**: Power method, Lanczos, Arnoldi, Golub-Kahan bidiagonalization
  - **CuPy integrations**: svds, eigsh, lobpcg wrappers
- **Automatic Method Selection**: Chooses optimal algorithm based on matrix properties
- **Memory Efficient**: Designed for large sparse matrices

## Installation


Install from PyPI:
```bash
pip install sparse-kappa
```

Or install from source (pick the CuPy wheel that matches your CUDA toolkit):
```bash
git clone https://github.com/chenxinye/sparse-kappa
cd sparse-kappa
pip install cupy-cuda11x  # or cupy-cuda12x for CUDA 12
pip install -e .
```

## Quick Start

```python
import cupy as cp
import cupyx.scipy.sparse as sp
from sparse_kappa import cond_estimate

# Create sparse matrix on GPU
A = sp.random(10000, 10000, density=0.01, format='csr')

# Estimate condition number (automatic method selection)
cond = cond_estimate(A)
print(f"Condition number: {cond:.2e}")

# Use specific method
cond = cond_estimate(A, norm=2, method='lanczos')

# Get detailed results
result = cond_estimate(A, norm=2, method='svds', verbose=True)
print(f"Method: {result['method']}")
print(f"Iterations: {result['iterations']}")
print(f"σ_max: {result['sigma_max']:.4e}")
print(f"σ_min: {result['sigma_min']:.4e}")
```

## Available Methods

### 1-Norm Methods

| Method | Description | Best For |
|--------|-------------|----------|
| `hager-higham` | Iterative refinement algorithm | General matrices, fast estimation |
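
The Hager-Higham estimator needs only matrix-vector products with A and Aᵀ (or, to estimate ||A⁻¹||₁, solves with A and Aᵀ), which is why it works well for large sparse matrices. A minimal CPU sketch in NumPy of Hager's basic iteration — illustrative only, not this library's implementation:

```python
import numpy as np

def onenorm_est(matvec, rmatvec, n, max_iter=5):
    """Hager's estimator: a lower bound on ||A||_1 from a few matvecs."""
    x = np.full(n, 1.0 / n)           # start from the uniform vector
    est = 0.0
    for _ in range(max_iter):
        y = matvec(x)
        new_est = np.abs(y).sum()
        if new_est <= est:            # estimate stopped improving
            break
        est = new_est
        z = rmatvec(np.sign(y))       # subgradient direction
        j = np.argmax(np.abs(z))
        if np.abs(z[j]) <= z @ x:     # optimality condition holds
            break
        x = np.zeros(n)
        x[j] = 1.0                    # jump to the most promising column

    return est

# kappa_1(A) ~= est(||A||_1) * est(||A^-1||_1), using solves for A^-1
A = np.diag([1.0, 3.0, 2.0])
nrm_A = onenorm_est(lambda v: A @ v, lambda v: A.T @ v, 3)
nrm_Ainv = onenorm_est(lambda v: np.linalg.solve(A, v),
                       lambda v: np.linalg.solve(A.T, v), 3)
print(nrm_A * nrm_Ainv)   # exact for this diagonal A: 3 * 1 = 3
```

Note the estimate is a lower bound on the true 1-norm; in practice it is exact or very close for most matrices after a handful of iterations.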

### 2-Norm Methods

| Method | Description | Best For | Complexity |
|--------|-------------|----------|------------|
| `svds` | Partial SVD (most accurate) | Small-medium matrices (<5k) | O(k·nnz) |
| `eigsh` | Symmetric eigenvalue solver | Symmetric matrices | O(k·nnz) |
| `lobpcg` | Block preconditioned CG | Large matrices | O(k·nnz) |
| `power` | Power iteration | Quick estimates | O(k·nnz) |
| `lanczos` | Lanczos tridiagonalization | Medium matrices | O(k²·nnz) |
| `arnoldi` | Arnoldi iteration | Non-symmetric | O(k²·nnz) |
| `golub-kahan` | Bidiagonalization | Numerically stable | O(k·nnz) |
| `auto` | Automatic selection | All cases | - |
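
All 2-norm methods target the same quantity, κ₂(A) = σ_max/σ_min. The simplest of them, power iteration, can be sketched in dense NumPy (for illustration only — the library runs sparse matvecs on the GPU): iterate with AᵀA for σ_max, and with (AᵀA)⁻¹ via solves for σ_min.

```python
import numpy as np

def sigma_extremes(A, iters=200, seed=0):
    """Estimate sigma_max and sigma_min of A by power iteration
    on A^T A and inverse iteration via solves. Dense CPU sketch."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]

    x = rng.standard_normal(n)
    for _ in range(iters):            # power iteration -> dominant sigma
        x = A.T @ (A @ x)
        x /= np.linalg.norm(x)
    sigma_max = np.linalg.norm(A @ x)

    AtA = A.T @ A
    y = rng.standard_normal(n)
    for _ in range(iters):            # inverse iteration -> smallest sigma
        y = np.linalg.solve(AtA, y)
        y /= np.linalg.norm(y)
    sigma_min = np.linalg.norm(A @ y)

    return sigma_max, sigma_min

A = np.diag([1.0, 2.0, 5.0])
s_max, s_min = sigma_extremes(A)
print(s_max / s_min)   # kappa_2 = 5 / 1 = 5 for this diagonal matrix
```

The Krylov methods in the table (`lanczos`, `arnoldi`, `golub-kahan`) converge much faster than plain power iteration when the extreme singular values are clustered.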

## Examples

### Example 1: Compare Methods

```python
import cupyx.scipy.sparse as sp
from sparse_kappa import cond_estimate

A = sp.random(2000, 2000, density=0.005, format='csr')

methods = ['power', 'lanczos', 'svds', 'golub-kahan']
for method in methods:
    cond = cond_estimate(A, norm=2, method=method)
    print(f"{method:12s}: {cond:.4e}")
```

### Example 2: 1-Norm Estimation

```python
result = cond_estimate(A, norm=1, method='hager-higham', verbose=True)
print(f"κ₁(A) = {result['condition_number']:.4e}")
print(f"||A||₁ = {result['norm_A']:.4e}")
print(f"||A⁻¹||₁ = {result['norm_Ainv']:.4e}")
```

### Example 3: Symmetric Matrix

```python
# Create symmetric matrix
A = sp.random(1000, 1000, density=0.01, format='csr')
A = (A + A.T) / 2

# Use eigsh (optimized for symmetric)
cond = cond_estimate(A, norm=2, method='eigsh')
```
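
For a symmetric matrix the singular values are the absolute eigenvalues, so κ₂(A) = max|λ|/min|λ| and a symmetric eigensolver avoids forming AᵀA. A quick dense NumPy check of this identity (illustrative; not the library's API):

```python
import numpy as np

# For symmetric A, sigma_i = |lambda_i|, so kappa_2 = max|l| / min|l|.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)          # symmetric eigensolver, as eigsh does
kappa = np.abs(lam).max() / np.abs(lam).min()
print(kappa)
```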

## Performance Tips

1. **Auto mode is recommended** for a first pass on an unfamiliar matrix
2. **For symmetric matrices**, use `eigsh` or `lanczos`
3. **For large sparse matrices** (>10k), use `golub-kahan` or `lobpcg`
4. **For highest accuracy on small matrices**, use `svds`
5. **Increase `max_iter`** if convergence fails

## Testing

```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_norm2.py -v

# Run with coverage
pytest tests/ --cov=sparse_kappa
```


## License

MIT License

## Contributing

Contributions welcome! Please submit issues and pull requests on GitHub.


## References

1. Hager, W. W. (1984). "Condition estimates." SIAM J. Sci. Stat. Comput.
2. Higham, N. J., & Tisseur, F. (2000). "A block algorithm for matrix 1-norm estimation." SIAM J. Matrix Anal. Appl.
3. Golub, G. H., & Van Loan, C. F. (2013). "Matrix Computations" (4th ed.)
4. Saad, Y. (2011). "Numerical Methods for Large Eigenvalue Problems" (2nd ed.)

## Citation

If you use this library in your research, please cite:

```bibtex
@software{sparse_kappa,
  title={Sparse Matrix Condition Number Estimation on GPUs},
  author={Xinye Chen},
  year={2026},
  url={https://github.com/chenxinye/sparse-kappa}
}
```
