Metadata-Version: 2.4
Name: sparse-kappa
Version: 0.0.2
Summary: GPU-accelerated sparse matrix condition number estimation using CuPy
Home-page: https://github.com/chenxinye/sparse-kappa
Author: Xinye Chen
Author-email: Xinye Chen <xinyechenai@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/chenxinye/sparse-kappa
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: cupy>=10.0.0
Requires-Dist: numpy>=1.20.0
Provides-Extra: dev
Requires-Dist: pytest>=6.0; extra == "dev"
Requires-Dist: pytest-cov>=2.0; extra == "dev"
Requires-Dist: black>=22.0; extra == "dev"
Requires-Dist: flake8>=4.0; extra == "dev"
Dynamic: author
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

<div align="center">

# sparse-kappa

**Condition Number Estimation on GPUs for Sparse Matrices**



</div>


``sparse-kappa`` is a GPU-accelerated library for estimating condition numbers of sparse matrices using CuPy. It supports a variety of estimation methods built on top of configurable linear solvers. ``sparse-kappa`` is designed both for benchmarking condition number estimators and for practical use in the science and engineering community.





## Features

- **GPU-Accelerated**: All computations run on NVIDIA GPUs via CuPy
- **Multiple Norms**: Support for 1-norm and 2-norm condition numbers
- **Rich Algorithm Suite**:
  - **1-norm**: Hager-Higham, Power iteration, Oettli-Prager sampling, Block Hager
  - **2-norm**: Power method, Lanczos, Arnoldi, Golub-Kahan bidiagonalization
  - **CuPy integrations**: SVDS, EIGSH, LOBPCG wrappers
- **Flexible Solver System**: LU, LSMR, CG, GMRES, Direct, Auto-selection
- **Smart LU Caching**: Reuses factorizations for multiple solves (10-20x speedup)
- **Memory Efficient**: Designed for large sparse matrices
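To sketch the core idea behind the 1-norm estimators, here is Hager's algorithm in plain NumPy/SciPy (a CPU stand-in for illustration; the library runs this kind of scheme on the GPU via CuPy, and the function names below are not part of its API). The estimator only needs products with `M` and `M.T`, so `κ₁(A) = ‖A‖₁·‖A⁻¹‖₁` can be estimated from a handful of LU solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def hager_est(matvec, rmatvec, n, max_iter=5):
    """Hager's 1-norm estimator: a lower bound on ||M||_1,
    given callables for v -> M @ v and v -> M.T @ v."""
    x = np.full(n, 1.0 / n)          # starting vector with unit 1-norm
    est = 0.0
    for _ in range(max_iter):
        y = matvec(x)
        new_est = np.abs(y).sum()    # ||M x||_1 <= ||M||_1
        if new_est <= est:
            break
        est = new_est
        xi = np.where(y >= 0, 1.0, -1.0)   # sign vector of y
        z = rmatvec(xi)
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:    # no more promising unit vector found
            break
        x = np.zeros(n)
        x[j] = 1.0                   # move to the best coordinate direction
    return est

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + np.eye(n) * 5   # well-conditioned test matrix

lu, piv = lu_factor(A)               # factor once, reuse for every solve
inv_norm = hager_est(lambda v: lu_solve((lu, piv), v),
                     lambda v: lu_solve((lu, piv), v, trans=1), n)
kappa1 = np.abs(A).sum(axis=0).max() * inv_norm   # ||A||_1 * est(||A^-1||_1)
```

Because every iterate has unit 1-norm, the estimate is a lower bound on the true `κ₁`, and in practice it is usually within a small factor of it.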


## Installation


Install from PyPI:
```bash
pip install sparse-kappa
```

Or install from source (choose the CuPy wheel matching your CUDA version):
```bash
git clone https://github.com/chenxinye/sparse-kappa
pip install cupy-cuda11x  # or cupy-cuda12x for CUDA 12
pip install -e .
```

## Quick Start

```python
import cupyx.scipy.sparse as sp
from sparse_kappa import cond_estimate

# Create sparse matrix
A = sp.random(10000, 10000, density=0.01, format='csr')

# Estimate condition number
cond = cond_estimate(A)
print(f"κ(A) = {cond:.2e}")

# Use specific method with LU solver
cond = cond_estimate(A, norm=1, method='hager-higham', solver='lu')
```

## Available Methods

### 1-Norm Methods

| Method          | Description                              | Best For                        | Complexity     |
|-----------------|------------------------------------------|---------------------------------|----------------|
| `hager`         | Hager algorithm (default)                | High accuracy, general matrices | O(k·nnz)       |
| `power`         | Power iteration                          | Fast rough estimates            | O(k·nnz)       |
| `oettli-prager` | Random/adaptive sampling                 | Quick estimates with variants   | O(m·nnz)       |
| `hager-higham`  | Hager-Higham block algorithm (multiple vectors) | Improved robustness           | O(k·b·nnz)     |

**Recommended:** Use `solver='lu'` for all 1-norm methods (10-20x faster)
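The speedup from `solver='lu'` comes from factorizing once and reusing the factors for every subsequent solve. A minimal CPU sketch with SciPy's `splu` (the library does the analogous thing on the GPU through CuPy; sizes and densities here are purely illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 2000
# Well-conditioned sparse test matrix; splu expects CSC format
A = (sp.random(n, n, density=0.001, random_state=0) + sp.eye(n) * 5).tocsc()

lu = spla.splu(A)                     # factorize once (the expensive step)
bs = [rng.standard_normal(n) for _ in range(8)]
xs = [lu.solve(b) for b in bs]        # every solve reuses the cached factors
# lu.solve(b, trans='T') solves A.T x = b, which 1-norm estimators also need
```

Iterative estimators issue many solves against the same matrix, which is exactly the workload where a cached factorization pays for itself.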

### 2-Norm Methods

| Method       | Description                        | Best For                     | Complexity      |
|--------------|------------------------------------|------------------------------|-----------------|
| `svds`       | Partial SVD (most accurate)        | Small-medium matrices (<5k)  | O(k·nnz)        |
| `eigsh`      | Symmetric eigenvalue solver        | Symmetric matrices           | O(k·nnz)        |
| `lobpcg`     | Block preconditioned CG            | Large matrices               | O(k·nnz)        |
| `power`      | Power iteration                    | Quick estimates              | O(k·nnz)        |
| `lanczos`    | Lanczos tridiagonalization         | Medium matrices              | O(k²·nnz)       |
| `arnoldi`    | Arnoldi iteration                  | Non-symmetric                | O(k²·nnz)       |
| `golub-kahan`| Bidiagonalization                  | Numerically stable           | O(k·nnz)        |
| `auto`       | Automatic selection                | All cases                    | -               |
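All of the 2-norm methods target the extreme singular values, since `κ₂(A) = σ_max/σ_min`. A minimal CPU sketch of the simplest variant, power iteration on `AᵀA` (for `σ_max`) and on its inverse (for `σ_min`), using a small dense matrix with known singular values; the Krylov methods in the table are refinements of this idea, and the explicit inverse below stands in for the sparse solves the library would use:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a matrix with known singular values, so kappa_2 = 10 / 0.1 = 100
s = np.array([10.0, 5.0, 2.0, 1.0, 0.5, 0.1])
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = U @ np.diag(s) @ V.T

def power_sigma(mv, n, iters=200):
    """sqrt of the largest eigenvalue of the SPD operator mv, by power iteration."""
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = mv(v)
        lam = np.linalg.norm(w)   # Rayleigh estimate, since v has unit norm
        v = w / lam
    return np.sqrt(lam)

sigma_max = power_sigma(lambda v: A.T @ (A @ v), 6)        # power on A^T A
inv = np.linalg.inv(A)                                     # stand-in for solves
sigma_min = 1.0 / power_sigma(lambda v: inv @ (inv.T @ v), 6)
kappa2 = sigma_max / sigma_min
```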

### Solver Options

All 1-norm methods support flexible solver selection:

| Solver | Description | Best For | Speed | Memory |
|--------|-------------|----------|-------|--------|
| `auto` | Automatic selection (default) | General use | Good | Low |
| `lu` | LU factorization with caching | Small matrices (<5k), multiple solves | **Fastest** | High |
| `lsmr` | LSMR iterative solver | Large matrices, single solve | Medium | Low |
| `cg` | Conjugate Gradient | SPD matrices | Fast | Low |
| `bicgstab` | BiCGSTAB (stabilized BiCG) | **Non-symmetric matrices** | **Fast** | Low |
| `gmres` | GMRES | Non-symmetric, when BiCGSTAB fails | Medium | Low |
| `direct` | Direct solver (no caching) | Single solve, small matrices | Fast | Medium |

**Legend:**  
`k` = iterations, `b` = block size, `m` = samples, `nnz` = non-zeros
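For non-symmetric systems the table recommends BiCGSTAB first. A minimal CPU illustration with SciPy's `bicgstab` (the library uses the CuPy equivalents; the matrix below is made diagonally dominant so the iteration converges quickly, which real problems may not guarantee):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 300
# Non-symmetric, diagonally dominant test system (illustrative sizes)
A = (sp.random(n, n, density=0.01, random_state=1) + sp.eye(n) * 4).tocsr()
b = np.ones(n)

x, info = spla.bicgstab(A, b, atol=1e-8)   # info == 0 means converged
residual = np.linalg.norm(A @ x - b)
```

When BiCGSTAB stagnates or breaks down, falling back to GMRES (as the table suggests) trades memory for robustness.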

## Examples

### Example 1: Compare Methods

```python
import cupyx.scipy.sparse as sp
from sparse_kappa import cond_estimate

A = sp.random(2000, 2000, density=0.005, format='csr')

# Compare 1-norm methods
methods_1 = ['hager-higham', 'power', 'oettli-prager', 'block-hager']
for method in methods_1:
    result = cond_estimate(A, norm=1, method=method, solver='lu', 
                          return_dict=True)
    print(f"{method:15s}: κ={result['condition_number']:.4e}, "
          f"iters={result['iterations']}")

# Compare 2-norm methods
methods_2 = ['svds', 'lanczos', 'golub-kahan']
for method in methods_2:
    cond = cond_estimate(A, norm=2, method=method)
    print(f"{method:12s}: κ={cond:.4e}")
```

### Example 2: LU Solver (for 1-norm)

```python
# Highly recommended: use LU solver for Hager-Higham
result = cond_estimate(A, norm=1, method='hager-higham',
                      solver='lu', return_dict=True)

print(f"Condition number: {result['condition_number']:.4e}")
print(f"Solver info:")
print(f"  Type: {result['solver_info']['solver_A']['method']}")
print(f"  Factorized: {result['solver_info']['solver_A']['factorized']}")
print(f"  Solves: {result['solver_info']['solver_A']['solve_count']}")
```

### Example 3: Oettli-Prager Variants

```python
# Adaptive (most accurate)
result = cond_estimate(A, norm=1, method='oettli-prager',
                      solver='lu', variant='adaptive', max_iter=15)

# Random sampling (fastest)
result = cond_estimate(A, norm=1, method='oettli-prager',
                      solver='lu', variant='random', max_iter=20)

# Hybrid (balanced)
result = cond_estimate(A, norm=1, method='oettli-prager',
                      solver='lu', variant='hybrid', max_iter=15)
```

### Example 4: Custom Solver Parameters
```python
# LSMR with relaxed tolerance for large matrices
result = cond_estimate(A, norm=1, method='hager-higham',
                      solver='lsmr',
                      solver_kwargs={'atol': 1e-3, 'maxiter': 20})

# CG for symmetric matrices
A_spd = A @ A.T + sp.eye(A.shape[0]) * 10
result = cond_estimate(A_spd, norm=1, method='hager-higham',
                      solver='cg',
                      solver_kwargs={'atol': 1e-3, 'maxiter': 30})
```

## Performance Tips

1. **Auto mode is recommended** for first-time usage
2. **For symmetric matrices**, use `eigsh` or `lanczos`
3. **For large sparse matrices** (>10k), use `golub-kahan` or `lobpcg`
4. **For highest accuracy on small matrices**, use `svds`
5. **Increase `max_iter`** if convergence fails
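The tips above can be condensed into a tiny selection routine. The sketch below is only a guess at the flavor of `method='auto'`, not the library's actual selection logic, written against SciPy sparse types for illustration:

```python
import scipy.sparse as sp

def pick_method(A):
    """Toy heuristic mirroring the performance tips -- NOT the library's auto logic."""
    n = A.shape[0]
    # count_nonzero ignores explicit zeros, so this is an exact symmetry test
    symmetric = (A - A.T).count_nonzero() == 0
    if symmetric:
        return 'eigsh'         # symmetric: use the dedicated eigensolver
    if n > 10_000:
        return 'golub-kahan'   # large non-symmetric: bidiagonalization
    return 'svds'              # small/medium: most accurate
```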

## Testing

```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_norm2.py -v

# Run with coverage
pytest tests/ --cov=sparse_kappa
```


## License

MIT License

## Contributing

Contributions welcome! Please submit issues and pull requests on GitHub.

## References

- Hager, W. W. (1984). "Condition estimates." *SIAM J. Sci. Stat. Comput.*, 5(2), 311-316.
- Higham, N. J., & Tisseur, F. (2000). "A block algorithm for matrix 1-norm estimation." *SIAM J. Matrix Anal. Appl.*, 21(4), 1185-1201.
- Golub, G. H., & Van Loan, C. F. (2013). *Matrix Computations* (4th ed.). Johns Hopkins University Press.
- Saad, Y. (2011). *Numerical Methods for Large Eigenvalue Problems* (2nd ed.). SIAM.
- Oettli, W., & Prager, W. (1964). "Compatibility of approximate solution of linear equations." *Numerische Mathematik*, 6(1), 405-409.
- Van der Vorst, H. A. (1992). "Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems." *SIAM J. Sci. Stat. Comput.*, 13(2), 631-644.

## Citation

If you use this library in your research, please cite:

```bibtex
@software{sparse_kappa,
  title={Sparse Matrix Condition Number Estimation on GPUs},
  author={Xinye Chen},
  year={2026},
  url={https://github.com/chenxinye/sparse-kappa}
}
```
