Metadata-Version: 2.4
Name: IFE_Surrogate
Version: 0.2.5
Summary: A machine learning library intended for surrogate modeling tasks.
Author: Tobias Leitgeb, Julian Tischler
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: <3.13,>=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: orbax-checkpoint<=0.11
Requires-Dist: numpy
Requires-Dist: flax
Requires-Dist: jax
Requires-Dist: numpyro
Requires-Dist: optax
Requires-Dist: matplotlib
Requires-Dist: pyswarms
Requires-Dist: scipy
Requires-Dist: tqdm
Requires-Dist: jaxtyping
Requires-Dist: seaborn
Requires-Dist: joblib
Dynamic: license-file

#  IFE Surrogate GP

A flexible and extensible library for Gaussian processes, built with performance and modularity in mind.  
Models can be trained via maximum likelihood estimation or via Bayesian inference.

---

##  Features

-  **High-performance kernels** (JAX-compatible)  
-  **Composable API** for building custom models  
-  **Multiple optimizers** (Optax, SciPy, etc.)  
-  **Built-in Bayesian inference with NumPyro**  
-  **Automatic hyperparameter handling**  
-  **Built-in training workflows**  

---

##  Installation

```bash
pip install ife_surrogate
```


---

##  Usage

### Quickstart

```python
import numpy as np
import jax.numpy as jnp
import jax.random as jr
from numpyro.distributions import Uniform

from ife_surrogate import kernels, models, trainers
from ife_surrogate import train_test_split  # import paths may differ

# Load a dataset with inputs X, outputs Y, and frequency points f
dataset = np.load("some_data.npy", allow_pickle=True).item()
X, Y, f = dataset["X"], dataset["Y"], dataset["f"]

key = jr.key(42)
(X_train, Y_train), (X_test, Y_test), _ = train_test_split(
    X=X, Y=Y, f=f,
    split=(0.9, 0.1, 0),
    key=key,
)

# One lengthscale and power per input dimension, with priors for inference
d = X_train.shape[1]
priors = {"lengthscale": Uniform(1.0, 10.0), "power": Uniform(1, 2)}
kernel = kernels.Kriging(lengthscale=jnp.ones(d), power=jnp.ones(d), priors=priors)

model = models.WidebandGP(X_train, Y_train, kernel, f)

# Fit the hyperparameters with particle swarm optimization
trainer = trainers.SwarmTrainer(number_iterations=200, number_particles=100)
trainer.train(model)

pred, var = model.predict(X_test)
```
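
Under the hood, a GP's `predict` step is the standard conditional-Gaussian computation. A minimal, library-independent sketch in plain JAX (the `rbf` kernel and `noise` jitter here are illustrative, not `ife_surrogate`'s API):

```python
import jax.numpy as jnp


def rbf(X1, X2, lengthscale=1.0):
    # squared-exponential kernel matrix between two sets of points
    d2 = jnp.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * d2 / lengthscale**2)


def gp_predict(X_train, y_train, X_test, noise=1e-5):
    # posterior mean and variance of a zero-mean GP with an RBF kernel
    K = rbf(X_train, X_train) + noise * jnp.eye(len(X_train))
    K_s = rbf(X_test, X_train)   # cross-covariance
    K_ss = rbf(X_test, X_test)   # test covariance
    alpha = jnp.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ jnp.linalg.solve(K, K_s.T)
    return mean, jnp.diag(cov)
```

At the training inputs, the posterior mean reproduces the targets up to the noise level, and the posterior variance collapses toward zero.
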
---

##  Documentation

- [API Reference](#)  
- [Tutorials](#)  
- [Examples](#)  

---

##  Key Components

- **Kernels**  
  - Kriging
  - RBF
  - Matern
  - SumKernel
  - ProductKernel
  - Scale
  - RQ
  - Noise

- **Models**  
  - WidebandGP
  - ScalerGP

- **Trainers**  
  - OptaxTrainer  
  - SwarmTrainer
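
Conceptually, `SumKernel` and `ProductKernel` exploit the closure properties of covariance functions: valid kernels remain valid under addition and multiplication, and `Scale` multiplies by an output variance. A minimal sketch with plain functions (names are illustrative, not the library's classes):

```python
import jax.numpy as jnp


def rbf(x1, x2, lengthscale=1.0):
    # squared-exponential kernel value between two points
    return jnp.exp(-0.5 * jnp.sum((x1 - x2) ** 2) / lengthscale**2)


def sum_kernel(k1, k2):
    # valid kernels are closed under addition
    return lambda x1, x2: k1(x1, x2) + k2(x1, x2)


def product_kernel(k1, k2):
    # ...and under multiplication
    return lambda x1, x2: k1(x1, x2) * k2(x1, x2)


def scale(k, variance=1.0):
    # output-scale (amplitude) wrapper
    return lambda x1, x2: variance * k(x1, x2)
```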

---

##  Roadmap

---

##  License

Distributed under the MIT License. See [LICENSE](LICENSE) for more information.

## References

The mathematical background and implementation of these models are based on the following publications:

### Gaussian Process & Student-t Theory
* **Gaussian Process Basics**: Rasmussen, C. E., & Williams, C. K. I. (2006). [Gaussian Processes for Machine Learning](http://www.GaussianProcess.org/gpml). MIT Press.
* **Student-t Process (TP) Foundations**: Shah, A., Wilson, A. G., & Ghahramani, Z. (2014). [Student-t Processes as Alternatives to Gaussian Processes](https://www.cs.cmu.edu/~andrewgw/tprocess.pdf). *Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS)*. 

### Wideband & Multi-output Modeling
* **Wideband Architecture**: Rezende, R. S., Hansen, J., Piwonski, A., & Schuhmann, R. (2024). Wideband Kriging for Multiobjective Optimization of a High-Voltage EMI Filter. *IEEE Transactions on Electromagnetic Compatibility*, 66(4), 1116–1124. 
* **Multi-output Separable GPs**: Bilionis, I., Zabaras, N., Konomi, B. A., & Lin, G. (2013). Multi-output separable Gaussian process: Towards an efficient, fully Bayesian paradigm for uncertainty quantification. *Journal of Computational Physics*, 241, 212–239. 
* **Vector-Valued Kernels**: Alvarez, M. A., Rosasco, L., & Lawrence, N. D. (2012). [Kernels for Vector-Valued Functions: A Review](https://arxiv.org/abs/1106.6251). *Foundations and Trends in Machine Learning*.

### Inference & Software Stack

* **JAX**: Bradbury, J., et al. (2018). [JAX: Composable transformations of Python+NumPy programs](http://github.com/jax-ml/jax).
* **NumPyro**: Phan, D., Pradhan, N., & Jankowiak, M. (2019). [Composable Effects for Flexible and Accelerated Probabilistic Programming in NumPyro](https://arxiv.org/abs/1912.11554). *arXiv preprint*.
* **Flax**: Heek, J., et al. (2020). [Flax: A neural network library and ecosystem for JAX](https://github.com/google/flax).
* **Optax**: Babuschkin, I., et al. (2020). [Optax: Composable gradient descent optimization for JAX](https://github.com/google-deepmind/optax).
* **SciPy**: Virtanen, P., et al. (2020). [SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python](https://www.nature.com/articles/s41592-019-0686-2). *Nature Methods*.
* **NUTS Sampler**: Hoffman, M. D., & Gelman, A. (2014). [The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo](https://jmlr.org/papers/v15/hoffman14a.html). *Journal of Machine Learning Research*. 

## Acknowledgements
