Metadata-Version: 2.4
Name: autofit
Version: 2026.5.8.1
Summary: Classy Probabilistic Programming
Author-email: James Nightingale <James.Nightingale@newcastle.ac.uk>, Richard Hayes <richard@rghsoftware.co.uk>
License: MIT
Project-URL: Homepage, https://github.com/PyAutoLabs/PyAutoFit
Keywords: cli
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Physics
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: autoconf==2026.5.8.1
Requires-Dist: array_api_compat
Requires-Dist: anesthetic==2.8.14; python_version < "3.13"
Requires-Dist: anesthetic>=2.9.0; python_version >= "3.13"
Requires-Dist: corner==2.2.2
Requires-Dist: decorator>=4.2.1
Requires-Dist: dill>=0.3.1.1
Requires-Dist: dynesty==2.1.5
Requires-Dist: typing-inspect>=0.4.0
Requires-Dist: emcee>=3.1.6
Requires-Dist: gprof2dot==2021.2.21
Requires-Dist: matplotlib
Requires-Dist: numpydoc>=1.0.0
Requires-Dist: h5py>=3.11.0
Requires-Dist: SQLAlchemy<2.1.0,>=2.0.32
Requires-Dist: scipy<=1.17.1
Requires-Dist: astunparse==1.6.3
Requires-Dist: threadpoolctl>=3.1.0
Requires-Dist: timeout-decorator==0.5.0
Requires-Dist: xxhash<=3.4.1
Requires-Dist: networkx==3.1
Requires-Dist: pyvis==0.3.2
Requires-Dist: psutil==6.1.0
Provides-Extra: jax
Requires-Dist: autoconf[jax]; extra == "jax"
Provides-Extra: optional
Requires-Dist: autofit[jax]; extra == "optional"
Requires-Dist: astropy>=5.0; extra == "optional"
Requires-Dist: blackjax>=1.2.0; extra == "optional"
Requires-Dist: getdist==1.4; extra == "optional"
Requires-Dist: nautilus-sampler==1.0.5; extra == "optional"
Requires-Dist: zeus-mcmc==2.5.4; extra == "optional"
Provides-Extra: docs
Requires-Dist: sphinx; extra == "docs"
Requires-Dist: furo; extra == "docs"
Requires-Dist: myst-parser; extra == "docs"
Requires-Dist: sphinx_copybutton; extra == "docs"
Requires-Dist: sphinx_design; extra == "docs"
Requires-Dist: sphinx_inline_tabs; extra == "docs"
Requires-Dist: sphinx_autodoc_typehints; extra == "docs"
Provides-Extra: test
Requires-Dist: pytest; extra == "test"
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: black; extra == "dev"
Dynamic: license-file

# PyAutoFit: Classy Probabilistic Programming

[![Project Status: Active](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)
[![Python Versions](https://img.shields.io/pypi/pyversions/autofit)](https://pypi.org/project/autofit/)
[![PyPI Version](https://img.shields.io/pypi/v/autofit.svg)](https://pypi.org/project/autofit/)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PyAutoLabs/autofit_workspace/blob/2026.5.1.4/start_here.ipynb)
[![Tests](https://github.com/rhayes777/PyAutoFit/actions/workflows/main.yml/badge.svg)](https://github.com/rhayes777/PyAutoFit/actions)
[![Build](https://github.com/rhayes777/PyAutoBuild/actions/workflows/release.yml/badge.svg)](https://github.com/rhayes777/PyAutoBuild/actions)
[![Documentation Status](https://readthedocs.org/projects/pyautofit/badge/?version=latest)](https://pyautofit.readthedocs.io/en/latest/?badge=latest)
[![JOSS](https://joss.theoj.org/papers/10.21105/joss.02550/status.svg)](https://doi.org/10.21105/joss.02550)

[Installation Guide](https://pyautofit.readthedocs.io/en/latest/installation/overview.html) |
[readthedocs](https://pyautofit.readthedocs.io/en/latest/index.html) |
[Introduction on Colab](https://colab.research.google.com/github/PyAutoLabs/autofit_workspace/blob/2026.5.1.4/notebooks/overview/overview_1_the_basics.ipynb) |
[HowToFit](https://github.com/PyAutoLabs/HowToFit)

**PyAutoFit** is a Python-based probabilistic programming language for model fitting and Bayesian inference
of large datasets.

The basic **PyAutoFit** API allows a user to quickly compose a probabilistic model and fit it to data via a
log likelihood function, using a range of non-linear search algorithms (e.g. MCMC, nested sampling).

Users can then set up a **PyAutoFit** scientific workflow, which enables streamlined modeling of small
datasets and provides tools to scale up to large datasets.

**PyAutoFit** supports advanced statistical methods, most
notably [a big data framework for Bayesian hierarchical analysis](https://pyautofit.readthedocs.io/en/latest/features/graphical.html).

## Getting Started

The following links are useful for new starters:

- [The PyAutoFit readthedocs](https://pyautofit.readthedocs.io/en/latest), which includes an [installation guide](https://pyautofit.readthedocs.io/en/latest/installation/overview.html) and an overview of **PyAutoFit**'s core features.
- [The introduction Jupyter Notebook on Colab](https://colab.research.google.com/github/PyAutoLabs/autofit_workspace/blob/2026.5.1.4/notebooks/overview/overview_1_the_basics.ipynb), where you can try **PyAutoFit** in a web browser (without installation).
- [The autofit_workspace GitHub repository](https://github.com/Jammy2211/autofit_workspace), which includes example scripts demonstrating **PyAutoFit**'s features.
- [The standalone HowToFit repository](https://github.com/PyAutoLabs/HowToFit), a series of Jupyter notebook lectures which give new users a step-by-step introduction to **PyAutoFit**.

## Support

Support for installation issues and help with model fitting using **PyAutoFit** is available by
[raising an issue on the GitHub issues page](https://github.com/rhayes777/PyAutoFit/issues).

We also offer support on the **PyAutoFit** [Slack channel](https://pyautoFit.slack.com/), where we provide the
latest updates on **PyAutoFit**. Slack is invitation-only, so if you'd like to join, send
an [email](https://github.com/Jammy2211) requesting an invite.

## HowToFit

Users less familiar with Bayesian inference and scientific analysis may wish to read through
the **HowToFit** lectures. These teach the basic principles of Bayesian inference, with the
content pitched at undergraduate level and above.

The lectures are available in the [standalone HowToFit repository](https://github.com/PyAutoLabs/HowToFit).

## API Overview

To illustrate the **PyAutoFit** API, we use an illustrative toy model of fitting a one-dimensional Gaussian to
noisy 1D data. Here's the `data` (black) and the model (red) we'll fit:

<img src="https://raw.githubusercontent.com/rhayes777/PyAutoFit/main/files/toy_model_fit.png" width="400" />

We define our model, a 1D Gaussian, by writing a Python class in the format below:

```python
import numpy as np


class Gaussian:

    def __init__(
        self,
        centre=0.0,        # <- PyAutoFit recognises these
        normalization=0.1, # <- constructor arguments are
        sigma=0.01,        # <- the Gaussian's parameters.
    ):
        self.centre = centre
        self.normalization = normalization
        self.sigma = sigma

    def model_data_from(self, xvalues):
        """
        Returns the 1D Gaussian profile evaluated at the input x values.

        An instance of this class is available during model fitting; this
        method is used to fit the model to data and compute a likelihood.
        """
        transformed_xvalues = xvalues - self.centre

        return (self.normalization / (self.sigma * (2.0 * np.pi) ** 0.5)) * np.exp(
            -0.5 * (transformed_xvalues / self.sigma) ** 2.0
        )
```
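Before involving a non-linear search, the class can be exercised directly. The standalone sketch below repeats the class body so it runs on its own with only NumPy installed; the parameter values are illustrative:

```python
import numpy as np


class Gaussian:
    def __init__(self, centre=0.0, normalization=0.1, sigma=0.01):
        self.centre = centre
        self.normalization = normalization
        self.sigma = sigma

    def model_data_from(self, xvalues):
        transformed_xvalues = xvalues - self.centre
        return (self.normalization / (self.sigma * (2.0 * np.pi) ** 0.5)) * np.exp(
            -0.5 * (transformed_xvalues / self.sigma) ** 2.0
        )


# Evaluate the profile on a grid of 100 pixels.
gaussian = Gaussian(centre=50.0, normalization=10.0, sigma=5.0)
xvalues = np.arange(100)
model_data = gaussian.model_data_from(xvalues=xvalues)

# The profile peaks at the `centre` parameter.
print(np.argmax(model_data))  # 50
```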

**PyAutoFit** recognises that this Gaussian may be treated as a model component whose parameters can be fitted
via a non-linear search such as [emcee](https://github.com/dfm/emcee).

To fit this Gaussian to the `data` we create an Analysis object, which gives **PyAutoFit** the `data` and a
`log_likelihood_function` describing how to fit the `data` with the model:

```python
import autofit as af
import numpy as np


class Analysis(af.Analysis):

    def __init__(self, data, noise_map):
        self.data = data
        self.noise_map = noise_map

    def log_likelihood_function(self, instance):
        """
        The `instance` passed into this method is an instance of the Gaussian
        class above, with its parameters set to values chosen by the
        non-linear search.
        """
        print("Gaussian Instance:")
        print("Centre = ", instance.centre)
        print("Normalization = ", instance.normalization)
        print("Sigma = ", instance.sigma)

        # Fit the data with the Gaussian instance, using its
        # `model_data_from` method to create the model data.
        xvalues = np.arange(self.data.shape[0])

        model_data = instance.model_data_from(xvalues=xvalues)
        residual_map = self.data - model_data
        chi_squared_map = (residual_map / self.noise_map) ** 2.0
        log_likelihood = -0.5 * np.sum(chi_squared_map)

        return log_likelihood
```
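To see the likelihood arithmetic in isolation, the sketch below re-implements the same chi-squared calculation as a plain function, dropping the `af.Analysis` base class so it runs with only NumPy installed. The `Gaussian` class is repeated so the snippet is self-contained, and the parameter values are illustrative:

```python
import numpy as np


class Gaussian:
    def __init__(self, centre=0.0, normalization=0.1, sigma=0.01):
        self.centre = centre
        self.normalization = normalization
        self.sigma = sigma

    def model_data_from(self, xvalues):
        transformed_xvalues = xvalues - self.centre
        return (self.normalization / (self.sigma * (2.0 * np.pi) ** 0.5)) * np.exp(
            -0.5 * (transformed_xvalues / self.sigma) ** 2.0
        )


def log_likelihood_function(data, noise_map, instance):
    # Same chi-squared arithmetic as the Analysis class above.
    xvalues = np.arange(data.shape[0])
    model_data = instance.model_data_from(xvalues=xvalues)
    residual_map = data - model_data
    chi_squared_map = (residual_map / noise_map) ** 2.0
    return -0.5 * np.sum(chi_squared_map)


gaussian = Gaussian(centre=50.0, normalization=10.0, sigma=5.0)

# Noise-free "data" generated by the same model: the residuals vanish,
# so the log likelihood takes its maximum possible value of zero.
data = gaussian.model_data_from(xvalues=np.arange(100))
noise_map = np.full(100, 0.1)

log_likelihood = log_likelihood_function(data, noise_map, gaussian)
assert log_likelihood == 0.0
```

Any mismatch between the model data and the data increases the chi-squared map and drives the log likelihood below zero, which is what the non-linear search maximizes against.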

We can now fit our model to the `data` using a non-linear search:

```python
model = af.Model(Gaussian)

analysis = Analysis(data=data, noise_map=noise_map)

emcee = af.Emcee(nwalkers=50, nsteps=2000)

result = emcee.fit(model=model, analysis=analysis)
```

The `result` contains information on the model-fit, for example the parameter samples, the maximum log likelihood
model and the marginalized probability density functions.
