Simple example#
This example serves as a quick start to the ModelFitting module of FlavorPy.
Import FlavorPy#
After installing FlavorPy with

pip install flavorpy

import the modelfitting module of FlavorPy.
# Import the modelfitting module of FlavorPy
import flavorpy.modelfitting as mf
# We will also need numpy and pandas
import numpy as np
import pandas as pd
Defining mass matrices#
To define a model of leptons, we start by defining its mass matrices
\(m_e = \begin{pmatrix}v_1 & v_2 & v_3 \\ v_3 & v_1 & v_2 \\ v_2 & v_3 & v_1\end{pmatrix} \quad\) and \(\quad m_n = \begin{pmatrix}v_1 & v_2 & v_3 \\ v_2 & v_1 & 2 \\ v_3 & 2 & v_1\end{pmatrix}\)
For this example we have:
# Charged lepton mass matrix
def Me(params):
    v1, v2, v3 = params['v1'], params['v2'], params['v3']
    return np.array([[v1, v2, v3], [v3, v1, v2], [v2, v3, v1]])

# Neutrino mass matrix
def Mn(params):
    v1, v2, v3 = params['v1'], params['v2'], params['v3']
    return np.array([[v1, v2, v3], [v2, v1, 2], [v3, 2, v1]])
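As a quick sanity check (not part of the model definition itself), we can evaluate both matrices at the sample point used later in this example and confirm that the neutrino mass matrix is symmetric, as expected for a Majorana mass term, while the charged lepton mass matrix is not:

```python
import numpy as np

# Same mass matrices as defined above
def Me(params):
    v1, v2, v3 = params['v1'], params['v2'], params['v3']
    return np.array([[v1, v2, v3], [v3, v1, v2], [v2, v3, v1]])

def Mn(params):
    v1, v2, v3 = params['v1'], params['v2'], params['v3']
    return np.array([[v1, v2, v3], [v2, v1, 2], [v3, 2, v1]])

point = {'v1': 1.5, 'v2': 1.1, 'v3': 1.3}  # sample point used further below

print(Me(point).shape)                       # (3, 3)
print(np.allclose(Mn(point), Mn(point).T))   # True: Mn is symmetric
print(np.allclose(Me(point), Me(point).T))   # False: Me is circulant, not symmetric
```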
Defining the parameter space#
Next, we define the parameter space of our model. We construct an empty parameter space and add the parameters to it. When drawing random points in the parameter space, the ‘sample_fct’ of each dimension is evaluated; in this case it is NumPy’s uniform distribution between 0 and 1.
ParamSpace = mf.ParameterSpace()
ParamSpace.add_dim(name='v1', sample_fct=np.random.uniform)
ParamSpace.add_dim(name='v2', sample_fct=np.random.uniform)
ParamSpace.add_dim(name='v3', sample_fct=np.random.uniform)
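The ‘sample_fct’ is simply called with no arguments whenever a random point is drawn, so any zero-argument callable works. If a wider range is desired, a bounded sampler can be built, for example with functools.partial; this is a sketch of an alternative sampler, not something the tutorial itself requires:

```python
import numpy as np
from functools import partial

# A sampler drawing uniformly from [-3, 3) instead of the default [0, 1)
sample_wide = partial(np.random.uniform, -3, 3)

x = sample_wide()
print(-3 <= x < 3)  # True
```

Such a function could then be passed in the same way, e.g. ParamSpace.add_dim(name='v1', sample_fct=sample_wide).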
Constructing Model#
Then we can construct the lepton model as follows:
Model0 = mf.LeptonModel(mass_matrix_e=Me, mass_matrix_n=Mn, parameterspace=ParamSpace, ordering='NO')
Now we can determine the masses and mixing observables of a given point in parameter space by:
Model0.get_obs({'v1': 1.5, 'v2': 1.1, 'v3': 1.3})
{'me/mu': 0.9999999999999992,
'mu/mt': 0.08882311833686546,
's12^2': 0.6420066494741999,
's13^2': 0.008093453559868121,
's23^2': 0.0012798653500391485,
'd/pi': 0.0,
'r': 0.001571483801833027,
'm21^2': 3.90198549455977e-05,
'm3l^2': 0.024829944094927194,
'm1': 0.018287792823374217,
'm2': 0.019325196539654005,
'm3': 0.15863287005308152,
'eta1': 1.0,
'eta2': 0.0,
'J': 0.0,
'Jmax': 0.0015294982440766927,
'Sum(m_i)': 0.19624585941610972,
'm_b': 0.02367630520881936,
'm_bb': 0.020085293881200113,
'nscale': 0.03548228305985807}
Here, ‘me/mu’ is the ratio of the electron mass to the muon mass (and ‘mu/mt’ the muon to tau mass ratio), ‘sij^2’ refers to the mixing angles \(\sin^2(\theta_{ij})\), ‘d/pi’ is the CP-violating phase of the PMNS matrix divided by \(\pi\), ‘m21^2’ and ‘m3l^2’ are the squared neutrino mass differences, i.e. \(m_{ij}^2 = m_i^2 - m_j^2\), ‘r’ is their quotient r = m21^2 / m3l^2, ‘m1’, ‘m2’ and ‘m3’ are the neutrino masses, ‘eta1’ and ‘eta2’ are the Majorana phases, ‘J’ is the Jarlskog invariant, and ‘m_b’ and ‘m_bb’ are the effective neutrino masses for beta decay and neutrinoless double beta decay, respectively.
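The relations among these observables can be verified directly from the dictionary above, for instance \(m_{ij}^2 = m_i^2 - m_j^2\), \(r = m_{21}^2 / m_{3\ell}^2\), and the sum of the masses (for normal ordering, \(\ell\) refers to the lightest state, i.e. \(m_{3\ell}^2 = m_3^2 - m_1^2\)):

```python
import math

# Neutrino masses from the output dictionary above
m1, m2, m3 = 0.018287792823374217, 0.019325196539654005, 0.15863287005308152

m21sq = m2**2 - m1**2
m3lsq = m3**2 - m1**2  # normal ordering: the lightest state is m1

print(math.isclose(m21sq, 3.90198549455977e-05, rel_tol=1e-6))          # True
print(math.isclose(m3lsq, 0.024829944094927194, rel_tol=1e-6))          # True
print(math.isclose(m21sq / m3lsq, 0.001571483801833027, rel_tol=1e-6))  # True
print(math.isclose(m1 + m2 + m3, 0.19624585941610972, rel_tol=1e-6))    # True
```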
Fitting model to experimental data#
Let us now fit this model to a specific experimental data set. By default, the NuFit v5.2 data for NO including SK atmospheric data is used. To fit the model, we draw, for example, 3 random points in the parameter space and apply minimization algorithms to them in order to find a point that matches the experimental data well. Note that by default 4 minimization algorithms are applied consecutively to all 3 random points, such that we end up with 12 points.
pd.set_option('display.max_columns', None) # This pandas setting allows us to see all columns
df = Model0.make_fit(points=3)
df
| | chisq | chisq_dimless | v1 | v2 | v3 | n_scale | me/mu | mu/mt | s12^2 | s13^2 | s23^2 | d/pi | r | m21^2 | m3l^2 | m1 | m2 | m3 | eta1 | eta2 | J | Jmax | Sum(m_i) | m_b | m_bb | nscale |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 4.734807e+04 | 4.734231e+04 | -1.058010 | 1.041775 | 0.007423 | 1.0 | 0.004846 | 1.000000 | 0.939127 | 0.020008 | 0.043020 | 0.0 | 0.031346 | 0.000076 | 0.002435 | 0.016754 | 0.018896 | 0.052117 | 0.0 | 0.0 | 0.000000e+00 | 0.006725 | 0.087767 | 0.020031 | 0.019447 | 0.015745 |
1 | 4.734955e+04 | 4.734214e+04 | -1.057276 | 1.041025 | 0.007412 | 1.0 | 0.004864 | 1.000000 | 0.939013 | 0.019994 | 0.043101 | 0.0 | 0.031618 | 0.000077 | 0.002425 | 0.016712 | 0.018867 | 0.052006 | 0.0 | 0.0 | 0.000000e+00 | 0.006734 | 0.087585 | 0.019997 | 0.019416 | 0.015717 |
2 | 4.763479e+04 | 4.762738e+04 | -1.057276 | 1.041023 | 0.007410 | 1.0 | 0.004866 | 1.000000 | 0.939013 | 0.019994 | 0.956899 | 1.0 | 0.031618 | 0.000077 | 0.002425 | 0.016712 | 0.018867 | 0.052006 | 1.0 | 1.0 | 8.247443e-19 | 0.006735 | 0.087585 | 0.019997 | 0.019416 | 0.015717 |
3 | 1.455653e+07 | 1.508985e+07 | -0.060572 | 0.757247 | -0.067181 | 1.0 | 0.766606 | 1.000000 | 0.875886 | 0.016517 | 0.471467 | 1.0 | 0.856485 | 0.001111 | 0.001297 | 0.000261 | 0.033327 | 0.036011 | 1.0 | 1.0 | 2.547627e-18 | 0.020803 | 0.069600 | 0.031568 | 0.029551 | 0.016206 |
4 | 2.476148e+07 | 2.476143e+07 | 1.116427 | 0.933827 | 0.761301 | 1.0 | 1.000000 | 0.109402 | 0.308190 | 0.025502 | 0.005890 | 1.0 | 0.036244 | 0.000082 | 0.002276 | 0.007401 | 0.011716 | 0.048275 | 1.0 | 0.0 | 6.733970e-19 | 0.005499 | 0.067392 | 0.011819 | 0.009761 | 0.013125 |
5 | 2.476176e+07 | 2.476165e+07 | 1.081985 | 0.903959 | 0.711235 | 1.0 | 1.000000 | 0.119073 | 0.352498 | 0.027924 | 0.007796 | 1.0 | 0.042208 | 0.000090 | 0.002131 | 0.007458 | 0.012066 | 0.046764 | 1.0 | 0.0 | 8.358433e-19 | 0.006825 | 0.066289 | 0.012185 | 0.010161 | 0.012988 |
6 | 2.476205e+07 | 2.476193e+07 | 1.070584 | 0.917638 | 0.644437 | 1.0 | 1.000000 | 0.142031 | 0.521885 | 0.026420 | 0.012484 | 1.0 | 0.044094 | 0.000092 | 0.002094 | 0.007832 | 0.012396 | 0.046423 | 1.0 | 0.0 | 1.074881e-18 | 0.008777 | 0.066652 | 0.012884 | 0.011197 | 0.013033 |
7 | 2.476215e+07 | 2.476202e+07 | 1.065823 | 0.917655 | 0.625517 | 1.0 | 1.000000 | 0.148736 | 0.554173 | 0.026439 | 0.013648 | 1.0 | 0.044925 | 0.000093 | 0.002078 | 0.007942 | 0.012508 | 0.046274 | 1.0 | 0.0 | 1.118010e-18 | 0.009129 | 0.066724 | 0.013071 | 0.011445 | 0.013044 |
8 | 2.477452e+07 | 2.477439e+07 | 2.665004 | -0.107629 | 4.486246 | 1.0 | 1.000000 | 0.568848 | 0.627061 | 0.025983 | 0.085722 | 0.0 | 0.044634 | 0.000093 | 0.002084 | 0.014539 | 0.017446 | 0.047906 | 0.0 | 1.0 | 0.000000e+00 | 0.021255 | 0.079891 | 0.018020 | 0.017207 | 0.006356 |
9 | 2.477452e+07 | 2.477439e+07 | 2.665004 | -0.107629 | 4.486246 | 1.0 | 1.000000 | 0.568848 | 0.627061 | 0.025983 | 0.085722 | 0.0 | 0.044634 | 0.000093 | 0.002084 | 0.014539 | 0.017446 | 0.047906 | 0.0 | 1.0 | 0.000000e+00 | 0.021255 | 0.079891 | 0.018020 | 0.017207 | 0.006356 |
10 | 2.477857e+07 | 2.499999e+07 | 0.283663 | 0.041842 | 0.679834 | 1.0 | 1.000000 | 0.554919 | 0.884184 | 0.000612 | 0.133847 | 1.0 | 0.563388 | 0.000743 | 0.001319 | 0.003915 | 0.027542 | 0.036532 | 1.0 | 0.0 | 3.299055e-19 | 0.002694 | 0.067989 | 0.025949 | 0.024819 | 0.015164 |
11 | 2.477902e+07 | 2.478546e+07 | 2.169190 | -0.118765 | 5.225849 | 1.0 | 1.000000 | 0.638308 | 0.605378 | 0.015513 | 0.119236 | 1.0 | 0.127448 | 0.000197 | 0.001544 | 0.011954 | 0.018431 | 0.041074 | 1.0 | 1.0 | 2.378544e-18 | 0.019422 | 0.071459 | 0.016947 | 0.016302 | 0.005316 |
From the high value of \(\chi^2\), we see that this model does not seem to be able to replicate the experimentally measured values. Let us take a look at the individual contributions to \(\chi^2\) for the first point:
Model0.print_chisq(df.loc[0])
'me/mu': 0.0048458380255911905, chisq: 0.05252811475246681
'mu/mt': 1.0, chisq: 43960.11111111111
's12^2': 0.9391272761692583, chisq: 2810.124385323049
's13^2': 0.020008064149810444, chisq: 15.032334445663338
's23^2': 0.04301966448870213, chisq: 543.503523800527
'd/pi': 0.0, chisq: 10.98525
'm21^2': 7.634175414433096e-05, chisq: 1.115587066597515
'm3l^2': 0.002435482734326478, chisq: 7.142800422396115
Total chi-square: 47348.067520284094
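To see where such a contribution comes from, consider ‘s12^2’. Assuming a simple Gaussian form \(\chi^2 = \left((x_\mathrm{model} - x_\mathrm{exp})/\sigma\right)^2\) with the NuFit v5.2 NO best fit \(\sin^2\theta_{12} = 0.303\) and upper \(1\sigma\) uncertainty \(0.012\) (these experimental numbers are our assumption for illustration, not read off from the FlavorPy source), the printed value can be reproduced:

```python
s12sq_model = 0.9391272761692583  # 's12^2' of the fitted point above
s12sq_exp, sigma = 0.303, 0.012   # assumed NuFit v5.2 NO best fit and 1-sigma error

chisq = ((s12sq_model - s12sq_exp) / sigma) ** 2
print(round(chisq, 3))  # 2810.124, matching the 's12^2' contribution above
```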
It looks like several observables are not in agreement with the experimental data. Note that \(\chi^2=x\) is often interpreted as the specific point lying in the \(\sqrt{x}\,\sigma\) confidence level region.
All in all, the model was probably too simple, or we need to widen the boundaries of our parameter space.