Welcome to a pyGSTi analysis report! This report is organized into tabs, each of which is accessible from the sidebar on the left. This Summary tab summarizes the most popular analyses and figures of merit. Much more detailed analysis is available on other tabs. If this report encapsulates multiple datasets, estimates, or gauges, then you can switch between those using the dropdown menus on the sidebar. For more information about how to use this report, click on the Help tab link to the left.
Model violation summary. This figure is about goodness-of-fit. It indicates how well each of the estimates contained in this report fits its corresponding data. PyGSTi maximizes the loglikelihood, i.e., it minimizes -2\log\mathrm{Pr}(\mathrm{data}|\mathrm{gateset}), and compares the minimized value to what we expect to see if the data were generated by a Markovian gateset. In this plot, each bar or box shows by how many standard deviations the actual value exceeds its expected value. Expected values and standard deviations are derived from \chi^2 theory. Low values indicate better fits (less model violation).
{{ final_fits_comparison_plot|render }}
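The "number of standard deviations" statistic described above follows directly from \chi^2 theory: a \chi^2 variable with k degrees of freedom has mean k and standard deviation \sqrt{2k}. Here is a minimal sketch of that computation in NumPy; the function name n_sigma and the example numbers are invented for illustration, not pyGSTi's internal API:

```python
# Minimal sketch of the "standard deviations of model violation" statistic,
# assuming 2*Delta(log L) is chi^2-distributed with k degrees of freedom.
import numpy as np

def n_sigma(two_delta_logl, k):
    """Model violation in standard deviations.

    two_delta_logl : observed 2*(log L_max - log L(model))
    k              : degrees of freedom (roughly, #independent data
                     probabilities minus #non-gauge model parameters)
    """
    expected = k              # mean of a chi^2_k distribution
    std = np.sqrt(2 * k)      # standard deviation of a chi^2_k distribution
    return (two_delta_logl - expected) / std

# Example (made-up numbers): 2*Delta(log L) = 1250 on k = 1000 dof
print(n_sigma(1250.0, 1000))  # ~5.6 sigma of model violation
```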
Model violation per iteration. This figure is about goodness-of-fit. This plot shows how well this estimate fits the data. PyGSTi maximizes the loglikelihood, i.e., it minimizes -2\log\mathrm{Pr}(\mathrm{data}|\mathrm{gateset}), and compares the minimized value to what we expect to see if the data were generated by a Markovian gateset. In this plot, each bar shows by how many standard deviations the actual value exceeds its expected value. Expected values and standard deviations are derived from \chi^2 theory. On the horizontal axis, L indexes different ML estimates based on datasets including only circuits of length up to L. Low values indicate better fits (less model violation). Each bar is colored according to the star rating shown in the Model Violation tab.
{{ final_model_fit_progress_bar_plot_sum|render }}
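The per-iteration plot applies the same statistic once per maximum circuit length L, since each iteration fits a dataset restricted to circuits of length at most L. A sketch, with entirely made-up (L, 2*Delta(log L), k) triples standing in for real per-iteration fit results:

```python
# Per-iteration model violation: same chi^2 comparison as above, evaluated
# once per maximum circuit length L. All numbers below are hypothetical.
import numpy as np

iterations = [  # (max length L, observed 2*Delta(log L), degrees of freedom k)
    (1, 95.0, 100),
    (2, 230.0, 220),
    (4, 510.0, 460),
    (8, 1040.0, 900),
]
for L, tdl, k in iterations:
    nsig = (tdl - k) / np.sqrt(2 * k)
    print(f"L = {L}: N_sigma = {nsig:.2f}")
```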
Histogram of per-circuit model violation. This figure is about goodness-of-fit. When the estimate doesn't fit the data perfectly, we can quantify how badly it fails to predict each individual circuit in the dataset, using the excess loglikelihood (-2\log\mathrm{Pr}(\mathrm{data}|\mathrm{gateset})) above and beyond the minimum value (-2\log\mathrm{Pr}(\mathrm{data}|\mathrm{observed\ frequencies})). This plot shows a histogram of those values for all the circuits in the dataset. Ideally, they should follow the \chi^2 distribution shown by the solid line. Red indicates data that are inconsistent with the model at the 0.95 confidence level, as shown in more detail in the Model Violation tab.
{{ final_model_fit_histogram|render }}
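For a single circuit with outcome counts N_s, observed frequencies f_s, and model probabilities p_s, the excess loglikelihood works out to 2\sum_s N_s \log(f_s/p_s), which should be approximately \chi^2-distributed with (#outcomes - 1) degrees of freedom. A hedged sketch with made-up counts, not drawn from any actual pyGSTi dataset:

```python
# Per-circuit excess loglikelihood for one two-outcome circuit (made-up data).
import numpy as np
from scipy.stats import chi2

def excess_logl(counts, probs):
    """2*[log Pr(data|observed freqs) - log Pr(data|model)] for one circuit."""
    counts = np.asarray(counts, dtype=float)
    probs = np.asarray(probs, dtype=float)
    freqs = counts / counts.sum()
    nz = counts > 0  # zero-count terms contribute nothing (0 * log 0 -> 0)
    return 2 * np.sum(counts[nz] * np.log(freqs[nz] / probs[nz]))

val = excess_logl([53, 47], [0.5, 0.5])  # observed 53/47; model predicts 50/50
# With 2 outcomes this should be ~chi^2 with 1 degree of freedom; values
# beyond the 95th percentile would be flagged red in the histogram above.
print(val, val > chi2.ppf(0.95, df=1))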
Comparison of estimated gates to targets. This table is about gate error metrics (fidelity). The metrics in this table compare the estimated gates to their ideal counterparts, and can generally be interpreted as some kind of error rate (per gate use). Entanglement (process) infidelity and 1/2-diamond-norm are the best known of these; they are equal for purely stochastic errors, but coherent errors contribute much more to the diamond norm. 1/2-trace-distance is a proxy for the diamond norm that doesn't require CVXPY to be installed. The eigenvalue metrics are gauge-invariant versions of fidelity and diamond norm that depend only on the gate itself (not its relationship to other gates). Hovering the pointer over a heading will pop up a description.
{{ final_gates_vs_target_table_insummary|render }}
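These metrics can also be computed directly with pyGSTi's tools module. The sketch below assumes pygsti.tools.entanglement_fidelity, pygsti.tools.jtracedist, and pygsti.tools.diamonddist (present in recent pyGSTi versions; exact signatures may vary), and uses a depolarized target model as a stand-in for a real GST estimate:

```python
# Hedged sketch of the table's headline metrics for each gate in a model.
import pygsti
from pygsti.modelpacks import smq1Q_XYI

target = smq1Q_XYI.target_model()
noisy = target.depolarize(op_noise=0.01)   # stand-in for a GST estimate

for lbl in target.operations:
    A = noisy.operations[lbl].to_dense()   # estimated process matrix
    B = target.operations[lbl].to_dense()  # ideal (target) process matrix
    fid = pygsti.tools.entanglement_fidelity(A, B, 'pp')
    tdist = pygsti.tools.jtracedist(A, B, 'pp')     # 1/2-trace-distance
    # ddist = pygsti.tools.diamonddist(A, B, 'pp')  # requires CVXPY
    print(lbl, 1 - fid, tdist)
```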