Optimize a neuron model

Introduction#

CompNeuroPy provides the OptNeuron class, which can be used to define an optimization of an ANNarchy neuron model (tuning its parameters). You can either fit your neuron model to data or reproduce the dynamics of a different neuron model (for example, to reduce a more complex model). In both cases, you have to define the experiment which generates the data of interest with your neuron model.

Warning

OptNeuron has to be imported from "CompNeuroPy.opt_neuron", and you have to install torch, sbi, and hyperopt (e.g., pip install torch sbi hyperopt).

The following optimization methods are used:

  • hyperopt

    Bergstra, J., Yamins, D., Cox, D. D. (2013) Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures. Proc. of the 30th International Conference on Machine Learning (ICML 2013), June 2013, pp. I-115 to I-123.

  • sbi

    Tejero-Cantero et al., (2020). sbi: A toolkit for simulation-based inference. Journal of Open Source Software, 5(52), 2505, https://doi.org/10.21105/joss.02505

Example:#

opt = OptNeuron(
    experiment=my_exp,
    get_loss_function=get_loss,
    variables_bounds=variables_bounds,
    results_soll=experimental_data["results_soll"],
    time_step=experimental_data["time_step"],
    compile_folder_name="annarchy_opt_neuron_example",
    neuron_model=my_neuron,
    method="hyperopt",
    record=["r"],
)

A full example is available in the Examples.
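
In the example above, my_exp, get_loss, my_neuron, and experimental_data are placeholders defined elsewhere (see the Examples). The variables_bounds argument follows a fixed convention: a single value defines a constant parameter, while a list of two values defines the lower and upper bound of an optimized parameter. A plausible definition could look like this (the parameter names a, b, and c are hypothetical):

variables_bounds = {
    "a": [-10.0, 10.0],  # optimized within these bounds
    "b": [0.0, 1.0],     # optimized within these bounds
    "c": 2.0,            # constant, not optimized
}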

Run the optimization#

To run the optimization, simply call the run() function of the OptNeuron object.
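
For example (a minimal sketch; the number of evaluations and the file names are arbitrary choices):

best = opt.run(
    max_evals=1000,
    results_file_name="best",
    sbi_plot_file="posterior.svg",  # only used with method="sbi"
)
### the returned dictionary contains the optimized parameters and, among others:
print(best["loss"])     # the obtained loss
print(best["results"])  # the experiment results with the optimized parameters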

Define the experiment#

You have to define a CompNeuroExp object containing a run() function in which the simulations and recordings are performed.

Warning

While defining the CompNeuroExp run() function for the optimization with OptNeuron, you must observe the following rules:

  • the run() function has to take a single argument (besides self) which contains the name of the population consisting of a single neuron of the optimized neuron model (you can use this name to access the population)
  • call self.reset(parameters=False) at the beginning of the run() function; this way, the neuron is in its compile state (except for the parameters) at the beginning of each simulation run
  • always set parameters=False when calling self.reset() (otherwise the parameter optimization will not work)
  • besides the optimized parameters and the loss, the results of the experiment (obtained with the optimized parameters) will be available after the optimization; you can store any additional data in the self.data attribute

Example:#

class my_exp(CompNeuroExp):
    """
    Define an experiment by inheriting from CompNeuroExp.

    CompNeuroExp provides the attributes:

        monitors (CompNeuroMonitors):
            a CompNeuroMonitors object to do recordings; defined during init,
            otherwise None
        data (dict):
            a dictionary for storing any optional data

    and the functions:
        reset():
            resets the model and monitors
        results():
            returns a results object
    """

    def run(self, population_name):
        """
        Do the simulations and recordings.

        To use the CompNeuroExp class, you need to define a run function which
        does the simulations and recordings. The run function should return the
        results object which can be obtained by calling self.results().

        When using CompNeuroExp with OptNeuron, the run function should have a
        single argument: the name of the population which is automatically created
        by OptNeuron and contains a single neuron of the model which should be
        optimized.

        Args:
            population_name (str):
                name of the population which contains a single neuron; this will be
                automatically provided by OptNeuron

        Returns:
            results (CompNeuroExp._ResultsCl):
                results object with attributes:
                    recordings (list):
                        list of recordings
                    recording_times (recording_times_cl):
                        recording times object
                    mon_dict (dict):
                        dict of recorded variables of the monitors
                    data (dict):
                        dict with optional data stored during the experiment
        """
        ### For OptNeuron you have to reset the model and monitors at the beginning of
        ### the run function! Do not reset the parameters, otherwise the optimization
        ### will not work!
        self.reset(parameters=False)

        ### you have to start monitors within the run function, otherwise nothing will
        ### be recorded
        self.monitors.start()

        ### run the simulation; remember to set parameters=False in the reset function!
        ...
        simulate(100)
        self.reset(parameters=False)
        ...

        ### optional: store anything you want in the data dict, for example, information
        ### about the simulations. This is not used for the optimization but can be
        ### retrieved after the optimization is finished
        self.data["sim"] = sim_step.simulation_info()  # sim_step: simulation object from the elided code above
        self.data["population_name"] = population_name
        self.data["time_step"] = dt()

        ### return results, use the object's self.results()
        return self.results()
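
Note that you never call the experiment yourself: OptNeuron instantiates it with its own monitors and calls run() internally during each loss evaluation, roughly like this (simplified from the source code shown below):

experiment_instance = my_exp(monitors=monitors)         # done by OptNeuron
results_ist = experiment_instance.run(population_name)  # done for each loss evaluation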

The get_loss_function#

The get_loss_function must have two arguments. When it is called during the optimization, the first argument is always the results object returned by the experiment, i.e., the results of the neuron you want to optimize. The second argument depends on whether you have specified results_soll, i.e., data to be reproduced by the neuron_model, or a target_neuron_model whose results are to be reproduced by the neuron_model. Thus, the second argument is either the results_soll provided to the OptNeuron class during initialization or another results object (returned by the CompNeuroExp run() function) generated with the target_neuron_model.

Example:#

In this example we assume that results_soll was provided during the initialization of the OptNeuron class (no target_neuron_model used).

def get_loss(results_ist: CompNeuroExp._ResultsCl, results_soll):
    """
    Function which has to take the arguments results_ist and results_soll and should
    calculate and return the loss. This structure is required by the OptNeuron class.

    Args:
        results_ist (object):
            the results object returned by the run function of the experiment (see above)
        results_soll (any):
            the target data directly provided to OptNeuron during initialization

    Returns:
        loss (float or list of floats):
            the loss
    """
    ### get the recordings and other important things for calculating the loss from
    ### results_ist; we do not use all available information here, but you could
    rec_ist = results_ist.recordings
    pop_ist = results_ist.data["population_name"]
    neuron = 0

    ### get the data for calculating the loss from the results_soll
    r_target_0 = results_soll[0]
    r_target_1 = results_soll[1]

    ### get the data for calculating the loss from the recordings
    r_ist_0 = rec_ist[0][f"{pop_ist};r"][:, neuron]
    r_ist_1 = rec_ist[1][f"{pop_ist};r"][:, neuron]

    ### calculate the loss, e.g. the root mean squared error
    rmse1 = rmse(r_target_0, r_ist_0)
    rmse2 = rmse(r_target_1, r_ist_1)

    ### return the loss; one can return a single value or a list of values which will
    ### be summed during the optimization
    return [rmse1, rmse2]
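
The rmse helper used above is not defined in the snippet; a minimal sketch with NumPy could look like this:

import numpy as np

def rmse(target, ist):
    """Root mean squared error between target and obtained recordings."""
    return np.sqrt(np.mean((np.asarray(target) - np.asarray(ist)) ** 2))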

CompNeuroPy.opt_neuron.OptNeuron #

This class is used to optimize neuron models with ANNarchy.

Source code in src/CompNeuroPy/opt_neuron.py
class OptNeuron:
    """
    This class is used to optimize neuron models with ANNarchy.
    """

    opt_created = []

    @check_types()
    def __init__(
        self,
        experiment: Type[CompNeuroExp],
        get_loss_function: Callable[[Any, Any], float | list[float]],
        variables_bounds: dict[str, float | list[float]],
        neuron_model: Neuron,
        results_soll: Any | None = None,
        target_neuron_model: Neuron | None = None,
        time_step: float = 1.0,
        compile_folder_name: str = "annarchy_OptNeuron",
        num_rep_loss: int = 1,
        method: str = "hyperopt",
        prior=None,
        fv_space: list = None,
        record: list[str] = [],
    ):
        """
        This prepares the optimization. To run the optimization call the run function.

        Args:
            experiment (CompNeuroExp class):
                CompNeuroExp class containing a 'run' function which defines the
                simulations and recordings

            get_loss_function (function):
                function which takes results_ist and results_soll as arguments and
                calculates/returns the loss

            variables_bounds (dict):
                Dictionary with parameter names (keys) and their bounds (values). If
                single values are given as values, the parameter is constant, i.e., not
                optimized. If a list is given as value, the parameter is optimized and
                the list contains the lower and upper bound of the parameter (order is
                not important).

            neuron_model (ANNarchy Neuron):
                The neuron model whose parameters should be optimized.

            results_soll (Any, optional):
                Some variable which contains the target data and can be used by the
                get_loss_function (second argument of get_loss_function)
                !!! warning
                    Either provide results_soll or a target_neuron_model, not both!
                Default: None.

            target_neuron_model (ANNarchy Neuron, optional):
                The neuron model which produces the target data by running the
                experiment.
                !!! warning
                    Either provide results_soll or a target_neuron_model, not both!
                Default: None.

            time_step (float, optional):
                The time step for the simulation in ms. Default: 1.

            compile_folder_name (string, optional):
                The name of the annarchy compilation folder within annarchy_folders/.
                Default: 'annarchy_OptNeuron'.

            num_rep_loss (int, optional):
                Only interesting for noisy simulations/models. How often the
                simulation should be run to calculate the loss (the defined number of
                losses is obtained and averaged). Default: 1.

            method (str, optional):
                Either 'sbi' or 'hyperopt'. If 'sbi' is used, the optimization is
                performed with sbi. If 'hyperopt' is used, the optimization is
                performed with hyperopt. Default: 'hyperopt'.

            prior (distribution, optional):
                The prior distribution used by sbi. Default: None, i.e., uniform
                distributions between the variable bounds are assumed.

            fv_space (list, optional):
                The search space for hyperopt. Default: None, i.e., uniform
                distributions between the variable bounds are assumed.

            record (list, optional):
                List of strings which define what variables of the tuned neuron should
                be recorded. Default: [].
        """

        if len(self.opt_created) > 0:
            print(
                "OptNeuron: Error: Already another OptNeuron created. Only create one per python session!"
            )
            quit()
        else:
            print(
                "OptNeuron: Initialize OptNeuron... do not create anything with ANNarchy before!"
            )

            ### set object variables
            self.opt_created.append(1)
            self.record = record
            self.results_soll = results_soll
            self.variables_bounds = variables_bounds
            self.fitting_variables_name_list = self._get_fitting_variables_name_list()
            self.method = method
            if method == "hyperopt":
                if fv_space is None:
                    self.fv_space = self._get_hyperopt_space()
                else:
                    self.fv_space = fv_space
            self.const_params = self._get_const_params()
            self.num_rep_loss = num_rep_loss
            self.neuron_model = neuron_model
            if method == "sbi":
                self.prior = self._get_prior(prior)
            self.target_neuron = target_neuron_model
            self.compile_folder_name = compile_folder_name
            self.__get_loss__ = get_loss_function

            ### check target_neuron/results_soll
            self._check_target()
            ### check neuron models
            self._check_neuron_models()

            ### setup ANNarchy
            setup(dt=time_step)

            ### create and compile model
            ### if neuron models and target neuron model --> create both models then
            ### test, then clear and create only model for neuron model
            model, target_model, monitors = self._generate_models()

            self.pop = model.populations[0]
            if target_model is not None:
                self.pop_target = target_model.populations[0]
            else:
                self.pop_target = None
            ### create experiment with current monitors
            self.experiment = experiment(monitors=monitors)

            ### check variables of model
            self._test_variables()

            ### check neuron models, experiment, get_loss
            ### if results_soll is None --> generate results_soll
            self._check_get_loss_function()

            ### after checking neuron models, experiment, get_loss
            ### if two models exist --> clear ANNarchy and create/compile again only
            ### standard model, thus recreate also monitors and experiment
            clear()
            model, _, monitors = self._generate_models()
            self.monitors = monitors
            self.experiment = experiment(monitors=monitors)

    def _generate_models(self):
        """
        Generates the tuned model and the target_model (only if results_soll is None).

        Returns:
            model (CompNeuroModel):
                The model which is used for the optimization.

            target_model (CompNeuroModel):
                The model which is used to generate the target data. If results_soll is
                provided, target_model is None.

            monitors (CompNeuroMonitors):
                The monitors which are used to record the data. If no variables are
                recorded, monitors is None.
        """
        with ef.suppress_stdout():
            model = None
            target_model = None
            monitors = None
            if self.results_soll is None:
                ### create two models
                model = CompNeuroModel(
                    model_creation_function=self._raw_neuron,
                    model_kwargs={"neuron": self.neuron_model, "name": "model_neuron"},
                    name="standard_model",
                    do_create=True,
                    do_compile=False,
                    compile_folder_name=self.compile_folder_name,
                )

                target_model = CompNeuroModel(
                    model_creation_function=self._raw_neuron,
                    model_kwargs={
                        "neuron": self.target_neuron,
                        "name": "target_model_neuron",
                    },
                    name="target_model",
                    do_create=True,
                    do_compile=True,
                    compile_folder_name=self.compile_folder_name,
                )

                ### create monitors
                if len(self.record) > 0:
                    monitors = CompNeuroMonitors(
                        {
                            pop_name: self.record
                            for pop_name in [
                                model.populations[0],
                                target_model.populations[0],
                            ]
                        }
                    )

            else:
                ### create one model
                model = CompNeuroModel(
                    model_creation_function=self._raw_neuron,
                    model_kwargs={"neuron": self.neuron_model, "name": "model_neuron"},
                    name="single_model",
                    do_create=True,
                    do_compile=True,
                    compile_folder_name=self.compile_folder_name,
                )
                ### create monitors
                if len(self.record) > 0:
                    monitors = CompNeuroMonitors({model.populations[0]: self.record})

        return model, target_model, monitors

    def _check_neuron_models(self):
        """
        Checks if the neuron models are ANNarchy neuron models.
        """
        if not (isinstance(self.neuron_model, type(Neuron()))) or (
            self.target_neuron is not None
            and not (isinstance(self.target_neuron, type(Neuron())))
        ):
            print(
                "OptNeuron: Error: neuron_model and/or target_neuron_model have to be ANNarchy neuron models"
            )
            quit()

    def _check_target(self):
        """
        Check if either results_soll or target_neuron are provided and not both.
        """
        if self.target_neuron is None and self.results_soll is None:
            print(
                "OptNeuron: Error: Either provide results_soll or target_neuron_model"
            )
            quit()
        elif self.target_neuron is not None and self.results_soll is not None:
            print(
                "OptNeuron: Error: Either provide results_soll or target_neuron_model, not both"
            )
            quit()

    def _get_prior(self, prior):
        """
        Get the prior distribution used by sbi. If no prior is given, uniform
        distributions between the variable bounds are assumed. If a prior is given,
        this prior is used.

        Args:
            prior (distribution, optional):
                The prior distribution used by sbi. Default: None, i.e., uniform
                distributions between the variable bounds are assumed.

        Returns:
            prior (distribution):
                The prior distribution used by sbi.
        """
        if prior is None:
            prior_min = []
            prior_max = []
            for _, param_bounds in self.variables_bounds.items():
                if isinstance(param_bounds, list):
                    prior_min.append(param_bounds[0])
                    prior_max.append(param_bounds[1])

            return utils.BoxUniform(
                low=torch.as_tensor(prior_min), high=torch.as_tensor(prior_max)
            )
        else:
            return prior

    def _get_fitting_variables_name_list(self):
        """
        Returns a list with the names of the fitting variables.

        Returns:
            fitting_variables_name_list (list):
                list with names of fitting variables
        """
        name_list = []
        for param_name, param_bounds in self.variables_bounds.items():
            if isinstance(param_bounds, list):
                name_list.append(param_name)
        return name_list

    def _get_hyperopt_space(self):
        """
        Generates the hyperopt variable space from the fitting variable bounds. The
        variable space is a uniform distribution between the bounds.

        Returns:
            fitting_variables_space (list):
                list with hyperopt variables
        """
        fitting_variables_space = []
        for param_name, param_bounds in self.variables_bounds.items():
            if isinstance(param_bounds, list):
                fitting_variables_space.append(
                    hp.uniform(param_name, min(param_bounds), max(param_bounds))
                )
        return fitting_variables_space

    def _get_const_params(self):
        """
        Returns:
            const_params (dict):
                Dictionary with constant variables. The keys are the parameter names
                and the values are the parameter values.
        """
        const_params = {}
        for param_name, param_bounds in self.variables_bounds.items():
            if not (isinstance(param_bounds, list)):
                const_params[param_name] = param_bounds
        return const_params

    def _check_get_loss_function(self):
        """
        Checks if the get_loss_function is compatible to the experiment and the neuron
        model(s). To test, the experiment is run once with the tuned neuron model
        (generating results_ist) and once with the target neuron model (if provided,
        generating results_soll). Then, the get_loss_function is called with the
        results_ist and results_soll.
        """
        print("checking neuron_models, experiment, get_loss...", end="")

        fitparams = []
        for bounds in self.variables_bounds.values():
            if isinstance(bounds, list):
                fitparams.append(bounds[0])

        if self.results_soll is not None:
            ### only generate results_ist with standard neuron model
            results_ist = self._run_simulator_with_results(fitparams)["results"]
        else:
            ### run simulator with both populations (standard neuron model and target
            ### neuron model) and generate results_ist and results_soll
            results_ist = self._run_simulator_with_results(fitparams)["results"]
            self.results_soll = self._run_simulator_with_results(
                fitparams, pop=self.pop_target
            )["results"]

        try:
            self.__get_loss__(results_ist, self.results_soll)
        except:
            print(
                "\nThe get_loss_function, experiment and neuron model(s) are not compatible:\n"
            )
            traceback.print_exc()
            quit()
        print("Done\n")

    def _raw_neuron(self, neuron, name):
        """
        Generates a population with one neuron of the given neuron model.

        Args:
            neuron (ANNarchy Neuron):
                The neuron model.

            name (str):
                The name of the population.
        """
        Population(1, neuron=neuron, name=name)

    def _test_variables(self):
        """
        Check if the tuned neuron model contains all parameters which are defined in
        variables_bounds or even more.
        """
        ### collect all names
        all_vars_names = np.concatenate(
            [
                np.array(list(self.const_params.keys())),
                np.array(self.fitting_variables_name_list),
            ]
        ).tolist()
        ### check if pop has these parameters
        pop_parameter_names = get_population(self.pop).attributes.copy()
        for name in pop_parameter_names.copy():
            if name in all_vars_names:
                all_vars_names.remove(name)
                pop_parameter_names.remove(name)
        if len(pop_parameter_names) > 0:
            print(
                "OptNeuron: WARNING: attributes",
                pop_parameter_names,
                "are not used/initialized.",
            )
        if len(all_vars_names) > 0:
            print(
                "OptNeuron: WARNING: The neuron_model does not contain parameters",
                all_vars_names,
                "!",
            )

    def _run_simulator(self, fitparams):
        """
        Runs the function simulator with the multiprocessing manager (if function is
        called multiple times this saves memory, otherwise same as calling simulator
        directly).

        Args:
            fitparams (list):
                list with values for fitting parameters

        Returns:
            return_dict (dict):
                dictionary needed for optimization with hyperopt, containing the loss,
                the loss variance (in case of noisy models with multiple runs per loss
                calculation), and the status (STATUS_OK for hyperopt).
        """

        ### initialize manager and generate m_list = dictionary to save data
        manager = multiprocessing.Manager()
        m_list = manager.dict()

        ### in case of noisy models, optionally run multiple simulations here to average the loss
        lossAr = np.zeros(self.num_rep_loss)

        return_results = False
        for nr_run in range(self.num_rep_loss):
            ### initialize for each run a new rng (--> not always have same noise in case of noisy models/simulations)
            rng = np.random.default_rng()
            ### run simulator with multiprocessing manager
            proc = Process(
                target=self._simulator, args=(fitparams, rng, m_list, return_results)
            )
            proc.start()
            proc.join()
            ### get simulation results/loss
            lossAr[nr_run] = m_list[0]

        ### calculate mean and std of loss
        if self.num_rep_loss > 1:
            loss = np.mean(lossAr)
            std = np.std(lossAr)
        else:
            loss = lossAr[0]
            std = None

        ### return loss and other things for optimization
        if self.num_rep_loss > 1:
            return {"status": STATUS_OK, "loss": loss, "loss_variance": std}
        else:
            return {"status": STATUS_OK, "loss": loss}

    def _sbi_simulation_wrapper(self, fitparams):
        """
        This function is called by sbi. It calls the simulator function and
        returns the loss and adjusts the format of the input parameters.

        Args:
            fitparams (tensor):
                either a batch of parameters (tensor with two dimensions) or a single
                parameter set

        Returns:
            loss (tensor):
                loss as tensor for sbi inference
        """
        fitparams = np.asarray(fitparams)
        if len(fitparams.shape) == 2:
            ### batch parameters!
            data = []
            for idx in range(fitparams.shape[0]):
                data.append(self._run_simulator(fitparams[idx])["loss"])
        else:
            ### single parameter set!
            data = [self._run_simulator(fitparams)["loss"]]

        return torch.as_tensor(data)

    def _run_simulator_with_results(self, fitparams, pop=None):
        """
        Runs the function simulator with the multiprocessing manager (if function is
        called multiple times this saves memory, otherwise same as calling simulator
        directly) and also returns the results.

        Args:
            fitparams (list):
                list with values for fitting parameters

            pop (str, optional):
                ANNarchy population name. Default: None, i.e., the tuned population
                is used.

        Returns:
            return_dict (dict):
                dictionary needed for optimization with hyperopt, containing the loss,
                the loss variance (in case of noisy models with multiple runs per loss
                calculation), and the status (STATUS_OK for hyperopt) and the results
                generated by the experiment.
        """
        ### check if pop is given
        if pop is None:
            pop = self.pop
        ### initialize manager and generate m_list = dictionary to save data
        manager = multiprocessing.Manager()
        m_list = manager.dict()

        ### in case of noisy models, optionally run multiple simulations here to average the loss
        lossAr = np.zeros(self.num_rep_loss)
        all_loss_list = []
        return_results = True
        for nr_run in range(self.num_rep_loss):
            ### initialize for each run a new rng (--> not always have same noise in case of noisy models/simulations)
            rng = np.random.default_rng()
            ### run simulator with multiprocessing manager
            proc = Process(
                target=self._simulator,
                args=(fitparams, rng, m_list, return_results, pop),
            )
            proc.start()
            proc.join()
            ### get simulation results/loss
            lossAr[nr_run] = m_list[0]
            results_ist = m_list[1]
            all_loss_list.append(m_list[2])

        all_loss_arr = np.array(all_loss_list)
        ### calculate mean and std of loss
        if self.num_rep_loss > 1:
            loss = np.mean(lossAr)
            std = np.std(lossAr)
            all_loss = np.mean(all_loss_arr, 0)
        else:
            loss = lossAr[0]
            std = None
            all_loss = all_loss_arr[0]

        ### return loss and other things for optimization and results
        if self.num_rep_loss > 1:
            return {
                "status": STATUS_OK,
                "loss": loss,
                "loss_variance": std,
                "std": std,
                "all_loss": all_loss,
                "results": results_ist,
            }
        else:
            return {
                "status": STATUS_OK,
                "loss": loss,
                "std": std,
                "all_loss": all_loss,
                "results": results_ist,
            }

    def _simulator(
        self, fitparams, rng, m_list=[0, 0, 0], return_results=False, pop=None
    ):
        """
        Runs the experiment with the given parameters and 'returns' the loss and
        optionally the results and all individual losses of the get_loss_function. The
        'returned' values are saved in m_list.

        Args:
            fitparams (list):
                list with values for fitting parameters

            rng (numpy random generator):
                random generator for the simulation

            m_list (list, optional):
                list with the loss, the results, and the all_loss. Default: [0, 0, 0].

            return_results (bool, optional):
                If True, the results are returned. Default: False.

            pop (str, optional):
                ANNarchy population name. Default: None, i.e., the tuned population
                is used.
        """
        ### TODO use rng here and add it to CompNeuroExp
        ### check if pop is given
        if pop is None:
            pop = self.pop

        ### set the parameters which should not be optimized and the parameters which
        ### should be optimized before the experiment; they should not be reset by the
        ### experiment!
        self._set_fitting_parameters(fitparams, pop=pop)

        ### conduct loaded experiment
        results = self.experiment.run(pop)

        if self.results_soll is not None:
            ### compute loss
            all_loss = self.__get_loss__(results, self.results_soll)
            if isinstance(all_loss, list) or isinstance(all_loss, type(np.zeros(1))):
                loss = sum(all_loss)
            else:
                loss = all_loss
        else:
            all_loss = 999
            loss = 999
        ### "return" loss and other optional things
        m_list[0] = loss
        if return_results:
            m_list[1] = results
            m_list[2] = all_loss

    def _set_fitting_parameters(
        self,
        fitparams,
        pop=None,
    ):
        """
        Sets all given parameters for the population pop.

        Args:
            pop (str, optional):
                ANNarchy population name. Default: None, i.e., the tuned population
                is used.
        """
        if pop is None:
            pop = self.pop

        ### get all variables dict (combine fitting variables and const variables)
        all_variables_dict = self.const_params.copy()

        for fitting_variable_idx, fitting_variable_name in enumerate(
            self.fitting_variables_name_list
        ):
            all_variables_dict[fitting_variable_name] = fitparams[fitting_variable_idx]

        ### evaluate variables defined by a str
        for key, val in all_variables_dict.items():
            if isinstance(val, str):
                all_variables_dict[key] = ef.evaluate_expression_with_dict(
                    val, all_variables_dict
                )

        ### only set parameters of the fitted neuron model (in case target neuron model is given)
        if pop == self.pop:
            ### set parameters
            for param_name, param_val in all_variables_dict.items():
                pop_parameter_names = get_population(pop).attributes
                ### only if param_name in parameter attributes
                if param_name in pop_parameter_names:
                    setattr(
                        get_population(pop),
                        param_name,
                        param_val,
                    )

    def _test_fit(self, fitparams_dict):
        """
        Runs the experiment with the optimized parameters obtained with hyperopt and
        returns the loss, the results and all individual losses of the
        get_loss_function.

        Args:
            fitparams_dict (dict):
                dictionary with parameter names (keys) and their values (values)

        Returns:
            fit (dict):
                dictionary containing the loss, the loss variance (in case of noisy
                models with multiple runs per loss calculation), and the status
                (STATUS_OK for hyperopt) and the results generated by the experiment.
        """
        return self._run_simulator_with_results(
            [fitparams_dict[name] for name in self.fitting_variables_name_list]
        )

    def _run_with_sbi(self, max_evals, sbi_plot_file):
        """
        Runs the optimization with sbi.

        Args:
            max_evals (int):
                number of runs the optimization method performs

            sbi_plot_file (str):
                If you use "sbi": the name of the figure which will be saved and shows
                the posterior.

        Returns:
            best (dict):
                dictionary containing the optimized parameters and the posterior.
        """
        ### get prior bounds
        prior_min = []
        prior_max = []
        for _, param_bounds in self.variables_bounds.items():
            if isinstance(param_bounds, list):
                prior_min.append(param_bounds[0])
                prior_max.append(param_bounds[1])

        ### run sbi
        simulator, prior = prepare_for_sbi(
            self._sbi_simulation_wrapper,
            self.prior,
            {
                "lower_bound": torch.as_tensor(prior_min),
                "upper_bound": torch.as_tensor(prior_max),
            },
        )
        inference = SNPE(prior, density_estimator="mdn")
        theta, x = simulate_for_sbi(
            simulator=simulator,
            proposal=prior,
            num_simulations=max_evals,
            num_workers=1,
        )
        density_estimator = inference.append_simulations(theta, x).train()
        posterior = inference.build_posterior(density_estimator)
        x_o = torch.as_tensor([0])  # data which should be obtained: loss==0
        posterior = posterior.set_default_x(x_o)

        ### get best params
        posterior_samples = posterior.sample(
            (10000,)
        )  # posterior = distribution P(params|data) --> set data and then sample possible parameters
        best_params = posterior_samples[
            torch.argmax(posterior.log_prob(posterior_samples))
        ].numpy()  # sampled parameters with highest prob in posterior

        ### create best dict with best parameters
        best = {}
        for param_idx, param_name in enumerate(self.fitting_variables_name_list):
            best[param_name] = best_params[param_idx]

        ### also return posterior
        best["posterior"] = posterior

        ### plot posterior
        plot_limits = [
            [prior_min[idx], prior_max[idx]] for idx in range(len(prior_max))
        ]
        analysis.pairplot(
            posterior_samples,
            limits=plot_limits,
            ticks=plot_limits,
            fig_size=(5, 5),
            labels=self.fitting_variables_name_list,
        )

        ### save plot
        sf.create_dir("/".join(sbi_plot_file.split("/")[:-1]))
        plt.savefig(sbi_plot_file)

        return best

    @check_types()
    def run(
        self,
        max_evals: int,
        results_file_name: str = "best",
        sbi_plot_file: str = "posterior.svg",
    ):
        """
        Runs the optimization.

        Args:
            max_evals (int):
                number of runs the optimization method performs

            results_file_name (str, optional):
                name of the file which is saved. The file contains the optimized and
                target results, the obtained parameters, the loss, and the SD of the
                loss (in case of noisy models with multiple runs per loss calculation)
                Default: "best".

            sbi_plot_file (str, optional):
                If you use "sbi": the name of the figure which will be saved and shows
                the posterior. Default: "posterior.svg".

        Returns:
            best (dict):
                dictionary containing the optimized parameters (as keys) and:

                - "loss": the loss
                - "all_loss": the individual losses of the get_loss_function
                - "std": the SD of the loss (in case of noisy models with multiple
                    runs per loss calculation)
                - "results": the results generated by the experiment
                - "results_soll": the target results
        """
        if self.method == "hyperopt":
            ### run optimization with hyperopt and return best dict
            best = fmin(
                fn=self._run_simulator,
                space=self.fv_space,
                algo=tpe.suggest,
                max_evals=max_evals,
            )
        elif self.method == "sbi":
            ### run optimization with sbi and return best dict
            best = self._run_with_sbi(max_evals, sbi_plot_file)
        else:
            print("ERROR run; method should be 'hyperopt' or 'sbi'")
            quit()
        fit = self._test_fit(best)
        best["loss"] = fit["loss"]
        if self.method == "sbi":
            print("\tbest loss:", best["loss"])
        best["all_loss"] = fit["all_loss"]
        best["std"] = fit["std"]
        best["results"] = fit["results"]
        best["results_soll"] = self.results_soll
        self.results = best

        ### SAVE OPTIMIZED PARAMS AND LOSS
        sf.save_variables([best], [results_file_name], "parameter_fit")

        return best

__init__(experiment, get_loss_function, variables_bounds, neuron_model, results_soll=None, target_neuron_model=None, time_step=1.0, compile_folder_name='annarchy_OptNeuron', num_rep_loss=1, method='hyperopt', prior=None, fv_space=None, record=[]) #

This prepares the optimization. To run the optimization, call the run function.

Parameters:

experiment (CompNeuroExp class, required):
    CompNeuroExp class containing a 'run' function which defines the simulations and recordings.

get_loss_function (function, required):
    function which takes results_ist and results_soll as arguments and calculates/returns the loss.

variables_bounds (dict, required):
    Dictionary with parameter names (keys) and their bounds (values). If single values are given as values, the parameter is constant, i.e., not optimized. If a list is given as value, the parameter is optimized and the list contains the lower and upper bound of the parameter (order is not important).

neuron_model (ANNarchy Neuron, required):
    The neuron model whose parameters should be optimized.

results_soll (Any, optional, default None):
    Some variable which contains the target data and can be used by the get_loss_function (second argument of get_loss_function). Warning: Either provide results_soll or a target_neuron_model, not both!

target_neuron_model (ANNarchy Neuron, optional, default None):
    The neuron model which produces the target data by running the experiment. Warning: Either provide results_soll or a target_neuron_model, not both!

time_step (float, optional, default 1.0):
    The time step for the simulation in ms.

compile_folder_name (string, optional, default 'annarchy_OptNeuron'):
    The name of the annarchy compilation folder within annarchy_folders/.

num_rep_loss (int, optional, default 1):
    Only relevant for noisy simulations/models. How often the simulation should be run to calculate the loss (the defined number of losses is obtained and averaged).

method (str, optional, default 'hyperopt'):
    Either 'sbi' or 'hyperopt'. If 'sbi' is used, the optimization is performed with sbi. If 'hyperopt' is used, the optimization is performed with hyperopt.

prior (distribution, optional, default None):
    The prior distribution used by sbi. If None, uniform distributions between the variable bounds are assumed.

fv_space (list, optional, default None):
    The search space for hyperopt. If None, uniform distributions between the variable bounds are assumed.

record (list, optional, default []):
    List of strings which define which variables of the tuned neuron should be recorded.
Source code in src/CompNeuroPy/opt_neuron.py
@check_types()
def __init__(
    self,
    experiment: Type[CompNeuroExp],
    get_loss_function: Callable[[Any, Any], float | list[float]],
    variables_bounds: dict[str, float | list[float]],
    neuron_model: Neuron,
    results_soll: Any | None = None,
    target_neuron_model: Neuron | None = None,
    time_step: float = 1.0,
    compile_folder_name: str = "annarchy_OptNeuron",
    num_rep_loss: int = 1,
    method: str = "hyperopt",
    prior=None,
    fv_space: list = None,
    record: list[str] = [],
):
    """
    This prepares the optimization. To run the optimization call the run function.

    Args:
        experiment (CompNeuroExp class):
            CompNeuroExp class containing a 'run' function which defines the
            simulations and recordings

        get_loss_function (function):
            function which takes results_ist and results_soll as arguments and
            calculates/returns the loss

        variables_bounds (dict):
            Dictionary with parameter names (keys) and their bounds (values). If
            single values are given as values, the parameter is constant, i.e., not
            optimized. If a list is given as value, the parameter is optimized and
            the list contains the lower and upper bound of the parameter (order is
            not important).

        neuron_model (ANNarchy Neuron):
            The neuron model whose parameters should be optimized.

        results_soll (Any, optional):
            Some variable which contains the target data and can be used by the
            get_loss_function (second argument of get_loss_function)
            !!! warning
                Either provide results_soll or a target_neuron_model, not both!
            Default: None.

        target_neuron_model (ANNarchy Neuron, optional):
            The neuron model which produces the target data by running the
            experiment.
            !!! warning
                Either provide results_soll or a target_neuron_model, not both!
            Default: None.

        time_step (float, optional):
            The time step for the simulation in ms. Default: 1.

        compile_folder_name (string, optional):
            The name of the annarchy compilation folder within annarchy_folders/.
            Default: 'annarchy_OptNeuron'.

        num_rep_loss (int, optional):
            Only interesting for noisy simulations/models. How often the
            simulation should be run to calculate the loss (the defined number of
            losses is obtained and averaged). Default: 1.

        method (str, optional):
            Either 'sbi' or 'hyperopt'. If 'sbi' is used, the optimization is
            performed with sbi. If 'hyperopt' is used, the optimization is
            performed with hyperopt. Default: 'hyperopt'.

        prior (distribution, optional):
            The prior distribution used by sbi. Default: None, i.e., uniform
            distributions between the variable bounds are assumed.

        fv_space (list, optional):
            The search space for hyperopt. Default: None, i.e., uniform
            distributions between the variable bounds are assumed.

        record (list, optional):
            List of strings which define what variables of the tuned neuron should
            be recorded. Default: [].
    """

    if len(self.opt_created) > 0:
        print(
            "OptNeuron: Error: Already another OptNeuron created. Only create one per python session!"
        )
        quit()
    else:
        print(
            "OptNeuron: Initialize OptNeuron... do not create anything with ANNarchy before!"
        )

        ### set object variables
        self.opt_created.append(1)
        self.record = record
        self.results_soll = results_soll
        self.variables_bounds = variables_bounds
        self.fitting_variables_name_list = self._get_fitting_variables_name_list()
        self.method = method
        if method == "hyperopt":
            if fv_space is None:
                self.fv_space = self._get_hyperopt_space()
            else:
                self.fv_space = fv_space
        self.const_params = self._get_const_params()
        self.num_rep_loss = num_rep_loss
        self.neuron_model = neuron_model
        if method == "sbi":
            self.prior = self._get_prior(prior)
        self.target_neuron = target_neuron_model
        self.compile_folder_name = compile_folder_name
        self.__get_loss__ = get_loss_function

        ### check target_neuron/results_soll
        self._check_target()
        ### check neuron models
        self._check_neuron_models()

        ### setup ANNarchy
        setup(dt=time_step)

        ### create and compile model
        ### if neuron models and target neuron model --> create both models then
        ### test, then clear and create only model for neuron model
        model, target_model, monitors = self._generate_models()

        self.pop = model.populations[0]
        if target_model is not None:
            self.pop_target = target_model.populations[0]
        else:
            self.pop_target = None
        ### create experiment with current monitors
        self.experiment = experiment(monitors=monitors)

        ### check variables of model
        self._test_variables()

        ### check neuron models, experiment, get_loss
        ### if results_soll is None --> generate results_soll
        self._check_get_loss_function()

        ### after checking neuron models, experiment, get_loss
        ### if two models exist --> clear ANNarchy and create/compile again only
        ### standard model, thus recreate also monitors and experiment
        clear()
        model, _, monitors = self._generate_models()
        self.monitors = monitors
        self.experiment = experiment(monitors=monitors)
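
As an illustration of the alternative initialization path checked above, one can provide a target_neuron_model instead of results_soll. A minimal sketch (my_exp, get_loss, variables_bounds, my_neuron, and my_target_neuron are hypothetical placeholders):

opt = OptNeuron(
    experiment=my_exp,
    get_loss_function=get_loss,
    variables_bounds=variables_bounds,
    neuron_model=my_neuron,
    target_neuron_model=my_target_neuron,  # instead of results_soll
    method="hyperopt",
    record=["r"],
)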

run(max_evals, results_file_name='best', sbi_plot_file='posterior.svg') #

Runs the optimization.

Parameters:

max_evals (int, required):
    number of runs the optimization method performs.

results_file_name (str, optional, default "best"):
    name of the file which is saved. The file contains the optimized and target results, the obtained parameters, the loss, and the SD of the loss (in case of noisy models with multiple runs per loss calculation).

sbi_plot_file (str, optional, default "posterior.svg"):
    If you use "sbi": the name of the figure which will be saved and shows the posterior.

Returns:

best (dict):
    dictionary containing the optimized parameters (as keys) and:

      • "loss": the loss
      • "all_loss": the individual losses of the get_loss_function
      • "std": the SD of the loss (in case of noisy models with multiple runs per loss calculation)
      • "results": the results generated by the experiment
      • "results_soll": the target results
Source code in src/CompNeuroPy/opt_neuron.py
@check_types()
def run(
    self,
    max_evals: int,
    results_file_name: str = "best",
    sbi_plot_file: str = "posterior.svg",
):
    """
    Runs the optimization.

    Args:
        max_evals (int):
            number of runs the optimization method performs

        results_file_name (str, optional):
            name of the file which is saved. The file contains the optimized and
            target results, the obtained parameters, the loss, and the SD of the
            loss (in case of noisy models with multiple runs per loss calculation)
            Default: "best".

        sbi_plot_file (str, optional):
            If you use "sbi": the name of the figure which will be saved and shows
            the posterior. Default: "posterior.svg".

    Returns:
        best (dict):
            dictionary containing the optimized parameters (as keys) and:

            - "loss": the loss
            - "all_loss": the individual losses of the get_loss_function
            - "std": the SD of the loss (in case of noisy models with multiple
                runs per loss calculation)
            - "results": the results generated by the experiment
            - "results_soll": the target results
    """
    if self.method == "hyperopt":
        ### run optimization with hyperopt and return best dict
        best = fmin(
            fn=self._run_simulator,
            space=self.fv_space,
            algo=tpe.suggest,
            max_evals=max_evals,
        )
    elif self.method == "sbi":
        ### run optimization with sbi and return best dict
        best = self._run_with_sbi(max_evals, sbi_plot_file)
    else:
        print("ERROR run; method should be 'hyperopt' or 'sbi'")
        quit()
    fit = self._test_fit(best)
    best["loss"] = fit["loss"]
    if self.method == "sbi":
        print("\tbest loss:", best["loss"])
    best["all_loss"] = fit["all_loss"]
    best["std"] = fit["std"]
    best["results"] = fit["results"]
    best["results_soll"] = self.results_soll
    self.results = best

    ### SAVE OPTIMIZED PARAMS AND LOSS
    sf.save_variables([best], [results_file_name], "parameter_fit")

    return best