bio_transformations package

Overview

The bio_transformations package implements biologically inspired modifications to artificial neural networks. It enhances the learning capabilities of neural networks by mimicking the plasticity and stability characteristics observed in biological synapses.

Key Components

  1. BioConverter

    The main interface for converting standard PyTorch modules to bio-inspired versions.

  2. BioModule

    The core class implementing biologically inspired modifications.

Submodules

bio_transformations.bio_converter module

class bio_transformations.bio_converter.BioConverter(config=(Identity(), 0.16, 0.6, 4.5e-05, 8.0, 0, False, 0.1, 2.0, 0.2), **kwargs)[source]

Bases: object

A utility class to convert standard PyTorch modules to BioNet modules with bio-inspired modifications.

This class implements modifications inspired by dendritic spine dynamics observed in our research, potentially enhancing the learning and adaptability of artificial neural networks.

__call__(module_class)[source]

Makes the BioConverter callable, allowing for convenient conversion of module classes.

Return type:

Type[Module]

static automark_last_module_for_weight_split_skip(model)[source]

Automatically marks the last module of a model to skip weight splitting.

convert(module_class_or_instance)[source]

Converts a given module class or instance by adding bio-inspired modifications.

Parameters:

module_class_or_instance (Type[Module] | Module) – The module class or instance to convert.

Returns:

The converted module class or instance.

Return type:

Type[Module] | Module
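The class-or-instance dispatch that convert() describes can be illustrated with a toy stand-in. This is a minimal sketch of the pattern only, not the package's implementation; the BioWrapped name and the _bio_converted marker attribute are invented for illustration.

```python
import torch.nn as nn

def convert(module_class_or_instance):
    """Toy sketch of class-or-instance dispatch (not the real BioConverter).

    Tags the class (or the instance) with a marker attribute; the real
    converter attaches the full set of bio-inspired modifications instead.
    """
    if isinstance(module_class_or_instance, type):
        # Called with a class: return a tagged subclass.
        class BioWrapped(module_class_or_instance):
            _bio_converted = True
        BioWrapped.__name__ = "Bio" + module_class_or_instance.__name__
        return BioWrapped
    # Called with an instance: tag it in place and return it.
    module_class_or_instance._bio_converted = True
    return module_class_or_instance

BioLinear = convert(nn.Linear)       # class in, class out
layer = convert(nn.Linear(4, 2))     # instance in, instance out
```

Accepting either a class or an instance lets users convert a whole architecture up front or patch an already-built model.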

classmethod from_dict(config_dict)[source]

Creates a BioConverter instance from a dictionary of parameters.

Parameters:

config_dict (dict) – Dictionary of parameter names and values.

Returns:

A BioConverter instance with the specified parameters.

Return type:

BioConverter
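A from_dict classmethod of this shape is commonly backed by a config dataclass. The sketch below shows the pattern with a toy config; the field names (fuzzy_lr_factor, crystal_thresh) are placeholders and do not reflect the real BioConfig schema.

```python
from dataclasses import dataclass, fields

@dataclass
class ToyConfig:
    # Placeholder fields -- the real BioConfig uses different names.
    fuzzy_lr_factor: float = 0.16
    crystal_thresh: float = 0.6

class ToyConverter:
    def __init__(self, config=None):
        self.config = config if config is not None else ToyConfig()

    @classmethod
    def from_dict(cls, config_dict):
        # Reject unknown keys instead of silently ignoring them.
        valid = {f.name for f in fields(ToyConfig)}
        unknown = set(config_dict) - valid
        if unknown:
            raise ValueError(f"unknown config keys: {unknown}")
        return cls(ToyConfig(**config_dict))

conv = ToyConverter.from_dict({"fuzzy_lr_factor": 0.3})
```

Unspecified keys fall back to the dataclass defaults, which matches the documented behavior of building "a BioConverter instance with the specified parameters".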

get_config()[source]

Returns the current configuration of the BioConverter.

Returns:

The current BioConfig object.

Return type:

BioConfig

static mark_skip_weight_splitting(module)[source]

Marks a module to skip weight_splitting.

Parameters:

module (Module) – The module to mark.

Returns:

The marked module.

Return type:

Module
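A plausible way to implement such a marker is a flag attribute that the converter checks later. This is a sketch under that assumption; the attribute name skip_weight_splitting is invented, not the package's actual internal flag.

```python
import torch.nn as nn

def mark_skip_weight_splitting(module):
    # Assumed mechanism: set a flag the converter inspects during conversion.
    module.skip_weight_splitting = True  # attribute name is an assumption
    return module

model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 2))
# A typical use: exempt the output layer from weight splitting.
mark_skip_weight_splitting(model[1])
```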

update_config(**kwargs)[source]

Updates the configuration of the BioConverter.

Parameters:

**kwargs (Any) – Keyword arguments to update in the configuration.
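A keyword-based update over an immutable config is often implemented with dataclasses.replace, which rejects unknown field names for free. This sketch uses placeholder field names, not the real BioConfig schema.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ToyConfig:
    # Placeholder fields -- not the real BioConfig names.
    fuzzy_lr_factor: float = 0.16
    crystal_thresh: float = 0.6

class ToyConverter:
    def __init__(self):
        self.config = ToyConfig()

    def update_config(self, **kwargs):
        # replace() raises TypeError for field names that don't exist,
        # so typos in keyword arguments fail loudly.
        self.config = replace(self.config, **kwargs)

conv = ToyConverter()
conv.update_config(crystal_thresh=0.9)
```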

The BioConverter class is responsible for converting standard PyTorch modules to BioNet modules with bio-inspired modifications. It provides methods for:

  • Converting entire model architectures

  • Configuring bio-inspired parameters

  • Applying weight splitting techniques

bio_transformations.bio_module module

class bio_transformations.bio_module.BioModule(parent, config=(Identity(), 0.16, 0.6, 4.5e-05, 8.0, 0, False, 0.1, 2.0, 0.2))[source]

Bases: Module

A module that provides bio-inspired modifications to standard PyTorch modules.

This module implements various biologically-inspired learning mechanisms, including volume-dependent learning rates, fuzzy learning rates, and weight crystallization.

crystallize()[source]

Crystallizes the weights of the parent module by adjusting gradient scaling.

This process mimics the biological phenomenon of synaptic stabilization, where frequently used synapses become less plastic over time.
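One way to realize "adjusting gradient scaling" is a per-weight plasticity buffer applied through a gradient hook, shrunk on every crystallize call. This is a sketch of the idea, not the package's implementation; the shrink factor and buffer name are assumptions.

```python
import torch
import torch.nn as nn

def crystallize(layer, shrink=0.5):
    """Sketch: damp future gradient updates (synaptic stabilization).

    Keeps a per-weight plasticity buffer; a backward hook multiplies
    incoming gradients by it. shrink=0.5 is an illustrative choice.
    """
    if not hasattr(layer, "plasticity"):
        layer.plasticity = torch.ones_like(layer.weight)
        # Hook reads layer.plasticity at backward time, so later calls
        # to crystallize() keep taking effect.
        layer.weight.register_hook(lambda g: g * layer.plasticity)
    layer.plasticity *= shrink  # repeated calls -> progressively stiffer weights

layer = nn.Linear(3, 3)
crystallize(layer)
crystallize(layer)                        # plasticity is now 0.25 everywhere
layer(torch.ones(1, 3)).sum().backward()  # raw weight grad would be all ones
```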

static dalian_network_initialization(module)[source]

Initializes the network weights according to Dale’s principle.

Dale’s principle states that neurons release the same neurotransmitters at all of their synapses, which is reflected here by enforcing consistent sign for all outgoing weights from each neuron.

Parameters:

module (Module) – The module to initialize.

enforce_dales_principle()[source]

Enforces Dale’s principle on the weights of the parent module.

This ensures that all outgoing weights from a neuron have the same sign, consistent with the biological principle that neurons release the same neurotransmitters at all of their synapses.
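The sign constraint can be sketched directly in PyTorch. For nn.Linear, column j of the weight matrix holds the outgoing weights of input neuron j, so forcing each column to one sign enforces Dale's principle for that layer. The per-neuron sign vector below is an assumed input; the real BioModule derives it internally.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def enforce_dales_principle(layer, neuron_signs):
    """Sketch: force all outgoing weights of each input neuron to one sign.

    neuron_signs is a +1/-1 vector marking excitatory/inhibitory neurons.
    """
    layer.weight.abs_()                 # magnitudes only
    layer.weight.mul_(neuron_signs)     # broadcasts the sign down each column

torch.manual_seed(0)
layer = nn.Linear(4, 3)
signs = torch.tensor([1.0, -1.0, 1.0, -1.0])
enforce_dales_principle(layer, signs)
```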

exposed_functions = ('rejuvenate_weights', 'crystallize', 'fuzzy_learning_rates', 'volume_dependent_lr')

fuzzy_learning_rates()[source]

Scales the gradients of the parent module with random values.

This method introduces stochasticity into the learning process, mimicking the variability observed in biological synaptic plasticity.
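"Scales the gradients with random values" can be sketched as a per-weight multiplicative jitter after backward(). The spread value and the uniform distribution are assumptions for illustration.

```python
import torch
import torch.nn as nn

def fuzzy_learning_rates(layer, spread=0.16):
    """Sketch: jitter each weight's gradient by a random per-weight factor.

    Factors are drawn uniformly from [1 - spread, 1 + spread); both the
    distribution and spread=0.16 are illustrative assumptions.
    """
    noise = 1.0 + spread * (2 * torch.rand_like(layer.weight) - 1)
    layer.weight.grad.mul_(noise)

torch.manual_seed(0)
layer = nn.Linear(3, 2)
layer(torch.ones(1, 3)).sum().backward()  # weight grad is all ones here
before = layer.weight.grad.clone()
fuzzy_learning_rates(layer)
```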

l1_reg()[source]

Computes the L1 regularization of the module’s parameters.

Returns:

The L1 regularization value.

Return type:

Tensor
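The computation itself is a one-liner over the module's trainable parameters; a minimal sketch:

```python
import torch
import torch.nn as nn

def l1_reg(module):
    """Sketch: sum of absolute values over all trainable parameters."""
    return sum(p.abs().sum() for p in module.parameters() if p.requires_grad)

layer = nn.Linear(2, 2, bias=False)
with torch.no_grad():
    layer.weight.copy_(torch.tensor([[1.0, -2.0], [3.0, -4.0]]))
# l1_reg(layer) is now 1 + 2 + 3 + 4 = 10
```

The returned tensor stays attached to the autograd graph, so it can be added directly to a training loss.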

rejuvenate_weights()[source]

Rejuvenates the weights of the parent module.

This function replaces weights below a certain threshold with random values, mimicking the biological process of synaptic pruning and regrowth.
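The threshold-and-reinitialize mechanism can be sketched as below. The threshold value and the re-initialization distribution are assumptions; the package's defaults may differ.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def rejuvenate_weights(layer, threshold=0.01):
    """Sketch: re-randomize near-zero weights (pruning followed by regrowth).

    threshold and the uniform re-init range are illustrative assumptions.
    """
    weak = layer.weight.abs() < threshold
    fresh = torch.empty_like(layer.weight).uniform_(-0.1, 0.1)
    layer.weight[weak] = fresh[weak]
    return int(weak.sum())              # number of weights rejuvenated

torch.manual_seed(0)
layer = nn.Linear(4, 4)
with torch.no_grad():
    layer.weight[0, 0] = 0.0            # plant a "dead" synapse
n = rejuvenate_weights(layer)
```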

volume_dependent_lr()[source]

Applies a volume-dependent learning rate to the weights of the parent module.

This method implements a biologically-inspired learning rate adjustment based on observations of dendritic spine dynamics. It reflects the following key findings:

  1. Larger weights (analogous to larger spines) are more stable and less plastic.

  2. Smaller weights (analogous to smaller spines) are more dynamic and plastic.

  3. There is significant variability in plasticity among weights of similar sizes.

  4. The relationship between weight size and plasticity is continuous, not discrete.
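Those findings can be captured by scaling each weight's gradient with a factor that decays continuously with weight magnitude, plus per-weight noise. The functional form and noise level below are assumptions for illustration, not the package's actual rule.

```python
import torch
import torch.nn as nn

def volume_dependent_lr(layer, noise=0.1):
    """Sketch: larger weights get smaller effective learning rates.

    Plasticity falls off continuously with |w| (findings 1, 2, 4) and is
    jittered per weight (finding 3). 1/(1+|w|) is an assumed form.
    """
    w = layer.weight.detach()
    plasticity = 1.0 / (1.0 + w.abs())                  # continuous decay
    plasticity *= 1.0 + noise * torch.randn_like(w)     # per-weight variability
    layer.weight.grad.mul_(plasticity.clamp(min=0.0))   # never reverse the sign

torch.manual_seed(0)
layer = nn.Linear(3, 2)
with torch.no_grad():
    layer.weight.fill_(3.0)             # uniformly "large spines"
layer(torch.ones(1, 3)).sum().backward()  # raw weight grad is all ones
volume_dependent_lr(layer)              # grads now scaled to roughly 0.25
```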

The BioModule class implements the core biologically inspired modifications. It includes the following key functions:

Synaptic Plasticity Functions

  • volume_dependent_lr(): Adjusts per-weight learning rates based on weight size, mirroring dendritic spine dynamics.

  • fuzzy_learning_rates(): Introduces stochastic variability into gradient updates.

Structural Plasticity Functions

  • rejuvenate_weights(): Replaces weights below a threshold with random values, mimicking synaptic pruning and regrowth.

Homeostatic Plasticity Functions

  • crystallize(): Stabilizes frequently used weights by damping their gradients.

  • scale_grad(): Implements synaptic scaling for overall network stability.

Additional Functions

  • enforce_dales_principle(): Constrains all outgoing weights of a neuron to share one sign.

  • l1_reg(): Computes L1 regularization over the module's parameters.

For detailed information on each function, please refer to the individual function documentation.

Extending BioModule

To add a new biologically motivated function:

  1. Implement the new function in the BioModule class.

  2. Add the function name to the exposed_functions list.

  3. Update __init__ methods if new parameters are required.

  4. Create a test case in test_biomodule.py.

  5. Update this documentation to include details about the new function.
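Steps 1-3 above can be sketched with a toy stand-in. The internals here do not mirror the real BioModule; dampen_gradients and dampen_factor are hypothetical names invented purely to show where each step lands.

```python
import torch
import torch.nn as nn

class ToyBioModule:
    # Step 2: register the new function name alongside the existing ones.
    exposed_functions = ('rejuvenate_weights', 'crystallize',
                         'fuzzy_learning_rates', 'volume_dependent_lr',
                         'dampen_gradients')

    def __init__(self, parent, dampen_factor=0.5):  # step 3: new parameter
        self.parent = parent
        self.dampen_factor = dampen_factor

    def dampen_gradients(self):                     # step 1: the new function
        """Hypothetical example: uniformly shrink all gradients."""
        for p in self.parent.parameters():
            if p.grad is not None:
                p.grad.mul_(self.dampen_factor)

layer = nn.Linear(2, 1)
bio = ToyBioModule(layer)
layer(torch.ones(1, 2)).sum().backward()  # raw weight grad is all ones
bio.dampen_gradients()                    # grads halved
```

Step 4 would then add a matching case to test_biomodule.py asserting the scaled gradients.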

Module contents

For a practical guide on using these components, please refer to the Tutorials section.