Brian2GeNN documentation

Contents:

Using Brian2GeNN

Brian supports generating standalone code for multiple devices. In this mode, running a Brian script generates source code in a project tree for the target device/language. This code can then be compiled and run on the device, and modified if needed. The Brian2GeNN package provides such a ‘device’ to run Brian 2 code on the GeNN (GPU enhanced Neuronal Networks) backend. GeNN is itself a code-generation-based framework to generate and execute code for NVIDIA CUDA. Through Brian2GeNN one can hence generate and run CUDA code on NVIDIA GPUs based solely on Brian 2 input.

Installing the Brian2GeNN interface

In order to use the Brian2GeNN interface, all three components need to be fully installed: Brian 2, GeNN and Brian2GeNN. To install GeNN and Brian 2, refer to their respective documentation:

Note that you will also have to install the CUDA toolkit and driver necessary to run simulations on an NVIDIA graphics card. These have to be installed manually, e.g. from NVIDIA’s web site (you can always run simulations in the “CPU-only” mode, but that of course defeats the main purpose of Brian2GeNN…). Depending on the installation method, you might also have to manually set the CUDA_PATH environment variable (or, alternatively, the devices.genn.cuda_path preference) to point to CUDA’s installation directory.

To install brian2genn, use pip:

pip install brian2genn

(might require administrator privileges depending on the configuration of your system; add --user to force an installation with user privileges only).

As detailed in the GeNN installation instructions, you need to set the GENN_PATH environment variable to the GeNN installation directory. Alternatively, you can set the devices.genn.path preference to the same effect.

Note

We no longer provide conda packages for Brian2GeNN. Conda packages for previous versions of Brian2GeNN have been tagged with the archive label and are still available in the brian-team channel.

Using the Brian2GeNN interface

To use the interface, you then need to import the brian2genn package:

import brian2genn

Then you need to choose the ‘genn’ device at the beginning of the Brian 2 script, i.e. after the import statements, add:

set_device('genn')

When the first run statement is encountered (Brian2GeNN currently supports only a single run statement per script), the code for GeNN will be generated, compiled and executed.

The set_device function can also take additional arguments, e.g. to run GeNN in its “CPU-only” mode and to get additional debugging output, use:

set_device('genn', useGPU=False, debug=True)

Not all features of Brian work with Brian2GeNN. The current list of excluded features is detailed in Unsupported features in Brian2GeNN.

Unsupported features in Brian2GeNN

Summed variables

Summed variables are currently not supported in GeNN due to the cross-population nature of this feature. However, a simple form of summed variable is supported and intrinsic to GeNN: the action of ‘pre’ code in a Synapses definition onto a pre-synaptic variable. The allowed interaction is summing onto one pre-synaptic variable from each Synapses group.

Linked variables

Linked variables create a communication overhead that is problematic in GeNN. They are therefore not supported at the moment. In principle, support for this feature could be added, but in the meantime we suggest avoiding linked variables by combining the groups that are linked. For example, the script

from brian2 import *
import brian2genn
set_device('genn')

# Common deterministic input
N = 25
tau_input = 5*ms
input = NeuronGroup(N, 'dx/dt = -x / tau_input + sin(0.1*t/ tau_input) : 1')

# The noisy neurons receiving the same input
tau = 10*ms
sigma = .015
eqs_neurons = '''
dx/dt = (0.9 + .5 * I - x) / tau + sigma * (2 / tau)**.5 * xi : 1
I : 1 (linked)
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold='x > 1',
                      reset='x = 0', refractory=5*ms)
neurons.x = 'rand()'
neurons.I = linked_var(input, 'x') # input.x is continuously fed into neurons.I
spikes = SpikeMonitor(neurons)

run(500*ms)

could be replaced by

from brian2 import *
import brian2genn
set_device('genn')

N = 25
tau_input = 5*ms

# Noisy neurons receiving the same deterministic input
tau = 10*ms
sigma = .015
eqs_neurons = '''
dI/dt = -I / tau_input + sin(0.1*t/ tau_input) : 1
dx/dt = (0.9 + .5 * I - x) / tau + sigma * (2 / tau)**.5 * xi : 1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold='x > 1',
                      reset='x = 0', refractory=5*ms)
neurons.x = 'rand()'
spikes = SpikeMonitor(neurons)

run(500*ms)

In this second solution, the variable I is calculated multiple times within the ‘noisy neurons’ group, which in a sense is unnecessary computational overhead. However, on massively parallel GPU accelerators this is not necessarily a problem. Note that this method only works where the common input is deterministic. If the input had been:

input = NeuronGroup(1, 'dx/dt = -x / tau_input + (2 /tau_input)**.5 * xi : 1')

i.e. had contained a random element, then moving the common input into the ‘noisy neuron’ population would have turned it into individual, independent noisy inputs, with likely quite different results.

Custom events

GeNN does not support custom event types in addition to the standard threshold and reset; they can therefore not be used with the Brian2GeNN backend.

Heterogeneous delays

At the moment, GeNN only has support for a single homogeneous delay for each synaptic population. Brian simulations that use heterogeneous delays can therefore not use the Brian2GeNN backend. In simple cases with just a few different delay values (e.g. one set of connections with a short and another set of connections with a long delay), this limitation can be worked around by creating multiple Synapses objects with each using a homogeneous delay.

Multiple synaptic pathways

GeNN does not have support for multiple synaptic pathways as Brian 2 does; you can therefore only use a single pre and post pathway with Brian2GeNN.

Timed arrays

Timed arrays pose a problem in the Brian2GeNN interface because they necessitate communication from the timed array to the target group at runtime, which would result in host-to-GPU copies in the final CUDA/C++ code. This could lead to large inefficiencies; the use of TimedArray is therefore currently restricted to code in run_regularly operations, which are executed on the CPU.

Multiple clocks

GeNN is by design operated with a single clock with a fixed time step across the entire simulation. If you are using multiple clocks and they are commensurate, please reformulate your script using just the fastest clock as the standard clock. If your clocks are not commensurate, and this is essential for your simulation, Brian2GeNN can unfortunately not be used.

Multiple runs

GeNN is designed for single runs and cannot be used for Brian-style multiple runs. However, if this is needed, code can be run repeatedly “in multiple runs” that are completely independent. This just needs device.reinit() and device.activate() to be issued after the run(runtime) command.

Note, however, that these multiple runs are completely independent, i.e. for the second run the code generation pipeline for Brian2GeNN is repeated in its entirety which may incur a measurable delay.

Multiple networks

Multiple networks cannot be supported in the Brian2GeNN interface. Please use only a single network, either by creating it explicitly as a Network object or by not creating any (i.e. using Brian’s “magic” system).

Custom schedules

GeNN has a fixed order of operations during a time step, Brian’s more flexible scheduling model (e.g. changing a network’s schedule or individual objects’ when attribute) can therefore not be used.

Brian2GeNN specific preferences

Connectivity

The preference devices.genn.connectivity determines what connectivity scheme is used within GeNN to represent the connections between neurons. GeNN supports the use of full connectivity matrices (‘DENSE’) or a representation where connections are represented with sparse matrix methods (‘SPARSE’). You can set the preference like this:

from brian2 import *
import brian2genn
set_device('genn')

prefs.devices.genn.connectivity = 'DENSE'

Compiler preferences

Brian2GeNN will use the compiler preferences specified for Brian 2 for the C++ compiler call. This means you should set the codegen.cpp.extra_compile_args preference, or set codegen.cpp.extra_compile_args_gcc and codegen.cpp.extra_compile_args_msvc to set preferences specifically for compilation under Linux/OS X and Windows, respectively.

Brian2GeNN also offers a preference to specify additional compiler flags for the CUDA compilation with the nvcc compiler: devices.genn.extra_compile_args_nvcc.

Note that all of the above preferences expect a Python list of individual compiler arguments. For example, to add an argument for the nvcc compiler, use:

prefs.devices.genn.extra_compile_args_nvcc += ['--verbose']

On Windows, Brian2GeNN will try to find the file vcvarsall.bat to enable compilation with the MSVC compiler automatically. If this fails, or if you have multiple versions of MSVC installed and want to select a specific one, you can set the codegen.cpp.msvc_vars_location preference.

List of preferences

Preferences that relate to the brian2genn interface

devices.genn.auto_choose_device = True
The GeNN preference autoChooseDevice that determines whether or not a GPU should be chosen automatically when multiple CUDA enabled devices are present.

devices.genn.connectivity = 'SPARSE'
This preference determines which connectivity scheme is to be employed within GeNN. The valid alternatives are ‘DENSE’ and ‘SPARSE’. For ‘DENSE’ the GeNN dense matrix methods are used for all connectivity matrices. When ‘SPARSE’ is chosen, the GeNN sparse matrix representations are used.
devices.genn.cuda_path = None
The path to the CUDA installation (if not set, the CUDA_PATH environment variable will be used instead)
devices.genn.default_device = 0
The GeNN preference defaultDevice that determines which CUDA-enabled device should be used if it is not chosen automatically.
devices.genn.extra_compile_args_nvcc = ['-O3']
Extra compile arguments (a list of strings) to pass to the nvcc compiler.
devices.genn.init_blocksize = 32
The GeNN preference initBlockSize that determines the CUDA block size for the initialization kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.init_sparse_blocksize = 32
The GeNN preference initSparseBlockSize that determines the CUDA block size for the sparse initialization kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.kernel_timing = False
This preference determines whether GeNN should record kernel runtimes; note that this can affect performance.
devices.genn.learning_blocksize = 32
The GeNN preference learningBlockSize that determines the CUDA block size for the learning kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.neuron_blocksize = 32
The GeNN preference neuronBlockSize that determines the CUDA block size for the neuron kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.optimise_blocksize = True
The GeNN preference optimiseBlockSize that determines whether GeNN should use its internal algorithms to optimise the different block sizes.
devices.genn.path = None
The path to the GeNN installation (if not set, the GENN_PATH environment variable will be used instead)
devices.genn.pre_synapse_reset_blocksize = 32
The GeNN preference preSynapseResetBlockSize that determines the CUDA block size for the pre-synapse reset kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.synapse_blocksize = 32
The GeNN preference synapseBlockSize that determines the CUDA block size for the synapse kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.synapse_dynamics_blocksize = 32
The GeNN preference synapseDynamicsBlockSize that determines the CUDA block size for the synapse dynamics kernel if not set automatically by GeNN’s block size optimisation.
devices.genn.synapse_span_type = 'POSTSYNAPTIC'
This preference determines whether the spanType (parallelization mode) for a synapse population should be set to pre-synaptic or post-synaptic.

How Brian2GeNN works inside

The Brian2GeNN interface provides middleware to use the GeNN simulator framework as a backend for the Brian 2 simulator. It has been designed to make maximal use of the existing Brian 2 code base by deriving large parts of the generated code from the cpp_standalone device of Brian 2.

Model and user code in GeNN

In GeNN, a simulation is assembled from two main sources of code. Users of GeNN provide “code snippets” as C++ strings that define neuron and synapse models. These are then assembled into neuronal networks in a model definition function. Based on the model definition, GeNN generates GPU and equivalent CPU simulation code for the described network. This is the first source of code.

The actual simulation and the handling of input and output data are the responsibility of the user in GeNN. Users provide their own C/C++ code for this, which utilizes the generated code described above for the core simulation but is otherwise fully independent of the core GeNN system.

In Brian2GeNN, both the model definition and the user code for the main simulation are derived from the Brian 2 model description. The user-side code for data handling etc. derives more or less directly from the Brian 2 cpp_standalone device in the form of GeNNUserCodeObjects. The model definition code and “code snippets” derive from separate templates and are encapsulated in GeNNCodeObjects.

Code generation pipeline in Brian2GeNN

The model generation pipeline in Brian2GeNN involves a number of steps. First, Brian 2 performs the usual interpretation of equations and unit checking, as well as applying an integration scheme to the ODEs. The resulting abstract code is then translated into C++ code for GeNNUserCodeObjects and C++-like code for GeNNCodeObjects. These are then assembled, using Jinja2 templating, into C++ code and GeNN model definition code. The details of making Brian 2’s cpp_standalone code suitable for the GeNN user code, GeNN model definition code and code snippets are taken care of in the GeNNDevice.build function.
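As a toy illustration of the templating step (this is not an actual Brian2GeNN template; the C++ skeleton and all names are invented), Jinja2 substitutes generated code fragments into a source-file skeleton:

```python
from jinja2 import Template

# Toy illustration of the Jinja2 templating step. This is NOT an actual
# Brian2GeNN template -- the skeleton and all names here are invented.
template = Template('''\
void modelDefinition(NNmodel &model)
{
    model.setName("{{ model_name }}");
    // generated state-update snippet:
    // {{ update_code }}
}
''')

source = template.render(model_name='magicnetwork_model',
                         update_code='v += DT * (-v / tau);')
print(source)
```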

Once all the sources have been generated, the resulting GeNN project is built with the GeNN code generation pipeline. See the GeNN manual for more details on this process.

Templates in Brian2GeNN

The templates used for code generation in Brian2GeNN, as mentioned above, partially derive from the cpp_standalone templates of Brian 2. More than half of the templates are identical. Other templates, however, in particular those for the model definition file and for the main simulation engine and entry file “runner.cc”, have been specifically written for Brian2GeNN to produce a valid GeNN project.

Data transfers and results

In Brian 2, data structures for initial values, synaptic connectivity etc. are written to disk as binary files if a standalone device is used. The executable of the standalone device then reads the data from disk and initializes its variables with it. Brian2GeNN uses the same mechanism; after the data has been read from disk with the native cpp_standalone methods, there is a translation step, in which Brian2GeNN provides code that translates the data from cpp_standalone arrays into the appropriate GeNN data structures. The methods for this process are provided in the static (not code-generated) “b2glib”.

At the end of a simulation, the inverse process takes place and the GeNN data is transferred back into cpp_standalone arrays. Native Brian 2 cpp_standalone code is then invoked to write the data back to disk.

If monitors are used, this translation occurs every time the monitors are updated.

Memory usage

As a consequence of the data flows described above, the host memory used in a Brian2GeNN run is about twice what would have been used in a native Brian 2 cpp_standalone implementation, because all data is held in two different formats: as cpp_standalone arrays and as GeNN data structures.

brian2genn package


binomial module

Implementation of BinomialFunction

codeobject module

Brian2GeNN defines two different types of code objects, GeNNCodeObject and GeNNUserCodeObject. GeNNCodeObject is the class of code objects that produce code snippets for GeNN neuron or synapse models. GeNNUserCodeObject is the class of code objects that produce C++ code which is used as “user-side” code in GeNN. The class derives directly from Brian 2’s CPPStandaloneCodeObject, using the CPPCodeGenerator.

Exported members: GeNNCodeObject, GeNNUserCodeObject

Classes

GeNNCodeObject(owner, code, variables, …) Class of code objects that generate GeNN “code snippets”
GeNNUserCodeObject(owner, code, variables, …) Class of code objects that generate GeNN “user code”

correctness_testing module

Definitions of the configuration for correctness testing.

Exported members: GeNNConfiguration, GeNNConfigurationCPU, GeNNConfigurationOptimized

Classes

GeNNConfiguration([maximum_run_time])

Methods

GeNNConfigurationCPU([maximum_run_time])

Methods

GeNNConfigurationOptimized([maximum_run_time])

Methods

device module

Module implementing the bulk of the brian2genn interface by defining the “genn” device.

Exported members: GeNNDevice

Classes

CPPWriter(project_dir) Class that provides the method for writing C++ files from a string of code.
DelayedCodeObject(owner, name, …) Dummy class used for delaying the CodeObject creation of stateupdater, thresholder, and resetter of a NeuronGroup (which will all be merged into a single code object).
GeNNDevice() The main “genn” device.
neuronModel() Class that contains all relevant information of a neuron model.
rateMonitorModel() Class that contains all relevant information about a rate monitor.
spikeMonitorModel() Class that contains all relevant information about a spike monitor.
spikegeneratorModel() Class that contains all relevant information of a spike generator group.
stateMonitorModel() Class that contains all relevant information about a state monitor.
synapseModel() Class that contains all relevant information about a synapse model.

Functions

decorate(code, variables, shared_variables, …) Support function for inserting GeNN-specific “decorations” for variables and parameters, such as $(.).
extract_source_variables(variables, varname, …) Support function to extract the “atomic” variables used in a variable that is of instance Subexpression.
freeze(code, ns) Support function for substituting constant values.
get_compile_args() Get the compile args based on the users preferences.
stringify(code) Helper function to prepare multiline strings (potentially including quotation marks) to be included in strings.
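To illustrate what freeze and decorate do, here is a minimal reimplementation sketch (illustrative only, not the actual Brian2GeNN code):

```python
import re

def freeze_sketch(code, ns):
    # Sketch of `freeze`: substitute literal constant values for names
    # (illustrative only -- not Brian2GeNN's actual implementation).
    for name, value in ns.items():
        code = re.sub(r'\b%s\b' % re.escape(name), repr(value), code)
    return code

def decorate_sketch(code, variables):
    # Sketch of `decorate`: wrap variable names in GeNN's $(...) syntax
    # (illustrative only).
    for name in variables:
        code = re.sub(r'\b%s\b' % re.escape(name), '$(%s)' % name, code)
    return code

code = freeze_sketch('v += dt * (-v / tau)', {'tau': 0.01, 'dt': 0.0001})
code = decorate_sketch(code, ['v'])
print(code)
```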

Objects

genn_device The main “genn” device.

genn_generator module

The code generator for the “genn” language. This is mostly C++ with some specific decorators (mainly “__host__ __device__”) to allow operation in a CUDA context.

Exported members: GeNNCodeGenerator

Classes

GeNNCodeGenerator(*args, **kwds) “GeNN language”

Functions

get_var_ndim(v[, default_value]) Helper function to get the ndim attribute of a DynamicArrayVariable, falling back to the previous name dimensions if necessary.

insyn module

GeNN accumulates postsynaptic changes into a variable inSyn. The idea of this module is to check, for a given Synapses, whether or not it can be recast into this formulation, and if so to relabel the variables appropriately.

In GeNN, each synapses object has an associated variable inSyn. The idea is that, in Brian terms, code like this:

v += w          (synapses code)
dv/dt = -v/tau  (neuron code)

should be replaced by:

inSyn += w              (synapses code)
dv/dt = -v/tau          (neuron code)
v += inSyn; inSyn = 0;  (custom operation carried out after integration step)

The reason behind this organisation in GeNN is that the communication of spike events and the corresponding updates of post-synaptic variables are separated out for better performance. In principle, all kinds of operations on the pre- and post-synaptic variables could be allowed, but at a heavy cost in computational speed.

The conditions for this rewrite to be possible are as follows. For the presynaptic event code:

  • Each expression is allowed to modify synaptic variables.
  • An expression can modify a neuron variable only in the following ways:

    neuron_var += expr  (where expr contains only synaptic variables)
    neuron_var = expr   (where expr - neuron_var can be simplified to contain only synaptic variables)

  • The set of modified neuron variables can only have one element.

For the postsynaptic code, only synaptic variables can be modified.

The output of this code should be:

  • Raise an error if the rewrite is not possible, explaining why.
  • Replace the line neuron_var (+)= expr with addtoinSyn = new_expr.
  • Return neuron_var so that it can be used appropriately in GeNNDevice.build.

The GeNN syntax is:

addtoinSyn = expr

Brian codegen implementation:

I think the correct place to start is, given a Statement sequence for a Synapses pre or post code object, to check the conditions. Then, we need to create two additional CodeObjects which overwrite translate_one_statement_sequence to call this function and rewrite the appropriate statement.
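The check-and-rewrite logic can be sketched as follows (illustrative only, not the actual check_pre_code implementation; statements are simplified to (variable, operation, expression) triples and the neuron_var = expr form is omitted for brevity):

```python
import re

def rewrite_pre_code(statements, neuron_vars, synaptic_vars):
    # Illustrative sketch of the GeNN pre-code check/rewrite (not the
    # actual check_pre_code; only the `var += expr` form is handled).
    modified = set()
    rewritten = []
    for var, op, expr in statements:
        if var in neuron_vars:
            if op != '+=':
                raise ValueError('unsupported operation on neuron variable: ' + op)
            names = set(re.findall(r'[A-Za-z_]\w*', expr))
            if not names <= synaptic_vars:
                raise ValueError('expression may only use synaptic variables')
            modified.add(var)
            # the GeNN formulation: accumulate into inSyn instead
            rewritten.append(('addtoinSyn', '=', expr))
        else:
            rewritten.append((var, op, expr))
    if len(modified) > 1:
        raise ValueError('only one neuron variable may be modified')
    return rewritten, (modified.pop() if modified else None)

stmts, target = rewrite_pre_code([('v', '+=', 'w')],
                                 neuron_vars={'v'}, synaptic_vars={'w'})
```

For the example above, stmts becomes [('addtoinSyn', '=', 'w')] and target is 'v', which can then be used for the post-integration v += inSyn; inSyn = 0; step.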

Functions

check_pre_code(codegen, stmts, vars_pre, …) Given a set of statements stmts where the variables names in vars_pre are presynaptic, in vars_syn are synaptic and in vars_post are postsynaptic, check that the conditions for compatibility with GeNN are met, and return a new statement sequence translated for compatibility with GeNN, along with the name of the targeted variable.

preferences module

Preferences that relate to the brian2genn interface.
