A Simple Integrator

Purpose: This demo implements a one-dimensional neural integrator.

Comments: This is the first example of a recurrent network in the demos. It shows how neurons can be used to implement stable dynamics. Such dynamics are important for memory, noise cleanup, statistical inference, and many other dynamic transformations.

Usage: When you run this demo, it automatically applies a series of step functions to the input, so you can see that the output is integrating (i.e. summing over time) the input. You can also supply your own input values. Note that because the integrator continually sums its input, it will saturate quickly if you leave the input non-zero. This is a reminder that neurons have a finite range of representation. Such saturation effects can be exploited to perform useful computations (e.g. soft normalization).
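
To make the saturation point concrete, the short sketch below (plain numpy, not part of the nef scripting API) computes the ideal integral of an input held constant at 5, assuming the effective dynamics are dx/dt = u(t) as set up by the connection weights in the code further down, and assuming the population's default representational radius of 1. The ideal value leaves that range after about 0.2 s, which is where the neural output would flatten out.

import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
u = np.full_like(t, 5.0)          # input held at 5 instead of stepping back to 0

ideal = np.cumsum(u) * dt         # ideal integral of the input: 5*t
print(t[np.argmax(ideal > 1.0)])  # ~0.2 s: beyond this the ideal value exceeds the
                                  # assumed radius of 1, so the neurons saturate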

Output: See the screen capture below

(Screen capture: ../_images/integrator.png)
Code:
import nef

net=nef.Network('Integrator') #Create the network object
net.make_input('input',{0.2:5, 0.3:0, 0.44:-10,
                            0.54:0, 0.8:5, 0.9:0} )  #Create a controllable input 
                                                     #function with a default function
                                                     #that goes to 5 at time 0.2s, to 0 
                                                     #at time 0.3s and so on
                                                     
net.make('A',100,1,quick=True) #Make a population with 100 neurons, 1 dimension

net.connect('input','A',weight=0.1,pstc=0.1) #Connect the input to the integrator,
                                             #scaling the input by 0.1; the postsynaptic
                                             #time constant is 100ms
net.connect('A','A',pstc=0.1) #Connect the population to itself with the 
                              #default weight of 1
net.add_to_nengo()
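
Why these particular weights give an integrator: with a first-order synapse of time constant tau, the standard NEF recipe for implementing dx/dt = A*x + B*u is to put tau*A + I on the recurrent connection and tau*B on the input connection. For a pure integrator (A = 0, B = 1) this means a recurrent weight of 1 and an input weight of tau = 0.1, exactly the values used above. The stand-alone sketch below (plain numpy; not part of the nef API, and only an idealized, noise-free model of the population) steps that filtered recurrence forward with the demo's default input and reproduces the shape of the screen capture: the value ramps to about 0.5, drops to about -0.5, and returns to 0.

import numpy as np

tau = 0.1            # synaptic time constant, matching pstc=0.1 above
dt = 0.001           # simulation step
t = np.arange(0.0, 1.0, dt)

# The demo's default step input: 5 at 0.2s, 0 at 0.3s, -10 at 0.44s, ...
u = np.zeros_like(t)
u[(t >= 0.2) & (t < 0.3)] = 5
u[(t >= 0.44) & (t < 0.54)] = -10
u[(t >= 0.8) & (t < 0.9)] = 5

x = np.zeros_like(t)
for k in range(len(t) - 1):
    drive = 1.0 * x[k] + tau * u[k]                # recurrent weight 1, input weight tau
    x[k + 1] = x[k] + (dt / tau) * (drive - x[k])  # first-order synaptic filter

# x ramps to ~0.5, falls to ~-0.5, and ends near 0, as in the screen capture.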
