Purpose: This is the first demo showing learning in Nengo. It learns the same circuit constructed in the Communication Channel demo.
Comments: The particular connection that is learned is the one between the ‘pre’ and ‘post’ populations. This particular learning rule is a kind of modulated Hebb-like learning (see Bekolay, 2011 for details).
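The core of such a modulated Hebb-like rule can be sketched in a few lines. The sketch below is illustrative only (sizes, values, and variable names are made up; it is not the actual Nengo implementation): each weight changes in proportion to the presynaptic activity times the error component that modulates its postsynaptic neuron, so a zero error leaves every weight unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes are illustrative): 5 'pre' neurons, 3 'post' neurons
weights = rng.normal(scale=0.1, size=(3, 5))  # connection weights w[j, i]
pre_activity = rng.random(5)                  # presynaptic firing rates a_i
error = np.array([0.2, -0.1, 0.05])           # modulatory error signal e_j
rate = 5e-4                                   # learning rate, as in the demo

w0 = weights.copy()
# Modulated Hebb-like step: delta w[j, i] = -rate * e_j * a_i.
# If error is all zeros, the outer product is zero and nothing changes.
weights -= rate * np.outer(error, pre_activity)
```

This also explains the behaviour described below: gating the error signal to zero freezes the weights without having to switch the rule itself off.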
Note: The red and blue graph plots the connection weights, which you can watch change as learning occurs (you may need to zoom in with the scroll wheel; the 'learning a square' demo shows this well). Typically, the largest changes occur at the beginning of a simulation. Red indicates negative weights and blue indicates positive weights.
Usage: When you run the network, a random white noise input is automatically injected into it, so the input slider moves up and down randomly. However, learning starts turned off, so there is little correlation between the representations of the 'pre' and 'post' populations.
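To give a feel for what `make_fourier_input` produces, here is a rough stand-in (the exact Nengo implementation differs; the function name and details below are assumptions): a sum of sinusoids at multiples of a base frequency, up to a cutoff, with random amplitudes and phases, rescaled to a target RMS.

```python
import numpy as np

def fourier_noise(t, base=0.1, high=10.0, rms=0.5, seed=12):
    """Band-limited noise: sinusoids at multiples of `base` (Hz) up to a
    cutoff `high` given in rad/s, normalized to the requested RMS."""
    rng = np.random.default_rng(seed)
    high_hz = high / (2 * np.pi)                       # convert rad/s to Hz
    freqs = np.arange(base, high_hz + base / 2, base)  # harmonics of the base
    amps = rng.random(len(freqs))
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    t = np.asarray(t, dtype=float)
    signal = sum(a * np.sin(2 * np.pi * f * t + p)
                 for a, f, p in zip(amps, freqs, phases))
    return signal * rms / np.sqrt(np.mean(signal ** 2))

t = np.linspace(0.0, 10.0, 1000)
x = fourier_noise(t)  # smooth random signal with RMS 0.5
```

Because the signal is built from a fixed seed, the same "random" input is produced on every run, which is why the demo behaves the same way each time until you re-run the script with different seeds.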
Turn learning on: To allow the learning rule to work, you need to move the ‘switch’ to +1. Because the learning rule is modulated by an error signal, if the error is zero, the weights won’t change. Once learning is on, the post will begin to track the pre.
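The effect of the gate can be sketched as a simple function (hypothetical names; in the real network the gate population inhibits the error neurons rather than computing a conditional): the decoded error is pre minus post, but with the switch near 0 the learning rule sees approximately zero.

```python
def gated_error(pre_value, post_value, switch):
    """Sketch of the gate's effect on the learning signal: error = pre - post,
    suppressed to 0 when the switch is off (illustrative threshold at 0.5)."""
    error = pre_value - post_value
    return error if switch >= 0.5 else 0.0
```

With the switch at +1 the full error drives the weight update; at 0 the error, and therefore the weight change, is zero, which is why the weights stay frozen until you flip the switch.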
Monitor the error: When the switch is 0 at the beginning of the simulation, there is no ‘error’, though there is an ‘actual error’. The difference here is that ‘error’ is calculated by a neural population, and used by the learning rule, while ‘actual error’ is computed mathematically and is just for information.
Repeat the experiment: After a few simulated seconds, the post and pre will match well. You can hit the ‘reset’ button (bottom left) and the weights will be reset to their original random values, and the switch will go to zero. For a different random starting point, you need to re-run the script.
Output: See the screen capture below.
N=60
D=1

import nef
import nef.templates.learned_termination as learning
import nef.templates.gate as gating
import random

random.seed(27)

net=nef.Network('Learn Communication') #Create the network object

# Create input and output populations.
net.make('pre',N,D)  #Make a population with 60 neurons, 1 dimension
net.make('post',N,D) #Make a population with 60 neurons, 1 dimension

# Create a random function input.
net.make_fourier_input('input', base=0.1, high=10, power=0.5, seed=12)
               #Create a white noise input function: .1 base freq, max
               #freq 10 rad/s, and RMS of .5; 12 is a seed

net.connect('input','pre')

# Create a modulated connection between the 'pre' and 'post' ensembles.
learning.make(net, errName='error', N_err=100, preName='pre',
    postName='post', rate=5e-4)
               #Make an error population with 100 neurons, and a learning
               #rate of 5e-4

# Set the modulatory signal.
net.connect('pre', 'error')
net.connect('post', 'error', weight=-1)

# Add a gate to turn learning on and off.
net.make_input('switch', [0]) #Create a controllable input function
                              #with a starting value of 0
gating.make(net, name='Gate', gated='error', neurons=40, pstc=0.01)
               #Make a gate population with 40 neurons, and a postsynaptic
               #time constant of 10ms
net.connect('switch', 'Gate')

# Add another non-gated error population running in direct mode.
net.make('actual error', 1, 1, mode='direct')
               #Make a population with 1 neuron, 1 dimension, and
               #run it in direct mode
net.connect('pre','actual error')
net.connect('post','actual error', weight=-1)

net.add_to_nengo()