Input Space

Mapping to an Input Space

What is a “potential pool” or “receptive field”? These are two terms for the same thing, describing the viewpoint of a neuron looking out upon its dendrites and synapses. Each neuron’s receptive field includes all the other cells it might possibly create a connection with. Within a layer of cortex, some neurons may be physically too far away from a given neuron in the input space to ever form connections with it. The cells a neuron can connect with are said to be in its receptive field (or potential pool).

Potential Pools

We will be implementing a discrete model of cells and input. Data about cell states and their relationships can be stored as bits or as scalar values. For example, a mini-column can have many proximal dendritic segments that map to an input space. Each of these is like a small binary receptive field across the input space. Each potential connection (whether or not it is currently connected) also needs a scalar permanence value, which represents the strength of a potential synaptic connection.

  • mini-columns have many proximal dendritic segments
  • proximal dendritic segments have many permanences
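The relationships above can be sketched in code. This is a minimal illustration, not code from any particular HTM library; the sizes, names, and the choice to give each mini-column a single segment (as this guide does) are assumptions for demonstration.

```python
import random

# Illustrative sizes; in the interactive figure these are set by sliders.
INPUT_SIZE = 100   # number of bits in the input space
NUM_COLUMNS = 10   # number of mini-columns
POOL_PCT = 0.5     # fraction of the input space each column can see

def create_potential_pool(input_size, pool_pct, rng):
    """Randomly choose which input bits this segment may connect to."""
    pool_size = int(input_size * pool_pct)
    return sorted(rng.sample(range(input_size), pool_size))

rng = random.Random(42)
columns = []
for _ in range(NUM_COLUMNS):
    pool = create_potential_pool(INPUT_SIZE, POOL_PCT, rng)
    # One proximal segment per mini-column, holding a scalar permanence
    # per potential synapse. Permanence initialization is covered below,
    # so everything starts at 0.0 here.
    segment = {input_bit: 0.0 for input_bit in pool}
    columns.append(segment)

print(len(columns))     # prints 10: one segment per mini-column
print(len(columns[0]))  # prints 50: each segment sees half the input space
```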

On the left in the illustration below, you see an example of an input space with bits. On the right, you see one cell for each mini-column in the spatial pooling structure. Imagine the mini-columns below as if you were looking down on them from above. For visual simplicity’s sake, in this guide we’re going to model one dendritic segment for each mini-column.

NOTE: All SDR representations displayed in this document are 1-dimensional and simply wrapped for sake of visual presentation (unless otherwise noted).

Figure 1: To the left is a binary input space, where blue indicates cells that are part of a mini-column’s potential pool of proximal connections. To the right, looking down at an array of mini-columns, where only the topmost cell in each can be seen. Mouse over the mini-columns to make a selection. Use the sliders to change sizes or change how large each mini-column’s receptive field is.

Move the sliders above 👆. Now hover your mouse cursor over the righthand grid. As you hover over each mini-column, its receptive field is shown in the input space on the left. As you move between mini-columns, notice that each has a unique viewpoint of the input space. Use the sliders to change the percentage of the input space you want your receptive fields to cover. Each mini-column will only ever connect to the colored cells in the input space.

Permanence Values

Now we must create floating point permanence values to represent the potential synapses between each mini-column’s dendritic segment and the input space. Each permanence value is between 0 and 1, representing how permanent the synapse is. We must generate a permanence value for each cell in each mini-column’s receptive field: the number of mini-columns times the size of each mini-column’s potential pool, which quickly adds up to a large number of scalar values to generate.
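The count of permanence values is a simple product. The numbers below are illustrative assumptions (in the interactive figures they are set by the sliders), but they show how quickly the total grows.

```python
# Back-of-the-envelope count of permanence values to generate.
# These sizes are illustrative, not prescribed by the guide.
num_columns = 2048   # mini-columns in the spatial pooler
input_size = 1024    # bits in the input space
pool_pct = 0.85      # receptive field coverage per mini-column

pool_size = int(input_size * pool_pct)       # 870 potential synapses each
total_permanences = num_columns * pool_size  # ~1.78 million scalar values
print(pool_size, total_permanences)
```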

Prior to receiving any inputs, the Spatial Pooling algorithm is initialized by computing a list of initial potential synapses for each mini-column. This consists of a random set of inputs selected from the input space (within a mini-column’s inhibition radius). Each input is represented by a synapse and assigned a random permanence value. The random permanence values are chosen with two criteria.

First, the values are chosen to be in a small range around a threshold, the minimum permanence value at which a synapse is considered “connected”. This enables potential synapses to become connected (or disconnected) after a small number of training iterations.

Second, each column has a natural center over the input region, and the permanence values are biased towards this center, so that they have higher values nearer to it. We’re not going to discuss this now; for details, see Topology.

Let’s make sure we’re satisfying the first criterion above. We’re going to create a mapping that exhibits a normal distribution over a range of permanence values, so we can bunch them up around the connection threshold. If they are all bunched near the threshold, more of them are ready to either immediately connect or immediately disconnect as soon as we start feeding in input and learning.
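One way to sketch this initialization: draw each permanence from a normal distribution centered on the connection threshold, with a small standard deviation so most values land within a short distance of connecting or disconnecting. The threshold and spread below are illustrative assumptions, not values from any particular implementation.

```python
import random

# Illustrative parameters: a connection threshold of 0.5 and a small
# spread that keeps most initial permanences near the threshold.
CONNECTION_THRESHOLD = 0.5
SIGMA = 0.05

def init_permanence(rng):
    """Draw a permanence near the threshold, clamped to [0, 1]."""
    return min(1.0, max(0.0, rng.gauss(CONNECTION_THRESHOLD, SIGMA)))

rng = random.Random(1)
perms = [init_permanence(rng) for _ in range(1000)]

# With a normal distribution, the vast majority of values fall within
# two standard deviations of the threshold.
near = sum(1 for p in perms if abs(p - CONNECTION_THRESHOLD) < 2 * SIGMA)
print(near / len(perms))
```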

Figure 2: The top grid shows the selected mini-column’s proximal relationship with the input space. Each square is a potential synaptic connection, colored from green to red by how permanent the connection is. Mouse over the cells to display the scalar permanence value associated with the mini-column’s connection to that input cell. A navy circle is shown in the cell if the permanence value is above the connection threshold, and therefore connected. The histogram displays every permanence value for the selected mini-column as frequency counts within bins. You can see how these values are distributed around a center by changing the sliders above. See how changing the connection threshold affects connections across the input space.

All the circles within the input space cells above represent connected synapses, meaning their permanence values are above the threshold, represented by the red bar in the distribution histogram at the bottom. All the histogram bars to the right of the threshold are currently connected and displayed in the above grid as circles. As you move the connection threshold slider, you change the number of connected synapses.
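The threshold comparison itself is trivial: a synapse is connected exactly when its permanence meets the threshold, so moving the slider just re-evaluates this comparison over the same stored values. The permanences below are illustrative.

```python
# A few illustrative permanence values for one mini-column's segment.
permanences = [0.12, 0.48, 0.50, 0.51, 0.73, 0.95]

def connected(perms, threshold):
    """Return only the permanences at or above the connection threshold."""
    return [p for p in perms if p >= threshold]

print(len(connected(permanences, 0.50)))  # prints 4
print(len(connected(permanences, 0.60)))  # raising the threshold leaves 2
```

Note that no permanence changes when the threshold moves; only the set of connected synapses does.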

With these few parameters, we can define how quickly neurons will initially learn. We’ll show in future experiments how initial permanence distributions affect learning.

Streaming Scalar Data

Let’s see how streaming scalar input looks over time as it is encoded into an input space. As you change the resolution below, watch as the size of the encoding increases to encompass more of the input space. You can also hover your mouse over bits in the binary representation to see the range of input values that bit will encode.
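A minimal sketch of a simple scalar encoder shows the idea: each value activates a contiguous run of bits, and the run slides across the input space as the value changes, so similar values produce overlapping encodings. The function and parameter names are illustrative, and this ignores refinements (like periodic inputs) that real encoders handle.

```python
def encode_scalar(value, min_val, max_val, num_bits, num_active):
    """Encode a scalar as a contiguous run of active bits."""
    # Fraction of the value range, clamped to [0, 1].
    clamped = min(max(value, min_val), max_val)
    frac = (clamped - min_val) / (max_val - min_val)
    # The leftmost active bit slides across the remaining positions.
    start = int(round(frac * (num_bits - num_active)))
    return [1 if start <= i < start + num_active else 0 for i in range(num_bits)]

bits = encode_scalar(50, 0, 100, num_bits=20, num_active=5)
print(bits)  # a run of five 1s centered in a field of 0s
```

Widening `num_active` relative to `num_bits` plays the role of lowering the resolution: each bit then responds to a larger range of input values.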