Portal:GreySmith Institute new/About7

Neural Network Lab

Neural networks have been around since the days of Minsky, but what we demand from a neural network today is much more than we demanded from the early perceptrons.

The Neural Model:

The Lab will be working on a Scala neuron that incorporates a large amount of information about the bio-chemistry of learning into the basic neuron. One of the reasons Scala was chosen is that it lets us offload some of the processing to a scalable cluster computer as needed. This, it is hoped, will move the learning elements out of band from the actual neural simulation, so that long-term learning processes do not overwhelm the same processor that the neurons are implemented on.

Scala offers an actor model for concurrency rather than just Java threads. By setting up servers to play the actors' roles, and subscribing neurons to those servers, we can offload those internal functions of the cell that span multiple cycles of the neural network, without blocking the threads used for normal processing. This means that the overhead, even for long-term memory elements, is mostly the message-passing cost of the actors. Keeping actor messages concise will allow the actors to inter-operate at high speed.
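
The page does not commit to a particular actor library, so the following is only a minimal sketch, written here against Akka classic actors; the Signal, Adjusted, HabituationActor, and Neuron names are all hypothetical. It illustrates the pattern above: the neuron fires a message at a habituation server and carries on without blocking.

<syntaxhighlight lang="scala">
import akka.actor.{Actor, ActorRef, ActorSystem, Props}

// A raw input spike arriving at a neuron.
case class Signal(strength: Double)
// The habituation-adjusted strength returned to the neuron.
case class Adjusted(strength: Double)

// Multi-cycle bookkeeping lives in its own actor, which can be
// deployed on a different machine in the cluster.
class HabituationActor extends Actor {
  private var history = List.empty[Double]

  def receive: Receive = {
    case Signal(s) =>
      history = (s :: history).take(1000)                // bounded history
      val damping = 1.0 / (1.0 + history.count(_ > 0.5)) // toy habituation rule
      sender() ! Adjusted(s * damping)
  }
}

// The neuron never blocks: it fires a message and keeps processing.
class Neuron(habituation: ActorRef) extends Actor {
  def receive: Receive = {
    case s: Signal => habituation ! s // offload, no waiting
    case Adjusted(s) =>
      println(s"received adjusted strength $s") // stand-in for membrane update
  }
}

object NeuralLab extends App {
  val system = ActorSystem("neural-lab")
  val hab    = system.actorOf(Props[HabituationActor](), "habituation")
  val neuron = system.actorOf(Props(new Neuron(hab)), "n0")
  neuron ! Signal(0.8)
}
</syntaxhighlight>

Because the neuron only ever sends and receives messages, the habituation actor can be moved to another machine without changing the neuron's code, which is the out-of-band property described above.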

An example is the simple aging computation involved in habituation.

With aging, each signal has to be compared against the history of the neuron to see whether the neuron is habituated, and there are both short-term and long-term pathways to habituation. This can be done with a queue, or it can be done with an actor. If it is done with a queue, the same processor has to tie up compute cycles calculating both terms of habituation before it can calculate the signal strength. With an actor, the neuron can simply pass the output to a habituation actor, which calculates the signal strength on a separate computer and returns it; meanwhile another neuron can be calculated on another thread while the current thread sleeps waiting for the actor's reply.
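
As a hedged illustration of those two pathways, the habituation actor itself might keep a short-term window alongside a slowly decaying long-term average; the window size, decay rate, and thresholds below are placeholder values, not figures specified by the Lab.

<syntaxhighlight lang="scala">
import akka.actor.Actor

// Same message shapes as the previous sketch.
case class Signal(strength: Double)
case class Adjusted(strength: Double)

// Hypothetical sketch: one actor maintains both habituation pathways, so
// the neuron's own thread never spends cycles on either comparison.
class TwoPathwayHabituation extends Actor {
  private var recent      = Vector.empty[Double] // short-term window
  private var longTermAvg = 0.0                  // slow exponential average

  def receive: Receive = {
    case Signal(s) =>
      recent      = (recent :+ s).takeRight(10)     // last ten signals
      longTermAvg = 0.999 * longTermAvg + 0.001 * s // slow-decay memory
      // Placeholder rule: habituated when either pathway saturates.
      val habituated = recent.sum / recent.size > 0.7 || longTermAvg > 0.5
      sender() ! Adjusted(if (habituated) s * 0.2 else s)
  }
}
</syntaxhighlight>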

By monitoring the load on the different computers, you should be able to balance the system so that it works at peak efficiency for any size of cluster. For instance, if the habituation computers are saturated but the neural-network computers are not, you can add another habituation actor on a separate computer (one concrete option is sketched below) and thus increase the number of actors available to do habituation. Of course this slightly increases the overhead as you reapportion the habituation load across more processors, but that is partly why we use clusters.
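
One concrete way to add habituation capacity, again assuming Akka and reusing the TwoPathwayHabituation actor from the sketch above, is a resizable router pool: neurons keep sending to a single address while the pool spreads work over more workers as it saturates. The pool sizes below are illustrative.

<syntaxhighlight lang="scala">
import akka.actor.{ActorSystem, Props}
import akka.routing.{DefaultResizer, RoundRobinPool}

object BalancedLab extends App {
  val system = ActorSystem("neural-lab")

  // A pool of habituation workers behind one router address. The resizer
  // grows or shrinks the pool with message pressure, which is one way to
  // add another habituation actor when the existing ones saturate.
  val resizer = DefaultResizer(lowerBound = 2, upperBound = 16)
  val habPool = system.actorOf(
    RoundRobinPool(4, Some(resizer)).props(Props[TwoPathwayHabituation]()),
    "habituation-pool")

  habPool ! Signal(0.8) // neurons address the pool exactly as one actor
}
</syntaxhighlight>

Spreading those routees across physically separate machines would additionally require Akka's remoting or cluster support, which this sketch leaves out.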