Artificial Consciousness/Neural Correlates/Neural Models/Weightless Neuron


Weightless Neurons

The weightless neuron is a further abstraction of the Hopfield neuron, which had already restricted a neuron's output to 0 or 1. It does away with the complexity of weights altogether and derives its output directly from the outputs of the previous layer of neurons. Because the model is simpler, a much larger network of neurons can be implemented and the simulation runs faster. Dr. Igor Aleksander at Imperial College London works with a system called MAGNUS that simulates these weightless neurons.
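As a rough illustration, a weightless neuron can be thought of as a small lookup table (a RAM node) addressed by the binary outputs of the previous layer: the stored value is the output, and no weighted sum is involved. The Python sketch below is illustrative only and makes no claim about how MAGNUS is implemented; the class and method names are hypothetical.

```python
class WeightlessNeuron:
    """Minimal sketch of a RAM-style weightless neuron (names are hypothetical)."""

    def __init__(self, n_inputs):
        # One table entry per possible binary input pattern; there are no weights anywhere.
        # 'X' marks the unknown state discussed below.
        self.table = {address: 'X' for address in range(2 ** n_inputs)}

    @staticmethod
    def _address(inputs):
        # Interpret the 0/1 outputs of the previous layer as a binary address.
        address = 0
        for bit in inputs:
            address = (address << 1) | bit
        return address

    def recall(self, inputs):
        # The output is read straight from the table: no summation, no threshold.
        return self.table[self._address(inputs)]
```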

The output of a weightless neuron is 1 (firing), 0 (not firing), or X (unknown), a state in which the neuron may or may not fire regardless of the state of its inputs. All neurons start in the unknown state and learn from their inputs whether or not to fire. Dr. Aleksander has been able to show that this system is capable of learning to represent the environment from a third-person viewpoint.
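To make the learning scheme concrete, the standalone sketch below (plain Python, reduced to a single lookup table, with hypothetical function names) shows every location starting in the unknown state, training writing a 0 or 1 into the addressed location, and an untrained location firing or not at random.

```python
import random

N_INPUTS = 3
table = {address: 'X' for address in range(2 ** N_INPUTS)}  # all locations start unknown


def address_of(inputs):
    # Previous-layer outputs (0s and 1s) form a binary address into the table.
    return int(''.join(str(bit) for bit in inputs), 2)


def train(inputs, desired_output):
    # Learning simply writes the desired 0 or 1 into the addressed location.
    table[address_of(inputs)] = desired_output


def recall(inputs):
    value = table[address_of(inputs)]
    if value == 'X':
        # Unknown: the neuron may fire or not, regardless of its inputs.
        return random.randint(0, 1)
    return value


train((1, 0, 1), 1)        # taught to fire on this pattern
print(recall((1, 0, 1)))   # -> 1
print(recall((0, 1, 0)))   # untrained pattern: 0 or 1 at random
```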

Because this model is so far abstracted from real neurons, there has been some discussion about Dr. Aleksander's conceptualization of the neuron as a state machine. However, there is evidence that even weighted neurons have states, even if they present their outputs as real numbers: each synapse is either active or inactive depending on the presence of its neurotransmitter signal, which qualifies as a state. It is how these states are interpreted that makes the weightless neuron so simple and the weighted neuron so complex, and that obscures the state-machine character of the output in the case of weighted neurons.

It is important to realize, however, that features such as synaptic connections and weights, while complicating the neuron, also give us clues as to how it works. Dr. Aleksander's ability to get his model to represent third-person viewpoints using iconic learning therefore does not necessarily mean that the brain uses iconic learning to achieve the same goal. Nevertheless, his success in getting a neural network to represent the world in a clear fashion indicates that neural networks built on more complex neuron models might be able to do the same.