Monday, November 17, 2014

Steam C# Wrapper Steamworks.NET Unity

I got Steamworks.NET, the C# wrapper around the Steamworks C++ library, largely working.

The code runs in a standalone dedicated server, but it should also work in Unity3D with Unity networking disabled.  Our goal is to use Steam networking instead of Unity's native RakNet-based networking.

This lets us run a dedicated server (no GUI, no Unity) on Linux, Windows, and Mac.

So here is the key point about the Steam libraries that was not immediately obvious.

There is the 'normal' SteamUser / SteamNetworking set of calls and callbacks.  They let you make calls like SendP2PPacket and IsP2PPacketAvailable to send and receive packets.  I also make a lobby with SteamMatchmaking.CreateLobby.

Gotcha #1
You MUST keep a reference to a callback when you create it, or it gets deleted.
So the code...

Callback<SteamServersConnected_t>.CreateGameServer(OnSteamServersConnected);

is deadly.  The callback gets created and then soon gets deleted, because the C# garbage collector sees it has no reference.  Instead, keep a reference in your class, so the code reads...

SteamServersConnectedCB = Callback<SteamServersConnected_t>.CreateGameServer(OnSteamServersConnected);
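
Here is a minimal sketch of the whole pattern (the class, field, and method names are mine; Console.WriteLine stands in for Debug.Log in a dedicated-server build):

    using System;
    using Steamworks;

    public class SteamServerHooks
    {
        // Held in a field so the C# garbage collector cannot collect
        // the handler while Steam still needs it.
        private Callback<SteamServersConnected_t> SteamServersConnectedCB;

        public void Init()
        {
            SteamServersConnectedCB =
                Callback<SteamServersConnected_t>.CreateGameServer(OnSteamServersConnected);
        }

        private void OnSteamServersConnected(SteamServersConnected_t cb)
        {
            Console.WriteLine("Connected to Steam servers.");
        }
    }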

Gotcha #2
Steam has two sets of callback, message, and event queues: one for client use and one for the server, if there is one.  I can't stress this enough.  The code...

SteamGameServerNetworking.IsP2PPacketAvailable(out sz, 0)

is very different from 

SteamNetworking.IsP2PPacketAvailable(out sz, 0)

The first one asks whether the GameServer you made has incoming packets, and the second one asks whether the local SteamUser.GetSteamID() user has incoming packets.  The second one is the local player.

Same for all the callbacks.

 P2PSessionRequestCB = Callback<P2PSessionRequest_t>.Create(OnP2PSessionRequest);

 P2PSessionRequestServerCB = Callback<P2PSessionRequest_t>.CreateGameServer(OnP2PSessionRequestServer);

These are completely different events.  The first fires when the CLIENT receives an incoming connection, and the second when the SERVER gets data.

And the implementations are different....

    // CLIENT side: accept sessions aimed at the local SteamUser.
    void OnP2PSessionRequest(P2PSessionRequest_t s)
    {
        Debug.Log("OnP2PSessionRequest: " + s.m_steamIDRemote);
        SteamNetworking.AcceptP2PSessionWithUser(s.m_steamIDRemote);
    }

    // SERVER side: accept sessions aimed at the GameServer identity.
    void OnP2PSessionRequestServer(P2PSessionRequest_t s)
    {
        Debug.Log("OnP2PSessionRequestServer: " + s.m_steamIDRemote);
        SteamGameServerNetworking.AcceptP2PSessionWithUser(s.m_steamIDRemote);
    }
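
One more note: neither set of events fires unless its queue gets pumped.  Here is a sketch of the per-tick pump I would expect, assuming the standard Steamworks.NET entry points (SteamAPI.RunCallbacks, GameServer.RunCallbacks, ReadP2PPacket); the method name and logging are mine:

    // Call every frame / server tick. Each RunCallbacks call dispatches
    // one of the two callback queues described above.
    void PumpSteam()
    {
        SteamAPI.RunCallbacks();     // CLIENT callback queue
        GameServer.RunCallbacks();   // SERVER callback queue

        uint sz;
        // Drain packets addressed to the local SteamUser (the client).
        while (SteamNetworking.IsP2PPacketAvailable(out sz, 0))
        {
            byte[] buf = new byte[sz];
            uint read;
            CSteamID sender;
            if (SteamNetworking.ReadP2PPacket(buf, sz, out read, out sender, 0))
                Debug.Log("Client packet from " + sender);
        }

        // Drain packets addressed to the GameServer identity.
        while (SteamGameServerNetworking.IsP2PPacketAvailable(out sz, 0))
        {
            byte[] buf = new byte[sz];
            uint read;
            CSteamID sender;
            if (SteamGameServerNetworking.ReadP2PPacket(buf, sz, out read, out sender, 0))
                Debug.Log("Server packet from " + sender);
        }
    }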




Thursday, August 14, 2014

Avalanche Pattern Recognition in Neural Networks

This paper is a follow-on to the New Theory of Cognition.  It describes research I did, with code, to test the idea of avalanching into neural states.








Avalanche Pattern Recognition in Neural Networks
Richard Keene
Phone: 801-961-3668
2009 Tommy Moe Place, Park City UTAH 84098
rkeene@xmission.com (2014 now rmkeene@gmail.com)
Keene.Dick@amstr.com
http://www.xmission.com/~rkeene







Abstract
The results of research into avalanche pattern recognition, neural fatigue, and successive pattern sequencing are presented. Given an array of neurons that are interconnected such that each neuron is trying to predict the next state of all the other neurons, and stimulate the other neurons to the predicted state, successive patterns can be taught to the system. Then a partial stimulation of the pattern will cause an avalanche effect into the full pattern. If the neurons have a fatigue limit, the current pattern can be made to fade, and an avalanche into another pattern may occur.


  1. Motivation for Research - The Subsumptive Regular Architecture

In a previous paper [1] a new architecture called a Subsumptive-Regular System (SRS) was proposed. This system consists of a subsumptive hard-wired neural system that takes the environmental inputs, processes these inputs into abstract mappings of environmental state, and generates outputs from the system to affect the environment. In conjunction with the subsumptive system is a homogeneous array of teachable neurons that are randomly connected to each other and to the subsumptive system. This ‘cortex’ is constantly trying to predict the next state of the subsumptive system and, according to the strength of the pattern match, stimulate the subsumptive system into that state.
The major building block for this theory is the cortex layer that avalanches or cascades into a pattern match. This paper is the result of test programs that implemented various algorithms for avalanche pattern recognition.

1. The Avalanche Process

The usual use of neural networks is to arrive at a degree of confidence that the inputs match a given pattern. The classic example of differentiating between Little Red Riding Hood and the Wolf uses cape color, ear size, tooth size, and eye size to arrive at a confidence level for each choice. This is done by presenting an input pattern (possibly partial) and, in one cycle, getting an output that is the degree of confidence.
If we now take such a neural network and feed the output back around into the inputs such that a match of Wolf will increase the inputs [ears, teeth, eyes] and decrease the input [red cape], and a match of Red Riding Hood will do the opposite, then successive cycles should avalanche into the original teaching pattern.
The pattern may also be partial. If only the ears input is slightly turned on, then the feedback should cause an avalanche into the full Wolf pattern.
Once in that state the system is ‘locked up’. In a more general and complex system, if one desires to have the neural system progress to successive states (a requirement for the SRS) then there must be a method for getting out of the state. This is where neural fatigue comes into play. After a neuron has been firing for a certain time it becomes fatigued and turns off. This allows a new avalanche into another state.
To get time sequenced states there must be some embodiment in the system of the current state and the past state.
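
To make the feedback idea concrete, here is a toy C# sketch of the Wolf / Red Riding Hood example (the actual test programs were written in Java; the constants and names here are mine). A faint "ears" cue is fed back through the two taught patterns until the full Wolf pattern dominates:

    using System;

    class AvalancheDemo
    {
        static double Dot(double[] a, double[] b)
        {
            double s = 0;
            for (int i = 0; i < a.Length; i++) s += a[i] * b[i];
            return s;
        }

        static void Main()
        {
            // inputs: ears, teeth, eyes, red cape
            double[] x = { 0.2, 0.0, 0.0, 0.0 };  // faint "ears" cue only
            double[] wolf = { 1, 1, 1, 0 };       // taught Wolf pattern
            double[] hood = { 0, 0, 0, 1 };       // taught Riding Hood pattern
            double rate = 0.3;                    // feedback strength

            for (int cycle = 0; cycle < 20; cycle++)
            {
                double wolfMatch = Dot(x, wolf) / 3.0;  // confidence, 0..1
                double hoodMatch = Dot(x, hood) / 1.0;
                for (int i = 0; i < x.Length; i++)
                {
                    // Each pattern pulls the inputs toward itself in
                    // proportion to its current confidence.
                    x[i] += rate * (wolfMatch * (wolf[i] - x[i])
                                  + hoodMatch * (hood[i] - x[i]));
                    x[i] = Math.Max(0, Math.Min(1, x[i]));  // clip to [0,1]
                }
                Console.WriteLine($"cycle {cycle}: wolf={wolfMatch:F2} hood={hoodMatch:F2}");
            }
        }
    }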

2. First Experiment - Three Pairs of Neurons

The first experiment used three neurons that were stimulated in one of three patterns. The patterns were 100, 010, 001. All neurons had teachable connections to all other neurons and themselves. The idea was that a neuron would feed back on itself and progress to a full-on state. This caused balance problems where all neurons would settle at a mid-level state where the self-stimulus was balanced by the inhibition from the other two neurons.
The next generation of the program uses three pairs of neurons called A, a, B, b, C, c, where all neurons have teachable connections to all other neurons but not to themselves. This works very well. The system is first taught the three patterns [110000, 001100, 000011] until the patterns are well established. Then the system is set to random neuron activity levels and cycled. Whichever neuron pair has the highest activity will dominate the others, and the system will usually avalanche into the corresponding pattern. Occasionally the system gets into an intermediate, balanced state where no pattern dominates.

2.1 The Teachable Connections

The weighted and teachable connections were tested in two different styles. There is a connectivity factor used in both styles that determines how strongly all neurons are connected.

2.1.1 Level Seeking Connections

The first style is not the normal neural network connection, where the activity level (charge) of the neuron is added to the target neuron. Instead the connection has an expected value and a strength (confidence level). This allows the connection to force the target toward a given state.
For example, if a connection has an expected value of 0.1 and a strength of 0.99, and the current neuron charge is 0.9, then the target neuron is pushed toward 0.1 by 0.99 * 0.9 * connection_factor, plus any external stimulus.
This approach results in the fastest avalanches. If the system is to be taught a series of patterns, have its connection factors frozen (learning turned off), and then match patterns, this is the best connection method. If learning is constant and always turned on, this approach ‘softens’ the learned patterns fairly rapidly, because the connections try to learn the in-between patterns during an avalanche event.
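
A minimal sketch of one level-seeking connection firing, per the description above (the type and member names are mine, and the original test programs were Java, not C#):

    struct LevelSeekingConnection
    {
        public double Expected;  // the charge this connection tries to force
        public double Strength;  // confidence in the expectation, 0..1

        // Push the target's charge toward Expected, scaled by the source
        // neuron's charge and the global connectivity factor. For the
        // example above: (0.1 - target) * 0.99 * 0.9 * connection_factor.
        public double Stimulus(double sourceCharge, double targetCharge,
                               double connectivityFactor)
        {
            return (Expected - targetCharge)
                   * Strength * sourceCharge * connectivityFactor;
        }
    }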

2.1.2 Classical Connections

The second style is a more classical learning connection, where the charge of the neuron times the connection weight is added to the target neuron. Neurons are randomly excitatory or inhibitory when first created. This approach results in any given pattern having some excitatory and some inhibitory neurons as members. The neurons can then force the entire neuron array into the pattern by exciting the ON neurons of the pattern and inhibiting the OFF neurons of the pattern.
Avalanching works well with this approach if there is a statistically significant number of neurons in all patterns. It also preserves learning during avalanches; in fact, avalanches tend to reinforce the already-learned patterns. With this approach connection weights tend to be either almost 0 or almost 1, with intermediate weights representing learning-in-progress. Such a system should represent intermediate input and output levels with multiple neurons.

2.2 Conservation of Charge

If a given network has a total energy, which is the sum of all the charges of the neurons, and there is no external stimulus, then total charge should increase on successive cycles. Fast increases represent fast avalanches. Too fast an increase tends to dominate the teaching input so much that the learned patterns become fuzzy.
With the level seeking connections, if charge is not conserved the system rapidly becomes inactive. If charge is increasing the system rapidly runs away until all charges are at maximum. The actual algorithm clips the neuron charge to the range 0 to 1 inclusive after each cycle. This limits run-away and limits the total system energy. The charge of a neuron ‘leaks’ toward 0 on each cycle by about 1%, so about 99% of the total system charge carries over from one cycle to the next. The balance between the clipping drawing off charge and the inter-neuron stimulus generating charge allows the avalanching state to dominate the system energy pattern. This keeps the system changing states without overloading. In fact, the actual leak rates and inter-neuron stimulus factors (called the connectivity factor in the code) can vary considerably without affecting the system balance.
The classical connections must also conserve charge. The neurons have a relaxation constant such that a charged neuron, with no stimulus, will decay to 0 charge. The balance between the relaxation constant, the connectivity factor, and the ratio of inhibitory to excitatory neurons determines the avalanche rate, and whether the system will avalanche at all. The levels were found by trial and error.
The connectivity factor very strongly affects the avalanche rate by acting as an overall throttle on system energy. Too low a connectivity factor results in slow avalanches, and the system often cannot reach any particular state. Too high a factor causes very fast avalanches, but then the avalanches dominate the external stimulus. Still, the working range is fairly broad.
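
Here is a sketch of one full cycle with classical connections, the relaxation leak, and the clipping described above (the array layout and names are mine; the ~1% leak figure follows the text):

    using System;

    static class ClassicalCycle
    {
        // weight[s, t] is the connection from neuron s to neuron t;
        // negative weights are inhibitory.
        public static void Cycle(double[] charge, double[,] weight,
                                 double connectivity, double leak /* ~0.01 */)
        {
            int n = charge.Length;
            double[] next = new double[n];
            for (int t = 0; t < n; t++)
            {
                double stim = 0;
                for (int s = 0; s < n; s++)
                    if (s != t)  // no self-connections
                        stim += charge[s] * weight[s, t];
                next[t] = charge[t] * (1 - leak) + connectivity * stim;
                next[t] = Math.Max(0.0, Math.Min(1.0, next[t]));  // clip to [0,1]
            }
            Array.Copy(next, charge, n);
        }
    }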

2.3 Neuron Fatigue

Once the system has avalanched into a given state, it is stuck there (much like an electronic flip-flop). After some number of cycles the fatigue level of a neuron reaches a certain level and the neuron changes to the fatigued state. The neuron then stops stimulating its targets for some number of cycles while it recovers, and its charge leaks toward zero. While this is happening, other neurons can begin to avalanche to some other state.
The number of cycles it takes for a neuron to fatigue must be more than the avalanche time, or the full state will usually not be reached. If it is much more than the avalanche time, the system simply reaches successive states more slowly and may miss time-dependent factors. The fatigue recovery time must be somewhat longer than the avalanche time.
A problem encountered in the test program was that one neuron in a group matching a pattern would start fatigue recovery before another neuron reached a saturated avalanched state. This caused the system to oscillate on a single state. The solution is to delay a few cycles between setting the fatigued state and beginning recovery, to allow all the neurons in a pattern to become fatigued.
See the source code for the actual implementation of the algorithms.
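
As a rough guide, the fatigue bookkeeping can be sketched like this, using the constants reported in section 2.4 (fatigue threshold 25, a 5-cycle delay before recovery, roughly 20 cycles to recover); the field names and decay rates are mine:

    class FatigueNeuron
    {
        public double Charge;
        public bool Fatigued;
        double fatigue;  // running sum of charge over time
        int delay;       // cycles to wait before starting recovery

        public void EndOfCycle()
        {
            if (!Fatigued)
            {
                fatigue += Charge;
                if (fatigue >= 25.0)
                {
                    Fatigued = true;  // stop stimulating targets
                    delay = 5;        // let the rest of the pattern fatigue too
                }
            }
            else
            {
                Charge *= 0.9;  // charge leaks toward zero while fatigued
                if (delay > 0)
                    delay--;
                else if ((fatigue -= 1.25) <= 0)  // ~20 cycles to recover
                    Fatigued = false;
            }
        }
    }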

2.4 Test Results

The test program showed that avalanching does occur most of the time. A typical avalanche time was 15 to 19 cycles. The fatigue time was set to 25 cycles, the recovery delay was 5 cycles, and recovery took about 20 cycles. (Fatigue was implemented as the sum of charge over time reaching 25, with a slight relaxation factor.)
If an avalanche did not occur then the fatigue algorithm would eventually upset the balance and the system would avalanche to some other state.
The system would continually avalanche to various states. The test program had no time-dependent components.

3. Test Program - 5x5 Grid of Neurons with Graphic Output

The third test program added a graphical view of the neuron activity levels and allowed the grid of neurons to be any size. The tests were done on a 5 by 5 grid of neurons, all connected to each other but not self-connected.
Pattern sets that are non-overlapping result in connection expected values that are exact and strong. This results in very clean avalanches.
Patterns that share common neurons, such as a rotating bar about the center (patterns that look like the ‘twirly wait bar’: the sequence | / - \, where the center pixel is shared by all patterns; see Appendix B), avalanche very well. All patterns have the same total energy level.
Patterns that share some neurons but have different total number of neurons in the pattern generally avalanche into the patterns with the most active neurons. (Highest total energy). One can lightly (50%) set the low energy patterns and they will still avalanche.

3.1 Other Algorithms Tested

Several other algorithms were tested.
Tests were done on the firing algorithm. For the level seeking style connections: the first algorithm added some percentage of the difference between the expected target charge and the actual target charge, so that the target charge approached the expected value asymptotically. This resulted in slow avalanches. The algorithm was changed to do a linear seek toward the expected value. This resulted in very fast avalanches. Also, with constant learning turned on, linear seeking resulted in less forgetting.
The learning of the confidence value (strength) for a level seeking style connection has been done in several ways, none of which are really satisfactory. This needs work. One interesting algorithm was to lower the learning rate for connections with a high confidence. This attempted to compensate for the forgetting that happens when learning is always turned on. It actually had the exact same effect as a lower learning rate would have.
A test was done where instead of weighted connections the connections kept a running sum-of-charge and sum-of-charge-squared and then calculated the weighted mean and standard deviation. This became the expected value and the deviation was used for the confidence level. This did not work well for connections to bi-modal neurons.

4. The Program Code

The test programs are written in Java. They are available at http://www.xmission.com/~rkeene.

5. Conclusions and Future Research

This series of tests has shown that interconnected arrays of neurons with fatigue algorithms can avalanche into successive states of learned patterns.
There is still the difficulty of arriving at some algorithm for turning learning on and off.
The next step is to add groups (also called maps) of neurons that act together to represent some external state, and let the groups interact with each other. If some of the maps represent past states while others represent current states, the system should be able to repeat pattern series in the order they were presented.
In a large system not all neurons would be connected to all other neurons. Instead each neuron would connect to N other randomly selected neurons, where N is the fanout.

6. References

  1. Keene, Richard 1995, A New Model for the Cognitive Process - Artificial Cognition, International IEEE Symposium on Intelligence in Neural and Biological Systems, IEEE Press
  2. Haykin, Simon 1993, Neural Networks, A Comprehensive Foundation, IEEE Press

Appendix A - Output From the Three-Pair Test
Here is the three pair system after it has learned the three patterns, has been set to random charge values and is about to avalanche. Neurons 2 and 3 will dominate because that pair has the highest total energy.
Neuron 0: Id A: C 0.111791: Fanout 5: Fatigue 2.25323: State 0
Target 0(a): Expected 0.98659: Strength 1
Target 1(B): Expected 0.00476323: Strength 1
Target 2(b): Expected 0.00665825: Strength 1
Target 3(C): Expected 0.0172547: Strength 1
Target 4(c): Expected 0.0178598: Strength 1
Neuron 1: Id a: C 0.0739657: Fanout 5: Fatigue 1.73814: State 0
Target 0(A): Expected 0.987056: Strength 1
Target 1(B): Expected 0.0126316: Strength 1
Target 2(b): Expected 0.014937: Strength 1
Target 3(C): Expected 0.00880566: Strength 1
Target 4(c): Expected 0.00888966: Strength 1
Neuron 2: Id B: C 0.133838: Fanout 5: Fatigue 3.72987: State 0
Target 0(A): Expected 0.00983726: Strength 1
Target 1(a): Expected 0.0158047: Strength 1
Target 2(b): Expected 0.982846: Strength 1
Target 3(C): Expected 0.0147772: Strength 1
Target 4(c): Expected 0.0139572: Strength 1
Neuron 3: Id b: C 0.218574: Fanout 5: Fatigue 3.18379: State 0
Target 0(A): Expected 0.0257993: Strength 1
Target 1(a): Expected 0.0133968: Strength 1
Target 2(B): Expected 0.983183: Strength 1
Target 3(C): Expected 0.0133469: Strength 1
Target 4(c): Expected 0.0111659: Strength 1
Neuron 4: Id C: C 0.0244048: Fanout 5: Fatigue 4.25388: State 0
Target 0(A): Expected 0.0160625: Strength 1
Target 1(a): Expected 0.00663962: Strength 1
Target 2(B): Expected 0.0146463: Strength 1
Target 3(b): Expected 0.0163282: Strength 1
Target 4(c): Expected 0.987261: Strength 1
Neuron 5: Id c: C 0.0540738: Fanout 5: Fatigue 2.00599: State 0
Target 0(A): Expected 0.00808988: Strength 1
Target 1(a): Expected 0.0166216: Strength 1
Target 2(B): Expected 0.00707348: Strength 1
Target 3(b): Expected 0.00900241: Strength 1
Target 4(C): Expected 0.989449: Strength 1

Here is the same system part way into the avalanche.
Neuron 0: Id A: C 0.0185807: Fanout 5: Fatigue 1.84688: State 0
Target 0(a): Expected 0.98659: Strength 1
Target 1(B): Expected 0.00476323: Strength 1
Target 2(b): Expected 0.00665825: Strength 1
Target 3(C): Expected 0.0172547: Strength 1
Target 4(c): Expected 0.0178598: Strength 1
Neuron 1: Id a: C 0.0147144: Fanout 5: Fatigue 1.23754: State 0
Target 0(A): Expected 0.987056: Strength 1
Target 1(B): Expected 0.0126316: Strength 1
Target 2(b): Expected 0.014937: Strength 1
Target 3(C): Expected 0.00880566: Strength 1
Target 4(c): Expected 0.00888966: Strength 1
Neuron 2: Id B: C 0.684502: Fanout 5: Fatigue 7.66481: State 0
Target 0(A): Expected 0.0103409: Strength 1
Target 1(a): Expected 0.0161878: Strength 1
Target 2(b): Expected 0.974916: Strength 0.99733
Target 3(C): Expected 0.0144067: Strength 1
Target 4(c): Expected 0.0143724: Strength 1
Neuron 3: Id b: C 0.693296: Fanout 5: Fatigue 7.48861: State 0
Target 0(A): Expected 0.0249297: Strength 1
Target 1(a): Expected 0.0152: Strength 1
Target 2(B): Expected 0.946959: Strength 0.985975
Target 3(C): Expected 0.0153209: Strength 1
Target 4(c): Expected 0.0130931: Strength 1
Neuron 4: Id C: C 0.0180507: Fanout 5: Fatigue 3.79868: State 0
Target 0(A): Expected 0.0160625: Strength 1
Target 1(a): Expected 0.00663962: Strength 1
Target 2(B): Expected 0.0146463: Strength 1
Target 3(b): Expected 0.0163282: Strength 1
Target 4(c): Expected 0.987261: Strength 1
Neuron 5: Id c: C 0.0161788: Fanout 5: Fatigue 1.52298: State 0
Target 0(A): Expected 0.00808988: Strength 1
Target 1(a): Expected 0.0166216: Strength 1
Target 2(B): Expected 0.00707348: Strength 1
Target 3(b): Expected 0.00900241: Strength 1
Target 4(C): Expected 0.989449: Strength 1

Here is the same system now fully in the pattern state. Fatigue is building up.
Neuron 0: Id A: C 0.0185807: Fanout 5: Fatigue 1.65732: State 0
Target 0(a): Expected 0.98659: Strength 1
Target 1(B): Expected 0.00476323: Strength 1
Target 2(b): Expected 0.00665825: Strength 1
Target 3(C): Expected 0.0172547: Strength 1
Target 4(c): Expected 0.0178598: Strength 1
Neuron 1: Id a: C 0.0147144: Fanout 5: Fatigue 1.02516: State 0
Target 0(A): Expected 0.987056: Strength 1
Target 1(B): Expected 0.0126316: Strength 1
Target 2(b): Expected 0.014937: Strength 1
Target 3(C): Expected 0.00880566: Strength 1
Target 4(c): Expected 0.00888966: Strength 1
Neuron 2: Id B: C 0.93139: Fanout 5: Fatigue 12.7788: State 0
Target 0(A): Expected 0.0103409: Strength 1
Target 1(a): Expected 0.0161878: Strength 1
Target 2(b): Expected 0.974916: Strength 0.99733
Target 3(C): Expected 0.0144067: Strength 1
Target 4(c): Expected 0.0143724: Strength 1
Neuron 3: Id b: C 0.959067: Fanout 5: Fatigue 12.7305: State 0
Target 0(A): Expected 0.0249297: Strength 1
Target 1(a): Expected 0.0152: Strength 1
Target 2(B): Expected 0.946959: Strength 0.985975
Target 3(C): Expected 0.0153209: Strength 1
Target 4(c): Expected 0.0130931: Strength 1
Neuron 4: Id C: C 0.0180507: Fanout 5: Fatigue 3.60477: State 0
Target 0(A): Expected 0.0160625: Strength 1
Target 1(a): Expected 0.00663962: Strength 1
Target 2(B): Expected 0.0146463: Strength 1
Target 3(b): Expected 0.0163282: Strength 1
Target 4(c): Expected 0.987261: Strength 1
Neuron 5: Id c: C 0.0161788: Fanout 5: Fatigue 1.31921: State 0
Target 0(A): Expected 0.00808988: Strength 1
Target 1(a): Expected 0.0166216: Strength 1
Target 2(B): Expected 0.00707348: Strength 1
Target 3(b): Expected 0.00900241: Strength 1
Target 4(C): Expected 0.989449: Strength 1


Appendix B - The 5x5 System With Rotating Bar Pattern

This series of images shows the training patterns.

These three pictures show random charges, then a cascade into a pattern, then the full pattern with fatigue just beginning to set in.


This is after an avalanche to another pattern.



New Cognition

Attached is a paper I published in 1995. (Reformatted by copying into the blog editor)

A New Model for the Cognitive Process
Artificial Cognition


Richard Keene - Park City Group
Phone: 801-645-2875
Box 5000, Park City UTAH 84060


(2014 now rmkeene@gmail.com)

Abstract
A theory is presented: That a subsumptive neural system coupled with a semi-randomly connected, teachable, neural net will result in cognitive behavior similar to what appears to happen in biological brains. The paper discusses a new theory of what cognition is, and an algorithm for the simulation of cognition. The topics of what the brain appears to do, why the brain provides the functions it does, and how this could be simulated are discussed. The intent is to arrive at a single unified algorithm that covers all functions of the brain.


1. Why Brains?

Animals obviously have brains because a brain imparts a survival advantage. Not only does a brain such as the human brain impart a survival advantage; the various stages of evolution that neural systems have gone through must also have helped the organism survive.

1.1 What Are the Advantages of Neurons?

If one studies very primitive creatures, such as the hydra, one finds they have simple networks of undifferentiated neurons. The survival advantage of neurons to such a creature is that information about its environment can be transmitted quickly to all parts of the creature. For instance, if a small particle touches one part of the hydra, the rest of the hydra can respond quickly and close the tentacles before the food gets away. In contrast, a Venus Fly Trap uses chemical diffusion to detect a fly touching its sensor hairs and thus closes rather slowly.
The other advantage of neurons is that their pulsing nature makes the strength of the signal (pulse-frequency encoded) fairly independent of temperature, sugar levels, and oxygen.

1.2 Abstractions of the Environment

As one looks at slightly more complex organisms one finds the neural networks begin to specialize into several types of neurons. Organisms such as the planarian use neurons to directly arrive at a more abstract representation of their environment and to respond in pre-programmed ways. (The planarian has some primitive learning too, but is used here as an example of a pre-programmed organism.) For example, a group of neurons could convert “the touch sensors are being stimulated on the left side” to “there is danger from the left side”. This primitive abstract mapping could then stimulate a reflex movement of the planarian toward the right, and thus escape being eaten. This ability to have hardwired (unchanging) abstractions and hardwired reflexes imparts a survival advantage.
Another reason for neurons that use pulse frequency encoding also becomes apparent at this level. If a group of neurons is not pulsing, it appears not to exist. If a small genetic mutation causes some new neurons to be created in some rather random fashion, there is a high probability that the organism will still be able to function: when the new neurons are not pulsing they appear not to exist, and the organism acts normally. When the new neurons are active the organism will exhibit some new behavior that may or may not impart a survival advantage. This need for neural hiding in incremental designs has been shown with a program called NuTank; see Appendix A for details of the NuTank program and experiment. With neural hiding a neural system can mutate with some finite probability that the mutation will not completely destroy the function of the neural system. This is very different from current computer programs, where a single bit error almost always results in a fatal program crash.

1.3 Subsumption -- A Simple Idea to make Evolvable Complex Systems

Such a system where inputs are abstracted or transformed to new representations of the environment, and the abstractions are then converted to low level output stimuli, is called a subsumptive architecture [4]. Each of the abstractions of the environment in a subsumptive system is called a concept map.
Figure One shows a cutaway view of such a system with one slice showing possible concept maps. Sensory neural inputs come in at the bottom left, and neural outputs to muscles and such come out the bottom right. Note that in biological systems the upward-traveling signals are often spatially intermingled with the downward signals; the diagram separates them spatially for clarity. Biological systems seem to have two-dimensional concept maps, and three-dimensional concept volumes.
As the input signals are propagated upward the inputs from lower layers are transformed into representations of the conditions or attributes of the organism’s surroundings. This results in neural maps of abstract concepts (concept maps or direct concepts)[2]. These concept maps determine what the organism can have a concept of and respond to. An example of a direct concept in the human brain is “red”. The intuitive feeling for the concept “red” is probably manifest by the pattern of neural activity in a single concept map in the visual areas of the brain.
Signal paths in the system may travel part way up, propagate across to the reflex areas and downward. Such a path is called a subsumptive loop or reflex loop. If an upper layer reflex is strongly stimulated it can inhibit lower level reflexes (making them hide) and thus subsume the lower level reflexes.

1.4 Predicting the Future Environment

The second major component of the proposed system is a neural net that can learn by changing its connection weights. This neural net is like traditional teachable neural nets [1]. It does not serve as a direct information transformer as the subsumptive system does. Instead it serves to recognize past patterns, to predict the future state of the environment, and then to stimulate (or inhibit) the concept maps into the predicted future state. The strength of this stimulus depends on how strong the pattern match is. The network is always learning.
Here is an example of why this is a pro-survival algorithm: an organism detects a certain sound and sight combination, setting the concept maps to a certain state, and then is attacked and narrowly escapes. The pattern recognizer has now been imprinted with this pattern to some degree. Later, when a similar pattern of sound and sight occurs, the pattern recognizer detects the similarity and stimulates the concept maps partially into the state they were in just after the pattern occurred (the state of being attacked and fleeing). Now the organism is reacting to a future state of the environment, and this imparts a very big survival advantage. The organism can now learn which environmental states result in danger and react before the actual danger arrives. I have coined the term subsumptive-regular system, or SRS, for this combination of a subsumptive system and a regular or traditional neural network.

One can extrapolate several characteristics of such a system. (See Figure Two)
The pattern matcher (I theorize that this is what the cortex is) co-evolves with the subsumptive system. This co-evolution causes the subsumptive system to evolve as many abstractions of the environment as possible, and to minimize reflexes, since the cortex supplies learned reflexes instead. Thus most of the human brain would be dedicated to input abstraction, with relatively few hard-wired reflexes. External stimuli will cause the subsumptive system to respond to those stimuli, and strong pattern matches from the cortex will cause the subsumptive system to enter a state imposed by the cortex. There would be a fuzzy boundary, moving up and down, between where the subsumptive system is representing the real environment and where it is representing the predicted environment. Thus at times much of the brain can be “thinking” about an “imagined” situation instead of the current real environment.
  • There is a very interesting behavior implied in this design. The system can run a repeated cycle: do a pattern match on the current state of the subsumptive system, stimulate the subsumptive system into the predicted next state (predicted from the sum total of all past experiences), and repeat. This is what one calls a “train of thought” or “cognition”. With such a behavior one can make very complex predictions of the future state of the environment. Such a cycle would not be synchronized in the SRS but would consist of any number of asynchronous cycles. (A control-flow sketch follows this list.)
  • Memory is manifest in such a system. In a digital computer to remember something means to transfer data from disk to memory. In an SRS to remember something means to put the subsumptive system partially into the state it was in at some previous time.
  • Such a system would also be able to work with concepts that are not directly represented in one of its conceptual maps. Instead, many indirect concepts would be managed as chains and blends of direct concepts. For example, the human brain manages to think about money, yet there is no single concept map in the brain related to money. Instead money is a mixture of hundreds of direct concepts that blend into the patterns we associate with money.
  • The learning rate of the cortex can be very low and still result in immediate learning and memory. If 1% of 100 million neurons change by 0.01% when a significant external event occurs, this is equivalent to 100 neural connections changing by 100%. Such a change would result in an immediate pattern match on similar situations.
  • Such a change probably consists of an immediate chemical change followed by cellular growth. As the chemical change fades the growth would make the “memory” permanent.
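
    As promised above, here is a control-flow sketch of the pattern-match / stimulate cycle. Both subsystems are reduced to stubs, only the loop structure is meant literally, and all names are mine:

        class TrainOfThought
        {
            double[] state = new double[128];  // current concept-map activity

            void Run(int cycles)
            {
                for (int i = 0; i < cycles; i++)
                {
                    // The cortex matches the current state against all
                    // past experience and produces a predicted next state.
                    double[] predicted = MatchPastPatterns(state);
                    double confidence = 0.5;  // stand-in for match strength

                    // Stimulate the subsumptive system toward the prediction.
                    for (int j = 0; j < state.Length; j++)
                        state[j] += confidence * (predicted[j] - state[j]);
                }
            }

            // Stub standing in for the whole cortex.
            double[] MatchPastPatterns(double[] s) => (double[])s.Clone();
        }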


    2. A New Design and Programming Algorithm

    With the previous concepts one can now define a new way of programming and designing a cognitive system.

    2.1 Set A Goal

    To achieve an artificially cognitive system one must first set a goal to imitate some functions of a biological SRS. For example one might set the goal to create a program in a robot that exhibits much of the behavior of some biological system.
    The very lofty goal of a cognitive system that can carry on a meaningful human conversation would require creating a system that has conceptual maps covering most of the conceptual maps in the human brain and is able to update them in real time. This is beyond the capabilities of current computers. One might instead create a system that lacks many of the concept maps humans have but has very good mappings of audio stimulus, and could at least talk about limited subjects.
    A very realistic experiment would be to build a system with a single channel of video input, a single robotic arm, an audio spectrum analyzer input, and a synthesizer output. Such a system could be endowed with several hundred direct concepts. One could then put the system in a rich environment and see if it develops learned behavior. This would be an interesting system where one could test various combinations of cortex complexity and the strength of connection of the cortex to the subsumptive system.

    2.2 Designing the Subsumptive System

    Next one would need to design the computer based tools to represent the subsumptive design with a useful notation. One would also need other support tools for visualizing the resulting operational system activity, and for debugging. Next one would start at the bottom of the system and develop the subsumptive system from simple concepts to more abstract concepts. This process could be directed by what we know about biological brains [2] and by what direct concepts are desired in the final system. The design could proceed in stages: develop a simple system and test it, then add new concept maps.

    2.3 Add the Cortex

    The cortex could be generated with a simple algorithm:
    Randomly choose a neuron in the subsumptive system with the probability skewed toward choosing neurons in the more abstract parts of the subsumptive system. This neuron is the learning neuron’s target. Next make some number of connections randomly to nearby neurons, with diminishing probability with distance from the target neuron, and initialize the weight of the connection to zero.
    By “distance” one means conceptual distance. For example, vision processing functions are “close together”, while vision and hearing might be “far apart”. Also, low level concept maps are far from very abstract concept maps. The cortex neurons could also connect to other cortex neurons. The design tools for the subsumptive system would need some notation of coordinates for each map so its “distance” from another map could be determined.
    In the brain the cortex is separate from the subsumptive regions. This may be so the cortex neurons can connect with each other and arrive at second, third and higher order solutions for pattern matches.
    It would be interesting to experiment with truly random connectivity such that the connection probability is global to the system. One also might experiment with deleting connections that have zero weight for long periods of time (days, weeks, months) and creating new connections randomly. This might emulate the connection growth that happens in an infants brain.
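
    A sketch of this wiring algorithm (the skew and fall-off functions are my own choices; the text only says “skewed” and “diminishing with distance”):

        using System;

        class CortexBuilder
        {
            Random rng = new Random();

            // Pick a target neuron, skewed toward the abstract (high) indices.
            public int PickTarget(int neuronCount)
            {
                return (int)(Math.Sqrt(rng.NextDouble()) * neuronCount);
            }

            // Make 'fanout' connections to the target, preferring neurons
            // at small conceptual distance, with weights initialized to zero.
            public void Wire(int target, int fanout, int neuronCount,
                             Func<int, int, double> distance,
                             Action<int, int, double> addConnection)
            {
                int made = 0;
                while (made < fanout)
                {
                    int src = rng.Next(neuronCount);
                    if (src == target) continue;
                    if (rng.NextDouble() < Math.Exp(-distance(src, target)))
                    {
                        addConnection(src, target, 0.0);  // weight starts at zero
                        made++;
                    }
                }
            }
        }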

    2.4 Teach the System

    Next one must let the system develop learned behavior. In a human, starting from scratch until a conversation can be held takes about four years of learning. An animal such as a dog learns most of its behavior in about a year. More primitive creatures have much less learned behavior and so have shorter learning times. An artificial SRS might have a much shorter learning time, since it could be awake 24 hours a day and might have a much more perfect memory.
    One would want to be able to dump the entire state of the system to permanent storage, add new concept maps, and reload the previous system back in. This would allow previous learning to be carried over into the more advanced system. This is far different from how biological systems progress. Biological systems depend on the next generation to mutate and then start learning from scratch.

    2.5 Proof of the Theory

    This paper has proposed a single central theory: That a subsumptive neural system coupled with a semi-randomly connected, teachable, neural net will result in cognitive behavior similar to what appears to happen in biological brains.
    The evidence that the theory is worth pursuing is intellectual. If one follows the logical path along which brains have evolved, one arrives at the SRS theory. This is a different approach from the “let’s simulate some cognitive functions” approach that created rule-based systems.
    The other indication that the SRS theory is worth testing is that once one understands the theory one can do a mental simulation of how the algorithm would behave and see that the behavior would be similar to the thought process. The SRS theory also makes much of human behavior obvious. See appendix B for a discussion on human behavior.
    The proof of the SRS theory would be to actually build a reasonably complex system and have it behave as a biological system. The ultimate proof would be to build a system that was not explicitly “programmed” to talk and yet could hold a conversation. This second proof of holding a conversation is very homocentric. Such a system might pass the Turing Test, but the validity of the Turing Test is debatable. It might be possible to create an SRS that can talk yet has very limited conceptual maps. This might be possible on a current super computer. As stated in the goals section above, a good short term proof would be to attach a computer to a simple robot and see if learned behavior emerges.

    2.6 Possible Applications

    If this theory proves out, the result would be robotic systems that can behave as biological systems do in natural environments such as woods, jungle, deserts, and oceans. The systems could be given behaviors and sensors that are beyond natural biological systems. If the system does not need to reproduce, and its food is gasoline, then its behavior could be much more focused than a biological system's. Such a system would be a very good house cleaner, explorer, or war machine. It would probably not be a very good factory worker, unless it could achieve human cognitive levels.

    3. Appendix A - The NuTank Program

    NuTank stands for neural tanks. The program provides a neural editor, a drawing editor, and a two dimensional environment simulator. NuTank is a DOS VGA program. The idea is to design a “brain” for a tank or beast with neurons and see how it reacts in the simulated environment. The tanks can have many inputs such as optical sensors, whiskers, smell, and hearing. The outputs are sound emitters, smell emitters, and the wheels. The tanks can also have jaws that can bite things.
    The theory that a subsumptive design allows for easy incremental modification of the system was well proven while designing the brains in tanks. One could concentrate on behaviors and not worry too much about how they interact. At a later time one could decide to add a new behavior, and not need to modify the lower level behaviors.
    The NuTank program’s neural editor is much like an electronic schematic editor, with blocks representing direct concept maps and “wires” between blocks to represent connections. This method of designing a neural subsumptive system has proven to be a poor choice. It appears that a much better design would be a text style editor and a textual representation of each concept map. This would greatly increase the information density of the screen when designing systems, and would localize relevant information. The current “schematic” style design requires too much panning of the screen to trace connections.
    The entire NuTank program is currently being rewritten from scratch as a MS Windows (TM) program.

    4. Appendix B - Human Behavior

    Given that the human brain is a subsumptive system with a pattern matching cortex (actually the human brain appears to be at least two SRSs nested one inside the other), one can see why people have mental problems. The following examples are a bit simplistic; like Figure One, human psychology is made of many intermingled and complex parts.
    Let’s imagine someone who was hurt as a child. Now they are an adult and the same situation would not be harmful. The person can easily go through the chain of thought and recognize that such a situation is not now harmful. Still, when a similar situation occurs, the cortex gets a very strong pattern match on the situation and stimulates the related concept maps into the previous state of extreme fear. The person knows it makes no sense, yet feels fear and panic and may act irrationally. This is because the primitive emotional response is not affected by the weaker “chain of thought” that requires several cycles of pattern matching and stimulus. Current psychological theories show that the best way to get over such traumas is to get in a calm, controlled environment and “re-live” the experience in one’s imagination. This “re-living” needs to be on an emotional level, not just an intellectual level. Now the cortex gets as similar a pattern match as possible, but the expected bad events don’t happen. This replaces the previous pattern with a new positive pattern and thus reduces the impact of the original trauma. If one has experience with psychotic people, one gets the feeling that the psychotic person is reacting to a script that only they know. Indeed they are: they are reacting to previous patterns that are no longer applicable.
    One factor that can be arrived at only by testing in a real SRS is the ratio of connection strengths needed to achieve a balance between awareness of the external world and the imagined world. If there is an imbalance toward too strong a stimulus from the cortex (or weak stimulus between subsumptive layers), then one might have something as mild as Attention Deficit Disorder or as serious as Autism. In the reverse, if the cortex connectivity is weak, then the person’s ability to chain thoughts would be hindered (low IQ).
    A severe trauma could also cause such a strong pattern match--stimulus cycle that the entire system could get stuck in a loop where external stimulus can’t override the cortex pattern match. This would put the person in a catatonic state.

    5. References

    1. Haykin, Simon 1993, Neural Networks, A Comprehensive Foundation, IEEE Press
    2. Konishi, Masakazu April 1993, Listening with Two Ears, Scientific American
    3. Thompson, Richard 1985, The Brain, W.H. Freeman Pub.
    4. Wallich, Paul December 1991, Silicon Babies, Scientific American
    5. Whole Issue, September 1992, The Mind and Brain, Scientific American

    Photon Soup

    Quite a while back, in 1994, I published some images at SIGGRAPH.  Thought I'd put them here so they are not lost.

    First image:

    This image was generated in 1991 by simulating the motion of 29.8 billion photons in a room. The room is 2 meters cubed with a 30 cm aperture in one wall. The opposite and adjacent walls are mirrors, so this is a 'tunnel of mirrors'. The depth of field is very shallow. In the foreground is a prism, resting on the floor. A beam of light emerges from the left wall, goes through the prism, and makes a spectrum on the right wall. About 1 in 177 photons made it through the aperture.

    The image took 100 Sun SPARCstation 1s one month to generate using background processing time. This represents 10 CPU-years of processing time. If the lights are 25 watt bulbs, the image captures a few picoseconds of light.
    This was 'grid computing' way before its time.
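
    For flavor, here is a toy Monte Carlo count in the same spirit: fire isotropic rays from the center of a 2 m cube and count how many exit through a 30 cm square aperture centered on one wall. It ignores the mirrors, the prism, and absorption, so it will not reproduce the 1-in-177 figure; it only illustrates the counting method:

        using System;

        class ApertureToy
        {
            static void Main()
            {
                var rng = new Random(1);
                int n = 1_000_000, hits = 0;
                for (int i = 0; i < n; i++)
                {
                    // Uniform random direction on the unit sphere.
                    double z = 2 * rng.NextDouble() - 1;
                    double phi = 2 * Math.PI * rng.NextDouble();
                    double r = Math.Sqrt(1 - z * z);
                    double dx = r * Math.Cos(phi), dy = r * Math.Sin(phi);

                    if (dx <= 0) continue;          // aperture wall is at x = +1 m
                    double t = 1.0 / dx;            // ray length to that wall
                    double y = dy * t, zz = z * t;  // hit point on the wall
                    if (Math.Abs(y) < 0.15 && Math.Abs(zz) < 0.15) hits++;
                }
                Console.WriteLine($"1 in {(double)n / hits:F0} rays exit the aperture");
            }
        }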



    Photon Soup 2

    This next series was done several years later on much faster machines.

    These were made by simulating the motion of 382 billion photons in a 2 meter cubed room.  There are apertures in the walls that capture photons.  These are stereo pairs. The lighter ones are the full 382 billion photons.  The darker ones are just the prism, to show off the caustics better.

    Front View
    A white beam of light comes in from the wall on the left, hits the prism, and is refracted into a rainbow on the right side wall. BUT... it also is internally reflected, bounces around a bit more, and goes through the clear ball on the right.  If you take the overhead view image that is mostly black below and brighten it way up, you can see all the caustics.



    Top View


    Side View


    Front View - Prism Only


    Top View


    Side View


    Tuesday, September 3, 2013

    Failing at Meditation

    So here I sit trying to meditate.

    The dog wants to play.  Go away, can't you see I'm meditating? Oh, failed again, my mind has wandered off about the dog.

    Back to focusing. Ok, no thought. Hmmm.  It would be so cool to be a Zen teacher. All those students admiring my great mind control.  I could teach them so much about the empty mind and no thought. Failed again, my mind wandered.

    Focus! Just experience what is right in front of me.  Oh, the dog again. The dog is so Zen.  Just right there in the moment.  No thought of the past, future, and no self analysis of the present.  Just being a dog.  Wish I was like that. Ah, failed again.

    Ok, be present in my room. The rug.  The table. The wall. The dog. Just as they are.  I remember a cool talk by Gil Fronsdal about not letting the mind wander. He is so cool.  Wish I could be like that. I bet he's rich too. And gets to meditate all day.  Arrrgh, failed again.

    New approach: watch the failures and stay with them.  Just observe what my mind is doing. Ahh, the dog.  Watch how I perceive the dog. My mind wanders.  Let it.  Watch where it wanders and why.  Then I start to think too much.  Why? What is happening? Ah, what desires are pushing my thoughts? Oh, 'failure'. Why do I think that? Be conscious of the failure and the thoughts and feelings of disappointment at 'failing'.

    "Success".



    Friday, April 19, 2013

    The Rules of Engagement

    For Software Developers

    Some rules of working, in particular in start-up companies...
    • If you are self-employed or on a 1099, ignore any promises of a bonus to make up the tax differences at the end of the year.  Compared to W2 you must make about 20% more to account for medical insurance, self-employment tax, and, in general, risk.  So $50 per hour at 2000 hours per year is not the same as $100,000 per year as an employee; at that 20% figure, you would need about $60 per hour on a 1099 to match a $100,000 W2 salary.
    • Any stock option promises should be discounted to about 10% of what they vaguely quote they will be worth.
    • If you have the meeting... "We are short this payroll, but will get money on Monday and everyone will get paid".  Leave immediately.  If you are not getting paid, do not work even one hour.  It is not about your loyalty to the company, it is about the company's loyalty to you. (Exception: if you own more than 30% of the company then you are the company.)
    • Do not ignore the signs of failure.  There is always the hope that "things are about to get better".

    Monday, August 27, 2012

    Eco-Tripping

    Definition: Eco-Tripping

    To own a 'local' car that is ultra-efficient, perhaps electric, with limited range and speed. When you want to go on a "trip to grandma's", you rent a family van with all the fancy TVs and such for the trip.

    The net result is less money spent, and a HIGHER STANDARD OF LIVING.


    Standard of Living:

    What is meant by wealth and standard of living?  These are the combined sum of how much 'stuff' you get for your day's work at some job.  By 'stuff' we mean material possessions, travel, security (police, fire, safe neighborhood), medical care, entertainment, internet access, recreation, family togetherness, food, clothing, housing, exercise.  Notice that money is not in that list.  Money is a future claim on all the above.

    A standard of living item that Americans cling to very tightly is transportation freedom.  Go where you want, when you want, in style.

    There are two ways to raise standard of living: First is to make more money, and second is to get more stuff for less money.

    The Diesel Rabbit Story:

    My first car way back when was a VW Diesel Rabbit.  It was a freedom machine for me because at the time it went very far on very little money.  Going on a trip somewhere was not a big money decision.  The car was a very tangible increase in my standard of living.

    One thing that lowers standard of living is worry about some activity we are doing or material we are using.  For example, if you have to worry about the cost of gas, global warming, and foreign wars fought over oil, then your standard of living is lower. You tend to have a negative mindset about driving anywhere.

    Large Car Costs

    If you currently have a Ford Expedition and haul the kids around in it, and gas hits $5 per gallon, the car becomes a drag on all trip decisions.  "Maybe we could go to the park, but it would cost too much in gas."  Why haul around such a huge car all the time when you only need it twice a year for long-distance trips?  Not only is the big car a gas hog; repairs, tires, filters, oil changes, and the transmission are all very expensive too. On the flip side, if you look at person-miles-per-gallon, a big car is very efficient when all seats have people in them.  A Hummer at 12 MPG with 8 people in it is 12 x 8 = 96 person-miles-per-gallon!

    Electric Car Costs

    A very efficient electric car (or gas car, for that matter) is less expensive, and less costly to repair.  The exception is some hybrids, where the unique nature of the car pushes repair costs up; that is a temporary condition.  An all-electric car is less expensive to build, repair, and drive. The current lineup of electric cars is expensive because it is new technology. (Remember how expensive flat screen TVs started out?)

    The Electric Car Delusion

    There is a current impression that electric cars are a reduction in standard of living.  This has been fostered by the automotive manufacturers. The problem is obvious: if electric cars cost less to make and have virtually no repairs, then selling the same number of cars per year will result in something like half the revenue.  Not a good business direction.  The fallacy is that cars are going to change to electric either way; the question is just when.  Only Nissan and Toyota have fully embraced this fact and are looking toward a future where they dominate the market. (Side note: electric cars can waaaaay out-accelerate gas cars.)

    Other Similar Topics

    On a similar vein is how we handle the two largest home uses of energy: heat and hot water.  Both use natural gas, and that is completely unnecessary. Solar systems, if mass-produced, should not cost much more to install, and they result in complete freedom from the subsequent cost of hot water.  So take a shower as long as you want.  Once again, a higher standard of living on less money.

    Cultural Inertia

    We have a cultural inertia where the way we have been doing things is what everyone knows, and it is difficult to imagine a different way, even if it might be better, more fun, and a 'higher standard of living'.  Instead we are stuck with the tree hugger nightmare where we are all supposed to suffer, trim back, reduce, scrimp and save, drive some eco-car that is no fun, freeze in the winter, bake in the summer, and so on.  It is all wrong.  We need to put our creative powers toward how to live sustainably at a higher standard of living.

    It can be done.
