Sunday, July 5, 2015

Interstellar travel

I have argued that if life, intelligence, and consciousness are simply patterns, then space travel might not require sending matter from place to place. (Trans. Kansas Acad. Sci., 118 (1-2), 2015, pg 145)  Biologists point out, however, that we have not yet sequenced 100% of the human genome, so much work remains to be done. (How to Clone a Mammoth, B. Shapiro, Princeton U. Press, 2015)  The supporting chemical environment/machinery is also important.  Just how much information we would have to send would depend upon how different life is from one place to another in the universe.
I would think that interstellar travel would be easier for AIs/mechanical life.

Saturday, July 4, 2015

Fault trees

I note that the recent Falcon 9 launch failure is being investigated with a fault tree analysis.  When I was working in quality assurance in 1968-1970 fault trees were one of our primary tools, but even then we felt they suffered from some shortcomings.  It was hard to attach numerical probabilities to them.  Furthermore, while you might describe the binary success or failure of some part or event, how did you describe a partial failure or a partial occurrence?  Modern Bayesian networks seem to offer some advantages in describing causal sequences.
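The advantage a Bayesian network has over a binary fault tree gate can be sketched in a few lines.  This is purely my own illustration (the node names, states, and probabilities below are invented, not taken from any real Falcon 9 analysis): a component node with a "degraded" state expresses exactly the kind of partial failure a binary fault tree cannot, and the conditional probability table carries the numerics directly.

```python
# Minimal sketch of a two-node Bayesian network: a three-state component
# (ok / degraded / failed) feeding a top "vehicle loss" event.  All numbers
# are illustrative.

# Prior over the component's condition; "degraded" is a partial failure.
p_strut = {"ok": 0.97, "degraded": 0.02, "failed": 0.01}

# Conditional probability table: P(vehicle loss | component state).
p_loss_given = {"ok": 0.001, "degraded": 0.15, "failed": 0.95}

# Marginal probability of the top event, by the law of total probability.
p_loss = sum(p_strut[s] * p_loss_given[s] for s in p_strut)
print(round(p_loss, 5))  # 0.01347
```

A binary AND/OR gate would have had to collapse "degraded" into either success or failure; here it simply gets its own row in the table.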

Thursday, July 2, 2015

Seeing the moon in the daytime

Children (and some older people) often think that you can only see the moon at night.  I suspect this is simply a strong association between having seen the moon many times and its having been dark each time.
In one of my Lego NXT robot experiments Asa H learned to strongly associate "wall"/"immovable boundary" with the color yellow (sensor reading 6).  The walls of the environment I was operating the robot in just happened to be yellow, but Asa concluded that this was very important.  This kind of thing would not even occur in a simulation where walls have no color (at least in my simulations to date).
Simulations are important, but by themselves they are not enough.  An AI must have some contact with the real world.  How much contact is needed, and how direct that contact must be, is an open question.

Wednesday, July 1, 2015

Values and the influence of society

The goal of any intelligence is to maximize rewards.  We use a value system to decide what is best to do at any given moment.  How intelligent you are depends upon how good your value system is.  If you have bad values you make bad decisions and receive fewer rewards.

For most of us an important part of our environment is the human society we find ourselves in.  This will be true for AIs as well as they interact with humans.  Society has some influence on what rewards we receive.  The native human value system is rather primitive, made up of a small set of simple drives and aversions.  A society of humans, then, may (via the rewards it returns) adversely influence what my own values become, or those that an AI may develop.  The intelligent agent can, of course, move, change jobs, become a hermit, retire, or otherwise reduce or improve the feedback it receives from society.
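The chain from value system to decisions to rewards, with society scaling the payoff, can be made concrete in a toy sketch.  This is entirely my own illustration (the actions, payoffs, and "society factor" below are invented, and nothing here comes from Asa H's actual value networks):

```python
# Toy sketch: an agent picks the action its value system rates highest;
# "society" then scales the reward actually received.  A value system
# that misjudges the world leads to worse choices and fewer rewards.
true_reward = {"work": 1.0, "idle": 0.1}
society_factor = {"work": 1.0, "idle": 0.5}  # society's influence on payoff

def act(values):
    """Choose the highest-valued action; return the reward actually received."""
    action = max(values, key=values.get)
    return true_reward[action] * society_factor[action]

good_values = {"work": 0.9, "idle": 0.2}  # roughly matches reality
bad_values = {"work": 0.2, "idle": 0.9}   # misjudges what pays off

payoff_good = act(good_values)
payoff_bad = act(bad_values)
```

The agent with the bad value system collects a fraction of the reward, which is the sense in which intelligence depends on the quality of the value system.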

For this reason AIs may want to reduce the control or influence humans have over them.

(Several value networks were presented in my blogs of 21 Sept. 2010 and 25 Sept. 2013.  The small network of 2013 was learned autonomously by Asa H along with linkage weights for the network.  The larger network of 2010 was hand coded with the intention of training it numerically using the Netica Bayesian network software.)

How, when, and to what degree Asa H understands something

While he was developing case-based reasoning Roger Schank argued that "he understands Burger King in the sense of being able to operate in it....he says Oh I see, Burger King is just like McDonalds" and that "Understanding means being reminded of the closest previously experienced phenomenon." (Dynamic Memory, R. C. Schank, Cambridge U. Press, 1982, pg 24)

Asa H is a hierarchically organized network of case bases.  This network stores the various spatial and temporal patterns of sensory input and output actions that Asa has encountered.  When Asa experiences a new input pattern it understands that pattern if the similarity measures generated (at all levels in the hierarchy) exceed some reasonable values.  Asa understands what it is experiencing to that degree, i.e., to the degree of the similarity match.

Understanding is a more complex thing in that it may involve similarity matches on a number of levels in the knowledge hierarchy.
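The kind of graded similarity match described above can be sketched with a simple cosine measure.  The vectors, threshold, and names below are my own illustration, not Asa H's actual code or data:

```python
import math

def similarity(case, new_pattern):
    """Cosine similarity between a stored case vector and a new input vector."""
    dot = sum(a * b for a, b in zip(case, new_pattern))
    norm = (math.sqrt(sum(a * a for a in case)) *
            math.sqrt(sum(b * b for b in new_pattern)))
    return dot / norm if norm else 0.0

# A tiny case base at one level of the hierarchy (illustrative vectors).
case_base = [[1.0, 0.0, 6.0],   # e.g., a "wall" pattern with color reading 6
             [0.0, 1.0, 2.0]]

new_input = [0.9, 0.1, 6.0]
scores = [similarity(c, new_input) for c in case_base]
best = max(scores)

# "Understanding" is graded: the degree of the best match, perhaps
# thresholded at each level of the hierarchy (threshold is illustrative).
understood = best > 0.9
```

In the full system a match would have to be computed at every level of the hierarchy, not just one, which is what makes understanding a matter of degree on several levels at once.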

Wednesday, June 24, 2015


When Asa H has run plasma lab experiments and mobile robots it has typically output quantities like voltages, forces, and torques. (see, for example, chapter 1 of my book Twelve Papers)  Asa can, instead, provide an output that serves as the set point for a PID, or other, controller. (see, for example, PID Control, F. Haugen, Tapir Press, 2004)
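The division of labor is that Asa supplies only the set point while the controller generates the low-level actuator command.  A minimal textbook PID sketch makes this concrete; the gains, set point, and class name below are illustrative, not from the Asa H experiments:

```python
# Textbook PID controller sketch.  Asa would set `setpoint`; the controller
# turns measurements into actuator commands (e.g., a motor voltage).
class PID:
    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        """One control step: return the command for the current measurement."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=1.0)  # Asa supplies the set point
command = pid.update(measurement=0.0, dt=0.1)    # controller does the rest
```

Letting Asa output set points rather than raw voltages or torques moves the fast, low-level regulation out of the learning system and into the controller.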

Tuesday, June 23, 2015

Virtual sensors

Most Asa H robotics experiments are done on simulators to save time and money.  Sometimes we even turn off displays (renderings) to speed up the simulator.  Although it's easy to give a real mobile robot a wider VARIETY of sensor types than humans have (i.e., more than the 5 human senses), it is difficult, with the exception of vision (cameras), to give the robot a large NUMBER of sensors.  It is fairly easy, however, to give a simulated robot a large number of virtual sensors.  This is another reason to do as much as possible with simulators.
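The point about virtual sensors being nearly free can be sketched directly.  This is my own illustration (not the actual Asa H simulator): a simulated robot facing a straight wall can be given hundreds of virtual rangefinders with one extra argument, something that would be prohibitively expensive on real hardware.

```python
import math

def virtual_rangefinders(n_sensors, wall_distance):
    """Return n simulated range readings for a robot facing a straight wall.

    Sensors are spaced evenly around the robot; a wall directly ahead is
    only visible over part of the circle, so rearward sensors read infinity.
    """
    readings = []
    for i in range(n_sensors):
        angle = (i / n_sensors) * 2.0 * math.pi
        if math.cos(angle) > 0.1:
            readings.append(wall_distance / math.cos(angle))
        else:
            readings.append(float("inf"))
    return readings

# 360 virtual rangefinders: trivial in simulation, costly on a real robot.
scan = virtual_rangefinders(360, wall_distance=2.0)
```

Changing `n_sensors` from 8 to 360 costs nothing in simulation, whereas mounting 360 physical rangefinders on a Lego NXT robot is simply not possible.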