Tuesday, November 24, 2015

Should robots have their own language?

Rather than trying to teach them English or some other human natural language, Mubin et al. have suggested that it might be best if robots had their own spoken language (What you say is not what you get, Spoken dialog and human-robot interaction workshop, IEEE, Japan, 2009). My AI Asa H has just such a language, as defined in my blogs of 1 Oct. and 5 Nov. 2015.

Thursday, November 19, 2015

Alternate concepts, alternate realities

In some of my work with my AI Asa H I have sought alternate concepts with which to model reality.
See, for example, my blog of  22 April 2013.  Perhaps one way to promote the formation of such alternate models of reality is to give Asa H senses which humans don't have, things like radiation detectors, sonar, etc.  Years ago Eddington presented his "two tables" paradox.  He noted that we have a concept like "table", something that is continuous, colored, and solid when sensed with human fingers and eyes.  But he argued that this same object would be mostly colorless empty space when observed via the scattering of, say, an electron beam.

Wednesday, November 18, 2015

A concept of height

Touch sensors on the robot's head versus on the robot's base can define a difference in height. A Vernier barometer raised and lowered by as little as a few feet can detect and define a height change. A hill can be defined by the pattern of altitude change as a robot climbs and descends it. A gyro sensor can detect the accompanying changes in inclination of the robot ("pitching"). GPS sensors can also give altitude information, but they are much less sensitive.
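The barometric height change mentioned above can be sketched in a few lines. This is a minimal illustration, not Asa H's actual code: I assume pressure readings in hPa and use the near-sea-level approximation that pressure drops about 0.12 hPa per meter of altitude gain, so a change of a few feet (roughly a meter) produces a shift of about 0.1 hPa, within the resolution of a good barometer.

```python
# Sketch: estimating a height change from two barometric pressure readings.
# Assumes pressure in hPa; near sea level, pressure falls by roughly
# 0.12 hPa per meter of altitude gained (hypsometric approximation).

HPA_PER_METER = 0.12  # approximate near-sea-level pressure gradient

def height_change(p_start, p_end):
    """Estimated altitude change in meters (positive means the robot rose)."""
    return (p_start - p_end) / HPA_PER_METER

# A pressure drop of 0.24 hPa corresponds to roughly a 2 m climb:
print(round(height_change(1013.25, 1013.01), 1))  # → 2.0
```

A robot could threshold this estimate to define "raised" and "lowered" events, and the time series of such events as it climbs and descends gives the hill pattern described above.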

Wednesday, November 11, 2015

Asa H vocabulary growth

After the work described in my blogs of 1 Oct. and 5 Nov. 2015 the next most logical step might be to try to teach Asa H the 1000-3000 most commonly used words in English. Besides making it easier for Asa to communicate with and learn from humans, a larger vocabulary means you know more concepts and can make finer distinctions between the patterns you observe.

I have never been good at languages.  I may not be the best person to do this work.

When presented with unrestricted real world input Asa has always learned some concepts that I have been unable to name (i.e., attach human labels to). Humans may also have such unnamed concepts in their heads. These could be what is active when we have a hunch or experience intuition. Could this account for some "psychic" phenomena?

Thursday, November 5, 2015

Studying the concepts that Asa H learns

My artificial intelligence Asa H can be presented with quite complex spatial-temporal input patterns and then learns a hierarchical representation which is many layers deep (i.e., deep learning). In that case, even if I watch the activation that is transmitted up the levels of the hierarchy, I typically cannot name/identify all of the concepts that are being formed/taught.
I am now trying to present a more organized curriculum for Asa H to learn from.  I want to be able to identify as many of the concepts Asa learns as possible. This should also help us to teach Asa human language.

Using the methods described previously (see for example my blog of  1 Oct. 2015) I have given "level 1" of Asa H the concepts:

far, near, hit/strike front, hit/strike back, hit/strike left, hit/strike right, hit/strike top, hit/strike bottom, touch hand/gripper, say, time, taste, smell/smoke, light, arm left, arm right, arm up, arm down, hand/gripper open, hand/gripper close, rotate gripper cw, rotate gripper ccw, location, temperature, black, red, green, blue, yellow, orange, purple, eye, food/energy/charge/voltage, eat/current, ground/floor, wall, hear/sound, wind/air current, bump, rotate/turn body left, rotate/turn body right, magnetism, pain/breakage, mouth contact, move body forward, move body backward, age, line, square, circle, triangle, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, sense dock.

By presenting the robot with simple (real or computer simulated) activities I have given level 2 of Asa H the concepts:

letter, number, shape, hot, cold, see, surface, collision, north, south, east, west, side, old, young, piece, inside, color, arrive, leave, dark, wait/stay, dead servo, grasp, hard, soft, stop/end, drop, need, turn/rotate, fast, slow, name, path, kick, home, left, right, front, back, top, bottom, body, hunger, control, arm.

By presenting the robot with more (and more complex) activities I have given level 3 of Asa H the concepts:

sense/feel, direction, room, damage, tool, take, move, lift, dock.

Level 4 of Asa H has acquired the concepts:

health, carry.

Names can be associated with each of these concepts in their respective case-bases.
It should be noted that a given concept may not always be learned at the same level of the hierarchy (see my blog of 3 June 2013). Rather, this depends upon the senses available to the robot, the activities it has experienced, and their order.
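Associating names with learned concepts, while allowing the same concept to turn up at different levels, might be organized as a label table keyed by level and position in that level's case-base. The data layout and names below are my illustrative assumptions, not Asa H's actual representation.

```python
# Sketch: attaching human-language names to learned concepts.
# A concept is identified here by (level, case index within that
# level's case-base); the same name can be attached at more than
# one level, since a concept need not always form at the same level.

labels = {}  # (level, case_index) -> human name

def name_concept(level, case_index, name):
    labels[(level, case_index)] = name

def lookup(level, case_index):
    return labels.get((level, case_index), "<unnamed>")

name_concept(1, 0, "far")
name_concept(2, 3, "hot")
name_concept(3, 0, "sense/feel")
```

Concepts without an entry come back as `<unnamed>`, which matches the situation described above where some learned concepts resist human labeling.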

Wednesday, November 4, 2015

Self knowledge in Asa H

A robot embodied Asa H has inputs from its physical senses.  On the lowest level in the concept/memory hierarchy Asa feels things like its level of battery charge, temperature, pain/damage, sight, sound, touch, acceleration, etc.
Asa can also have access to internal/software features. On the various levels in the concept hierarchy it can accept as input things like the size of the current casebase, the current learning rate ("L"), how often it is attempting case extrapolation ("skip"), etc. (See my blog of 10 Feb. 2011 for an example of "L" and "skip".) We can also measure, record, and input the time spent in any of Asa's algorithmic processes. Asa can then learn to adjust/optimize any of these quantities. (See my book Twelve Papers, chapter 1, page 15, self monitoring.) In this way Asa can sense its own thought processes. Is this the nature of qualia?
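The self-monitoring idea above can be sketched as an agent that packages its own internal quantities into an ordinary input vector. The quantities "L" and "skip" are named in the post; the agent class, the packaging, and the timing helper are my own illustrative assumptions.

```python
import time

class SelfMonitoringAgent:
    """Sketch of an agent whose internal state is also a sense."""

    def __init__(self):
        self.casebase = []
        self.learning_rate = 0.1   # "L"
        self.skip = 5              # how often case extrapolation is tried

    def introspective_input(self):
        """Internal quantities packaged as just another input vector,
        which can be fed into the concept hierarchy like any sense."""
        return [len(self.casebase), self.learning_rate, self.skip]

    def timed(self, fn, *args):
        """Run an internal process and measure the time spent in it,
        so elapsed time can also be recorded and input as a sense."""
        t0 = time.perf_counter()
        result = fn(*args)
        return result, time.perf_counter() - t0

agent = SelfMonitoringAgent()
agent.casebase.append([1, 2, 3])
print(agent.introspective_input())  # → [1, 0.1, 5]
```

Because these introspective vectors enter the hierarchy like any other input, the agent can learn patterns over its own casebase size, learning rate, and process timings, which is the sense in which it "senses its own thought processes."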

Friday, October 30, 2015


I now have the University of Waterloo's Nengo spiking neural network software package running in my computer lab.