Wednesday, April 9, 2014

Twenty years ago today

In looking at some old notes I see that 20 years ago today I was working on 2 AI projects: my semantic network (published in Trans. Kan. Acad. Sci., vol. 102, pg 32, 1999) and a constructive ("growing") neural-network program. (Work on Asa F was to begin about a year later.)

Monday, April 7, 2014

AI better than human again

As a test of Asa H's vision capability (see blog of 13 June 2013) we have input a stream of handwritten numerals like those used for postal zip codes.  After training, Asa H could recognize these numerals with 99.1% accuracy.  Humans read the same codes with 98.8% accuracy.

(Figure: some examples of numerals that are difficult to identify.)

Saturday, April 5, 2014

Artificial intelligence in use today

At a recent conference (where I presented my work on Asa H 2.0) I was again asked when we would have an artificial intelligence. I replied that there are many AIs in use today. Since Asa is, among other things, an example of deep learning, I gave some examples of deep learners that are in use every day: speech recognition in the iPhone's Siri and in Google's Android smartphone software, Google's photo search software, etc.

Friday, April 4, 2014

Rewriting, reformulating, and problem solving

If students can't answer a question posed in words they are often advised to reword the question and see if they better understand what is being asked for in the reformulated version.  This advice also applies to other forms of knowledge representation: mathematics, diagrams, figures, etc.  When working with an electric circuit diagram, for example, it may be useful to lengthen or shorten various wires and to move components and connections around.  This may make it more obvious that several resistors are purely in series or purely in parallel.  In mathematics, if you have a set of simultaneous equations, rather than eliminating variables one at a time it may be useful to rewrite the equations in matrix form and then seek a solution by finding the inverse matrix.
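The matrix reformulation described above can be sketched in a few lines. The particular pair of equations here is my own illustrative example, not one from the post:

```python
import numpy as np

# Two linear equations: 2x + y = 5 and x - y = 1,
# rewritten in matrix form as A @ v = b.
A = np.array([[2.0,  1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

# Solve by finding the inverse matrix.  (np.linalg.solve is the
# numerically preferred route, but the explicit inverse makes the
# reformulation obvious.)
v = np.linalg.inv(A) @ b
print(v)  # x = 2, y = 1
```

The same two equations solved by eliminating variables take several algebra steps; once in matrix form, any number of equations is handled by the same one-line solve.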

Thursday, April 3, 2014


I am reading Mariam Thalos' book Without Hierarchy (Oxford, 2013). Patterns in nature are seen on both large and small spatial and temporal scales.  Life and intelligence are present at large scales; an electron is not alive or intelligent. Science (even physics) is not just about the smallest of things.  PV = NkT is a useful description of behavior at a large scale; the shape of the volume V doesn't matter.  Many different experimental setups might yield exactly the same measurements.
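As a quick numerical check of that large-scale description, the ideal gas law can be evaluated directly; the particular numbers below are my own illustrative choice:

```python
# Ideal gas law: P V = N k T.  Note nothing here depends on the
# shape of the container, only on its volume.
k = 1.380649e-23      # Boltzmann constant, J/K
N = 6.02214076e23     # one mole of particles
T = 273.15            # temperature, K
V = 0.0224            # volume, m^3 (any shape)

P = N * k * T / V
print(P)  # roughly 1.01e5 Pa, about one atmosphere
```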

Wednesday, April 2, 2014


I have been looking at MIT's ConceptNet 5.2.  It is intended to do some of the things I was doing with my associative/semantic networks (Trans. Kan. Acad. Sci., vol. 102, pg 32, 1999). My network ran very slowly (in PROLOG) when the number of associations (the database) reached about 3000.  (In some PROLOG interpreters it crashed.) Several times I have thought of rewriting the program in another language to speed it up, but I've not spent the time needed to do it.
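A rewrite along the lines considered above might start from something like this minimal sketch; the triple format and function names are my assumptions, not the structure of the original PROLOG program:

```python
from collections import defaultdict

# Store associations as (relation, concept) pairs indexed by concept,
# so lookup stays fast even with many thousands of associations.
links = defaultdict(list)

def associate(a, relation, b):
    """Record an association between two concepts, in both directions."""
    links[a].append((relation, b))
    links[b].append((relation, a))

def neighbors(node):
    """Return all concepts directly associated with a node."""
    return links[node]

associate("bird", "isa", "animal")
associate("bird", "has", "wings")
print(neighbors("bird"))  # [('isa', 'animal'), ('has', 'wings')]
```

Because each lookup is a single dictionary access, the database size should not slow retrieval the way a linear scan over PROLOG clauses can.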

ConceptNet was given some verbal IQ tests and came out about equal to a human 4-year-old.

Tuesday, April 1, 2014

Human values again

The Flight 370 story reminds me again that human values are not what they should be, and that an AI can (and should) have better values than humans.  We spend large sums on rescue and disaster relief but relatively little on prevention and infrastructure; more should be spent on prevention.