Thursday, August 28, 2014

Distraction and focus of attention again

Each layer of the Asa H hierarchy passes a vector up to the next layer.  Perhaps focus of attention might be obtained in the following way: calculate the average and standard deviation of all the vector components (assuming all components are positive), keep only those components that are "a couple" of standard deviations above the average, delete all other components, and renormalize the vector. Report a zero vector if no components survive this test. (What number should "a couple" really be? Should it vary?) I plan to try this on Asa H.
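The filtering step above can be sketched as follows; this is only an illustration of the idea, with the function name and the default of 2.0 for "a couple" being my own choices, not anything from the Asa H code:

```python
import math

def focus_attention(v, k=2.0):
    """Keep only the components of v (all assumed positive) that are at
    least k standard deviations above the mean; zero the rest and
    renormalize.  Returns the zero vector if nothing survives."""
    n = len(v)
    mean = sum(v) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in v) / n)
    kept = [x if x >= mean + k * std else 0.0 for x in v]
    norm = math.sqrt(sum(x * x for x in kept))
    if norm == 0.0:
        return [0.0] * n          # no component survived the test
    return [x / norm for x in kept]
```

With one component far above the rest, only that component survives; with components spread evenly, nothing may clear the cut and the zero vector is reported.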

Wednesday, August 27, 2014

A separate training phase in Asa H

We can give Asa H distinct training and performance stages by altering thresholds like Th2 (line 75 of my code in the blog of 10 Feb. 2011). A casebase can be recorded while using one value of Th2 (with the code from the 26 Aug. 2013 blog) and then employed by an agent using a different value of Th2 (and possibly of other thresholds).
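A minimal sketch of the idea, assuming Th2 acts as a match threshold (the names and numeric values here are illustrative, not taken from the actual code):

```python
# Hypothetical sketch: "Th2" stands in for the match threshold on line 75
# of the 10 Feb. 2011 code; names and values are illustrative only.

def firing_cases(similarities, th2):
    """Return the case similarities that exceed the threshold Th2."""
    return [s for s in similarities if s > th2]

TH2_TRAIN = 0.5    # permissive: record many cases during training
TH2_PERFORM = 0.8  # strict: reuse only strong matches in performance

sims = [0.6, 0.9, 0.75]
recorded = firing_cases(sims, TH2_TRAIN)    # all three cases recorded
replayed = firing_cases(sims, TH2_PERFORM)  # only the strongest match reused
```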

Friday, August 22, 2014

Specialist AIs

Asa H can be trained in an area of expertise and the resulting casebase/knowledgebase saved to an external drive (see, for example, my blog of 26 Aug. 2013). I have a 4 terabyte drive for this purpose.  Such specialty knowledge can be organized much like the Dewey decimal system or the Standard Industrial Classification.
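One way such an organization might look, as a hedged sketch: each specialty casebase is saved under a Dewey-style class code, with a directory standing in for the external drive. The file layout and field names are my own illustration, not the actual Asa H storage format:

```python
import json
import os
import tempfile

def save_casebase(root, class_code, casebase):
    """Save a specialty casebase under its classification code."""
    path = os.path.join(root, class_code + ".json")
    with open(path, "w") as f:
        json.dump(casebase, f)
    return path

def load_casebase(root, class_code):
    """Retrieve the casebase filed under the given classification code."""
    with open(os.path.join(root, class_code + ".json")) as f:
        return json.load(f)

drive = tempfile.mkdtemp()   # stand-in for the external drive
# 530 is the Dewey decimal class for physics.
save_casebase(drive, "530", [{"case": [0.1, 0.9], "utility": 1.2}])
physics = load_casebase(drive, "530")
```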

Friday, August 15, 2014

The Asa H value hierarchy

The values assigned to Asa H cases may vary from one level in the hierarchy to another. At the lowest level(s), case length and how often the case is seen to recur are valued (see, for instance, Asa H 2.0 light in my blog of 10 Feb. 2011).  At the highest level in the hierarchy, agent lifespan and number of offspring (disk copies) may be what is most highly valued (see, for instance, my paper, Trans. Kan. Acad. Sci., vol. 109, no. 3/4, 2006).
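The level dependence might be sketched like this; the particular formulas and field names are illustrative only, not taken from the Asa H code:

```python
# Hypothetical sketch of level-dependent case values in the hierarchy.

def case_value(level, top_level, case):
    """Value a case differently at low levels versus the top level."""
    if level < top_level:
        # Lower levels: longer, more frequently recurring cases are valued.
        return len(case["sequence"]) * case["recurrences"]
    # Top level: agent lifespan and number of offspring (disk copies).
    return case["lifespan"] + case["offspring"]

low = case_value(0, 3, {"sequence": [0.2, 0.5, 0.3], "recurrences": 4})
top = case_value(3, 3, {"lifespan": 10, "offspring": 2})
```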

Ensemble learning with Asa H

Various Asa H experiments have employed ensemble learning.  Perhaps the simplest averages the outputs of two or more individual Asa H agents.  These may, for instance, use different similarity measures or have been trained separately. Ensemble learning is also possible within a single Asa agent.  The N best case matches can be followed, for example, and the output generated by voting, averaging, interpolation, or the like.  The individual outputs can be weighted by the degree of case match and by case utility. Again, as a rule, groups make better decisions than individuals do.
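The within-agent variant described above can be sketched as a weighted average over the N best matches; the tuple layout and weighting (match degree times utility) are my own illustration of the scheme:

```python
# Hypothetical sketch: follow the N best case matches and average their
# predicted outputs, weighted by match degree times case utility.

def ensemble_output(matches, n_best):
    """matches: list of (match_degree, utility, output_vector) tuples."""
    top = sorted(matches, key=lambda m: m[0], reverse=True)[:n_best]
    weights = [degree * utility for degree, utility, _ in top]
    total = sum(weights)
    dim = len(top[0][2])
    return [sum(w * vec[i] for w, (_, _, vec) in zip(weights, top)) / total
            for i in range(dim)]

out = ensemble_output(
    [(0.9, 1.0, [1.0, 0.0]),
     (0.8, 1.0, [0.0, 1.0]),
     (0.1, 1.0, [0.5, 0.5])],
    n_best=2)
```

Voting or interpolation could replace the averaging step without changing the overall structure.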

Thursday, August 14, 2014

Granular computing and Asa H

Asa H can be considered to be a project in granular computing (see, for example, Y. Y. Yao, Proc. 4th Chinese National Conf. on Rough Sets and Soft Comp., 2004), a field Yao describes as "interpreted as the abstraction, generalization, clustering, levels of abstraction, levels of detail, and so on."

Tuesday, August 12, 2014

Big data and artificial intelligence

It has been suggested that big data may be the key to a strong artificial intelligence (see, for example, "AI gets its groove back" by Lamont Wood, Computerworld, 14 April 2014).  In the 1980s, as part of the work on knowledge-based expert systems, it was common to hear the claim that "you can't be intelligent without knowing a lot."

Certainly big data may offer an environment in which humans find themselves at a disadvantage again. Currently some environments are easier for humans (natural language conversations for example) while some are easier for computing machinery (pocket calculators for example).

Along these lines, over the last couple of years I have been slowly increasing the volume and rate of data flowing into my various Asa H AI experiments.