This Monday at 4:00 pm in ACES Auditorium, Robert Hecht-Nielsen will speak at FAI. I haven’t met Robert, but I understand he is an entertaining and provocative speaker. His talk certainly seems provocative — basically a unified theory of cognition based on associative memory between vector quantizers. Right up my alley! The info is below.
Robert Hecht-Nielsen
University of California, San Diego
This talk will give an overview of the author’s recently published theory of the cerebral cortex (in: Hecht-Nielsen, R. and McKenna, T. [Eds.] (2003) Computational Models for Neuroscience, Springer-Verlag) and discuss some of its startling implications for neuroscience, AI, and philosophy. Mathematically, the theory views the cortical surface as a collection of about 120,000 notional vector quantizers, which are organized and frozen at various points during childhood. These provide a fixed set of terms of reference so that knowledge can be accumulated. Cortical knowledge takes the form of a vast number of pairwise unidirectional links between tokens in these quantizers’ codebooks, with each link having a strength directly related to a particular conditional probability (the antecedent support probability of the source token given the presence of the target token).

How this knowledge is used to carry out thinking is explicitly explained, and a related local-circuit neuroscience prediction of the theory is described. This talk will show how this weird design, while inherently incapable of directly carrying out any sort of ordinary reasoning (i.e., Aristotelian logic, learned neural network input-output mappings, Bayesian inference, fuzzy logic, etc.), is nonetheless able to arrive at excellent conclusions. Further, unlike existing AI reasoning schemes, the amount of antecedent support knowledge required by cortex, while large, is not combinatorially explosive and can be feasibly obtained during childhood. While scientific testing of this theory is probably a long way off, the theory can be put into service immediately as a new mathematical foundation for AI.
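To make the link-strength idea concrete, here is a toy sketch (mine, not from the talk or the book): each quantizer region has a fixed codebook of tokens, link strengths are antecedent support probabilities p(source | target) estimated from co-occurrence counts, and a conclusion is drawn by picking the candidate token best supported by the assumed tokens. The product-of-supports decision rule and all the data below are illustrative assumptions.

```python
from collections import Counter

# Hypothetical "experience": (source-region token, target-region token) pairs.
# In the theory, each region's codebook is frozen in childhood; here the
# codebooks are just the token strings that appear in the data.
experience = [
    ("rain", "wet"), ("rain", "wet"), ("rain", "wet"),
    ("rain", "cloud"), ("sun", "cloud"),
    ("sun", "dry"), ("sun", "warm"),
]

pair_counts = Counter(experience)
target_counts = Counter(t for _, t in experience)

def antecedent_support(source: str, target: str) -> float:
    """Strength of the directed link source -> target: p(source | target),
    estimated from co-occurrence counts."""
    if target_counts[target] == 0:
        return 0.0
    return pair_counts[(source, target)] / target_counts[target]

def conclude(assumed_sources: list[str], candidates: list[str]) -> str:
    """Pick the candidate token whose product of antecedent supports from
    the assumed tokens is largest (an assumed decision rule, for
    illustration only)."""
    def score(c: str) -> float:
        s = 1.0
        for a in assumed_sources:
            s *= antecedent_support(a, c)
        return s
    return max(candidates, key=score)

print(antecedent_support("rain", "wet"))                  # 1.0
print(conclude(["rain"], ["wet", "dry", "warm", "cloud"]))  # wet
```

Note that no logical inference happens anywhere: the "conclusion" falls out of simple lookups of pairwise link strengths, which is the flavor of reasoning-without-logic the abstract describes.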