Wow, for the past ten years I've been telling everyone who will listen that A.I. should investigate exactly this kind of learning!
I have come to believe that the key to the kind of condensed representations you call 'invariant representations' is modelling the probabilities of interaction sequences. This boils down to having the computer record what information comes in (perceptions) and what goes out (actions), and then look for stochastic patterns that describe what comes in, possibly as a result of what goes out.
These patterns can be used to form expectations about the future (as well as interpretations of the past). Many different possible paths into the future can be evaluated in parallel, and the next action of the learning system can be chosen to maximize the likelihood of adding or revising a pattern. That means the learning algorithm isn't pursuing some externally given, preprogrammed goal; it is autonomously trying to learn.
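The idea above can be sketched in a few lines of code. This is only a toy illustration of my own devising, not the LExAu implementation: a learner that records (perception, action, perception) transitions, turns the counts into expectations about what comes next, and picks the action whose outcome it knows least about, so that acting serves learning rather than an external goal. All names here (`SequenceLearner`, `record`, `expectation`, `choose_action`) are hypothetical.

```python
from collections import defaultdict

class SequenceLearner:
    """Toy sketch (not the LExAu code): learning expectations from
    interaction sequences by counting observed transitions."""

    def __init__(self, actions):
        self.actions = actions
        # counts[(prev_perception, action)][next_perception] -> occurrences
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, prev, action, perception):
        """Store one observed transition."""
        self.counts[(prev, action)][perception] += 1

    def expectation(self, prev, action):
        """Expected distribution over the next perception,
        estimated from the recorded frequencies."""
        outcomes = self.counts[(prev, action)]
        total = sum(outcomes.values())
        if total == 0:
            return {}
        return {p: n / total for p, n in outcomes.items()}

    def choose_action(self, prev):
        """Pick the action whose outcome is least well known
        (fewest recorded observations), i.e. act in order to learn."""
        return min(self.actions,
                   key=lambda a: sum(self.counts[(prev, a)].values()))
```

For example, if the learner has seen 'toggle' turn 'dark' into 'light' twice but 'wait' only once, `expectation('dark', 'toggle')` returns `{'light': 1.0}` and `choose_action('dark')` prefers 'wait', the less-explored option. A serious version would of course need variable-order patterns and a way to evaluate many future paths in parallel, as described above.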
Therefore the name of my A.I. project is: Learning Expectations Autonomously (LExAu).
For my view of the possible implications (in the long term) for the evolution of Mind see:
For more down-to-earth stuff, see the home page of the project.
I am looking for resonating minds, so anyone who still has confidence in A.I. and who is prepared to think outside the 'either logic or neural nets' box: please contact me using the e-mail form on the LExAu site.
Thanks for all the really interesting and helpful material on your site! I started tuning myself to the LOA only two weeks ago and it has already changed my life completely (for the second time in a few weeks, so I must have been ready for something).