Lots of stuff I didn't understand today at the IJCAI conference. I won't talk too much about that. However, there were also some very interesting biology-related results. Read on:
The day started off with a keynote by a neuroscientist talking about the brain and what underlying AI models it uses internally. His religious belief was that the only reason we have a brain is to drive our motor system.
Trees don't have brains, because they don't have to move. Sea squirts have small brains to help them swim around the ocean. However, in their lifecycle they eventually attach themselves permanently to a rock. Once attached, the first thing they do is digest their own brain for food. No more movement = no more need for brain = yum.
He went through a whole load of clever experiments he conducted to determine how the human brain learns to do various tasks. It turns out that legs are optimized for efficiency, while arms control their movements using special probability distributions, minimizing noise error and using optimal feedback for maximum smoothness and accuracy. Eyes are also accuracy-optimized and share some of the same processing framework as the legs. Since the brain's processing power is limited, it reuses thinking circuitry wherever appropriate. Sounds like a very well-designed robot to me.
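He didn't put equations on his slides (or if he did, they went over my head), but the classic textbook illustration of what "maximum smoothness" means for an arm movement is the minimum-jerk model of reaching: out of all ways of getting the hand from A to B in a fixed time, pick the one that minimizes the third derivative of position. The little Python sketch below is mine, not the speaker's model, and the 30 cm / 0.8 s reach is just a made-up example.

```python
import numpy as np

def minimum_jerk(x0, xf, duration, steps=100):
    """Minimum-jerk reaching trajectory: the path from x0 to xf that
    minimizes the integrated squared jerk (third derivative of position).
    Produces the smooth, bell-shaped velocity profile seen in human reaches."""
    t = np.linspace(0.0, duration, steps)
    tau = t / duration
    # Closed-form solution of the minimum-jerk optimization problem.
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return t, x

if __name__ == "__main__":
    t, x = minimum_jerk(x0=0.0, xf=0.3, duration=0.8)  # a 30 cm reach in 0.8 s
    print(x[0], "->", x[-1])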
Some researchers were (still) working on the traveling salesman problem. They found some minor optimizations. Ho-hum.
One guy used Apple's Keynote presentation software to give a presentation on "Temporal difference networks with history" (no, I didn't understand it either). However, his presentation looked so much more refined, smooth and professional than all the previous PowerPoint presentations. I was shocked at how much better everything looked. If I ever get accepted to give a talk at one of these important international conferences, I'll definitely get a Mac and present using Keynote.
An Israeli guy presented a clever variation on the A* algorithm based on partial database lookups. He developed a very fast way to solve tile puzzles, Rubik's Cubes and Top Spin games. He can now solve any 3x3x3 Rubik's Cube in 1/50th of a second, where previous brute-force computing methods took 2 days.
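He didn't show code, and I won't pretend I followed all the details, but the basic idea of using a precomputed database of solved sub-problems as a heuristic for A* can be sketched on a much smaller puzzle. The Python below is my own toy reconstruction (8-puzzle, only four tiles tracked), not his algorithm: it abstracts away most of the tiles, measures distances in the simplified puzzle by searching backwards from the goal, and then uses the stored distances as an admissible heuristic during the A* search. As far as I understood it, the clever part of his work is making this scale when even the database is too big to build or store in full.

```python
from collections import deque
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 8-puzzle goal state, 0 is the blank
PATTERN = {1, 2, 3, 4}               # tiles the database keeps track of

def neighbours(state):
    """Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def abstract(state):
    """Replace every tile outside the pattern with a wildcard (-1)."""
    return tuple(t if t in PATTERN or t == 0 else -1 for t in state)

def build_database():
    """Backward breadth-first search from the abstracted goal.
    The stored distances are lower bounds on the true solution length."""
    start = abstract(GOAL)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for n in neighbours(s):
            if n not in dist:
                dist[n] = dist[s] + 1
                queue.append(n)
    return dist

DATABASE = build_database()

def h(state):
    # Database lookup of the abstracted state: an admissible heuristic.
    return DATABASE[abstract(state)]

def a_star(start):
    open_heap = [(h(start), 0, start, [])]
    best_g = {start: 0}
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == GOAL:
            return path
        for n in neighbours(state):
            ng = g + 1
            if ng < best_g.get(n, float("inf")):
                best_g[n] = ng
                heapq.heappush(open_heap, (ng + h(n), ng, n, path + [n]))
    return None

if __name__ == "__main__":
    scrambled = (1, 2, 3, 4, 5, 6, 0, 7, 8)
    print(len(a_star(scrambled)), "moves")
```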
An Australian researcher named J. P. Bekmann presented an "Improved Knowledge Acquisition System for High-Performance Heuristic Search". The system used a set of Ripple-Down Rules (RDR) to automatically wire up a complex two-layer circuit array. He built the system in such a way that it would simulate building thousands of wiring connections thousands of times over, and use a genetic algorithm to "breed" the most effective rules. By the principles of natural selection, only the most beneficial rules survive in the RDR set, and an optimized circuit layout is built.
However, it turns out that the system gets stuck quite quickly. It will run for some time improving itself, but then bottoms out, churning through thousands of generations without making any significant progress. A human expert has to intervene and introduce new rule seeds. The genetic algorithm then either integrates these into the rule set or, if they are ineffective, slowly lets them "die off".
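I haven't reproduced the RDR representation or the circuit simulator, so take the Python below as a heavily simplified, hypothetical sketch of the overall loop rather than Bekmann's system: rules are just numbers, the fitness function is a toy stand-in for scoring the simulated wiring, and the part that matters is the plateau detection that lets a human splice "seed rules" into the population when evolution bottoms out.

```python
import random

# Toy stand-in for a wiring rule: in the real system this would be a
# ripple-down rule deciding how to route a connection; here a rule is just
# a number so the evolutionary loop stays visible.
def make_rule():
    return random.uniform(-1.0, 1.0)

def fitness(rule_set):
    """Hypothetical fitness: stands in for scoring the simulated circuit
    built from these rules. Rule sets clustering near a hidden optimum win."""
    target = 0.73
    return -sum((r - target) ** 2 for r in rule_set)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(rule_set, rate=0.1):
    return [make_rule() if random.random() < rate else r for r in rule_set]

def evolve(generations=500, pop_size=30, rules_per_set=20,
           plateau_limit=50, seed_rules=None):
    pop = [[make_rule() for _ in range(rules_per_set)] for _ in range(pop_size)]
    best, best_fit, stuck = None, float("-inf"), 0
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best_fit:
            best, best_fit, stuck = pop[0][:], fitness(pop[0]), 0
        else:
            stuck += 1
        # Progress has bottomed out: splice expert-provided seed rules into
        # the weaker half of the population (the human intervention step).
        if stuck >= plateau_limit and seed_rules:
            for individual in pop[pop_size // 2:]:
                individual[random.randrange(rules_per_set)] = random.choice(seed_rules)
            stuck = 0
        # Breed the next generation from the fitter half.
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return best, best_fit

if __name__ == "__main__":
    best_rules, score = evolve(seed_rules=[0.70, 0.75])
    print(f"best fitness after evolving: {score:.4f}")
```

Ineffective seed rules simply get mutated or bred away over the following generations, which is the "die off" behaviour described in the talk.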
It took the researchers and some circuit-wiring experts a week of working interactively with the tool to produce a rule set of about 200 rules that could create a circuit design as good as one that had previously taken experts half a year to build.
The result, while impressive from a computer science point of view, is also very interesting from a Krishna conscious point of view. Even simulated evolution only works on very simple problems. The evolving computer program requires periodic intelligent intervention as soon as a slightly more complex task needs to be solved. Complex new features cannot and do not appear by accident.