Today was all about my field: description logics and ontologies.
Realization: most AI research is very simple, but the researchers disguise the triviality of their solutions by including loads of complicated-looking math equations in their presentations (which no one, not even the experts in the field, can hope to understand in the few seconds they appear on the slide) and by using technical jargon as a kind of secret language, so that anyone who doesn't know what the codewords translate to can't get a grasp on what is actually going on. Since I actually knew a bit about what the researchers presented today (I'm an insider in the DL cult), I could see through some of the attempts to make the research look more complicated than it actually is.
Franz Baader presented a paper on a polynomial-time fragment of OWL-DL (SHOIN). His research focuses on providing a description logic for medical ontologies, which are often relatively simple in structure but very large. His proposed logic, EL+, includes conjunction, existential restrictions, subsumption, nominals, GCIs and disjointness axioms, but leaves out negation, inverse roles, disjunction and number/cardinality restrictions.
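To make that concrete for myself (this is my own toy example, not one from the paper), here is the flavour of axiom a medical ontology needs, including the sort of complex role inclusion that, as far as I understand it, EL+ can handle:

```latex
\begin{align*}
\mathit{Appendicitis} &\sqsubseteq \mathit{Inflammation} \sqcap \exists \mathit{hasLocation}.\mathit{Appendix}\\
\mathit{Appendix} &\sqsubseteq \exists \mathit{partOf}.\mathit{Intestine}\\
\mathit{hasLocation} \circ \mathit{partOf} &\sqsubseteq \mathit{hasLocation}
\end{align*}
```

From these three axioms a reasoner can conclude that appendicitis is located in the intestine, and the selling point of EL+ is that this kind of subsumption can be computed in polynomial time, even over the hundreds of thousands of axioms found in real medical ontologies.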
Dmitry plans to build this fast algorithm into FaCT++ at some point. That would give a kind of hybrid reasoner that is really fast for the easy stuff and can bring in the full power of the tableaux algorithm for the more difficult classification tasks. Obviously the holy grail is to also link in first-order logic reasoning, to be able to reason over almost any construct.
Speaking of Dmitry, he also presented a paper on various optimizations in his FaCT++ reasoner. He has implemented a system of TODO-list-like reorderable queues that let the reasoner dynamically order rule execution: existential restrictions can be evaluated last, and non-deterministic disjunction expansion can be ordered in an intelligent fashion. The ordering rules can also be varied depending on the type of ontology. GALEN, for example, requires a very different rule ordering to achieve maximum classification performance than other ontologies do.
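Just so I don't forget how this works, here is my own rough sketch of the idea in Python (not FaCT++'s actual code, which is C++ and far more sophisticated): the todo list is essentially a priority queue whose priority function is pluggable, so existentials sink to the bottom and the whole ordering can be swapped out for an ontology like GALEN.

```python
import heapq
import itertools

# My own toy sketch of a reorderable "todo list" for tableau rule applications.
# Not FaCT++'s real data structure -- just the idea: the priority of a pending
# rule application is a pluggable function of the rule type, so different
# ontologies can use different orderings.

DEFAULT_PRIORITIES = {
    "conjunction": 0,   # cheap and deterministic: do first
    "disjunction": 1,   # non-deterministic: order intelligently
    "existential": 2,   # generates new nodes: delay as long as possible
}

class RuleQueue:
    def __init__(self, priorities=DEFAULT_PRIORITIES):
        self.priorities = priorities
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps insertion order stable

    def push(self, rule_type, payload):
        prio = self.priorities.get(rule_type, 1)
        heapq.heappush(self._heap, (prio, next(self._counter), rule_type, payload))

    def pop(self):
        _, _, rule_type, payload = heapq.heappop(self._heap)
        return rule_type, payload

    def __bool__(self):
        return bool(self._heap)

# A GALEN-like ontology might want a different ordering, e.g. expanding
# disjunctions even later than existentials:
galen_queue = RuleQueue({"conjunction": 0, "existential": 1, "disjunction": 2})
```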
Heiner Stuckenschmidt talked about various means of mapping multiple ontologies together. His conclusion: use the e-connections technique developed by Jim Hendler's Mindswap research group in Maryland, since it captures more distinct connection semantics than any other methodology.
I learned yet more about bucket elimination, constraint processing and local search. Adnan Darwiche, a very fast-talking professor, gave the afternoon keynote address. I'll need some time to think about this.
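For my own notes, here is my rough (and probably oversimplified) understanding of bucket elimination, sketched on a tiny constraint problem:

```python
from itertools import product

# My simplified sketch of bucket elimination for a tiny CSP, written up from
# my keynote notes -- not production code. A constraint is (scope, relation):
# scope is a tuple of variable names, relation is a set of allowed value tuples.

def bucket_elimination(variables, domains, constraints):
    """Decide whether the CSP has a solution by eliminating variables
    from last to first in the given ordering."""
    index = {v: i for i, v in enumerate(variables)}
    # Put each constraint in the bucket of its latest variable in the ordering.
    buckets = {v: [] for v in variables}
    for scope, rel in constraints:
        buckets[max(scope, key=lambda v: index[v])].append((scope, rel))

    for var in reversed(variables):
        bucket = buckets[var]
        if not bucket:
            continue
        # Join all constraints in the bucket, then project out `var`.
        joint_scope = sorted({v for scope, _ in bucket for v in scope},
                             key=lambda v: index[v])
        others = [v for v in joint_scope if v != var]
        projected = set()
        for assignment in product(*(domains[v] for v in joint_scope)):
            valmap = dict(zip(joint_scope, assignment))
            if all(tuple(valmap[v] for v in scope) in rel for scope, rel in bucket):
                projected.add(tuple(valmap[v] for v in others))
        if not projected:
            return False  # derived an empty constraint -> inconsistent
        if others:
            buckets[max(others, key=lambda v: index[v])].append((tuple(others), projected))
    return True

# Tiny example: X != Y, Y != Z, X != Z over a two-value domain is inconsistent
# (you can't 2-color a triangle).
variables = ["X", "Y", "Z"]
domains = {v: [0, 1] for v in variables}
neq = {(0, 1), (1, 0)}
constraints = [(("X", "Y"), neq), (("Y", "Z"), neq), (("X", "Z"), neq)]
print(bucket_elimination(variables, domains, constraints))  # -> False
```

The idea, as I understood it, is to sweep through the variables just once, joining and projecting constraints, instead of searching over assignments.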
Ian Horrocks gave the best talk of the day. He talked about a new decision procedure for SHOIQ that he and Ulrike Sattler came up with. What made his talk so good was that he didn't make it complicated: he explained the process of reasoning over expressive ontologies abstractly and intuitively. SHOIQ reasoning turned out to be more difficult than anyone would have thought. But now, finally, a reasoner can be built that can classify OWL-DL with all its features, bells and whistles.
Thomas Bittner gave a (confusing) talk on parthood, componenthood and containment. He didn't really say much in 25 minutes. His conclusion: use his new, so-called L-language to express transitive role propagation and some other stuff in a kind of pseudo-first-order-logic layer over OWL. Yet another rules language. Yawn.
Luciano Serafini introduced DRAGO (Distributed Reasoning Algorithm for a Galaxy of Ontologies), a peer-to-peer reasoning system for distributed ontologies that uses Pellet at each node in the network of reasoners. He also described a (confusing) concept of "holes" that lets inconsistent ontologies be ignored by the reasoner. It seems kind of obvious, but maybe there is more to it than that.
Sony's QRIO finished off the day. The much-hyped, world-touring robot prototype, a very life-like, two-foot-tall humanoid, gave a number of demos. The QRIO could dance in techno and salsa styles. He could also speak (in Japanese), navigate an obstacle course full of stuffed animal toys, crawl under a table, climb a small set of stairs and kick a ball around. He did all this by using his stereo-vision camera eyes (which could change color in a very cool-looking effect) to evaluate his surroundings. He also did some facial recognition. Finally, he could respond to sounds, detecting and moving towards the clapping of hands behind him.
All this was running inside the cute, tiny robot on three separate 400 MHz RISC CPUs running what looked like Red Hat Linux. The QRIO could operate fully autonomously, though the Sony engineers could also control him wirelessly from their laptops. Very impressive overall: scarily human-like walking motion, gestures (and dancing). No doubt we'll be seeing a production-model QRIO soon. Young and old kids around the world will so want one.
It took many hundreds of Sony engineers and decades of worldwide academic research to produce a robot that can just about mimic a few basic human and animal functions. And yet scientists say that the super-complex machine that is the human body must have come about completely by chance. No intelligent design whatsoever was involved.