Today Nuno, a researcher from Porto, Portugal, asked me about my distinct hairstyle (sikha) and why I seem so peaceful and relaxed. While asking, he apologized constantly, thinking I might be offended. I told him a little bit about Krishna consciousness.
One presentation was about image analysis on 3D cell slices; Matlab's image toolkit is very good for this purpose. The researchers from Amsterdam used RuleML to capture shape classification rules from medical image interpretation experts, but suggested using SWRL instead, since RuleML is quite clunky to work with. Post-presentation questions raised the issue of rules vs. machine learning. Many people preferred the neural net approach, though a few defended rules, since they allow for better provenance, logging and examination.
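The provenance argument is easy to illustrate: a hand-written rule can record exactly which rule produced a classification, which a neural net cannot. Here is a minimal Python sketch of that idea; the feature names, thresholds and rule names are invented for illustration, not taken from the talk.

```python
from dataclasses import dataclass

@dataclass
class Classification:
    label: str
    rule: str  # provenance: records which rule fired

# Hypothetical hand-written shape rules (thresholds are illustrative).
def classify_shape(circularity: float, elongation: float) -> Classification:
    if circularity > 0.9:
        return Classification("round", "rule_high_circularity")
    if elongation > 3.0:
        return Classification("elongated", "rule_high_elongation")
    return Classification("irregular", "rule_default")

result = classify_shape(circularity=0.95, elongation=1.2)
print(result.label, result.rule)  # round rule_high_circularity
```

Because every answer carries the name of the rule that produced it, a domain expert can examine, log and audit the system's decisions case by case.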
Pat Hayes presented the COE ontology editor. This was originally a concept map creation tool, but has been expanded into a fully featured graphical OWL ontology editor. The major advantage COE has is that it is very intuitive to use. Like HTML, people can "view source" on ontologies and "steal" other people's designs/modeling tricks. COE doesn't work with ontologies larger than about 2000 classes. This is another area where my segmentation work might come in handy.
Here is a list of some top-level ontologies: DOLCE, CYC, OpenCyc, OntoClean, SUMO.
There was a panel discussion about machine learning vs. manual knowledge capture. The conclusion was to do both:
Improve the volume of manual K-CAP by mass collaboration
Automatically capture knowledge and manually clean up any mistakes (here it is very important to use codes that indicate where each piece of knowledge came from)
Use manual methods to guide (but not haul) large-scale knowledge acquisition methods
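The second point, tagging every captured fact with a source code, can be sketched in a few lines of Python. The codes and facts below are invented for illustration; the idea is just that automatically acquired knowledge stays auditable.

```python
# Each captured fact carries a source code: "manual:" or "auto:".
# Facts and code names are hypothetical.
facts = [
    {"fact": ("cell", "is_a", "biological_unit"), "source": "manual:expert_1"},
    {"fact": ("nucleus", "part_of", "cell"),      "source": "auto:text_miner_v2"},
    {"fact": ("pig", "is_a", "plant"),            "source": "auto:text_miner_v2"},
]

# Cleanup pass: only automatically captured facts need human review.
needs_review = [f for f in facts if f["source"].startswith("auto:")]
for f in needs_review:
    print(f["fact"], "captured by", f["source"])
```

With the source recorded, the manual clean-up step can target exactly the tools (here the fictitious `text_miner_v2`) that produce the most mistakes.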
Revolutionary concept: make knowledge capture fun by turning the task into a game. Carole Goble in particular was very impressed by this idea from Tim Chklovski of USC; she intends to build it into her bio-annotation tools.
An interesting presentation was about estimating the health of pigs by the consistency of their feces. The researchers worked with veterinarians to build a Bayesian network of external circumstances and pig diseases. The interesting part was their use of a combination of statistical data and expert rules of thumb: they used isotonic regression to bias the statistical data to match their experts' intuitions. Ultimately, they found that the graphical structure of the Bayesian network matters much more than the exact probabilities on the nodes.
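Isotonic regression is the tool that reconciles the two sources here: if the experts say "risk never decreases as the feces score worsens", it finds the least-squares nondecreasing curve through the noisy statistical estimates. A minimal sketch using the pool-adjacent-violators algorithm (the numbers below are invented, not from the paper):

```python
def isotonic_fit(y, w=None):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y."""
    if w is None:
        w = [1.0] * len(y)
    # Each block holds [weighted mean, total weight, point count].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    # Expand each block back into per-point fitted values.
    fitted = []
    for mean, _, n in blocks:
        fitted.extend([mean] * n)
    return fitted

# Hypothetical disease rates per feces-consistency score: the raw
# estimates dip at score 3, contradicting the experts' monotone intuition.
raw_rates = [0.10, 0.30, 0.20, 0.40]
print(isotonic_fit(raw_rates))  # fitted values are nondecreasing
```

The dip gets pooled with its neighbor, so the fitted probabilities respect the expert ordering while staying as close as possible to the data.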