Now came the chance for up-and-coming semantic web developers to demo their killer applications: the apps that will revolutionize the Internet, on display.
Tim Berners-Lee (who uses a Mac, by the way) showed his Tabulator RDF browser. He gave a brief talk and demo of the app. It gives an "outline"-style view of RDF and asynchronously and recursively loads connected RDF using AJAX. It follows 303 redirects, follows # sub-page links, uses GRDDL on XHTML, and smushes on owl:sameAs and inverse functional properties (the killer feature, apparently).
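For the curious, "smushing" just means merging nodes that must denote the same thing. Here's a minimal sketch of the idea (my own illustration, not Tabulator's actual code): nodes linked by owl:sameAs, or sharing a value for an inverse functional property (IFP) like foaf:mbox, get collapsed into one canonical node via union-find.

```python
# Illustrative "smushing" sketch: merge nodes linked by owl:sameAs or
# sharing a value for an inverse functional property (IFP).
# All URIs and triples below are made-up examples.
from collections import defaultdict

def smush(triples, ifps):
    """Map each node to a canonical representative via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    seen = defaultdict(dict)  # ifp -> {value: first subject seen}
    for s, p, o in triples:
        if p == "owl:sameAs":
            union(s, o)
        elif p in ifps:
            if o in seen[p]:
                union(seen[p][o], s)  # same IFP value => same individual
            else:
                seen[p][o] = s

    return {n: find(n) for n in parent}

triples = [
    ("ex:alice", "foaf:mbox", "mailto:alice@example.org"),
    ("ex:a1",    "foaf:mbox", "mailto:alice@example.org"),
    ("ex:a1",    "owl:sameAs", "ex:a2"),
]
canon = smush(triples, ifps={"foaf:mbox"})
# ex:alice, ex:a1 and ex:a2 all collapse to one representative
```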
Some commented to me afterwards that they thought that no one should ever have to see the RDF of a semantic web application, let alone browse it. Oh well.
Then came DBin. Not just a browser, no, a semantic web rich client! It uses so-called brainlets (HTMLS) and a new semantic transport layer (not HTTP) to dynamically query and retrieve RDF using peer-to-peer transfer.
Again, I'm skeptical. It is just (yet another useless) RDF browser that saves bandwidth by sending the data through a peer-to-peer network. But RDF file sizes aren't exactly huge and compression will do far more than peer-to-peer to help with bandwidth. This browser is solving a problem that doesn't exist.
Next up: Rhizome, a Python-based app that allows one to build RDF applications in a wiki style. It uses the Raccoon application server to transform incoming HTTP requests into RDF, evolve them using rules, and validate them with Schematron. In short, it is to RDF what Apache Cocoon is to XML. Or, in more understandable terms: you declaratively build your website using RDF for everything from the layout to the database.
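To make the request-to-RDF pipeline concrete, here's a toy illustration (not Raccoon's actual API; all names and rules are invented): an incoming HTTP request becomes triples, and declarative rules rewrite those triples to a naive fixpoint before anything renders a response.

```python
# Toy request->RDF->rules pipeline in the spirit of Rhizome/Raccoon.
# Predicates ("http:method", "site:action", etc.) are made up for
# illustration and are not Raccoon's vocabulary.

def request_to_triples(method, path):
    req = "_:request"
    return {(req, "http:method", method), (req, "http:path", path)}

def apply_rules(triples, rules):
    """Apply each rule until no rule adds new triples (naive fixpoint)."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(triples) - triples
            if new:
                triples |= new
                changed = True
    return triples

# Rule: a GET on /wiki/* is a "view" action on that wiki page.
def wiki_rule(triples):
    out = set()
    gets = {s for s, p, o in triples if p == "http:method" and o == "GET"}
    for s, p, o in triples:
        if s in gets and p == "http:path" and o.startswith("/wiki/"):
            out.add((s, "site:action", "view"))
            out.add((s, "site:page", o[len("/wiki/"):]))
    return out

triples = apply_rules(request_to_triples("GET", "/wiki/Home"), [wiki_rule])
```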
Pity, of course, that no one uses Cocoon, and this Rhizome system looks really complicated despite being pitched at "non-technical folks".
At this point I left the semantic web demo session. My thinking: these guys are nuts.
I caught the end of an entirely different demo/paper: the Guide-O system, by researchers from Stony Brook University in New York.
It uses a shallow (read: simple) ontology to label areas of a web page according to their functional roles. It also creates a hierarchy of elements inside each area or module. The third component of the system is a finite state automaton (FSA) for moving between functional states of the website.
Putting these three things together allows one to identify common trails of FSA transitions, that is, processes which users tend to perform regularly. Having identified these trails, one can cut out all the modules that do not contribute to the task. All useless clutter is eliminated from each web page.
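The trail-and-prune step is simple enough to sketch. This is my own hypothetical rendering of the idea, not code from the paper (all state and module names are invented): find the most frequent trail in a log of FSA transitions, then keep only the page modules some state on that trail actually uses.

```python
# Hypothetical sketch of Guide-O-style clutter removal: frequent-trail
# mining plus module pruning. Names are illustrative, not from the paper.
from collections import Counter

def frequent_trail(trails):
    """The most common trail (sequence of functional states) in the log."""
    return Counter(tuple(t) for t in trails).most_common(1)[0][0]

def prune_page(page_modules, used_by_state, trail):
    """Drop every module that no state on the trail makes use of."""
    needed = set()
    for state in trail:
        needed |= used_by_state.get(state, set())
    return [m for m in page_modules if m in needed]

trails = [
    ["home", "search", "results", "checkout"],
    ["home", "search", "results", "checkout"],
    ["home", "news", "home"],
]
used_by_state = {
    "search":   {"search_box"},
    "results":  {"result_list"},
    "checkout": {"cart", "pay_button"},
}
page = ["banner_ad", "search_box", "result_list", "cart",
        "pay_button", "nav_sidebar"]

trail = frequent_trail(trails)
slim_page = prune_page(page, used_by_state, trail)
# clutter like banner_ad and nav_sidebar is eliminated
```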
Result: mobile web surfing could be performed twice as fast as before, and blind web surfing (using a screen reader) four times faster than before.
Future work: mining for workflows, using web services, and analyzing the semantics of web content. Problems: coming up with a standard way to describe the process and concept models. A system for semantic annotation of web content is needed.
I was impressed. It sounds like a really good idea. It takes three relatively simple ideas and combines them into something innovative and powerful. Nice.