Janmastami Swansea 2005

Last week I was in Swansea, Wales for Krishna's birthday (Janmastami). On this day, roughly 5000 years ago, Krishna appeared on Earth. He stayed for a total of 125 years and then disappeared from our vision to some other pastime on some other planet, just as the sun sets in one part of the world but remains visible in another.

It was really nice. Since the celebration fell on a weekday, the festival was quite small. A nice family atmosphere. The day after, Srila Prabhupada's birthday, was just as nice. Hearing about Prabhupada all day long was very inspiring.

Realization: I can be perfectly satisfied simply making garlands all day for Krishna. There is no need for all kinds of extravagant entertainment and technology. A simple life of bliss and knowledge is so much better than the rat-race of modern society. But then again, happiness in Krishna consciousness does not depend on external conditions. One can be happy in any situation if one follows the Krishna conscious process. I'm beginning to see and experience that more and more.

Enjoy the photos (I didn't take very many this time).

The Pitfalls of Outsourcing

I'm reading The Best Software Writing I: Selected and Introduced by Joel Spolsky at the moment. It contains some interesting essays. I'll be summarizing some of them as I read more of the book.


The Pitfalls of Outsourcing Programmers
by Michael Bean talks about how companies (especially American ones) go outsourcing-mad and outsource software development to foreign countries (especially India). This is bad because these companies thereby throw away their capacity to innovate. The people driving the core business assets should be kept as physically close together as possible. Their unique set of skills is what differentiates an organization from the competition. A company is otherwise just a generic aggregation of commodity services. There is no added value in that.

Question: is the practice of certain temples that, in effect, outsource their deity worship (by importing devotees from other countries to do the service) also confusing the box with the milk sweet?

Vyasa puja offering to Srila Prabhupada

Thank you, Srila Prabhupada, for all your transcendental literature. When I was first learning about Krishna consciousness, someone sold me a Krishna Book at the London Rathayatra (2001). I had previously tried to read your Bhagavad-Gita, but was too proud and contaminated with impersonalism and atheism to understand even a word of it. Then, however, upon reading your so expertly composed Krishna Book, my skeptical mentality was defeated.

"This is so amazing, it must be true, no one could have made something like this up", I though, reading pastime after pastime.

I would also like to thank you for all the amazing devotees you have cultivated, both directly and indirectly. Your power to transform people is extraordinary. You have taught by your personal example how to be a pure devotee of Krishna and have also empowered others to become acharyas in the same way. We are all eternally indebted to you for this. The only way to repay part of this unlimited debt is to pass it on to others.

I can see how my own spiritual master, your disciple Devamrita Swami, extends so much personal care and attention to me, just as you undoubtedly cared so much for him. I wish that I may also one day be able to help others in the same way. I have so much to learn about this Krishna conscious process. Please give me shelter at your lotus feet.

Your servant,
Candidasa dasa

Gurudeva visit snippets and cooking

DS: The body is meant for aching, might as well ache for Krishna.

DS: Herbs from the Amazon that will help your sore back.
BKD: Where do you get all this stuff, Gurudeva?
DS: Secrets of the Swamis.

DS: Yes, the famous Manchester rut. It's like something that fossilizes you. A weight that pushes everyone down so they are unable to get up and free themselves. The mode of ignorance.

Cooked:

Day 1:
Cherry-Tomato lettuce salad
Kumara and Broccoli with Arrowroot
Green beans, peas and carrot subji
Split mung dal
Turmeric basmati rice with wild rice

Day 2:
Sak
Asparagus subji
Sweet potato pie
Radish, carrot and lettuce salad
Corn on cob
Brown basmati rice with wild rice

Cardiff Rathayatra 2005

Yesterday was really, really fun. I attended the Rathayatra in Cardiff, Wales.

There was a huge procession with hundreds of people through the streets of downtown Cardiff, ending up in a peaceful park surrounded by an old castle wall. The kirtan (chanting) lasted for over three hours, non-stop. Maha-Visnu Swami was running around like mad with his accordion (as usual). Bhakti Vidya Purna Swami gave a brilliant introductory talk. I met lots of old friends from all over the UK.

The whole event was well planned and organized. The heavens smiled upon the whole affair by supplying beaming sunshine throughout. All in all, a most excellent Krishna conscious experience.

Krishna consciousness is so nice. The festival made everyone super-happy. See the pictures, though be warned, I took many.

Krishna’s personality

After analyzing my personality I thought I would think a bit about Krishna's personality. To better understand it, I'll try to characterize it in terms of its similarity to several popular personalities from Star Wars and Star Trek. I'm no expert, so please correct me if and when I am wrong.

The Nectar of Devotion, Second Wave, Chapter 23 describes Krishna's personality. It characterizes him as simultaneously having four opposing personalities. This is not contradictory. I would, in fact, expect nothing less from him. This is God: far from being an impersonal oneness, he has the most variegated and interesting personality in existence. Moreover, it's all described, in detail, in the Vedic literature.

Dh?«rod?tta

A dh?«rod?tta is a person who is naturally very grave, gentle, forgiving, merciful, determined, humble, highly qualified, chivalrous and physically attractive. Krishna exhibited this mood when he lifted Govardhana Hill. The NOD also gives the example of Lord R?macandra as a dh?«rod?tta personality.

To me this sounds like an introverted, task-oriented/thinking personality. That would make Krishna similar to the personalities of Obi-Wan Kenobi, Boba Fett and Yoda from Star Wars.

"Things are only impossible until they're not." - Jean-Luc Picard (Star Trek: TNG)

Dhīra-lalita

A person is called dhīra-lalita if he is naturally very funny, always in full youthfulness, expert in joking and free from all anxieties. Krishna shows this personality when he teases and embarrasses Rādhārāṇī. Cupid is also described as a dhīra-lalita.

To me this seems to be an extroverted, people-oriented/feeling personality. This personality is similar to Anakin Skywalker, Jar-Jar Binks and Captain Kirk.

"We're Starfleet officers. Weird is part of the job." ??" Captain Janeway (Star Trek: Voyager)

Dh?«ra-pra?>?nta

A person who is very peaceful, forbearing, considerate and obliging is called dhīra-praśānta. Krishna, when serving the Pāṇḍava brothers, is an example of this aspect of his personality. Some scholars also characterize King Yudhiṣṭhira in this way.

This sounds very much like an introverted, people-oriented/feeling personality. Film characters who share this trait with Krishna include Princess Padmé Amidala, Leia Organa, Luke Skywalker and C-3PO.

"Why don't we look at this from a mosquito's point of view?" ??" Dr. Beverly Crusher (Star Trek: TNG)

Dhīroddhata

A person who is very envious, proud, easily angered, restless and complacent is called dhīroddhata by learned scholars. Such qualities were visible in the character of Lord Kṛṣṇa: when writing a letter to Kālayavana, Kṛṣṇa addressed him as a sinful frog. Sometimes Bhīma, the second brother of the Pāṇḍavas, is also described as dhīroddhata.

An extroverted, task-oriented/thinking personality type fits this description. Characters like R2-D2, Han Solo and Jabba the Hutt also have this personality.

"No one gets the best of me in my kitchen!" ??" Neelix (Star Trek: Voyager)

Darth Vader = my personality type

Guardian Inspector (ISTJ: Introverted, Sensing, Thinking, Judging)

Overview:

  • Extremely thorough, responsible, and dependable.
  • Serious and quiet, interested in security and peaceful living.
  • Well-developed powers of concentration.
  • Usually interested in supporting and promoting traditions and establishments.
  • Well-organized and hardworking, they work steadily towards identified goals.
  • They can usually accomplish any task once they have set their mind to it.

Frequency: 6%

Star Trek type: Spock and Chief Miles O'Brien

Star Wars type: Obi-Wan Kenobi, Darth Vader, Grand Moff Tarkin (this site also has links to some very good practical tips for the various personalities)

Values:

  • Productivity
  • Conformity, Tradition, Duty and Service
  • Reliability, Etiquette, Protocols
  • Structure, Order, Attention to Detail

Leadership style:

  • Traditionalist, Stabilizer, Consolidator
  • Strategy Management, Logistics
  • Processes, Routines, Policies, Rules and Regulations

More about my personality:

Personality testing and evaluation:

  • Socionics has an excellent personality self-test and a tool to evaluate how two different personalities relate to one another.

(Thanks to Sitapati and Tri-Yuga for the initial info)

Cheating education system

The UK government has decided to change the way university fees work. They claim this change is to help the poorest students afford an education. It's fairly complicated, with different amounts of loans, bursaries, grants and fees. However, cutting through all the fluff, it turns out that, on average, students starting university this September will end up paying about double for their education (tuition fees have tripled, but do not need to be paid until after graduation).

Average student debt (from the NatWest Student Money Matters Survey):
2002: £2,489
2003: £8,125

Estimated average student debt at graduation (from the annual Barclays Student Survey):
2004: £12,069
2006: £17,561
2010: £33,708

According to the NatWest survey, the average monthly expenses of a student break down as follows (figures for 2004):

Rent: £289.60
Food shopping: £78.73
Alcohol: £74.14
Going out: £62.40
Cigarettes: £70.00
Clothes: £53.56
Eating out: £53.47
Utilities: £58.76
Transport: £54.56
Phone: £44.29
Books: £38.13
Monthly total: £877.64
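
As a quick sanity check of the survey's arithmetic (plus my own back-of-envelope extrapolation over a three-year degree, which ignores inflation, vacations and income), the figures do add up:

```python
# Sum the NatWest 2004 monthly expense figures quoted above.
expenses = {
    "Rent": 289.60, "Food shopping": 78.73, "Alcohol": 74.14,
    "Going out": 62.40, "Cigarettes": 70.00, "Clothes": 53.56,
    "Eating out": 53.47, "Utilities": 58.76, "Transport": 54.56,
    "Phone": 44.29, "Books": 38.13,
}
total = sum(expenses.values())
print(f"Monthly total: £{total:.2f}")               # -> £877.64
print(f"Over a 3-year degree: £{total * 36:,.2f}")  # -> £31,595.04
```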

The UK government isn't stupid. They have as a target: "to have 50% of young people enter higher education by 2010". Why do they want half the country's young people to carry the equivalent of NZ$100k of debt? Is this some conspiracy to control the people?

Guru's answer: "It's the new version of the old serfdom.
University students now equal the peasantry."

IJCAI day 7

The conference is over. Over 300 papers were presented. Over 1000 people attended.

The last day started with an interesting keynote from a researcher from Sarcos Corp. Sarcos make robots. For example (in chronological order from the late '80s until today):

  • Utah artificial arm: a realistic-looking replacement arm for amputees which picks up the small electrical current on the limb stump and uses it to control a motorized elbow and finger-grabbing action. Not as good as a real arm, but much better than no arm at all. Bionic man is coming.
  • Dexterous undersea arm with gravity compensation: a huge half-a-ton arm for undersea operation that can be remote-controlled from an arm-glove-like device. It can, at one moment, pick up a raw egg without breaking the shell and the next pick up and throw a 150 kg anvil.
  • Disney theme park humanoid robots: fast-moving robots that, for example, sword-fight each other.
  • Jurassic Park theme park dinosaurs: for example, a huge 40,000 kg moving T-Rex for a water ride. They had to slow down the movement of the robot because it caused the entire building to shake.
  • Las Vegas Bellagio robotic fountains: 224 robotic super-high-pressure water shooters that can be programmed to quickly rotate in any direction while shooting water, delivering artistic flowing water displays (price: $37,000 each).
  • Micro-surgery tube and video camera: a flexible robotic tube that can be fed through an artery from the hip all the way up into the brain to repair critical damage and stop internal bleeding.
  • Exoskeleton XOS-1: an exoskeleton which makes carrying 100 kg on one's back feel like 5 kg and enables the wearer to otherwise move as he or she would normally. Developed for the US military for use in a variety of combat conditions.

Statistical machine translation is getting good. Google has created a corpus of 200 million words of multi-language-aligned training data and made it available to researchers. RAM is becoming cheap enough to make complex phrase-based translation algorithms feasible. The results in translating the world's most spoken languages (in this order: Mandarin, English, Spanish, Hindi, Bengali, Arabic, Malay, Portuguese, Russian, Japanese, German, French) into each other are getting really good. Sentence structure is a bit weird, but the translations are fully understandable.
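
To give a feel for what "phrase-based" means, here is a toy sketch of phrase-table decoding. The table entries are made up, and real systems score millions of weighted phrase pairs under a probabilistic model with reordering and a language model; this greedy longest-match version only shows the core idea of translating whole phrases rather than single words.

```python
# Hypothetical German -> English phrase table (made-up entries).
PHRASE_TABLE = {
    ("guten", "morgen"): "good morning",
    ("wie", "geht", "es", "dir"): "how are you",
    ("heute",): "today",
}

def translate(words):
    """Greedily cover the source sentence with the longest known phrases."""
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try longest match first
            if tuple(words[i:j]) in PHRASE_TABLE:
                out.append(PHRASE_TABLE[tuple(words[i:j])])
                i = j
                break
        else:
            out.append(f"<{words[i]}?>")    # unknown word, pass through
            i += 1
    return " ".join(out)

print(translate("guten morgen wie geht es dir".split()))
# -> 'good morning how are you'
```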

Oh yes, Peter Patel-Schneider thinks that the semantic web is doomed. The RDF foundation on which the semantic web is based can't be extended to support first-order logic without introducing paradoxes. One could, for example, use FOL RDF to say: "this sentence is false". The RDF triple syntax is not expressive enough to prevent these kinds of illogical statements. Tragic, isn't it?
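
A tiny illustration (my own, not from the talk) of why that statement is paradoxical: a sentence asserting its own falsehood admits no consistent truth assignment at all, which is exactly the kind of thing a logic's syntax needs to rule out.

```python
# S = "S is false". Check both possible truth values of S for consistency.
def consistent(value: bool) -> bool:
    claims = not value      # S asserts its own falsehood
    return value == claims  # consistent only if the value matches the claim

print([v for v in (True, False) if consistent(v)])  # -> []  (no model exists)
```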

Now I'll have to spend some time detoxing from all the materialism I've been absorbed in for the past week.

(Update: check out some Sarcos Robot videos here)

IJCAI day 6

Lots of stuff I didn't understand today at the IJCAI conference. I'll not talk too much about that. However, there were also some very interesting biology-related results. Read on:

The day started off with a keynote by a neuroscientist talking about the brain and what underlying AI models it uses internally. His religious belief was that the only reason we have a brain is to drive our motor system.

Trees don't have brains, because they don't have to move. Sea squirts have small brains to help them swim around the ocean. However, in their lifecycle they eventually attach themselves permanently to a rock. Once attached, the first thing they do is digest their own brain for food. No more movement = no more need for a brain = yum.

He went through a whole load of clever experiments he conducted to determine how the human brain learns to do various tasks. It turns out that legs are optimized for efficiency, while arms use special probability distributions to control their movement, minimizing noise error and using optimal feedback for maximum smoothness and accuracy. Eyes are also accuracy-optimized and share some of the same processing framework as the legs. Since the brain's processing power is limited, it reuses thinking circuitry wherever appropriate. Sounds like a very well designed robot to me.

Some researchers were (still) working on the traveling salesman problem. They found some minor optimizations. Ho-hum.

One guy used Apple's Keynote presentation software to give a presentation on "Temporal difference networks with history" (no, I didn't understand it either). However, his presentation looked so much more refined, smooth and professional than all the preceding PowerPoint presentations. I was shocked at how much better everything looked. If I ever get accepted to give a talk at one of these important international conferences, I'll definitely get a Mac and present using Keynote.

An Israeli guy presented a clever partial database lookup variation on the A* algorithm. He developed a very quick way to solve tile puzzles, Rubik's Cubes and Top Spin games. He can now solve any 3x3x3 Rubik's Cube in 1/50th of a second, where previous brute-force computing methods took 2 days.
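
For flavor, here is a minimal A* sketch on the 8-puzzle (my own illustration, not the presenter's code). It uses the plain Manhattan-distance heuristic computed on the fly; as I understood it, the presented work instead looked heuristic values up in large precomputed databases, which is what makes it so much faster.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 = the blank square

def neighbors(state):
    """Yield states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def manhattan(state):
    """Admissible heuristic: total tile distance from the goal cells."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:
            r, c = divmod(i, 3)
            gr, gc = divmod(tile - 1, 3)
            dist += abs(r - gr) + abs(c - gc)
    return dist

def astar(start):
    """Return the length of an optimal solution from start to GOAL."""
    frontier = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        if g > best_g.get(state, float("inf")):
            continue  # stale queue entry
        for nxt in neighbors(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + manhattan(nxt), ng, nxt))
    return None

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # -> 2 (two slides to solve)
```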

An Australian researcher named J. P. Bekmann presented an "Improved Knowledge Acquisition System for High-Performance Heuristic Search". This system used an array of Ripple-Down Rules (RDR) to automatically wire up a complex two-layer circuit array. He built the system in such a way that it would simulate building thousands of wiring connections thousands of times and use a genetic algorithm to "breed" the most effective rules. By the principles of natural selection, only the most beneficial rules survive in the RDR-set and an optimal circuit layout is built.

However, it turns out that the system gets stuck quite quickly. It will run for some time improving itself, but then bottom out very soon, running for thousands of generations without making any significant progress. A human expert has to intervene and introduce new rule-seeds. The genetic algorithm then either integrates these into the rule-set or, if they are ineffective, slowly lets them "die off".
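
The overall loop, as I understood it, looks something like this toy sketch (entirely my own simplification; the real system evolves ripple-down rules, not bit-strings): breed a population, and when the best fitness stalls for too long, inject an expert-supplied seed individual.

```python
import random

def evolve(fitness, seed_pool, pop_size=40, genes=16, stall_limit=50,
           generations=2000):
    """Toy genetic algorithm over bit-string 'rule-sets'. When progress
    stalls, inject a hand-crafted seed, mimicking expert intervention."""
    pop = [[random.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    best, stall = None, 0
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > best:
            best, stall = fitness(pop[0]), 0
        else:
            stall += 1
        if stall > stall_limit and seed_pool:
            pop[-1] = seed_pool.pop()   # the expert seeds a new rule-set
            stall = 0
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]   # one-point crossover
            if random.random() < 0.1:   # occasional mutation
                i = random.randrange(genes)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Demo: maximize the number of 1-bits; the 'expert seed' is the optimum.
print(sum(evolve(sum, seed_pool=[[1] * 16])))  # -> 16
```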

It took the researchers and some circuit wiring experts a week of working interactively with the tool to produce an optimal rule-set of about 200 rules that could create a circuit design as good as one previously built by experts in half a year.

The result, while impressive from a computer science point-of-view, is also very interesting from a Krishna conscious point-of-view. Even simulated evolution only works on very simple problems. The evolving computer program requires periodic intelligent intervention as soon as a slightly more complex task needs to be solved. Complex new features cannot and do not appear by accident.

IJCAI day 5

Today was all about my field: description logics and ontologies.
[Photo: Sony QRIO]

Realization: most AI research is very simple, but the researchers disguise the triviality of their solutions by loading their presentations with complicated-looking math equations (which no one, even an expert in the field, can hope to understand in the few seconds they appear on the slide) and by speaking in the secret language of technical jargon, so that those who don't know what the codewords translate into can't get a grasp on what is actually going on. Since I actually knew a bit about what the researchers presented today (I'm an insider in the DL-cult), I could see through some of the attempts to make research look more complicated than it actually was.

Franz Baader presented a paper on a polynomial-time fragment of OWL-DL (SHOIN). His research focused on providing a description logic for use in medical ontologies, which are often relatively simple, but quite large. His proposed logic EL+ includes conjunction, subsumption, nominals, GCIs and disjoint axioms, but does not include negation, inverse roles, disjunction, or number/cardinality restrictions.
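
To show why such a fragment can be polynomial, here is a toy saturation procedure in the spirit of the EL completion-rule approach (my own simplified sketch; the encoding, rule set and all concept names are invented, not Baader's actual algorithm). Each rule only ever adds facts drawn from a polynomially-bounded set, so the loop must terminate in polynomial time.

```python
# Normalized axiom forms (my own encoding):
#   ("sub", A, B)            A ⊑ B
#   ("and", A1, A2, B)       A1 ⊓ A2 ⊑ B
#   ("exists-rhs", A, r, B)  A ⊑ ∃r.B
#   ("exists-lhs", r, A, B)  ∃r.A ⊑ B

def classify(concepts, axioms):
    """Saturate subsumer sets S[X] = {C : X ⊑ C} under completion rules."""
    S = {X: {X} for X in concepts}  # every concept subsumes itself
    R = set()                       # (r, X, Y): X is known to imply ∃r.Y
    changed = True
    while changed:
        changed = False
        def add(X, C):
            nonlocal changed
            if C not in S[X]:
                S[X].add(C)
                changed = True
        for ax in axioms:
            for X in concepts:
                if ax[0] == "sub" and ax[1] in S[X]:
                    add(X, ax[2])
                elif ax[0] == "and" and ax[1] in S[X] and ax[2] in S[X]:
                    add(X, ax[3])
                elif ax[0] == "exists-rhs" and ax[1] in S[X]:
                    if (ax[2], X, ax[3]) not in R:
                        R.add((ax[2], X, ax[3]))
                        changed = True
                elif ax[0] == "exists-lhs":
                    r, A, B = ax[1], ax[2], ax[3]
                    for (r2, Y, Z) in list(R):
                        if Y == X and r2 == r and A in S[Z]:
                            add(X, B)
    return S

# Toy medical-style example (all names made up).
S = classify(
    ["Endocarditis", "Tissue", "HeartDisease"],
    [("exists-rhs", "Endocarditis", "locatedIn", "Tissue"),
     ("exists-lhs", "locatedIn", "Tissue", "HeartDisease")])
print("HeartDisease" in S["Endocarditis"])  # -> True
```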

Dmitry plans to build this quick algorithm into FaCT++ at some point. This would result in a kind of hybrid reasoner that is really fast for the easy stuff and can bring in the full power of the tableaux algorithm to solve the more difficult classification tasks. Obviously the holy grail is to also link in first-order logic reasoning to be able to reason over almost any construct.

Speaking of Dmitry, he also presented a paper on various optimizations in his FaCT++ reasoner. He has implemented a system of TODO-list-like reorderable queues that allow the reasoner to dynamically order rule execution. Existential restrictions can be evaluated last and non-deterministic disjunction expansion can be ordered in an intelligent fashion. These reshuffling rules can also be varied depending on the type of ontology. GALEN, for example, requires very different rule-ordering than other ontologies to achieve maximum classification performance.

Heiner Stuckenschmidt talked about various means of mapping multiple ontologies together. His conclusion: use the e-connections technique invented by Jim Hendler's Mindswap research group in Maryland. It captures more of the different connection semantics than any other methodology.

I learned yet more about bucket elimination, constraint processing and local search. Adnan Darwiche, a very fast-talking Spanish/Mexican professor, gave the afternoon keynote address. I'll need some time to think about this.

Ian Horrocks gave the best talk of the day. He talked about a new decision procedure for SHOIQ that he and Ulrike Sattler came up with. What made his talk so good was that he didn't make it complicated. He explained the process of reasoning over expressive ontologies abstractly and intuitively. SHOIQ reasoning turned out to be more difficult than anyone would have thought. But now, finally, a reasoner can be built that can classify OWL-DL with all its features, bells and whistles.

Thomas Bittner gave a (confusing) talk on parthood, componenthood and containment. He didn't really say much in 25 minutes. His conclusion: use his new, so-called L-language to express transitive role propagation and some other stuff in a kind of pseudo first-order logic layer over OWL. Yet another rules language. Yawn.

Luciano Serafini introduced DRAGO (Distributed Reasoning Algorithm for a Galaxy of Ontologies). It is a peer-to-peer reasoning system for distributed ontologies. It uses PELLET at each node in the network of reasoners. He also described some (confusing) concept of "holes" that allow inconsistent ontologies to be ignored by the reasoner. It seems kind of obvious, but maybe there is more to it than that.

Sony QRIO finished off the day: Sony's much-hyped, world-touring robot prototype. The very life-like, 2-foot-tall humanoid robot gave a number of demos. The QRIO could dance in techno and salsa styles. He could also speak (in Japanese), navigate an obstacle course full of stuffed animal toys, crawl under a table, climb a small set of stairs and kick a ball around. He did all this by using his stereo-vision camera eyes (which could change color in a very cool-looking effect) to evaluate his surroundings. He also did some facial recognition. Finally, he had the ability to respond to sounds, detecting and moving towards the clapping of hands behind him.

All this was running inside the cute, tiny robot on 3 separate 400 MHz RISC CPUs running what looked like RedHat Linux. The QRIO could operate fully autonomously, though the Sony engineers could also control him wirelessly from their laptops. Very impressive overall. Scarily human-like walking motion, gestures (and dancing). No doubt we'll be seeing a production model QRIO soon. Young and old kids around the world will so want one.

It took many hundreds of Sony engineers and many decades of worldwide academic research to produce this robot that can just about mimic a few basic human/animal functions. And yet, scientists say that the super-complex machine of the human body must have come about completely by chance. No intelligent design whatsoever was involved.

IJCAI day 4

Tutorials are over. No more play time. The actual conference started today.

It kicked off with a keynote from Alison Gopnik, author of "The Scientist in the Crib", a book about how young children learn and experiment very effectively. Grown-ups can't generally come up with novel ways of approaching a problem, whereas children will try all kinds of crazy things when figuring something out. Playfulness is important. Interestingly, kids as young as 3 years old can do probabilistic causal inference almost as well as grown-ups. Most people are only good for production and management when they get older. The young are the innovators.

An interesting talk on temporal reasoning added temporal markup to a corpus of newspaper articles. The system used a first-order logic reasoner (OTTER) to let users make free-text temporal queries on the data set, e.g. "who were the prime ministers of France from 1962 to 1998?"

When it came time for questions, I asked how much the temporal reasoning slowed down their query processing. Their answer: while a normal search takes 0.1 seconds to answer, turning on temporal inference increases the query time to 4-10 minutes (depending on the number of transitive chains that need to be evaluated). Uh-huh. Next. First-order logic reasoning is too slow.

Carsten Lutz gave a survey of description logic work. Many ontology reasoning systems are EXPTime in the worst case, but do quite well in the average case. This makes them quite usable in practice. However, more tools and systems integration is now required.

I found out what "hypergraph decomposition" is. A research from Vienna was presenting a poster on the subject. Hypergraphs are graphs where each arc/edge can connect more than two nodes together. They are good at capturing several NP-complete problems graphically. An algorithm to perfectly decompose hypergraphs is, of course, unsolvable in the worst case. A graph with as little as 100 nodes can require days of processing to solve. However, a quick-and-dirty algorithm called "bucket elimination" does very well.
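
For illustration (my own toy example, not from the poster), a hypergraph can be represented directly as named edges, each of which is simply a set of any number of nodes:

```python
# A hypergraph: each hyperedge may join any number of nodes, unlike an
# ordinary graph edge, which joins exactly two.
hypergraph = {
    "e1": {"x", "y", "z"},        # e.g. the SAT clause (x or y or z)
    "e2": {"y", "w"},
    "e3": {"u", "v", "w", "x"},
}
nodes = set().union(*hypergraph.values())
print(sorted(nodes))  # -> ['u', 'v', 'w', 'x', 'y', 'z']
```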

This conference is turning out to be quite useful. My body actually functioned reasonably well today, too.

IJCAI day 3

I was really sick this morning. I felt horrible. My body seemed to reject everything I ate the previous day. No wonder, really, considering what it was. I've come to the conclusion that people in the UK can't cook vegetables (except potatoes) without turning them into poison. Even the potatoes taste like nothing. I'll eat only fruit and cereals for the remainder of the conference. It's not the healthiest diet, but the alternative is worse.

I'm not even going to talk about the people's consciousness when preparing the food. Simple technical cooking skill alone is bad enough to kill me.

Workshop day today. It was on "Intelligent Techniques for Web Personalization".

The title sounded interesting, but the presenters were not. Some truly awful presentations sent me straight to sleep. Some Indian researchers presented the most boring and uninnovative "research" I've ever seen. Slaves of the West. Some American guy gave another guy's presentation, which he knew nothing about. I couldn't understand a word he was saying. It made absolutely no sense whatsoever.

Some of the things I learnt:
- Never use more than three colors on a website. It looks horrible.
- People, in general, understand the concept of "menus" very well. Search isn't as intuitive for the average person.
- Link click-through is not an accurate measure of the usefulness of a web resource. However, adding a "time spent reading the page" metric makes it quite accurate.
- Personalization techniques will be quite important for mobile devices with limited screen real estate.
- Component critiques and custom deep-links are useful for cutting through a large search space to an area of interest. Fine-grained links are then necessary to zero in on exactly what the user wants.
- Lots of work on personalizing search, but nothing to write home about. Ontology matters.
- Product recommendation systems are frequently attacked by companies wanting to boost their particular product's ratings. Amazon and CNet suffer heavily from this. Even a simple shilling attack will dramatically distort a product's rating. Something to be aware of.
- Using a domain ontology is useful in product recommendation. The system can improve the recommendation, provide the user with a compelling explanation of why a product was recommended, and even gain a certain degree of protection from shilling attacks. For example: "I see you like films with Tom Cruise; other people of the same gender as you who like Tom Cruise also liked romantic comedies with Mel Gibson. Here is one you haven't yet seen." (See the sketch after this list.)
- Look at RuleML for automating reasoning about recommendations.
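
Here is a toy illustration of that last ontology-based recommendation point (entirely my own sketch, with hypothetical film data): the shared property from the "ontology" doubles as the explanation for the recommendation.

```python
# Tiny made-up 'ontology' of film facts.
FILMS = {
    "Top Gun":         {"actor": "Tom Cruise", "genre": "action"},
    "Jerry Maguire":   {"actor": "Tom Cruise", "genre": "romantic comedy"},
    "What Women Want": {"actor": "Mel Gibson", "genre": "romantic comedy"},
}

def recommend(liked, seen):
    """Recommend an unseen film sharing a property with a liked film,
    and explain which shared property drove the recommendation."""
    for fav in liked:
        for prop, value in FILMS[fav].items():
            for title, facts in FILMS.items():
                if title not in seen and facts.get(prop) == value:
                    return title, f"you liked {fav}, which shares {prop}={value!r}"
    return None, "no overlap found"

print(recommend(liked=["Jerry Maguire"], seen={"Jerry Maguire", "Top Gun"}))
# -> ('What Women Want', "you liked Jerry Maguire, which shares genre='romantic comedy'")
```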

IJCAI day 2

I somehow managed to read an hour of the Nectar of Devotion throughout the day. Wow. Amazing book. It gets better every time I read it. Reading it makes me ecstatic, even if I don't understand what it is talking about. The other conference attendees must have been wondering why I was grinning ear-to-ear while reading some book.

Today's tutorial was on "Principles of AI Problem Solving". Three professors talked the group through various "classic" AI methods for solving problems.

All problems in classic AI can be reduced to the satisfiability problem SAT. Some examples of common problems can be abstracted into moving a robot from a certain initial state to a certain goal state on a grid. Four variations are possible:

- Actions are predictable and we can see exactly what happens.
- Actions are predictable, but none of the moves are observable.
- Actions' effects are random/probabilistic, but we can see what happens.
- Actions' effects are probabilistic and we can only get partial information about the events on the grid. This is the hardest problem.

The various techniques for solving these problems involve searching different types of graph models of the problem space. Graphs can be transformed in certain ways to improve the efficiency of the search. A simple transformation is, for example, to reorder the graph to start at the node with the smallest domain/branching factor.
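
To make this concrete, here is a minimal sketch (my own, not from the tutorial) of a backtracking constraint solver that applies exactly this reordering idea: at every step it branches on the variable with the smallest remaining domain. The map-coloring instance and all names are hypothetical.

```python
def solve(domains, constraints, assignment=None):
    """Backtracking search; 'reorders the graph' by always branching on
    the unassigned variable with the fewest candidate values."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))  # smallest-domain-first
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = solve(domains, constraints, assignment)
            if result:
                return result
        del assignment[var]  # backtrack
    return None

# Toy map-coloring instance: adjacent regions must differ in color.
doms = {"A": ["red"], "B": ["red", "green"], "C": ["red", "green", "blue"]}
def differ(x, y):
    return lambda a: x not in a or y not in a or a[x] != a[y]

print(solve(doms, [differ("A", "B"), differ("B", "C")]))
# -> {'A': 'red', 'B': 'green', 'C': 'red'}
```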

One lecturer mentioned a technique called "hypergraph decomposition" that can be used to break a graph into weighted, equi-sized pieces. The AI problem is thereby divided up and (hopefully) becomes solvable in logarithmic time instead of the usual exponential time necessary to solve NP-complete problems.

I might be able to use this decomposition technique to break up my ontology by using a reasoning dependency structure. That would help a lot. Very interesting. I'll investigate further.

IJCAI day 1

The conference started today. The first few days are workshops and tutorials. The actual conference comes later.

So, today I attended a tutorial on "Automated Reasoning in First-Order Logic". Professor Andrei Voronkov from the University of Manchester talked us through his Vampire theorem prover (version 8). He has been working on this system for the past 10 years and it is by far the fastest first-order logic reasoning system in the world. It wins just about every category in a yearly theorem-proving competition, where the competing provers are often as much as 100 times slower. Vampire devours other provers.

However, for specialized reasoning in OWL, dedicated tableaux-based reasoners are quicker than Vampire. For now. I learnt that there are many parameters by which Vampire can be tweaked. A small change in the parameters will often let the prover solve in a couple of seconds a problem that previously ran for 24 hours. However, finding the optimal settings is very much a black art. No one understands which combination of parameters will give a good result. Andrei himself is pretty good at it, but he doesn't have the time to investigate all the possible things people want to do with Vampire.

Professor Voronkov is taking a sabbatical at Microsoft Research in Seattle for the next year. Microsoft wants to use Vampire to formally verify device drivers. Bad drivers are a frequent cause of Windows crashes, so Microsoft is very interested in translating the code into logic syntax and letting Vampire find the bugs. Intel has been verifying all their chips in a similar fashion ever since the embarrassing bug in the original Pentium processor that caused it to give incorrect answers on certain division operations.

Relating to my research, I found out that Vampire, unlike the tableaux-based reasoners, doesn't have a problem classifying large data structures. One major difference between description logic and FOL is that the latter is undecidable: the prover may effectively answer "don't know". A description logic reasoner, in theory, will always be able to answer conclusively. In practice it often answers "stack overflow" when faced with the ontologies I throw at it.

Anyway, Vampire achieves its low memory usage by simply discarding inactive clauses generated by its resolution process. The most "young and heavy" clauses are going to be processed last anyway, so why not just throw them out? We're likely to find a solution (an empty clause, i.e. a contradiction) before then. I wonder if I can do a similar thing in description logic. I'll lose some completeness, of course.
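
A toy propositional version of that idea (my own sketch; Vampire's actual first-order strategy is far more refined): a saturation loop that simply discards any resolvent heavier than a fixed weight limit, trading completeness for memory.

    import itertools

    def resolve(c1, c2):
        """All resolvents of two propositional clauses. Clauses are
        frozensets of integer literals; negation is the sign (1, -1)."""
        out = []
        for lit in c1:
            if -lit in c2:
                out.append((c1 - {lit}) | (c2 - {-lit}))
        return out

    def refute(clauses, max_weight=5):
        """Return True if the clause set is unsatisfiable. Clauses
        heavier than max_weight are thrown away, as described above."""
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = set()
            for c1, c2 in itertools.combinations(clauses, 2):
                for resolvent in map(frozenset, resolve(c1, c2)):
                    if not resolvent:
                        return True   # empty clause: contradiction found
                    if len(resolvent) <= max_weight:  # discard heavy ones
                        new.add(resolvent)
            if new <= clauses:
                return False          # saturated (under the weight limit)
            clauses |= new

    # (p or q), (not p or q), (not q) is unsatisfiable:
    print(refute([{1, 2}, {-1, 2}, {-2}]))

The weight limit is exactly where completeness leaks out: a refutation that needs an intermediate clause heavier than the limit will be missed.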

IJCAI day 0

IJCAI, the International Joint Conference on Artificial Intelligence, is probably the most important conference in the area of AI in the world (another important one is AAAI). This year's IJCAI is in Edinburgh, Scotland. The conference will be in Hyderabad, India next year.

My supervisor thought it was a good idea for me to attend this year's conference. Especially since it was so close to Manchester. He saves some money and I (hopefully) learn something.

So I travelled up to Scotland, wandered the streets of Edinburgh trying to find the place where I was supposed to stay (I had forgotten to take money with me for a taxi and couldn't find a cash machine), eventually found the Pollock Halls and collapsed in my room.

Edinburgh is a very old city with lots of history: ancient rock walls, rustic buildings and stone bridges. However, this backdrop does little to hide the usual vices of Kali-yuga. Scots seem a bit more brash than the average Englishman. The homeless are more obvious, the drunks more visible, and prostitutes abound. So, altogether, a typical Western city.