Cheating education system

The UK government has decided to change the way university fees work. They claim the change will help the poorest students afford an education. It's fairly complicated, with different amounts of loans, bursaries, grants and fees. However, cutting through all the fluff, it turns out that, on average, students starting university this September will end up paying about double for their education (tuition fees have tripled, but do not need to be paid until after graduation).

Average student debt (from the NatWest Student Money Matters Survey):
2002: £2,489
2003: £8,125

Estimated average student debt at graduation (from the annual Barclays Student Survey):
2004: £12,069
2006: £17,561
2010: £33,708

According to the NatWest survey, the average student's monthly expenses break down as follows (figures for 2004):

Rent: £289.60
Food shopping: £78.73
Alcohol: £74.14
Going out: £62.40
Cigarettes: £70.00
Clothes: £53.56
Eating out: £53.47
Utilities: £58.76
Transport: £54.56
Phone: £44.29
Books: £38.13
Monthly total: £877.64

The UK government isn't stupid. Their stated target is "to have 50% of young people enter higher education by 2010". Why do they want nearly everyone in the country to carry the equivalent of NZ$100k of debt? Is this some conspiracy to control the people?

Guru's answer: "It's the new version of the old serfdom.
University students now equal the peasantry."

IJCAI day 7

The conference is over. Over 300 papers were presented. Over 1000 people attended.

The last day started with an interesting keynote from a researcher at Sarcos Corp. Sarcos makes robots. For example (in chronological order from the late '80s until today):

  • Utah artificial arm: a realistic-looking replacement arm for amputees that picks up the small electrical currents on the limb stump and uses them to control a motorized elbow and finger-grabbing action. Not as good as a real arm, but much better than no arm at all. Bionic man is coming.
  • Dexterous undersea arm with gravity compensation: a huge half-ton arm for undersea operation that can be remote-controlled from an arm-glove-like device. It can, at one moment, pick up a raw egg without breaking the shell and, the next, pick up and throw a 150 kg anvil.
  • Disney theme park humanoid robots: fast moving robots that, for example, sword-fight each other.
  • Jurassic Park theme park dinosaurs: for example, a huge 40,000 kg moving T-Rex for a water ride. They had to slow down the robot's movements because they caused the entire building to shake.
  • Las Vegas Bellagio Robotic Fountains: 224 robotic super-high-pressure water shooters that can be programmed to quickly rotate in any direction while shooting water and thereby deliver artistic flowing water displays (price: $37,000 each)
  • Micro surgery tube and video camera: a flexible robotic tube that can be fed through an artery from the hip all the way up into the brain to repair critical damage and stop internal bleeding
  • Exoskeleton XOS-1: an exoskeleton that makes carrying 100 kg on one's back feel like 5 kg and enables the wearer to otherwise move as he or she would normally. Developed for the US military for use in a variety of combat conditions.

Statistical machine translation is getting good. Google has created a corpus of 200 million words of multi-language-aligned training data and made it available to researchers. RAM is becoming cheap enough to make complex phrase-based translation algorithms feasible. The results of translating the world's most spoken languages (in this order: Mandarin, English, Spanish, Hindi, Bengali, Arabic, Malay, Portuguese, Russian, Japanese, German, French) into each other are getting really good. Sentence structure is a bit weird, but the translations are fully understandable.
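
To get a feel for what a phrase-based system does, here is a minimal sketch of the core idea (the phrase table and probabilities are invented for illustration; real systems add a language model and phrase reordering on top of this):

    import math

    # Toy phrase table: source phrase -> [(target phrase, log probability)].
    PHRASES = {
        ("das", "haus"): [(("the", "house"), math.log(0.8))],
        ("das",): [(("the",), math.log(0.7)), (("that",), math.log(0.3))],
        ("haus",): [(("house",), math.log(0.9))],
        ("ist",): [(("is",), math.log(0.9))],
        ("klein",): [(("small",), math.log(0.7)), (("little",), math.log(0.3))],
    }

    def decode(src):
        # Viterbi over source positions: best[i] holds the best-scoring
        # translation of the first i source words, built from whole
        # phrases, monotone (i.e. no reordering).
        best = {0: (0.0, ())}
        for i in range(1, len(src) + 1):
            for j in range(i):
                phrase = tuple(src[j:i])
                if j in best and phrase in PHRASES:
                    for tgt, lp in PHRASES[phrase]:
                        score, words = best[j][0] + lp, best[j][1] + tgt
                        if i not in best or score > best[i][0]:
                            best[i] = (score, words)
        return best.get(len(src))

    print(decode(["das", "haus", "ist", "klein"]))
    # -> (about -0.685, ('the', 'house', 'is', 'small'))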

Oh yes, Peter Patel-Schneider thinks that the semantic web is doomed. The RDF foundation on which the semantic web is based can't be extended to support first-order logic without introducing paradoxes. One could, for example, use FOL RDF to say: "this sentence is false". The RDF triple syntax is not expressive enough to prevent these kinds of illogical statements. Tragic, isn't it?
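
The paradox is mechanical enough to check by brute force. A minimal sketch (my own illustration, not Patel-Schneider's formalism): reify the liar sentence as a statement about its own truth value and search for a consistent interpretation.

    # "stmt1" asserts that stmt1 itself is false.
    def consistent(stmt1_is_true):
        claim_holds = (stmt1_is_true == False)   # what stmt1 claims
        # an interpretation is consistent iff the claim holds exactly
        # when the statement is true
        return claim_holds == stmt1_is_true

    print([tv for tv in (True, False) if consistent(tv)])
    # -> []  No model exists: the statement is paradoxical, not merely false.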

Now I'll have to spend some time detoxing from all the materialism I've been absorbed in for the past week.

(Update: check out some Sarcos Robot videos here)

IJCAI day 6

Lots of stuff I didn't understand today at the IJCAI conference. I won't talk too much about that. However, there were also some very interesting biology-related results. Read on:

The day started off with a keynote by a neuroscientist talking about the brain and what underlying AI models it uses internally. His religious belief was that the only reason we have a brain is to drive our motor system.

Trees don't have brains, because they don't have to move. Sea squirts have small brains to help them swim around the ocean. However, at one point in their lifecycle they attach themselves permanently to a rock. Once attached, the first thing they do is digest their own brain for food. No more movement = no more need for a brain = yum.

He went through a whole load of clever experiments he conducted to determine how the human brain learns to do various tasks. It turns out that legs are optimized for efficiency, while arms use special probability distributions to control their movement, minimizing noise error and using optimal feedback for maximum smoothness and accuracy. Eyes are also optimized for accuracy and share some of the same processing framework as the legs. Since the brain's processing power is limited, it reuses thinking circuitry wherever appropriate. Sounds like a very well designed robot to me.

Some researchers were (still) working on the traveling salesman problem. They found some minor optimizations. Ho-hum.

One guy used Apple's Keynote presentation software to give a presentation on "Temporal difference networks with history" (no, I didn't understand it either). However, his presentation looked so much more refined, smooth and professional than all the previous PowerPoint presentations. I was shocked at how much better everything looked. If I ever get accepted to give a talk at one of these important international conferences, I'll definitely get a Mac and present using Keynote.

An Israeli guy presented a clever partial database lookup variation on the A* algorithm. He developed a very quick way to solve tile puzzles, Rubik's Cubes and Top Spin games. He can now solve any 3x3x3 Rubik's Cube in 1/50th of a second, where previous brute-force computing methods took 2 days.
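
I don't have the details of his algorithm, but the general idea of precomputed heuristic lookup tables is easy to sketch. Here is a plain pattern database for the 8-puzzle (my own simplification of the technique, not the paper's method): distances to the goal are precomputed for an abstraction of the puzzle, then used as an admissible A* heuristic.

    from collections import deque
    import heapq

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank
    PATTERN = (1, 2, 3)                   # tiles tracked by the lookup table

    def neighbors(state):
        # states reachable by sliding a tile into the blank
        s, i = list(state), state.index(0)
        r, c = divmod(i, 3)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < 3 and 0 <= c + dc < 3:
                j = (r + dr) * 3 + (c + dc)
                t = s[:]
                t[i], t[j] = t[j], t[i]
                yield tuple(t)

    def abstract(state):
        # project a full state down to the positions of the pattern tiles
        return tuple(state.index(t) for t in PATTERN)

    def build_pdb():
        # breadth-first search backwards from the goal; the first time an
        # abstract state is seen gives its minimum distance to the goal,
        # so the lookup never overestimates (i.e. it is admissible)
        pdb, seen, queue = {}, {GOAL}, deque([(GOAL, 0)])
        while queue:
            state, d = queue.popleft()
            pdb.setdefault(abstract(state), d)
            for n in neighbors(state):
                if n not in seen:
                    seen.add(n)
                    queue.append((n, d + 1))
        return pdb

    def astar(start, pdb):
        frontier, g = [(pdb[abstract(start)], 0, start)], {start: 0}
        while frontier:
            f, cost, s = heapq.heappop(frontier)
            if s == GOAL:
                return cost
            for n in neighbors(s):
                if cost + 1 < g.get(n, 1 << 30):
                    g[n] = cost + 1
                    heapq.heappush(frontier,
                                   (cost + 1 + pdb[abstract(n)], cost + 1, n))

    print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8), build_pdb()))   # -> 2 (moves)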

An Australian researcher named J. P. Bekmann presented an "Improved Knowledge Acquisition System for High-Performance Heuristic Search". The system used an array of Ripple-Down Rules (RDR) to automatically wire up a complex two-layer circuit array. He built the system in such a way that it would simulate building thousands of wiring connections thousands of times and use a genetic algorithm to "breed" the most effective rules. By the principles of natural selection, only the most beneficial rules survive in the RDR set, and an optimal circuit layout is built.

However, it turns out that the system gets stuck quite quickly. It will run for some time improving itself, but then bottoms out, running for thousands of generations without making any significant progress. A human expert has to intervene and introduce new rule-seeds. The genetic algorithm then either integrates these into the rule set or, if they are ineffective, slowly lets them "die off".
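
A toy version of that breed-and-seed loop looks something like this (the fitness function is a stand-in for Bekmann's wiring simulator, and all the numbers are invented):

    import random

    random.seed(0)
    N_RULES = 40                                   # candidate rule pool
    GOOD = set(random.sample(range(N_RULES), 8))   # secretly useful rules

    def fitness(ruleset):
        # stand-in for "simulate thousands of wiring runs and score the circuit"
        return len(ruleset & GOOD) - 0.1 * len(ruleset)

    def mutate(ruleset):
        return ruleset ^ {random.randrange(N_RULES)}   # flip one rule

    def evolve(pop, generations, seeds=()):
        best, stagnant = max(pop, key=fitness), 0
        for _ in range(generations):
            pop = sorted(pop, key=fitness, reverse=True)[:len(pop) // 2]
            pop += [mutate(p) for p in pop]            # survivors breed
            top = max(pop, key=fitness)
            if fitness(top) > fitness(best):
                best, stagnant = top, 0
            else:
                stagnant += 1
            if stagnant > 20 and seeds:
                # the human-expert step: inject fresh rule-seeds; useless
                # ones die off under selection, useful ones spread
                pop.append(set(seeds))
                stagnant = 0
        return best

    pop = [set(random.sample(range(N_RULES), 5)) for _ in range(20)]
    print(sorted(evolve(pop, 200, seeds=[1, 2, 3])))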

It took the researchers and some circuit wiring experts a week of working interactively with the tool to produce an optimal rule-set of about 200 rules that could create a circuit design as good as one previously built by experts in half a year.

The result, while impressive from a computer science point of view, is also very interesting from a Krishna conscious point of view. Even simulated evolution only works on very simple problems. The evolving computer program requires periodic intelligent intervention as soon as a slightly more complex task needs to be solved. Complex new features cannot and do not appear by accident.

IJCAI day 5

Today was all about my field: description logics and ontologies.
[Photo: Sony QRIO]

Realization: most AI research is very simple, but researchers disguise the triviality of their solutions by loading their presentations with complicated-looking math equations (which no one, even experts in the field, can hope to understand in the few seconds they appear on a slide) and by speaking in the secret language of technical jargon, so that anyone who doesn't know the codewords can't grasp what is actually going on. Since I actually knew a bit about what the researchers presented today (I'm an insider in the DL cult), I could see through some of the attempts to make research look more complicated than it actually was.

Franz Baader presented a paper on a polynomial-time fragment of OWL-DL (SHOIN). His research focused on providing a description logic for use in medical ontologies, which are often relatively simple but quite large. His proposed logic EL+ includes conjunction, subsumption, nominals, GCIs and disjointness axioms, but does not include negation, inverse roles, disjunction, or number/cardinality restrictions.

Dmitry plans to build this quick algorithm into FaCT++ at some point. This would result in a kind of hybrid reasoner that is really fast for the easy stuff and can bring in the full power of the tableaux algorithm to solve the more difficult classification tasks. Obviously the holy grail is to also link in first-order logic reasoning to be able to reason over almost any construct.

Speaking of Dmitry, he also presented a paper on various optimizations in his FaCT++ reasoner. He has implemented a system of TODO-list-like reorderable queues that lets the reasoner dynamically order rule execution. Existential restrictions can be evaluated last, and non-deterministic disjunction expansion can be ordered in an intelligent fashion. The reordering rules can also be varied depending on the type of ontology. GALEN, for example, requires a very different rule ordering from other ontologies to achieve maximum classification performance.

Heiner Stuckenschmidt talked about various means of mapping multiple ontologies together. His conclusion: use the e-connections technique invented by Jim Hendler's Mindswap research group in Maryland. It captures more of the different connection semantics than any other methodology.

I learned yet more about bucket elimination, constraint processing and local search. Adnan Darwiche, a very fast-talking professor, gave the afternoon keynote address. I'll need some time to think about this.

Ian Horrocks gave the best talk of the day. He talked about a new decision procedure for SHOIQ that he and Ulrike Sattler came up with. What made his talk so good was that he didn't make it complicated: he explained the process of reasoning over expressive ontologies abstractly and intuitively. SHOIQ reasoning turned out to be more difficult than anyone would have thought. But now, finally, a reasoner can be built that can classify OWL-DL with all its features, bells and whistles.

Thomas Bittner gave a (confusing) talk on parthood, componenthood and containment. He didn't really say much in 25 minutes. His conclusion: use his new, so-called L-language to express transitive role propagation and some other stuff in a kind of pseudo-first-order-logic layer over OWL. Yet another rules language. Yawn.

Luciano Serafini introduced DRAGO (Distributed Reasoning Algorithm for a Galaxy of Ontologies). It is a peer-to-peer reasoning system for distributed ontologies that uses Pellet at each node in the network of reasoners. He also described a (confusing) concept of "holes" that allows inconsistent ontologies to be ignored by the reasoner. It seems kind of obvious, but maybe there is more to it than that.

Sony's QRIO, the much-hyped world-touring robot prototype, finished off the day. The very lifelike, 2-foot-tall humanoid robot gave a number of demos. The QRIO could dance in techno and salsa styles. He could also speak (in Japanese), navigate an obstacle course full of stuffed animal toys, crawl under a table, climb a small set of stairs and kick a ball around. He did all this by using his stereo-vision camera eyes (which could change color in a very cool-looking effect) to evaluate his surroundings. He also did some facial recognition. Finally, he had the ability to respond to sounds, detecting and moving towards the clapping of hands behind him.

All this was running inside the cute, tiny robot on 3 separate 400 MHz RISC CPUs running what looked like Red Hat Linux. The QRIO could operate fully autonomously, though the Sony engineers could also control him wirelessly from their laptops. Very impressive overall. Scarily human-like walking motion, gestures (and dancing). No doubt we'll be seeing a production-model QRIO soon. Young and old kids around the world will so want one.

It took many hundreds of Sony engineers and many decades of worldwide academic research to produce this robot, which can just about mimic a few basic human/animal functions. And yet scientists say that the super-complex machine of the human body must have come about completely by chance. No intelligent design whatsoever was involved.

IJCAI day 4

Tutorials are over. No more play time. The actual conference started today.

It kicked off with a keynote from Alison Gopnik, author of "The Scientist in the Crib", a book about how young children learn and experiment very effectively. Grown-ups generally can't come up with novel ways of approaching a problem, whereas children will try all kinds of crazy things when figuring something out. Playfulness is important. Interestingly, kids as young as 3 years old can do probabilistic causal inference almost as well as grown-ups. Most people are only good for producing things and management when they get older. The young are the innovators.

An interesting talk on temporal reasoning described adding temporal markup to a corpus of newspaper articles. The system used a first-order logic reasoner (OTTER) to allow users to make free-text temporal queries on the data set, e.g. "who were the prime ministers of France from 1962 to 1998?"

When it came time for questions, I asked how much the temporal reasoning slowed down their query processing. Their answer: while a normal search takes 0.1 seconds to answer, turning on temporal inference increases the query time to 4-10 minutes (depending on the number of transitive chains that need to be evaluated). Uh-huh. Next. First-order logic reasoning is too slow.
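
The slowdown makes sense: every query re-derives long transitive chains inside the theorem prover. A minimal sketch of the obvious fix (my own illustration, not their system): close the "before" relation once up front, and each temporal lookup becomes a constant-time set membership test.

    def transitive_closure(pairs):
        # naive fixpoint: keep adding implied (a, d) edges until stable
        closure, changed = set(pairs), True
        while changed:
            changed = False
            for a, b in list(closure):
                for c, d in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    BEFORE = transitive_closure({("A", "B"), ("B", "C"), ("C", "D")})
    print(("A", "D") in BEFORE)   # -> True, a set lookup at query time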

Carsten Lutz gave a survey of description logic work. Many ontology reasoning systems are EXPTime in the worst case, but do quite well in the average case. This makes them quite usable in practice. However, more tools and systems integration is now required.

I found out what "hypergraph decomposition" is. A researcher from Vienna was presenting a poster on the subject. Hypergraphs are graphs in which each arc/edge can connect more than two nodes. They are good at capturing several NP-complete problems graphically. Perfectly decomposing a hypergraph is, of course, intractable in the worst case; a graph with as few as 100 nodes can require days of processing. However, a quick-and-dirty algorithm called "bucket elimination" does very well.
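
Bucket elimination itself is simple enough to sketch. Here is a decision-only toy version for ordinary constraint networks (my own encoding of the textbook scheme, not the poster's algorithm): each constraint goes into the bucket of its latest variable in the ordering, and eliminating a variable replaces its bucket with one derived constraint over the remaining variables.

    def bucket_elimination(domains, constraints, order):
        # constraints: list of (scope, pred), pred takes a dict covering
        # the scope; returns True iff the network is satisfiable
        pos = {v: i for i, v in enumerate(order)}
        buckets = {v: [] for v in order}
        for scope, pred in constraints:
            buckets[max(scope, key=pos.get)].append((scope, pred))
        for v in reversed(order):              # eliminate last variable first
            bucket = buckets[v]
            if not bucket:
                continue
            scope = sorted({u for sc, _ in bucket for u in sc} - {v}, key=pos.get)

            def joined(a, bucket=bucket, v=v):
                # v is eliminable iff some value of v satisfies its bucket
                return any(all(p(dict(a, **{v: val})) for _, p in bucket)
                           for val in domains[v])

            if not scope:                      # ground constraint: test it now
                if not joined({}):
                    return False
            else:
                buckets[max(scope, key=pos.get)].append((tuple(scope), joined))
        return True

    # X < Y < Z over {1, 2, 3} is satisfiable; adding Z < X creates a cycle.
    doms = {v: [1, 2, 3] for v in "XYZ"}
    lt = lambda u, w: (lambda a: a[u] < a[w])
    chain = [(("X", "Y"), lt("X", "Y")), (("Y", "Z"), lt("Y", "Z"))]
    print(bucket_elimination(doms, chain, ["X", "Y", "Z"]))            # -> True
    cycle = chain + [(("Z", "X"), lt("Z", "X"))]
    print(bucket_elimination(doms, cycle, ["X", "Y", "Z"]))            # -> False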

This conference is turning out to be quite useful. My body actually functioned reasonably well today, too.

IJCAI day 3

I was really sick this morning. I felt horrible. My body seemed to reject everything I ate the previous day. No wonder, really, considering what it was. I've come to the conclusion that people in the UK can't cook vegetables (except potatoes) without turning them into poison. Even the potatoes taste like nothing. I'll eat only fruit and cereal for the remainder of the conference. It's not the healthiest diet, but the alternative is worse.

I'm not even going to talk about the people's consciousness when preparing the food. Simple technical cooking skill alone is bad enough to kill me.

Workshop day today. It was on "Intelligent Techniques for Web Personalization".

The title sounded interesting, but the presenters were not. Some truly awful presentations sent me straight to sleep. Some Indian researchers presented the most boring and unoriginal "research" I've ever seen. Slaves of the West. Some American guy gave another guy's presentation, which he knew nothing about. I couldn't understand a word he was saying. It made absolutely no sense whatsoever.

Some of the things I learnt:
- Never use more than three colors on a website. It looks horrible.
- People, in general, understand the concept of "menus" very well. Search isn't as intuitive for the average person.
- Click-through rates are not an accurate measure of the usefulness of a web resource. However, adding a "time spent reading the page" metric makes them quite accurate.
- Personalisation techniques will be quite important for mobile devices with limited screen real estate.
- Component critiques and custom deep-links are useful for cutting through a large search space to an area of interest. Fine-grained links are then necessary to zero in on exactly what the user wants.
- Lots of work on personalizing search, but nothing to write home about. Ontology matters.
- Product recommendation systems are frequently attacked by companies wanting to boost their particular product's ratings. Amazon and CNet suffer heavily from this. Even a simple shilling attack will dramatically distort a product's rating. Something to be aware of.
- Using a domain ontology helps in product recommendation. With an ontology, the system can improve its recommendations, give the user a compelling explanation of why a product was recommended, and even gain a certain degree of protection from shilling attacks. For example: "I see you like films with Tom Cruise; other people of your gender who like Tom Cruise also liked romantic comedies with Mel Gibson. Here is one you haven't yet seen."
- Look at RuleML for automating reasoning about recommendations.

IJCAI day 2

I somehow managed to read an hour of the Nectar of Devotion throughout the day. Wow. Amazing book. It gets better every time I read it. Reading it makes me ecstatic, even if I don't understand what it is talking about. The other conference attendees must have been wondering why I was grinning ear-to-ear while reading some book.

Today's tutorial was on "Principles of AI Problem Solving". Three professors talked the group through various "classic" AI methods for solving problems.

All problems in classic AI can be reduced to the satisfiability problem SAT. Many common problems can be abstracted as moving a robot from a certain initial state to a certain goal state on a grid. Four variations are possible:

- Actions are predictable and we can see exactly what happens.
- Actions are predictable, but none of the moves are observable.
- Actions' effects are random/probabilistic, but we can see what happens.
- Actions' effects are probabilistic and we can only get partial information about the events on the grid. This is the hardest problem.

The various techniques for solving these problems involve searching different types of graph models of the problem space. Graphs can be transformed in certain ways to improve the efficiency of the search. A simple transformation is, for example, to reorder the graph to start at the node with the smallest domain/branching factor.
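
The smallest-domain-first trick is a one-line change in a basic backtracking solver. A minimal sketch on a toy map-coloring problem (my own example; real solvers also shrink the domains as they assign):

    def backtrack(domains, constraints, assignment=None):
        # always branch on the unassigned variable with the smallest domain
        assignment = assignment or {}
        unassigned = [v for v in domains if v not in assignment]
        if not unassigned:
            return assignment
        var = min(unassigned, key=lambda v: len(domains[v]))
        for val in domains[var]:
            assignment[var] = val
            if all(pred(assignment) for scope, pred in constraints
                   if all(u in assignment for u in scope)):
                result = backtrack(domains, constraints, assignment)
                if result:
                    return result
            del assignment[var]
        return None

    neq = lambda u, w: (lambda a: a[u] != a[w])
    doms = {"WA": ["r"], "NT": ["r", "g", "b"], "SA": ["r", "g", "b"]}
    cons = [(("WA", "NT"), neq("WA", "NT")), (("WA", "SA"), neq("WA", "SA")),
            (("NT", "SA"), neq("NT", "SA"))]
    print(backtrack(doms, cons))   # -> {'WA': 'r', 'NT': 'g', 'SA': 'b'}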

One lecturer mentioned a technique called "hypergraph decomposition" that can be used to break a graph into weighted, equi-sized pieces. The AI problem is thereby divided up and (hopefully) becomes solvable in logarithmic time instead of the usual exponential time necessary to solve NP-complete problems.

I might be able to use this decomposition technique to break up my ontology by using a reasoning dependency structure. That would help a lot. Very interesting. I'll investigate further.

IJCAI day 1

The conference started today. The first few days are workshops and tutorials. The actual conference comes later.

So, today I attended a tutorial on "Automated Reasoning in First-Order Logic". Professor Andrei Voronkov from the University of Manchester talked us through his Vampire theorem prover (version 8). He's been working on the system for the past 10 years, and it is by far the fastest first-order logic reasoning system in the world. It wins just about every category in a yearly theorem-proving competition. The competition is often as much as 100 times slower. Vampire devours other provers.

However, for specialized reasoning in OWL, dedicated tableaux-algorithm-based reasoners are quicker than Vampire. For now. I learnt that there are many parameters by which Vampire can be tweaked. A small change in the parameters will often let the prover answer in a couple of seconds a problem that previously took 24 hours. However, finding the optimal settings is very much a black art. No one understands which combination of parameters will give a good result. Andrei himself is pretty good at it, but he doesn't have the time to investigate all the possible things people want to do with Vampire.

Professor Voronkov is taking a sabbatical at Microsoft Research in Seattle for the next year. Microsoft wants to use Vampire to formally verify device drivers. Bad drivers are a frequent cause of Windows crashes, so Microsoft is very interested in translating driver code into logic syntax and letting Vampire find the bugs. Intel has been verifying all its chips in a similar fashion since the embarrassing bug in the original Pentium processor that caused it to give incorrect answers on a few simple division operations.

Relating to my research, I found out that Vampire, unlike the tableaux-based reasoners, doesn't have a problem classifying large data structures. One major difference between description logic and FOL is that the latter is undecidable: the prover can answer "don't know". A description logic reasoner, in theory, will always be able to answer conclusively. In practice it often answers "stack overflow" when faced with the ontologies I throw at it.

Anyway, Vampire achieves its low memory usage by simply discarding inactive clauses generated by its resolution process. The most "young and heavy" clauses are going to be processed last anyway, so why not just throw them out? We're likely to find a solution (= empty clause = contradiction) before then. I wonder if I can do a similar thing in description logic. I'll lose some completeness, of course.
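
Here is what I understand the discarding idea to look like, scaled down to propositional resolution (a toy of my own; the real Vampire works on first-order clauses with far cleverer machinery):

    import heapq
    from itertools import count

    def refute(clauses, max_passive=10000):
        # clauses: frozensets of ints, a negative int is a negated atom.
        # Given-clause loop: returns True if the empty clause is derived.
        tick = count()        # tie-breaker so the heap never compares sets
        passive = [(len(c), next(tick), c) for c in clauses]
        heapq.heapify(passive)
        active, seen = [], set(clauses)
        while passive:
            _, _, given = heapq.heappop(passive)   # lightest clause first
            if not given:
                return True                        # empty clause = contradiction
            for other in active:
                for lit in given:
                    if -lit in other:
                        resolvent = (given - {lit}) | (other - {-lit})
                        if resolvent not in seen:
                            seen.add(resolvent)
                            heapq.heappush(passive,
                                           (len(resolvent), next(tick), resolvent))
            active.append(given)
            if len(passive) > max_passive:
                # the discard: keep the light clauses and silently drop the
                # "young and heavy" ones -- memory saved, completeness lost
                passive = heapq.nsmallest(max_passive // 2, passive)
                heapq.heapify(passive)
        return False                               # saturated, or gave up too much

    # p, p -> q, not q is unsatisfiable:
    print(refute([frozenset({1}), frozenset({-1, 2}), frozenset({-2})]))  # -> True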

IJCAI day 0

IJCAI, the International Joint Conference on Artificial Intelligence, is probably the most important conference in the area of AI in the world (another important one is AAAI). This year's IJCAI is in Edinburgh, Scotland. The conference will be in Hyderabad, India next year.

My supervisor thought it was a good idea for me to attend this year's conference, especially since it is so close to Manchester. He saves some money and I (hopefully) learn something.

So I traveled up to Scotland, wandered the streets of Edinburgh trying to find the place where I was supposed to stay (I had forgotten to take money for a taxi and couldn't find a cash machine), eventually found the Pollock Halls and collapsed in my room.

Edinburgh is a very old city with lots of history: ancient rock walls, rustic buildings and stone bridges. However, this backdrop does little to hide the usual vices of Kali-yuga. Scots seem a bit more brash than the usual Englishman. The homeless are more obvious, the drunks more visible, and prostitutes abound everywhere. So, altogether, a typical western city.

Tags for the masses, ontologies for developers

In my line of research I'm very much involved with ontology development. I'm not going to beat around the bush: developing ontologies is hard. Really hard. The more logically rigorous they get, the more difficult they become to construct.

So, you might ask, how is the vision of the great and wonderful "semantic web" ever going to work? After all, ontologies are the framework that is meant to undergird the Internet of tomorrow.

Take a look at del.icio.us, flickr.com and Technorati. They all use the up-and-coming (craze-of-the-moment) idea of tagging. You allow people to add any word to their content and collect all these tags into a large list. The larger the font, the more frequently used the tag. The obvious problems are synonyms and homonyms. However: who cares?! It kind of works, anyone can understand the idea, so wa-hey: let's go tag crazy.
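
The whole tag-cloud trick fits in a few lines. A minimal sketch (scaling font size with log frequency is a common choice, so one runaway tag doesn't flatten everything else to the minimum size):

    import math
    from collections import Counter

    def tag_cloud(tags, min_px=10, max_px=36):
        counts = Counter(tags)
        lo, hi = math.log(min(counts.values())), math.log(max(counts.values()))
        span = (hi - lo) or 1.0    # avoid dividing by zero when all counts tie
        return {t: round(min_px + (max_px - min_px) * (math.log(n) - lo) / span)
                for t, n in counts.items()}

    print(tag_cloud(["ai"] * 50 + ["owl"] * 10 + ["vedic"] * 2))
    # -> {'ai': 36, 'owl': 23, 'vedic': 10}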

Ontologies, however, are much more powerful and dangerous. They exactly and unambiguously define terms and formally capture relationships between terms. You get transitivity, inheritance and other great stuff like that. Moreover, computers can automatically navigate these data structures and use them to answer almost any question you can throw at them. Feel the power!

What to do? The general populace is never going to be able to author ontologies, but could possibly be induced to use them, given a simple enough interface. So, if the subject area we are describing is sufficiently limited that we can construct an ontology to cover it (no one is going to be able to create an ontology of "everything"), then we can allow people to tag their content with our ontology's terms. The result: we can have our computers sort, manage, slice and dice the tagged content any which way, take advantage of all the advanced features, and the world is a better place. Amen.
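
A minimal sketch of the payoff (the hierarchy and items are invented for illustration): tag content with terms from a small ontology, and subclass reasoning widens queries automatically.

    SUBCLASS = {                        # child -> parent
        "samosa": "savoury", "savoury": "food",
        "halava": "sweet",   "sweet":   "food",
    }

    def ancestors(term):
        while term in SUBCLASS:
            term = SUBCLASS[term]
            yield term

    ITEMS = {"lunch.jpg": {"samosa"}, "dessert.jpg": {"halava"}}

    def query(tag):
        # an item matches if any of its tags is the query term or a
        # subclass of it -- the inheritance flat tags can't give you
        return [item for item, tags in ITEMS.items()
                if any(tag == t or tag in ancestors(t) for t in tags)]

    print(query("food"))    # -> ['lunch.jpg', 'dessert.jpg']
    print(query("sweet"))   # -> ['dessert.jpg']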

Wisdom to know the difference

God,
Give us grace to accept with serenity the things that cannot be changed, courage to change the things which should be changed, and the wisdom to distinguish the one from the other.

(The Serenity Prayer is generally thought to have been written by Reinhold Niebuhr. Frequently used by Alcoholics Anonymous.)

"You have a right to perform your prescribed duty, but you are not entitled to the fruits of action. Never consider yourself the cause of the results of your activities, and never be attached to not doing your duty". (BG 2.47)

Every leader should have a blog

I was listening to an excellent talk with Jonathan Schwartz, president and COO of Sun Microsystems. One of the many interesting things Jonathan said was that a blog is a great tool for leaders. Every leader should have one. He uses his blog to communicate his ideas to his employees, who can also interact with him directly by posting comments and talkbacks. It effectively cuts through the corporate hierarchy and allows him, as a leader, to directly lead a large number of people. The result: massively decentralized decision making and management!

The alternative is going through the usual management structure, down the multi-level corporate hierarchy, a process that is both slow and prone to Chinese whispers.

As to the danger of putting his corporate strategy up on the net for everyone, including competing companies, to read: "The competition's employees also read it and if they like what I'm saying better than what their boss is saying, they'll join Sun".

The Overthrow of Everything

The Revolution Will Not Be Televised -- Democracy, The Internet, and the Overthrow of Everything is a book on the story of Howard Dean's presidential campaign. He was a candidate with no chance whatsoever of winning who, out of nowhere, almost won the US primaries. He did this by using the Internet and his campaign blog in many innovative ways. Listen to his former campaign manager, Joe Trippi, talk about the whole thing.

Vlogs

Wired magazine has an article about vlogging, or video blogging, or video world wide web logging (to expand the shorthand completely). Short 3-5 minute videos of Vedic philosophy, delivered in a fun way by devotees with interesting personalities, have so much potential to become really popular.

The most popular of the vlogs is Rocketboom. We can do at least as well as they do, don't you think? Let alone the other (terrible) vlogs out there.

Gurudeva stopover: day three
→ Home

Snippet of advice: "Make the best Krishna conscious decision at the moment. Who knows what is going to happen in the future?"

CD: It is difficult dealing with people's packed schedule of engaging their senses in so many ways.
DS: Keep trying. The mode of passion is like that. They are like monkeys pointlessly swinging from branch to branch.
DS: Mode of goodness means that one is not attached to the result. It also means that one doesn't do just anything, but engages in mode of goodness activities.

Lunch:

Drink: Apple and Ginger juice
Salad: Carrot and Watercress Salad with Tahini sauce dressing (GVD page 127)
Cumin Basmati Rice with Wild Rice
Pea and Broccoli Samosas (my spicing was just a bit off)
Subji 1: Kumara, Corn and Spinach (variation of Sweet Potato Pie filling GVD page 91)
Subji 2: Tomato Soup with Zucchini (GVD page 27, but without flour)
Vanilla Raisin cookies (variation of Chinese Almond Cookies GVD page 141, needs more flour because of the extra liquid in the raisins)

Computer questions I answered:

  • As a travel wireless network router, the Apple AirPort Express is the best option: the lightest, smallest, most fully featured and easiest to use. However, it costs nearly double the price of the competition. Nevertheless, I recommended the quality Apple product.
  • The Docupen and other pen-like hand-held scanners remain too flaky for actual use.
  • Flash memory stick prices have plummeted, since the price of the underlying NAND flash chips has fallen drastically.
    The reason: the iPod shuffle has not sold nearly as well as expected. Consequence 1: the Chinese and Taiwanese flash memory manufacturers have overproduced. Consequence 2: supply now exceeds demand. Result: prices fall (hint: buy memory in the next few months!)
  • A digital camera for capturing spontaneous shots has to be small and light enough to carry around anywhere and capable of capturing clear images in low light. I recommended the Fuji F10, an ultra-compact point-and-shoot with D-SLR-like ISO 1600 capability.

... and just like that, he was gone again. Off to help people in far-off countries ...

Gurudeva stopover: day two
→ Home

Today he met with Hitesh and his mother. He also met with two youth workers who work to better the lives of 14- to 25-year-olds using a holistic, six-angled view. A recording of the conversation is available in the audio downloads section.

Lunch:

Drink: Apple and Ginger juice (not enough ginger)
Salad: Spinach and Tomato
Coriander Basmati Brown Rice with Wild Rice
Corn on the Cob
Subji 1: Kumara and Broccoli
Subji 2: Vegetable au Gratin (GVD page 56)
Carob Cake with Strawberry Jam filling and Vienna Icing (GVD page 151)

His response: You're good at the pies and cakes; Shilpa is good at subtle Indian spicings and flavours.

Gurudeva stopover: day one
→ Home

My spiritual master is staying with me for a few days. Hare Krishna!

He arrived from Russia via Finland, tired and hungry. The UK is very, very different from Russia, he said.

Shilpa and I prepared his lunch:

Drink: Apple and Ginger juice (revived him by restoring his appetite)
Salad: Lettuce, Radish and Carrot
Yellow Basmati Rice with Wild Rice
Split-Mung Dal (GVD page 27)
Subji 1: Green Beans with added Carrots (GVD page 55)
Subji 2: Zucchini, Green Peppers and Tomato (GVD page 58)
Easy Apple Pie (GVD page 145)

His response: Hare Krishna! Really good. You two could open a restaurant. Really good. ... Later: That was good prasadam! I can't get over it!

It is nice to please the spiritual master and, for once, not to be a total space-out.

Podcasting on the Mac
→ Home

Prominent blogger John Gruber has posted a very good article on his "Daring Fireball" blog about what Apple is doing to support podcasting on the Macintosh. Among other things he comments on how quickly Apple has picked up on this very new phenomenon and developed software to handle it. They must see major growth potential (as also predicted by Adam Curry, see here).

All the tools and tech are outlined in detail. Check it out if you want to create podcasts and own a Mac (or just use iTunes on the PC and want to listen in).
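
For background, a podcast is technically just an RSS feed whose items carry audio enclosures; iTunes and other podcast clients read the feed and download the enclosed files. Here is a minimal Python sketch of such a feed being generated (all URLs, titles and file details are hypothetical placeholders, not anything from Gruber's article):

    # A minimal sketch of what a podcast feed boils down to: an RSS 2.0
    # document whose <item> elements carry an <enclosure> tag pointing
    # at an audio file. All values here are hypothetical placeholders.
    import xml.etree.ElementTree as ET

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example Podcast"
    ET.SubElement(channel, "link").text = "http://example.com/podcast"
    ET.SubElement(channel, "description").text = "A hypothetical show."

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "Episode 1"
    # The enclosure is what makes this a podcast: it tells the client
    # where the audio file lives, how big it is, and its MIME type.
    ET.SubElement(item, "enclosure", {
        "url": "http://example.com/podcast/episode1.mp3",
        "length": "12345678",  # file size in bytes
        "type": "audio/mpeg",
    })

    # Write the feed out; clients subscribe to this file's URL.
    ET.ElementTree(rss).write("podcast.xml", encoding="utf-8",
                              xml_declaration=True)

A client subscribed to the resulting podcast.xml would then fetch episode1.mp3 automatically whenever a new item appears.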

Blogs: use both RSS and email
→ Home

Here is an interesting take on the RSS phenomenon. The author's basic point: RSS is too complicated for most people, so provide them with an email delivery mechanism in addition to RSS.

A "top-10 blog postings of the week" type email newsletter could, for example, reach a much wider audience than a difficult to subscribe to RSS feed.

Lotus flowers and compassion
→ Home

Compassion is like a lotus flower. Did you know that a lotus flower has a brilliant self-cleaning mechanism? Its leaves' surfaces are not smooth, as one might expect, but very rough. The roughness makes water bead up into droplets, and as a droplet glides off the leaf's surface any dirt attaches itself to the water and is carried away. The result: a perfectly clean and beautiful flower.

I hate death. I'm sad even when a bee dies (and, believe me, I hate bees), let alone when another human being leaves his or her body. My condolences to the victims of the London terror attack and the World Trade Center terror attack. My condolences to the victims of the war on terror in Iraq and Afghanistan. I wish they could all still be alive. However, as stated in the Bhagavad-Gita, for everyone who is born death is certain (BG 2.27). It's only a question of time.

So, though I may sometimes/often seem quite cruel and heartless, this attitude is not due to a lack of compassion (though possibly due to a lack of tact). I'm attempting to agitate the dirt of materialistic conceptions out of my mind (and the minds of others) by "roughing things up" and letting current events do the cleaning. Please help me in this endeavour.