Monday, 10 October 2011


It is beyond argument that democracy is the best form of government that humanity has implemented so far.  It may not be the best possible form of government (see here, for example), but it's the best we've tried.  Democracies are the richest societies on Earth; their populations have the longest life expectancy; and they have the worst immigration problems (which is another way of saying that everyone else wants to live in them).

But there is a superficial paradox: democracies are intrinsically inefficient compared to, say, dictatorships in the same way that a mob is less efficient than a disciplined army, and for the same reasons.  So why should an inefficient form of government work best?  The obvious answer - that government in itself is a bad thing and that less of it is therefore better - falls at the first counterexample: Somalia has no government at all, and it is one of the most unpleasant places to live in the World.

Democratic politicians are no less venal and corrupt than politicians working in other forms of government (read any newspaper for proof); indeed it is reasonable to suppose that much the same people would be running the government regardless of the political system under which they found themselves operating.  Think of any minister in your government and visualize him or her serving under Robert Mugabe ("I think it's best to work for change from within.").  It's not a big leap of imagination, is it?

No.  The reason that democracy works is not because it puts the right people in government.  There are no right people to be in government because no human being - you, me, Barack Obama, Wen Jiabao; none of the seven billion of us - has the faintest idea how to run a country.  (We merely all have opinions about how it should be done, which is not the same thing at all.)  The reason that democracy works is because it has a solid mechanism for removing people from power.

Having no government is bad (Somalia).  Government by the same people for a long time is bad (Zimbabwe).  But high turnover among governors is good.

It follows that, in an election, we should all ignore the record of the incumbents, we should all ignore the policies of the candidates, and we should all ignore their personalities.

We should simply vote in the way that is most likely to remove the current lot (whoever they are) from office.

Friday, 7 October 2011


There are two main evolutionary theories of altruism: Fisher, Haldane and Hamilton's idea of kin selection, and Trivers' idea of reciprocal altruism.

These two theories are not mutually exclusive, and both may operate together.  I would like to propose a third mechanism that may also be operating.

It is this.  Your fitness is increased if you associate with altruistic people, whether you yourself are altruistic or selfish.   In the latter case the others may well find you irritating, but even then - all things being equal - the others are less likely to act against you than more selfish people would.  Thus we would expect all individuals to seek out altruistic company.

People are inclined to have children with those with whom they associate, simply because of opportunity.   When altruistic people have children with other altruistic people that will tend to reinforce impulses towards altruism in their children (though we should be cognizant of the regression to the mean).  And when selfish people have children with altruistic people, that will tend to dilute selfish impulses.

Thus we should expect altruistic behaviour in the population as a whole to rise in response to the statistical effect that everyone is more likely to have children with altruistic people than they are with selfish people, simply because of the breeding opportunities provided by the ubiquitous preference for association with the altruistic.

This principle does not just apply to altruism and selfishness.  For example everyone - whether well or ill - will have a preference for associating with people who are well because the associators will then be less likely to catch something nasty from the associatees.  Thus we should expect disease resistance to rise, even above the rise that would be expected anyway simply because disease resistance is in itself intrinsically evolutionarily fitter.

Thus, let X be a characteristic possessed by animal A.  If, by associating with A, animal B increases its fitness regardless of whether B possesses X or not, then we would expect the proportion of X in the population to increase. 

Note that I said "animal" - animals are motile and associate voluntarily.  This principle should apply to any organism that can move about and decide who its friends are, and it will apply particularly strongly in social species (us, say, or bats).  But plants, for example, and solitary animals (leopards, say, or polar bears) will be much less likely to exhibit the principle.

I have decided to call this selection by associative opportunity.
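The statistical effect is simple enough to check in a toy simulation.  The sketch below is a minimal model, not a claim about real populations: the threefold partner preference for altruists and the starting frequency are arbitrary assumptions of mine.

```python
import random

def generation(pop, pref=3.0, rng=random):
    """One generation: each individual breeds once, choosing a partner
    with probability weighted pref-fold towards altruists ('A')."""
    weights = [pref if t == 'A' else 1.0 for t in pop]
    children = []
    for parent in pop:
        mate = rng.choices(pop, weights=weights, k=1)[0]
        # The child inherits the trait of one parent, chosen at random.
        children.append(rng.choice([parent, mate]))
    return children

def altruist_fraction(pop):
    return pop.count('A') / len(pop)

random.seed(1)
pop = ['A'] * 200 + ['S'] * 800   # start with 20% altruists
start = altruist_fraction(pop)
for _ in range(30):
    pop = generation(pop)
print(f"{start:.2f} -> {altruist_fraction(pop):.2f}")
```

Because altruists appear in more pairings than their frequency alone would warrant, the 'A' fraction drifts upwards generation by generation - which is the argument above in fifteen lines.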

Wednesday, 28 September 2011


Some professions seem highly heritable.  Think of the number of famous writers and actors whose parents did the same thing.  Though less publicly obvious, engineering is a highly heritable profession too.  (Anecdotally, I am an engineer and my father was an engineer, as was my maternal grandfather.)

And yet, no one seems (i.e. I did one Google search...)  to have studied this.  There are - literally - gigabytes of public records in the form of marriage certificates and the like that simultaneously record the jobs of both parents and their children, so it ought to be straightforward to rank professions by their heritability.  There would probably be a bias towards father/son relationships that would mirror the inequity in job opportunities in past ages (and today...), but in such a large statistical sample it ought to be possible to control for that.
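If the records could be reduced to (parent's job, child's job) pairs, the ranking itself is a few lines of code.  The sketch below scores each profession by how much more likely a child is to follow a parent into it than the base rate predicts; the sample records are invented for illustration, not real data.

```python
from collections import Counter

def heritability_ranking(records):
    """Rank professions by enrichment: P(child does X | parent did X)
    divided by P(any child does X).  records: (parent_job, child_job) pairs."""
    n = len(records)
    child_base = Counter(child for _, child in records)
    parent_counts = Counter(parent for parent, _ in records)
    followed = Counter(parent for parent, child in records if parent == child)
    scores = {}
    for job, n_parents in parent_counts.items():
        p_follow = followed[job] / n_parents   # child followed the parent
        p_base = child_base[job] / n           # overall rate of that job
        if p_base:
            scores[job] = p_follow / p_base
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented illustrative records, not real data.
records = [('engineer', 'engineer'), ('engineer', 'clerk'),
           ('actor', 'actor'),       ('actor', 'actor'),
           ('clerk', 'clerk'),       ('clerk', 'engineer'),
           ('farmer', 'clerk'),      ('farmer', 'farmer')]
print(heritability_ranking(records))
```

An enrichment ratio of 1 means a profession is no more heritable than chance; the real interest would be in how far above 1 the top of the list sits.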

A ranking of professions by heritability would serve as a rich basis for all those entertaining nature/nurture arguments about human characteristics, as well as for some serious genetic and developmental studies...

Wednesday, 21 September 2011


Money is not evolutionarily stable.

By this, I mean that no living organism that had a system of exchanging worthless, but hard-to-forge, tokens for items of value with other organisms (either of its own species, or of another) would ever have evolved by natural selection.

Firstly, it is obvious that this has never happened.  No living thing has - or has ever had - a system equivalent to money.  This could be simply because the money mutation has never been chanced upon by natural selection.  But in fact it goes deeper than that.  Such a mutation would be less fit than its competitors and would die out, as I shall show.

But hang on, you say, people are naturally-evolved living organisms, and people use money.  Yes - but human money is a temporary aberration; our money is about to succumb to the forces that have prevented its evolving and surviving elsewhere in nature.  Indeed, we have just started to see the first signs of its mortality. 

Before we go on, note that there are plenty of bartering systems in nature.  Nectar is exchanged for pollination; sugary fruit is exchanged for the distribution of seeds.   But nectar and fruit have real value; they cost real energy to synthesise and their consumer gets some of that real energy back; they are like you obtaining your groceries by giving the shop 20 litres of petrol.

So why don't plants give bees a nectar token that the bees can cash in for a meal later, or maybe exchange with a house martin for a nest site?  The reason is the phrase "hard-to-forge" that I started with.  Such a scheme would put a selection pressure on the bees to forge the nectar tokens so they could get free nectar (or nest sites).  And they would succeed because, like all living things, they make everything atom by atom.

Nothing, and certainly no worthless token, is unforgeable if you construct everything atom by atom, and this is the reason that money is evolutionarily unstable. 

The picture above is of a forged keypad fitted to an ATM, designed to clone credit cards.   The latest scam is to use 3D printing to make these.   Eventually 3D printers will be using atomic force microscope technology to build everything they make atom by atom.  Coins, bank notes, and credit cards will be among the simpler items to manufacture.

At that point money becomes valueless.  We will be left with just the encryption-based schemes like Bitcoin in its place.  Except that we won't, because one of the items that atom-by-atom 3D printing will probably give us is the quantum computer, which would allow anyone to crack most of the public-key encryption schemes currently in use.  Maybe money will survive in the form of quantum encryption, but that seems far from certain.

If the ability to make anything kills money, how fortunate it will be that we will coincidentally have just acquired a universal technology that will allow people to make everything that they want for themselves without money...

Thursday, 4 August 2011


Dr Sam Peters stepped carefully along the island path.  It was nine in the morning, and under the leaf canopy the air was still cool, though the tropical sun was glinting fiercely off the sea ahead through the trees.   She came to one of her traps, which was empty.  She renewed the bait in it, then carried on.  The ultrasound detector clipped to her belt was silent.

After another thirty meters she arrived at the spot where she had gathered a fecal sample the week before.  Though heavily contaminated by bacterial DNA, her analysis had suggested that this was left by that rarest organism: a mammal new to science.   Her next trap was just a little further, on the other side of a fallen branch.  As she got to it and looked over she could see that it was occupied.  She smiled and climbed across.

The trap contained a mouse-like animal with a squashed and deformed nose.  It had enormous ears.  Dr Peters' smile expanded to a laugh of delight.  The animal had vestigial membranes between its front legs and its body.  It was a bat, flightless because of the lack of predation on its remote home.   As she carefully removed it from the trap, the ultrasound detector chirped.

That little story was not intended to be about bats, though it would be credible were there still to exist such a remote island anywhere on Earth.  It was about the smile and the laugh - regardless of the rest of the story, you didn't trip over those details as being implausible.

Jimmy Carr, who knows a thing or two about laughter, says that all jokes work the same way: you set up a situation in the listener’s mind, and then reveal the true situation to be otherwise by the punch line.  He was not the first to point this out.  The listener's laughter is a pleasurable response to a realisation that they were mistaken, but are now enlightened. 

Our sense of humour is our scientific sense, distilled, our facility for systematic falsification, amplified.

There are many measures of different aspects of intelligence, and psychologists labour long to try to devise unbiased tests to quantify each.  There's everything from Spearman's familiar g, to the latest: my old friend Dylan Evans's Risk Intelligence.  But, as far as I am aware, no one has yet devised a proper psychometric test to quantify sense of humour, h.

It seems to me that devising a way of measuring h is long overdue.  And, once it has been established, it would be most interesting to try to correlate h with creativity and scientific innovative ability, and - differently - the ability to test scientific hypotheses.

Laughing at a joke is the internal reward that evolution has given us for seeing our hypothesis falsified.

Friday, 29 July 2011


Late one night, half a lifetime ago when I was a student, I was repairing an old valve oscilloscope rescued from a skip at the back of Imperial College.  I decided that I'd had enough and that it was time for bed.  I poured a bowl of cornflakes and set it on the top of the bedside fridge (Peltier, so silent) that had the milk in for the morning, and went to sleep.

At about three a.m. I was woken by a crunching noise.  I turned on the light to find my nose about 20 centimetres from the nose of a mouse holding a cornflake in both hands.

The mouse didn't stay long.  I spent a few minutes chasing it round the room, until it ran into the scope, which still had its casing off.

I then had a really good idea: I turned the scope on.  After a few moments it had warmed up, and I clapped my hands.  I heard scampering noises from inside the works.  Another clap, another scamper, then a squeak followed by silence.

I could see the corpse lying in the wiring.

I then had a really bad idea: I reached in to extract the mouse.

I found myself on the opposite side of the room.

Either directly, or via the mouse, I had touched the final anode.  I estimate that this must have been at about 6,000 volts DC.  I was fortunate that I hadn't instead touched the main positive power line at around 300 volts; that might well have finished me.  The final anode supply was, of course, severely current-limited.

I said I found myself on the other side of the room, and that is exactly what the experience was.  I remember the shock, and I hadn't lost consciousness at all.  But, because what passes for my brain hadn't told my muscles to move, that brain deduced that an external agency had thrown me across the room.  In reality I had jumped, but my brain said thrown.

The next day I got to thinking, like all inventors, about better mousetraps.

Now, as a mousetrap, an oscilloscope is perhaps over-complicated.  But its principle is simplicity itself.

So I took a square of cardboard about 30 centimetres on one side and glued a 10 centimetre disc of aluminium cooking foil in the middle.  Around this I glued an annulus of aluminium foil with a 2 centimetre gap between it and the disc.  I put a dollop of peanut butter (crunchy) in the middle of the disc, wired the two pieces of aluminium across the mains, set the trap on the floor, left a note saying "Beware Electric Mousetrap" for my flatmates, and went back into IC to do a day's research for my PhD.

On my return that evening there was another dead mouse.  This time I turned off the mains before I picked it up to dispose of it.  It was a bit cooked...

But, now I'm older, I don't think I would use HT electricity in a mousetrap.

Better would be a strong gamma source such as 60Co in a small depleted uranium labyrinth with no line-of-sight from the source to the outside.  A DU cup dropped over the source should allow you to put peanut butter in and to take dead mice out...

Saturday, 23 July 2011


"Even the most powerful man on Earth, Barack Obama,...": thus some MP on a political TV show the other day that I flipped into by mistake.

This struck me as a decidedly ill-informed and parochial view.  It was rather as if Itzhak Perlman were to say, "Even the most powerful man on Earth, Sir Simon Rattle,..."

If politicians regard Barack Obama as the most powerful person in their field of endeavour, that is fair enough.  But that doesn't make him the most powerful person on Earth.  The most powerful person on Earth is the one who has changed it most.

Look at how people change things and what resources and constraints operate when they try to do so.  Those resources and constraints work at four different levels, each overwhelmingly stronger than the one that preceded it.

The bottom, least important, level is the level of rules.  That is to say, instructions said or written down by some people in the hope (a hope sometimes backed up by main force) that others will do as they say.  They are instructions like, 'You shall not eat pork,' or, 'Here you may not drive faster than 50 kph.'

The next level up is that of economics.  Money is more powerful than rules.   Or, to put it another way, the system of rules has to strain every last clause to have even a small effect on the world of money.  For example, possessing and trading in the alkaloids produced by the poppy Papaver somniferum are against the rules just about everywhere in the World.  And yet that trade is one of the World's most enduring, sustainable and profitable industries.

The next level up is biology in general, and human biology in particular.  Consider a teenage boy spending $200 on a pair of training shoes.  There's a perfect functional equivalent available for $50, and conventional economic theory says that those cheaper ones are the pair that he should buy.  But he is not buying shoes.  He is buying a peacock's tail.  The purchase is part of his breeding strategy, albeit probably a completely unconscious one.  Biology trumps money, and recently economists (to their credit) have recognised this and introduced a whole new branch of their subject - behavioural economics - to apply corrections to economics from the biology that overrules it.

The top level is that of physics - the substrate upon which all this other activity is played out.  No organism has ever evolved a perpetual motion machine; if one had, the results would have so out-competed everything else that all living things would work that way.  This omission might be because mutation had never chanced upon the right arrangement.  But the true reason is more profound: a living perpetual motion machine would break the Second Law of Thermodynamics, which is just about the most solid physical law that we have ever discovered.  Physics trumps biology.

Now we can see the depth of the MP's mistake.  Both he and Barack Obama are operating at the bottom level - they create and modify the rule book.  When they try to influence the next level above them - the world of money - it is as if they are operating a machine using levers made from jelly.  And they cannot hope to have any influence worth speaking of in the worlds of biology and physics.

To imagine that "the most powerful man on Earth" could be a politician is to make a category error.

The person who has made the biggest change to the world in our lifetimes - the most powerful man on Earth - did so by working at the biological and physical levels.  He used electronics to satisfy the innate biological need of people to communicate.  He is Sir Tim Berners-Lee.

The engine of history is engines...

Sunday, 10 July 2011


Mountains are not really very tenable objects.  They get thrown up by tectonic plate collisions or volcanism, but gravity and the weather then flatten them pretty quickly.  That is to say, quickly if you are a stone.  The longest-lived mountains tend to be ones that are made of low-density rock, or that float on a low-density area of plate.  Those are more-or-less in equilibrium, like ships on the sea.

This gives a clue how to make an artificial mountain.  You make it from bricks filled with helium.  The bricks would be tough rectangular plastic bags with aluminised interiors to retain the gas.  Each brick might be about the size of a house, and it would come with tabs on the edges to allow it to be attached to all its neighbouring bricks in the same sort of very strong bond pattern that you find in an ordinary brick wall.  Instead of the tabs, you might even be able to do something clever with Velcro on the surfaces.

As the mountain of bricks gets higher, you reduce the pressure of the helium in the bricks to match that of the surrounding atmosphere.  Some reasonably straightforward calculations should allow the entire structure to have neutral buoyancy as a whole, and also at every horizontal slice at all altitudes.  The individual bricks, though piled kilometres high, would not be compressed by those above them, and would be almost completely unstressed.
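The buoyancy arithmetic is straightforward to sketch.  The densities below are standard sea-level values; the brick dimensions are my assumption, and note that the lift per cubic metre shrinks with altitude as the ambient-pressure helium thins.

```python
RHO_AIR = 1.225   # kg/m^3, air at sea level, 15 C
RHO_HE  = 0.169   # kg/m^3, helium at the same temperature and pressure

def net_lift_kg(volume_m3, rho_air=RHO_AIR, rho_he=RHO_HE):
    """Mass (kg) that a helium-filled volume can support at neutral buoyancy."""
    return volume_m3 * (rho_air - rho_he)

brick_volume = 10 * 10 * 8   # an assumed house-sized brick, m^3
print(net_lift_kg(brick_volume))  # ~845 kg spare for the skin, tabs and Velcro
```

Roughly a tonne of lift per house-sized brick, so the bags, tabs and Velcro have a generous mass budget before the brick stops floating.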

If you made the helium mountain big, it would probably be a good idea to anchor it to something reasonably substantial in order to resist the wind.  For example, you could build it round an existing natural mountain.

You could then build your helium mountain right up to space.

Once you had done that, you could run a linear accelerator up the side and launch payloads into orbit electrically for a few dollars a kilogram...
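The "few dollars a kilogram" figure is easy to sanity-check.  The sketch below is the ideal case only: the orbital speed is rounded, the electricity price is an assumption, and drag, gravity losses and accelerator inefficiency are all ignored.

```python
V_ORBIT = 7800.0    # m/s, roughly low-Earth-orbit speed
J_PER_KWH = 3.6e6   # joules in a kilowatt-hour

def energy_cost_per_kg(price_per_kwh=0.15):
    """Electricity cost (same currency as the price) to give
    1 kg its orbital kinetic energy."""
    kinetic_j = 0.5 * 1.0 * V_ORBIT ** 2   # (1/2) m v^2 for m = 1 kg
    return kinetic_j / J_PER_KWH * price_per_kwh

print(energy_cost_per_kg())   # about 1.27 - a little over a dollar per kg
```

Even with generous allowances for losses, the energy bill stays in the single-digit dollars per kilogram - the cost would be in the structure, not the electricity.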

Thursday, 30 June 2011


Mitochondria are bacteria that wandered into each of our cells about two billion years ago and set about making themselves indispensable.

They still reproduce separately from the rest of us, though.  All your mitochondria came from your mother via her egg.  They exist in sperm, too, and those sperm mitochondria enter an egg when it's fertilised, but then they are assassinated.

The mechanism of that assassination is superficially understood: ubiquitin is tacked onto the sperm's mitochondria in the egg, which marks their proteins for destruction.  But as far as I have been able to find out, no one knows what it is in the egg that does the ubiquitin attachment.

I'd lay a hefty (well, a few quid...) bet that it is the egg's mitochondria that do the dirty deed.  Look at things from the evolutionary perspective: the egg's mitochondria are in a secure comfortable environment with plenty of resources (a whole big egg) when along come some unrelated interlopers that will compete with them for lebensraum.  But those interlopers are exhausted after powering a very long swim in a midget submarine, and their defences are low.  Best exterminate them now before they recover.

And suppose some sperm  mitochondria mutated to put energy into developing defences.  Of course, their sperm would come last in the race as a consequence, so the forearmed mitochondria would never reach the egg in the first place.

The main nuclear genetic material in both the egg and the sperm has no dog in this fight - it doesn't care where it gets its mitochondria from.  So it looks on dispassionately and doesn't take sides.

The trouble with this arrangement (from our perspective) is that mutations can accumulate in our mitochondria because there is no sexual combination going on to allow such mutations to be shelved as recessive and all the other shuffling advantages that sex confers.   The results are many different types of mitochondrial disease such as early-age diabetes with deafness, various neuromuscular conditions (some fatal), certain epilepsies and - well; the list goes on.

What can we do?  There is no point in trying to give the sperm's mitochondria an effective defence against ubiquitination so that we all end up with two types of mitochondria from egg plus sperm, at least half of which ought to work.  Any defence that we cook up will be at an evolutionary disadvantage for the reasons outlined above.  It will not be evolutionarily stable.

Instead let's confiscate all the mitochondria's DNA and save it as extra genes in our cell nuclei.  Those genes would still build the proteins that make mitochondria in our cells.  But those mitochondria would have no genetic material of their own.  They would just be a useful cell structure like many others.

Now, of course, those extra genes could have alleles, and could be equally inherited from both parents.  This would allow much more variation and recombination than is possible with the - essentially bacterial - way that mitochondria reproduce at the moment, which in turn would make things far more robust and less prone to inherited disease.

Friday, 24 June 2011


The colder you are, the richer you are.  The top map is world temperatures in 2007, and the bottom is GDP per person in 2006.

There are exceptions, of course.  But they only go to prove the rule: the exceptions are hot places that are rich just because of natural resources (Saudi Arabia), or because they were very recently colonised by people from cold places (Australia).

We are tropical animals, so this seems strange.  Surely, we would expect, our natural environment should be the one in which we should be the most productive?  But the reverse is the case.

Those of us in the cold world carry a couple of millimetres of the tropics around with us all the time in the form of our clothes, and this gives us a clue.

Also, hot India is in the process of becoming rich, as is multi-climated China.  Both are following on the heels of the now-rich  Far East, which is also hot.  Wherever the hot world becomes cold by the introduction of air conditioning, wealth follows.

The minimum you need to survive in a cold climate is food and shelter.  But the minimum you need to survive in a hot climate is just food.

People have not evolved to be productive.  Evolution doesn't care a hoot about GDP.  People have evolved to survive with the minimum of effort in their natural environment.  Out of it, they have to work harder.  We don't need much in hot places, so we don't work hard there.  The greater wealth in cold places is just a by-product of people needing to keep active and needing to be inventive to keep warm.

It is no accident that the Industrial Revolution was started by the manufacture of those millimeter-thicknesses of the tropics in a cold country.

Sunday, 19 June 2011


My late father once wrote a short story in which the protagonist wore a hearing aid.  The protagonist's hearing was fine, but he thought that the aid might make it even more sensitive than that of his unenhanced ears.  In the story the idea didn't work very well, and I suspect that that would also be true in real life - it would certainly be a simple enough experiment to try.

About six years ago there was a minor (which is to say widely e-mailed then forgotten) fuss about Jie-jie shown above, who was born with three arms.  Neither of his left arms was fully functional, but they had such similar - if partial - abilities that there was a long debate about which to amputate.  At the time it seemed to me rather sinister (there's a joke for the Latin-literate in there somewhere) that no one involved was strongly advocating leaving both of them alone on the basis that one good arm plus two partially-functioning ones might be a lot more useful than one-and-a-half in total.

I was reminded of this when I dug my father's story out.  There is a lot of work being done on artificial arms for amputees.   These arms are powered and active, being controlled by nerve impulses that would be sent to the real arm if it existed.  And they can be given abilities that no real arm has, such as continuous rotation at the wrist, or a multimeter wired into the finger and thumb, so the owner can analyse working electronic circuits.

Why not develop these for the able-bodied too?  They have fully-functioning nerve signals that can be tapped for control, and it should be reasonably straightforward to separate the signals for the real arm from those for the fake: all that's needed is the equivalent of a shift key - something like curling up one's little finger would do.

We would all be more dexterous with an optional extra arm strapped to our shoulders...

Friday, 10 June 2011


Talking of Sherlock Holmes in my last blog post reminded me: I think I discovered the other day where his and John Watson's names may ultimately come from.  Roger Johnson, the Press and Publicity Officer of the Sherlock Holmes Society of London, says that, 'In the first draft, the characters were called Sherringford Holmes and Ormond Sacker.  Sherringford was soon changed to Sherlock because Doyle made 30 runs against a bowler named Sherlock at a cricket match and had "retained a certain fondness for the name".  Sacker became Watson, and the first story was published in Beeton's Christmas Annual.'

I would not presume to dispute so distinguished a Holmesian - I am sure that what he says above is Arthur Conan Doyle's true account.

But Conan Doyle may not have consciously known his real source.

Daniel Defoe wrote a pamphlet entitled A True Relation of the Apparition of One Mrs. Veal the Next Day after her Death to One Mrs. Bargrave at Canterbury the 8th of September, 1705.  This is a ghost story; indeed it is one of the very first written short stories of any kind.  It is only 15 pages long, but in it there appear both a Dr Sherlock and a Captain Watson.

Conan Doyle was famously interested in spiritualism, so it is highly likely that he would have read this pamphlet during his researches into that world.  The pamphlet was written the best part of 200 years before Sherlock Holmes was created.

The names must have stuck in Conan Doyle's mind.


The distributors of e-books and the manufacturers of e-book readers wax long on the foolproof digital rights management software that they incorporate to prevent users copying book files between each other.  But the great strength of the electronic ink that e-book readers use - namely that it works by reflected light, making reading comfortable - is also a great weakness.  You can put an e-book reader in an ordinary scanner and take the text straight off the screen.

Above is a scan at 600 dots-per-inch from an e-book.  I ran it through gocr to extract the text:

Sherlock Holmes swallowed a cup of coffee, and
turned hi_ attention to the ham and eggs. Then he
rose_ lit his pipe, and settled himself down înto his

I'll tell you what I did first_ and how I came to
do it afterwards_ '' said he. ''After leaving you at the
st8_on I went for a charming walk through some
admir8ble Surrey scene_ to a pret_ little village
called Ripley_ where I had my tea at an inn, and
took the precaution of filling my nask and of
pum'ng a paper of sandwiches in my pocket. There
I remained until evening, when I set off for Woking
ag8in_ and found myself in the high-road outside
B_8rbrae just after sunset.

Well_ I waited until the road was clear-it is
never  a  ve_  frequented  one  at  any  time,  I
fancy-and then I clambered over the fence into
_e grounds. ''

Surely the gate was open.! '' ejaculated Phelps.

That was with the program running its defaults and not tuned to, and taught, the e-book's font.  Neither was it run through a spell-checker.  If one did all that the result would be even better.  Put a couple of wires into the e-book's next-page button and solder a MOSFET across them driven by one bit on the scanning computer's parallel port, and you could scan an entire book to text completely automatically...
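The loop itself is trivial to sketch.  In the outline below the capture, page-turn and OCR steps are injected functions, because they depend on the hardware: capture might shell out to SANE's scanimage, next_page would toggle the MOSFET, and gocr_ocr assumes gocr is installed and takes an image path on its command line.

```python
import subprocess

def scan_book(pages, capture, next_page, ocr):
    """Scan/OCR loop: capture each page image, OCR it, press next-page.
    capture() -> image path; next_page() -> None; ocr(path) -> text."""
    text = []
    for _ in range(pages):
        image = capture()
        text.append(ocr(image))
        next_page()
    return '\n'.join(text)

def gocr_ocr(image_path):
    """Run gocr on a scanned image and return the recognised text."""
    return subprocess.run(['gocr', image_path],
                          capture_output=True, text=True).stdout
```

With the hardware hooks plugged in, the whole book falls out of a single call to scan_book.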

Thursday, 9 June 2011


Some time ago I blogged about clothes that sweat for you.  Feet are notoriously sweaty, and this is compounded by our habit of stuffing them into shoes.  Despite the best endeavours of manufacturers to make shoes that allow moisture out but prevent rain coming in (like Goretex), most footwear tends to hold the sweat in, with malodorous consequences.

What's needed is some sort of wick and some sort of ventilation.  But we already have both: socks and sandals.  The only trouble there is that the combination, though intensely practical, is regarded as a fashion faux pas on a par with wearing wellies and a swimsuit to the opera.

Suppose we keep the socks  and do something else for the ventilation?  Specifically, suppose we put little air pumps in the shoe heels so that every step blows a puff of air in at the toes, which then flows up and out at the ankle taking the evaporated sweat with it.

Cool fresh feet, and the more you walk, the cooler and fresher they become...

Monday, 30 May 2011


Does people's handwriting look like their parents' handwriting?  It must have a genetic component - after all, the muscles and the nervous system driving them were built by the genes.  But it is clearly also cultural - a European orphan raised by Japanese adoptive parents would learn to write Kanji, not Roman letters.

This raises another question: can handwriting be characterised by geometric measures that are invariant when the language and the alphabet being written change?  The sharpness of angles, the aspect ratios of characters or pictograms, slants, the circularity of curves, and so on.
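As a sketch of what such alphabet-invariant measures might look like, here are two of them - aspect ratio and mean slant - computed from a single stroke given as digitised (x, y) pen positions.  The function is my own illustration, not an established graphological metric.

```python
import math

def stroke_features(points):
    """Two crude script-invariant measures of one handwritten stroke:
    (aspect ratio, mean segment slant in radians)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = (max(xs) - min(xs)) or 1e-9   # avoid division by zero
    aspect = (max(ys) - min(ys)) / width
    slants = [math.atan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    mean_slant = sum(slants) / len(slants) if slants else 0.0
    return aspect, mean_slant
```

Averaging such features over a page of writing would give each writer a small numerical signature, independent of what - or in which alphabet - they wrote.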

Given such a characterisation system, a classic heritability study could easily be done between separately-raised identical twins, and adoptive and biological children to find how much of the way we write is fixed.

Of course, hardly anyone writes anything longhand any more.  We should hurry to do the study before handwriting dies out altogether...

Friday, 20 May 2011



A street, vertically above which is suspended a computer projector projecting a plain white rectangle onto the pavement.  Beside the projector is a TV camera connected to the same computer.   The computer is analysing the shapes as people walk into view.

It generates a fake shadow for them from the projector, darkening the area of pavement where an imaginary light source to one side would cast a real shadow.  The shadows follow the people apparently casting them just as real shadows would.

But a shadow then detaches from its caster and does a little dance.

Two people's shadows walk away from their casters, shake hands, and then go on each to become the shadow of the other caster.

Two people's shadows pause as their casters walk on, hesitate, then kiss passionately.

One person's shadow detaches and creeps up menacingly behind another person.

A fat person's shadow suddenly becomes thin; a thin person's fat.

A shadow acquires the profile of a famous person.

Suddenly, everyone has an umbrella.

And so on...

Saturday, 14 May 2011


Inventions sweep the world with increasing frequency, as is only to be expected with an exponentially growing pool of inventors and an exponentially growing foundation of science upon which they can base their inventions.

One of the big sweeps is unquestionably the mobile phone.  Anyone over the age of thirty can remember a world without it, but for all of us that world seems impossibly distant and unconnected, both with itself, and with us.

The frequency-hopping communication that makes mobile phones possible was famously invented by Hedy Lamarr.  Indeed, it is an interesting aspect of fame that her invention will probably turn out to be the one by which future ages will remember her.  Anyone in the 1940s would have been flabbergasted if they could have known that back then.  It is as if a time traveller from 2500 were to tell us that Einstein will be principally famous for his haircut.  (Hedy Lamarr was called "The Most Beautiful Woman in Film" by her biographer Ruth Barton; form your own judgement from the picture.)

Now all over the world dictators are beginning to tremble as ordinary people take an instrument they use for chatting, and with it scythe off trigger-happy riot squads at the knees with the deadlier weapon of documentary exposure.

But the weak point in the system is the service providers and their base stations.  The base stations are too vulnerable to being turned off by those dictators, and the companies succumb to injunctions to surrender up call records and the like.

However, now that we have open phones that we can program ourselves, those service providers and base stations should become redundant.  That is because every mobile phone can be a base station for others nearby.  If I make a call it would hop from phone to phone across the city, then maybe dive down a wi-fi connection a few miles away to emerge at another wi-fi connection on a different continent, and then go hopping on to my interlocutor.  The nice thing about this scheme is that the bandwidth of such a system is proportional to the number of phones in a locality.  (To be pedantic, the bandwidth is proportional to the square-root of the density of phones in the locality.)  Just where you need a lot of bandwidth, there are a lot of phones.

There is a problem.  To preserve your battery, your optimal strategy is to be a parasite - to use the open network, but selfishly not to forward the links needed by others.  But this is easily remedied: the software in the others' phones simply tests your phone to see if it is programmed like that and - if it is - the other phones don't forward your calls.  That way everyone is rewarded if they add their bit to the commonwealth.

The system could be fully protected using hard encryption, with friends Diffie–Hellman-swapping their public keys just as for encrypted e-mail.  No one in the middle could listen in.
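As a sketch of that key agreement, the whole trick is modular exponentiation.  (The Mersenne prime below keeps the numbers small for illustration; a real phone would use a standardised large prime group or an elliptic curve.)

```python
import random

# Toy Diffie-Hellman key agreement - illustration only.
P = 2**127 - 1   # prime modulus (far too small for real security)
G = 3            # generator

def keypair():
    secret = random.randrange(2, P - 2)   # stays on the phone
    public = pow(G, secret, P)            # safe to broadcast over the mesh
    return secret, public

a_secret, a_public = keypair()            # one caller...
b_secret, b_public = keypair()            # ...and their interlocutor

# Each side combines its own secret with the other's public value;
# both arrive at the same number without it ever crossing the network.
shared_a = pow(b_public, a_secret, P)
shared_b = pow(a_public, b_secret, P)
assert shared_a == shared_b
```

An eavesdropper on the mesh sees only the two public values, and recovering the shared key from those is - for a properly sized group - computationally infeasible.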

And then what would we have?  We'd have a completely free universal phone system that no company or government could shut down, and in which anonymity and untraceability would be preserved.  All you would have to buy would be the phone and electricity needed to charge its battery.

Friday, 6 May 2011


When you walk down a corridor and someone is behind you, it is obviously polite to hold any doors you encounter open for them.  They arrive at the door, put their hand on it to hold it open, and you walk on while they walk through.  If they too are polite they will say, "Thanks."

But if it is a long corridor with many doors, you will find yourself holding each and all of them for your follower.  After about three doors this becomes a slight embarrassment to you both.  Your follower will grin at you sheepishly and make I-am-hurrying-up gestures  as they approach.  You will smile back with a slight air of condescension.

It would make a bit more sense to swap: you hold Door 1 for them while they walk through, they (now ahead) hold Door 2 for you while you walk through that, and so on.  For some reason we never do this.  Maybe it is because the follower feels that - if they were to walk ahead - they would be usurping your leader's position, and that that would be presumptuous.   This is despite the fact that the swapping scheme would allow them to return your favour at the next door - immediate reciprocal altruism.

The swapping scheme even extends to arbitrarily many people walking down an infinite corridor with doors: the leader holds the next door for everyone, then joins the back of the procession.  (Though why a bunch of people would want to walk in Indian file down an infinite corridor I'm not quite sure.)
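That rotation is just round-robin scheduling, and a few lines of code (names and door count invented) confirm that the holding duty is shared exactly evenly:

```python
from collections import Counter, deque

def rotating_doors(people, doors):
    # At each door the current leader holds it for everyone behind,
    # then rejoins the back of the line - round-robin door duty.
    line = deque(people)
    holds = Counter()
    for _ in range(doors):
        leader = line.popleft()
        holds[leader] += 1
        line.append(leader)
    return list(line), holds

order, holds = rotating_doors(["Alice", "Bob", "Carol"], doors=6)
# Six doors, three walkers: everyone holds exactly two,
# and the line ends up back in its starting order.
```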

Game theoretically, of course, the Nash equilibrium is the selfish let-the-door-go-in-the-follower's-face strategy; the equivalent of defecting in the Prisoner's Dilemma.  But - as we all know - people don't behave like the individual self-interested rational actors of naive game theory because our utility function is not individual advantage, it is the advantage of the genes that we both carry and share with others.

Monday, 2 May 2011


Chemists are a species of German: when they have something new to name they take all the names of its component parts and string them together.  This is an admirably reductionist approach, but the result is usually something like polytetrafluoroethylene.   If I am reading out loud I have to take a run-up to stand a chance of pronouncing this sort of thing, and I suspect that others - even chemists - have to do so as well.

This is a problem that the computer scientists have attacked and solved.   They are under the constraint that the parsing rules of computer languages don't usually allow them to put spaces in names.   So, if they have a multi-component name, they have somehow to chain it together.  One solution is to use the underscore character, so, for example, a garbage collector heap pointer gets named garbage_collector_heap_pointer.   The other, and more elegant, solution is this: again like the Germans, they use capital letters judiciously; but they don't just put them at the start; they distribute them throughout the name to make its components stand out.  GarbageCollectorHeapPointer is as easy to read and to pronounce as garbage collector heap pointer (though I'll allow it does somehow seem to be in a hurry...)
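For the curious, converting between the two conventions is a few lines of code either way:

```python
def to_camel(name):
    """snake_case -> CamelCase: capitalise each component and join."""
    return "".join(part.capitalize() for part in name.split("_"))

def to_snake(name):
    """CamelCase -> snake_case: an underscore before each later capital."""
    out = []
    for ch in name:
        if ch.isupper() and out:
            out.append("_")
        out.append(ch.lower())
    return "".join(out)

assert to_camel("poly_tetra_fluoro_ethylene") == "PolyTetraFluoroEthylene"
assert to_snake("GarbageCollectorHeapPointer") == "garbage_collector_heap_pointer"
```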

This is the lesson the chemists need to learn.  Anyone can say  PolyTetraFluoroEthylene on the fly as they encounter it on the page.

PolyTetraFluoroEthylene itself (shown above) is remarkable stuff.  It has this structure:

The fluorine atoms bind so strongly to the carbon that the resulting plastic will easily handle temperatures of 250°C without breaking down.  It also has one of the lowest friction coefficients of any solid substance, and these two characteristics make it an invaluable material throughout the whole of engineering.  Just about the least interesting use of it is the one everyone has heard of: non-stick frying pans.

Another polymer with similar temperature stability is PolyDiMethylSiloxane:

But this acts like a rubber - it is incredibly elastic and makes moulds that are so good that they will reproduce features down below the wavelength of light.  Again, PolyDiMethylSiloxane is one of the most useful of all engineering polymers.  And also again, its least interesting use is the one everyone sees: bathroom sealant.

As far as I know, nobody has ever tried to combine these two ideas to make this stuff:

which, I conjecture, ought to have a considerable number of useful and interesting properties.

I call it PolyDiFluoroMethylSiloxane.

Thursday, 21 April 2011


Beekeeping is a three-way symbiotic mutualism.  The plants get pollinated by the bees, the bees get fed from the plants' nectar (and get extra nectar for their nest), and we get some of the extra nectar as honey and also get our crops pollinated.  We contribute the tilled land for the crops, the beehives, and medical care for both the plants and the bees.  All round, a very satisfactory Darwinian arrangement for the three parties.

But only one of those three parties is smart enough to understand how the whole thing works, and to use that understanding deliberately to enhance or to copy the mutualism.  That would be us.

We all know that leafcutter ants from the two genera Atta and Acromyrmex use the leaf-parts that they gather to farm fungi in their nests for food.  Given the rich social-insect resources of bees and ants, it ought to be possible to genetically engineer a species that farms out in the open.  That is, a species that would plant a crop, tend it, sting other species that would eat it, then harvest it, concentrating it in one place.  We, once again, would provide the cleared land, the nest boxes, and the Medicaid.

This would be far too useful and powerful a technology to waste on mere honey.  Honey has a specific energy of 13 MJ per kilogram.  Vegetable oil, on the other hand, has a specific energy of 35 MJ per kilogram.  Our social-insect farmers could plant, tend, and harvest an oil crop for their own benefit, the crop's benefit, and our benefit.  (Bees already make wax, of course, so some of the chemistry is more-or-less written in the genes already.)

We could then use the oil in the place of petroleum.  Carbon neutral, future-proof, and entirely solar powered, the oil would simply drain from the insects' nests down a network of plastic pipes to the processing factory, leaving some behind to fuel the insects in their toil...

Thursday, 14 April 2011


Wellington boots are really indispensable.  They turn even the most timid city-dweller into an all-terrain vehicle.  But they are bulky and - if worn for a long time in hot weather - sweaty.  Also, they are not as comfortable to wear as shoes, particularly for extended walks.

Galoshes - that is, foot-shaped plastic bags that go over ordinary shoes - go part-way to solving the problem.  You can deploy them just when you have to ford a stream, and wear comfortable shoes for the drier parts of your journey.  But they are not so good for riding horses or climbing - things that can be done in wellies (which have heels, something you need for riding).

So how about inflatable wellies?  These would be much lighter than conventional wellies, and - when emptied of air - would roll up and fit in your pocket.  But if you blew them up they would acquire sprung thick soles, and reasonably stiff calf-gripping sides.  A highly fluorinated outer polymer surface could make them very non-stick, so they could easily be rinsed off before rolling up and putting away.

And if the stream being forded turned out to be a little deeper than expected, they would make an excellent buoyancy aid, though at the wrong end until taken off and slung round your neck...

Wednesday, 6 April 2011


When our friends are ill, we often know it just by looking at them for an instant.  The lineaments of disease are writ on their physiognomy, and good doctors must become adept at making provisional diagnoses as soon as their patients walk through the door (assuming they are up to walking).

Now.  There are a lot of dead people - their number is increasing all the time; and they all died of something.  In the last century or so, almost all of them will have been photographed as well, and many will have been photographed towards the ends of their lives.  It should be straightforward to establish a truly vast database of late (in both senses) portraits linked to causes of death.  If all the portraits for each cause were then to be averaged, we'd have a typical face for pancreatic cancer, and another for a dissecting aortic aneurysm, and so on.  Non-lethal conditions could also be catalogued in the same way.

A website could then be established to which people feeling poorly could upload their picture from their webcam.  The picture would be compared with the database, and back would come the ten most likely diagnoses, in order of probability.
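How the comparison might work is easy to sketch.  Assuming each averaged portrait has been boiled down to a few numerical features (the features and every figure below are invented), the website just ranks conditions by distance:

```python
import math

# Hypothetical feature vectors distilled from the averaged portraits
# (say pallor, jaundice, and puffiness scores) - all numbers invented.
average_faces = {
    "pancreatic cancer":          (0.80, 0.90, 0.20),
    "dissecting aortic aneurysm": (0.90, 0.10, 0.40),
    "common cold":                (0.30, 0.10, 0.60),
}

def likely_diagnoses(patient, database, top=10):
    # Rank conditions by Euclidean distance between the uploaded
    # picture's features and each condition's averaged face.
    return sorted(database, key=lambda d: math.dist(patient, database[d]))[:top]

ranked = likely_diagnoses((0.75, 0.85, 0.25), average_faces)
# ranked[0] is the nearest averaged face to the uploaded picture.
```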

It would be hypochondriac heaven.

It would also automatically become more accurate with time.  This has nothing to do with the adding of extra data (though that would happen too), but is a specific instance of the general rule that people who exhibit text-book symptoms for a disease have a higher Darwinian fitness than those who don't.  They are more likely to get the right treatment, and in turn are then more likely to survive and to reproduce, passing on their text-book-symptom tendency to their children.  By this means, medical text books also become automatically more accurate with time, even if neither a word nor a picture in them changes.

It would be interesting to test this idea by adding an arbitrary unconnected symptom to a disease in medical training - say that diverticulitis often causes a slight rash along the hairline above the ears.  At the start, the correlation between rash and inflamed diverticula would be no better than random.  But as time went on, and the people exhibiting the rash lived to have more children, the initially nonsensical symptom would become a real one with diagnostic power.

In fact, it may be that many symptoms that you and I exhibit when we are ill have already been established in this way, dating right back to the preposterous diagnoses of pre-scientific physicians in the eighteenth century and before.   The evolutionary mechanism just described will have made them self-fulfilling prophecies.

All this assumes that treatments are effective, even if the diagnoses start off being based on random rubbish.  But, of course, the same evolutionary pressure will work on random treatments too.  So a witch doctor diagnosing demonic possession for what is, in reality, porphyria, and prescribing a course of viper venom is applying a selection pressure on the unfortunate patient to get cured by viper venom.  The first diagnosis and treatment will work no better than chance, but persistence and consistency will make viper venom a cure for porphyria in the long run.

Could this be the basis for much of medicine?  It doesn't matter what the diagnosis and corresponding treatment are at the start.  As long as they are applied consistently and over many generations, they will come to work effectively as the patients evolve to match them.

Saturday, 2 April 2011


Here you are reading a blog.  So, as a user of this new-fangled World Wide Web thing, the chances are that you also, like me, get far too much e-mail.

I don't just mean the tired old stuff asking please could I deposit $10,000,000,000,000,000 from my uncle in the Libyan Government in your bank account for a bit, or how I made $1,000 per hour with a can't-fail scheme that anyone can follow from the comfort of their home.  Those are instant-delete, and actually don't take up too much effort.

The worst e-mails are genuine ones asking you to do something time-consuming with possibly some small benefit to yourself.  You are a busy man or woman.  You have your own projects.  But these at least deserve the courtesy of a polite and (briefly) considered reply.  You make a start on them, and there is half a day of your life gone, never to be seen again.

There are many automatic e-mail filters and prioritizers out there, but I would like to propose a new one:

Sort e-mails by the average time you took to respond to the sender in the past, with the shortest times at the top.

Your e-mail system would then learn the people to whom you like to reply quickly, and put their e-mails at the head of your queue.  It would be simple for the e-mail system to calculate, and it could even be set up retrospectively by having the software run through your existing Sent mailbox compiling the initial set of averages.
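A minimal sketch of the rule, with an invented reply-time history:

```python
from collections import defaultdict

def response_averages(sent_log):
    """sent_log: (sender, hours_taken_to_reply) pairs mined from Sent mail."""
    totals = defaultdict(lambda: [0.0, 0])
    for sender, hours in sent_log:
        totals[sender][0] += hours
        totals[sender][1] += 1
    return {s: t / n for s, (t, n) in totals.items()}

def prioritise(inbox, averages, default=float("inf")):
    # Quickest-answered correspondents first; strangers go to the bottom.
    return sorted(inbox, key=lambda sender: averages.get(sender, default))

history = [("alice", 0.5), ("alice", 1.5), ("bob", 48.0), ("carol", 6.0)]
averages = response_averages(history)   # alice: 1.0, bob: 48.0, carol: 6.0
inbox = ["bob", "dave", "alice", "carol"]
ordered = prioritise(inbox, averages)   # alice, carol, bob, then unknown dave
```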

Simple, self-modifying, and useful, it would adapt as you worked.  And you could just go down through the top few every day and then stop, confident that you have answered the e-mails most important to you.

Wednesday, 23 March 2011


There is no point in genetically engineering an organism unless the result is evolutionarily stable.  Usually that stability comes about as a result of engineering in a characteristic that is desirable to people - making a rice that is more nutritious, say.  The new rice will not be able to out-compete its wild rivals because it's wasting energy laying down beta-carotene (or whatever), but it still prospers because of its symbiotic mutualist relationship with its creators.  We protect and nurture it because it gives us something we need, so it doesn't go extinct.

We also put a lot of effort into eradicating pests.  There is a constant human war against pigeons, rats, cockroaches and the rest.  We rarely win of course (except for smallpox), but we manage.  However, suppose we were to change the balance?  Suppose we were to take a pest and to modify it so that it had a characteristic that people valued?  If we were to release that ex-pest into the wild, it should out-compete its unaltered brethren because it would enjoy human support.  We would have won by overwhelming the pest with competitors from among its own kind.

Doing this agriculturally (like the rice) would be useless.  We concentrate agricultural species into a protected monoculture in one place.  In contrast, the ex-pests would have to survive in the wild to compete successfully with the continuing pests, just occasionally gaining support from people to tip the balance.  How could we engineer a characteristic in the ex-pest that would benefit it in just occasional interactions with people?

How about performing for food?  For example, how about stand-up mice?

Artificial intelligence researchers are fond of computer programs called chatbots.  These are formulaic (and quite simple) pieces of software that are able to hold rather restricted conversations with people.  They are a step on the road to programs that can pass the Turing Test.  That's a long road, and chatbots really are only one step, but they are quite interesting to talk with.  They can be set up to tell jokes, to ask people riddles and to conduct all sorts of other, well, chat.

We can't make a genuinely intelligent mouse, of course (well - not yet).  But we might be able to incorporate the straightforward algorithmic processes of a chatbot, and to alter the animal's vocal tract to form the equivalent of a simple train-announcement-style speech synthesiser.  The mouse would come up to you and tell you a joke.  If you gave it cheese, it would tell you another...

Soon, all the mice in the world would be much-loved well-fed comedians...

Saturday, 19 March 2011


Goretex (invented by the Gores père et fils and Rowena Taylor) famously allows air and water vapour to pass through, but not liquid water.  It is a waterproof fabric that breathes.  Its trick is to sandwich a thin film of PTFE punctured by micropores in between two reinforcing sheets of conventional cloth.  The micropores allow air and vapour through, but are too small for a liquid to pass because of surface tension.

But it doesn't have a directionality.  The ideal garment would behave like Goretex, but only allow air and water vapour to pass out, not in.  That would keep you warm as well as dry, while still allowing a healthy flow of air.

The simplest one-way valve is a flap valve.  This uses the fluid medium that it is regulating as the source of power to prevent flow in one direction while allowing it in the other.  It ought to be possible to make micropores with additional microflaps over one end all out of the same material.  The flaps would not have to be perfect - we are after an aggregate effect (rather like Velcro, where not every loop catches a hook, but it still works fine).  A fabric made of such a material would allow gases to pass one way through it but not the other, and be exceptionally comfortable to wear.

Of course, if you can make the flaps small and light enough, right down at the atomic scale, then you have constructed a Maxwell's Demon...

Thursday, 10 March 2011


If you live a blameless life, and then develop a brain tumour that turns you to violence, the law may need to restrain you, but it has no business punishing you.  This is a general legal principle that extends to pleas of insanity, diminished responsibility and so on.

In the - seemingly eternal - philosophical debate over free will a key point is often made that universalises this idea.  It is that, if there is no such thing as free will, then it makes no moral sense to punish criminals at all.  If all acts that people make are either determined or spontaneous, and people have no magic Cartesian inner pilot exercising willfulness, then every accused criminal is effectively in the same position as the tumour sufferer.  Their actions are the products of circumstances, rather than a brain lesion, but their social and moral position is indistinguishable.  Once again, restraint may be appropriate, but not punishment.  (Irrelevantly, I hold that view.  But this is not a blog about my views.)

This argument, though, contains a profound asymmetry that - as far as I have been able to find - is never pointed out.  If the criminal has no free will, and hence no moral culpability, then neither do the judge and jury.  We can't say that the criminals couldn't help robbing the bank without also saying that the jury can't help convicting them, and the judge can't help punishing them.

Either all the people in the system are clockwork, or none of them are.

Thursday, 3 March 2011


This week, patents.

There are conservation laws for things like matter and angular momentum.  You can't make more of either, or get rid of them.  If I have some angular momentum, and you take it, then I have less angular momentum.  But if I have an idea, then it doesn't matter who else knows about it - I still have the idea.  Information isn't a conserved quantity like the matter in my TV and my car, and the energy they use.  (And yes - I do know about the quantum Liouville theorem...)

It makes sense to prevent people stealing real physical property like TVs and energy.  But how can someone "steal" an idea?  The idea's creator is not, after all, deprived of it.

The creator is not even deprived of reasonable profit from the idea if someone else uses it.  All the creator has lost is their monopoly.  We usually consider monopolies to be a bad thing, and yet a monopoly is precisely what a patent or a copyright gives the creator of an idea.  If I want to make and to sell toothbrushes, then - quite rightly - I can't prevent your doing so too.  Why should I be able to restrict your freedom just because my idea is new?

The whole mad principle of "intellectual property" makes about as much sense as faith-based education or chiropractic, but it is entrenched.  How could we improve things without lots of lazy vested interests like drug companies and music producers bleating?

I propose the concept of a free patent (and its equivalent for copyright).  We keep patent law exactly the same, but add a "Free Patent" tick box to the patent application.  If a new idea's creator doesn't tick that box then they get their patent just like now.

If they do tick it, however, anyone else can use the creator's idea without paying any royalties.   But they have to pay tax on what they do, whereas the creator doesn't.

Governments would actually make more money in taxation - the non-idea-creator's tax bills would almost always outweigh the tax that would have been paid just by the creator alone.  The creator of the idea gets a real advantage from it - exclusively, no tax needs to be paid by them on the idea's exploitation.  And they also have no need to pursue patent infringers to keep their advantage (something individual inventors find financially impossible anyway).  So, finally, lots of lawyers have no patent infringement cases to fight and end up unemployed.

Result all round, I think.

Thursday, 24 February 2011


It is notoriously the case that nobody in Hollywood has a clue what they are doing.

Specifically, nobody can tell the difference between a script that will be a blockbuster and a script that will be a turkey.  Possibly I should moderate that: maybe some scripts are definite turkeys.  One can't imagine that a script written by a six-year-old would make a fortune.  Or maybe it would...

Anyway.  From among the remainder, nobody can tell.

Perhaps it's time to take people out of the choice altogether.   I have written here before about stylometry: the battery of computational techniques used to compare the authorship of documents.  These techniques come up with a large number of statistics on a given document.  Comparing those statistics with the values from another document allows a probability to be assigned to the hypothesis that both were written by the same person.

But that's not all that one could do with such a large bunch of stats obtained from - in this case - a film script.  They could also be used as input to an artificial neural network.

Artificial neural networks are computer programs that learn to distinguish between sets of input parameters.  For example, you might want a computer program to examine CCTV footage from a road and automatically to flag up when there was a traffic jam on it.  In this case what you would do is to show the network lots of historical footage of the road when it was clear, and another lot of when it was jammed.  As the two sets of historical teaching footage have already been classified, the network can be taught to distinguish the two and to flag up one of two outputs: clear or jammed.

The stylometric measures from movie scripts are just as good a potential input to a neural network.  Hollywood has hundreds of thousands of old scripts, and of course it knows the blockbuster or turkey status of each.  You use those as the teaching input.  Once again the network has two outputs: produce this movie, or bin it.
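To make the mechanism concrete, here is a toy version of the teaching loop: a single perceptron - about the simplest possible artificial neural network - trained on invented stylometric numbers.  A real system would use far more features and a far bigger network, but the principle is the same:

```python
# Invented training set: each script boiled down to three stylometric
# numbers in [0, 1] (say normalised sentence length, dialogue ratio,
# vocabulary richness), labelled 1 = blockbuster, 0 = turkey.
scripts = [
    ((0.30, 0.70, 0.30), 1), ((0.28, 0.80, 0.35), 1), ((0.33, 0.75, 0.40), 1),
    ((0.62, 0.20, 0.80), 0), ((0.70, 0.10, 0.90), 0), ((0.60, 0.25, 0.85), 0),
]

weights, bias = [0.0, 0.0, 0.0], 0.0

def fire(features):
    return sum(w * x for w, x in zip(weights, features)) + bias

# Perceptron rule: nudge the weights towards every misclassified script.
for _ in range(200):
    mistakes = 0
    for features, label in scripts:
        error = label - (1 if fire(features) > 0 else 0)
        if error:
            mistakes += 1
            weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
            bias += 0.1 * error
    if mistakes == 0:   # every teaching script now classified correctly
        break

def verdict(features):
    return "produce" if fire(features) > 0 else "bin"
```

Once the loop stops making mistakes on the teaching scripts, verdict() flags any new script as produce-or-bin.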

Large numbers of expensive movie executives and their acolytes could be dismissed, and the whole problem of movie finance could be turned over to a PC.  When it lit up and rang its bell, a nearby secretary would pick up a phone and call a director from a list.  Job done.

Of course, this doesn't end with movies.  It would work just as well with TV scripts, and with novels.  Publishers would no longer have to flatter the pampered egos of their once-bestselling authors in the futile hope that they will be able to do the magic again.  All manuscripts, whether from the famous or the unknown, would go into the machine, and out would come a whittled-down but steady stream of copper-bottomed, sure-fire publishing sensations.

Thursday, 17 February 2011


Golf balls are denser than water.  They are also quite expensive.  Golf clubs (I understand) make a decent side income from the balls golfers lose in their course's water traps.

But that means that they have to extract them.  There are all sorts of expensive machines that attempt to fish the things off the bottom.  Alternatively, some clubs leave fine nets over the bottom of the traps.  This makes recovery simple, but it disrupts the underwater flora.

A better solution would, I think, be to make a duck-food dispenser.  This would be designed to give out a few duck pellets whenever a golf ball was dropped into it.  The birds would soon learn to retrieve the balls from the bottoms of the water traps in which they swim every day, and then to drop them in the dispenser to get their supper.

Of course they might also be tempted to steal balls from the fairways...

Friday, 11 February 2011


"I think I can safely say," said Richard Feynman, who understood everything, "that nobody understands quantum mechanics."

Quantum mechanics just doesn't gel in the human mind.  We can use the mathematical language of quantum mechanics simply enough, but it doesn't paint a picture in our heads.  Language only has meaning, according to Wittgenstein, to the extent that it paints a picture.  He later revised this principle (the principle itself has obvious meaning, but paints no picture), but for many physical explanations that picture is still essential.

And almost the whole of our technology depends upon quantum mechanics, in the form of electronics.  Just think what more we could achieve if we all found quantum mechanics as intuitive as we find Newtonian mechanics.

My modest proposal, therefore, is to teach quantum mechanics at primary school to everyone, starting sometime around the age of five.  Of course, not every child will get it.  But then not every child gets music, or poetry.  That doesn't stop us teaching those subjects.

But for many children the principles and ideas of quantum mechanics will sink into their subconsciouses, which will work their usual magic, making the ideas instinctive when some of those children, as adults, encounter them again.

Then we will have a whole generation of physicists and engineers who will both understand the mathematics of quantum mechanics, and also find that it chimes with their subconscious idea painter.

Thursday, 3 February 2011


Liquid fuel rockets are good because you can turn the engines on, and - more importantly - off.

Solid fuel rockets are good because they are a lot simpler and cheaper, plus fuel storage is not a problem.

Liquid fuel has a higher specific impulse than solid fuel (typically 450 seconds rather than 250), but this is - in part - because of the necessarily inefficient design of a solid-fuel rocket: the entire rocket body is the combustion chamber.

The usual solid rocket fuel is a mixture of ammonium perchlorate, aluminium, and polybutadiene acrylonitrile, which is a synthetic rubber.  The latter is the binder, but it burns along with the aluminium in the oxygen from the ammonium perchlorate, of course.

Thermites (like copper oxide mixed with aluminium) would make lousy rocket fuels on their own.  They are very stable until you set fire to them, and they create a lot of heat, and both these are good.  But they give off very little gas, and so there is nothing to carry stuff out of the back end of the rocket.  You just end up melting its casing.

But suppose you were to mix thermite with a binder?  That would decompose to gas to give an exhaust, and the exhaust would carry the metals from the thermite along with it out of the nozzle.  And, because copper has a high atomic weight, the exhaust gas velocity needed to carry a given momentum would be low and easy to control.   Lead oxide and aluminium should work even better in this regard, though it might not be a good idea to get downwind of the exhaust...

But we still haven't solved the control problems for a solid fuel rocket.  How do you turn it off, and then on again?

Instead of making the entire rocket body one solid lump of fuel, lighting it at the bottom end, and then running away as fast as your legs can carry you, why not wind a filament of rubberised solid fuel into a reel?

The reel would fill the entire body of the rocket, except for a liquid-fuel-sized engine at the bottom.  You would unwind the reel and feed the filament into the engine, where it would be ignited.  If you match the filament feed rate to the rate of burn, the thing would run steadily.  To shut the motor down, simply guillotine the filament then stop the feed.  To start it again, start the feed and re-ignite.
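The feed-matching condition can be put in numbers.  Here is a minimal sketch; every figure in it (thrust, exhaust velocity, fuel density, filament diameter) is an invented round number for illustration, not a measured property of any real fuel:

```python
import math

# Back-of-envelope feed-rate match for the filament motor.
def feed_rate(thrust_n, exhaust_velocity_m_s, fuel_density_kg_m3, filament_diameter_m):
    """Feed speed (m/s) so that mass fed in equals mass ejected."""
    mass_flow = thrust_n / exhaust_velocity_m_s              # thrust = m_dot * v_e
    cross_section = math.pi * (filament_diameter_m / 2) ** 2
    return mass_flow / (fuel_density_kg_m3 * cross_section)

# 1 kN of thrust, 2500 m/s exhaust, rubbery fuel at 1800 kg/m^3, 10 mm filament
speed = feed_rate(1000.0, 2500.0, 1800.0, 0.010)
print(f"feed the filament at about {speed:.1f} m/s")
```

With those guesses the filament would need to feed in at a few metres per second - fast, but not absurdly so.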

It ought even to be possible to arrange things so that, as the filament unwinds, the reel moves slowly inside the rocket body to keep its centre of gravity in the same place, making flight control much simpler and the rocket more stable.

Thursday, 27 January 2011


At a party in Washington in November 1954 a general confided in Richard Feynman that what the army needed was a tank that ran on sand...

The entire world's electricity generation averages about 2 TW of power.

To give you some idea of how big that is, if you were to weigh all the electricity humans generate in a year (we're talking E = mc² here) it would come in at almost a tonne.  A year of the world's electricity weighs about as much as your car.
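The tonne figure is easy to check.  A quick sketch of the arithmetic, taking 2 TW as the continuous average power:

```python
# Check the "almost a tonne" figure: a year of 2 TW, converted via E = mc^2.
c = 3.0e8                                # speed of light, m/s
average_power_w = 2.0e12                 # 2 TW
seconds_per_year = 3.156e7
energy_j = average_power_w * seconds_per_year   # ~6.3e19 J
mass_kg = energy_j / c**2                       # ~700 kg
print(f"a year of the world's electricity weighs about {mass_kg:.0f} kg")
```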

To give you some idea of how small that is, if you were to cover a 100 kilometre square of one of the world's deserts near the equator with solar photovoltaic cells, they would comfortably generate more than 2 TW, at least in the daytime.  That's an area slightly bigger than Devonshire or Los Angeles.  A square metre of PV cells costs less than a square metre of Los Angeles.
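The 2 TW claim for the array can be checked the same way.  The insolation and efficiency below are round-number assumptions (clear-sky midday sun, ordinary commercial cells), not measurements:

```python
# Rough daytime output of a 100 km x 100 km PV array near the equator.
side_m = 100_000                       # 100 km
area_m2 = side_m ** 2                  # 1e10 m^2
peak_insolation_w_m2 = 1000.0          # clear sky, near the equator, midday
cell_efficiency = 0.20
peak_power_w = area_m2 * peak_insolation_w_m2 * cell_efficiency
print(f"peak output: {peak_power_w / 1e12:.1f} TW")
```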

But we don't want to plod about in deserts laying down PV cells that we've made in factories on a different continent.  It's expensive and, worse, it's hard sweaty work.

PV cells are made of silicon.  A desert (to a first approximation) is also made of silicon.  Getting that silicon out and turning it into PV cells needs energy, of course.  But if you have some PV cells to start you off, you have some energy.

So.  Let's build an autonomous robot that crawls slowly over a desert shovelling up the ground and refining it into solar cells.  These it connects up and lays down behind it as it goes, keeping its own connection to them to power the refining process.

After many years the PV cells will lose efficiency as they get scratched by sandstorms and the like.  The robot could work in a continuous cycle, like painting the Forth Bridge, going back to its start point then grinding up the worn old cells and re-refining them back to brand new ones in its wake.

Free green electricity for everyone for ever...

Wednesday, 19 January 2011


All technologies that we create, create problems - just think of the motor car.  But on balance every single technology that we have created has given us more than it has taken away.

Every single technology except one: explosives.  Explosives do have limited beneficial uses: mining, quarrying and demolition come to mind.  However, the principal use of explosives is the opposite of beneficial - it is killing.  What's more, virtually every machine that we make for killing relies on explosives.  These machines range from those such as the suicide vest or the nuclear bomb that have killed comparatively few people, to viciously lethal weapons of mass destruction such as the assault rifle.

And the beneficial uses of explosives all have alternatives.  Mining, quarrying and demolition can be done with expanding collets driven by very high-pressure hydraulic oil.  Explosives are a little cheaper for the job, that is all.

At least since the invention of agriculture in Mesopotamia 12,000 years ago, humanity has been doing genetic engineering.  These days this is achieved by the direct editing of DNA, but the old way - selective breeding - still works well, and it can be done by anyone.  It is particularly easy to breed microbes selectively.   Suppose you want a yeast that will produce (and tolerate) higher concentrations of alcohol.  You just set up a few dozen small fermentation vessels, add yeast to each, and drop in a little extra alcohol.  Those yeasts that survive you breed from; the rest you discard.  Repeating this process over a few weeks (upping the alcohol each time) will give you a surviving yeast with much higher alcohol tolerance than its wild forbears.
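That selection loop is simple enough to simulate.  The sketch below uses one simplification of my own (not the recipe above): each round's alcohol level is set at the population median, so the less-tolerant half is discarded.  All the numbers are made up for illustration:

```python
import random, statistics

# Toy simulation of selective breeding for alcohol tolerance.
random.seed(1)
population = [random.gauss(5.0, 1.0) for _ in range(200)]   # % alcohol tolerated

for generation in range(20):
    cutoff = statistics.median(population)      # "a little extra alcohol"
    survivors = [t for t in population if t >= cutoff]
    # Breed back up to full size, with a little random mutation.
    population = [random.gauss(random.choice(survivors), 0.5)
                  for _ in range(200)]

print(f"mean tolerance after selection: {statistics.mean(population):.1f}%"
      " (started near 5%)")
```

Even with a tiny mutation rate, twenty rounds of discard-the-bottom-half pushes the population well beyond its wild starting point.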

What has the selective breeding of microbes to do with alleviating the misery caused by explosives?   Well - explosives are organic molecules with a lot of energy locked up in their chemical bonds.  In other words they are an ideal potential food source for yeasts, bacteria and archaea.  But explosives haven't been around for long enough for such explosive-eating microbes to evolve by natural selection.

So why not apply artificial selection to the problem?  Dope the nutrient medium in a collection of petri dishes with small quantities of many explosives, add a large number of different microbes (sewage would probably be a good source) and pick out those microbes that do well.  Gradually reducing the conventional nutrient and upping the explosive content over the generations should select for those organisms that can digest the explosives best.  And, as the microbes multiplied through the explosives, they would neutralise them by using up chemical energy.  The energy would still be released - ultimately as heat - but harmlessly over weeks rather than lethally over a microsecond.
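The weeks-versus-microsecond point is really a statement about power.  Taking 1 kg of TNT-equivalent (about 4.2 MJ) as a round illustrative figure:

```python
# Same energy, wildly different power: a detonation releases its energy in
# about a microsecond; digesting microbes would take weeks.
energy_j = 4.2e6                              # ~1 kg of TNT-equivalent

detonation_power_w = energy_j / 1.0e-6        # over a microsecond
fortnight_s = 14 * 24 * 3600
digestion_power_w = energy_j / fortnight_s    # over two weeks

print(f"detonation: {detonation_power_w:.1e} W; digestion: {digestion_power_w:.1f} W")
```

A ratio of about a trillion: terawatts for the bang, a few watts - less than a candle - for the bugs.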

It might be an idea to add a little brass and steel to the breeding mix, so that the bugs could also work up the ability to etch through shell casings to get at their lunch.

The result would be that ammunition would turn to harmless goo in its magazines and that bombs would rot in their silos.

A world elevated to using bows and arrows again would not be completely peaceful.  But it would be a lot more peaceful than the world that explosives have created.

Thursday, 13 January 2011


Bitter is the taste of poison.  Almost all toxic substances taste bitter to some extent.  And some things - particularly some harmless plants - taste bitter even though they are not poisonous.  This is for the same reason that a harmless hover fly looks like a wasp.  The plant has evolved a bitter taste after animals that might eat it evolved to flag up poisons using their sense of bitterness.  Think cabbage; think coffee.

Children don't eat their greens because they are very sensibly resisting their mad parents' attempts to poison them.  Parents do eat their greens because they have learned that they get sustenance (or - more immediately - a caffeine hit) by overcoming their initial revulsion.  Indeed, most adults have conditioned themselves actively to like bitter - coffee and pink gin are treats.

But if such a diverse range of chemicals all give rise to the same taste, that must surely mean that there are receptors on our tongues for all those many different molecules?  Most poisons work by binding to some specific vital body protein and so stopping it from doing its job.  Maybe the way that evolution has designed the bitter area of our tongues is simply to take all (or a lot of) the proteins that - if blocked - would stop us from working, and to wire a small sample of each one in our taste buds into the one bitter signal to our brains.

Now most drugs work in the same way as most poisons - they bind to specific proteins to slow or to stop their function.  Indeed, many drugs are poisons if taken in super-medicinal doses.  And most drugs taste bitter.

Perhaps an easy way to find lots of useful new drugs would be to find just those proteins wired into our bitter taste buds, and then look specifically for substances that bind to them.

Thursday, 6 January 2011


Ever since it was devised, people have sought to improve on the QWERTY keyboard - the Dvorak keyboard above is a famous example.  Of course, there is a sense in which an established order is best simply because it is established.  For example, try to think of an improvement on this arbitrary sequence: ABCDEFGHIJKLMNOPQRSTUVWXYZ.

But clearly there is an ergonomic aspect to a keyboard that ought to admit of some sort of optimisation.   So why not make keyboards so that you can unplug and swap the keys?  Then people could experiment to find the pattern that worked best for them.  Optionally, a mouse click could upload that pattern to a central website, where statistics on popular (and unpopular) patterns could be gathered.
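One way such a website might rank the uploaded patterns is to score each layout against a sample text by some crude ergonomic proxy.  The sketch below uses total key-to-key travel with the layouts flattened to single strings of 26 keys - an invented scoring rule, not a real ergonomic model:

```python
# Toy ergonomic score for a key pattern: total key-to-key travel distance
# when typing a sample text on a layout flattened to one string of 26 keys.
def travel(layout: str, text: str) -> int:
    pos = {ch: i for i, ch in enumerate(layout)}
    keys = [pos[c] for c in text if c in pos]
    return sum(abs(a - b) for a, b in zip(keys, keys[1:]))

qwerty = "qwertyuiopasdfghjklzxcvbnm"
dvorak = "pyfgcrlaoeuidhtnsqjkxbmwvz"   # Dvorak's letters, row by row
sample = "the quick brown fox jumps over the lazy dog"

for name, layout in [("qwerty", qwerty), ("dvorak", dvorak)]:
    print(name, travel(layout, sample))
```

A real scoring rule would need finger assignments, row reaches and hand alternation, but the aggregation idea is the same: collect scores and usage counts centrally, and let the statistics point at the winners.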

Then we could just let the best patterns evolve...