Banknotes – Go Compare

Someone who is blind faces a mountain of problems. Here's one example.

If you’re blind (or have low vision), how can you know the face value of a banknote? You can rustle it in your hands but it won’t whisper its secret.

In the UK, as in most countries, different denominations of banknote come in different sizes. For example, according to my tape measure, UK notes step up by 1 cm per denomination, in both length and width. The 10 pound note is 1 cm longer and 1 cm wider than the 5 pound note, for instance.

So if you are blind, and lucky enough to be presented with a large fistful of notes of different denominations, you can (with a bit of effort!) sort them into piles of 5, 10, 20 and 50 pound notes, using size comparison alone.
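The sorting-into-piles procedure can be sketched in code. This is a toy illustration only: it assumes the post's "1 cm per denomination" observation and uses made-up baseline lengths, so the exact numbers here are hypothetical, not official note dimensions.

```python
# Hypothetical note lengths in cm, stepping up 1 cm per denomination
# as the post describes. Real UK note sizes differ slightly.
BASE_LENGTH_CM = {5: 13.5, 10: 14.5, 20: 15.5, 50: 16.5}

def denomination(length_cm):
    """Guess a note's value from its measured length (nearest match)."""
    return min(BASE_LENGTH_CM, key=lambda v: abs(BASE_LENGTH_CM[v] - length_cm))

def sort_into_piles(lengths_cm):
    """Sort a fistful of measured notes into piles by guessed value."""
    piles = {5: 0, 10: 0, 20: 0, 50: 0}
    for length in lengths_cm:
        piles[denomination(length)] += 1
    return piles
```

The nearest-match rule mirrors what the fingers are doing: you don't need an exact measurement, only to tell which of a few well-separated sizes a note is closest to.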

However it’s much harder in practice. What if (as is most likely) you don’t receive a fistful but only have a few notes (or just a single note) of one or two different values? You can’t do a full comparison. They might be all 10 pound notes, or some 5 and some 10 pound notes, or… You get the idea.

Of course there is plenty of modern technology that can recognize the value of banknotes. But there is also a simple, portable, inexpensive, and homespun approach, needing no special equipment and no batteries. It’s called the ‘Arthur Pearson method’, I assume after the famous blind newspaper proprietor. It uses your fingers as a gauge, i.e. as a measuring device. Here’s how it works and you may like to try it as an experiment.

Get hold of a couple of banknotes of different values. Put them on the table before you. Then shut your eyes tightly.

Place the first note, width-wise, between your index and second finger. Imagine your fingers are scissors and you want to cut the note in half.

Now use your other hand to compare the width of the note with the length of your fingers.

Then do the same with the other note.

It’s a surprisingly good way of distinguishing different values. Of course it would take practice to become reliable, but for me:

  • A 5 pound note is taller than finger 1 but shorter than finger 2
  • A 10 pound note is taller than finger 1 but about the same size as finger 2
  • A 20 pound note is taller than either finger.

Obviously all this will vary a lot from one person to another, but our fingers do form a rough and ready size (and therefore value) gauge for banknotes.

My fingers are too stubby to tell 20 and 50 pounds apart but I never have 50 pound notes anyway.




Posted in 101 experiments in seeing, Assistive technology, Blindness and visual impairment, Uncategorized

A Tale Of Two Polychromats


A brightly coloured mantis shrimp facing towards the camera.

This gorgeous but rather disturbing creature is a mantis shrimp. It looks like an alien being, and indeed it has superpowers!

But it’s not an alien. Nor a mantis. Nor a shrimp. But it looks a little bit like a mantis. And it’s related to shrimps.

Mantis shrimps are remarkable crustaceans. They are ferocious marine predators, up to about 30 cm in length. There are many species, which come in two kinds: ‘clubbers’ and ‘spearers’. Something both kinds share is lightning speed. The clubbers give their prey (such as other crustaceans, or fish) a knock-out punch; the spearers impale it. To give an idea of how fast these animals can strike, the clubs of the clubbers can reach speeds of about 17 m per second – nearly 40 mph.

Anyone who keeps mantis shrimps in an aquarium may find that a clubber can break the glass, and that a spearer can inflict a nasty injury on a careless hand.

Why do I write about mantis shrimps? Because, as well as being super-boxers, they have another superpower: extraordinary colour vision.

Humans are typically trichromats, with three types of colour receptors (cones) in our retinas. One type of cone is most sensitive to red, one to green and one to blue. Each type of cone has its peak response at a particular wavelength of light.

Some people (mostly men) are colour blind and have more restricted colour vision. It’s also thought that some women may have a fourth kind of cone and be tetrachromats. If so, they may be able to see more shades of colour than the rest of us. However, the existence of human tetrachromacy is not proved. There are web sites that claim to test for it, but I think the results are unlikely to be reliable.

So three types of colour receptor is the commonest situation for humans. The mantis shrimp can knock coloured spots off this. Some species have as many as 12 different sorts of colour receptor (and that’s leaving out the kinds attuned to ultraviolet, and to different polarizations of light). Four times the number of an average human being.

Intuitively we’d expect from this that mantis shrimps can see far more shades of colour than we can. However, very recent research has challenged this idea. By rewarding mantis shrimps with snacks, researchers were able to train them to respond to 10 particular wavelengths of light. (I was amazed that this degree of training was possible.) The researchers then tested whether the animals could discriminate between the wavelengths they had learned and other wavelengths nearby. They couldn’t tell the colours apart.

How can this be? If the mantis shrimp has many different types of colour receptor then surely it’s a no-brainer that it must be able to recognise a huge palette of colours?

‘Brainer’ is the cue. Things are more complicated. Think of a daffodil flower. It’s yellow, but we have no special photoreceptor for yellow, only for red, green and blue. Both the red- and green-sensitive cones are stimulated to a certain extent by yellow light, and the brain takes this information and combines it. So we perceive the daffodil as yellow (and have a strong sensation of yellowness – a quale for a colour we can’t see except as a mixture of others).

The research suggests that the mantis shrimp cannot, after all, see more colours than human beings, in spite of having more different forms of colour receptor. What it seems it can do is recognize colours from a limited range, but do so blindingly quickly.

Humans have evolved the strategy of being trichromats, but we enormously expand the variety of colours we can perceive by sending the signals from the different sorts of cone to the brain and combining them there. This requires a lot of brain capacity and effort, but it lets us distinguish about 10 million colours.
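A back-of-envelope sketch shows why combining channels pays off so handsomely. If each cone channel, after neural processing, can be resolved into some number of distinguishable levels, then independent channels multiply together. The "215 levels" figure below is a made-up round number chosen purely so the product lands near the 10 million colours quoted above; it is not a measured value.

```python
# Toy combinatorics: combining independent colour channels
# multiplies the number of distinguishable shades.
def distinguishable_colours(levels_per_channel, channels=3):
    """Each channel contributes a factor; the totals multiply."""
    return levels_per_channel ** channels

# A hypothetical ~215 discriminable levels per cone type would give
# a trichromat roughly 10 million colours:
print(distinguishable_colours(215))  # 9938375
```

This also hints at why 12 receptor types without downstream combination buys the mantis shrimp little: 12 channels read independently give only 12 coarse categories, not 12 multiplying factors.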

The mantis shrimp has a smaller brain, and seems to follow a simpler strategy. Essentially it has a more complex retina, one that generates enough colour discrimination ‘up front’, without its brain having to process the information in the way ours would. The mantis shrimp can recognize the colours it needs to ‘at a glance’, without investing effort in combining signals from different inputs. It’s reasonable to theorize that this works faster – less to transmit to the brain, less for the brain to process. The mantis shrimp must be swift in club and claw and so speed matters.

I promised a tale of two polychromats, so what is the second sort?

Dragonflies. These are savage predators of the air (as flying insects) and of the water (as larvae). Like mantis shrimps they are brightly coloured, almost jewelled, and they have large and conspicuous eyes.

A dragonfly perched on a leaf.

Other recent research has investigated how many different kinds of colour receptor dragonflies might have. This is more complicated because (as I understand it) there is no direct evidence of what types of cone the insects actually have, only of what genes they have for opsins – the pigments used by visual receptors. The count is 15–33, a bit ahead of the mantis shrimps, and it’s been suggested that dragonflies can therefore see many more colours than humans. But there is no direct evidence that they actually have that many different kinds of cone. And if they do, they probably can’t recognize any more colours than we can, for the same reason the mantis shrimp can’t: a dragonfly is too small to have the neurological equipment for complicated processing of visual signals, and so probably relies on instant recognition of a small colour range.

It cannot be a coincidence though that predators that hunt by daylight have large eyes and good colour vision. If only we could examine the vision of a velociraptor!

While writing this post I saw a post on one of the blogs I follow (Adventures in Low Vision), which rather magically was also about the number of cones we have, and also mentioned the beloved mantis shrimp. It’s well worth reading. You can find it here.


Mantis shrimp image from

Dragonfly from


Posted in 101 experiments in seeing, Animal Intelligence, Sensory perception, The brain and visual perception, Uncategorized

The Colour of Nothing

Recently the Belgian artist Frederik De Wilde exhibited a square blacker than any human being has ever seen before. Blackboards look black, but actually reflect as much as 10% of the light falling on them. De Wilde’s black square reflects 0.01% – one thousand times less.

There is an impressive image here. New Scientist magazine has described it as an attempt to paint nothing.

The work is a reflection of the celebrated Black Square that the Russian artist Malevich showed in St Petersburg in 1915. The image above is one of Malevich’s work that I found on Wikimedia Commons. The painting had huge influence at the time, and I believe at the end of his life the artist had it hanging in his bedroom. Today it’s in a fragile state (with the black foreground crazing to reveal the white below), and in another echo from the past De Wilde’s NanoBlck-Sqr #1, which uses carbon nanotubes on a white frame, is so delicate that you are only permitted to view it under supervision.

But neither Malevich nor De Wilde have captured what nothing looks like. The blind have a better understanding, which you can share by a thought experiment.

Right now, what do you see round the back of your head?

You’ve no eyes there, so what you just saw (or rather didn’t) was nothing. And it’s not a bit like black, is it?

This might seem trivial or frivolous, but it’s not at all. I have a large blind spot (a scotoma), that occupies nearly half my visual field. People frequently ask me what I see there. They intuitively expect it to be a black patch.

But it’s not: it’s nothing. That’s very hard to explain. And impossible to paint. Even invisibility has a sort of appearance. Nothingness doesn’t. So how could it have a colour?




Posted in 101 experiments in seeing, Art and vision, Blindness and visual impairment, The brain and visual perception, Uncategorized

Your (Driverless) Carriage Awaits, Milady

There’s a family tale about my great-grandfather. Every Saturday (a local market day) he would take his horse and cart into town. There he would visit a pub (long gone now, but I remember it) called the Dog, and drink exactly 8 pints of beer. At that point he would get back on the cart and let the horse take him home. It knew the way.

In the past week or so there has been a blizzard of news stories (here’s one) about the decision of the UK government to allow driverless cars to be trialed in four cities. Other governments have also shown great interest in the idea. This is quite surprising, because governments aren’t typically early adopters of new technology, but I suppose driverless cars are seen as economically compelling.

A related story is about Google’s ambition to win the race for the driverless car market, having made a big investment in experimental vehicles over the last five years.

What seems to me most interesting is how a driverless car could help someone with a disability that prevents them from driving. The goal of driverless cars is to be fully automated, so the driver would become effectively a passenger. Traveling in such a vehicle you wouldn’t need a license, or to watch the road, or even to be awake. (A bit like my great-granddad being carried back home by his horse.)

This raises other issues. Was my great-granddad breaking the law by being intoxicated while in charge of a horse and cart? I’ve not researched it, but I suspect he was. So what’s the legal position with driverless cars?

Here again governments (perhaps feeling they have been wrong-footed by technology in the past) have taken the initiative. Legal changes proposed in several countries would abolish the idea of being in charge of a vehicle, if it is fully automated.

It’s a surprising thought, but robot cars might be more reliable than ones driven by humans, leading to greater safety and less damage to vehicles. Third-party insurance might no longer be needed, because the manufacturer of a driverless car might simply pick up the liability once the risk is so low. One recent story suggests that automated cars might even make motor racing with human drivers obsolete (although the Deep Blue chess program did not make human chess players obsolete).

Do I think driverless cars will replace driven ones? Yes. But, but…

Do I think they will help people who are currently excluded from personal mobility – such as the young, the disabled, those without driving licenses, the elderly? A qualified yes. Eventually they will.

When? Here are various views. Bandwagons (possibly driverless) are being jumped on, and projections are pretty bullish: for example, it’s claimed a driverless Audi A8 limo is only two years away. Other projections for a mass-produced driverless car are 2020 or 2025. Or even 2028–2032.

So I’m not going to hold my breath, in case I don’t have enough left. The most optimistic estimate is 2017 (good news), but (bad news) the price is steep – I’d guess more than £75,000 – which for now will rule out most disabled people.

I think I’ll just have to stick with buses, trains and planes (and the occasional taxi). You can do a lot of public transport for £75,000.

All the same, the idea that automated vehicles will one day make private travel possible for disabled people and other excluded groups is very tantalizing. If technological and economic progress don’t fail us, I predict the dream will come true over the next couple of decades.




Posted in Assistive technology, Blindness and visual impairment, Old age, Stroke, Disability, Cognition, Uncategorized

Of Cats and Captchas

Recently Google announced a different kind of captcha.

You know captchas. You’re trying to register with a website but it wants to be reassured you are a real you, not just a robot program masquerading as a human. So you are asked to identify a sequence of letters and numbers presented in distorted form, sometimes set against a confusing background. Try this one.

A captcha. The letters can't be made out.

When this technology was first used, although somewhat irritating to humans, it was fit for purpose. Anyone with normal vision could generally identify the characters, and robot programs couldn’t. Humans 1, robots 0.

But security is an arms race. The programmers writing the robots found better algorithms and wrote better robots that could read the captchas. Captcha programmers fought back, distorting and obscuring the letters more. In turn the coders responsible for the robots raised their game, and so it went on. The race hotted up.

In just the year since I last wrote about captchas, we’ve reached a stage where the traditional captcha can only defeat robots if it also defeats people. It’s an impasse.

So Google (and some others) have been rethinking.

What are captchas for? To check that you are not a robot.

So the first plank of Google’s new approach is a box to tick, “I am not a robot”.

This sounds silly. Can’t a robot tick the box just as well as you and I? Yes it can, but what it can’t do is use the web page like a human. Google’s technology is capable of monitoring a number of subtle variables, for example the way you use the mouse, and using them to assess the probability that you are a human. (This reminds me of a program written two or three years back to address the “kitten on the keys” problem – you leave your computer unattended and the cat walks on the keyboard, generating a whole lot of random typing. That program worked by recognising the cadence of a cat’s footsteps – the characteristic timing and rhythm, different from a human typist’s – and ignoring the random key presses.)
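The cadence idea can be sketched very simply. The real program’s detection criteria aren’t public, so the rule and thresholds below are invented for illustration: it flags a burst of key presses whose rhythm is too fast and too regular to be a human typist.

```python
import statistics

# Toy sketch of cadence-based filtering, loosely inspired by the
# "kitten on the keys" anecdote. The thresholds are hypothetical,
# invented for illustration, not taken from any real product.
def looks_like_cat(press_times_ms):
    """Flag a run of key presses that is implausibly fast and regular."""
    if len(press_times_ms) < 3:
        return False  # too little data to judge a rhythm
    gaps = [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]
    mean_gap = statistics.mean(gaps)
    spread = statistics.pstdev(gaps)
    # Hypothetical rule: very quick, near-metronomic presses.
    return mean_gap < 80 and spread < 15
```

The same shape of reasoning – statistics over timing and movement rather than the content of the input – is presumably what lets a tick box stand in for a puzzle: the signal is in how the page is used, not what is typed.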

If there is any doubt about your humanity you’ll still be presented with a good old-fashioned captcha. But most people should be able to pass just on the tick box.

On mobile devices it’s a bit different. There is no tick box. Instead you are presented with an image identification task. One of Google’s examples involves a picture of a cat (after all, this is the internet!), together with a montage of other images: four incredibly cute kittens, one dog, a dog-like animal, a pair of guinea pigs, and a plant I can’t identify. The human is asked to tap the ones that match the cat picture.

This all makes it easier for the normally sighted person to demonstrate they’re human, but what if you are blind? Obviously the mobile version with the cats won’t work for you, but it’s possible the tick box version might, although anyone with low vision is likely to navigate around the screen in a rather different way from someone fully sighted.

It would be interesting to know whether people using screen readers will be able to take the tick box test successfully. At present, when confronted by a captcha, they usually have to attempt an audible one instead – which I found very challenging when I tried it – and of course many elderly people have significant loss of both vision and hearing.

This technology may also be seen as slightly intrusive. The captcha program is watching what we do, which may disturb some people, although I don’t see it as sinister myself.


Posted in 101 experiments in seeing, Assistive technology, Blindness and visual impairment, Old age

Treatment for stroke is like rushing roulette

Last year about 70,000 people in England and Wales were admitted to hospital suffering from stroke.

A report by the Health and Social Care Information Centre (HSCIC) shows stroke victims don’t always get the rapid attention they need.

If you have a stroke your chances of survival and recovery are very much dependent on prompt treatment. Without it about 15% of stroke victims die and another 10% require permanent care. Many of the rest end up disabled. But if treatment is started soon enough the prognosis is much better for many patients.

How soon is soon enough? About three hours or so. So stroke is an emergency. There’s not much slack, because a person has to be got to a hospital, then moved to a specialist stroke unit, and their condition evaluated, all before treatment can begin.

Given this urgency, a lot of emphasis is placed on the public being able to recognize stroke symptoms and call an ambulance. This is the message of the FAST campaign.

Even the extra delay caused by taking someone to hospital by private transport rather than by ambulance could be significant, because each 30 minutes that passes diminishes the chances of a good recovery by 10%.
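Taken at face value, that 10% penalty compounds quickly. Here is a toy calculation, assuming the reduction applies multiplicatively per half hour – a simplifying assumption for illustration, not a clinical model.

```python
# Toy calculation: how a "10% per 30 minutes" penalty compounds.
# Purely illustrative; real stroke outcomes are not this tidy.
def recovery_chance(baseline, delay_minutes):
    """Reduce the baseline chance by 10% for each full half hour."""
    return baseline * (0.9 ** (delay_minutes // 30))

# After a three-hour delay, only about half the baseline chance remains:
print(round(recovery_chance(1.0, 180), 2))  # 0.53
```

Which is one way of seeing why the three-hour window quoted above is about the limit.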

But a report published this month (‘Stroke treatment in England varies widely by location’, HSCIC) shows that getting someone to hospital is often not the weakest link. Once patients arrive at the hospital and are admitted, they need to be moved quickly to a specialist stroke unit where they can be treated. I believe the target is for 90% of patients to reach a specialist stroke unit within four hours of admission – already about the limit if treatment is to be as effective as possible.

But how quickly you receive treatment depends very much on where you happen to live.

According to the report, the proportion of patients receiving specialist care within four hours averages just 60%.

And there is a great deal of spread. The best area in the country managed to reach the high 80s; the worst was down in the low 20s. The variability is well illustrated by the map the HSCIC has produced. You can read more detailed statistics here.

There are wide variations between locations that are geographically adjacent, so a person’s chance of dying can hinge on which side of an arbitrary boundary they happen to live.

The national press picked up the story and of course described it as a ‘postcode lottery’. A cliché yes, but they have a point. There will inevitably be variations in health care provision, for all sorts of valid reasons. But one so wide – a ratio of 4 to 1 between best and worst?


Posted in Stroke, Disability, Cognition, Uncategorized

Birds Do It, Bees Do It…

…and so do cats and dogs. (Even hedgehogs.) And reindeer.

See in ultra-violet that is.

I’ve often wondered if humans can share this superhuman capability (a contradiction in terms as I realize). What’s the evidence? First the background…

The discovery of ultra-violet (UV) is an interesting story. In 1801 Johann Wilhelm Ritter read that William Herschel had discovered “heat rays”, a form of radiation below the red end of the spectrum (we now call these “heat rays” infra-red). Ritter looked for a form of radiation at the opposite end – beyond violet – and found one. Of course he could not see it but he proved its existence by shining white light through a prism and showing that silver chloride placed beyond the violet edge of the spectral colours still darkened, showing that there was a form of radiation – of light in fact – that we can’t see but silver chloride can.

It’s well established that birds and bees can see UV. Some birds have a specific visual pigment for UV and this extended vision plays an important part in their lives. Humans, and mammals generally, don’t have specific photo-receptors for UV. But it turns out that the three types of colour-sensitive cell we do have – responding to red, green and blue – can also respond to UV, with the blue-sensitive cells slightly better at it than the others.

So why don’t humans see UV, and how is it that cats (for example) do? The reason is that in normal circumstances it is blocked by the lenses of our eyes, which filter it out and prevent it from ever reaching the retina. Why this filtering takes place is not entirely clear.

One theory is that our lenses are thicker simply because we are bigger animals; cats are smaller, so there is less lens to filter out the UV.

A second is that our lenses are adapted to block UV in order to protect our retinas, which are more easily damaged than those of other mammals.

A third is that thicker lenses offer sharper vision which has importance for humans, whereas the vision of other animals may prioritize other aspects. For example cats have lower visual acuity but are much better than us at seeing in dim lighting.

Whatever the reason for the filtering, the effect is that most humans are UV-blind.

But surprisingly not all. Some people can undoubtedly see UV. They may have thinner lenses, or no lenses at all (the term is aphakic), often as a result of surgery for cataracts, or artificial replacement lenses that allow UV through. They typically describe its colour as “whitish-violet”. This probably results from their red, green and blue receptors all being stimulated, which would give a sensation of white light, with the violet tinge coming from the blue receptors being somewhat more sensitive than the other two kinds.

And what about reindeer, why might it be useful for them to see UV? (Maybe Rudolph’s nose was the wrong colour!) Here’s a clip from University College London.

Posted in 101 experiments in seeing, Animal Intelligence, Sensory perception, Uncategorized