Simulating a laser-based image projector

A long time ago, a podcast from The Naked Scientists included an interesting interview about a new laser projector developed by Light Blue Optics in Cambridge, UK. It sounded cool, so I found out more about it. Their website gives an outline of the technology, and from there I was able to find some relevant papers.

This post gives the results of some experimenting I did with their ideas. The technique is full-colour, using red, green, and blue lasers, but here I work just with greyscale for simplicity. I’ll use the image to the right, which I took on a recent trip to Australia.

The ideas

The technology, as explained in the interview, ‘steers’ the light to where it’s needed, by diffracting a laser beam. The interview further explains that they create lots of noisy images, which your eye averages to give the image they’re trying to project. In a bit more detail:

Fourier transform property of a converging lens

A pleasing property of the universe is that a converging lens performs a 2D spatial Fourier transform of an image on a transparency. The details are explained, for example, in this book (do ‘Search inside’ for ‘Fourier transforming property’ and follow the hit to p.103).
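For reference, the standard Fourier-optics statement of this property (notation is the usual textbook convention, not necessarily the book's): for a transparency with field U(x, y) at the front focal plane of a converging lens of focal length f, illuminated by coherent light of wavelength λ, the field at the back focal plane is proportional to the 2D Fourier transform of U:

```latex
U_f(u, v) \;\propto\; \iint U(x, y)\,
  \exp\!\left[-\,\frac{2\pi i\,(x u + y v)}{\lambda f}\right]
  \mathrm{d}x\,\mathrm{d}y
```

Here (u, v) are physical coordinates in the back focal plane, so the spatial frequencies sampled are (u/λf, v/λf).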

It seems like we could now build our projector — simply find the inverse FT of our image, put that on a transparency, and shine laser light through it and a lens. However, in general the inverse FT is complex-valued, and, the papers tell me, no physical device exists which can give arbitrary amplitude and phase to the light at each pixel.

‘0 or π’ phase-only spatial light modulators

What does exist is a device which can give a phase shift of 0 or π to the light passing through each pixel, while preserving amplitude. If we approximate the inverse FT, quantising each point to ±1 (corresponding to a phase shift of 0 or π), what happens? We lose a huge amount of information, and when we use the transparency in the projector, the result is mostly noise:

With a willing attitude, you can just about discern some kangaroo-like features, but the image quality has room for improvement.
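As a quick sketch of this naive pipeline (with a hypothetical synthetic rectangle standing in for the photo), quantising the inverse FT to ±1:

```python
import numpy as np
from numpy.fft import fft2, ifft2

# Hypothetical stand-in for the photo: a bright rectangle on black.
target = np.zeros((64, 64))
target[16:32, 8:56] = 1.0

# Naive approach: inverse-FT the target directly, then quantise each
# point to +1 or -1 (a phase shift of 0 or pi, amplitude preserved).
inv_ft = ifft2(target)
transparency = np.where(inv_ft.real >= 0, 1.0, -1.0)

# The lens then Fourier-transforms the transparency; this is the
# (mostly noise) projected field.
reconstruction = fft2(transparency)
```

The quantisation throws away both the amplitude and most of the phase information of `inv_ft`, which is why the single-frame result is so noisy.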

Human eyes can’t see phase

The next insight that LBO had (as I attempt to retrace their train of thought) was to realise that we don’t have to reconstruct the exact target image. The eye doesn’t care about the phase of the projected image’s pixels. From the original target image, we can create an alternative target by applying random phase independently to each pixel. Then we follow the same process of ‘find the inverse FT; quantise it to 0/π phase; put it into the projector’. We get a new very noisy reconstruction, but the point is that we get different noise. Four examples, starting with fresh random phase each time:

Let the human eye average many reconstructions

The final piece is to project many of these noisy reconstructions in very quick succession. The viewer’s eye, by ‘persistence of vision’, then averages them, and perceives the target image. Because this is all in the context of video projection, our 0-or-π phase-only modulator has to have a very high refresh rate: we need 25 target frames per second, each of which needs many reconstructions. If each frame were built from, say, 40 reconstructions, the modulator would have to display 25 × 40 = 1000 binary patterns per second.

Python code

With the ideas in place, we can use the following code to simulate the physical process of creating one noisy reconstruction:

import numpy as N
import numpy.random as RND
from numpy.fft import fft2, ifft2

# Input image is in 'target'.

# Each pixel keeps its magnitude but gets random phase:
phi = RND.uniform(low = 0.0, high = 2.0 * N.pi, size = target.shape)
phase_multiplier = N.exp((0.0 + 1.0j) * phi)
randomly_phased_target = target * phase_multiplier

# Find the display which, when FT'd, gives the random-phase target:
inv_ft = ifft2(randomly_phased_target)

# Quantise to just +-1, i.e., phase of 0 or pi.  (N.sign would map an
# exactly-zero real part to 0, which is not a valid transparency value.)
inv_ft_pm1 = N.where(N.real(inv_ft) >= 0.0, 1.0, -1.0)

# Find the image after FT and projection:
noisy_reconstruction = fft2(inv_ft_pm1)

We run this as many times as we like, to generate many noisy reconstructions. (I used it to generate the examples above.)
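The averaging can be sketched as follows (again with a hypothetical synthetic target, and taking the magnitude of each reconstruction as a stand-in for the intensity the eye perceives; the target is confined to the top half of the frame so its rotated conjugate copy stays out of the way):

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)

# Hypothetical stand-in target: a bright rectangle confined to the
# top half of the frame; the bottom half is left black.
target = np.zeros((64, 64))
target[8:24, 16:48] = 1.0

def one_reconstruction(target, rng):
    # One run of the process above: random phase, inverse FT,
    # quantise to +-1, then FT (the lens).
    phi = rng.uniform(0.0, 2.0 * np.pi, size=target.shape)
    inv_ft = ifft2(target * np.exp(1j * phi))
    transparency = np.where(inv_ft.real >= 0, 1.0, -1.0)
    return fft2(transparency)

# Average the magnitudes of many independent reconstructions, as a
# stand-in for the averaging done by the eye.
n_samples = 100
average = sum(np.abs(one_reconstruction(target, rng))
              for _ in range(n_samples)) / n_samples
```

With 100 samples, the bright region of `average` stands clearly above the noise floor in the dark region.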


As we take the average of more and more samples, the original picture emerges clearly from the noise:

5 samples

10 samples

100 samples

1000 samples

Conjugate image removal

This summary omits one important detail: The transparency is purely real, taking values just in {-1, +1}, so its FT is conjugate-symmetric. This manifests itself as a 180°-rotated copy of the image superimposed on the projection. Various clues in the papers suggest that one simple approach to this is to pad the target with black, and use only half of the projection, which is what I did for these experiments.
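A sketch of the effect and the workaround, using a hypothetical synthetic target padded so that the image sits entirely in the top half of the frame:

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(1)

# Hypothetical target confined to the top half; the bottom half is
# black padding that will receive the 180-degree-rotated copy.
target = np.zeros((64, 64))
target[4:28, 10:54] = 1.0

phi = rng.uniform(0.0, 2.0 * np.pi, size=target.shape)
inv_ft = ifft2(target * np.exp(1j * phi))
transparency = np.where(inv_ft.real >= 0, 1.0, -1.0)
reconstruction = fft2(transparency)

# The transparency is purely real, so F(k) = conj(F(-k)) and hence
# |F(k)| = |F(-k)|: the magnitude image is exactly symmetric under a
# 180-degree rotation about the origin.
mag = np.abs(reconstruction)
rotated = np.roll(mag[::-1, ::-1], (1, 1), axis=(0, 1))

# Keep only the half that holds the intended image; the conjugate
# copy lands entirely in the discarded half.
projected = mag[:32]
```

The `np.roll` of the doubly-flipped array implements index negation modulo the array size, so `rotated` equals `mag` exactly (up to floating point), confirming the superimposed rotated copy.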


It works!

LBO apparently ‘sidelined its holographic laser projector to focus on software for touch technology’ in mid-2012, and their website has no news more recent than that, so perhaps the technique was not commercially viable. All the same, it’s a very interesting fusion of ideas.


This book chapter (under a Creative Commons licence) gives most of the details.