John Naughton 

Has Apple finally given its super-fast iPhone a camera worthy of the name?

At up to 5 trillion operations a second, the new XS model allows you to customise your ‘bokeh’ to perfection

What a picture – the new Apple iPhone XS. Photograph: Samuel Gibbs/The Guardian

If you’re a keen photographer (which this columnist is), one of the things you prize most is a strange property called bokeh. It’s the aesthetic quality of the blur produced in the parts of an image that are not of central interest – the way a lens renders out-of-focus points of light. You often see it in great portraits: the subject’s eyes are razor-sharp but the – potentially distracting – background is fuzzy.

In the era when all photography was analogue, the only way to get good bokeh was to use lenses that produced a narrow depth of field at wide apertures. Since the optical performance of most lenses decreased at such apertures, serious photographers faced a trade-off: their lust for bokeh meant compromising on overall image quality. The only way round that was to spend money on lenses of complex design and exceedingly high optical quality, and such lenses did not come cheap: a photo-buff of my acquaintance, for example, recently laid out a small fortune for a Leica Noctilux f/0.95 aspherical lens, which, its manufacturer claims, provides “unique bokeh”. (At a retail price of £9,100 it jolly well ought to.)

Enter Apple, which was once a struggling computer company and now bestrides the digital world. For some years it has also been dominant in the photography industry, even though nobody thinks of it as a camera manufacturer. The company got into photography via the iPhone, which launched in 2007 with an inbuilt camera. Initially, the camera was barely adequate and certainly nothing to write home about. But over the last decade Apple has put a lot of effort and resources into improving it, and that effort has paid off. In 2017, for example, 54% of the users uploading photos to the upmarket photo-hosting service Flickr used iPhones, while smartphones in general accounted for 50% of the photos uploaded, compared with just 12% from traditional point-and-shoot cameras.

Good though the iPhone camera became, however, it never impressed camera buffs, who disdained its small sensor, relatively poor performance in low-light conditions, sparse manual controls, etc. And, of course, it couldn’t give good bokeh. So while it fulfilled the first law of photography (which says that the best camera is always the one you happen to have with you), the iPhone couldn’t be taken seriously as an actual camera.

Until recently, this was my view too. But the arrival of the iPhone XS has sown serious doubts. The key moment came when I used it in “portrait” mode. You take the photograph as usual: the person you’re snapping grins inanely in the time-honoured way and – bingo! – there is the image. The exposure is perfect and the subject is razor-sharp – but so is the irritating background, including that hot-dog vendor you hadn’t noticed at the time. Under the image, however, there’s a graduated scale labelled Depth and a slider. Move the slider and the vendor gradually goes out of focus until he’s satisfactorily fuzzy. You’ve suddenly got good bokeh! And whereas in the analogue days whatever bokeh you got was immutably fixed the moment you pressed the shutter, now you can adjust it retrospectively.

Welcome to the world of computational photography. One of the most astonishing revelations of the recent past was that Apple had 800 engineers working just on the camera for the iPhone – which led one to wonder if Nikon or Canon had anything like the same level of R&D effort. Now we are finding out what all those geeks have been doing. Essentially, they have added a second camera (with a telephoto lens), upped the quality of the sensors and harnessed the power of the phone’s A12 processor and a specialised machine-learning module (called the “neural engine”, if you please) to analyse everything those cameras see.

In “portrait” mode, for example, the neural engine can perform up to 5 trillion operations per second. (Yes, you read that correctly.) First, it uses machine learning to detect faces in the frame. Once a face is detected, facial analysis enables it to apply appropriate portrait lighting effects to it. The A12 processor then teams up with the neural engine to separate the subject from the background, so that when you move the slider the background is rendered with increasing fuzziness.
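
How can a slider change the blur after the shot has been taken? Apple hasn’t published its pipeline, but the broad idea is depth-guided blending: the depth map produced by the two cameras tells the software how far each pixel sits from the subject, and each pixel is then mixed between the sharp frame and a blurred copy accordingly. The Python sketch below is a deliberately simplified illustration of that idea – every function, name and number in it is a hypothetical assumption, not Apple’s code.

```python
# A minimal sketch (not Apple's implementation) of depth-guided synthetic
# bokeh: given a photo and a per-pixel depth map, blend each pixel towards
# a blurred copy in proportion to its distance from the focal plane,
# leaving the subject sharp. All names and values here are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, strength):
    """image:       H x W x 3 uint8 photograph
    depth:       H x W float array, 0.0 (near) .. 1.0 (far)
    focus_depth: depth of the subject to keep sharp (e.g. a detected face)
    strength:    the "Depth" slider, 0.0 (no blur) .. 1.0 (maximum blur)
    """
    if strength <= 0:
        return image.copy()

    # One heavily blurred copy of the whole frame; blur only spatially,
    # not across the colour channels.
    blurred = gaussian_filter(image.astype(np.float32),
                              sigma=(12 * strength, 12 * strength, 0))

    # Per-pixel blend weight: 0 at the focal plane, rising towards 1
    # the further a pixel's depth is from the subject's.
    weight = np.clip(np.abs(depth - focus_depth) * 4.0, 0.0, 1.0)
    weight = (weight * strength)[..., None]   # H x W x 1, broadcasts over RGB

    out = image * (1.0 - weight) + blurred * weight
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: keep the subject at depth 0.2 sharp and let the
# background melt away as the slider value rises.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    photo = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
    depth_map = np.linspace(0.0, 1.0, 640, dtype=np.float32)[None, :].repeat(480, axis=0)
    result = synthetic_bokeh(photo, depth_map, focus_depth=0.2, strength=0.7)
```

In a real pipeline the blur radius would vary per pixel rather than being a single Gaussian pass, but the retrospective “Depth” slider works on the same principle: the depth map is captured once, and the blend can be recomputed at any strength afterwards.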

This is computation on a dizzying scale, happening in milliseconds on a device that you hold in your hand. Professional photographers will still say that while the results may be electrifying for amateurs, you still can’t get acceptable billboard-scale images from an iPhone. And maybe they’re right. But I suspect that many photographers will come round to seeing things Apple’s way – that “a picture is worth a trillion operations”. Especially when it’s got good bokeh.

What I’m reading

Rebel without paws
Ever wondered how Julian Assange is getting on in the Ecuadorian embassy? A leaked memo reproduced by The Register suggests that all is not well, with a series of house rules to follow if he is to retain his wifi privileges. And he’ll have to clean up after his cat – or else.

Standards practice
Who will teach ethics to the boy wonders of Silicon Valley, asks Kara Swisher in the New York Times. First, you have to deal with the question: “Ethics???”

Censor direction
The government is pressing ahead with requirements that porn sites check the age of users. Good idea in principle. Tricky in practice, though, as The Register reports.
