Refocusing: Could Software Solve the Depth of Field Problem on Small Sensors?

napilopez (Napier Lopez, NYC)

[image]

This was shot on a 4-megapixel cellphone camera.

Cameras get better. Every generation, features are added. Every two or three generations, sensors improve dramatically. Resolution, noise levels, dynamic range, and color fidelity have reached a point where many photographers feel they don't need a large sensor to get image quality that's "good enough" for their uses. But there's one area where photography has refused to budge: depth of field. Unfortunately, physics is stubborn; the wider your field of view and the farther away your subject, the bigger your sensor or the brighter your lens needs to be if you want any sort of shallow depth of field.

But maybe it doesn't have to be this way. After all, virtually every recent compact camera and mirrorless system incorporates some type of software correction to compensate for the physical limitations of its optics: chromatic aberration, distortion, vignetting, and so on. Perhaps a related technology could be used not just to correct flaws, but to actually enhance a sensor and lens combination.

[image]

[image]

Of course, adding bokeh in post isn't a new idea. Before I bought my first real camera, I was using Photoshop plugins to imitate a shallow depth-of-field look in my cellphone pictures. Sometimes the effect would turn out surprisingly realistic; indeed, I still sometimes add just a little pseudo-bokeh when I feel my kit didn't give me enough. But this method is far too time-consuming and cumbersome for any image with elements at varying depths. Instagram and other tools, meanwhile, offer tilt-shift and similar effects that quickly imitate shallow depth of field, but the results almost always look incredibly fake.

The solution to this artificiality could lie in any of several methods for measuring depth within an image and then mapping that information to different blur values. Arguably the most exciting of these is Lytro's light-field technology, which feels straight out of a sci-fi movie: it lets you refocus an image after it has been taken. By capturing angular information about incoming light in addition to the usual color and intensity, light-field cameras can perform a number of cool tricks. You can create 3D photos from a single capture and lens, design focus animations to direct the eyes to different points of interest, interact with photos, and, indeed, make the depth of field appear shallower than it actually was.
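To make that depth-to-blur mapping concrete, here's a minimal sketch of how it could work, assuming you already have a normalized per-pixel depth map from one of these methods; the function and parameters are my own illustration, not any vendor's actual pipeline:

```python
# Toy synthetic refocus (illustrative only): slice the scene into depth
# layers, blur each layer in proportion to its distance from a chosen
# focal plane, and composite the layers back together.
# Assumes OpenCV and NumPy; `image` is an HxWx3 array and `depth` is an
# HxW array normalized so 0 is near and 1 is far.
import cv2
import numpy as np

def synthetic_refocus(image, depth, focus=0.5, max_blur=20, layers=8):
    depth = np.clip(depth, 0.0, 1.0 - 1e-6)  # keep every pixel inside a bin
    img = image.astype(np.float32)
    result = np.zeros_like(img)
    edges = np.linspace(0.0, 1.0, layers + 1)
    for near, far in zip(edges[:-1], edges[1:]):
        mid = (near + far) / 2.0
        # Blur radius grows with distance from the focal plane.
        radius = int(round(max_blur * abs(mid - focus)))
        ksize = 2 * radius + 1  # Gaussian kernel size must be odd
        blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)
        mask = ((depth >= near) & (depth < far)).astype(np.float32)
        result += blurred * mask[..., None]  # each pixel lands in exactly one layer
    return result.astype(image.dtype)
```

A real implementation would need per-pixel blur and careful occlusion handling at depth edges (exactly where the phone apps tend to fall apart), but even this crude layered version conveys the principle.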

[image]

This is one of my favorite photos; it had a much wider DoF before my processing.

While the original Lytro camera was more of a commercial proof-of-concept than a practical device, the technology has already started to have a huge influence. There have been rumors of Apple wanting to incorporate Lytro technology into a future device, and seemingly every flagship Android and Windows phone now features its own technique for refocusing an image after capture and adding artificial bokeh. The new Google Camera app does it by asking you to move the camera to capture parallax information, Samsung does it with what essentially amounts to focus bracketing, and the HTC One M8 actually features a secondary rear camera for the sole purpose of capturing depth information.

In practice, none of these methods is terribly reliable. HTC's approach makes practical sense, but its out-of-focus transitions are too harsh and its bokeh too exaggerated. Google's effect, on the other hand, generally looks more realistic and thankfully lets you control the amount of added bokeh (subtlety works best), but it's too easily confounded by any motion within the frame. In both cases, additional processing time is needed to render the effect (much more so with the Google option). And still, I find myself using them. Once you get used to what works and what doesn't, it's a surprisingly convenient way to get natural-looking 'bokehlicious' images out of a cellphone camera.

[image]
[image]

Besides, it's important to remember that we're talking about first-generation technology. Right now it's being pushed by cellphone manufacturers for average-to-mediocre cellphone cameras; it's meant primarily for the Instagram selfie crowd. Imagine, then, the potential if a serious camera maker were to take hold of the technology and run with it. Perhaps Lytro could do it with its upcoming Illum: it already uses clever light-field tricks to fit an insane 30-250mm F2 lens on a 1-inch sensor, in a body the size of a mirrorless zoom kit.

Take a minute to think about it, and endless possibilities come to mind. In my ideal world, this recorded depth information would become standardized. You could then use any third-party tool like Lightroom to adjust depth-of-field parameters the way you adjust white balance or exposure. Until then, I could imagine a specialized Bokeh or Portrait mode (just like the HDR or Panorama spots on a mode dial) to apply these effects. In fact, I'd say it's just a matter of time until a company like Samsung starts featuring this tech in its more serious cameras. And if Apple were to introduce such a feature on its phones, you could virtually guarantee it would pop up everywhere else.
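The optics behind such a control are already well understood, too. As a back-of-the-envelope illustration (the function name and numbers here are mine, purely hypothetical), the thin-lens circle-of-confusion formula tells you how big a blur disc any point should produce once you know its distance, which is all a "depth of field slider" would really need:

```python
# Thin-lens circle of confusion (illustrative sketch): the blur-circle
# diameter on the sensor for a point at subject_m when a lens of
# focal_mm and f_number is focused at focus_m.
def coc_diameter_mm(subject_m, focus_m, focal_mm, f_number):
    f = focal_mm / 1000.0      # focal length in metres
    aperture = f / f_number    # entrance-pupil diameter in metres
    coc = aperture * f * abs(subject_m - focus_m) / (subject_m * (focus_m - f))
    return coc * 1000.0        # blur-circle diameter in millimetres

# Example: simulating a 50mm f/1.8 focused at 2m, with a subject at 5m:
print(coc_diameter_mm(5.0, 2.0, 50.0, 1.8))  # roughly 0.43mm on the sensor
```

Divide that diameter by the pixel pitch and you have a per-pixel blur radius to feed into a renderer; the "aperture" you dial in no longer has to match the lens you actually shot with.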

[image]

There are a couple of flaws here, but for the most part the effect is subtle enough not to arouse any suspicion.

Playing around with the rudimentary lens-blur effect in the Google Camera app has gotten me to use my phone's camera a lot more than I have in years. And while almost all the cellphone photos in this article have heavy processing, the truth is I don't think most unaware viewers would give them a second thought if I told them they'd been shot with a DSLR. Of course, the resolution is low and the flaws are readily visible if you look closely (although easily fixable in LR), but for photos I took on my phone and spent 30 seconds editing, I'm happy. Heck, I can't even get this sort of wide-angle depth of field with anything in my professional kit anyway.

[image]

Perhaps it's in bad taste to consider the lack of shallow depth of field on small sensors a "problem" in the first place. After all, every system has its limitations; learning to compromise within the different facets of photography is part of the beauty of this art. You shouldn't need shallow depth of field to create a good picture. But at the same time, like HDR or stitched panoramas, post-production bokeh could add significant value by negating some of a camera's limitations.

As they say, the best camera is the one you have with you. And if that camera can fit in your pocket but still give you shallow depth of field, even better, right?

[image]
 
Wide-angle shallow DoF is a bit of a photography nirvana for me, and although I also tend to be a purist and generally dislike shooting with a phone (a slightly thicker phone with a Ricoh GR-like layout might fix that), I don't think I could resist a phone that could reliably and convincingly do just that. I am following these developments with great interest.
 
Slightly more on-topic: for those who are interested in this sort of manipulation, you could look at the Orton effect, which was originally developed by a photographer called Michael Orton using transparencies, but is reasonably straightforward to reproduce in Photoshop, the GIMP, or similar.

The effect is somewhat similar, but as Bill points out, a little will go a very long way indeed ...
 
It doesn't matter to me whether an effect is subtly or heavily applied, as it's the presented image that counts.

I do agree that subtle is often better, but it's not always the case.

Technology is wonderful, but it's a mistake to think you are ever capturing the truth when the camera is processing it all for you.
Additional manipulation is icing on the cake for me, and I get a lot of fun from using what I have that's been made available for free or as part of the camera software.
 
Sure, the effect shouldn't be overused. HDR can be a great tool, but it's usually overdone. I say usually, but there are probably loads of photos where I don't even notice that there's HDR processing going on; those are the good ones. As Napier pointed out, most images here, at least viewed at this size, look perfectly natural. A technology like light field, where the distance to the subject can be calculated for each pixel, should be able to produce results identical to using the diaphragm as DoF control.
 
Should garlic be banned from the kitchen because most people overdo it and kill any other flavor in the dish?

Maybe we should ban alcohol because some refuse to use it in moderation.

Death to the buffet will be next.


Of course, all those suggestions are facetious. So don't kill this technological advancement because some will abuse it. We must continue to look past the abuses and seek out the images we enjoy. Even without faux blur and HDR, there would still be truckloads of worthless images to sift through. If people are going to make bad images, they will find a way to do it with whatever they have at their disposal. The technology doesn't kill images; the people who abuse it do.
 
Haha, I admit I was somewhat remiss in not addressing this myself. I'm a huge fan of bokeh with "character": harsh bokeh, swirly bokeh, anamorphic bokeh, etc. And so far, anything made this artificial way does seem largely characterless. But it's hard for me not to imagine this taking hold in the future, given the way these cellphone features so often trickle down to actual cameras. The hard part is figuring out the time frame for when it becomes practical for everyday use, and the technology needed to get there. Will it be five years? 10 years? 20?

And besides, many people (myself included) already overuse real bokeh. Whatever tool you're using, it's a matter of tastefulness. Movies already have CG sequences with fake bokeh, many of which we generally don't notice at all. Like bartjeej mentioned, the best HDR images are often the ones where we don't notice it at all.

Also, some further reading people might find interesting:

Google's Research Blog on the new Camera App: Lens Blur in the new Google Camera app

DPReview on Lytro's future: Light field cameras: Focusing on the future

Amin's old review of Alien Skin Bokeh 2 (the app I used for years for this effect): Alien Skin Bokeh 2 Review
 
...Movies already have CG sequences with fake bokeh, many of which we generally don't notice at all. Like bartjeej mentioned, the best HDR images are often the ones where we don't notice it at all.
...

Oh, how very true. The first series of Downton Abbey (the last I watched any of) was particularly bad for "wandering bokeh", totally against the laws of optics, let alone taste. Thing is, I think they were doing it at least in part to blur out "unfortunate" anachronistic details such as fire exit signs and LED light fittings...
 
If they could give a small sensor realistic-looking medium or large format DoF, I would like that. I really like the falloff of focus on, say, a medium format or larger camera. I agree that it would be overdone by some, as HDR is today. As I'm not a fan of "harsh" bokeh, I would welcome it if it were smooth. It would be rather fun to pick your format size and aperture for the effect. This from the person who has a camera on its way with as few "gimmicks" as possible. It doesn't even shoot video *gasp*.
 
Give it a few years and I think this will be a filter like any other Instagram/film look, with different options and looks.
Creating a depth map from a pair of stereo images (at roughly 3.5 megapixels) takes about 3 seconds on my Dell Xeon workstation at work. The resulting depth map is pretty rough, so there are limits to how much you can do with it.

But for anything at web resolution I can see this coming. There are a lot of clever people working at the phone makers (there's money to be made :) ), so they may come up with clever solutions sooner than we think...
You'll probably get a selection of different crazy/swirly/... bokehs first (before you get "subtle" or "realistic") ;)
But once you have a defocusing tool that replicates photographic defocus, it's mostly a matter of feeding it different bokeh kernels (shapes) to simulate different results, as in the sketch below...
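For the curious, the kernel part really is the easy bit. A rough sketch, assuming OpenCV and NumPy (the shapes, sizes, and file path are arbitrary examples):

```python
# Defocus as convolution: swapping the kernel shape swaps the bokeh
# "character". Illustrative only; input.jpg is a placeholder path.
import cv2
import numpy as np

def disc_kernel(radius):
    # Circular aperture: smooth, round bokeh discs.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    k = (x * x + y * y <= radius * radius).astype(np.float32)
    return k / k.sum()

def polygon_kernel(radius, sides=6):
    # N-bladed aperture: polygonal highlights, like many real lenses.
    size = 2 * radius + 1
    angles = np.arange(sides) * 2.0 * np.pi / sides
    pts = np.stack([radius + radius * np.cos(angles),
                    radius + radius * np.sin(angles)], axis=1)
    k = np.zeros((size, size), np.uint8)
    cv2.fillConvexPoly(k, pts.round().astype(np.int32), 1)
    return k.astype(np.float32) / k.sum()

img = cv2.imread("input.jpg").astype(np.float32)
round_bokeh = cv2.filter2D(img, -1, disc_kernel(12))      # classic disc
hex_bokeh = cv2.filter2D(img, -1, polygon_kernel(12, 6))  # hexagonal
```

A finished tool would vary the kernel size per pixel from the depth map instead of blurring uniformly, but the bokeh "look" itself comes almost entirely from the kernel shape.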
 