Refocusing: Could Software Solve the Depth of Field Problem on Small Sensors?

Well, in a discussion about simulating optical behaviour in out-of-focus areas of an image you can certainly use the criterion "this simulation looks realistic".
It's not meant to be a criterion for bokeh as such...
 
If we are not careful, this could become a "religious" discussion.

Problem is that many (myself included) would in many cases prefer results to be obtained by opto-mechanical means rather than by bits and bytes. This is a thin line to tread, I am well aware, but it touches upon areas of taste and sensitivity - which is why I brought HDR in earlier on. One man's "bokeh" is another's "sickly swirl", and vice versa. Neither is right, or wrong, but both feel strongly about what they believe.
 
You're probably right. I saw this more as a neutral technical discussion of a new tool.

Seeing that I do both - obtaining results "by opto-mechanical means" (as a hobby) and obtaining results "by bits and bytes" (in my day job) - I'm probably less invested in how something is created than in what the end result looks like...
 
Understood, and thanks in turn for understanding my poorly expressed point. I too work in "bits and bytes", but I derive great pleasure from shooting film with a 1920s Leica II and a 5cm Elmar. For me it is as much about the journey and the challenge of doing it for myself as it is the end result.
 
Perhaps I ought to unpack my "hmmm" a bit.

The nub of it is that using the word "realistic" in conjunction with "bokeh" or "depth of field" is problematic.

Both bokeh and DOF are artefacts that relate only to photographic images - one does not have a persistent experience of either in one's normal visual sensations (there will be exceptions of course in connection with pathological optical or neurological conditions).

You can pay attention to the "depth of field" of a photograph in a way that you can't in your own normal visual field. Clearly (because the human eye is a lensed optical system) there is a depth of field involved, but the rest of our perceptual apparatus works to make it more or less irrelevant to how we experience everyday scenes.

Tilman's use of the word "simulating" is much better - what's under discussion is manipulation intended to simulate the products of one optical system in images produced from an optical system in which those products would not otherwise appear.

What's interesting is how easily the idea that photographs are properly (in some way specially and accurately) representative of "reality" is carried in the language we use to discuss them and the assumptions underlying that language.
 
I think it's an interesting topic/possibility because at the end of the day, small sensors are most limited in practical usage by their lack of DoF flexibility (acknowledging that 'small' is a subjective term here). Every other area seems to constantly improve, but in everyday usage, DoF control in small kits seems to be the part most constrained by physics.
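
To put that physics constraint in rough numbers: at matched framing, depth of field scales roughly with the "equivalent aperture", i.e. the f-number multiplied by the crop factor. A minimal sketch in Python (the lenses and crop factors below are my own illustrative picks, not figures from this thread):

```python
# Back-of-envelope DoF arithmetic using standard crop-factor equivalence.
# At matched framing and subject distance, depth of field is governed
# roughly by the "equivalent aperture": f-number x crop factor.

def equivalent_aperture(f_number: float, crop_factor: float) -> float:
    """Full-frame-equivalent f-number for depth-of-field comparisons."""
    return f_number * crop_factor

print(f"1\"-type at f/1.8   -> f/{equivalent_aperture(1.8, 2.7):.1f} FF-equivalent")
print(f"Micro 4/3 at f/1.2 -> f/{equivalent_aperture(1.2, 2.0):.1f} FF-equivalent")
print(f"Full frame at f/2.8 -> f/{equivalent_aperture(2.8, 1.0):.1f} FF-equivalent")
```

So even a very fast lens on a small sensor lands in the same DoF territory as a modest full-frame lens, which is the gap software would have to close.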

Asking non-rhetorically, what other options do we have besides software to achieve that sort of look in a small body? Super-mega-hyper-refractive glass? Personally, I just think having the option, especially if implemented as seamlessly as possible, is better than not having it at all. And while I doubt I'd be putting it to much real use anytime soon myself, I do wonder whether I will be saying the same thing in a decade, should the technology take off.
 
There is more than one kind of "opto-mechanical" bokeh
What is Bokeh? Over 50 Lenses rated for their out of focus blur. | Steve Huff Photo

What criteria would one apply to differentiate high-quality bokeh obtained by post-processing from that obtained directly from the lens?

I have to say, the more I get used to the tool's quirks and learn to work around them, the more I'm impressed. Effective resolution on the camera is still too low and I do need to make patches here and there where the algorithm gets it wrong, but I think it actually does some of the best work I've seen at imitating bokeh through software. I was out on the Brooklyn Bridge with a Fuji X-T1 + 56mm today, but my battery died right as we were taking pictures, so I ended up using my phone for one shot instead. I'm perfectly happy sharing this at low resolutions.

[attached image: 14333830252_de1181d8e7_z.jpg]
 
First time I noticed this thread... I actually had the same thought quite a while back.

Overall IQ is getting to a point where small-sensor cameras have plenty of resolution, detail, and sharpness, so shallow DoF is one of the few remaining differentiators for larger-sensor cameras. Given that we're already seeing software solutions for adding OOF blur in post, plus technological innovations like the Lytro (note what a leap it already was from the original Lytro model to the Illum), I suspect that eventually we'll see this become a built-in feature for cameras.

If you look at the GH4, for example, Depth-From-Defocus technology is already being used for focusing purposes, calculating the distance of focal planes. I think it's feasible that, with enough advances in image processing, this could be used to create artificial shallow DoF in-camera. I could see that quickly becoming a very popular feature for some of the smaller point-and-shoots, or compact system cameras.
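
As a toy sketch of what that in-camera step might look like once a depth map exists (this is generic depth-dependent layered blur, not Panasonic's actual DFD pipeline; the function and file names are hypothetical):

```python
# Toy depth-dependent blur: the general idea behind in-camera fake shallow DoF,
# NOT Panasonic's actual DFD pipeline. Assumes a per-pixel depth map is
# already available (e.g. from depth-from-defocus or stereo).
import numpy as np
import cv2

def fake_shallow_dof(image, depth, focus_depth, max_radius=15, layers=6):
    """Blend progressively blurred copies of `image`, weighted by how far
    each pixel's depth lies from the chosen focus plane."""
    # Crude circle-of-confusion stand-in: blur radius grows linearly with
    # distance from the focus plane (depth assumed normalised to [0, 1]).
    coc = np.clip(np.abs(depth - focus_depth) * max_radius, 0, max_radius)
    result = image.astype(np.float32)
    for i in range(1, layers + 1):
        radius = i * max_radius / layers
        k = 2 * int(radius) + 1                      # odd Gaussian kernel size
        blurred = cv2.GaussianBlur(image, (k, k), 0).astype(np.float32)
        # Pixels whose CoC reaches this layer's radius take the blurrier copy.
        mask = (coc >= radius).astype(np.float32)[..., None]
        result = mask * blurred + (1.0 - mask) * result
    return result.astype(np.uint8)

# Hypothetical usage: subject focused at normalised depth 0.3.
# img = cv2.imread("portrait.jpg")
# depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
# out = fake_shallow_dof(img, depth, focus_depth=0.3)
```

Blending a handful of pre-blurred layers like this is cheap enough to be plausible in-camera, at the cost of visible banding at layer boundaries, which is presumably where the "several iterations to get believable" part comes in.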

My prediction is that it will take some time before we see this appear, and it will likely take several iterations to get to a point where it's completely effective or believable. But I strongly suspect it's coming, since it would narrow the gap between formats and likely mean improved sales for crop sensor mirrorless.
 