One of the questions that preoccupies too much of my headspace is: Why do many photographers seem wary of computational photography? AI technologies offer a lot of advantages: they make cameras see better in the dark, capture larger dynamic ranges of exposure and color, pinpoint focus by automatically locking onto faces or eyes, and save photographers time by speeding up the culling and editing process. Those all sound like wins, right?
And yet, photographers seem reluctant to fully embrace AI and machine learning tools. It’s not that we reject progress: Photography itself is a constantly evolving tale of technological advancement—without technology, there would be no photography.
Instead, I think it’s that we don’t always know what to expect when invoking many AI features.
Most photography is fairly predictable and, importantly, repeatable. For example, during the capture process, shooting with a slower shutter speed increases exposure. Upping ISO adds even more exposure but creates digital noise. When you adjust settings on a camera, you know what you’re going to get.
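To put a number on that predictability, the math of manual exposure is simple stop arithmetic: doubling the shutter time or the ISO adds one stop of light, every time. Here's a rough Python sketch of that relationship; the baseline settings are just illustrative, not anything a particular camera assumes.

```python
# Illustrative sketch: why manual exposure feels repeatable.
# Each doubling of shutter time or ISO adds one stop of exposure;
# the relationship is fixed arithmetic, not a judgment call.
import math

def exposure_stops(shutter_s: float, iso: float,
                   base_shutter_s: float = 1/125, base_iso: float = 100) -> float:
    """Stops of exposure gained relative to an (illustrative) baseline setting."""
    return math.log2(shutter_s / base_shutter_s) + math.log2(iso / base_iso)

# Slowing the shutter from 1/125s to 1/30s and raising ISO 100 -> 400:
print(round(exposure_stops(1/30, 400), 2))  # ~4.06 stops brighter, every time
```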
By contrast, when you capture a scene using a modern smartphone, it blends exposures and adjusts specific areas of a scene to balance the overall look. One manufacturer’s algorithms determine which areas to render in which ways, such as how saturated a scene will look, based on what the camera perceives. The algorithms of another company’s phone may render the same scene differently.
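For a sense of where that variability comes from, here's a toy Python sketch of blending bracketed exposures. It is not any manufacturer's actual pipeline; the weighting curve, and its midpoint and sigma parameters, are invented stand-ins for the tuning decisions each company makes.

```python
# Toy sketch of exposure blending (not any phone's real pipeline).
# Bracketed frames are combined with per-pixel weights; the weighting
# curve is where a manufacturer's "look" gets baked in.
import numpy as np

def blend_exposures(frames, midpoint=0.5, sigma=0.2):
    """Weight each pixel by how close it sits to a 'well-exposed' midpoint."""
    stack = np.stack(frames).astype(np.float64)    # shape (n, h, w), values 0..1
    weights = np.exp(-((stack - midpoint) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-8
    return (weights * stack).sum(axis=0)

# Two vendors choosing different midpoints or sigmas will render the
# same bracketed frames differently.
frames = [np.random.rand(4, 4) * s for s in (0.5, 1.0)]   # fake dark/bright frames
print(blend_exposures(frames, midpoint=0.45).shape)        # (4, 4)
```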
On the editing side, making adjustments is usually similarly predictable, from increasing exposure to balancing color. Sure, there’s variability in how some apps’ imaging engines apply color, but in general, you know what you’re going to get when you sit down to edit.
Machine learning introduces an element of unpredictability to editing. Sometimes you know what the software will do, but often it isn't apparent until you see the result.
I realize I’m speaking in broad strokes here, so let me offer some examples (and counter-examples).
Perception and identification
A signature characteristic of AI editing features is the ability to recognize what’s in an image. The software identifies features such as the sky, people, foliage, buildings, and whatever else its model is trained to perceive. Based on that, it can take action on those areas.
However, at the outset, you don’t know which areas will be recognized. For example, the new AI-assisted selection tools in Lightroom and Lightroom Classic do a great job of identifying a sky or a subject, in my experience. But each time you click “Select Subject,” you don’t know whether the software’s idea of the subject matches yours, or how much spill will be selected outside it.
Now, the point of such a tool is to save you time. You could take that image into Photoshop and use its tools to make an incredibly accurate selection. Doing it in Lightroom gets you 90% of the way, and you can clean up the selection.
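To be concrete about what that cleanup looks like, here's a generic Python sketch of refining a soft AI mask. It is not Lightroom's internal mechanism, just an illustration of thresholding away spill and feathering the edge of whatever the tool selected.

```python
# Generic sketch of cleaning up a soft AI-generated mask (values 0..1).
# Not how Lightroom works internally; it just illustrates the
# "get 90% of the way, then refine" workflow described above.
import numpy as np

def tidy_mask(mask, threshold=0.5, feather_px=2):
    """Threshold away low-confidence 'spill', then soften the edge slightly."""
    hard = (mask >= threshold).astype(np.float64)
    # Cheap box-blur feather so the adjustment doesn't end at a hard seam.
    for axis in (0, 1):
        for _ in range(feather_px):
            hard = (hard + np.roll(hard, 1, axis=axis) + np.roll(hard, -1, axis=axis)) / 3
    return np.clip(hard, 0.0, 1.0)

soft_mask = np.random.rand(6, 6)   # stand-in for an AI "Select Subject" mask
print(tidy_mask(soft_mask).round(2))
```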
In Luminar AI, the object selection is opaque. The app analyzes a photo when you open it, and you have to trust that when you use a tool such as Sky Enhancer, it will apply to the sky. If the app doesn’t think a sky exists, the sky-editing tools aren’t active at all. If a sky is detected, you have to go with the areas it thinks are skies, with limited options for adjusting the mask.
(The upcoming Luminar Neo will have improved masking and layer tools, but it currently exists as a limited early-access beta, which I haven’t used.)
For an extreme example, consider the Landscape Mixer neural filter in Adobe Photoshop. I recognize upfront that this isn’t entirely fair, because it’s a feature still in development, and it’s also designed as something fun and artistic; no one is going to apply a winter scene to a summer photo and pass it off as genuine. But my point is that when you apply one of the presets to a photo, you don’t know what you’re going to get until the result appears.
The learning part of machine learning
The other reason I think photographers are hesitant to fully embrace AI technologies is the way the state of the art is advancing. Improving algorithms and performance is a given in software development, and it’s what we expect when we upgrade to new versions of apps. Sometimes that progress doesn’t go the way we expect.
As an example, in an early release version of Luminar AI, I used the Sky AI tool to change a drab midday scene into a more dramatic sunset. One of the improvements Luminar AI made over its predecessor was the ability to detect water in a scene and apply the new sky’s reflection to that area.
The version I edited turned out pretty well (except for a spot in the surf where the highlight is blown out), with a good distribution of the light in the water.
A short while later, Skylum released an update to Luminar AI that, among other things, improved the recognition of reflections. When I opened the same image after applying the update, the effect was different, even though I hadn’t moved a slider since my original edit. And now I can’t replicate the tones in that first edit. In fact, I’m not able to position the sky in the same way, which may be part of why the reflection renders differently. The repeatability of my earlier edit went out the window.
It’s entirely possible this was due to a bug in how Luminar AI handled reflections, before or after the update. But it could also be due to the “learning” part of machine learning. The models the software uses are trained by analyzing lots of other similar photos, which could be high quality or fodder. We don’t know.
I know that sounds like I’m resistant to change or that I don’t believe in advancing technology, but that’s not the case. As a counterpoint, let me draw your attention to “Process versions” in Lightroom Classic, found in the Calibration panel. The Process version is the engine used to render images in the Develop module. As imaging technology improves, Adobe implements new Process versions to add features and adapt to new tools. The current incarnation is Version 5.
But the other Process versions are still available. When I edit an image that was imported when Version 3 was the latest, I can get the same results as I did then. Or, I can apply Version 5 and take advantage of tools that didn’t exist then. I have a much better idea of what to expect.
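The design idea is worth spelling out: each image stays pinned to the rendering engine it was edited with, and upgrading is an explicit choice you make. Here's a minimal Python sketch of that pattern; the function names, the fake file name, and the behavior are invented for illustration, not Adobe's code.

```python
# Minimal sketch of the idea behind "Process versions": each edit is
# rendered by the engine version recorded with the image, so an old
# edit keeps producing the same result. Not Adobe's implementation.

def render_v3(image, settings):
    return f"v3 render of {image} with {settings}"

def render_v5(image, settings):
    return f"v5 render of {image} with {settings}"

ENGINES = {3: render_v3, 5: render_v5}

def develop(image, settings, process_version=5):
    """Use the engine the image is pinned to; upgrading is an explicit choice."""
    return ENGINES[process_version](image, settings)

# Hypothetical file name, used only for illustration.
print(develop("DSC_0123.NEF", {"exposure": +0.5}, process_version=3))
```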
Don’t get me wrong: in general, I’m a look-forward kind of guy, and I’m thrilled at a lot of the capabilities that AI technologies are bringing. But we can’t ignore that the evolutionary cycle of computational photography is fluid and still in motion. And I think that’s what makes photographers hesitant to embrace these technologies.