Outsmart your iPhone camera’s overzealous AI

A green iPhone camera on a red background
Dan Bracaglia

Last weekend The New Yorker published an essay by Kyle Chayka with a headline guaranteed to pique my interest and raise my hackles: “Have iPhone Cameras Become Too Smart?” (March 18, 2022).

Aside from being a prime example of Betteridge’s Law of Headlines, it feeds into the idea that computational photography is a threat to photographers or is somehow ruining photography. The subhead renders the verdict in the way that eye-catching headlines do: “Apple’s newest smartphone models use machine learning to make every image look professionally taken. That doesn’t mean the photos are good.”

A bench on a beach with a blue sky.
This image was shot on an iPhone 13 Pro using the Halide app and saved as a Raw file. It was then processed in Adobe Camera Raw. Jeff Carlson

The implication there, and a thrust of the article, is that machine learning is creating bad images. It’s an example of a type of nostalgic fear contagion that’s increasing as more computational photography technologies assist in making images: The machines are gaining more control, algorithms are making the decisions we used to make, and my iPhone 7/DSLR/film SLR/Brownie took better photos. All wrapped in the notion that “real” photographers, professional photographers, would never dabble with such sorcery.

A bench on a beach with a blue sky.
Here’s the same scene shot using the native iPhone camera app, straight out of camera, with all of its processing. Jeff Carlson

(Let’s set aside the fact that the phrase “That doesn’t mean the photos are good” can be applied to every technological advancement since the advent of photography. A better camera can improve the technical qualities of photos, but doesn’t guarantee “good” images.)

I do highly recommend that you read the article, which makes some good points. My issue is that it ignores—or omits—an important fact: computational photography is a tool, one you can choose to use or not.

Knowing You Have Choices

A sandy beach with wood pylons.
Another iPhone 13 Pro photo, captured straight out of camera. Jeff Carlson


To summarize, Chayka’s argument is that the machine learning features of the iPhone are creating photos that are “odd and uncanny,” and that on his iPhone 12 Pro the “digital manipulations are aggressive and unsolicited.” He’s talking about Deep Fusion and other features that record multiple exposures of the scene in milliseconds, adjust specific areas based on their content, such as skies or faces, and fuse it all together to create a final image. The photographer just taps the shutter button and sees the end result, without needing to know any of the technical elements such as shutter speed, aperture, or ISO.
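To make the “fuse it all together” step a bit more concrete, here is a deliberately simplified sketch, in Swift, of weighted multi-exposure fusion: each pixel is weighted by how well exposed it is, and the bracketed frames are averaged with those weights. Apple’s actual Deep Fusion pipeline is proprietary and far more sophisticated; the function names and the Gaussian weighting here are illustrative assumptions, not Apple’s math.

```swift
import Foundation

// Toy illustration only: frames are single-channel arrays of luminance values (0...1).

/// Weight a pixel by how close it is to mid-gray, so well-exposed pixels
/// contribute more than blown-out or crushed ones. (Illustrative choice.)
func exposureWeight(_ luminance: Double) -> Double {
    let sigma = 0.2
    let d = luminance - 0.5
    return exp(-(d * d) / (2 * sigma * sigma))
}

/// Fuse several aligned frames of the same scene into one, pixel by pixel.
func fuseFrames(_ frames: [[Double]]) -> [Double] {
    guard let first = frames.first else { return [] }
    return (0..<first.count).map { i in
        var weightedSum = 0.0
        var totalWeight = 0.0
        for frame in frames {
            let w = exposureWeight(frame[i])
            weightedSum += w * frame[i]
            totalWeight += w
        }
        return totalWeight > 0 ? weightedSum / totalWeight : first[i]
    }
}

// Three bracketed "exposures" of a four-pixel scene: dark, normal, bright.
let dark   = [0.02, 0.10, 0.30, 0.45]
let normal = [0.10, 0.40, 0.60, 0.85]
let bright = [0.30, 0.70, 0.90, 0.99]
print(fuseFrames([dark, normal, bright])) // leans toward the best-exposed values
```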

An underexposed photo of a sandy beach with wood pylons.
Here’s the same angle (slightly askew) captured using the Halide app and saved as a raw file, unedited. Jeff Carlson

You can easily bypass those features by using a third-party app such as Halide or Camera+, which can shoot using manual controls and save the images in JPEG or raw format. Some of the apps’ features can take advantage of the iPhone’s native image processing, but you’re not required to use them. The only manual control not available is aperture because each compact iPhone lens has a fixed aperture value.
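For the curious, here is a minimal sketch of what a raw-capable third-party app can do through Apple’s public AVFoundation framework: lock in a manual shutter speed and ISO, then request a Bayer raw (DNG) capture. This is not Halide’s or Camera+’s actual code, and session setup and error handling are pared down to the essentials.

```swift
import AVFoundation

// Minimal sketch of manual exposure plus raw capture on iOS.
final class RawCaptureController: NSObject, AVCapturePhotoCaptureDelegate {
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    func configure() throws {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        session.beginConfiguration()
        session.addInput(try AVCaptureDeviceInput(device: device))
        session.addOutput(photoOutput)
        session.commitConfiguration()

        // Manual exposure: 1/250 s at ISO 100 (clamped to what the hardware allows).
        try device.lockForConfiguration()
        let iso = min(max(100, device.activeFormat.minISO), device.activeFormat.maxISO)
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 250),
                                     iso: iso, completionHandler: nil)
        device.unlockForConfiguration()
    }

    func captureRaw() {
        // Assumes the session is already running. Pick a Bayer raw format if available.
        guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else { return }
        let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let dngData = photo.fileDataRepresentation() else { return }
        // Write the DNG somewhere the user can reach it; editing happens later in
        // Camera Raw, Lightroom, or whatever the photographer prefers.
        try? dngData.write(to: FileManager.default.temporaryDirectory
                                .appendingPathComponent("capture.dng"))
    }
}
```

The point is simply that the raw data lands on disk without Apple’s default rendering baked in; how the photo is ultimately developed is back in the photographer’s hands.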

That fixed aperture is also why the iPhone includes Portrait Mode, which detects the subject and artificially blurs the background to simulate the soft-background, shallow depth-of-field effect created by shooting with a bright lens at f/1.8 or wider. The small optics can’t replicate it, so Apple (and other smartphone developers) turned to software to create the effect. The first implementations of Portrait Mode often showed noticeable artifacts, but the technology has improved over the last half-decade to the point where it’s not always apparent the mode was used.
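Conceptually, the trick reduces to blurring the photo through a depth-derived mask so the background melts away while the subject stays sharp. Here’s a rough Core Image sketch of that general mask-driven blur idea in Swift; it is not Apple’s Portrait Mode pipeline, and the function name, input images, and blur radius are placeholders.

```swift
import CoreImage

// Rough sketch: blur most where the mask is brightest (the background),
// leave the subject (dark areas of the mask) sharp.
func simulatedShallowDepthOfField(photo: CIImage, depthMask: CIImage) -> CIImage? {
    let filter = CIFilter(name: "CIMaskedVariableBlur")
    filter?.setValue(photo, forKey: kCIInputImageKey)
    filter?.setValue(depthMask, forKey: "inputMask")
    filter?.setValue(12.0, forKey: kCIInputRadiusKey) // "fast lens"-style blur strength
    return filter?.outputImage
}
```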

But, again, it’s the photographer’s choice whether to use it. Portrait Mode is just another tool. If you don’t like the look of Portrait Mode, you can switch to a DSLR or mirrorless camera with a decent lens.

A sandy beach with wood pylons.
The same Halide raw photo, quickly edited in Adobe Lightroom. Jeff Carlson

Algorithmic Choices

More apt is the notion that the iPhone’s processing creates a specific look, identifying it as an iPhone shot. Some images can appear to have exaggerated dynamic range, but that’s nothing like the early exposure-blending processing that created HDR (high dynamic range) photos where no shadow was left un-brightened.

Each system has its own look. Apple’s processing, to my eye, tends to be more naturalistic, retaining darks while avoiding blown-out areas in scenes that would otherwise be tricky for a DSLR. Google’s processing tends to lean more toward exposing the entire scene with plenty of light. These are choices made by the companies’ engineers when applying the algorithms that dictate how the images are developed.

A lake scene with a blue sky and tree in the foreground.
The iPhone 13 Pro retains blacks in the shadows of the tree, the shaded portions of the building on the pier, and the darker blue of the sky at the top. Jeff Carlson

The same applies to camera manufacturers: Fujifilm, Canon, Nikon, and Sony cameras each have their own “JPEG look,” which is often the reason photographers choose a particular system. In fact, Chayka acknowledges this when reminiscing over “…the pristine Leica camera photo shot with a fixed lens, or the instant snapshot with its spotty exposure.”

The article really wants to cast the iPhone’s image quality as some unnatural, synthetic version of reality, photographs that “…are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning.” That’s a lovely turn of phrase, but it comes at the end of a discussion of the iPhone’s Photographic Styles feature, which is designed to give the photographer more control over the processing. If you prefer warmer images, you can increase the warmth and select that style when shooting.

A lake scene with a blue sky and tree in the foreground.
The Pixel 6 Pro differs slightly in this shot, opening up more of the image in the building and, to a lesser extent, the blue of the sky at the top. Jeff Carlson

It’s also amusing that the person mentioned at the beginning of the article didn’t like how the iPhone 12 Pro rendered photos, so “Lately she’s taken to carrying a Pixel, from Google’s line of smartphones, for the sole purpose of taking pictures.”

The Pixel employs the same types of computational photography as the iPhone. Presumably, this person prefers the look of the Pixel over the iPhone, which is completely valid. It’s their choice.

Choosing with the Masses

I think the larger issue with the iPhone is that most owners don’t know they have a choice to use anything other than Apple’s Camera app. The path to using the default option is designed to be smooth; in addition to prominent placement on the home screen, you can launch it directly from an icon on the lock screen or just swipe from right to left when the phone is locked. The act of taking a photo is literally “point and shoot.”

More important, for millions of people, the photos it creates are exactly what they’re looking for. The iPhone creates images that capture important moments or silly snapshots or any of the unlimited types of scenes that people pull out their phones to record. And computational photography makes more of those images turn out decent.

Of course not every shot is going to be “good,” but that applies to every camera. We choose which tools to use for our photography, and that includes computational photography as much as cameras, lenses, and capture settings.
