Here's how Google uses dual cameras for the Pixel 4's portrait mode

Google Pixel 4 XL lying face-down on a table (Image credit: Joe Maring / Android Central)

What you need to know

  • The Pixel 4 is capable of taking portrait shots of subjects that are up close and far away.
  • This is achieved by using the Pixel 4's dual cameras along with dual-pixel technology.
  • Google is also able to pull off a bokeh effect that looks more like what you'd get from a real SLR.

The biggest selling point for Google's Pixel 4 is its camera performance. This has been the case since the first Pixel came out in 2016, and that superiority has continued year after year. The Pixel 4 stands out as the first Pixel to ship with two rear cameras: a 12MP primary camera and a 16MP telephoto camera.

In addition to enabling you to zoom in on subjects that are far away, the telephoto lens also allows for better portrait shots. In a recent post published to the Google AI Blog, Google took some time to explain how the Pixel 4's improved portrait photos work behind the scenes.

With the Pixel 2 and Pixel 3, Google achieved portrait photos using something called "dual-pixel auto-focus." This was used to estimate how far objects in the scene are from the camera, with Google explaining that it works by:

Splitting every pixel in half, such that each half pixel sees a different half of the main lens' aperture. By reading out each of these half-pixel images separately, you get two slightly different views of the scene.

With a second camera in tow, the Pixel 4 is able to capture much more depth information, allowing for better portrait images. Google further explains:

The Pixel 4's wide and telephoto cameras are 13 mm apart, much greater than the dual-pixel baseline, and so the larger parallax makes it easier to estimate the depth of far objects. In the images below, the parallax between the dual-pixel views is barely visible, while it is obvious between the dual-camera views.
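To see why a larger baseline matters, here's a rough back-of-the-envelope sketch in Python. For a point at a given distance, the shift (parallax) between the two views on the sensor scales with the baseline, so the 13 mm dual-camera pair produces a much larger, easier-to-measure shift than the tiny dual-pixel split. The focal length and the dual-pixel baseline below are assumed values for illustration, not figures from Google.

```python
# Illustrative only: on-sensor disparity for a point at a given depth.
# The 13 mm dual-camera baseline comes from Google's post; the ~1 mm
# dual-pixel baseline and the 4.4 mm focal length are assumptions.

def disparity_mm(baseline_mm: float, focal_mm: float, depth_mm: float) -> float:
    """Approximate disparity on the sensor (in mm) for a fronto-parallel point."""
    return baseline_mm * focal_mm / depth_mm

focal_mm = 4.4      # assumed focal length of the main camera
depth_mm = 3000.0   # subject three meters away

for name, baseline in [("dual-pixel (~1 mm, assumed)", 1.0),
                       ("dual-camera (13 mm)", 13.0)]:
    shift_um = disparity_mm(baseline, focal_mm, depth_mm) * 1000
    print(f"{name}: disparity of roughly {shift_um:.1f} microns")
```

With these made-up but plausible numbers, the dual-camera pair sees a shift more than ten times larger for the same far-away subject, which is exactly why it makes depth easier to estimate at a distance.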

Dual-pixel comparison (Image credit: Google)

Google also notes that it expanded its use of machine learning to "estimate depth from both dual-pixels and dual cameras." The two inputs go through separate encoders and then a shared decoder, which produces the final depth map. That depth map is used to work out what's the subject and what's the background, and after all of that's done, you get a crisp-looking portrait.
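For readers curious what a two-encoder, shared-decoder network even looks like, here's a minimal PyTorch sketch of the idea. Google hasn't published this exact architecture, so the layer sizes, channel counts, and the assumption that both camera views are already aligned to the same field of view are purely illustrative.

```python
# A toy sketch of the "two encoders, shared decoder" idea, not Google's model.
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.dual_pixel_enc = SmallEncoder(in_ch=2)   # two half-pixel views
        self.dual_camera_enc = SmallEncoder(in_ch=2)  # wide + telephoto views (assumed pre-aligned)
        self.decoder = nn.Sequential(                 # shared decoder -> depth map
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, dual_pixel, dual_camera):
        feats = torch.cat([self.dual_pixel_enc(dual_pixel),
                           self.dual_camera_enc(dual_camera)], dim=1)
        return self.decoder(feats)

# One grayscale 128x128 frame per view, batch size 1.
depth = DepthNet()(torch.rand(1, 2, 128, 128), torch.rand(1, 2, 128, 128))
print(depth.shape)  # torch.Size([1, 1, 128, 128]) -- a per-pixel depth estimate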

Furthermore, Google was able to improve the bokeh effect found in the Pixel 4's portrait pictures. Per the company:

To reproduce this bokeh effect, we replaced each pixel in the original image with a translucent disk whose size is based on depth. In the past, this blurring process was performed after tone mapping, the process by which raw sensor data is converted to an image viewable on a phone screen. Tone mapping compresses the dynamic range of the data, making shadows brighter relative to highlights.
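Here's a naive NumPy sketch of that disk "scatter" idea: each pixel's color is spread over a translucent disk whose radius grows with its distance from the focal plane, and the disks are accumulated and normalized. The radius scaling, the focus depth, and the brute-force per-pixel loop are illustrative assumptions; a real pipeline would be far more optimized.

```python
# Toy disk-scatter bokeh: radius of each pixel's disk grows with defocus.
import numpy as np

def render_bokeh(image, depth, focus_depth, max_radius=8):
    h, w, _ = image.shape
    accum = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Disk radius proportional to how far the pixel is from the focal plane.
            r = int(round(max_radius * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            disk = (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2
            alpha = 1.0 / disk.sum()            # translucency: spread the pixel's energy
            accum[y0:y1, x0:x1][disk] += alpha * image[y, x]
            weight[y0:y1, x0:x1, 0][disk] += alpha
    return accum / weight                       # normalize overlapping disks

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
depth_map = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)  # fake depth ramp
print(render_bokeh(img, depth_map, focus_depth=0.0).shape)  # (64, 64, 3)
```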

Google tone mapping example (Image credit: Google)

Blurring after tone mapping comes with the downside of losing information about how bright the scene really was, so to combat this, Google decided to "blur the merged raw image produced by HDR+ and then apply tone mapping. In addition to the brighter and more obvious bokeh disks, the background is saturated in the same way as the foreground."
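A toy one-dimensional example makes the ordering difference concrete. The simple gamma curve below stands in for the real tone-mapping pipeline, and the bright highlight value is made up, but it shows how blurring the linear, pre-tone-mapped data keeps a blurred highlight much brighter than blurring after tone mapping does.

```python
# Why "blur, then tone map" keeps bokeh highlights bright (toy 1D example).
import numpy as np

def tone_map(linear):
    # A plain gamma curve standing in for the real tone-mapping pipeline.
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

def box_blur(signal, k=5):
    # A 1D box blur standing in for the bokeh disk rendering.
    return np.convolve(signal, np.ones(k) / k, mode="same")

# A small, very bright highlight (linear value 4.0, above display white)
# on a dark background -- values assumed for illustration.
scene = np.zeros(15)
scene[7] = 4.0

blur_then_tone = tone_map(box_blur(scene))   # the ordering described above
tone_then_blur = box_blur(tone_map(scene))   # the older ordering

print("blur then tone map:", blur_then_tone.round(2))  # blurred highlight ~0.90
print("tone map then blur:", tone_then_blur.round(2))  # blurred highlight ~0.20
```

Because the highlight's true (above-white) brightness is still available when the blur happens, the resulting bokeh disk stays bright instead of being washed out by the dynamic-range compression.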

Chances are you won't think about any of this when using the Pixel 4, but that's what's so compelling about the phone's camera experience in the first place. It kicks out these incredible photos, and all you need to do is press the shutter button.

Joe Maring

Joe Maring was a Senior Editor for Android Central between 2017 and 2021. You can reach him on Twitter at @JoeMaring1.