Google’s Pixel 2 smartphone quickly dethroned the new iPhone 8 Plus once DxO Mark got their hands on it. And the reviewers so far seem to be giving it great praise, both as a camera and as a phone. But how is the camera inside the Pixel 2 actually put together? That’s what Nat of Nat and Friends wanted to find out. Being a Google employee, she has a little more access than most of us. So, in this video Nat takes us inside Google’s HQ to speak to the engineers and find out more about how the camera was developed and how it works.

It’s pretty interesting to see how they pack such a relatively decent camera into the tiny space available. A sensor, six lens elements and the motors that drive the optical image stabilisation are all packed into an area which Nat describes as being about the size of a blueberry.

I’d never really looked much into the lenses on smartphone cameras before, so finding that there are six separate elements packed into that tiny space was quite surprising for me. Google’s Computational Photography Team Lead Marc Levoy (yes, that Marc Levoy) explains.

These elements work in much the same way as the elements in a more standard-sized lens for a “real camera”. They help correct pincushion and barrel distortion to produce a final result that more closely matches what we saw in reality.
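That kind of distortion is usually described with a simple polynomial radial model, and whatever residual distortion the glass doesn’t remove can be corrected in software by inverting that model. Here’s a minimal sketch of the standard radial model in Python. To be clear, this isn’t Google’s actual calibration pipeline, and the coefficients are made up purely for illustration:

```python
import numpy as np

def apply_radial_distortion(points, k1, k2):
    """Forward radial distortion model: maps ideal (undistorted) normalised
    coordinates to where the lens actually projects them.
    Positive k1 tends toward pincushion, negative k1 toward barrel."""
    x, y = points[:, 0], points[:, 1]
    r2 = x ** 2 + y ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.stack([x * scale, y * scale], axis=1)

# Hypothetical coefficients for a mild barrel distortion.
ideal = np.array([[0.0, 0.0], [0.5, 0.5], [0.9, -0.9]])
distorted = apply_radial_distortion(ideal, k1=-0.08, k2=0.01)
print(distorted)  # points near the edge get pulled slightly toward the centre
```

Software correction is just the inverse mapping: work out where each ideal pixel lands under the model, then resample the captured image from those positions.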

Marc also talks about how cameras are moving away from a dedicated hardware process and toward a computational software process. And it’s quite amazing just how far these computational techniques have come, not just in mobile phone photography, but with digital cameras in general. While I still think some of them, like faking shallow depth of field and relighting your subjects, have a way to go, it’s only going to get better. But I still can’t see myself ever ditching larger cameras in favour of a phone. It does hold great promise, though, for those quick snaps with the camera we always have in our pocket.
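That faked shallow depth of field is, at its core, a depth-dependent blur: estimate a depth map, keep the subject sharp, and blur pixels more the further they sit from the focal plane. Here’s a toy sketch of the idea in Python, assuming you already have a depth map; the real portrait-mode pipelines use dual-pixel data and machine learning, and handle subject edges far more carefully than this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, max_sigma=8.0, bands=6):
    """Toy depth-of-field simulation: blur each pixel in proportion to how
    far its depth is from the chosen focal plane.

    image       : (H, W, 3) float array in [0, 1]
    depth       : (H, W) float array, same scale as focus_depth
    focus_depth : depth value to keep sharp
    """
    # Per-pixel blur strength, scaled into [0, max_sigma].
    defocus = np.abs(depth - focus_depth)
    defocus = defocus / (defocus.max() + 1e-6) * max_sigma

    out = np.zeros_like(image)
    edges = np.linspace(0.0, max_sigma, bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Select the pixels whose blur strength falls in this band.
        mask = (defocus >= lo) & (defocus < hi) if hi < max_sigma else (defocus >= lo)
        sigma = (lo + hi) / 2.0
        # Blur the whole frame at this band's strength, then composite.
        blurred = image if sigma < 0.5 else gaussian_filter(image, sigma=(sigma, sigma, 0))
        out[mask] = blurred[mask]
    return out
```

It’s crude (blurring in bands and compositing by mask is where the halo artefacts around hair and glasses tend to come from), but it shows why the quality of the result lives or dies on the quality of the depth estimate.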