Exmor, Isocell & Co: why smartphone image sensors are so important!

Image: the Light L16 camera. / © nextpit by Irina Efremova

While smartphone manufacturers eagerly ramp up the number of megapixels in their cameras, pushing into three-digit megapixel figures and touting fantasy zoom levels on their devices, a far more important camera feature remains pretty much an unknown quantity to many: the sensor size. Why is it so crucial for image quality? Find out below.

Plenty of 'eyes': the trend in current smartphones is towards more cameras and, largely unnoticed, towards ever-larger image sensors as well. / © NextPit

Sensor size in inches: From analog tubes to CMOS chips

Let's start with some background information: In most smartphone specification sheets, the sensor size of the camera tends to be written down as 1/xyz inch: for example 1/1.72 inch or 1/2 inch. Unfortunately, this size does not correspond to the actual size of your smartphone sensor.

Let's have a look at the IMX586 specification sheet: half an inch would correspond to 1.27 centimeters. The real size of the Sony IMX586 has little to do with that, though. If we multiply the 0.8-micron pixel size by the horizontal resolution of 8,000 pixels, we end up with only 6.4 millimeters, roughly half that figure. If we do the same for the vertical resolution (6,000 pixels, i.e. 4.8 millimeters) and then apply the Pythagorean theorem to obtain the diagonal, we end up with 8.0 millimeters. Even that is nowhere near half an inch.
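If you want to double-check these figures yourself, the arithmetic fits in a few lines. A minimal Python sketch (the IMX586 numbers are taken from the paragraph above; the resulting ratio of roughly 0.63 is the old tube convention explained below):

```python
import math

# Sony IMX586: 8,000 x 6,000 pixels at a 0.8-micron pixel pitch (datasheet values)
pixel_pitch_mm = 0.0008
h_pixels, v_pixels = 8000, 6000

width_mm = h_pixels * pixel_pitch_mm            # 6.4 mm
height_mm = v_pixels * pixel_pitch_mm           # 4.8 mm
diagonal_mm = math.hypot(width_mm, height_mm)   # Pythagoras: 8.0 mm

nominal_diagonal_mm = 0.5 * 25.4                # the marketed "1/2 inch" = 12.7 mm

print(f"actual size: {width_mm:.1f} x {height_mm:.1f} mm, diagonal {diagonal_mm:.1f} mm")
print(f"nominal '1/2-inch' diagonal: {nominal_diagonal_mm:.1f} mm")
print(f"actual / nominal: {diagonal_mm / nominal_diagonal_mm:.2f}")  # ~0.63
```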

And here's the crux of the matter: these designations were introduced about half a century ago, when video cameras still used vacuum tubes as image converters. The inch figure referred to the outer diameter of the tube, whose light-sensitive area was only roughly two-thirds of that - and marketing departments cling to this convention with the same tenacity as the penis-enlargement emails in my spam folder. That's why a CMOS chip with a diagonal of just 0.31 inches is called a 1/2-inch sensor these days.

Those were the days, my friend: the inch designations of today's image sensors date back to times like these. In the picture: Iconoscope inventor Vladimir K. Zworykin, ca. 1954, with a few image converter tubes. / © Public Domain

If you want to know the actual size of an image sensor, either take a look at the manufacturer's datasheet or at the detailed Wikipedia page on image sensor formats. Or you can follow the example above and multiply the pixel size by the horizontal and vertical resolution.

Effective sensor area: Larger is better

Why is the sensor size so important? Imagine the light coming through the lens onto the sensor as rain falling from the sky. You now have a tenth of a second to estimate how much water is currently falling.

  • With a shot glass, this will be relatively difficult: in a tenth of a second of heavy rain, perhaps a few drops will fall into the glass - with light rain, or a little bad luck, perhaps none at all. Either way, your estimate will be very inaccurate.
  • Now imagine you have a paddling pool for the same task. It easily catches a few hundred to a few thousand raindrops, and thanks to its larger surface area you can estimate the amount of rain much more precisely.

Just as with the shot glass, the paddling pool, and the rain, the same applies to sensor sizes and the amount of light, be it much or little. The darker it is, the fewer photons are captured by the light converters - and the less accurate the measurement becomes. These inaccuracies later show up as image noise, color artifacts, and so on.
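The rain analogy translates directly into photon shot noise, which you can simulate in a few lines. The sketch below is purely illustrative - the photon flux and pixel areas are made-up values - but it shows how the relative error shrinks as the collecting area grows:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical average photon arrival rate per square micron during one exposure
photons_per_um2 = 5.0

# Two "light buckets": a tiny photosite versus one with 100x the area
areas_um2 = {"small photosite (0.64 µm²)": 0.64,
             "large photosite (64 µm²)": 64.0}

for name, area in areas_um2.items():
    expected = photons_per_um2 * area
    # Photon arrival is random, so repeat the 'exposure' many times and measure the spread
    samples = rng.poisson(expected, size=100_000)
    relative_noise = samples.std() / samples.mean()   # roughly 1 / sqrt(expected)
    print(f"{name}: {expected:5.1f} photons expected, relative noise {relative_noise:.0%}")
```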

This graphic shows some of the sensor formats currently used in smartphones in a true-to-scale comparison. / © NextPit

Granted, the difference between smartphone image sensors is not quite as great as that between a shot glass and a paddling pool. But the aforementioned Sony IMX586 in the Samsung Galaxy S20 Ultra's telephoto camera is still about four times larger in area than the 1/4.4-inch sensor in the Xiaomi Mi Note 10's telephoto camera.
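For a rough comparison of two sensors specified only by their inch fractions, the area scales with the square of the diagonal. The helper below is a back-of-the-envelope estimate that assumes both manufacturers use the same tube convention and aspect ratio:

```python
def relative_sensor_area(inch_fraction_a: float, inch_fraction_b: float) -> float:
    """Approximate area ratio of two sensors given their nominal optical formats.

    Area scales with the square of the diagonal, so as long as both sensors
    follow the same inch convention and aspect ratio, the nominal fractions
    are enough for a rough ratio.
    """
    return (inch_fraction_a / inch_fraction_b) ** 2

# 1/2-inch telephoto sensor vs. a 1/4.4-inch one
print(f"{relative_sensor_area(1 / 2, 1 / 4.4):.1f}x the area")  # ~4.8x
```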

When it comes to zoom: whether the manufacturer prints "100x" or "10x" on the casing is about as meaningful as the top figure on a car's speedometer. A VW Golf with a speedometer that goes up to 360 km/h is still no faster than a Ferrari. But that is a topic for a different article.

What if a car's top speed were printed on the rear? If someone from the smartphone marketing department worked for Volkswagen, perhaps there wouldn't be any more spinouts ;-) / © Volkswagen, montage: AndroidPIT

Bayer Mask & Co.: now it gets colorful

Let us revisit the water analogy from above. If we now placed a grid of 4,000 by 3,000 buckets on a meadow, we could determine the amount of rainfall at a resolution of 12 megapixels and effectively take a kind of picture of the rain falling from the cloud overhead.

However, if an image sensor with 12 megapixels were to capture the amount of light with its 4,000 by 3,000 photon traps, the resulting photo would be black and white - because we have only measured the absolute amount of light. We can distinguish neither the color nor the size of the falling raindrops. So how does this black-and-white measurement turn into a color photo?

The trick is to place a color mask over the sensor, the so-called Bayer matrix. This ensures that either only red, blue, or green light reaches the pixels. With the classic Bayer matrix that has an RGGB layout, a 12-megapixel sensor then has six million green pixels and three million red and blue pixels each.
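You can verify those pixel counts with a tiny script: tile the 2x2 RGGB cell across a 4,000 by 3,000 sensor and count the colors. A quick sketch:

```python
import numpy as np

# Classic 2x2 RGGB Bayer cell tiled across a 12-megapixel sensor (4,000 x 3,000)
bayer_cell = np.array([["R", "G"],
                       ["G", "B"]])
bayer_mask = np.tile(bayer_cell, (3000 // 2, 4000 // 2))

for color in "RGB":
    print(color, np.count_nonzero(bayer_mask == color))
# R 3000000, G 6000000, B 3000000
```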

The human eye distinguishes green tones better than other colors. Interestingly, camera image sensors mirror this and have twice as many green pixels as blue or red ones. On the right you can see an RYYB matrix - here the green pixels are replaced by yellow ones. / © NextPit

In order to generate an image with twelve million full RGB pixels from this data, the image processing typically starts the demosaicing with the green pixels: using the surrounding red and blue pixels, the algorithm calculates complete RGB values for these pixels - a very simplified explanation, of course. In practice, demosaicing algorithms are far smarter, for example to avoid color fringes along sharp edges. The same principle is then applied to the red and blue pixels, and a color photo finally ends up in your smartphone's internal memory.
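To make the idea concrete, here is a deliberately naive bilinear demosaicing sketch: every photosite has measured only one color, and the two missing values are averaged from the neighborhood. Real pipelines are far more sophisticated (edge-aware, noise-aware), so treat this only as an illustration of the principle:

```python
import numpy as np

def naive_demosaic(raw: np.ndarray) -> np.ndarray:
    """Naive bilinear demosaicing of an RGGB Bayer raw frame (illustrative only).

    raw: 2D array of sensor values; the RGGB pattern is assumed to start at (0, 0).
    Returns an (H, W, 3) RGB image.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)

    # Which color did each photosite actually measure?
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        values = np.where(mask, raw, 0.0)
        counts = mask.astype(float)
        value_sum = np.zeros_like(values)
        count_sum = np.zeros_like(counts)
        # Sum measured values over each pixel's 3x3 neighborhood (edges wrap for brevity)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                value_sum += np.roll(values, (dy, dx), axis=(0, 1))
                count_sum += np.roll(counts, (dy, dx), axis=(0, 1))
        interpolated = value_sum / np.maximum(count_sum, 1.0)
        # Keep the measured value where this channel was actually sampled
        rgb[..., channel] = np.where(mask, raw, interpolated)

    return rgb

# Tiny demo: a 4x4 raw frame with made-up values
demo = np.arange(16, dtype=float).reshape(4, 4)
print(naive_demosaic(demo).shape)  # (4, 4, 3)
```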

Quad Bayer and Tetracell

Whether it's 48, 64, or 108 megapixels, most of the current extremely high-resolution sensors found in smartphones have one thing in common: while the sensor itself really does have that many water buckets, i.e. light sensors, the Bayer mask above it has a resolution that is lower by a factor of four. There are therefore four pixels underneath each color filter.
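In code, a Quad-Bayer (or Tetracell) filter layout is simply a regular Bayer pattern in which each color cell is enlarged to cover a 2x2 block of photosites - a quick sketch:

```python
import numpy as np

bayer_cell = np.array([["R", "G"],
                       ["G", "B"]])

# Quad Bayer / Tetracell: every color filter covers a 2x2 block of pixels,
# so the color mask has only a quarter of the sensor's pixel resolution.
quad_bayer_cell = np.repeat(np.repeat(bayer_cell, 2, axis=0), 2, axis=1)
print(quad_bayer_cell)
# [['R' 'R' 'G' 'G']
#  ['R' 'R' 'G' 'G']
#  ['G' 'G' 'B' 'B']
#  ['G' 'G' 'B' 'B']]
```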

Whether Quad Bayer or Samsung Tetracell: in more and more image sensors, four pixels share one color filter. By the way, pixel size alone says relatively little about image quality: a tiny sensor with a few large pixels does not take great photos, while a large sensor can afford many small pixels at a gigantic resolution. / © NextPit

This, of course, looks great on the spec sheet: a 48-megapixel sensor! 108 megapixels! Inflated figures to fire the imagination. And when it gets dark, the tiny pixels can be combined into large superpixels to deliver great night shots.
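The binning step itself is conceptually simple: the four photosites sitting under one color filter are summed (or averaged) into a single superpixel that collects four times the light. A minimal sketch, ignoring all the calibration and re-mosaicing that real pipelines perform:

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine 2x2 blocks of a Quad-Bayer raw frame into superpixels.

    A 48 MP (8,000 x 6,000) frame becomes a 12 MP (4,000 x 3,000) frame,
    and each output value gathers the light of four small photosites.
    """
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Tiny demo with random "photon counts"
raw = np.random.default_rng(0).poisson(3.0, size=(600, 800))
print(raw.shape, "->", bin_2x2(raw).shape)  # (600, 800) -> (300, 400)
```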

Paradoxically, however, many cheaper smartphones don't offer the option of taking 48-megapixel photos at all - or the full-resolution shots actually look worse in a direct comparison than those from the 12-megapixel mode. In all cases known to me, taking pictures at maximum resolution is so much slower that the slight increase in quality is simply not worth the wait - especially since 12, 16, or 27 megapixels are completely sufficient for everyday use and don't fill up your device's storage as quickly.

You can safely ignore most of the marketing noise that pushes megapixel counts into triple-digit territory. In practice, however, the high-resolution sensors do tend to be physically larger - and image quality benefits noticeably from that.

Huawei's SuperSpectrum sensor: the same in yellow

There are also some derivatives of the Bayer matrix, the most prominent example being Huawei's so-called RYYB matrix (see graphic above), in which the absorption spectrum of the green pixels is shifted to yellow. This has - at least on paper - the advantage that more light is absorbed and more photons arrive at the sensor in the dark.

Quantum efficiency diagrams show how sensitively different sensors react to light of different wavelengths. With the RYYB or RCCB sensor on the right, the area under the green or yellow absorption curve is much larger. But the yellow pixels also respond strongly to the red frequency range, which makes demosaicing more difficult. / © Society for Imaging Science and Technology

However, the wavelengths measured by the sensor are no longer as evenly distributed across the spectrum and as cleanly separated from each other as with an RGGB sensor. To keep color reproduction accurate, the demands on the algorithms that subsequently have to interpolate the RGB color values increase accordingly.
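A drastically simplified illustration of that trade-off (this is not Huawei's actual pipeline, just a toy model): a yellow filter passes roughly red plus green light, so one way to recover the green value is by subtraction - and the noise of two measurements then ends up in the reconstructed channel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up signal levels and noise for illustration only
true_red, true_green = 120.0, 200.0
sensor_noise = 5.0

measured_red = true_red + rng.normal(0.0, sensor_noise, 10_000)
measured_yellow = (true_red + true_green) + rng.normal(0.0, sensor_noise, 10_000)

# Toy reconstruction: yellow ~ red + green  =>  green ~ yellow - red
estimated_green = measured_yellow - measured_red

print(f"estimated green: {estimated_green.mean():.1f} ± {estimated_green.std():.1f}")
# The reconstructed channel carries the noise of *two* measurements (~7 instead of 5),
# part of the price RYYB pays for collecting more light overall.
```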

It is not possible to predict on paper which approach will produce better photos in the end - that is what practical and laboratory tests are for, and they eventually show which technology is 'better'.

Camera software: the algorithm is everything

Finally, I would like to say a few words about the algorithms involved. This is all the more important in the age of computational photography, where the very concept of photography becomes blurred: is an image composed of twelve individual exposures really still a photograph in the original sense?

What is certain is this: the influence of the image processing algorithms is far greater than that of a little more or less sensor area. Yes, a factor of two in area makes a big difference. But a good algorithm makes up a lot of ground, too. Sony, the world market leader in image sensors, is a good example: although most image sensors hail from Japan (at least technologically), Xperia smartphones have long lagged behind the competition in terms of image quality. Japan can do hardware, but when it comes to software, others are further ahead.

Two photos taken with the Samsung Galaxy S10: on the left, the Google Camera was used; on the right, Samsung's own camera app. Google's HDR mode is superior to Samsung's - no wonder many smartphone users prefer to install Google's camera on their phones. / © NextPit

At this point, I would like to add a note about ISO sensitivity, which also deserves its own article: please don't be impressed by ISO numbers. Image sensors have a single native ISO sensitivity in almost all cases*, and it is very rarely listed in datasheets. The ISO values that the photographer or the camera's automatic mode set during shooting are essentially a gain - a kind of brightness control. How "long" the scale of this brightness control is can be defined freely, so writing a value like ISO 409,600 into the datasheet makes about as much sense as that VW Golf speedometer... well, let's not go down that road again.

*There are actually a few dual-ISO sensors with two native sensitivities on the camera market, such as the Sony IMX689 in the Oppo Find X2 Pro - at least that's what Oppo claims. Otherwise, this feature is more likely to be found in professional cameras like the BMPCC 6K.
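To see why a huge ISO ceiling says little about the hardware, here is a simplified sketch of ISO as a gain factor; the native ISO of 100 and the 12-bit clipping value are made-up example numbers:

```python
import numpy as np

def apply_iso_gain(raw: np.ndarray, iso: int, native_iso: int = 100,
                   clip_value: int = 4095) -> np.ndarray:
    """Scale raw sensor values by the ISO setting (simplified model).

    Above the native sensitivity, a higher ISO is essentially a gain factor:
    it brightens signal and noise alike and clips the highlights earlier.
    """
    gain = iso / native_iso
    return np.clip(raw * gain, 0, clip_value).astype(int)

raw = np.array([10, 200, 3000])        # dark, mid, bright photosites
print(apply_iso_gain(raw, iso=100))    # [  10  200 3000]
print(apply_iso_gain(raw, iso=1600))   # [ 160 3200 4095]  <- highlight already clipped
```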

Autofocus: PDAF, 2x2 OCL & Co.

A small digression to finish with, one that is directly related to the image sensor: autofocus. In the past, smartphones determined the correct focus via contrast autofocus - a slow and computationally intensive form of edge detection that you probably recognize from annoying focus hunting.

Most image sensors now have phase-detection autofocus (PDAF) built in. Here, special autofocus pixels on the sensor are split into two halves; by comparing the phase of the light hitting the two halves, the camera can calculate the distance to the subject. The disadvantage of this technology is that the image sensor is "blind" at these points - and depending on the sensor, these blind focus pixels can make up as much as three percent of the area.
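The principle behind phase detection can be sketched in one dimension: when the image is out of focus, the "left" and "right" halves of the focus pixels see the scene shifted against each other, and the size and sign of that shift tell the camera how far and in which direction to move the lens. The code below is a toy illustration of that idea, not a real PDAF implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

# A one-dimensional "scene" as seen by the left- and right-looking half-pixels
scene = rng.normal(0.0, 1.0, 256)
true_shift = 6  # phase shift in pixels caused by defocus (unknown to the camera)

left_view = scene
right_view = np.roll(scene, true_shift)

# Estimate the shift: find the lag that maximizes the correlation of the two views
lags = np.arange(-16, 17)
correlation = [np.dot(left_view, np.roll(right_view, -lag)) for lag in lags]
estimated_shift = int(lags[np.argmax(correlation)])

print(f"estimated phase shift: {estimated_shift} px")  # 6 -> tells the lens how far to move
```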

Dude, this is awesome! The Oppo Find X2 Pro focuses incredibly fast in video mode thanks to its 2x2 OCL sensor. Contrast autofocus can only dream of such speed. / © NextPit

Just a reminder: less surface area means less light (or water) and therefore poorer image quality. Moreover, the algorithms have to retouch these gaps, much like your brain fills in its blind spot.

However, there is a more elegant approach that does not render any pixels unusable: the microlenses that already sit on the sensor are, in places, shared across several pixels. Sony, for example, calls this 2x2 OCL or 2x1 OCL, depending on whether a microlens spans four or two pixels.

Four pixels under one color filter and one microlens: Sony's 2x2 OCL technology turns all pixels into cross-type autofocus sensors. / © Sony

We'll soon be devoting a separate, more detailed article to autofocus. Until then: what do you look out for in a camera when you buy a new smartphone? And which topics around mobile photography would you like to read more about? I look forward to your comments!


Stefan Möllenhoff
Head of Content Production

I have been writing about technology since 2004 with a strong passion for smartphones, photography, and IoT, especially in the world of smart homes and AI ever since they debuted. I'm also an avid cook and bake pizza at least three times a week using my Ooni Koda 16. In order to compensate for all the consumed calories, I indulge in sporting activities on a daily basis while strapping on at least two fitness trackers. I am strongly convinced that you can DIY a lot of things if you put your mind to it - including a photovoltaic system and power station.

3 comments

  • Adrian Palmer Edwards, Jun 24, 2020:

    I have to disagree with you on Sony smartphone cameras. Xperia smartphones from the past could take photos as good as any other manufacturer's, any day. If anything, it's the person using the camera who is at fault - that's the way I'd put it. Other smartphone cameras were built for children to use; Sony's were built for adults.


    • Stefan Möllenhoff (Staff), Jun 26, 2020:

      I agree to some degree. Earlier on, Sony's auto modes lagged behind the rivals', but that was nothing that couldn't be fixed with a bit of exposure compensation or by using the manual modes.

      But in the age of computational photography, where software matters more and more, Sony is falling further and further behind. We're currently expecting the Xperia 1 II, and we'll be testing its camera in detail and comparing it to its rivals - stay tuned ;)


  • Rusty H., Jun 24, 2020:

    I just wish they would drop all the multiple-sensor garbage and just use ONE larger sensor, along the lines of APS-C or similar, which would be 6-10 times larger than most smartphone sensors. But the retractable lens and camera bump on the back would cause panic attacks among the "stylish & colorful" crowd.
    What a lot of people don't understand is that a camera sensor's job is to capture as much available light as possible. That's hard to do without software "tricks" when you have super tiny smartphone sensors. Adding megapixels is, in theory, good when you zoom and crop, since you lose less detail after zooming in. Stuffing bazillions of super tiny pixels that close together also creates another issue in less-than-perfect light: the signal gain is so high that crosstalk increases, requiring the camera software to knock down the "noise" (the sparkles in a low-light photo) and then try to bring back the detail in software, which can result in a "muddy"-looking photo.
