Photography: Human Vision vs. Camera

Image by Skeeze

Many beginner photographers and regular smartphone snappers have asked me the same question time and again: why doesn’t the camera produce the same picture that our eyes see? Shouldn’t the camera sensor be more powerful than human vision by now?

A Bit About Human Vision

To answer that question it is necessary to know how we actually see.

Our eyes work pretty much like a camera: the front part of the eye acts as a lens, while the rear acts as a sensor. When light reaches the front of the eye, it is first focused by the cornea (basically a lens built into the eye).

It is then focused further by the crystalline lens, located behind the pupil. The iris and the pupil, meanwhile, act as the aperture of the lens, closing down and opening up depending on the amount of light reaching the eye and the clarity needed (controlling the depth of field).
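To put the aperture analogy into photographic terms, here is a rough back-of-the-envelope sketch; the focal length and pupil diameters are approximate figures for illustration, not precise physiological constants:

```python
# f-number = focal length / aperture (pupil) diameter
eye_focal_length_mm = 22.0           # assumed effective focal length of the eye
for pupil_mm in (2.0, 4.0, 8.0):     # pupil diameter from bright light to near darkness
    print(f"pupil {pupil_mm} mm  ->  about f/{eye_focal_length_mm / pupil_mm:.1f}")
# Prints roughly f/11, f/5.5, and f/2.8, much like the range a lens iris covers.
```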

“Blausen 0388 EyeAnatomy 01” by BruceBlaus, Wikiversity Journal of Medicine. DOI: 10.15347/wjm/2014.010. ISSN 2001-8762.

Now, once the light passes through the cornea and the lens, it reaches the retina, the “sensor” part of the eye. The retina contains two types of photosensitive receptors: rods and cones.

Rods aren’t very perceptive to color; rather, they perceive light, which makes them much more sensitive to it. Peripheral vision relies more on rods for exactly that reason. Cones, on the other hand, are sensitive to color; in fact, they come in three types, sensitive to red, green, and blue.

That is why we humans are trichromats.

The issue with the eyes, however, is that they aren’t good with resolution. Only a small portion of the eye resolves fine detail, while the rest of it is on the weaker side.

Now, you might say that 20/20 vision can distinguish plenty of detail, which doesn’t exactly support the claim that the eye has low-resolution optics (I’ll get to that later).

Our eye has a special part of the retina called the fovea, which is basically the “high resolution” part of the eye. Even though it only sharpens a small portion of the image our eye generates, that is enough for our brain.

Our eye has a contrast ratio of around 100:1, or in photographic terms about 6.5 stops of dynamic range (for a single set of chemical and light conditions). However, the eye adapts to the situation, shifting that range up or down to match the brightness of the scene.
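To connect those two figures, here is a quick back-of-the-envelope conversion; this is just the arithmetic behind the quoted numbers, not a vision-science model:

```python
import math

# Each photographic stop is a doubling of light, so a contrast ratio C
# corresponds to log2(C) stops of dynamic range.
static_contrast = 100  # the roughly 100:1 ratio quoted above
print(f"{static_contrast}:1 is about {math.log2(static_contrast):.1f} stops")  # about 6.6, i.e. the ~6.5 quoted
```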

“Human eye with blood vessels” by ROTFLOLEB (own work). Licensed under CC BY-SA 3.0 via Wikimedia Commons.

Once our eyes capture an image, it is sent to the brain, where all the magic happens. Unlike the camera, which snaps a picture and that’s it, our eyes snap pictures constantly, which the brain then processes into what we actually see.

The interesting part is that the picture we actually see is far from the picture our eyes capture. Our eyes see upside down, they don’t resolve detail evenly across the visual field (each eye even has a blind spot where the optic nerve passes through), parts of the view are blocked by the nose and the skull (which the brain simply chooses to ignore), and so on.

But our magnificent brain combines eyesight with all the other stimuli and past experiences to generate an image that is only around 30% based on what the eyes actually register; the rest is filled in from the glances we make around the scene and from the brain’s own predictions.

Essentially, our eyesight is one active panorama that dynamically changes both in luminance and in content.

The dynamic range can extend to more than 40 stops, by changing the amount of light the eyes let in and by having the brain process that information over time.

To summarize, our eyesight is a complex combination of:

  • Light, captured by the eyes,
  • Past experiences,
  • And extremely complex processing that our brain does to combine every possible stimulus in order to generate the picture we actively see.

Additionally, our eyes (when fully functional) are capable of distinguishing around 10 million colors. That value varies with the light conditions, of course.
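As a rough sanity check (an illustration only, using the figure quoted above), that count lines up nicely with the bit depth of a typical image file:

```python
import math

distinguishable_colors = 10_000_000
print(f"about {math.log2(distinguishable_colors):.1f} bits")  # about 23.3 bits, close to standard 24-bit (8 bits per channel) color
```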

As we're focusing largely on “light”, we know as photographers how important this is. So how about enhancing your skills and knowledge further by looking at the course “Fantastic Fundamental Light Skills” by Photography Concentrate.

How the Camera Sees

Basically, the camera works much like the eye: the lens focuses light onto the sensor. The sensor captures it and transforms it into electrical signals, which the camera’s built-in processor (or processors) then interprets and translates into a usable file.

The camera sensor isn’t sensitive to color; in fact, it has little to no wavelength specificity. In order to produce color images, the sensor needs a color filter array on top of it, so that each photosite receives only one of the three primary colors.

This is the job of the Bayer RGB filter fitted on top of the sensor: each 2×2 block has one red, two green, and one blue cell, and each cell filters out the wavelengths of the two colors not assigned to that pixel.

For example, a green cell of the Bayer filter blocks the red and blue wavelengths, so the photosite beneath it captures only green light. On top of the Bayer filter there is also an infrared and ultraviolet cut filter to block those wavelengths of light as well.
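A minimal sketch of that idea, assuming an RGGB layout and written with NumPy (this illustrates the mosaic principle only, not any manufacturer’s actual pipeline):

```python
import numpy as np

def mosaic_rggb(rgb):
    """Simulate a Bayer filter: keep one color channel per photosite, discard the other two."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites (even rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites (odd rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return raw

# Example: a 4x4 patch of a pure green scene; only the green photosites record anything
scene = np.zeros((4, 4, 3), dtype=np.uint8)
scene[:, :, 1] = 200
print(mosaic_rggb(scene))
```

The camera’s processor then has to interpolate (“demosaic”) the two missing colors at every photosite to rebuild a full-color image.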

“A micrograph of the corner of the photosensor array of a ‘webcam’” by Natural Philo.

Most modern sensors used in professional-grade cameras have a dynamic range of 10 to almost 15 stops of light. They are also capable of distinguishing around 50% more shades of each color than the human eye can.

The lenses that focus light onto the camera sensor are complex optical assemblies, each providing different geometric and optical qualities and benefits.

They also share part of the blame for color rendition, since coatings and imperfections in the optics and materials can produce color casts and limit the color range under certain circumstances.

How A Digital Photograph Is Taken

  1. Once the lens and the sensor are paired as a functioning camera, light that hits the lens passes through it; focus is controlled by the focusing mechanism on the lens itself, and the amount of light that passes through is controlled by the lens iris (much like the human eye).
  2. When the shutter is released, the curtain exposes the sensor to the light focused by the lens, usually for a fraction of a second (unlike the human eye, which is exposed to light most of the time).
  3. Once the sensor is exposed, the light generates an electrical charge in the semiconductor array of pixels, which is then read out and processed by the processor (or processors, depending on the camera) to form the final picture, whether RAW or JPEG (see the simplified sketch after this list).
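Here is a simplified, illustrative model of that last step; the quantum efficiency, full-well capacity, and ADC bit depth are assumed round numbers for the sketch, not the specifications of any particular sensor:

```python
def photosite_to_digital(photons, quantum_efficiency=0.5,
                         full_well=30_000, adc_bits=14):
    """Toy model of one photosite: photons -> electrons -> raw digital number."""
    electrons = photons * quantum_efficiency                    # photoelectric conversion
    electrons = min(electrons, full_well)                       # the well saturates: clipped highlights
    return round(electrons / full_well * (2 ** adc_bits - 1))   # analog-to-digital conversion

print(photosite_to_digital(100_000))  # very bright photosite -> 16383 (clipped at 14 bits)
print(photosite_to_digital(2_000))    # shadow photosite -> a low raw value (about 546)
```

In a real camera this happens for millions of photosites at once, and the resulting raw numbers are what the processor demosaics, white-balances, and compresses into the RAW or JPEG file.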

Differences and Similarities

The camera lacks the processing power, and more importantly the way of processing, that our brain has, which is why it isn’t up to par with human vision. That might change in the future, because cameras are getting more and more advanced: with the Lytro camera, you can already change the perspective and focus after you take the picture, which honestly is groundbreaking.

Yes, the technology still needs improvement, but it exists and is available to everybody. The Lytro comes quite close to the way our brain keeps reworking images and perspectives after the eye has seen them.

However, I’m not sure any camera will ever be able to process a picture based on previous experience and previously taken pictures in order to improve the visual quality and dynamic range of a single shot.

So far, the camera remains a tool for capturing images, which later need the human touch to be finished to our liking.

To finish the images to your liking in post-processing, you’ll need to rely on your eyesight and previous experience (both with editing software and with how the picture should look).

Our eyes can only gather a limited amount of light from a given scene at any moment, while the camera (capable of much longer exposures) can collect far more light and pack it into one picture, rendering the scene much clearer and brighter than our eyes ever could.

This is especially noticeable when photographing the stars: you would never see all those colors and contrasts in the Milky Way with the naked eye the way the camera can capture them.
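The difference is easy to put into numbers. In the sketch below, the 1/10-second “effective exposure” of the eye is an assumption made purely for illustration, not a precise physiological value:

```python
import math

camera_exposure_s = 30.0   # a typical Milky Way exposure
eye_integration_s = 0.1    # assumed effective "shutter speed" of the eye (illustrative)
print(f"about {math.log2(camera_exposure_s / eye_integration_s):.1f} stops more light")  # about 8.2 stops
```

Eight-plus stops of extra light is why the camera records colors and contrast in the night sky that our eyes simply never receive.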


Summary

It isn’t really possible to pit human vision against the camera, since it would be an unfair battle, but you can quantify the differences and similarities, give or take.

Understanding yourself better, and understanding your tool (the camera) better, will show in the final product: it will improve the result and make your workflow more efficient, so you’ll need less time to achieve better results.

Understanding the human eye will also help you in dark situations: if you know how much time your eyes need to adjust to the light conditions (and how to preserve that adjustment), you’ll be able to work in those conditions more efficiently.


