Will Cameras Learn to “Compose”?


I can almost feel the rising tide of indignation heading towards me even as I type this headline. Learn to compose? Don’t be ridiculous, some might comment. How can you simulate the creativity of the mind, others might chip in. And for the most part, at this moment in time, you are correct.

Technology, however, waits for no one. Looking back at just the last 10-15 years, we have gone from fiddly phones with buttons to devices that can do incredible things at a touch or even by voice. Those devices also now have incredible photographic and video capabilities, often underpinned by computational photography.

Computational photography is already here. By Daniel Romero on Unsplash

Computational photography is where a processor merges multiple, almost identical images to create an otherwise hard-to-achieve effect. This might be simulating bokeh, creating a high-key portrait under normal lighting or producing low-noise, handheld night shots. It’s not perfect, but it is already very impressive. So could a camera compose?
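As a rough illustration of that merging step, the sketch below averages a burst of nearly identical frames to cut noise. It assumes the frames are already aligned; real pipelines add alignment, ghost removal and tone mapping on top of this.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of nearly identical frames to reduce noise.

    frames: list of HxWx3 uint8 arrays shot in quick succession.
    Random sensor noise averages out, while the static scene is preserved.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)
    return np.clip(merged, 0, 255).astype(np.uint8)
```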

Will Our Cameras Be Able To Compose?

I think the short answer is yes. I also think that we will start to see this happening within the next five years.

Again, I think it will be smartphone technology that drives this forward. As photographers, we are introduced to the rules of composition at a very early stage. The word rules is key here because computers love rules. They work on the basis of 0s and 1s, yes and no, right and wrong.

The basic rules of composition are pretty easy to understand: rule of thirds, leading lines, symmetry and geometry. These are things that would not take a huge amount of computational power to work out. What would take the processing power is isolating subjects and points of interest within the frame and then applying a compositional rule to them.
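To show how cheap that first part could be, here is a minimal sketch of a rule-of-thirds check, assuming the hard problem of detecting the subject has already been solved. The function name and scoring scheme are my own illustrative assumptions, not anything a manufacturer has published.

```python
def rule_of_thirds_score(subject_xy, frame_wh):
    """Score how close a detected subject sits to the nearest rule-of-thirds
    'power point' (an intersection of the third lines). Returns 0.0-1.0,
    where 1.0 means the subject is exactly on a power point."""
    x, y = subject_xy
    w, h = frame_wh
    power_points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    nearest = min(((x - px) ** 2 + (y - py) ** 2) ** 0.5 for px, py in power_points)
    worst_case = (w ** 2 + h ** 2) ** 0.5 / 3  # a frame corner is the farthest point
    return 1.0 - nearest / worst_case

# Example: a subject centred at (640, 360) in a 1920x1080 frame sits exactly
# on the upper-left power point and scores 1.0.
print(rule_of_thirds_score((640, 360), (1920, 1080)))
```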

The processing power of our cameras is constantly increasing. By Alexander Sinn

Think about what we already have today. Beyond the computational photography power of our smartphones, we have cameras that can track fast-moving subjects, and face-detection systems that can pinpoint and track the eyes of multiple subjects simultaneously. The processing power in our cameras is incredible. But is it enough to compute a composition? At the moment, the answer is no.

How Can Computational Composition Be Implemented?

As mentioned earlier, the driving factor will be smartphones. The simple reason for this is internet connectivity. With the advent of 5G data services, we are now on the cusp of having significantly faster connections on our mobile devices than on our home internet.

Computational composition is going to take a lot of processing power and a lot of AI and machine learning. However, many of the building blocks we need for it are already in place.

Fast mobile data is a key enabler. By Echo Grid on Unsplash

GPS locates the position of the phone down to an accuracy of about 3 meters. Our phones have sensors that detect the orientation and direction in which the phone is pointing. Services such as Google Maps and Bing Maps have huge databases not only of the base maps but of the objects sitting on those maps. The algorithms that Google and Microsoft use are staggering and, combined with visual references such as Street View, can give an incredible virtual picture of what’s in front of the smartphone’s camera.

Google and Bing maps have incredible amounts of data held within them. By Suzy Brooks on Unsplash

Add in live weather reports and an ephemeris, and your phone can also get a good impression of how the light falls within that scene. The phone would then feed all that data up to a server, along with a live feed from its camera, and the server would start to work its AI magic.

Of course, the compositional information would be advisory. It would be sent back to your phone in the form of arrows and indicators on screen that pick out the key elements of the shot. It would then advise you to move the phone, or yourself, to get the right composition. Once in the optimum position, the phone would lock the indicators and then receive data about the very best exposure.
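Pulling those pieces together, the round trip might look something like the sketch below. Every field name here is a hypothetical placeholder invented for illustration, not any real vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ScenePayload:
    """Hypothetical data the phone could upload alongside its live camera feed."""
    latitude: float          # GPS position, accurate to roughly 3 meters
    longitude: float
    heading_deg: float       # compass direction the camera is pointing
    pitch_deg: float         # tilt from the orientation sensors
    timestamp_utc: str       # lets the server look up weather and sun position
    preview_jpeg: bytes      # downscaled frame for the server-side AI to analyse

@dataclass
class CompositionAdvice:
    """Hypothetical advisory response, drawn as arrows and markers on screen."""
    key_elements: list = field(default_factory=list)  # bounding boxes of picked-out subjects
    move_hint: str = ""       # e.g. "step left" or "raise the camera"
    locked: bool = False      # True once the phone is in the optimum position
    exposure: dict = field(default_factory=dict)      # e.g. {"iso": 100, "shutter": "1/250"}
```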

When Will This Happen?

I think that we might start seeing computational composition within the next five years. It will probably come from one of the big tech companies such as Apple, Samsung or Google. These tech giants are likely to be the only companies with the server and data resources needed for such a venture. 

Machine learning is already in our homes. By Sebastian Scholz on Unsplash

It would also be a USP, or unique selling point, for a smartphone. Many new, high-end smartphones are sold on their photographic and video capabilities. This is particularly true in the battle between Apple and Samsung, both of whom have the resources to develop such a technology.

Initially, computational composition will be a very simplistic affair: very basic advice based around low-level compositional rules. As mentioned at the top, some of the rules are pretty easy to compute, and these will be the ones that are integrated first.

Over time, artificial intelligence and machine learning will allow cameras to determine more and more complex compositions. The technology will slowly trickle down from smartphones to our main cameras and become commonplace.

Should I Be Worried About Computational Composition?

In short, no. A photograph is and always will be a unique representation of your mind’s eye in digital form. Whilst a computer might be able to advise on compositions, the very best images are often the ones that bend or break the rules. We will always have the option to switch off or ignore what the camera is telling us.

Another way to look at computational composition is as another auto mode. Useful to have when you are in a hurry but there’s nothing like switching to manual to get the creative juices flowing.

Could your next camera advise you on composition? Maybe. By TheRegisti on Unsplash

I think it’s just a matter of time before we start to see computational composition. However, like the advent of digital or smartphones, it will not signal the death of photography. Instead, it will be another way to enhance it and to introduce newcomers to it.

In my opinion, in the early days of computational composition, we will see lots of identikit images, because the servers will have dished up similar compositions to anyone who has shot at that location. Which is, if you think about it, a little like looking through an Instagram feed today.

As the technology progresses, we may find it becomes a very powerful tool that advises us on composition. The key word being advise.

The march of photographic technology is relentless and as photographers we have to either embrace it or get left behind. 

About Author

Jason has more than 35 years of experience as a professional photographer, videographer and stock shooter. You can get to know him better here.
