The Graphic City

EyeEm isn't just a community of more than 20 million photographers; it's also a technology company that uses machine learning and neural networks to scan and "learn" about the aesthetics, mood and subjects in snaps. It asks the age-old question: what makes a good photo?

With artificial intelligence, algorithms can search and tag tens of thousands of photographs in seconds – far quicker than a human eye could ever manage. As a result, what would once have been an overwhelming number of images can now be instantly "judged" for worth.
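
To make the scale concrete, here is a minimal sketch of what batch auto-tagging can look like in code, assuming a model that outputs per-keyword confidence scores. The keyword list and the simulated scores below are stand-ins, not EyeEm's actual pipeline.

```python
# A minimal sketch of batch auto-tagging, assuming a model that returns
# per-keyword confidence scores. The keywords and scores are hypothetical
# stand-ins, not EyeEm's actual system.
import numpy as np

KEYWORDS = ["dark", "light", "smile", "party", "faces", "hands", "misty", "fun"]

def tag_batch(scores: np.ndarray, threshold: float = 0.5) -> list[list[str]]:
    """Convert an (n_images, n_keywords) score matrix into keyword lists."""
    return [
        [kw for kw, s in zip(KEYWORDS, row) if s >= threshold]
        for row in scores
    ]

# Simulated confidences for 10,000 images x 8 keywords; a real system
# would get these from a trained classifier in one batched forward pass.
rng = np.random.default_rng(0)
scores = rng.random((10_000, len(KEYWORDS)))
tags = tag_batch(scores)
print(tags[0])
```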

The team predates the rise of Instagram, the Facebook-owned picture-hosting app that continues to dominate social networking.

Emerging in 2009 from an underground art project, the Berlin-based company officially launched two years later, with a focus on mixing cutting-edge curation with mobile photography exhibitions.

The result was "Missions": creative projects that showcase work submitted to the platform by both budding and professional photographers. The company has partnered with major brands, including Spotify, Apple and now IBTimes UK, to launch themed competitions.

For the first time, EyeEm is opening up its technology to its partners – launching a new dashboard that is intended to streamline the experience. EyeEm Vision – its image recognition technology – is integrated and "trained" to find snaps with a particular aesthetic.

Using as few as 20 images, the tool searches the EyeEm database and surfaces pictures with similar attributes – think keywords like dark, light, smile, party, faces, hands, misty or fun. But despite the pre-defined aesthetics, the results do not all come out looking the same.
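
As an illustration of how a handful of seed images can drive retrieval, here is a minimal sketch assuming each photo has already been mapped to an embedding vector by a visual model. The random vectors and the simple averaging are placeholder assumptions, not EyeEm Vision's real method.

```python
# A minimal sketch of seed-based retrieval: average ~20 seed embeddings
# into a "taste" vector, then rank the library by cosine similarity.
# The embeddings here are random placeholders for real model outputs.
import numpy as np

rng = np.random.default_rng(1)
library = rng.standard_normal((50_000, 512))  # hypothetical database embeddings
seeds = rng.standard_normal((20, 512))        # ~20 curator-chosen examples

def normalise(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

taste = normalise(seeds).mean(axis=0)            # aggregate seed aesthetic
scores = normalise(library) @ normalise(taste)   # cosine similarity per image
top = np.argsort(scores)[::-1][:100]             # 100 closest matches
print(top[:5])
```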

"We are not talking about similarity," Lorenz Aschoff, co-founder of EyeEm, told IBTimes UK Tuesday (30 January). "It's on a deeper level. It's a deep learning model that really understands the visual aesthetic. It has been trained on a larger model of images to understand the concept of beauty. That is more abstract."

The dashboard shows the number of photographers taking part in the competition, how many photos have been submitted and various social metrics including shares and likes.

The IBTimes UK mission, the first to launch using the new hub, is themed as "The Graphic City" and asks photographers to visualise the lines, curves, corners and architecture of urban landscapes. Here are some examples of the type of thing we are looking for:

[Gallery: nine example images]

When announced, the winners will be featured in a gallery on the IBTimes UK website.

"The challenge we wanted to solve was how to we carry over the quality of photography in the age of cameras everywhere, of digital capture," Aschoff said, describing the motivations behind EyeEm – a team working in a world where everyone is now a photographer.

"If you hand a camera to everyone, and you connect all cameras in the world with each other, there's going to be a massive flood of content that will be incredibly hard to handle. We asked the questions: How is this going to impact qualitative photography?

"How do we know what's good? We do not only try to understand what's in the photo but we try to understand how good the photo is.

"There is nobody, to our knowledge, that does this. This has a lot to do where we come from. We think this is essential to empower search because search is not only about relevance but about quality. We do not want all the best car photos in the world. We want the best. So we are really obsessed with ranking images. We are unique."

There is growing competition in the image recognition market, with big-name players including Google and IBM Watson. EyeEm says it differentiates on privacy, as it does not transfer snaps over the cloud. Aschoff told IBTimes UK that his business is able to hold its own.

Benchmark studies have shown that EyeEm's image recognition technology comes out on top when compared with Google, IBM, Clarifai, Amazon and Microsoft. The analysis showed that, on average, 80% of the keywords it generated were accurate, compared with 78% for Google's Cloud Vision.
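
The metric being reported here is essentially keyword precision: the share of machine-generated keywords that reviewers judge correct. A minimal sketch, with made-up tags chosen only to reproduce the 80% figure:

```python
# A minimal sketch of the keyword-accuracy metric implied above. The
# example tags are invented; only the arithmetic mirrors the benchmark.
def keyword_precision(generated: set[str], accepted: set[str]) -> float:
    """Fraction of generated keywords judged accurate."""
    return len(generated & accepted) / len(generated) if generated else 0.0

generated = {"city", "night", "neon", "rain", "crowd"}
accepted = {"city", "night", "neon", "rain"}  # 4 of 5 confirmed by reviewers
print(f"{keyword_precision(generated, accepted):.0%}")  # 80%
```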

The reason for the accuracy is simple: the technology draws on the collective images of its community.

"This is a piece of technology that [is] extremely valuable to image curators because it takes the hassle out of reading through a load of images," Aschoff said. "We believe it has the potential to change how humans search. We are crazy excited to see where this is going."