MIT researchers have developed an AI algorithm that can predict recipes and ingredients based on a photo of food. Ana Arevalo/AFP/Getty Images

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed a deep-learning algorithm that can predict a list of ingredients and suggest recipes just by analysing pictures of food. The team trained the artificial intelligence system, dubbed "Pic2Recipe", on a massive database of over one million recipes gathered from popular cooking websites such as Food.com and AllRecipes.
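The news release does not describe the architecture, but a common way to connect photos and recipes at this scale is a joint embedding: an image and its recipe text are mapped into a shared vector space and trained to land close together. Below is a minimal sketch of that idea, assuming a PyTorch setup in which image and recipe features have already been extracted; all module names, dimensions and the margin are illustrative, not the team's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Project image features and recipe-text features into one shared space."""
    def __init__(self, image_dim=2048, recipe_dim=1024, shared_dim=512):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.recipe_proj = nn.Linear(recipe_dim, shared_dim)

    def forward(self, image_feats, recipe_feats):
        # L2-normalise so that a dot product equals cosine similarity
        img = F.normalize(self.image_proj(image_feats), dim=1)
        rec = F.normalize(self.recipe_proj(recipe_feats), dim=1)
        return img, rec

def contrastive_loss(img, rec, margin=0.3):
    """Pull matching image/recipe pairs together; push mismatched pairs apart."""
    sims = img @ rec.t()                               # batch x batch similarities
    pos = sims.diag()                                  # matched pairs on the diagonal
    masked = sims - torch.eye(sims.size(0), device=sims.device) * 1e9
    neg = masked.max(dim=1).values                     # hardest mismatch per image
    return F.relu(margin + neg - pos).mean()
```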

Using that data, the researchers taught the algorithm to find patterns, make connections between food images and their recipes, and predict the corresponding ingredients and possible similar recipes. Researchers said the AI was correct 65% of the time. When it is unable to recognise the dish in a photo, it returns a "no matches" response.
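In other words, the system behaves like a search engine over recipes rather than a recipe generator. A hypothetical sketch of that retrieval step, including the "no matches" fallback, is below; the function names and threshold are assumptions for illustration.

```python
import numpy as np

def find_recipe(image_embedding, recipe_embeddings, recipe_names, threshold=0.7):
    """Return the recipe closest to the photo, or None when nothing is close enough."""
    # Normalise everything so dot products are cosine similarities.
    img = image_embedding / np.linalg.norm(image_embedding)
    recs = recipe_embeddings / np.linalg.norm(recipe_embeddings, axis=1, keepdims=True)
    sims = recs @ img                      # similarity of the photo to every recipe
    best = int(np.argmax(sims))
    if sims[best] < threshold:             # nothing similar enough: "no matches"
        return None
    return recipe_names[best], float(sims[best])
```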

"In computer vision, food is mostly neglected because we don't have the large-scale datasets needed to make predictions," Yusuf Aytar, an MIT postdoc who co-wrote the paper, said in a news release on Thursday. "But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences."

While the system was able to predict the recipes and ingredients in desserts like cookies or muffins, researchers said it had a tough time figuring out the ingredients in more complex dishes such as smoothies or sushi rolls. The system also struggled with foods that have many varied and nuanced recipes, such as lasagna.

To address that issue, the team said they had to make sure the system would not "penalize" near-identical recipes when attempting to differentiate between them.
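The release does not say how this was implemented. One simple way to avoid penalising near-duplicates, for example when scoring the system's output, is to count a retrieved recipe as correct if its ingredient list largely overlaps the true one. The sketch below is a guess at that idea, with the Jaccard measure and threshold as assumptions rather than the team's method.

```python
def is_acceptable_match(predicted_ingredients, true_ingredients, min_overlap=0.8):
    """Treat near-duplicate recipes (e.g. two lasagna variants) as correct,
    so close variants are not penalised as outright errors."""
    predicted, actual = set(predicted_ingredients), set(true_ingredients)
    overlap = len(predicted & actual) / len(predicted | actual)  # Jaccard similarity
    return overlap >= min_overlap
```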

Researchers said they hope to improve the system so it can analyse food in greater detail, understand how a dish is prepared - stewing or dicing, for example - and distinguish between different types of ingredients. They are also looking into developing the system further into a "dinner aide" that can tell a user what to cook based on their dietary preferences or what is in the fridge at the time.

"This could potentially help people figure out what's in their food when they don't have explicit nutritional information," lead author and CSAIL graduate student Nick Hynes said. "For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal."

The team has also provided an online demo for people to test out the system by uploading their own food photos.

"You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what's needed to cook it at home later," Christoph Trattner, an assistant professor at MODUL University Vienna, who was not involved in the project. "The team's approach works at a similar level to human judgement, which is remarkable."

Researchers will present the paper later this month at the Computer Vision and Pattern Recognition conference in Honolulu.