"Google engineers have trained a machine learning algorithm to write picture captions using the same techniques it developed for language translation." (via MIT Tech Review)

Whoa. They say a picture is worth a thousand words... and it looks like Google has found an easier way to come up with them.

This new piece of technology could change how search engines work in the future. If Google can correctly caption the images online, imagine how much more accurate image search results would be.

So, how does it work? Google's translation technology represents words as vectors, built by observing how often words appear next to each other. The assumption is that, regardless of language, related words have the same relationships to one another in that vector space. The captioning algorithm takes an image, generates a vector describing its content, and plugs that vector into the existing translation algorithm, which "translates" it into a caption. The captions are then evaluated by humans to check their accuracy. The project is called Neural Image Caption.

I admit, vector space mathematics goes over my head, but this is pretty impressive. Has anyone else heard about this?
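For anyone curious what "words as vectors" means in practice, here's a tiny toy sketch. This is not Google's system; the vectors below are hand-picked, three-dimensional numbers purely for illustration (real systems learn hundreds of dimensions from co-occurrence statistics). It just shows the core intuition: words that appear in similar contexts end up with similar vectors, which we can measure with cosine similarity.

```python
import math

# Hypothetical 3-dimensional word vectors (hand-picked for illustration;
# real word vectors are learned from text, not written by hand).
vectors = {
    "dog": [0.9, 0.1, 0.0],
    "cat": [0.8, 0.2, 0.1],
    "car": [0.0, 0.9, 0.8],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "dog" and "cat" show up in similar contexts, so their vectors point
# in similar directions; "car" does not.
print(cosine(vectors["dog"], vectors["cat"]))  # high similarity
print(cosine(vectors["dog"], vectors["car"]))  # low similarity
```

The captioning trick, as I understand it, is that an image gets turned into a vector in the same kind of space, and the translation machinery then maps that vector to a sentence, just as it would map a French sentence to an English one.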