How Google Arts and Culture's Face Match A.I. Actually Works

Artificial intelligence is everywhere these days, and humans can’t seem to decide whether to be creeped out by how far it might go or to laugh at how far it still has to go.

That’s part of what made the viral Google Arts and Culture feature that lets users compare their faces with works of art so much fun. It played up our natural vanity, for sure, but it also gave us a chance to test what AI is capable of.

Some users took the opportunity to punk the AI, to hilarious effect.

If you want to know how our future robot overlords may work, here’s an intro to the facial recognition that powers the face match technology.

How Google Arts & Culture Facial Recognition Works

Facial recognition is a biometric identification system that examines the physical features of a face to try to distinguish one person from another.

Facial recognition detects a face in an image, creates a “faceprint” of unique characteristics, and then verifies and/or identifies it against existing information in a database.

It sounds like a simple process, but in fact there is a lot of work the machine has to do first: after detecting a face in an image, it may need to reorient or resize the face for a better reading. We’ve all seen how a selfie taken from too close looks distorted compared with reality.
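If you’re curious what that detect-and-clean-up step looks like in code, here is a minimal sketch using the open-source OpenCV library’s stock face detector. It illustrates the general technique, not Google’s actual pipeline, and the file name and the 200 x 200 output size are placeholders.

```python
# Minimal sketch: detect faces in a photo, then crop and resize each one
# to a standard size so later steps see consistent input.
# Uses OpenCV's bundled Haar-cascade detector; not Google's pipeline.
import cv2

def detect_and_normalize_faces(image_path, output_size=(200, 200)):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # OpenCV ships a pre-trained frontal-face detector with the library.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    # Returns one (x, y, width, height) box per detected face.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    faces = []
    for (x, y, w, h) in boxes:
        crop = image[y:y + h, x:x + w]
        # Resize every face crop to the same dimensions.
        faces.append(cv2.resize(crop, output_size))
    return faces

# "selfie.jpg" is a placeholder file name.
print(f"Found {len(detect_and_normalize_faces('selfie.jpg'))} face(s)")
```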


Then, once the AI has resized and reoriented the face, it creates a “faceprint,” a set of characteristics that uniquely identify one person’s face. These could include the distances between facial features, such as the eyes, or the shape and size of the nose.
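To make “faceprint” concrete, here is a hedged sketch with the open-source face_recognition library, which reduces a face to a 128-number encoding. That is one common way to build a faceprint, not necessarily Google’s, and the file name is a placeholder.

```python
# Sketch: compute a "faceprint" (a 128-number encoding) for the first face
# found in a photo, using the open-source face_recognition library.
import face_recognition

image = face_recognition.load_image_file("selfie.jpg")  # placeholder file

# Each encoding summarizes the geometry of a face as 128 numbers,
# rather than storing the raw pixels.
encodings = face_recognition.face_encodings(image)

if encodings:
    faceprint = encodings[0]
    print(faceprint.shape)  # (128,)
else:
    print("No face found in the image.")
```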

Faceprints can then be compared against a single photo or against a database of many images.

In the case of Google’s museum selfie feature, each uploaded selfie is compared against a database of more than 70,000 works of art.
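In spirit, the matching step can be as simple as measuring the distance between your faceprint and every artwork’s faceprint, then keeping the smallest. Here is an illustrative sketch with the same face_recognition library; the artwork file names are invented, and nothing here reflects how Google actually stores or searches its collection.

```python
# Sketch: match one selfie faceprint against a small "gallery" of
# pre-computed artwork faceprints. File names are placeholders.
import face_recognition
import numpy as np

artwork_files = ["portrait_01.jpg", "portrait_02.jpg", "portrait_03.jpg"]

# Assume each artwork image contains exactly one detectable face.
artwork_prints = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in artwork_files
]

selfie = face_recognition.load_image_file("selfie.jpg")
selfie_print = face_recognition.face_encodings(selfie)[0]

# face_distance returns one number per artwork; smaller means more similar.
distances = face_recognition.face_distance(artwork_prints, selfie_print)
best = int(np.argmin(distances))
print(f"Closest match: {artwork_files[best]} (distance {distances[best]:.2f})")
```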

Adele on Google Face Match

Google’s FaceNet

Google is one of a number of companies interested in facial recognition (others include, surprise surprise, Facebook and Microsoft). In 2015, it published research showing just how far its facial recognition software had come: FaceNet made correct identifications 99.63 percent of the time. In comparison, Facebook’s DeepFace algorithm claims 97.25 percent accuracy, while the FBI’s technology claims 85 percent, as the Washington Post reports.

Another sign of how far Google’s facial recognition has come? Google Photos can now identify pets.

Not that the system is perfect. Google’s facial recognition software has been shown to get race horrifically wrong, like when Google Photos mistakenly tagged two African Americans as gorillas in 2015.

It’s all about training, as Google explains:

While a computer won’t react like you do when you see that photo, a computer can be trained to recognize certain patterns of color and shapes. For example, a computer might be trained to recognize the common patterns of shapes and colors that make up a digital image of a face.
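To see what that training looks like in the simplest possible terms, here is a toy sketch with scikit-learn that fits a model to made-up, labeled image patches. Real systems learn from millions of real photos with deep neural networks, but the idea of showing the computer labeled examples until it picks up the pattern is the same.

```python
# Toy sketch: "train" a classifier to tell face patches from non-face
# patches. The data here is random and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are 100 flattened 8x8 grayscale patches:
# the first 50 labeled "face" (1), the rest "not a face" (0).
patches = rng.random((100, 64))
labels = np.array([1] * 50 + [0] * 50)

model = LogisticRegression(max_iter=1000)
model.fit(patches, labels)

# Once trained, the model can guess whether a new patch looks like a face.
print(model.predict(patches[:1]))
```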

According to the Washington Post, users currently have to opt into facial recognition on Google Photos (but not on Facebook).

But by playing around with this selfie feature, that’s essentially what we’re doing: actively consenting to making Google’s AI smarter.

Written by Eileen Guo
