MIT researchers have developed a computational model that aims to capture the human elements of facial recognition and implement them in artificial intelligence (AI) and machine learning systems.
On Thursday, MIT revealed the research, conducted at the Center for Brains, Minds, and Machines (CBMM), headquartered at the Massachusetts Institute of Technology.
The researchers designed a machine learning system that implements the new model and trained it to recognize particular sets of faces from sample imagery, resulting in a far more accurate and 'human' way of recognizing faces.
An interesting aspect of the model is the "spontaneous" addition of a facial recognition processing step that handles images of rotated faces (such as 45 degrees to the left or right), a step that was not included in the initial model.
The team says this property appeared through the training process but was not part of the original brief. However, in this way, the model “duplicates an experimentally observed feature of the primate face-processing mechanism.”
As such, the research team believes the artificial model and the brain are ‘thinking’ along the same lines.
"This is not a proof that we understand what's going on," says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the CBMM. "Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it's strong evidence that we are on the right track."
The researchers' new paper, published in the journal Computational Biology, includes a mathematical proof of the computer model.
The system is considered a neural network, as it attempts to mimic the structure of the human brain: simple units, or 'nodes', are arranged into layers and connected to one another, acting as information processors.
Data fed into the network is classified according to different facial recognition criteria, and particular nodes react to different stimuli. By identifying which nodes react most strongly to which categories, the researchers were able to recognize faces more accurately.
As nodes ‘fired’ in different ways, the “spontaneous” step also became apparent.
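The layered, node-based design described above can be sketched in a few lines of code. The snippet below is a minimal illustration only, not the researchers' actual model: the weights are random rather than learned, and the inputs are hypothetical 16-value 'images'. It shows the core idea that each layer of nodes reacts to the activity of the layer before it, and that different inputs drive different output nodes most strongly.

```python
import numpy as np

# Illustrative sketch of a tiny feed-forward neural network.
# Assumptions: 16-pixel inputs, 8 hidden nodes, 4 output nodes,
# fixed random weights (a real system would learn these from imagery).
rng = np.random.default_rng(0)

def relu(x):
    # A node 'fires' only when its weighted input is positive.
    return np.maximum(0.0, x)

def forward(image, w1, w2):
    """Propagate an input through two layers of simple units ('nodes')."""
    hidden = relu(w1 @ image)   # first layer reacts to the raw input
    output = relu(w2 @ hidden)  # second layer reacts to first-layer activity
    return output

w1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=(4, 8))
face_a = rng.normal(size=16)
face_b = rng.normal(size=16)

# Comparing which output node fires hardest is the basis for
# classifying an input into one category or another.
print(np.argmax(forward(face_a, w1, w2)))
print(np.argmax(forward(face_b, w1, w2)))
```

In a trained network, the weights would be adjusted so that each output node responds selectively to one face category; here the random weights merely demonstrate the mechanics of layered node activity.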
While this research has a long way to go, it represents a step forward in deepening our understanding of the mind, as well as how we could potentially improve machine learning algorithms and artificial intelligence in facial recognition technologies.
“I think it’s a significant step forward,” says Christof Koch, president and chief scientific officer at the Allen Institute for Brain Science. “In this day and age, when everything is dominated by either big data or huge computer simulations, this shows you how a principled understanding of learning can explain some puzzling findings.
“They’re only looking at the feed-forward pathway — in other words, the first 80, 100 milliseconds. The monkey opens its eyes, and within 80 to 100 milliseconds, it can recognize a face and push a button signaling that,” Koch added. “The question is what goes on in those 80 to 100 milliseconds, and the model that they have seems to explain that quite well.”
Earlier this week, researchers from Augusta University proposed an algorithm that could get to the root of what we call human intelligence.