This is a beautiful result. IIUC, these neuroscientists use the terminology "face axis" for what in machine learning would be called variation along an eigenface or feature vector.
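A "face axis" in ML terms is just a direction in a linear face space. A minimal sketch of the correspondence, using synthetic data in place of real face images (all dimensions and numbers below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "faces": 500 samples in a 1024-dim pixel/feature space.
faces = rng.normal(size=(500, 1024))

# PCA via SVD of the mean-centered data; the principal directions are
# the "eigenfaces", i.e., candidate face axes.
mean_face = faces.mean(axis=0)
_, _, vt = np.linalg.svd(faces - mean_face, full_matrices=False)

face = faces[0]
axis = vt[0]                              # one face axis (a unit vector)
coordinate = (face - mean_face) @ axis    # projection onto that axis
shifted = face + 2.0 * axis               # the same face, moved along the axis
```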
Scientific American: ...using a combination of brain imaging and single-neuron recording in macaques, biologist Doris Tsao and her colleagues at Caltech have finally cracked the neural code for face recognition. The researchers found the firing rate of each face cell corresponds to separate facial features along an axis. Like a set of dials, the cells are fine-tuned to bits of information, which they can then channel together in different combinations to create an image of every possible face. “This was mind-blowing,” Tsao says. “The values of each dial are so predictable that we can re-create the face that a monkey sees, by simply tracking the electrical activity of its face cells.”

I never believed the "Jennifer Aniston neuron" results, which seemed implausible from a neural architecture perspective. I thought the encoding had to be far more complex and modular. Apparently that's the case. The single-neuron claim has been widely propagated (for over a decade!) but now seems to be yet another result that fails to replicate after invading the meme space of credulous minds.
... neuroscientist Rodrigo Quian Quiroga found that pictures of actress Jennifer Aniston elicited a response in a single neuron. And pictures of Halle Berry, members of The Beatles or characters from The Simpsons activated separate neurons. The prevailing theory among researchers was that each neuron in the face patches was sensitive to a few particular people, says Quiroga, who is now at the University of Leicester in the U.K. and not involved with the work. But Tsao’s recent study suggests scientists may have been mistaken. “She has shown that neurons in face patches don’t encode particular people at all, they just encode certain features,” he says. “That completely changes our understanding of how we recognize faces.”

Modular feature sensitivity -- just like in neural net face recognition:
... To decipher how individual cells helped recognize faces, Tsao and her postdoc Steven Le Chang drew dots around a set of faces and calculated variations across 50 different characteristics. They then used this information to create 2,000 different images of faces that varied in shape and appearance, including roundness of the face, distance between the eyes, skin tone and texture. Next the researchers showed these images to monkeys while recording the electrical activity from individual neurons in three separate face patches.
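Schematically this is an active-appearance-model construction: PCA on landmark coordinates gives "shape" dimensions, PCA on shape-normalized images gives "appearance" dimensions, and random draws in the resulting 50-dim space generate new stimuli. A rough sketch with synthetic stand-ins for the landmark and texture data (the 25 shape + 25 appearance split follows the paper; the array sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_axes(data, k):
    """Top-k principal axes of mean-centered data (rows = samples)."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

# Stand-ins: landmark coordinates ("dots around a set of faces") and
# shape-normalized image pixels for a database of 200 faces.
landmarks = rng.normal(size=(200, 2 * 58))   # e.g. 58 (x, y) points per face
textures  = rng.normal(size=(200, 4096))     # shape-free appearance data

shape_axes      = pca_axes(landmarks, 25)    # 25 "shape" dimensions
appearance_axes = pca_axes(textures, 25)     # 25 "appearance" dimensions

# A face is now a point in a 50-dim space; random draws give new stimuli.
stimuli = rng.normal(size=(2000, 50))        # the 2,000 parameterized faces
```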
All that mattered for each neuron was a single-feature axis. Even when viewing different faces, a neuron that was sensitive to hairline width, for example, would respond to variations in that feature. But if the faces had the same hairline and different-size noses, the hairline neuron would stay silent, Chang says. The findings explained a long-disputed issue in the previously held theory of why individual neurons seemed to recognize completely different people.
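In code this tuning rule is just a dot product: the rate depends only on the projection of the face vector onto the cell's preferred axis, and is flat in every orthogonal direction. A toy version (baseline, gain, and the axis itself are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

preferred_axis = rng.normal(size=50)
preferred_axis /= np.linalg.norm(preferred_axis)

def firing_rate(face, axis, baseline=10.0, gain=5.0):
    """Axis model: the rate depends only on the projection onto one axis."""
    return baseline + gain * (face @ axis)

face = rng.normal(size=50)

# Move the face ORTHOGONALLY to the preferred axis (change the "nose"
# while holding the "hairline" fixed): the response does not change.
delta = rng.normal(size=50)
delta -= (delta @ preferred_axis) * preferred_axis   # remove axis component

print(firing_rate(face, preferred_axis))          # some rate...
print(firing_rate(face + delta, preferred_axis))  # ...unchanged: flat tuning
```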
Moreover, the neurons in different face patches processed complementary information. Cells in one face patch—the anterior medial patch—processed information about the appearance of faces such as distances between facial features like the eyes or hairline. Cells in other patches—the middle lateral and middle fundus areas—handled information about shapes such as the contours of the eyes or lips. Like workers in a factory, the various face patches did distinct jobs, cooperating, communicating and building on one another to provide a complete picture of facial identity.
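In the linear-space picture, "complementary information" means the patches' preferred axes concentrate in different subspaces of the face space. A toy illustration, assuming an idealized clean split (real cells are surely not this tidy):

```python
import numpy as np

rng = np.random.default_rng(3)

# 50-dim face vector: say the first 25 dims are shape, the last 25 appearance.
shape, appearance = slice(0, 25), slice(25, 50)

def patch_axes(n_cells, subspace):
    """Cells whose preferred axes live entirely in one subspace."""
    axes = np.zeros((n_cells, 50))
    axes[:, subspace] = rng.normal(size=(n_cells, 25))
    return axes

ml_mf_axes = patch_axes(100, shape)        # middle lateral / middle fundus
am_axes    = patch_axes(100, appearance)   # anterior medial

# Either population alone is blind to half the dimensions; together they
# span the full face space.
face = rng.normal(size=50)
print(np.allclose(ml_mf_axes @ face, ml_mf_axes[:, shape] @ face[shape]))  # True
```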
Once Chang and Tsao knew how the division of labor occurred among the “factory workers,” they could predict the neurons’ responses to a completely new face. The two developed a model for which feature axes were encoded by various neurons. Then they showed monkeys a new photo of a human face. Using their model of how various neurons would respond, the researchers were able to re-create the face that a monkey was viewing. “The re-creations were stunningly accurate,” Tsao says. In fact, they were nearly indistinguishable from the actual photos shown to the monkeys.
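If each cell's rate is roughly a baseline plus a dot product with its axis, reconstruction is ordinary least squares on the population response. A schematic of the decode step, assuming the axes are already known (in the experiment they are fit from the recordings; cell count, gains, and noise here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_dims = 200, 50

axes = rng.normal(size=(n_cells, n_dims))     # each cell's preferred axis
baseline = rng.uniform(5, 15, size=n_cells)

def population_response(face):
    """Axis model plus noise: one firing rate per cell."""
    return baseline + axes @ face + rng.normal(scale=0.5, size=n_cells)

true_face = rng.normal(size=n_dims)           # the face the monkey "sees"
rates = population_response(true_face)

# Linear decode: solve rates - baseline ≈ axes @ face by least squares.
decoded, *_ = np.linalg.lstsq(axes, rates - baseline, rcond=None)

print(np.corrcoef(true_face, decoded)[0, 1])  # close to 1.0
```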
This is the original paper in Cell:

The Code for Facial Identity in the Primate Brain
Le Chang, Doris Y. Tsao
Highlights
• Facial images can be linearly reconstructed using responses of ∼200 face cells
• Face cells display flat tuning along dimensions orthogonal to the axis being coded
• The axis model is more efficient, robust, and flexible than the exemplar model
• Face patches ML/MF and AM carry complementary information about faces
Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.

200 cells is interesting because (IIRC) standard deep learning face recognition packages right now use a 128-dimensional feature space. These packages perform roughly as well as humans (or perhaps a bit better?).
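The "drastically different appearance, identical responses" test in the summary is easy to mimic in the toy model: match two random faces' projections onto a cell's axis and the axis cell cannot tell them apart, while an exemplar cell (tuned to distance from a stored face) generally can. A sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

axis = rng.normal(size=50)
axis /= np.linalg.norm(axis)
stored_face = rng.normal(size=50)

def axis_cell(face):
    return 10.0 + 5.0 * (face @ axis)        # ramps along one direction

def exemplar_cell(face):
    return 20.0 * np.exp(-np.sum((face - stored_face) ** 2) / 50.0)

# Engineer two very different faces with the SAME projection onto the axis:
f1, f2 = rng.normal(size=50), rng.normal(size=50)
f2 += ((f1 - f2) @ axis) * axis              # match f2's projection to f1's

print(axis_cell(f1), axis_cell(f2))          # identical responses
print(exemplar_cell(f1), exemplar_cell(f2))  # generally different
```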