A KAIST team shows that primitive visual selectivity of faces can arise spontaneously in completely untrained deep neural networks
Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity of facial images can arise even in completely untrained deep neural networks.
This finding provides insight into the mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, and advances our understanding of the origin of early brain functions that precede sensory experience.
The study published in Nature Communications on December 16 demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that they show the characteristics of those observed in biological brains.
The ability to identify and recognize faces is crucial for social behavior, and it is thought to originate from neuronal tuning at the single- or multi-neuron level. Neurons that respond selectively to faces are observed in young animals of various species, which has sparked intense debate over whether face-selective neurons arise innately in the brain or require visual experience.
Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face selectivity can emerge spontaneously from random feedforward wirings in untrained deep neural networks. The team showed that this innate face selectivity is comparable to that of face-selective neurons observed in the brain, and that this spontaneous neuronal tuning enables the network to perform face-detection tasks.
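As a rough illustration of the idea, face selectivity in an untrained network is typically quantified by comparing each unit's mean response to face images versus non-face images with a face-selectivity index (FSI). The sketch below is not the authors' model: it uses a toy two-layer random feedforward network and placeholder random arrays standing in for real face and object image sets, purely to show how the index would be computed over randomly initialized weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# An untrained "layer": random Gaussian projection followed by ReLU.
def random_layer(n_in, n_out, rng):
    W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
    return lambda x: np.maximum(0.0, x @ W.T)

# Placeholder stimuli (stand-ins for real images): each row is a
# flattened 16x16 "image". Real experiments would use face/object photos.
n_pix = 256
faces   = rng.normal(0.5, 0.2, size=(50, n_pix))  # hypothetical face set
objects = rng.normal(0.0, 0.2, size=(50, n_pix))  # hypothetical object set

# Completely untrained network: weights are random, never updated.
layer1 = random_layer(n_pix, 128, rng)
layer2 = random_layer(128, 64, rng)

r_face = layer2(layer1(faces)).mean(axis=0)    # mean response per unit
r_obj  = layer2(layer1(objects)).mean(axis=0)

# Face-selectivity index per unit: (R_face - R_obj) / (R_face + R_obj).
# The epsilon guards against units that never fire.
fsi = (r_face - r_obj) / (r_face + r_obj + 1e-12)
face_selective = int(np.sum(fsi > 0.33))
print(f"{face_selective} of {fsi.size} untrained units have FSI > 0.33")
```

With real image sets, units exceeding a fixed FSI threshold would be labeled face-selective; the study's point is that such units appear even though no weight was ever trained.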
These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.
Professor Paik said, “Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning.”
He continued, “Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence.” This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.
Original Article: Face Detection in Untrained Deep Neural Networks?