A KAIST team shows that primitive visual selectivity of faces can arise spontaneously in completely untrained deep neural networks
Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity of facial images can arise even in completely untrained deep neural networks.
This finding provides new insight into the mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, and bears on our understanding of how early brain functions arise before sensory experience.
The study, published in Nature Communications on December 16, demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that these activities share the characteristics of face-selective neurons observed in biological brains.
The ability to identify and recognize faces is a crucial function for social behavior, and it is thought to originate from neuronal tuning at the single- or multi-neuron level. Neurons that selectively respond to faces are observed in young animals of various species, which has fueled intense debate over whether face-selective neurons arise innately in the brain or require visual experience.
Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wirings in untrained deep neural networks. The team showed that this innate face-selectivity is comparable in character to that of face-selective neurons observed in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face-detection tasks.
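The core measurement can be caricatured in a few lines of NumPy: probe a randomly initialized, never-trained filter bank with two stimulus classes and compare each unit's mean response with a selectivity index. Everything below is an illustrative assumption — the toy network size, the random placeholder stimuli, and the index formula (a common contrast measure) stand in for the study's actual deep ventral-stream model and real face/non-face image sets.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(image, kernels):
    """Valid 2-D convolution of one image with a bank of kernels, then ReLU."""
    n, k, _ = kernels.shape
    H, W = image.shape
    out = np.empty((n, H - k + 1, W - k + 1))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = image[i:i + k, j:j + k]
            out[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def unit_responses(images, kernels):
    """Mean spatial activation of each unit (kernel) for each image."""
    return np.array([conv_relu(img, kernels).mean(axis=(1, 2)) for img in images])

# "Untrained network": weights drawn at random and never updated.
kernels = rng.normal(size=(8, 5, 5))

# Placeholder stimuli; in the study these would be face vs. non-face images.
faces   = rng.normal(size=(20, 12, 12))
objects = rng.normal(size=(20, 12, 12))

r_face = unit_responses(faces, kernels)    # shape (20, 8)
r_obj  = unit_responses(objects, kernels)  # shape (20, 8)

# Selectivity index per unit: +1 = responds only to "faces", -1 = only to "objects".
mf, mo = r_face.mean(axis=0), r_obj.mean(axis=0)
fsi = (mf - mo) / (mf + mo + 1e-9)
```

With random placeholder stimuli the index hovers near zero, as expected; the study's point is that with real face images, a fraction of units in a randomly wired deep network already score as face-selective by measures like this.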
These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.
Professor Paik said, “Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning.”
He continued, “Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence.” This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.
Original Article: Face Detection in Untrained Deep Neural Networks