Understanding Human Imagination Through a Diffusion Model
Abstract
This paper proposes an explanation for a facet of visual processing inspired by the biological brain's mechanisms for gathering information. The primary focus is on how humans observe elements in their environment and reconstruct visual information within the brain. Drawing on prior studies, the authors' own research, and biological evidence, the paper posits that the human brain captures high-level feature information from objects rather than replicating exact visual details, as digital systems do. The brain can then either reconstruct the original object from its features or generate an entirely new object by combining features from different objects, a process referred to as "imagination." Central to this process is the "Imagination Core," a dedicated unit housing a modified diffusion model that uses an object's high-level features either to recreate the original object or to compose new objects from existing features. In the experimental simulation, an Artificial Neural Network (ANN) comprising a Convolutional Neural Network (CNN) for high-level feature extraction (the Information Processing Network) and a diffusion network for generating new information (the Imagination Core) produced novel images based solely on high-level features extracted from previously learned images. This outcome supports the theory that humans learn and store visual information as high-level features, which both enable accurate recall of events and drive imaginative processes.
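The pipeline described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's actual architecture: the CNN feature extractor is replaced by simple average pooling, and the trained diffusion network of the "Imagination Core" is replaced by a toy DDPM-style reverse loop whose "denoiser" merely nudges a noise sample toward the upsampled features. All function names, the pooling factor, and the noise schedule are illustrative assumptions.

```python
import numpy as np

def extract_features(image, pool=4):
    # Hypothetical stand-in for the paper's CNN feature extractor:
    # average-pool the image into a coarse "high-level feature" grid.
    h, w = image.shape
    return image.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

def imagine(features, pool=4, steps=50, seed=0):
    # Toy stand-in for the "Imagination Core": a DDPM-style reverse
    # process that starts from pure noise and, step by step, removes
    # noise while being conditioned on the stored high-level features.
    # A real model would use a trained neural denoiser here.
    rng = np.random.default_rng(seed)
    target = np.kron(features, np.ones((pool, pool)))  # upsample features
    x = rng.standard_normal(target.shape)              # start from noise
    betas = np.linspace(1e-4, 0.02, steps)
    for beta in betas[::-1]:
        # pull the sample toward the feature-conditioned estimate and
        # re-inject a small amount of noise (the stochastic reverse step)
        x = x + 50 * beta * (target - x) + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

image = np.arange(64.0).reshape(8, 8) / 64.0   # a toy "observed" image
feats = extract_features(image)                # 2x2 high-level summary
recon = imagine(feats)                         # "imagined" 8x8 reconstruction
```

Combining feature grids from two different images before calling `imagine` would mimic the abstract's notion of generating a new object from features of existing ones.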