This article describes sparse coding in associative memory models and the use of such networks for large-scale modeling of the brain. The mechanism of association in human memory has long been discussed by psychologists and philosophers, and it has been a subject of research in neuroscience and artificial neural networks for over half a century; that investigation continues today.

The workings of associative memory in the human brain are evident when we try to remember a particular piece of information and fail to recall it instantly. In this state, the mind presumably holds features of the current scenario and related information (though not yet the sought information), which initiates a chain of associations from one item to the next, likely by similarity of meaning or logic, until we finally recognize the information that fits the context that prompted the search. In relating contextual information this way, the brain behaves like an information system that associates a particular output with a given input.

Technically, three mechanisms of the associative process are distinguished. The first is heteroassociation, which retrieves a pattern in one category that has been associated with a pattern in another category. The second is autoassociation, which searches within a pattern in order to recall the complete or exact stored pattern. The third, a special case of autoassociation, is bidirectional association, which goes back and forth between two patterns.

The question neuroscientists then ask is how this associative process in the mind is related to, or implemented in, neurophysiological mechanisms in the brain. The rule of synaptic plasticity formulated by Donald Hebb states that the synapse between two cells is strengthened by the repeated activation of one cell by the other. This Hebbian theory pioneered the development of neural associative memory (NAM) models.

A neural associative memory stores the weights of the synaptic connections between neurons in a memory storage matrix. Storing and retrieving a given set of patterns is expressed by an additive or a binary learning rule together with an appropriate threshold. During learning, the change in the memory matrix at each step depends on the product of presynaptic and postsynaptic activity at the synapse at that time; because this synaptic change is computed locally in time and space, the additive rule is called a local learning rule. The resulting model has binary weights in {0, 1} that hold the stored patterns according to Hebb's learning rule, and the sparsity of the stored patterns makes storage and retrieval more efficient.

Efficiency becomes critical when the NAM model is used in technical applications. For retrieval, a vector-matrix multiplication is performed over the entries in {0, 1}, followed by counting and thresholding; if the input patterns are sparse, retrieval of the stored patterns becomes correspondingly faster, since only the rows belonging to active input units contribute to the sums. A sketch of this storage and retrieval procedure is given below. Such sparse binary models have been used in spoken word recognition, face recognition, and handwritten letter recognition, tasks with a large number of classes. Video signals are a.
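As a concrete illustration of the local learning rule described above, the following Python sketch stores sparse binary pattern pairs in a binary memory matrix of the kind the article describes (a Willshaw-style model). The function name, pattern sizes, and the use of NumPy are illustrative assumptions, not details from the article.

```python
import numpy as np

def store(x_patterns, y_patterns):
    """Binary Hebbian storage (a minimal Willshaw-style sketch).

    Each learning step adds the outer product of presynaptic activity x
    and postsynaptic activity y -- the local learning rule -- and the
    accumulated entries are clipped to {0, 1} to give binary weights.
    """
    m = x_patterns.shape[1]                    # presynaptic (input) neurons
    n = y_patterns.shape[1]                    # postsynaptic (output) neurons
    M = np.zeros((m, n), dtype=np.uint8)       # memory storage matrix
    for x, y in zip(x_patterns, y_patterns):
        M |= np.outer(x, y).astype(np.uint8)   # clipped (binary) Hebbian update
    return M
```

In this picture, heteroassociation stores two different pattern sets, `store(xs, ys)`, while autoassociation stores a pattern set against itself, `store(xs, xs)`, so that a partial cue can recall the complete pattern.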
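Continuing the sketch, retrieval reduces to the counting and thresholding step mentioned above: with a sparse binary cue, the vector-matrix multiplication only needs the rows of the matrix whose input units are active. Setting the threshold to the number of active cue units is a common choice in such models; it is an assumption here, not the article's exact procedure.

```python
def retrieve(M, x):
    """Recall the output associated with a sparse binary cue x.

    The dendritic potentials form a vector-matrix product, but with a
    binary sparse cue this is just counting: sum only the rows of M
    whose input unit is active, then threshold at the number of active
    cue units to obtain a binary output pattern.
    """
    active = np.flatnonzero(x)           # indices of active input units
    sums = M[active].sum(axis=0)         # counting instead of a full multiply
    return (sums >= len(active)).astype(np.uint8)

# Usage: store 20 random sparse pattern pairs and recall the first one.
rng = np.random.default_rng(0)
xs = (rng.random((20, 1000)) < 0.01).astype(np.uint8)  # ~1% active units
ys = (rng.random((20, 1000)) < 0.01).astype(np.uint8)
M = store(xs, ys)
out = retrieve(M, xs[0])
assert np.all(out >= ys[0])  # every stored bit of the target is recovered
```

Because only the few active rows are summed, the cost of recall grows with the sparsity of the cue rather than with the full size of the matrix, which is why sparse patterns make retrieval faster.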