Recurrent Neural Networks

Neural network software uses layers of computational cells; during training, error feedback propagated back through the layers adjusts the weight values and "trains" the network to produce a desired result. Once the network has been trained, it can be supplied input data that is not predetermined, and it will output a result that offers some form of analysis or identification according to how it was trained. This makes neural networks good at "classification" tasks.
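The classification role described above can be sketched with a tiny feed-forward pass. This is a minimal illustration, assuming NumPy; the weights are fixed at random here (training would adjust them), and all names and dimensions are hypothetical:

```python
import numpy as np

# A minimal feed-forward "classifier" sketch: one hidden layer.
# The weights below are random placeholders; training would tune them.
rng = np.random.default_rng(42)
W1 = rng.standard_normal((4, 2))   # input (2 features) -> hidden (4 units)
W2 = rng.standard_normal((3, 4))   # hidden -> output (3 classes)

def classify(x):
    h = np.tanh(W1 @ x)            # hidden-layer activations
    logits = W2 @ h                # one score per class
    return int(np.argmax(logits))  # predicted class index

print(classify(np.array([0.5, -1.0])))  # a class index in {0, 1, 2}
```

Note that information flows strictly forward here, input to output; nothing is carried over from one input to the next, which is exactly the limitation recurrent networks address.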

But intelligence is not just abstracting a pattern and then classifying it. Once that is done, we take strings of classified patterns and work out how they combine to create meaning, which makes this especially apt for language processing. Recurrent Neural Networks (RNNs) have connections that allow them to accumulate learning across a series of data inputs.

A Deep Dive into Recurrent Neural Nets – [nikhilbuduma.com]

What is a Recurrent Neural Net?

One quite promising solution to tackling the problem of learning sequences of information is the recurrent neural network (RNN). RNNs are built on the same computational unit as the feed forward neural net, but differ in the architecture of how these neurons are connected to one another. Feed forward neural networks were organized in layers, where information flowed unidirectionally from input units to output units. There were no undirected cycles in the connectivity patterns. Although neurons in the brain do contain undirected cycles as well as connections within layers, we chose to impose these restrictions to simplify the training process at the expense of computational versatility. Thus, to create more powerful computational systems, we allow RNNs to break these artificially imposed rules. RNNs do not have to be organized in layers and directed cycles are allowed. In fact, neurons are actually allowed to be connected to themselves.
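The recurrent connection described in the excerpt can be sketched in a few lines. This is a minimal, untrained illustration assuming NumPy; the weight names (`W_xh`, `W_hh`) and dimensions are hypothetical, and the key point is that each new hidden state depends on the previous one:

```python
import numpy as np

# Hypothetical dimensions for illustration.
input_size, hidden_size = 3, 4
rng = np.random.default_rng(0)

# W_xh maps the current input to the hidden state; W_hh is the
# recurrent (hidden-to-hidden) connection that carries state
# forward across timesteps -- the "directed cycle" in the text.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One recurrent update: output depends on input AND previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Process a 5-step sequence, carrying the hidden state forward.
h = np.zeros(hidden_size)
sequence = rng.standard_normal((5, input_size))
for x_t in sequence:
    h = rnn_step(x_t, h)
print(h.shape)  # (4,)
```

Because `h` is fed back into each step, the final state is a summary of the whole sequence, which is what lets an RNN accumulate learning across a series of inputs.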

A Beginner’s Guide to Recurrent Networks and LSTMs – [deeplearning4j.org]

The purpose of this post is to give students of neural networks an intuition about the functioning of recurrent networks and the purpose and structure of a prominent variation, LSTMs.

Recurrent nets are a type of artificial neural network designed to recognize patterns in sequences of data, such as text, genomes, handwriting, the spoken word, or numerical time series data emanating from sensors, stock markets and government agencies.

They are arguably the most powerful type of neural network, applicable even to images, which can be decomposed into a series of patches and treated as a sequence.
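The image-as-sequence idea above can be illustrated with a toy decomposition. This is a sketch assuming NumPy; the 8x8 "image" and 2x2 patch size are arbitrary choices for illustration:

```python
import numpy as np

# Decompose a toy 8x8 "image" into a sequence of 2x2 patches,
# which a recurrent net could then consume one patch at a time.
image = np.arange(64).reshape(8, 8)
patch = 2
patches = [image[r:r + patch, c:c + patch].ravel()
           for r in range(0, 8, patch)
           for c in range(0, 8, patch)]
sequence = np.stack(patches)
print(sequence.shape)  # (16, 4): 16 patches, 4 pixels each
```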

Since recurrent networks possess a certain type of memory, and memory is also part of the human condition, we’ll make repeated analogies to memory in the brain.
