Neural Networks

Posted by: Jaspreet

Last Updated on: 18 Oct, 2022



What are Neural Networks?

Imagine you have a magical drawing board that can learn and recognize different shapes. You want to teach it to recognize a circle, a square, and a triangle. How can you do that?

  1. First, you draw lots of different circles, squares, and triangles on the board. Each shape has its own special colors and patterns. These are the examples that will help the board learn.

  2. Now, the magical drawing board has a "brain" called a neural network. It's like the board's super-smart friend who can understand and remember things.

  3. The brain of the drawing board is made up of lots of little parts called "neurons." These neurons work together to figure out what shape you drew.

  4. When you draw a shape on the board, the neurons in the brain wake up and start looking at it. They look at all the colors, patterns, and lines of the shape. Each neuron thinks about a specific part of the shape, like the color of the lines or how many sides it has.

  5. The neurons talk to each other and share their thoughts. They pass messages to help each other understand the shape. They say things like, "Hey, I think this is a circle because I see a round line and it has no corners!" or "Wait, I see a triangle because I see three straight lines and it has pointy corners!"

The more examples of shapes you show the drawing board, the smarter its brain becomes. It learns from all the different shapes and figures out what makes each one special.

Once the drawing board's brain learns enough, it can recognize shapes all by itself. You can draw a new shape, and the board will say, "Oh, I know this! It's a triangle!" or "That's a square, I'm sure!"

So, just like how you learn by looking at and understanding different things, a neural network learns by looking at examples and figuring out patterns. It's like a magical friend that helps the drawing board understand and recognize shapes.

Neural Network: Glossary

Biomimicry:

  1. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts published a paper presenting a design for an Artificial Neuron inspired by the workings of the biological neuron
  2. The paper was titled "A Logical Calculus of the Ideas Immanent in Nervous Activity"

Propositional Logic:

  1. Logic over statements that are either true or false; McCulloch and Pitts showed that networks of their artificial neurons can compute logical propositions built from AND, OR, and NOT

Artificial Neuron:

  1. A unit with one or more binary (on/off) inputs and one binary output; the McCulloch-Pitts neuron activates its output when enough of its inputs are active

Perceptron:

  1. Invented in 1957 by Frank Rosenblatt; based on an artificial neuron called the Threshold Logic Unit (TLU), which computes a weighted sum of its inputs and applies a step function to it
  2. The training idea was inspired by Hebb's Rule, i.e. "Neurons that fire together, wire together"
  3. Multiple TLUs are placed in a single layer, with each TLU connected to every input (see the sketch after this list)
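
A minimal sketch of a TLU layer in NumPy, assuming the Heaviside step activation; the weights and inputs below are made-up illustrations, not values learned by Rosenblatt's training rule.

```python
import numpy as np

def perceptron_layer(x, W, b):
    """A layer of TLUs: apply the step function to each weighted sum."""
    return (W @ x + b >= 0).astype(int)   # 1 if the TLU fires, else 0

x = np.array([1.0, 0.5])        # two input features
W = np.array([[0.4, -0.2],      # weights of TLU 1 (connected to every input)
              [0.1,  0.9]])     # weights of TLU 2 (connected to every input)
b = np.array([-0.1, 0.0])       # one bias (threshold) per TLU
print(perceptron_layer(x, W, b))  # -> [1 1]
```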

Dense Layer:

  1. A layer in which every neuron/unit is connected to every neuron in the previous layer (see the sketch below)
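
A quick sketch of why "dense" boils down to a matrix multiplication plus a bias; the shapes, random values, and tanh activation are assumptions for illustration.

```python
import numpy as np

def dense(inputs, W, b, activation=np.tanh):
    # inputs: (batch, n_in), W: (n_in, n_out), b: (n_out,)
    # every input unit contributes to every output unit via W
    return activation(inputs @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 examples, 3 features each
W = rng.normal(size=(3, 5))   # 3 inputs fully connected to 5 units
b = np.zeros(5)
print(dense(x, W, b).shape)   # (4, 5): 5 outputs per example
```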

Multi Layer Perceptron:

  1. The limitations of the Perceptron, e.g. its inability to solve the XOR problem, can be eliminated by stacking multiple perceptrons (see the sketch after this list)
  2. An ANN where multiple perceptron layers are stacked on top of each other is called a Multi Layer Perceptron
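
A sketch of XOR with two stacked layers of TLUs. The weights are hand-picked (an assumption, not learned) so that XOR(a, b) = AND(OR(a, b), NAND(a, b)), something no single TLU can compute.

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)   # Heaviside step function

def xor_mlp(a, b):
    hidden = step(np.array([a + b - 0.5,      # TLU 1: OR gate
                            -a - b + 1.5]))   # TLU 2: NAND gate
    return step(hidden[0] + hidden[1] - 1.5)  # output TLU: AND gate

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_mlp(a, b))          # 0, 1, 1, 0
```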

Step Function:

  1. The Heaviside step function: heaviside(z) = 0 if z < 0, and 1 if z >= 0; this is the activation originally used in the TLU

Gradient Descent:

  1. Gradient Descent is a (first-order) optimization algorithm for finding a local minimum of a differentiable function by repeatedly stepping against the gradient (a minimal sketch follows this list)
  2. Intuition: "A gradient measures how much the output of a function changes if you change the inputs a little bit." — Lex Fridman
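
A minimal sketch of gradient descent on a one-dimensional function, f(x) = (x - 3)^2; the learning rate and step count are arbitrary assumptions.

```python
def grad(x):
    return 2 * (x - 3)     # derivative of f(x) = (x - 3)^2

x, lr = 0.0, 0.1           # starting point and learning rate
for _ in range(50):
    x -= lr * grad(x)      # step downhill, against the gradient
print(round(x, 4))         # ~3.0, the minimum of f
```
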
Reverse Mode Automatic Differentiation:

  1. However, it was hard to compute the gradients of a neural network's error, due to the network's complexity w.r.t. the number of parameters
  2. Later, Reverse Mode Automatic Differentiation was introduced, which computes the gradient of the network's error with respect to every parameter in just two passes through the network (a worked sketch follows this list)
    • Forward Pass: inputs flow through the network layer by layer; every intermediate result is cached, and the network's output and error are computed
    • Reverse Pass: the error gradient is propagated backwards through the network, layer by layer, using the chain rule
    • Epoch: one full pass through the entire training set; each mini-batch within it gets one forward and one reverse pass
  3. This made it practical to tweak each of the network's weights and biases in order to reduce the network's error
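
A worked sketch of the two passes on a tiny expression, y = (a*b + c)^2, with arbitrary input values; the reverse pass recovers the gradient with respect to every input in a single backward sweep.

```python
a, b, c = 2.0, 3.0, 1.0    # arbitrary inputs

# Forward pass: compute and cache every intermediate value.
u = a * b                  # u = 6
v = u + c                  # v = 7
y = v ** 2                 # y = 49

# Reverse pass: walk backwards, multiplying local derivatives (chain rule).
dy_dv = 2 * v              # dy/dv = 14
dy_du = dy_dv * 1.0        # dv/du = 1   -> 14
dy_da = dy_du * b          # du/da = b   -> 42
dy_db = dy_du * a          # du/db = a   -> 28
dy_dc = dy_dv * 1.0        # dv/dc = 1   -> 14
print(dy_da, dy_db, dy_dc) # gradient of y w.r.t. every input
```
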
Backpropagation:

  1. Backpropagation = Reverse Mode Automatic Differentiation + Gradient Descent
  2. This was made to work by replacing the step function with the sigmoid function: the step function's gradient is zero everywhere (except at the jump), so gradient descent cannot make progress, whereas the sigmoid is smooth and differentiable everywhere, allowing chain-rule differentiation to guide the weights toward a local minimum (see the sketch after this list)
  3. For the full list of backpropagation steps through a neural network, see HOML 3e, Ch. 10, p. 311
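
A minimal sketch of backpropagation on a single sigmoid neuron with squared error, combining the two-pass gradient computation with gradient descent updates; the training instance, target, and learning rate are made-up assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])       # one training instance (made up)
t = 1.0                         # its target output
w = np.array([0.1, 0.4])        # initial weights (made up)
b, lr = 0.0, 0.5                # bias and learning rate (assumptions)

for _ in range(200):
    z = w @ x + b               # forward pass: weighted sum...
    y = sigmoid(z)              # ...through the differentiable sigmoid
    dz = (y - t) * y * (1 - y)  # reverse pass: chain rule through the
                                # squared error and the sigmoid
    w -= lr * dz * x            # gradient descent: tweak each weight
    b -= lr * dz                # ...and the bias
print(round(float(y), 3))       # output has moved from ~0.41 toward 1.0
```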

Activation Function:

  1. ReLU, Leaky ReLU, Tanh, etc. (see the sketch below)
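
The common activations written out as plain formulas (a sketch; the Leaky ReLU slope alpha below is the usual small default, an assumption here).

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)             # max(0, z): zero for negatives

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope instead of zero

def tanh(z):
    return np.tanh(z)                     # squashes to (-1, 1)

z = np.linspace(-2, 2, 5)                 # [-2, -1, 0, 1, 2]
print(relu(z), leaky_relu(z), tanh(z), sep="\n")
```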

Regression Multi Layer Perceptron:

  1. Univariate Regression: predicting a single value at a time, e.g. predicting a house's value requires one output value, the price
  2. Multivariate Regression: predicting multiple values at once, e.g. predicting the coordinates of a point on a graph requires two output values, the X and Y coordinates (see the sketch after this list)
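
A hedged Keras sketch of the two cases: the only architectural difference is the number of output units (with no activation on the output layer, since regression outputs are unbounded). The hidden-layer size and optimizer are arbitrary assumptions.

```python
import tensorflow as tf

# Univariate regression: one output unit, e.g. a house's price.
univariate = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(1)               # a single predicted value
])

# Multivariate regression: one output unit per target value.
multivariate = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(2)               # e.g. X and Y coordinates
])

univariate.compile(loss="mse", optimizer="sgd")
multivariate.compile(loss="mse", optimizer="sgd")
```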

Bias Neuron:

  1. An extra neuron that always outputs 1; the weight on its connection to each unit acts as that unit's bias term

O/P Calculation:

  1. A unit's output is h = phi(w . x + b), i.e. the activation function phi applied to the weighted sum of the unit's inputs plus its bias

Weight Updates:

  1. After each reverse pass, every weight is nudged against its error gradient: w <- w - eta * (dE/dw), where eta is the learning rate

Multilayer Perceptron (MLP):

  1. Made up of an input layer, one or more hidden layers, and an output layer; layers close to the input are called lower layers, and layers close to the output are called upper layers
  2. Every layer except the output layer includes a bias neuron and is fully connected to the next layer

Feedforward Neural Network:

  1. A network in which the signal flows in one direction only, from the inputs to the outputs

Backpropagation Training Algorithm:

Other Activation Functions:

Regression Multilayer Perceptron:
Architecture of Regression Multilayer Perceptron:

Classification Multilayer Perceptron:
Architecture:

  1. Multi-Label Binary Classification Problem: each instance may belong to several labels at once, so the output layer uses one sigmoid unit per label (see the sketch after this list)
  2. Multi-Class Classification Problem: each instance belongs to exactly one of K classes, so the output layer uses K units with a softmax activation
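
A hedged Keras sketch contrasting the two output layers; the layer sizes and the number of labels/classes are assumptions, and the key difference is the output activation paired with the matching loss.

```python
import tensorflow as tf

# Multi-label binary: each label is an independent yes/no question,
# so use one sigmoid unit per label with binary cross-entropy.
multi_label = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(3, activation="sigmoid")   # 3 independent labels
])
multi_label.compile(loss="binary_crossentropy", optimizer="sgd")

# Multi-class: exactly one of K exclusive classes, so use a softmax.
multi_class = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")  # 10 exclusive classes
])
multi_class.compile(loss="sparse_categorical_crossentropy", optimizer="sgd")
```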

More: