What are Neural Networks?
Imagine you have a magical drawing board that can learn and recognize different shapes. You want
to teach it to recognize a circle, a square, and a triangle. How can you do that?
- First, you draw lots of different circles, squares, and triangles on the board. Each shape has its own special colors and patterns. These are the examples that will help the board learn.
- Now, the magical drawing board has a "brain" called a neural network. It's like the board's super-smart friend who can understand and remember things.
- The brain of the drawing board is made up of lots of little parts called "neurons." These neurons work together to figure out what shape you drew.
- When you draw a shape on the board, the neurons in the brain wake up and start looking at it. They look at all the colors, patterns, and lines of the shape. Each neuron thinks about a specific part of the shape, like the color of the lines or how many sides it has.
- The neurons talk to each other and share their thoughts. They pass messages to help each other understand the shape. They say things like, "Hey, I think this is a circle because I see a round line and it has no corners!" or "Wait, I see a triangle because I see three straight lines and it has pointy corners!"
The more examples of shapes you show the drawing board, the smarter its brain becomes. It learns
from all the different shapes and figures out what makes each one special.
Once the drawing board's brain learns enough, it can recognize shapes all by itself. You can
draw a new shape, and the board will say, "Oh, I know this! It's a triangle!" or "That's a
square, I'm sure!"
So, just like how you learn by looking at and understanding different things, a neural network
learns by looking at examples and figuring out patterns. It's like a magical friend that helps
the drawing board understand and recognize shapes.
Neural Network: Glossary
Biomimicry:
- In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts published a paper proposing a design for an artificial neuron, inspired by the workings of the biological neuron
- The paper was titled "A Logical Calculus of the Ideas Immanent in Nervous Activity"
Propositional Logic:
- McCulloch and Pitts showed that networks of their simple artificial neurons can compute logical propositions such as AND, OR, and NOT
Artificial Neuron:
- A computational unit with one or more inputs and one output; the McCulloch-Pitts neuron activates its output when enough of its inputs are active
Perceptron:
- Invented in 1957 by Frank Rosenblatt, who introduced the Threshold Logic Unit (TLU) as the building block of an artificial neural network
- The idea was inspired by Hebb's Rule, i.e. "Neurons that fire together, wire together"
- Multiple TLUs are placed in a single layer, with each TLU connected to every input (see the sketch below)
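A minimal sketch of a single TLU, assuming plain NumPy; the AND-gate weights are hand-picked for illustration, not learned:

```python
import numpy as np

def heaviside(z):
    """Step function: 0 for z < 0, 1 otherwise."""
    return np.where(z >= 0, 1, 0)

def tlu(x, w, b):
    """Threshold Logic Unit: weighted sum of the inputs, passed through a step function."""
    return heaviside(np.dot(w, x) + b)

# Illustrative weights implementing a 2-input AND gate
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", tlu(np.array(x), w, b))   # fires only when both inputs are 1
```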
Dense Layer:
- A layer in which every node/unit is connected to every node/unit of the previous layer; also called a fully connected layer (see the sketch below)
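A dense layer reduces to a matrix multiply plus a bias; a NumPy sketch with illustrative shapes and tanh as a stand-in activation:

```python
import numpy as np

def dense(X, W, b, activation=np.tanh):
    """Fully connected layer: every output unit sees every input unit."""
    return activation(X @ W + b)

X = np.random.rand(4, 3)   # batch of 4 examples, 3 features each
W = np.random.rand(3, 5)   # 3 inputs fully connected to 5 units
b = np.zeros(5)
print(dense(X, W, b).shape)  # (4, 5)
```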
Multi Layer Perceptron:
- The limitations of a single perceptron, e.g. its inability to solve the XOR problem, can be eliminated by stacking multiple perceptrons (see the sketch below)
- An ANN in which multiple layers of perceptrons are stacked on top of each other is called a Multi Layer Perceptron
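A sketch of why stacking eliminates the XOR limitation: a single TLU cannot compute XOR, but two stacked TLU layers can. The weights below are hand-picked for illustration (one hidden unit computes OR, the other NAND, and the output unit ANDs them):

```python
def heaviside(z):
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    h1 = heaviside(x1 + x2 - 0.5)    # hidden unit 1: OR
    h2 = heaviside(-x1 - x2 + 1.5)   # hidden unit 2: NAND
    return heaviside(h1 + h2 - 1.5)  # output: AND(OR, NAND) = XOR

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_mlp(*x))   # prints 0, 1, 1, 0
```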
Step Function:
- Heaviside Step Function: heaviside(z) = 0 if z < 0, else 1; the activation originally used in TLUs
Gradient Descent:
- Gradient Descent is a (first-order) optimization algorithm for finding a local minimum of a differentiable function (see the sketch below)
- "A gradient measures how much the output of a function changes if you change the inputs a little bit." — Lex Fridman
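A minimal sketch of gradient descent on a one-variable function, f(x) = (x - 3)^2, whose gradient is 2(x - 3); the starting point and learning rate are arbitrary:

```python
def f_grad(x):
    # Gradient of f(x) = (x - 3)^2
    return 2 * (x - 3)

x, learning_rate = 0.0, 0.1
for step in range(50):
    x -= learning_rate * f_grad(x)   # take a small step against the gradient

print(x)  # approaches 3, the minimum of f
```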
Reverse Mode Auto Diff:
- However, computing the gradients that Gradient Descent needs was hard for a neural network, due to its complexity w.r.t. the number of parameters
- Later, Reverse-Mode Automatic Differentiation was introduced, which takes 2 passes through the network to compute the network's error gradient (see the sketch below)
- Forward Pass: compute the output of every neuron, layer by layer, and measure the network's error
- Reverse Pass: propagate the error backward through the network, applying the chain rule to measure how much each weight and bias contributed to the error
- Epoch: one full pass through the training set is called 1 Epoch; each training step within it consists of one forward and one reverse pass
- This made it practical to tweak each of the network's weights and biases in order to reduce the network's error
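A hand-worked sketch of the two passes for a single sigmoid neuron with a squared error, using illustrative values; the reverse pass applies the chain rule to the intermediate values saved during the forward pass:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Illustrative values for one neuron: prediction = sigmoid(w*x + b)
w, b, x, target = 0.5, 0.1, 1.0, 1.0

# --- Forward pass: compute and store each intermediate value ---
z = w * x + b             # weighted sum
p = sigmoid(z)            # prediction
loss = (p - target) ** 2  # squared error

# --- Reverse pass: chain rule, from the loss back to each parameter ---
dloss_dp = 2 * (p - target)
dp_dz = p * (1 - p)               # derivative of the sigmoid
dloss_dw = dloss_dp * dp_dz * x   # dz/dw = x
dloss_db = dloss_dp * dp_dz * 1   # dz/db = 1

print(dloss_dw, dloss_db)  # how much a small change in w or b changes the loss
```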
Backpropagation:
- Backpropagation = Reverse-Mode Automatic Differentiation + Gradient Descent
- This was made workable by replacing the Step Function with the Sigmoid Function: unlike the step function, whose derivative is 0 everywhere (and undefined at 0), the sigmoid has a well-defined nonzero derivative everywhere, so chain-rule differentiation can guide the descent toward a local minimum
- Steps involved in running backpropagation through a neural network: see HOML 3e, CH10, page 311, and the training sketch below
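A compact sketch of the combination, assuming NumPy: the reverse pass supplies the gradients and Gradient Descent applies them, training a tiny sigmoid MLP on XOR. The layer sizes, learning rate, and epoch count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# XOR training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 1.0

for epoch in range(5000):
    # Forward pass: compute and store each layer's output
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Reverse pass: chain rule on the squared error, output layer first
    d_out = 2 * (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Gradient Descent: nudge every weight and bias against its gradient
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(p.round(2))  # should approach [[0], [1], [1], [0]]; convergence can vary with the seed
```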
Activation Function:
- ReLU, Leaky ReLU, Tanh, etc. (sketched below)
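A NumPy sketch of these functions; the 0.01 slope for Leaky ReLU is a common default, not a fixed constant:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)                # 0 for negatives, identity for positives

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)   # small slope instead of a flat 0

def tanh(z):
    return np.tanh(z)                      # squashes to (-1, 1), zero-centered

def sigmoid(z):
    return 1 / (1 + np.exp(-z))            # squashes to (0, 1)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z), leaky_relu(z), tanh(z), sigmoid(z), sep="\n")
```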
Regression Multi Layer Perceptron:
- Single Variate Regression: predicting a single value at a time, e.g. predicting a house's value requires one predicted value, the cost
- Multi Variate Regression: predicting multiple values at once, e.g. predicting the coordinates of a point on a graph requires 2 predictions, the X & Y coordinates (see the sketch below)
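A sketch of both variants in Keras (following HOML's stack); the 8 input features and hidden sizes are made up for illustration. The output layer has 1 unit for single-variate regression, 2 for the coordinate example, and no activation in either case:

```python
import tensorflow as tf

# Single-variate regression: one predicted value (e.g. a house's cost)
univariate = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(1),            # 1 output unit, no activation
])

# Multi-variate regression: several values at once (e.g. X and Y coordinates)
multivariate = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(2),            # 2 output units, one per coordinate
])

univariate.compile(loss="mse", optimizer="sgd")
multivariate.compile(loss="mse", optimizer="sgd")
```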
Bias Neuron:
- An extra neuron that always outputs 1; its weight lets each unit shift its activation threshold
O/P Calculation:
- Output of a fully connected layer: h = φ(XW + b), where X is the input matrix, W the weight matrix, b the bias vector, and φ the activation function
Weight Updates:
- Gradient Descent update rule: w ← w − η · ∂E/∂w, where η is the learning rate and ∂E/∂w is the gradient of the error w.r.t. the weight
Multilayer Perceptron (MLP):
Input Layer, Hidden Layers, Output Layer; layers near the input are called lower layers, those near the output upper layers
Every layer, except the output layer, includes a bias neuron and is fully connected to the next layer
Feedforward Neural Network:
- An architecture in which the signal flows in one direction only, from the inputs to the outputs
Backpropagation Training Algorithm:
Other Activation Functions:
Regression Multilayer Perceptron:
Architecture of Regression Multilayer Perceptron:
Classification Multilayer Perceptron:
Architecture:
- Multi-Label Binary Classification Problem: each instance can belong to several classes at once; use one sigmoid output neuron per label
- Multi-Class Classification Problems: each instance belongs to exactly one of several classes; use one output neuron per class with a softmax activation (see the sketch below)
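A sketch of the two output heads in Keras; the input size, hidden width, and label/class counts are made up for illustration:

```python
import tensorflow as tf

# Multi-label binary: 3 independent yes/no labels -> 3 sigmoid outputs
multi_label = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(3, activation="sigmoid"),
])
multi_label.compile(loss="binary_crossentropy", optimizer="sgd")

# Multi-class: exactly 1 of 5 classes -> 5 softmax outputs summing to 1
multi_class = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(5, activation="softmax"),
])
multi_class.compile(loss="sparse_categorical_crossentropy", optimizer="sgd")
```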
More: