Neural Networks: Feedforward and Backpropagation Explained & Optimization

Towards really understanding neural networks: one of the most recognized concepts in Deep Learning (a subfield of Machine Learning) is the neural network.

An important point is that all types of neural networks are different combinations of the same basic principles. Once you know the basics of how neural networks work, new architectures are just small additions to what you already know.

Moving forward, the above will be the primary motivation for every other deep learning post on this website.

Table of Contents (Click To Scroll)

  1. An Overview of Neural Networks

  2. What is a neural network?

  3. The details, notations and math used

  4. Backpropagation: Optimizing All Weights

  5. Optimizing the Neural Network

  6. Putting Neural Networks Into Steps

  7. Further Reading (Recommended Books)

Overview

The big picture of neural networks is that we take some data, feed it into an algorithm and hope for the best. But what happens inside that algorithm? This question is important to answer for many reasons; one is that otherwise you might regard the inner workings of a neural network as a black box.


Neural networks consist of neurons, connections between these neurons called weights, and biases connected to each neuron. We distinguish between input, hidden and output layers, and we hope that each layer helps us towards solving our problem.

To move forward through the network, called a forward pass, we iteratively use a formula to calculate the activations of each neuron in the next layer. Don't worry about the notation yet: we call the neuron activations $a$, the weights $w$ and the biases $b$, and we collect them in vectors and matrices.

$$a^{(l)} = \sigma\left( \boldsymbol{W}\boldsymbol{a}^{(l-1)} + \boldsymbol{b} \right)$$
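To make the formula concrete, here is a minimal NumPy sketch of a forward pass through a tiny, hypothetical 2-3-1 network with a sigmoid activation. The layer sizes, random initialization and function names are illustrative choices for this sketch, not part of the original post.

```python
import numpy as np

def sigmoid(z):
    # Squashes each value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(a, weights, biases):
    # a: activations of the input layer, shape (n_inputs,)
    # weights[l], biases[l]: parameters connecting layer l to layer l+1
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # a^(l) = sigma(W a^(l-1) + b)
    return a

# Hypothetical 2-3-1 network with random parameters
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(1)]

y_hat = forward_pass(np.array([0.5, -1.2]), weights, biases)
print(y_hat)  # the network's output for this input
```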

This takes us forward through the network until we get an output. We measure how good this output $\hat{y}$ is with a cost function $C$, which compares it to the result we wanted in the output layer, $y$, and we do this for every example. A commonly used cost function is the mean squared error (MSE):

$$ C = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 $$
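The MSE formula translates directly into a few lines of NumPy. This is just a generic implementation of the equation above, with made-up example numbers:

```python
import numpy as np

def mse(y, y_hat):
    # C = (1/n) * sum over i of (y_i - y_hat_i)^2
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.mean((y - y_hat) ** 2)

# Targets vs. network outputs for three made-up examples
print(mse([1.0, 0.0, 1.0], [0.8, 0.2, 0.6]))  # 0.08
```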

Given this first result, we go back and adjust the weights and biases so that the cost function is minimized, in what is called a backward pass. We essentially nudge the whole neural network so that its output moves closer to the target. In a sense, this is how we tell the algorithm whether it performed poorly or well.
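As a toy illustration of this idea, here is a sketch of gradient descent on a single linear neuron ($\hat{y} = wx + b$) with the MSE cost above. The data, learning rate and number of steps are invented for the sketch; a real backward pass applies the same kind of update to every weight and bias in the network, using gradients computed by backpropagation.

```python
import numpy as np

# Made-up data: the target relationship is y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0            # start with arbitrary parameters
learning_rate = 0.1

for step in range(200):
    y_hat = w * x + b                       # forward pass
    grad_w = np.mean(2 * (y_hat - y) * x)   # dC/dw for the MSE cost
    grad_b = np.mean(2 * (y_hat - y))       # dC/db for the MSE cost
    w -= learning_rate * grad_w             # step against the gradient
    b -= learning_rate * grad_b

print(w, b)  # should approach 2 and 1
```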

[...]

Source - Continue Reading: https://mlfromscratch.com/neural-networks-explained/
