
Getting ready

The formula for the cost function is the following:

$$\text{cost} = (\text{predicted} - \text{actual})^2$$
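For example, with these illustrative values, a prediction of 0.8 against an actual value of 1.0 gives a cost of (0.8 - 1.0)^2 = 0.04.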

If the cost function looks familiar, it's because it is really just another way of measuring the squared difference between the actual output and the prediction. The purpose of gradient descent, or backpropagation, in a neural network is to minimize the cost function until its value is close to 0. At that point, the weights and bias (w1, w2, and b) will no longer be the random, insignificant values generated by NumPy, but meaningful weights that contribute to the neural network model, as shown in the sketch below.
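The following is a minimal sketch of that loop for a single sigmoid neuron with two inputs. The toy inputs, the target value, the learning rate, and all names other than w1, w2, and b are illustrative assumptions for this sketch, not the recipe's actual code:

```python
import numpy as np

# Toy data: two input features and a target output (illustrative values)
x1, x2 = 0.5, 0.3
actual = 1.0

# Start from random, insignificant weights and a bias, as described above
np.random.seed(0)
w1, w2, b = np.random.randn(3)

learning_rate = 0.1  # assumed step size for this sketch

for step in range(1000):
    # Forward pass: sigmoid activation on the weighted sum of the inputs
    z = w1 * x1 + w2 * x2 + b
    predicted = 1.0 / (1.0 + np.exp(-z))

    # cost = (predicted - actual)^2
    cost = (predicted - actual) ** 2

    # Backpropagation: chain rule through the cost and the sigmoid
    d_cost = 2 * (predicted - actual)        # d(cost)/d(predicted)
    d_sigmoid = predicted * (1 - predicted)  # d(predicted)/dz
    grad = d_cost * d_sigmoid                # d(cost)/dz

    # Gradient descent: nudge each parameter against its gradient
    w1 -= learning_rate * grad * x1
    w2 -= learning_rate * grad * x2
    b -= learning_rate * grad

print(cost)  # after training, the cost should be much closer to 0
```

Each pass nudges w1, w2, and b in the direction that reduces the squared error, which is exactly the minimization the preceding paragraph describes.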