Monday, February 24, 2020

[Repost] Understanding Different Loss Functions for Neural Networks

Original article
There are various loss functions available for different objectives. In this guide, I will take you through some of the most frequently used loss functions, with a set of examples. This guide is written with the Keras and TensorFlow frameworks in mind.

Loss Function | Brief Intro

A loss function helps in optimizing the parameters of a neural network. Our objective is to minimize the loss of the neural network by optimizing its parameters (weights); the loss quantifies how poorly the network is currently performing. The loss is computed by a loss function that compares the target (actual) values with the values predicted by the network. We then use the gradient descent method to update the weights of the network so that the loss is minimized. This is how we train a neural network.
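The loop described above (predict, compute the loss, update the weights by gradient descent) can be sketched in a few lines of plain Python. This is an illustrative toy example, not from the original article: it fits a single linear neuron y = w*x + b with mean squared error, one of the loss functions covered later; the function names and toy data are my own.

```python
def mse(targets, preds):
    """Mean squared error between target (actual) and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(targets, preds)) / len(targets)

def train_step(w, b, xs, ys, lr=0.05):
    """One gradient-descent update of the weights (w, b) on data (xs, ys)."""
    n = len(xs)
    preds = [w * x + b for x in xs]
    # Gradients of the MSE loss with respect to w and b.
    dw = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
    db = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
    # Move the weights a small step against the gradient.
    return w - lr * dw, b - lr * db

# Toy data generated from y = 2x + 1; the loss shrinks as training proceeds.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, xs, ys)
final_loss = mse(ys, [w * x + b for x in xs])
```

After 200 update steps, (w, b) has moved close to the true (2, 1) and the loss is near zero; in Keras, the same loop is what `model.fit` runs internally once you pass a loss function to `model.compile`.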