Table of cross-entropy loss at different probabilities

Yang Zhang
2 min read · Jun 21, 2019


(Table: cross-entropy loss at different probabilities for the correct class)

Cross-entropy loss is used for classification machine learning models. Often, as the model is being trained, the average value of this loss is printed on the screen, but it is not always obvious how well the model is doing just from looking at this value.

The formula for cross-entropy loss in Python is

import numpy as np

def cross_entropy(p):
    return -np.log(p)

where p is the probability the model assigns to the correct class.
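
To make the numbers concrete, here is a minimal sketch that regenerates a table like the one above by evaluating -np.log(p) at a handful of probabilities (the specific p values are my own illustrative choice):

import numpy as np

# cross-entropy loss -log(p) for a few probabilities of the correct class
for p in [0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"p={p:.2f}  loss={-np.log(p):.2f}")

This prints, for example, loss=0.69 at p=0.5 and loss=0.36 at p=0.7, matching the values discussed below.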

For example, for a model that classifies images as an apple, an orange, or an onion, if the image is an apple and the model predicts probabilities {“apple”: 0.7, “orange”: 0.2, “onion”: 0.1}, the cross-entropy loss is about 0.36. This corresponds to the row with loss=0.36 and p=0.7 in the table above. The higher p is for apple, the lower the loss.
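
As a quick check of this example (a minimal sketch; the dictionary layout is just one way to hold the predicted probabilities):

import numpy as np

# predicted probabilities for an image whose true class is "apple"
pred = {"apple": 0.7, "orange": 0.2, "onion": 0.1}

# cross-entropy only uses the probability assigned to the correct class
loss = -np.log(pred["apple"])
print(round(loss, 2))  # 0.36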

A few special points on the table:

  • p=1, loss=0: perfect job guessing the correct class
  • p=0.5, loss=0.69: guessing the correct class with probability 0.5. For binary classification (e.g., hotdog or not-hotdog), this corresponds to a random fair coin flip.
  • p=0, loss=infinitely large (not shown in the table, but it would sit on the top row): the worst possible job predicting the correct class (a quick numerical check of these points follows below)
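
A quick numerical check of these special points (the tiny probability 1e-12 is just an illustrative stand-in for p approaching 0):

import numpy as np

print(-np.log(1.0))    # -0.0 (== 0.0): perfect prediction of the correct class
print(-np.log(0.5))    # ~0.693, i.e. ln(2): a fair coin flip in binary classification
print(-np.log(1e-12))  # ~27.63: the loss keeps growing as p approaches 0
# -np.log(0.0) itself evaluates to inf (numpy also emits a divide-by-zero warning)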

By looking at the table above, it is now easier to interpret how well a model is doing from its cross-entropy loss value.

And this is a graph version of the table:
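
A minimal sketch to draw such a graph with matplotlib, plotting the loss -log(p) against p (the axis labels are my own):

import numpy as np
import matplotlib.pyplot as plt

# cross-entropy loss -log(p) for p in (0, 1]
p = np.linspace(0.01, 1.0, 200)
plt.plot(p, -np.log(p))
plt.xlabel("probability p of the correct class")
plt.ylabel("cross-entropy loss")
plt.show()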
