# (1/2) Same Story(MLE) Different Endings: Mean Square Error, Cross Entropy, KL Divergence

## Mathematically Proving That They are All the Same

---

# Introduction

Very often, data scientists and machine learning practitioners don’t appreciate the mathematical and intuitive relationships between different loss metrics like Negative Log Likelihood, Cross Entropy, Maximum Likelihood Estimation, Kullback-Leibler (KL) divergence, and, most importantly, Mean Square Error. Wouldn’t you be surprised if I said that KL divergence and Mean Square Error are mathematically the same?

As a seasoned data scientist, I am confounded by the fact that these mathematical relations are not given the emphasis this topic deserves in AI/ML courses and textbooks. In this blog, I aim to establish solid mathematical and intuitive relations between these different losses, which are used in different problems like classification, regression, GANs, etc.

This blog should help data scientists deepen their understanding of different loss metrics, and help aspiring data scientists crack machine learning interviews.

# The Mother of All Loss Functions: Maximum Likelihood Estimation

The maximum likelihood method is used for parameter estimation. In machine learning, each model contains its own set of parameters; for example, in the linear model *y = mx + c*, the weight/slope *m* and the intercept *c* are the parameters that ultimately define the model.

Now the challenge is to find the model parameters when the data is provided. Maximum likelihood estimation is a method that determines values for these parameters. But how is it done? Intuitively, the parameter values are chosen so that they maximize the likelihood that the model’s predictions are close to the observations.

The parameter set in the total search space that maximizes the likelihood function is called the maximum likelihood estimate.

# Math Behind MLE

The logic of maximum likelihood is both intuitive and flexible. The math is simple and elegant; just follow along.

1: Let’s assume that we want to build a model with parameters *θ* = [θ₀, θ₁, θ₂, θ₃ … θₙ]ᵀ. For example, in the linear regression model (*y = mx + c*), *θ* = [*m*, *c*]. The set of all candidate parameter values, 𝞗, is called the parameter space. In the linear regression case, 𝞗 is the search space over the different combinations [(*m*₀, *c*₀), (*m*₁, *c*₁), … (*m*ₙ, *c*ₙ)].

2: The goal of maximum likelihood estimation is to determine the best parameter set *θ*ₖ ∈ 𝞗. For example, in linear regression, *θ*ₖ = (*m*ₖ, *c*ₖ).

3: The way to find the right parameter set *θ*ₖ is the **likelihood function**. The concept is simple if carefully understood. Consider our linear model (*y = mx + c*) again, with a given data point (xₚ, yₚ) and parameters *θ*ₖ = (*m*ₖ, *c*ₖ).

4: **PDF:** *f*ₚ(yₚ; *θ*ₖ) tells us how probable it is that the model predicts the actual label *yₚ*. Simple, right? You flip a coin and see heads; the pdf *f*(Head) tells us how likely you are to see heads.
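As a tiny concrete sketch of the coin analogy (my own illustration; the function name and outcome labels are made up):

```python
# Probability mass function of a (possibly biased) coin.
# theta is the model parameter: the probability of heads.
def coin_pmf(outcome: str, theta: float) -> float:
    """Return P(outcome) for a coin with P(Head) = theta."""
    return theta if outcome == "Head" else 1.0 - theta

# For a fair coin (theta = 0.5), f(Head) = 0.5.
print(coin_pmf("Head", 0.5))   # → 0.5
print(coin_pmf("Tail", 0.7))   # ≈ 0.3 for a hypothetical biased coin
```

Exactly as *f*(Head) depends on the coin’s parameter, *f*ₚ(yₚ; *θ*ₖ) depends on the model’s parameters.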

5: *f*ₚ(yₚ; *θ*ₖ) is for one data point p, but we need to calculate this function for all the data points (y₀, y₁, y₂, y₃ … yₙ). How do we do this? We use the joint probability distribution to take all data points into consideration.

Note: For independent and identically distributed random variables, the joint probability distribution is the product of the univariate density functions: *f*(y; θ) = *f*₀(y₀; θ) · *f*₁(y₁; θ) · … · *f*ₙ(yₙ; θ).
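A minimal sketch of this product, assuming i.i.d. Gaussian noise around the model’s predictions (the Gaussian choice and function names are my illustrative assumptions, not from the original):

```python
import math

def gaussian_pdf(y: float, mu: float, sigma: float = 1.0) -> float:
    """Univariate normal density f_p(y; mu, sigma)."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def joint_likelihood(ys, mus):
    """For i.i.d. points, the joint density is the product of univariate densities."""
    likelihood = 1.0
    for y, mu in zip(ys, mus):
        likelihood *= gaussian_pdf(y, mu)
    return likelihood

ys  = [1.0, 2.1, 2.9]   # observed labels
mus = [1.0, 2.0, 3.0]   # model predictions for each point
print(joint_likelihood(ys, mus))
```

The closer the predictions sit to the observations, the larger this product is.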

6: For a given parameter *θ*ₖ, the joint density function *f*(y; *θ*ₖ) tells me how likely I am to see a predicted *y* distribution equal to the observed *y* distribution. Now reverse the situation: we want to find the *θ*ₖ such that the predicted *y* distribution is closest to the observed *y* distribution. That reversed JDF is called the likelihood function.

7: So we search the whole parameter space *θ* ∈ 𝞗, and the specific value *θ*ₖ that maximizes the likelihood function is called the Maximum Likelihood Estimate (**MLE**).
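Putting steps 1–7 together, here is a toy grid search over the parameter space 𝞗 for the linear model *y = mx + c*, assuming Gaussian noise (both the grid and the noise model are my illustrative choices):

```python
import math

def log_likelihood(m, c, xs, ys, sigma=1.0):
    """Sum of log Gaussian densities of the residuals y - (m*x + c)."""
    ll = 0.0
    for x, y in zip(xs, ys):
        resid = y - (m * x + c)
        ll += -0.5 * (resid / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

# Data generated by hand from y = 2x + 1 with no noise, for clarity.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# A small grid of (m, c) candidates -- the parameter space Theta.
grid = [(m / 10, c / 10) for m in range(0, 31) for c in range(0, 31)]

# The grid point maximizing the likelihood is the maximum likelihood estimate.
m_hat, c_hat = max(grid, key=lambda p: log_likelihood(p[0], p[1], xs, ys))
print(m_hat, c_hat)  # → 2.0 1.0
```

In practice the search is done with calculus or gradient-based optimization rather than a grid, but the idea is the same: pick the *θ* in 𝞗 that makes the observed data most probable.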

8: In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood: ℓ(θ) = log L(θ) = Σₚ log *f*ₚ(yₚ; θ).

Maximizing the log-likelihood is the same as maximizing the likelihood. Since log is an increasing function, the value of θ that maximizes the log-likelihood function will also maximize the likelihood function.
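A quick numerical check of that claim, using hypothetical likelihood values of my own invention:

```python
import math

# Hypothetical likelihood values for three candidate parameter sets.
likelihoods = {"theta_0": 0.02, "theta_1": 0.31, "theta_2": 0.11}

# The argmax is unchanged by taking the log, because log is increasing.
best_by_likelihood = max(likelihoods, key=likelihoods.get)
best_by_log = max(likelihoods, key=lambda k: math.log(likelihoods[k]))
print(best_by_likelihood == best_by_log)  # → True
```

The log also turns the product over data points into a sum, which is numerically far more stable than multiplying many small probabilities.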

## Loss: Negative Log Likelihood (Teaser)

Before concluding the blog, let me give a teaser: the loss that follows most directly from MLE is Negative Log Likelihood, a loss function used in multi-class classification. Losses are generally minimized, so we put a negative sign in front of the log-likelihood, giving the Negative Log Likelihood loss. Minimizing the Negative Log Likelihood loss thus achieves the Maximum Likelihood Estimate.
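A minimal sketch of this loss for multi-class classification (my own illustration; the predicted probabilities are assumed to come from something like a softmax):

```python
import math

def nll_loss(pred_probs, true_class):
    """Negative log of the probability the model assigned to the true class."""
    return -math.log(pred_probs[true_class])

# Hypothetical predicted probabilities for a 3-class problem.
probs = [0.1, 0.7, 0.2]
print(nll_loss(probs, true_class=1))  # small loss: high probability on the truth
print(nll_loss(probs, true_class=0))  # large loss: low probability on the truth
```

Minimizing this loss over a dataset maximizes the (log-)likelihood of the observed labels, which is exactly the MLE objective from step 7.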

# Conclusion

Almost all common loss functions can be derived from Maximum Likelihood Estimation. In my next article, we will see how they can be derived mathematically and appreciate the similarities between these seemingly different loss functions used in regression, classification, and GANs.

Thanks for your time!