Now, if, in addition to those four conditions, the error has the same variance for any values of the regressors (i.e., the error is homoscedastic), then the OLS estimators are the best, in the sense of having the narrowest sampling distribution, among all linear unbiased estimators. This means that if those five conditions are satisfied, you cannot find an estimator that is both unbiased and has a narrower sampling distribution than OLS. These five conditions, typically referred to as the Gauss-Markov conditions, do not restrict the distribution of the errors. If we additionally assume that the errors are normally distributed, it can be shown that the OLS estimators are the best among all unbiased estimators, not just the linear ones.
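A small simulation can make the BLUE property concrete. The sketch below compares the OLS slope estimator with another linear unbiased estimator, the slope through the two endpoint observations; this alternative estimator is a hypothetical comparison chosen for illustration, not something discussed in the text. Under homoscedastic errors, both estimators should center on the true slope, but the OLS estimator should show a visibly smaller sampling variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000
x = np.linspace(0, 10, n)          # fixed regressor values
beta0, beta1 = 2.0, 0.5            # true intercept and slope

ols_slopes, alt_slopes = [], []
for _ in range(reps):
    # Homoscedastic, mean-zero errors: same variance for every x
    y = beta0 + beta1 * x + rng.normal(0, 1, n)
    # OLS slope: sample covariance over sample variance of x
    b_ols = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    # Alternative linear unbiased estimator: slope through the endpoints
    b_alt = (y[-1] - y[0]) / (x[-1] - x[0])
    ols_slopes.append(b_ols)
    alt_slopes.append(b_alt)

# Both are unbiased, but OLS has the narrower sampling distribution
print("mean OLS:", np.mean(ols_slopes), "mean alt:", np.mean(alt_slopes))
print("var OLS:", np.var(ols_slopes), "var alt:", np.var(alt_slopes))
```

Any other linear unbiased estimator would give a similar result: its empirical variance can match, but never fall below, that of OLS.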
Assuming normality of the errors has another positive consequence: it makes the sampling distribution of the OLS estimators normal even when the sample size is small. (Note that, because the OLS estimators are functions of sample averages, the CLT guarantees that their sampling distribution is approximately normal in large samples regardless of the distribution of the errors.) When the OLS estimators are normally distributed, the t and F statistics follow, respectively, t and F distributions. Knowing the estimators' sampling distributions allows us to make statistical inferences (construct confidence intervals and prediction intervals).
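The large-sample claim can be checked by simulation. The sketch below, a minimal illustration using details not specified in the text (a uniform regressor, centered exponential errors), draws markedly skewed, non-normal errors and shows that the sampling distribution of the OLS slope is still approximately normal: about 95% of standardized slope estimates fall within ±1.96:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 20_000
x = rng.uniform(0, 10, n)          # one fixed draw of regressor values
beta0, beta1 = 2.0, 0.5            # true intercept and slope
sxx = np.sum((x - x.mean()) ** 2)

slopes = np.empty(reps)
for r in range(reps):
    # Skewed, mean-zero errors: clearly NOT normally distributed
    e = rng.exponential(1.0, n) - 1.0
    y = beta0 + beta1 * x + e
    # OLS slope in closed form
    slopes[r] = np.sum((x - x.mean()) * y) / sxx

# Standardize the simulated slopes and check normal-like tail coverage
z = (slopes - beta1) / slopes.std()
coverage = np.mean(np.abs(z) <= 1.96)
print("fraction within +/-1.96:", coverage)   # near 0.95 if approximately normal
```

Rerunning with a much smaller n would show the coverage drifting away from 0.95, which is exactly why the normality assumption matters in small samples.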