When asking a question, in addition to the answer itself, we also want to know how likely the answer is to be correct. We would think it ridiculous to trust every answer without considering how knowledgeable and confident the source is. That's the problem with many AI-based models today: they all appear equally confident in their predictions. When a model is not aware of what it does or does not know, there is no way for us to tell whether we should trust it.
"Uncertainty" can be a scary word. Especially when it's used around automation. However, the information that uncertainty provides is crucial in order to make AI safer. Lack of uncertainty quantification exposes users of these models to overconfident prediction. Being able to quantify uncertainty means a model can communicate how credible it estimates its prediction to be. For example, if a model learns the growth rate of pineapples in Indonesia and then asks how it would grow on in Denmark, the model should be extremely uncertain about its answer given that it has no data on a pineapple growing in any similar weather conditions.
For a more detailed (and mathematical) description of the uncertainty problem, see
this article by Yarin Gal.