Teaching models to express their uncertainty in words

We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language, without using model logits. When given a question, the model generates both an answer and a confidence level (for example, “90% confidence” or “high confidence”). These levels map to probabilities that are well calibrated. The model also remains moderately calibrated under distribution shift, and it is sensitive to uncertainty in its own answers rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words (“verbalized probability”) with uncertainty extracted from model logits. Both kinds of uncertainty generalize calibration under distribution shift. We also provide evidence that GPT-3’s ability to generalize calibration depends on pretrained latent representations that correlate with epistemic uncertainty over its answers.
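To make the notion of calibration concrete, here is a minimal sketch (not the paper’s code) of how one could score verbalized confidences: given pairs of a stated confidence and whether the answer was correct, compute the expected calibration error (ECE) over equal-width probability bins. The example data is hypothetical.

```python
def expected_calibration_error(confidences, corrects, n_bins=10):
    """Mean |accuracy - confidence| per bin, weighted by bin size.

    confidences: floats in [0, 1], e.g. parsed from "90% confidence".
    corrects: 0/1 flags for whether each answer was right.
    """
    assert len(confidences) == len(corrects)
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Equal-width bins; the top bin also includes confidence == 1.0.
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == 1.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(corrects[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece

# Hypothetical model outputs: verbalized confidences and correctness.
confs = [0.9, 0.9, 0.6, 0.6, 0.3, 0.3]
hits  = [1,   1,   1,   0,   0,   0]
print(round(expected_calibration_error(confs, hits), 3))  # 0.167
```

A well-calibrated model would have an ECE near zero: among answers given “90% confidence”, about 90% would be correct. The same score can be applied to confidences read off the model logits, which is how the two kinds of uncertainty can be compared on equal footing.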

At Ikaroa, we have developed a new method for teaching models to express their uncertainty in words. By leveraging recent advances in artificial intelligence, this approach to natural language processing (NLP) helps models communicate their confidence clearly and accurately.

Using our proprietary technology, we bridge the gap between language and machine learning, giving models the ability to explain their uncertainty in human language rather than in complex math and symbols. Through this technique, models can report how reliable their decisions are and how confident they are in the accuracy of their predictions. This makes it easier for users to understand why a model’s results may be more or less reliable, and when to trust its predictions.
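As a rough illustration of how verbalized confidence can be turned back into numbers for analysis, the sketch below parses either a percentage (“90% confidence”) or a confidence phrase (“high confidence”) from a model’s output. The phrase-to-probability mapping here is an assumption for illustration, not a published standard.

```python
import re

# Assumed mapping from confidence phrases to probabilities
# (illustrative values, not taken from any paper or product).
PHRASE_TO_PROB = {
    "low confidence": 0.3,
    "medium confidence": 0.6,
    "high confidence": 0.9,
}

def parse_confidence(text):
    """Extract a probability from e.g. '90% confidence' or 'high confidence'.

    Returns a float in [0, 1], or None if no confidence is found.
    """
    m = re.search(r"(\d{1,3})\s*%", text)
    if m:
        return int(m.group(1)) / 100
    lowered = text.lower()
    for phrase, prob in PHRASE_TO_PROB.items():
        if phrase in lowered:
            return prob
    return None

print(parse_confidence("Answer: 42. 90% confidence"))   # 0.9
print(parse_confidence("Answer: 42. High confidence"))  # 0.9
print(parse_confidence("Answer: 42."))                  # None
```

Once confidences are numeric, standard reliability diagnostics (calibration curves, expected calibration error) can be applied directly to the model’s verbalized outputs.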

In addition, our method supports more sophisticated decision-making. For example, when decisions are made by a system that relies on machine learning, understanding the model’s uncertainty makes it easier to adjust the model, make it more robust, and improve the quality of future decisions.

We believe this technology can significantly advance the state of artificial intelligence and open the door to more efficient and intelligent interactions between humans and machines. Our mission is to bring agility and trust to machine-made decisions, and thereby improve the experience of people interacting with machine learning systems.

Thanks to our innovative solutions, we are helping to bring machine learning technology to new heights of accuracy and efficiency.
