Thermometer, a new calibration technique tailored for large language models, helps keep LLMs from being overconfident or underconfident in their predictions. The technique aims to help users know when a model's answers should be trusted.
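To make the idea of calibration concrete, here is a minimal Python sketch of classical temperature scaling, the standard calibration primitive that Thermometer-style methods relate to (an illustrative assumption: the function names, toy data, and grid search below are ours, not the researchers' implementation). A single scalar temperature T rescales a model's logits before the softmax: T > 1 softens overconfident probabilities, while T < 1 sharpens underconfident ones.

```python
# Minimal sketch of temperature scaling for calibration (illustrative only).
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, temperature):
    """Average negative log-likelihood of the true labels."""
    probs = softmax(logits, temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature minimizing NLL on held-out data.

    T > 1 softens overconfident predictions; T < 1 sharpens
    underconfident ones.
    """
    losses = [nll(logits, labels, t) for t in grid]
    return grid[int(np.argmin(losses))]

# Toy example: an artificially overconfident model gets softened by T > 1.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=200)
logits = rng.normal(size=(200, 4))
logits[np.arange(200), labels] += 2.0   # mostly-correct classifier
logits *= 3.0                           # inflate logits: overconfidence
T = fit_temperature(logits, labels)
print(f"fitted temperature: {T:.2f}")
print("mean top-class confidence before:", softmax(logits).max(axis=1).mean().round(3))
print("mean top-class confidence after: ", softmax(logits, T).max(axis=1).mean().round(3))
```

On this toy data the fitted temperature should come out well above 1, pulling the model's average confidence back down toward its actual accuracy; the key limitation, which motivates work like Thermometer, is that fitting T this way requires labeled held-out data for each task.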