Knowledge distillation helps tame COSI bot responses

Large language models (LLMs) can be powerful conversational agents, but they are prone to hallucinations (made-up responses) and toxic content. In our upcoming paper at the Taming LLMs workshop at INLG-23, we show that ChatGPT's responses can be curated and distilled into a much smaller T5 model that is far better behaved!
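
For readers curious what this looks like in practice, below is a minimal sketch of the hard-label (sequence-level) distillation recipe: fine-tune a T5 student on curated prompt–response pairs collected from the teacher. The example pairs, checkpoint, and hyperparameters here are illustrative placeholders, not the actual setup from our paper.

```python
# Sketch of sequence-level knowledge distillation: fine-tune a small T5
# "student" on curated (prompt, response) pairs from a larger teacher.
# The data and hyperparameters below are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "t5-small"  # assumption: any T5 checkpoint would work here
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical curated pairs: teacher (ChatGPT) responses vetted by humans
# to remove hallucinated or toxic content before training.
pairs = [
    ("What courses does the CS department offer?",
     "The department offers courses in algorithms, systems, and AI."),
    ("Who do I contact about lab access?",
     "Please email the lab coordinator listed on the department website."),
]

def collate(batch):
    prompts, responses = zip(*batch)
    inputs = tokenizer(list(prompts), padding=True, truncation=True,
                       return_tensors="pt")
    labels = tokenizer(list(responses), padding=True, truncation=True,
                       return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    return inputs, labels

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for epoch in range(3):
    for inputs, labels in loader:
        # Standard seq2seq cross-entropy against the curated teacher outputs
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Because the student only ever sees vetted responses, this hard-label variant doubles as a filter: undesirable teacher behavior that curation removes never reaches the T5 model's training signal.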