ABSTRACT

Large language models such as Google's Meena, Facebook's Blender, and OpenAI's GPT-3 are remarkably good at mimicking human language because they are trained on vast amounts of text taken from the internet. Concerns about conversational AI are not new: ELIZA, a chatbot developed in the 1960s, could discuss a number of topics, including medical and mental-health issues, which raised fears that users would trust its advice even though the bot didn't know what it was talking about. A common response is to keep offensive material out of training data and to steer bots away from sensitive topics entirely, but there are several problems with this “hear no evil, speak no evil” approach. And offensive speech is only one of the problems that worried researchers at a recent workshop devoted to these issues. Among them is Rieser, who works with task-based chatbots, which help users with specific queries.