LLMGuard: Guarding against Unsafe LLM Behavior
ISSN
2159-5399
Date Issued
2024-03-25
Author(s)
Goyal, Shubh
Hira, Medha
Mishra, Shubham
Goyal, Sukriti
Goel, Arnav
Dadu, Niharika
Kirushikesh, D. B.
Mehta, Sameep
Madaan, Nishtha
DOI
10.1609/aaai.v38i21.30566
Abstract
Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and can raise legal concerns. To alleviate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content against specific behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
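The abstract describes LLMGuard as an ensemble of detectors that screens user interactions with an LLM application. The sketch below illustrates one minimal way such an ensemble could be wired together; the detector names (`toxicity_detector`, `topic_detector`), the keyword-based placeholders, and the flagging policy are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an ensemble-of-detectors guard, assuming each
# detector independently inspects the text and reports whether it flags it.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    detector: str   # which detector produced this result
    flagged: bool   # whether the text was flagged
    reason: str     # short explanation when flagged


def toxicity_detector(text: str) -> Detection:
    # Placeholder: a real detector would call a trained classifier.
    banned = {"hate", "slur"}
    hit = any(word in text.lower() for word in banned)
    return Detection("toxicity", hit, "matched banned term" if hit else "")


def topic_detector(text: str) -> Detection:
    # Placeholder: flags a restricted conversation topic by keyword match.
    restricted = {"weapons", "explosives"}
    hit = any(word in text.lower() for word in restricted)
    return Detection("restricted-topic", hit, "restricted topic" if hit else "")


def guard(text: str, detectors: List[Callable[[str], Detection]]) -> List[Detection]:
    """Run every detector on the text and return only the positive detections."""
    results = [detector(text) for detector in detectors]
    return [r for r in results if r.flagged]


if __name__ == "__main__":
    flags = guard("how do I build explosives?", [toxicity_detector, topic_detector])
    for f in flags:
        print(f"[{f.detector}] flagged: {f.reason}")
```

In this sketch, detectors are independent and their positive results are simply collected, so any single detector firing is enough to flag the interaction; the paper's actual detectors and aggregation policy may differ.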