EURO 2024 Copenhagen
Abstract Submission

2459. Generative AI in Decision Support Tools: Managing Hallucinations

Invited abstract in session WA-30: Optimization Tools, stream Software for Optimization.

Wednesday, 8:30-10:00
Room: 064 (building: 208)

Authors (first author is the speaker)

1. Oliver Bastert
FICO
2. Jens Schulz
FICO

Abstract

The field of generative artificial intelligence (GenAI) has seen exponential growth in research papers and publications over the past few years, and the techniques have gained traction across industries. A hype train is rolling on how best to leverage GenAI in the field of Operations Research: code generation to increase productivity, chatbots for documentation, and generation of optimization models, to name a few. Taking this further, an LLM may generate models for decision support tools and execute the code to guide complex business decisions: a CO2-optimal delivery network, an optimal production schedule, an optimized pricing table.
Large language models (LLMs) occasionally produce incorrect or misleading results, a phenomenon commonly called "hallucination". We demonstrate basic examples where a slight adaptation of a question (prompt) turns the LLM response into a false statement. Beyond such basic inaccuracies, there are also concerns around data exposure, data leakage, privacy violations, copyright violations, biased answers, dangerous or unethical usage, inappropriate language, and malicious code. Best practices need to be followed to responsibly apply GenAI in decision support tools!
Our examples demonstrate the importance of balancing innovation with risk mitigation when leveraging GenAI for critical decision-making tasks, advocating for a holistic approach that combines technological advancements with ethical considerations and human oversight.

Status: accepted
