June 3, 2025  5:30 PM – 6:15 PM GMT+0
Data, AI and automation

Mitigating Hallucinations in AI

Generative AI hallucinations are still a concern. What does it take to make a generative LLM solution more accurate? LexisNexis, one of the largest repositories of authoritative legal and business information, has millions of users who rely on facts being correct—whether they're researching a company, assessing the value of a business transaction, or making other critical decisions.
 
In this session, Snehit Cherian, CTO of Global Nexis Solutions, explains how his team is developing tools that researchers and business users can trust. He'll outline the four-pronged hallucination mitigation strategy that Nexis+ AI, the company's generative AI research solution, uses to improve answers and build user trust.
Snehit Cherian
CTO
Global Nexis Solutions