Large Language Models (LLMs) have revolutionized how we interact with information, but grounding their responses in verifiable facts remains a fundamental challenge. Without grounding, a model can hallucinate — that is, generate incorrect or misleading information. The problem is compounded by the fact that real-world knowledge is scattered across numerous sources, each with its own data formats, schemas, and APIs, making it difficult to access and integrate. Building responsible and trustworthy AI systems is a core focus of our research, and addressing hallucination in LLMs is crucial to achieving this goal.
Today we’re excited to announce DataGemma, an experimental set of open models that help address the challenge of hallucination by grounding LLMs in the vast, real-world statistical data of Google’s Data Commons. Data Commons already has a natural language interface. Inspired by the ideas of simplicity and universality, DataGemma leverages this pre-existing interface so that natural language can act as the “API”. This means one can ask things like, “What industries contribute to California jobs?” or “Are there countries in the world where forest land has increased?” and get a response back without having to write a traditional database query. By using Data Commons, we overcome the difficulty of dealing with data in a variety of schemas and APIs. In a sense, LLMs provide a single “universal” API to external data sources.
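To make the “natural language as API” idea concrete, here is a minimal sketch of what such a call could look like: a plain-English question is URL-encoded and sent to a query endpoint, with no SQL or schema knowledge required. The endpoint path and the `q` parameter below are illustrative assumptions, not the documented Data Commons interface.

```python
from urllib.parse import urlencode

# Hypothetical endpoint for a natural-language query interface;
# the real Data Commons service may use a different path and parameters.
NL_ENDPOINT = "https://datacommons.org/api/explore"

def build_nl_query_url(question: str) -> str:
    """Turn a plain-English question into a query URL.

    The natural-language question itself is the entire "API call" —
    no database query language or per-source schema is involved.
    """
    return f"{NL_ENDPOINT}?{urlencode({'q': question})}"

url = build_nl_query_url("What industries contribute to California jobs?")
print(url)
```

The point of the sketch is the contrast with traditional integration: instead of writing one query per source schema, every question takes the same shape, so natural language serves as the single, universal interface.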