De cerca, nadie es normal

Chain-of-Verification (CoVe): An Approach for Reducing Hallucinations in LLM Outcomes

Posted: February 25th, 2024 | Filed under: Artificial Intelligence, Natural Language Processing

When working with the generative linguistic capabilities of LLMs and prompt engineering, one of the main challenges to tackle is the risk of hallucinations. In late 2023, a new approach to fight and reduce them in LLM outcomes was tested and published by a group of researchers from Meta AI: Chain-of-Verification (CoVe).

These researchers aimed to demonstrate the ability of language models to deliberate on the responses they give in order to correct their mistakes. In the Chain-of-Verification (CoVe) method, the model first drafts an initial response; then plans verification questions to fact-check its draft; subsequently answers those questions independently, so that the answers are not biased by other responses; and eventually generates its verified, improved response.

Setting the stage

Large Language Models (LLMs) are trained on huge corpora of text documents with billions of tokens of text. It has been shown that, as the number of model parameters increases, performance improves in accuracy, and larger models can generate more correct factual statements.

However, even the largest models can still fail, particularly on lesser-known, long-tailed distribution facts, i.e. those that occur relatively rarely in the training corpora. In those cases, the model instead generates an alternative response that is typically plausible-looking but incorrect: a hallucination.

The current wave of language-modeling research goes beyond next-word prediction and focuses on the models' ability to reason. Improved performance on reasoning tasks can be gained by encouraging language models to first generate internal thoughts or reasoning chains before responding, as well as by updating their initial response through self-critique. This is the line of research followed by the Chain-of-Verification (CoVe) method: given an initial draft response, it first plans verification questions to check its work, then systematically answers those questions, and finally produces an improved, revised response.

The Chain-of-Verification Approach

This approach assumes access to a base LLM that is capable of being prompted with general instructions in either a few-shot or zero-shot fashion. A key assumption in this method is that this language model, when suitably prompted, can both generate and execute a plan of how to verify itself in order to check its own work, and finally incorporate this analysis into an improved response.

The process entails four core steps:

1. Generate Baseline Response: Given a query, generate the response using the LLM.

2. Plan Verifications: Given both query and baseline response, generate a list of verification questions that could help to self-analyze if there are any mistakes in the original response.

3. Execute Verifications: Answer each verification question in turn, and check each answer against the original response for inconsistencies or mistakes.

4. Generate Final Verified Response: Given the discovered inconsistencies (if any), generate a revised response incorporating the verification results.
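
To make the flow concrete, below is a minimal sketch of how these four steps could be chained together in Python. The generate() helper, the function names, and the prompt wording are illustrative assumptions of mine, not the exact prompts used by the Meta AI researchers; generate() simply stands in for a call to any instruction-following LLM.

    # Minimal sketch of the four CoVe steps. generate() is a placeholder for a
    # call to whichever instruction-following LLM API is available; prompts are
    # illustrative, not the exact ones from the paper.

    def generate(prompt: str) -> str:
        """Placeholder: send the prompt to an LLM and return its completion."""
        raise NotImplementedError

    def baseline_response(query: str) -> str:
        # Step 1: draft an initial answer to the user query.
        return generate(f"Answer the following question.\nQuestion: {query}\nAnswer:")

    def plan_verifications(query: str, baseline: str) -> list[str]:
        # Step 2: ask the model for fact-checking questions about its own draft.
        plan = generate(
            "Write one verification question per line that would fact-check the "
            f"claims in this answer.\nQuestion: {query}\nAnswer: {baseline}\n"
            "Verification questions:"
        )
        return [q.strip() for q in plan.splitlines() if q.strip()]

    def execute_verifications(questions: list[str]) -> list[tuple[str, str]]:
        # Step 3 (factored variant): answer each question in its own prompt,
        # without the baseline answer in context.
        return [(q, generate(f"Answer concisely: {q}")) for q in questions]

    def final_response(query: str, baseline: str, qa: list[tuple[str, str]]) -> str:
        # Step 4: revise the draft in light of the verification answers.
        evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa)
        return generate(
            f"Original question: {query}\nDraft answer: {baseline}\n"
            f"Verification results:\n{evidence}\n"
            "Write a final answer that corrects any inconsistencies:"
        )

    def chain_of_verification(query: str) -> str:
        baseline = baseline_response(query)
        questions = plan_verifications(query, baseline)
        qa = execute_verifications(questions)
        return final_response(query, baseline, qa)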

Conditioned on the original query and the baseline response, the model is prompted to generate a series of verification questions that test the factual claims in the original baseline response. For example, if the response contains the statement “The Mexican–American War was an armed conflict between the United States and Mexico from 1846 to 1848”, then one possible verification question to check those dates could be “When did the Mexican American war start and end?” It is important to highlight that verification questions are not templated: the language model is free to phrase them in any form it wants, and they do not have to closely match the phrasing of the original text.
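
As an illustration, a planning prompt for this example might look like the snippet below; the wording and the sample questions in the comment are my own assumptions, not taken from the paper.

    # Hypothetical planning prompt for the Mexican-American War example.
    baseline = ("The Mexican-American War was an armed conflict between the "
                "United States and Mexico from 1846 to 1848.")
    planning_prompt = (
        "Here is a draft answer. Write verification questions, one per line, "
        "that would fact-check each claim it makes.\n"
        f"Draft answer: {baseline}\n"
        "Verification questions:"
    )
    # A plausible completion:
    #   When did the Mexican-American War start and end?
    #   Which countries fought in the Mexican-American War?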

Given the planned verification questions, the next step is to answer them in order to assess if any hallucinations exist: the model is used to check its own work. In their paper, the Meta AI researchers investigated several variants of verification execution: Joint, 2-Step, Factored and Factor+Revise.

  1. Joint: In the Joint method, the aforementioned planning and execution steps (2 and 3) are accomplished using a single LLM prompt, whereby the few-shot demonstrations include both the verification questions and their answers immediately after the questions.
  2. 2-Step: In this method, the verification questions are generated in a first step and answered in a second step, where, crucially, the context given to the LLM prompt contains only the questions, not the original baseline response, so the model cannot repeat those answers directly.
  3. Factored: This method consists of answering all questions independently, as separate prompts. Those prompts do not contain the original baseline response and are hence not prone to simply copying or repeating it.
  4. Factor+Revise: In this method, after answering the verification questions, the overall CoVe pipeline has to either implicitly or explicitly cross-check whether those answers indicate an inconsistency with the original response. For example, if the original baseline response contained the phrase “It followed in the wake of the 1845 U.S. annexation of Texas. . . ” and CoVe generated a verification question such as “When did Texas secede from Mexico?”, which would be answered with 1836, then an inconsistency should be detected in this step.
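
The practical difference between the joint and factored variants is mainly what ends up in the context window of each prompt. Here is a sketch of both, reusing the hypothetical generate() helper assumed above:

    def execute_joint(query: str, baseline: str) -> str:
        # Joint: planning and answering happen in one prompt, so the baseline
        # answer stays in context and its hallucinations can leak into the
        # verification answers.
        return generate(
            f"Question: {query}\nDraft answer: {baseline}\n"
            "Write verification questions and answer each of them:"
        )

    def execute_factored(questions: list[str]) -> list[tuple[str, str]]:
        # Factored: each question is answered in a separate prompt that never
        # sees the baseline response, so its claims cannot simply be copied.
        return [(q, generate(f"Answer concisely: {q}")) for q in questions]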

In the final part of this four-step process, the improved response that takes verification into account is generated. This is done by conditioning on all of the previous reasoning steps (the baseline response and the verification question-answer pairs), so that the corrections can be applied.
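
For the Factor+Revise variant, the cross-check that precedes the final rewrite can also be made explicit. A minimal sketch, under the same assumptions as the snippets above:

    def is_inconsistent(baseline: str, question: str, answer: str) -> bool:
        # Explicit cross-check: ask the model whether a verified answer
        # contradicts the original draft (as in the Texas annexation example).
        verdict = generate(
            f"Draft answer: {baseline}\n"
            f"Verification question: {question}\nVerified answer: {answer}\n"
            "Does the verified answer contradict the draft? "
            "Reply CONSISTENT or INCONSISTENT:"
        )
        return "INCONSISTENT" in verdict.upper()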

In conclusion, Chain-of-Verification (CoVe) is an approach to reduce hallucinations in a large language model by having it deliberate on its own responses and self-correct them. By breaking the verification down into a set of simpler questions, LLMs are able to answer verification questions with higher accuracy than the original query. Moreover, when answering the set of verification questions, controlling the attention of the model so that it cannot attend to its previous answers (factored CoVe) helps avoid copying the same hallucinations.

That said, CoVe does not remove hallucinations completely from the generated outcomes. While this approach gives clear improvements, the upper bound on the improvement is limited by the overall capabilities of the model, e.g. in identifying and knowing what it knows. In this regard, the use of external tools by language models (for instance, retrieval-augmented generation, RAG) to gain further information beyond what is stored in their weights would very likely grant promising results.


Large Language Models (LLMs): an Ontological Leap in AI

Posted: December 27th, 2022 | Filed under: Artificial Intelligence, Natural Language Processing

Beyond the quasi-human interaction and the practically infinite use cases it could cover, OpenAI’s ChatGPT has provided an ontological jolt of a depth that transcends the realm of AI itself.

Large language models (LLMs), such as GPT-3, YUAN 1.0, BERT, LaMDA, Wordcraft, HyperCLOVA, Megatron-Turing Natural Language Generation, or PanGu-Alpha, represent a major advance in artificial intelligence and, in particular, a step toward the goal of human-like artificial general intelligence. LLMs have been called foundational models; i.e., the infrastructure that made LLMs possible (the combination of enormously large data sets, pre-trained transformer models, and significant computing power) is likely to be the basis for the first general-purpose AI technologies.

In May 2020, OpenAI released GPT-3 (Generative Pre-trained Transformer 3), an artificial intelligence system based on deep learning techniques that can generate text. The generation is done by a neural network, each layer of which analyzes a different aspect of the samples it is provided with, e.g. meanings of words, relations between words, sentence structures, and so on. It assigns numerical values to words and then, after analyzing large amounts of text, calculates the likelihood that one particular word will follow another. Amongst other tasks, GPT-3 can write short stories, novels, reportages, scientific papers, code, and mathematical formulas. It can write in different styles and imitate the style of the text prompt. It can also answer content-based questions, i.e. it learns the content of texts and can articulate this content, and it can provide concise summaries of lengthy passages.
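
GPT-3 itself is reachable only through OpenAI’s API, but the underlying next-word-likelihood idea can be illustrated with its openly released predecessor GPT-2 and the Hugging Face transformers library. The snippet below is a minimal sketch of that idea, not a description of how OpenAI serves GPT-3.

    # Inspect next-word probabilities with GPT-2 (an open stand-in for GPT-3).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The Mexican-American War started in", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)

    probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")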

OpenAI and its peers endow machines with a structuralist equipment: a formal, logical analysis of language as a system in order to let machines participate in language. GPT-3 and other transformer-based language models stand in direct continuity with the linguist Saussure’s work: language comes into view as a logical system to which the speaker is merely incidental. These LLMs give rise to a new concept of language, implicit in which is a new understanding of human and machine. OpenAI, Google, Facebook, and Microsoft are indeed effective catalysts, triggering a disruption of the old concepts we have been living by so far: a machine with linguistic capabilities is simply a revolution.

Nonetheless, critiques of LLMs have appeared as well. The usual one is that no matter how good they may appear to be at using words, they do not have true language; drawing on the seminal work of the philologist Zipf, critics have stated that they are just technical systems made up of data, statistics, and predictions.

According to the linguist Emily Bender, “a language model is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.” Quite the opposite, we human beings are intentional subjects who can make things into objects of thought by inventing and endowing meaning.

Machine learning engineers at companies like OpenAI, Google, Facebook, or Microsoft have experimentally established a concept of language that does not need to have the human at its center. According to this new concept, language is a system organized by an internal combinatorial logic that is independent of whoever speaks it (human or machine). They have undermined one of the most deeply rooted axioms in Western philosophy: humans have what animals and machines do not have, language and logos.

Some data: on average, humans publish about seventy million posts a month on the content management platform WordPress, producing about fifty-six billion words a month, or roughly 1.8 billion words a day. GPT-3, before its scintillating launch, was already producing around 4.5 billion words a day, more than twice what humans on WordPress were producing collectively. And that is just GPT-3; there are other LLMs. We are exposed to a flood of non-human words. What will it mean to be surrounded by a multitude of non-human forms of intelligence? How can we relate to these astonishingly powerful content-generating LLMs? Do machines require semantics, or even a will, to communicate with us?

These are philosophical questions that cannot be solved with an engineering approach alone. The scope is much wider and the stakes are extremely high. Besides learning and mastering our human languages, LLMs make us reflect and question ourselves about the nature of language, knowledge, and intelligence. Large language models illustrate, for the first time in the history of AI, that language understanding can be decoupled from all the sensorial and emotional features we human beings share with each other. It seems we are gradually entering a new epoch in AI.