De cerca, nadie es normal

Chain-of-Verification (CoVe): An Approach for Reducing Hallucinations in LLM Outcomes

Posted: February 25th, 2024 | Filed under: Artificial Intelligence, Natural Language Processing

When working with LLM generative linguistic capabilities and prompt engineering, one of the main challenges to tackle is the risk of hallucinations. In the fourth quarter of 2023, a group of researchers from Meta AI tested and published a new approach to fight and reduce hallucinations in LLM outputs: Chain-of-Verification (CoVe).

What these researchers aimed to prove was the ability of language models to deliberate on the responses they give in order to correct their mistakes. In the Chain-of-Verification (CoVe) method the model first drafts an initial response; then plans verification questions to fact-check its draft; subsequently answers those questions independently, so that the answers are not biased by other responses; and eventually generates its verified, improved response.

Setting up the stage

Large Language Models (LLMs) are trained on huge corpora of text documents with billions of tokens of text. It has been shown that, as the number of model parameters increases, performance improves in accuracy, and larger models can generate more correct factual statements.

However, even the largest models can still fail, particularly on lesser-known long-tailed distribution facts, i.e. those that occur relatively rarely in the training corpora. In those cases where the model would be incorrect, it instead generates an alternative response which is typically plausible-looking but wrong: a hallucination.

The current wave of language modeling research goes beyond next-word prediction and focuses on the models' ability to reason. Improved performance in reasoning tasks can be gained by encouraging language models to first generate internal thoughts or reasoning chains before responding, as well as to update their initial response through self-critique. This is the line of research followed by the Chain-of-Verification (CoVe) method: given an initial draft response, it first plans verification questions to check its work, and then systematically answers those questions in order to finally produce an improved, revised response.

The Chain-of-Verification Approach

This approach assumes access to a base LLM that is capable of being prompted with general instructions in either a few-shot or zero-shot fashion. A key assumption in this method is that this language model, when suitably prompted, can both generate and execute a plan of how to verify itself in order to check its own work, and finally incorporate this analysis into an improved response.

The process entails four core steps:

1. Generate Baseline Response: Given a query, generate the response using the LLM.

2. Plan Verifications: Given both query and baseline response, generate a list of verification questions that could help to self-analyze if there are any mistakes in the original response.

3. Execute Verifications: Answer each verification question in turn, and hence check the answer against the original response to check for inconsistencies or mistakes.

4. Generate Final Verified Response: Given the discovered inconsistencies (if any), generate a revised response incorporating the verification results.

Conditioned on the original query and the baseline response, the model is prompted to generate a series of verification questions that test the factual claims in the original baseline response. For example, if the response contains the statement “The Mexican–American War was an armed conflict between the United States and Mexico from 1846 to 1848”, then one possible verification question to check those dates could be “When did the Mexican–American War start and end?” It is important to highlight that verification questions are not templated: the language model is free to phrase them in any form it wants, and they do not have to closely match the phrasing of the original text.
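To make the planning step concrete, a prompt for it might look like the snippet below. The wording and structure are a hypothetical illustration only; in the paper this step is driven by few-shot demonstrations rather than a fixed template.

```python
# Hypothetical planning prompt for the example above. CoVe leaves the
# model free to phrase the verification questions however it likes.
baseline = ("The Mexican-American War was an armed conflict between "
            "the United States and Mexico from 1846 to 1848.")

plan_prompt = (
    "Here is a draft answer. Write verification questions, one per line, "
    "that would fact-check each of its claims.\n\n"
    f"Draft answer: {baseline}\n\n"
    "Verification questions:"
)
# A plausible model output:
#   When did the Mexican-American War start and end?
#   Which countries fought in the Mexican-American War?
```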

Given the planned verification questions, the next step is to answer them in order to assess if any hallucinations exist: the model is used to check its own work. In their paper, the Meta AI researchers investigated several variants of verification execution: Joint, 2-Step, Factored and Factor+Revise.

  1. Joint: In the Joint method, the aforementioned planning and execution steps (2 and 3) are accomplished by using a single LLM prompt, whereby the few-shot demonstrations include both verification questions and their answers immediately after the questions.
  2. 2-Step: In this method, the verification questions are generated in a first step and answered in a second step, where crucially the context given to the LLM prompt contains only the questions, and not the original baseline response, so the model cannot repeat those answers directly.
  3. Factored: This method consists of answering all questions independently, as separate prompts. Those prompts do not contain the original baseline response and are hence not prone to simply copying or repeating it.
  4. Factor+Revise: In this method, after answering the verification questions, the overall CoVe pipeline has to either implicitly or explicitly cross-check whether those answers indicate an inconsistency with the original response. For example, if the original baseline response contained the phrase “It followed in the wake of the 1845 U.S. annexation of Texas…” and CoVe generated a verification question such as “When did Texas secede from Mexico?”, which would be answered with 1836, then an inconsistency should be detected by this step.

In the final part of this four-step process, the improved response that takes verification into account is generated. This step conditions on all of the previous reasoning, i.e. the baseline response and the verification question-answer pairs, so that the corrections can take place.
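Putting the four steps together, a minimal sketch of the pipeline might look as follows, using the factored execution variant. The prompts, the parsing logic, and the `llm` callable are illustrative placeholders, not the exact ones from the paper.

```python
# A minimal sketch of the CoVe pipeline (factored execution variant).
# `llm` stands for any text-in/text-out model call supplied by the caller.
from typing import Callable

def chain_of_verification(query: str, llm: Callable[[str], str]) -> str:
    # 1. Generate Baseline Response
    baseline = llm(f"Answer the following question.\nQ: {query}\nA:")

    # 2. Plan Verifications: questions that fact-check the draft's claims
    plan = llm(
        "Write one verification question per line to fact-check each "
        f"claim in this answer.\nQuestion: {query}\nAnswer: {baseline}\n"
        "Verification questions:"
    )
    questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Execute Verifications (factored): each question is answered in
    #    its own prompt, without the baseline response in context, so the
    #    model cannot simply copy a hallucinated claim
    answers = [(q, llm(f"Q: {q}\nA:")) for q in questions]

    # 4. Generate Final Verified Response, conditioned on the baseline
    #    and the verification question-answer pairs
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in answers)
    return llm(
        f"Original question: {query}\nDraft answer: {baseline}\n"
        f"Verification Q&A:\n{evidence}\n"
        "Considering the verification results, write a corrected final answer:"
    )
```

Any model client can be dropped in as `llm`; the Factor+Revise variant would add an explicit cross-checking prompt between steps 3 and 4.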

In conclusion, Chain-of-Verification (CoVe) is an approach to reduce hallucinations in a large language model by making it deliberate on its own responses and self-correct them. LLMs are able to answer verification questions with higher accuracy than the original query, because the verification is broken down into a set of simpler questions. Moreover, when answering the set of verification questions, controlling the attention of the model so that it cannot attend to its previous answers (factored CoVe) helps alleviate copying the same hallucinations.

That said, CoVe does not remove hallucinations completely from the generated outputs. While this approach gives clear improvements, the upper bound on the improvement is limited by the overall capabilities of the model, e.g. in identifying and knowing what it knows. In this regard, the use of external tools by language models -for instance, retrieval-augmented generation (RAG)- to gain further information beyond what is stored in their weights would very likely grant promising results.


On Natural Language Processing, Game Theory, and Diplomacy

Posted: April 11th, 2023 | Filed under: Artificial Intelligence

Beyond GPT in its different evolutions, there are other LLMs -as stated in Large Language Models (LLMs): an Ontological Leap in AI- developed with a perfectly defined industry focus in mind. This is the case of CICERO.

In November 2022, the Meta Fundamental AI Research Diplomacy Team (FAIR) and researchers from other academic institutions published the seminal paper Human-level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning, laying the foundations for CICERO. 

CICERO is an AI agent that can use language to negotiate, persuade, and work with people to achieve strategic goals, similar to the way humans do. It was the first AI to achieve human-level performance in the strategy game No-press Diplomacy.

No-press Diplomacy is a complex strategy game, involving both cooperation and competition, that has served as a benchmark for multi-agent AI research. It is a 7-player zero-sum cooperative/competitive board game, featuring simultaneous moves and a heavy emphasis on negotiation and coordination. In the game, a map of Europe is divided into 75 provinces. 34 of these provinces contain supply centers (SCs), and the goal of the game is for a player to control a majority (18) of the SCs. Each player begins the game controlling three or four supply centers and an equal number of units. Importantly, all actions occur simultaneously: players write down their orders and then reveal them at the same time. This makes Diplomacy an imperfect-information game in which an optimal policy may need to be stochastic in order to prevent predictability.

Diplomacy is a game about people rather than pieces. It is designed in such a way that cooperation with other players is almost essential to achieve victory, even though only one player can ultimately win. It requires players to master the art of understanding other people’s motivations and perspectives; to make complex plans and adjust strategies; and then to use natural language to reach agreements with other people and to persuade them to form partnerships and alliances.

How Was CICERO Developed by FAIR?

In two-player zero-sum (2p0s) settings, principled self-play algorithms ensure that a player will not lose in expectation regardless of the opponent’s strategy, as shown by John von Neumann in 1928 in his work Zur Theorie der Gesellschaftsspiele.
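Von Neumann's minimax theorem can be stated in one line. For any finite two-player zero-sum game with payoff matrix A and mixed strategies x and y,

$$\max_{x} \min_{y} \; x^{\top} A\, y \;=\; \min_{y} \max_{x} \; x^{\top} A\, y,$$

so a player following the maximin strategy secures the value of the game in expectation, whatever the opponent does.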

Theoretically, any finite 2p0s game -such as chess, go, or poker- can be solved via self-play given sufficient computing power and memory. However, in games involving cooperation, self-play alone no longer guarantees good performance when playing with humans, even with infinite computing power and memory. The clearest example of this is language. A self-play agent trained from scratch without human data in a cooperative game involving free-form communication channels would almost certainly not converge to using English, for instance, as the medium of communication. Owing to this, the aforementioned researchers developed a self-play reinforcement learning algorithm -named RL-DiL-piKL- that provides a model of human play while simultaneously training an agent that responds well to this human model. RL-DiL-piKL was used to train an agent named Diplodocus. In a 200-game No-press Diplomacy tournament involving 62 human participants, two Diplodocus agents achieved a higher average score than all other participants who played more than two games, and ranked first and third according to an Elo rating system -a method for calculating the relative skill levels of players in zero-sum games.
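For reference, the Elo system mentioned above updates ratings after each game from the gap between the actual and the expected result. A minimal sketch follows; the logistic expected-score curve is the conventional one, and the K-factor of 32 is a common default assumed here, not the tournament's actual parameter.

```python
# Minimal Elo update: expected score via the conventional logistic
# curve, and a K-factor of 32 (a common default, assumed here).

def expected_score(rating_a: float, rating_b: float) -> float:
    # Modeled probability that player A beats player B
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, score_a: float,
           k: float = 32.0) -> tuple[float, float]:
    # score_a: 1.0 for a win, 0.5 for a draw, 0.0 for a loss
    delta = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# Example: a 1600-rated player beats a 1500-rated player
print(update(1600, 1500, 1.0))  # (1611.52, 1488.48): ratings converge
```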

Which Are the Implications of this Breakthrough?

Despite being almost silenced by the advent of GPT in its different versions, this is firstly an astonishing advance in the field of negotiation, and more particularly in the realm of diplomacy. Never before has an AI model performed so brilliantly in a fuzzy environment, seasoned by information asymmetries, common-sense reasoning, ambiguous natural language, and statistical modeling. Secondly, and more importantly, this is further evidence that we are in a completely new AI era in which machines can scale, and are scaling, knowledge.

These LLMs have caused a deep shift: we went from attempting to encode human-distilled insights into machines to delegating the learning process itself to machines. AI is ushering in a world in which decisions are made in three primary ways: by humans (which is familiar), by machines (which is becoming familiar), and by collaboration between humans and machines (which is not only unfamiliar but also unprecedented). We will begin to give AI fewer specific instructions about how exactly to achieve the goals we assign it. Much more frequently, we will present AI with ambiguous goals and ask: “How, based on your conclusions, should we proceed?”

AI promises to transform all realms of human experience. And the core of its transformations will ultimately occur at the philosophical level, transforming how humans understand reality and our roles within it. In an age in which machines increasingly perform tasks only humans used to be capable of, what, then, will constitute our identity as human beings?

With the rise of AI, the definition of the human role, human aspirations, and human fulfillment will change. For humans accustomed to a monopoly on complex intelligence, AI will challenge self-perception. To make sense of our place in this world, our emphasis may need to shift from the centrality of human reason to the centrality of human dignity and autonomy. Human-AI collaboration does not occur between peers. Our task will be to understand the transformations that AI brings to human experience, the challenges it presents to human identity, and which aspects of these developments require regulation or counterbalancing by other human commitments.

The AI revolution is here to stay. Unless we develop new concepts to explain, interpret, and organize its consequent transformations, we will be unprepared to navigate them. We must rely on our most solid resources -reason, moral and ethical values, tradition…- to adapt our relationship with reality so it keeps on being human.


Large Language Models (LLMs): an Ontological Leap in AI

Posted: December 27th, 2022 | Filed under: Artificial Intelligence, Natural Language Processing

Beyond the quasi-human interaction and the practically infinite use cases it could cover, OpenAI’s ChatGPT has provided an ontological jolt of a depth that transcends the realm of AI itself.

Large language models (LLMs), such as GPT-3, YUAN 1.0, BERT, LaMDA, Wordcraft, HyperCLOVA, Megatron-Turing Natural Language Generation, or PanGu-Alpha represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence. LLMs have been called foundational models; i.e., the infrastructure that made LLMs possible –the combination of enormously large data sets, pre-trained transformer models, and the requirement of significant computing power– is likely to be the basis for the first general purpose AI technologies.

In May 2020, OpenAI released GPT-3 (Generative Pre-trained Transformer 3), an artificial intelligence system based on deep learning techniques that can generate text. The analysis is done by a neural network, each layer of which analyzes a different aspect of the samples it is provided with; e.g., meanings of words, relations of words, sentence structures, and so on. It assigns arbitrary numerical values to words and then, after analyzing large amounts of text, calculates the likelihood that one particular word will follow another. Amongst other tasks, GPT-3 can write short stories, novels, reportages, scientific papers, code, and mathematical formulas. It can write in different styles and imitate the style of the text prompt. It can also answer content-based questions; i.e., it learns the content of texts and can articulate it. And it can provide concise summaries of lengthy passages as well.
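That idea of calculating the likelihood that one particular word will follow another can be illustrated with a toy bigram model. This is a deliberate simplification for intuition only: GPT-3 is a transformer conditioned on long contexts, not a bigram counter.

```python
# Toy next-word likelihood from bigram counts. GPT-3 does something far
# more powerful, but the underlying probabilistic idea is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word: str) -> dict:
    # Relative frequency of each word observed right after `word`
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```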

OpenAI and the like endow machines with structuralist equipment: a formal, logical analysis of language as a system, in order to let machines participate in language. GPT-3 and other transformer-based language models stand in direct continuity with the work of the linguist Saussure: language comes into view as a logical system to which the speaker is merely incidental. These LLMs give rise to a new concept of language, implicit in which is a new understanding of human and machine. OpenAI, Google, Facebook, or Microsoft are indeed effectively catalysts, triggering a disruption of the old concepts we have been living by so far: a machine with linguistic capabilities is simply a revolution.

Nonetheless, critiques have appeared against LLMs as well. The usual one is that no matter how good they may appear to be at using words, they do not have true language; drawing on the seminal work of the philologist Zipf, critics have stated they are just technical systems made up of data, statistics, and predictions.

According to the linguist Emily Bender, “a language model is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”: a stochastic parrot. Quite the opposite, we, human beings, are intentional subjects who can make things into objects of thought by inventing and endowing meaning.

Machine learning engineers in companies like OpenAI, Google, Facebook, or Microsoft have experimentally established a concept of language at the center of which the human no longer needs to be. According to this new concept, language is a system organized by an internal combinatorial logic that is independent from whomever speaks (human or machine). They have undermined one of the most deeply rooted axioms in Western philosophy: humans have what animals and machines do not have, language and logos.

Some data: monthly, on average, humans publish about seventy million posts on the content management platform WordPress; that is about fifty-six billion words a month, or 1.8 billion words a day. GPT-3 -before its scintillating launch- was producing around 4.5 billion words a day, more than twice what humans on WordPress were doing collectively. And that is just GPT-3; there are other LLMs. We are exposed to a flood of non-human words. What will it mean to be surrounded by a multitude of non-human forms of intelligence? How can we relate to these astonishingly powerful content-generator LLMs? Do machines require semantics or even a will to communicate with us?

These are philosophical questions that cannot be solved with just an engineering approach. The scope is much wider and the stakes are extremely high. Besides mastering and learning our human languages, LLMs can make us reflect on and question the nature of language, knowledge, and intelligence. Large language models illustrate, for the first time in the history of AI, that language understanding can be decoupled from all the sensorial and emotional features we, human beings, share with each other. It seems we are gradually entering a new epoch in AI.


Language is a Rum Thing

Posted: September 29th, 2020 | Filed under: Artificial Intelligence

Zipf and His Word Frequency Distribution Law

Although it might sound surprising to some data scientists, the supposedly successful use of machine learning techniques to tackle the problem of natural language processing is based on the work of a US philologist called George Kingsley Zipf (1902-1950). Zipf analyzed the frequency distribution of certain terms and words in several languages, enunciating the law named after him in the 1930s and 1940s. Ah, these crazy linguists!

One of the most puzzling facts about human language is also one of the most basic: Words occur according to a famously systematic frequency distribution such that there are few very high-frequency words that account for most of the tokens in text (e.g., “a,” “the,” “I,” etc.) and many low-frequency words (e.g., “accordion,” “catamaran,” “jeopardize”). What is striking is that the distribution is mathematically simple, roughly obeying a power law known as Zipf’s law: The rth most frequent word has a frequency f(r) that scales according to

f(r) ∝ 1/r^α

for α ≈ 1 (Zipf, 1932, 1936) (1). In this equation, r is called the frequency rank of a word, and f(r) is its frequency in a natural corpus. Since the actual observed frequency will depend on the size of the corpus examined, this law states frequencies proportionally: the most frequent word (r = 1) has a frequency proportional to 1, the second most frequent word (r = 2) has a frequency proportional to 1/2^α, the third most frequent word has a frequency proportional to 1/3^α, and so forth.
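A quick way to see the law in practice is to rank the word counts of any sizable text and compare them with the prediction f(r) = f(1)/r, i.e. α = 1. The sketch below does exactly that; the corpus path is a placeholder for whatever text file is at hand.

```python
# Rank-frequency check of Zipf's law: compare observed word frequencies
# against f(r) = f(1) / r, i.e. alpha = 1, normalized by the top rank.
import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as fh:  # placeholder path
    words = re.findall(r"[a-z']+", fh.read().lower())

ranked = Counter(words).most_common(10)
top_freq = ranked[0][1]

for rank, (word, freq) in enumerate(ranked, start=1):
    predicted = top_freq / rank  # Zipf prediction with alpha = 1
    print(f"{rank:2d} {word:12s} observed={freq:6d} predicted={predicted:8.1f}")
```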

From Zipf’s standpoint as well, the length of a word, far from being a random matter, is closely related to the frequency of its usage -the greater the frequency, the shorter the word. The more complex any speech-element is phonetically, the less frequently it occurs. In English, the most frequent word in a sample will occur on average once in approximately every 10 words; the second most frequent word once in every 20 words; the third most frequent word once in every 30 words; in brief, the distribution of words in English approximates with remarkable precision a harmonic series. Similarly, one finds in English (or Latin or Chinese) the following striking correlation: if the number of different words occurring once in a given sample is taken as x, the number of different words occurring twice, three times, four times, … n times in the same sample is respectively 1/2^2, 1/3^2, 1/4^2, … 1/n^2 of x, up to, though not including, the few most frequently used words. That is, an unmistakable progression according to the inverse square is found, valid for over 95% of all the different words used in the sample.

This evidence points to the existence of a fundamental condition of equilibrium between the form and function of speech-habits, or speech-patterns, in any language. The impulse to preserve or restore this condition of equilibrium is the underlying cause of linguistic change. All speech-elements or language-patterns are impelled and directed in their behavior by a fundamental law of economy, in which there is the desire to maintain an equilibrium between form and behavior, always according to Zipf.

Nonetheless, if our languages are pure statistical distributions, what happens with meanings? Is there a multiplicative stochastic process at play? Absolutely not! We select and arrange our words according to their meanings with little or no conscious reference to the relative frequency of occurrence of those words in the stream of speech, yet we find that words thus selected and arranged have a frequency distribution of great orderliness which for a large portion of the curve seems to be constant for language in general. The question arises as to the nature of the meaning or meanings which leads automatically to this orderly frequency distribution.

A study of language which totally disregards all questions of meaning, emotion, and culture is certainly incomplete, even though these refer to the most elusive of mental phenomena.

Daniel Everett and Language as a Cultural Tool          

According to the linguist Everett, language is an artifact, a cultural tool, an instrument created by hominids to satisfy their social need of meaning and community (Everett, 2013)(2).

Linguists, psychologists, anthropologists, biologists, and philosophers tend to divide into those who believe that human biology is endowed with a language-dedicated genetic program and those who believe instead that human biology and the nature of the world provide general mechanisms that allow us the flexibility to acquire a large array of general skills and abilities, of which language is but one. The former often refer to a “language instinct” or a “universal grammar” (Chomsky dixit) shared by all humans. The latter talk about learning language as we learn many other skills, such as cooking, chess, or carpentry. The latter proposal takes seriously the idea that the function of language shapes its form. It recognizes the linguistic importance of the utilitarian forces radiating from the human necessity to communicate in order to survive. Language emerges as the nexus of our biological endowment and our environmental existence.

According to Chomsky, meaning is secondary to grammar, and all we need to understand of a formal grammar is that if we follow the rules and combine the symbols properly, then the sentences generated are grammatical (does it sound familiar to the ML approach to NLP?). Nonetheless, this is not accurate: beings with just a grammar would not have language. In fact, we know that meaning drives most, if not all, of the grammar. Meaning would have to appear at least as early in the evolution of language as grammar.

Forms in language vary radically and thus serve to remind us that humans are the only species with a communication system whose main characteristic is variation and not homogeneity. Humans do not merely produce fixed calls like vervet monkeys; they fit their messages to specific contexts and intentions.

People organize their words by related meanings -semantic fields-, by sound structure, by most common meanings, and so on. Even our verb structures are constrained by our cultures and what these cultures consider to be an “effable event”. For instance, the Pirahãs -an indigenous people of the Amazon Rainforest in Brazil- do not talk about the distant past or the far-off future because a cultural value of theirs is to talk only about the present or the short-term past or future.

Can grammatical structure itself be shaped by culture? Let’s consider another example: researchers claim there is no verb “to give” in Amele, mainly for cultural reasons. Giving is so basic to Amele culture that the language manifests a tendency to allow the “experiential basicness” of giving to correspond to a “more basic kind of linguistic form”, that is, zero. No verb is needed for this fundamental concept of Amele culture.

Language has been shaped in its very foundation by our socio-cultural needs. Languages fit their cultural niches and take on the properties required of them in their environments. That is one reason that languages change over time -they evolve to fit new cultural circumstances.

Our language is shaped to facilitate communication. There is very little evidence for arbitrariness in the design of grammars. People both overinterpret and under-interpret what they hear based on cultural expectations built into their communication patterns. We learn to predict, by means of what some researchers think is a sophisticated and unconscious computation of probabilities, what a speaker is likely to say next, once we learn that the relationships amongst words are contingent, i.e. what the likelihood of one word following another is. Crucial for language acquisition is what we call the “interactional instinct”. This instinct is an innate drive amongst human infants to interact with conspecific caregivers. Babies and children learn from their parents’ faces what is in their parents’ minds, and they adjust their own inner mental lives accordingly. Rather than learning algebraic procedures for combining symbols, children instead seem to learn linguistic categories and constructions as patterns of meaningful symbols.

All humans belong to a culture and share values and knowledge with other members of their cultures. With the current approach, an AI/NLP model will never be able to learn culture. Therefore, it can never learn a language stricto sensu, though it can learn lists of grammatical rules and lexical combinations.

Without culture, no background; without background, no signs; without signs, no stories and no language.

Recapping, it seems NLP keeps on being the last challenge for AI practitioners and aficionados. Blending the mathematical-statistical and the symbolic approaches is paramount to find a solution to this conundrum. I’m positive the moment we succeed, we’ll be closer to strong AI… Still a long way ahead.

“Die Grenzen meiner Sprache sind die Grenzen meiner Welt” (“The limits of my language are the limits of my world”). Ludwig Wittgenstein (1889-1951).

Bibliography:

(1) Zipf, George Kingsley. The Psycho-Biology of Language: An Introduction to Dynamic Philology. 1936.

Zipf, George Kingsley. Selected Studies of the Principle of Relative Frequency in Language. 1932.

(2) Everett, Daniel. Language: The Cultural Tool. Profile Books, 2013.


Winograd Schema Challenge: A Step beyond the Turing Test

Posted: March 17th, 2016 | Filed under: Artificial Intelligence

The well-known Turing test was first proposed by Alan Turing (1950) as a practical way to defuse what seemed to him to be a pointless argument about whether or not machines could think. He put forward that, instead of formulating such a vague question, we should ask whether a machine would be capable of producing behavior that we would say required thought in people. The sort of behavior he had in mind was participating in a natural conversation in English over a teletype in what he called the Imitation Game. The idea, roughly, was the following: if an interrogator was unable to tell after a long, free flowing and unrestricted conversation with a machine whether s/he was dealing with a person or a machine, then we should be prepared to say that the machine was thinking. The Turing test does have some troubling aspects though.
