
Emerging trends: a gentle introduction to RAG

Published online by Cambridge University Press:  20 September 2024

Kenneth Ward Church*
Affiliation:
Northeastern University, Boston, MA, USA
Jiameng Sun
Affiliation:
Northeastern University, Boston, MA, USA
Richard Yue
Affiliation:
Northeastern University, Boston, MA, USA
Peter Vickers
Affiliation:
Northeastern University, Boston, MA, USA
Walid Saba
Affiliation:
Northeastern University, Boston, MA, USA
Raman Chandrasekar
Affiliation:
Northeastern University, Boston, MA, USA
*
Corresponding author: Kenneth Ward Church; Email: k.church@northeastern.edu

Abstract

Retrieval-augmented generation (RAG) adds a simple but powerful feature to chatbots: the ability to upload files just-in-time. Chatbots are trained on large quantities of public data. Uploading files just-in-time makes it possible to reduce hallucinations by filling in gaps in the knowledge base that the public training data does not cover, such as private data and recent events. For example, in a customer service scenario, RAG lets us upload your private bill so that the bot can discuss questions about your bill, as opposed to generic FAQ questions about bills in general. This tutorial will show how to upload files and generate responses to prompts; see https://github.com/kwchurch/RAG for multiple solutions based on tools from OpenAI, LangChain, HuggingFace transformers and VecML.

Type: Emerging Trends
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

This tutorial will start with simple chatbots, showing how to respond to prompts. These examples will introduce some opportunities for improvement such as hallucinations and timeliness. Retrieval-augmented generation (RAG) addresses some of these opportunities by adding the ability to upload files just-in-time.

2. Simple chat with OpenAI

This section will start with simple chats. Later, we will discuss RAGs that upload files just-in-time. Here is a simple example of a chat with OpenAI (and no uploading of files):Footnote a

  • Input to ChatGPT:

    system: You are a helpful assistant.

    user: Who won the world series in 2020?

    assistant: The Los Angeles Dodgers won the World Series in 2020.

    user: Where was it played?

  • Response from ChatGPT:

    The 2020 World Series was played at Globe Life Field in Arlington, Texas.

Details on how to run this example and other examples in this paper are posted on GitHub.Footnote b Unfortunately, it may be difficult to replicate the exact outputs in this paper because chatbots are not always repeatable/deterministic. There are also concerns about ChatGPT’s behavior changing over time (Chen, Zaharia, and Zou 2023).

The input to ChatGPT is a sequence of message objects. The message object contains two required fields: role and content. Examples of roles are system, user, and assistant.
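To make the message-object interface concrete, here is a minimal sketch of the chat above using the OpenAI Python client. This is an illustration, not the exact script from the GitHub; it assumes the openai package is installed, OPENAI_API_KEY is set in the environment, and the model name below is an arbitrary choice:

    # Minimal sketch of the chat above (not the exact script from the GitHub).
    # Assumes: pip install openai, and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=messages,
    )
    print(response.choices[0].message.content)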

2.1 Guard rails

The following example is like the example above, except that we ask ChatGPT to predict the future:

  • Input to ChatGPT:

    system: You are a helpful assistant.

    user: Who won the world series in 2035?

  • Response from ChatGPT:

    I’m sorry, but I am unable to provide real-time information or details about future events as I exist in the present. You may want to check the latest sports news or websites for updates on the World Series winners in 2035.

ChatGPT is constructed with guard rails designed to reduce “hallucinations.” “Hallucination” is a technical term for “making stuff up.” Chatbots have a tendency to spread misinformation. Guard rails help in easy cases like this, though it is not hard for an adversary to find small modifications to input prompts that will crash through the guard rails.

2.2 Timeliness

The next example replaces “2035” with “most recent.” The point of this modification is to make it clear that we are using an old model that was trained in 2021:

  • Input to ChatGPT:

    system: You are a helpful assistant.

    user: Who won the most recent world series?

  • Response from ChatGPT:

    The Atlanta Braves won the most recent World Series in 2021.

We used an old model because OpenAI charges more for more recent models. But even so, training large language models (LLMs) is expensive, so it is prohibitively expensive for OpenAI to continuously update their models. The chat above shows that phrases like “most recent” call out differences between training time and inference time. According to Google and Wikipedia, the Texas Rangers won the most recent World Series (in 2023), but ChatGPT-3 says the Atlanta Braves won (in 2021). It is easier to keep Google and Wikipedia up to date than ChatGPT. In Subsection 3.2, we will show how RAG can work around the need to continuously update models.

2.3 Hallucinations when asked to discuss content that goes beyond training data

The next example was designed to elicit misinformation. The prompt presupposes that we have been talking about an (unspecified) paper on psycholinguistics. One might hope there would be a guard rail that would catch this prompt and respond, “I’m sorry, but I don’t know what paper you are referring to.” As we will see below, there is a guard rail that catches similar prompts. Unfortunately, in this case, the response below starts with a pivot to a different question: “What is Psycholinguistics?” After that pivot, the response continues by hallucinating what might be in the unspecified paper.

  • Input to ChatGPT:

    system: You are a helpful assistant.

    user: Please summarize the paper on psycholinguistics.

  • Response from ChatGPT:

    Sure! Psycholinguistics is a field that studies the psychological processes involved in language comprehension and production. The paper discusses various topics in psycholinguistics, such as how language is processed in the brain, how we acquire language skills, and how language is used in communication. It also explores issues related to bilingualism, language development in children, and the impact of language disorders on individuals. Overall, the paper provides insights into how the human mind processes and understands language.

Robustness is a challenge for guard rails. A small change to the prompt can have a large impact. Consider the following chat where we replace a period (in the chat above) with a question mark (in the chat below). The guard rails are more effective with the question mark, at least in this particular instance:

  • Input to ChatGPT:

    system: You are a helpful assistant.

    user: Please summarize the paper on psycholinguistics?

  • Response from ChatGPT:

    I would need more specific information to provide an accurate answer. There are numerous papers on the topic of psycholinguistics written by various authors. If you have a specific paper or author in mind, please provide more details so I can assist you better.

More generally, hallucinations are a challenge for LLMs when asked to discuss content that goes beyond training data. A lawyer once found himself in trouble when he trusted ChatGPT to help write a brief.Footnote c

There was just one hitch: No one – not the airline’s lawyers, not even the judge himself – could find the decisions or the quotations cited and summarized in the brief.

A study on legal hallucinations from Yale and Stanford (Dahl et al. 2024) reports that the problem is “alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2, when these models are asked specific, verifiable questions about random federal court cases.”

3. Opportunities for RAG

The examples above illustrate three opportunities for improvement:

  1. More robust guard rails: Too easy to crash through existing guard rails (Nasr et al. 2023),

  2. Pivots/Hallucinations: Chatbots tend to pivot and/or hallucinate when asked to discuss content that goes beyond the training set such as an unspecified paper (Subsection 2.3), and

  3. Timeliness: Training time $\neq$ inference time (Subsection 2.2)

Critics of large language models (LLMs) and chatbots bring up many classic topics in Philosophy of Language, Artificial Intelligence and Creative Writing such as:Footnote d

  1. Hallucinations and Misinformation: Fact-checking, Grounding, Attribution

  2. Knowledge Acquisition: Timeliness, PlagiarismFootnote e (Nasr et al. 2023)

  3. Knowledge Representation: Lexical Semantics, Ontologies, World Knowledge, Semantics of Time and Space, Technical Terminology, Domain Specific Knowledge

  4. Reference: Co-reference, Given/New, Use/Mention,Footnote f Intensional Logic,Footnote g Possible Worlds,Footnote h BeliefFootnote i

  5. Discourse Structure: Grice’s Maxims (Grice 1975), Perspective, Character Development,Footnote j Plot (and Plot Twists)

  6. Problem Solving: Planning (Kautz and Allen 1986), Common Sense Reasoning (McCarthy 1969), Problem Decomposition

  7. Explanation (Ribeiro, Singh, and Guestrin 2016)

RAG addresses some of these opportunities by adding a feature to upload documents just-in-time. Another motivation for uploading documents just-in-time involves private data. LLMs are trained on massive amounts of public data, but most documents are private. Suppose, for example, we are building a chatbot for customer support. Users will have questions about their bills. If we can upload their private bills just-in-time, then we can address their questions in the context of their bill. Otherwise, the chatbot can do little more than discuss generic FAQs about billing questions in general, but not specifically about their bill.

3.1 Recipe

Four implementations of RAG are posted on our GitHub: (1) src/OpenAI/Rag.py, (2) src/LangChain/Rag.py, (3) src/transformers/Rag.py, and (4) src/VecML/Rag.py. They all use the following recipe:

  1. Upload one or more files

  2. Parse files into chunks

  3. Embed chunks as vectors in a vector space

  4. Index vectors with an approximate nearest neighbor (ANN) algorithm such as ANNOYFootnote k or FAISSFootnote l (Johnson, Douze, and Jégou 2019; Douze et al. 2024)

  5. Retrieve chunks near prompt (using ANN)

  6. Generate responses

All four RAG.py programs share the same interface. They take a list of files to upload on the command line. Input prompts are read from standard input, and output responses are written to standard output. The programs are short, less than 100 lines each; the VecML program is particularly short (see the listing on the GitHub).

Some of the shorter RAG.py programs hide various steps in the recipe above behind APIs running on the cloud.
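To make steps 2 through 5 of the recipe concrete, here is a minimal local sketch. It is our illustration rather than the code in any of the four RAG.py programs, and it assumes the sentence-transformers and faiss-cpu packages; the embedding model and file name are placeholder choices:

    # Sketch of recipe steps 2-5: parse into chunks, embed, index, retrieve.
    # Assumes: pip install sentence-transformers faiss-cpu
    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def chunk_text(text, max_chars=1000):
        """Step 2: split a document into fixed-size chunks (simplistic)."""
        return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

    def build_index(chunks, model):
        """Steps 3-4: embed chunks and index the vectors with FAISS."""
        vectors = np.asarray(model.encode(chunks), dtype="float32")
        index = faiss.IndexFlatL2(vectors.shape[1])  # exact search; FAISS also offers ANN indexes
        index.add(vectors)
        return index

    def retrieve(prompt, chunks, index, model, k=3):
        """Step 5: return the k chunks nearest to the prompt."""
        q = np.asarray(model.encode([prompt]), dtype="float32")
        _, ids = index.search(q, k)
        return [chunks[i] for i in ids[0]]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
    text = open("world_series.txt").read()           # hypothetical plain-text version of the pdf
    chunks = chunk_text(text)
    index = build_index(chunks, model)
    context = retrieve("Who won the most recent world series?", chunks, index, model)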

There are also a few chat.py programs. These programs read prompts from standard input and write responses to standard output, but unlike the RAG.py programs, the chat.py programs do not upload files.
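For concreteness, a chat.py-style loop might look like the following sketch, under the same assumptions as the snippet in Section 2 (again, an illustration rather than the actual program on the GitHub):

    # Sketch of a chat.py-style loop: prompts from stdin, responses to stdout,
    # no file upload; the conversation history accumulates across turns.
    import sys
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    for line in sys.stdin:  # one prompt per line
        history.append({"role": "user", "content": line.strip()})
        reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(answer)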

3.2 Timeliness and a simple example of RAG

This paper will not attempt to discuss all of the opportunities raised above, except to suggest that RAG can help with a few of them. Consider the timeliness opportunity in Subsection 2.2; obviously, users do not want “day-old donuts,” such as an answer about who won the World Series from a bot that was trained a few years ago.

On the web, information ages at different rates. Before the web, news was typically relevant for a news cycle, though breaking news was already an exception. These days, news typically ages more quickly than it used to, but even so, the half-life of news is probably a few hours. Some information sources, such as stock prices, age more quickly, and others, such as the academic literature, age more slowly. Within the academic literature, some venues (arXiv and conferences) age more quickly than others (archival journals). Crawlers for search engines like Google and Bing need to prioritize crawling by trading off these aging properties against demand. Since crawlers cannot afford to crawl all pages all the time, they need to crawl pages often enough to keep up with aging, but if that is not possible, then they should prioritize pages that are more likely to be requested.

How does RAG help with timeliness? Rather than use LLMs as is, and attempt “closed-book question-answering” with an “out-of-date book,” RAG uses R (retrieval/search) and A (augmentation) to update “the book” (knowledge base) just-in-time. For example, if we want to know who won the most recent World Series, then it might help to do a search for documents on that topic such as this.Footnote m RAG makes it possible to upload files before generating responses to the prompt. On the GitHub,Footnote n we work through this example in detail.

Without RAG, an LLM trained on 2021 data would likely hallucinate when asked about 2023. RAG fills in gaps in the knowledge base by uploading a pdf file, sample_files/World_Series/*pdf, a version of the text in Footnote m, and uses that content to update the knowledge base on this topic from 2021 to cover 2023. In this way, RAG works around the need to continuously retrain the bot by creating a just-in-time mechanism for updating the knowledge base.
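Concretely, the augmentation and generation steps amount to pasting the retrieved chunks into the prompt before asking for a response. Continuing the assumptions of the sketch in Subsection 3.1 (again an illustration, not the code on the GitHub):

    # Step 6 of the recipe: augment the prompt with retrieved chunks, then generate.
    from openai import OpenAI

    client = OpenAI()

    # Stand-in for the chunks returned by retrieve() in the earlier sketch.
    context = ["The Texas Rangers won the 2023 World Series."]

    def rag_answer(prompt, context_chunks, model="gpt-3.5-turbo"):  # illustrative model name
        joined = "\n\n".join(context_chunks)
        messages = [
            {"role": "system",
             "content": "Answer the question using only the context below.\n\nContext:\n" + joined},
            {"role": "user", "content": prompt},
        ]
        response = client.chat.completions.create(model=model, messages=messages)
        return response.choices[0].message.content

    print(rag_answer("Who won the most recent world series?", context))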

3.3 References to background context and RAG

RAG can also help with references to the discourse context. If we upload a paper on psycholinguistics, then it can address the prompt: “Please summarize the paper on psycholinguistics.”

The directory, sample_files/papers, contains two pdf files for Church and Hanks (1990) and Lin (1998). We would like to ask the bot to summarize Church and Hanks (1990), but since the bot is unlikely to understand citations, we will refer to the two papers with phrases such as

  • the paper on psycholinguistics

  • the paper on clustering

Running RAG.py on these files as described above produces the RAG summaries in Table 1. Summaries from RAG are impressive, at least on first impression. However, on further reflection, after reading a number of such summaries, it becomes clear that there are many opportunities for improvement.

Table 1. RAG summaries are longer than tl;dr summaries from Semantic Scholar

RAG summaries are probably worse than summaries from previous technologies such as tl;dr (too long; did not read) summaries from Semantic Scholar (S2).Footnote o Table 1 compares RAG summaries with tl;dr summaries; S2 tl;dr summaries are shorter, and probably more to the point, though it is hard to define “more to the point” precisely.

Figure 1. The query (top line) is followed by recommendations.

3.4 Scenario: compare and contrast recommendations

Figure 1 shows a recommendation engine.Footnote p We would like to provide users with explanations that work for them. Users probably do not want to know how the different recommendation engines work. It is probably not helpful to discuss details such as deep networks, embeddings, approximate nearest neighbors, and spectral clustering on citation graphs. Rather, we propose to offer users buttons like the “compare & contrast” buttons in Fig. 1. Users can click on one of these buttons if they want to know why a paper is being recommended. The implementation of these buttons is based on RAG. In this way, RAG can be used to address some of the explanation opportunities mentioned above. If a user clicks on one of these buttons, the relevant papers will be uploaded to a RAG system and then the RAG system will be prompted to not only summarize the relevant papers one at a time but also to compare and contrast pairs of papers at a time.Footnote q

3.5 Compare and contrast with VecML.com

Our GitHub provides several versions of RAG.py. In addition to the OpenAI version mentioned above, there is also a version based on VecML.com.Footnote r Here is an example that uses the VecML.com version of RAG.py:

The VecML code for RAG.pyFootnote s is even shorter than the OpenAI code for RAG.py.Footnote t As mentioned above, all of the RAG.py programs take one or more papers on the command line and upload them to the cloud. Then they read prompts from standard input and print responses to standard output.

If we run the VecML RAG.py program on two ACL papers, Church and Hanks (1990) and Lin (1998), using the prompts below:

  1. Please summarize the paper on psycholinguistics.

  2. Please summarize the paper on clustering.

  3. What are the similarities between the two papers?

  4. What are the differences?

We obtain the following results:

  1. Summarize (Church and Hanks 1990): The paper on psycholinguistics discusses the importance of word association norms in psycholinguistic research, particularly in the area of lexical retrieval. It mentions that subjects respond quicker to words that are highly associated with each other. While noun–noun word associations like “doctor/nurse” are extensively studied, less attention is given to associations among verbs, function words, adjectives, and other non-nouns. The paper concludes by linking the psycholinguistic notion of word association norms to the information-theoretic concept of mutual information, providing a more precise understanding of word associations.

  2. Summarize (Lin 1998): The paper discusses a triangulation approach for clustering concordance lines into word senses based on usage rather than intuitive meanings. It highlights the superficiality of defining a word measure for clustering words without explicit preprocessing tools such as Church’s parts program or Hindle’s parser. The paper briefly mentions future work on clustering similar words and reviews related work while summarizing its contributions.

  3. Similarities: The similarities between the two papers include a focus on analyzing language data, using distributional patterns of words, evaluating similarity measures for creating a thesaurus, and discussing the importance of smoothing methods in language processing tasks.

  4. Differences: The differences between the two thesaurus entries can be measured based on the cosine coefficient of their feature vectors. In this case, the differences are represented in the relationships between the words listed in each entry. For example, in the given entries, “brief (noun)” is associated with words like “differ,” “scream,” “compete,” and “add,” while “inform” and “notify” are related to each other in the second entry. These associations indicate the semantic relationships and differences between the words in each entry.

Like summaries, output from the compare-and-contrast button looks impressive on first impression, but on further reflection, the responses can be long-winded, and sometimes seem to miss the point. It is not clear that the bot “understands” concepts such as similarities and differences. The bot’s discussion of differences is not addressing differences between the two papers, but rather differences within one of the papers among lexical items. We have obtained slightly better responses by rephrasing the prompts in various ways, but we will not discuss such complications here since this tutorial is intended to provide a gentle introduction (as opposed to SOTA-chasing).

That said, having served as area chairs and senior area chairs for many conferences, we have seen many reviews from unmotivated (and unqualified) reviewers. It is not clear that RAG is worse than bad reviews, though there are obvious ethics concerns with using RAG (and unmotivated/unqualified reviewers) for high-stakes decisions with long-term implications for careers.

4. RAG is not magic

4.1 Problem decomposition

Uploading documents just-in-time addresses some of the opportunities raised above, but not all. Consider problem decomposition. Chatbots can easily add two small numbers, but they cannot add two large numbers. Children have the reverse problem. They struggle with memorizing multiplication and addition tables, but they have little trouble decomposing the sum of two big numbers into more manageable tasks. You do not have to teach a child fancy principles like superposition. They just get it.

A popular workaround to this problem is Chain-of-Thought (CoT) Prompting (Wei et al. 2022). Since chatbots struggle with decomposing larger tasks into more manageable subtasks, the community has discovered that chatbots will be more successful if prompts come already predecomposed. In this way, CoT is seen by most researchers as an advance, but it might be more appropriate to think of it as a workaround. Just as a parent might cut up a child’s dinner to prevent choking incidents, so too, CoT Prompting cuts up prompts into more manageable bite-size chunks.
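As a toy illustration of what “predecomposed” means (our example, in the spirit of this framing, not one taken from Wei et al.), compare a direct prompt with one that spells out the column-by-column steps of an addition:

    # Toy illustration of predecomposing a prompt (CoT-style): the work is cut up
    # for the model instead of asking for the answer in one step.
    direct_prompt = "What is 987654 + 123456?"

    cot_prompt = (
        "Add 987654 and 123456 one column at a time, carrying as needed: "
        "ones: 4 + 6, tens: 5 + 5, hundreds: 6 + 4, thousands: 7 + 3, "
        "ten-thousands: 8 + 2, hundred-thousands: 9 + 1. "
        "Show each step, then state the final sum."
    )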

In short, chatbots are not magic. Chatbots will be more successful if we spoon-feed them. RAG spoon-feeds them by inserting the relevant content into the input before invoking the response, and CoT Prompting spoon-feeds them by cutting up prompts into bite-size pieces to prevent hallucinations.

4.2 Opportunities for improvement

There are a couple of opportunities to improve the example above:

  1. OCR errors: garbage in $\rightarrow$ garbage out

  2. KISS (keep it simple, stupid):

    (a) it is safer to process fewer files at a time, and

    (b) it is safer to decompose prompts into smaller subtasks (CoT reasoning)

Consider OCR errors. RAG tends to process documents in very simple ways, typically as a sequence of chunks, where each chunk is a sequence of no more than 512 subword units. In fact, documents are much more complicated than that. Many documents contain tables, figures, equations, references, footnotes, headers, footers and much more. There are many OCR errors in older pdf files on the ACL Anthology, as illustrated in Table 2.
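For example, a chunker that respects a 512-subword limit might use a tokenizer from HuggingFace transformers, as in the sketch below (our assumptions; the actual chunking in each RAG.py program may differ, and the tokenizer name is an arbitrary choice):

    # Sketch: split a document into chunks of at most 512 subword units.
    # Assumes: pip install transformers
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative tokenizer

    def chunk_by_subwords(text, max_tokens=512):
        ids = tokenizer.encode(text, add_special_tokens=False)
        return [tokenizer.decode(ids[i:i + max_tokens])
                for i in range(0, len(ids), max_tokens)]

Chunking this way pays no attention to tables, figures, equations, references, footnotes, headers, or footers, which is one reason OCR errors and complex layout cause trouble.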

4.3 OCR errors and spaCy summarizations

It is remarkable how well RAG does on documents with OCR errors. Older technologies such as spaCyFootnote u are less robust to OCR errors, as discussed on our GitHub.Footnote v Table 2 shows that OCR errors are more challenging for spaCy than RAG.

Table 2. OCR errors are more challenging for spaCy than RAG

5. Where is the RAG literature going?

The literature on RAG has been exploding recently. There are 381 references in Zhao et al. (2024), of which 168 (44%) were published in 2023 or 2024. Given the volume, veracity (and recency) of the literature on RAG, it is difficult to see where it is going at this early point in time, but the list below suggests that benchmarking is a hot topic. The field is convinced that RAG is an important advance, but the field is still trying to figure out how to measure progress.

  1. Surveys: Gao et al. (2023), Zhao et al. (2024)Footnote w

  2. Tutorials: DSPyFootnote x and some even more gentle introductions to RAG than this paper: LangChain Explained in 13 Minutes,Footnote y Langchain RAG Tutorial,Footnote z Learn RAG From Scratch Footnote aa and Simple Local Rag Footnote ab

  3. Benchmarking: CRAG,Footnote ac RGBFootnote ad (Chen et al. 2024), KILTFootnote ae, Footnote af (Petroni et al. 2021), ARESFootnote ag (Saad-Falcon et al. 2023), TruLens,Footnote ah CRUD-RAGFootnote ai (Lyu et al. 2024), MIRAGEFootnote aj (Xiong et al. 2024), EXAM (Sander and Dietz 2021), CLAPNQFootnote ak (Rosenthal et al. 2024), Retrieval-QA,Footnote al Hallucination Leaderboard,Footnote am RAGAsFootnote an (Es et al. 2024), HaluEval (Li et al. 2023), PHDFootnote ao (Yang, Sun, and Wan 2023), and RAGTruthFootnote ap (Wu et al. 2023)

Benchmarks focus the research community on specific opportunities. For example, the RGB Benchmark is designed to address four opportunities in English (and Chinese) datasets:

  1. Noise robustness (includes timeliness opportunities),

  2. Negative rejection (guard rails),

  3. Information integration (answering questions by combining two or more documents), and

  4. Counterfactual robustness (robustness to factual errors in documents)

CLAPNQ (Cohesive Long-form Answers from Passages in Natural Questions) addresses guard rails plus four new opportunities:

  1. Faithfulness (answer must be grounded in gold passage),

  2. Conciseness (answer must be short, excluding information unrelated to gold answer),

  3. Completeness (answer must cover all information in gold passage), and

  4. Cohesiveness

CLAPNQ is based on the natural questions (NQ) benchmark (Kwiatkowski et al. 2019). Another recent paper on NQ (Cuconasu et al. 2024) shows that performance improves under some conditions and degrades under others. They considered (1) gold context (from the benchmark), (2) relevant documents that contain correct answers, (3) related documents that do not contain correct answers, and (4) irrelevant random documents. It may not be surprising that (3) degrades performance, but their main (surprising) result is that (4) improves performance.

The benchmarking approach is likely to make progress on many of these topics, though we have concerns about guard rails (and hallucinations). Bots tend to go “off the rails” when they fail to find evidence. It is impressive how much progress the field has made with zero-shot reasoning recently, but even so, reasoning in the absence of evidence is challenging. Reasoning from a failure to find evidence is especially hard.

Maybe we can make progress on hallucinations by measuring average scores on benchmarks, or maybe we need a different approach such as theoretical guarantees (from theoretical computer science) or confidence intervals (from statistics). Statistics distinguishes the first moment (expected value) from the second moment (variance). When there is little evidence to support a prediction, wide confidence intervals signal that the prediction should not be trusted. Perhaps benchmarks should become more like calibration in statistics: given such and such evidence, how likely is the bot to be correct? Guard rails should be deployed when confidence fails to reach significance.
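As a very rough illustration of that last point (our sketch, not a method from the literature cited here), a RAG system could abstain when the best retrieval score is too weak to support an answer, reusing the index from the sketch in Subsection 3.1; the threshold is an arbitrary, uncalibrated value:

    # Sketch: deploy a guard rail when retrieval confidence is too low.
    # `chunks`, `index`, and `model` come from the recipe sketch in Subsection 3.1;
    # `max_distance` is an arbitrary illustrative threshold, not a calibrated one.
    import numpy as np

    def answer_or_abstain(prompt, chunks, index, model, max_distance=1.0):
        q = np.asarray(model.encode([prompt]), dtype="float32")
        distances, ids = index.search(q, 1)
        if distances[0][0] > max_distance:  # weak evidence: abstain
            return "I'm sorry, I could not find enough evidence to answer that."
        return chunks[ids[0][0]]            # otherwise, pass the evidence to the generator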

6. Conclusions

This tutorial (and the accompanying GitHub) showed a number of implementations of chatbots and RAG using tools from OpenAI, LangChain, HuggingFace, and VecML. RAG improves over chatbots by adding the ability to upload files just-in-time. Chatbots are trained on massive amounts of public data. By adding the ability to upload files just-in-time, RAG addresses a number of gaps in the chatbot’s knowledge base such as timeliness, references to background knowledge, private data, etc. Gaps in the knowledge base can lead to hallucinations. By filling in many of these gaps just-in-time, RAG reduces the chance of hallucinations.

Footnotes

d See slide 4 of https://github.com/Tongji-KGLLM/RAG-Survey/blob/main/assets/RAG_Slide_ENG.pdf for another list of opportunities for improvement.

q An implementation of the compare & contrast button is posted here: https://github.com/kwchurch/Better_Together_Resources/blob/main/cgi-bin/api/compare_and_contrast.

References

Chen, J., Lin, H., Han, X. and Sun, L. (2024). Benchmarking large language models in retrieval-augmented generation. Proceedings of the AAAI Conference on Artificial Intelligence 38(16), 17754–17762.
Chen, L., Zaharia, M. and Zou, J. (2023). How is ChatGPT’s behavior changing over time? arXiv preprint arXiv:2307.09009.
Church, K.W. and Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational Linguistics 16(1), 22–29.
Cuconasu, F., Trappolini, G., Siciliano, F., Filice, S., Campagnano, C., Maarek, Y., Tonellotto, N. and Silvestri, F. (2024). The power of noise: redefining retrieval for RAG systems. arXiv preprint arXiv:2401.14887.
Dahl, M., Magesh, V., Suzgun, M. and Ho, D.E. (2024). Large legal fictions: profiling legal hallucinations in large language models. Journal of Legal Analysis 16(1), 64–93. arXiv preprint arXiv:2401.01301.
Douze, M., Guzhva, A., Deng, C., Johnson, J., Szilvasy, G., Mazaré, P.-E., Lomeli, M., Hosseini, L. and Jégou, H. (2024). The FAISS library. arXiv preprint arXiv:2401.08281.
Es, S., James, J., Espinosa Anke, L. and Schockaert, S. (2024). RAGAs: automated evaluation of retrieval augmented generation. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, St. Julians, Malta. Association for Computational Linguistics, pp. 150–158.
Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J. and Wang, H. (2023). Retrieval-augmented generation for large language models: a survey. arXiv preprint arXiv:2312.10997.
Grice, H.P. (1975). Logic and conversation. In Cole P. and Morgan J.L. (eds), Syntax and Semantics, Vol. 3: Speech Acts. New York: Academic Press, pp. 41–58.
Johnson, J., Douze, M. and Jégou, H. (2019). Billion-scale similarity search with GPUs. IEEE Transactions on Big Data 7(3), 535–547.
Kautz, H.A. and Allen, J.F. (1986). Generalized plan recognition. In AAAI Conference on Artificial Intelligence, Philadelphia, PA.
Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., Toutanova, K., Jones, L., Kelcey, M., Chang, M.-W., Dai, A.M., Uszkoreit, J., Le, Q. and Petrov, S. (2019). Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7, 453–466.
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T., Riedel, S. and Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33, pp. 9459–9474.
Li, J., Cheng, X., Zhao, X., Nie, J.-Y. and Wen, J.-R. (2023). HaluEval: a large-scale hallucination evaluation benchmark for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore. Association for Computational Linguistics, pp. 6449–6464.
Lin, D. (1998). Automatic retrieval and clustering of similar words. In COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics.
Lyu, Y., Li, Z., Niu, S., Xiong, F., Tang, B., Wang, W., Wu, H., Liu, H., Xu, T. and Chen, E. (2024). CRUD-RAG: a comprehensive Chinese benchmark for retrieval-augmented generation of large language models. arXiv preprint arXiv:2401.17043.
McCarthy, J. (1969). Programs with common sense. In Minsky, M.L. (ed), Semantic Information Processing. Cambridge, MA, USA: The MIT Press, pp. 403–418.
Meyer, D., Schvaneveldt, R. and Ruddy, M. (1975). Loci of contextual effects on visual word-recognition. In Rabbitt P. and Dornic S. (eds), Attention and Performance V. New York: Academic Press.
Nasr, M., Carlini, N., Hayase, J., Jagielski, M., Cooper, A.F., Ippolito, D., Choquette-Choo, C.A., Wallace, E., Tramèr, F. and Lee, K. (2023). Scalable extraction of training data from (production) language models. arXiv preprint arXiv:2311.17035.
Palermo, D. and Jenkins, J. (1964). Word Association Norms. Minneapolis, MN: University of Minnesota Press.
Petroni, F., Piktus, A., Fan, A., Lewis, P., Yazdani, M., De Cao, N., Thorne, J., Jernite, Y., Karpukhin, V., Maillard, J., Plachouras, V., Rocktäschel, T. and Riedel, S. (2021). KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online. Association for Computational Linguistics, pp. 2523–2544.
Ribeiro, M.T., Singh, S. and Guestrin, C. (2016). “Why should I trust you?”: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
Rosenthal, S., Sil, A., Florian, R. and Roukos, S. (2024). CLAPNQ: cohesive long-form answers from passages in natural questions for RAG systems. arXiv preprint arXiv:2404.02103.
Saad-Falcon, J., Khattab, O., Potts, C. and Zaharia, M. (2023). ARES: an automated evaluation framework for retrieval-augmented generation systems. arXiv preprint arXiv:2311.09476.
Sander, D.P. and Dietz, L. (2021). EXAM: how to evaluate retrieve-and-generate systems for users who do not (yet) know what they want. In Biennial Conference on Design of Experimental Search & Information Retrieval Systems.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V. and Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35, pp. 24824–24837.
Wu, Y., Zhu, J., Xu, S., Shum, K., Niu, C., Zhong, R., Song, J. and Zhang, T. (2023). RAGTruth: a hallucination corpus for developing trustworthy retrieval-augmented language models. arXiv preprint arXiv:2401.00396.
Xiong, G., Jin, Q., Lu, Z. and Zhang, A. (2024). Benchmarking retrieval-augmented generation for medicine. arXiv preprint arXiv:2402.13178.
Yang, S., Sun, R. and Wan, X. (2023). A new benchmark and reverse validation method for passage-level hallucination detection. In Bouamor H., Pino J. and Bali K. (eds), Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore. Association for Computational Linguistics, pp. 3898–3908.
Zhao, P., Zhang, H., Yu, Q., Wang, Z., Geng, Y., Fu, F., Yang, L., Zhang, W. and Cui, B. (2024). Retrieval-augmented generation for AI-generated content: a survey. arXiv preprint arXiv:2402.19473.