RAG AI (Retrieval Augmented Generation): Making Generative AI work for financial services

This post was originally published in Auquan's whitepaper, The Advantages of RAG AI (Retrieval Augmented Generation) Over Generative AI for Financial Services. Download the whitepaper here.

 

 

In today's digitized world, Generative AI has shown tremendous promise for dramatically reshaping industries and operations. A recent McKinsey report called Generative AI the next productivity frontier, with the potential to automate work activities that absorb 60 to 70 percent of employees' time today. McKinsey estimates that work involving the analysis of natural language text accounts for an average of 25 percent of total work time in the enterprise.[1]

 

In the financial sector, with its vast volumes of textual content, such as company reports, regulatory documents, 10-Ks and 10-Qs, broker research, and other reports, this number could be much higher. Traditionally, highly compensated financial professionals spend a significant amount of time sifting through lengthy documents and extensive reports in order to gather, understand, summarize, and communicate insights. The sector therefore stands to benefit immensely from Generative AI's ability to understand natural language and simplify this labor-intensive task by summarizing vast amounts of data into relevant insights.

 

These innovations promise to empower finance professionals, including equity research analysts, risk managers, private equity associates and sustainability researchers, to redirect their focus from mundane data processing to more critical tasks, such as analyzing meaningful insights, drawing conclusions, and making informed and strategic decisions faster. 

 

Unfortunately, as promising as Generative AI may seem, it has severe limitations for complex enterprise financial services use cases. For any information-retrieval or search system to be worthy of advanced enterprise use cases, it needs to meet three core criteria:

 

1. Comprehensiveness and timeliness
2. Transparency and trustworthiness
3. Credibility and accuracy

 

Let's evaluate current LLMs against these three criteria.  

 


 

The challenges with using Generative AI in the financial sector

Generative AI, in its standard, out-of-the-box format, fails on all three criteria.

 

1. The data lacks timeliness 

Current LLMs are, in essence, snapshots of the past. Their inability to incorporate real-time data means they are "frozen in time", depriving users of output that reflects up-to-date information. Any developments past their training cutoff date, be they in financial markets, global economic shifts, or even regulatory changes, remain outside their knowledge. This static knowledge base is akin to having a financial advisor who knows the history of the stock markets but isn't aware of today's macro conditions or recent company news.

 

2. Their knowledge base lacks comprehensiveness

LLMs are designed for generalized tasks. The breadth of their training comes at the expense of domain-specific knowledge, such as niche yet impactful datasets like sanctions lists, records of regulatory fines, or detailed shipping data. This renders them unsuitable for tasks demanding in-depth expertise.

 

This is a known problem for Generative AI models employed in the banking industry: they struggle to fully understand the intricate financial context, individual circumstances, and nuanced scenarios involved. Picture an investment banker in need of information regarding a potential regulatory fine in order to make an investment or compliance decision. The ramifications of making this decision without considering recent sanctions or regulatory updates could be both legally and financially disastrous.

 

3. They lack transparency, making them difficult to trust

Generative AI tools such as ChatGPT do not provide sources for the information they generate. This "black box" nature raises further concerns, as discerning the exact sources or basis for their conclusions remains challenging and requires additional work. Without transparency, it's hard to assess the reliability of, or bias in, an AI's recommendations. If biases exist, or if conclusions are derived from outdated, non-credible, or inaccurate data, there is no way to verify or correct them. Moreover, in cases of errors or regulatory scrutiny, being unable to trace back the AI's decision-making process can result in severe compliance violations, legal repercussions, and erosion of trust among clients and regulators.

 

From an investment perspective, unverified guidance puts capital and reputations at risk. For compliance use cases, untraceable data sources could contravene standards. Bain identified trust as a fundamental issue with Generative AI's adoption in financial services, with hallucinations, toxicity, privacy, bias, and data governance concerns creating a trust gap.[2]

 

4. The responses lack credibility and accuracy because they frequently hallucinate

Add to this the now infamous problem of “hallucinations” with LLMs — coherent but misleading or outright fabricated responses — and the pitfalls of Generative AI for financial services become even clearer. An equity research analyst may make a recommendation based on a linguistically impeccable response that is, in reality, factually inaccurate. Or consider a private equity (PE) analyst who has to verify every risk an LLM surfaces on a deal, only to realize that half of them are fabricated. In a field where every detail counts, having to always second-guess or verify what an LLM says can slow things down and increase the chance of costly mistakes.

 

In high-stakes sectors like finance, where source credibility, accuracy, and timeliness are paramount, LLMs significantly underperform. Consider, for example, a financial analyst AI copilot, powered by a generic LLM, designed to assist risk managers in assessing the risks of their investments. The lack of real-time updates on the macro environment (such as recent interest rate movements), regulatory changes (such as a new liquidity requirement), or company-specific insights (such as the latest earnings release) could result in catastrophic losses. Similarly, any decision made on hallucinated or inaccurate responses to queries such as “does this company have any previous accusations of fraud or bribery” or “does this company have any potential violations of the Modern Slavery Act” could result in regulatory issues. The response may inaccurately allege wrongdoing, or miss a recent accusation, exposing the firm to enormous fines.

 


RAG AI: Bridging the critical gap between the static knowledge of LLMs and the dynamic nature of the financial industry

So, how can financial services industry participants benefit from the capabilities of LLMs while avoiding these pitfalls? Enter Retrieval Augmented Generation (RAG) AI — an enhancement to Generative AI, originally developed by researchers at Meta, that represents a paradigm shift for the industry.

 

RAG AI represents a significant leap forward because it is specifically designed to tackle knowledge-intensive tasks. At its core, RAG AI merges the strengths of an AI-powered information retrieval system and Generative AI, bridging the gap between the static knowledge of LLMs and the need for dynamic, context-aware generation in financial services.

 

Retrieval AI + Generative AI = Retrieval Augmented Generation (RAG) AI

RAG AI can access external, up-to-date knowledge sources, ensuring responses are not only coherent but also rooted in fact. This interplay between retrieval and generation means that, unlike LLMs, which need extensive retraining to update their knowledge, RAG's internal knowledge base can be modified seamlessly without overhauling the entire system.
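As a minimal sketch of that design point (the class and method names below are hypothetical, not any specific product's API), updating a RAG system's knowledge is an append to the retrieval index rather than a retraining run:

```python
# Illustrative only: adding knowledge to a RAG system is an index update,
# not a model retraining run. Names and structure are hypothetical.
class KnowledgeBase:
    def __init__(self) -> None:
        self.documents: list[str] = []

    def add_document(self, text: str) -> None:
        # New information (e.g., today's regulatory update) becomes retrievable
        # immediately; the underlying LLM's weights are never touched.
        self.documents.append(text)

kb = KnowledgeBase()
kb.add_document("2024-01-15: Regulator announces a new liquidity requirement for mid-size banks.")
```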

 

How RAG AI Works

Just like a Generative AI system, RAG AI begins with a user- or machine-submitted input, such as a question or a specific query. The system runs a search across an extensive and continuously updated knowledge base, much like a conventional search or retrieval engine, to narrow down petabytes of potentially noisy or irrelevant data to a subset relevant to the input query. RAG then refines this subset further using semantic search techniques, surfacing information that aligns with the query's inherent intent, not just the keywords used. To do this, it translates the query into numerical representations, or vectors, using an embedding model.

 

These vectors, which capture the core or semantic meaning of the input, enable machines to identify semantically similar entries to the initial query from the subset of data. This step ensures that the subsequent response is generated from information that is not just accurate but also contextually meaningful. This final data subset, combined with the original query, is then fed into a Large Language Model (LLM). 

And this is RAG AI's distinctive edge. This enriched contextual prompt enables the LLM to generate a response that is both linguistically coherent and anchored in the most recent and accurate facts.
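As a concrete illustration of the flow described above, here is a minimal, self-contained sketch in Python. It is not Auquan's implementation: the bag-of-words embedding stands in for a learned embedding model, the documents and query are invented, and the assembled prompt would be passed to whatever LLM completion endpoint is in use.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words vector. A production system would use
    # a learned embedding model to capture semantic (not just lexical) meaning.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank the knowledge base by similarity to the query and keep the top k passages.
    q_vec = embed(query)
    return sorted(documents, key=lambda d: cosine_similarity(q_vec, embed(d)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Combine the retrieved passages with the original question so the LLM
    # answers from supplied facts rather than from its static training data.
    sources = "\n".join(f"[{i + 1}] {passage}" for i, passage in enumerate(context))
    return (
        "Answer the question using only the numbered sources below, citing them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

# Toy knowledge base and query, invented for illustration.
knowledge_base = [
    "2023-11-02 filing: Acme Corp disclosed an ongoing bribery investigation by regulators.",
    "2023-10-15 news: Acme Corp announced a new supply chain partnership in Vietnam.",
    "Acme Corp reported 12% revenue growth in its 2022 annual report.",
]
query = "Does Acme Corp have any previous accusations of fraud or bribery?"
print(build_prompt(query, retrieve(query, knowledge_base)))
```

In practice the knowledge base would be a continuously refreshed vector store, and each retrieved passage would carry source metadata so the final answer can cite where its facts came from.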


Consider the same financial analyst AI that aids risk managers, now powered by RAG AI. When prompted with a query such as “does this company have any previous accusations of fraud or bribery”, the system will first search through its knowledge base, which may include recent analyst notes, broker research, consultancy reports, company filings, government updates, news, and legal documents, to identify any data points that directly or contextually refer to the company as well as to fraud and bribery. The text from these documents is then fed to the LLM, along with the original question. The response will now be an accurate, trustworthy summary of the information that includes the latest facts.

 


The case for RAG in financial services

For financial services, the implications of upgrading from Generative AI to RAG AI are transformative:

1. Timeliness: real-time access to information

The retrieval component of RAG means it has real-time access to information. Timeliness of information is paramount in financial services, both from a regulatory-requirements perspective and because investment decisions must be based on accurate and recent data. Picture the same investment banker as before, in need of information on a potential regulatory fine. Instead of grappling with a static LLM, which might give an outdated or incorrect response, they can leverage RAG AI to quickly retrieve the most recent data on the topic, ensuring their decisions are grounded in real-time insights.

2. Comprehensiveness: incorporating domain-specific information in responses 

RAG AI's context-aware generation cross-references and pulls data from datasets pertinent to financial services, such as earnings transcripts, sanctions lists, product releases, partnerships, supply chain information, and shipping data. This capability has the potential to create significant outperformance in investment decision making, enhance risk management, and save many hours in due diligence and know-your-customer (KYC) efforts.

3. Transparency and trustworthiness: citing data sources to overcome the "black box" issue 

In an industry where regulatory compliance and accountability are crucial, RAG's ability to cite sources is invaluable. It allows any AI assisting in asset management or risk assessment to not only provide quick insights but also substantiate them with data sources. This isn't merely about enhancing accuracy. RAG brings transparency to AI’s inner workings by providing a clearer view into which sources or datasets the model references, which can build trust in the system among users. For professionals in financial services, knowing the basis of an AI's recommendation – be it a market report, a recent financial study, or historical data – is mandatory.
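Continuing the earlier sketch, and again as an assumption about structure rather than a description of any specific product, one way to make citations auditable is to keep source metadata attached to every retrieved passage and return it alongside the generated answer:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str  # e.g. a filing URL, broker note ID, or news article reference
    date: str

def answer_with_citations(query: str, passages: list[Passage]) -> dict:
    # Number each passage so the model can cite it, and surface the source list
    # alongside the answer so reviewers can audit the basis of the response.
    numbered = "\n".join(
        f"[{i + 1}] ({p.date}, {p.source}) {p.text}" for i, p in enumerate(passages)
    )
    prompt = (
        "Using only the numbered sources, answer the question and cite sources by number.\n"
        f"Sources:\n{numbered}\n\nQuestion: {query}\nAnswer:"
    )
    # The prompt would be sent to an LLM here; we return it with the audit trail.
    return {"prompt": prompt, "citations": [f"[{i + 1}] {p.source}" for i, p in enumerate(passages)]}

docs = [Passage("The regulator fined the company for AML control failures.",
                "hypothetical-filing-2024-03.pdf", "2024-03-01")]
print(answer_with_citations("Has this company faced regulatory fines?", docs)["citations"])
```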

 

4. Credibility and accuracy: mitigating the risk of "hallucinations"  

RAG's ability to generate responses derived only from the information supplied by the retrieval system means the AI remains rooted in relevant and material fact, greatly reducing the probability of “hallucinations”. A PE analyst can now safely use a RAG AI system to instantly surface all risks on a deal, without having to verify each one, because they have been summarized from pre-vetted and credible sources.

To sum up, by incorporating a retrieval component, RAG AI significantly amplifies the generative capabilities of LLMs across all the core criteria that an information-retrieval or search system must meet to be worthy of advanced enterprise use cases.

 

Table 1: Generative AI vs RAG AI

| Criteria | Generative AI | RAG AI |
| --- | --- | --- |
| Timeliness | The knowledge base is “frozen in time” and lacks any information after the training cut-off date | The retrieval component has real-time access to information |
| Comprehensiveness | The model is trained for generalized tasks | The retrieval component cross-references and pulls data from datasets pertinent to specific use cases |
| Transparency and trustworthiness | “Black boxes” that do not cite their sources, making them difficult to trust | Cites sources to overcome the “black box” transparency issue |
| Credibility and accuracy | Frequently hallucinates, producing coherent but misleading or fabricated responses | Generates responses derived only from the information supplied by the retrieval system |

 


Implementing RAG AI for financial services: Three use cases

As adoption of AI in financial services moves apace, expect to see more RAG-based solutions implemented. Innovation will focus on providing deeper domain-specific coverage, increasing usability, and uncovering more hidden insights in unstructured data. Enhanced personalization and contextualization of system output is also in demand, so we anticipate solutions that cater to the specific needs of different professionals in the industry.

The most obvious financial sector use case for RAG AI is to identify, perform due diligence on, and monitor investment opportunities in public markets.

 

In the next few posts, we will explore specific use cases of RAG AI for financial services.


References

  1. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction
  2. https://www.bain.com/insights/five-principles-for-generative-ai-in-financial-services/
  3. https://masterofcode.com/blog/red-flags-and-best-practices-for-generative-ai-in-telecom-banking-and-finance
  4. https://research.facebook.com/file/4283170945104179/Retrieval-Augmented-Generation-for-Knowledge-Intensive-NLP-Tasks.pdf
  5. https://www.techopedia.com/here-is-why-companies-are-restricting-the-use-of-generative-ai-tools
  6. https://russellinvestments.com/us/insights/articles/2022-esg-manager-survey
  7. https://www-consulting-us.cdn.ampproject.org/c/s/www.consulting.us/news/amp/9509/kpmg-more-than-half-of-firms-have-cancelled-ma-deals-because-of-esg-findings


About Auquan

Auquan is an AI innovator transforming vast amounts of unstructured data into actionable intelligence for financial services customers like UBS and Federated Hermes. Professionals in investment banking, private equity, asset management, and other financial institutions use Auquan's SaaS-based Intelligence Engine to discover hidden value in seemingly worthless data and identify financially material ESG, reputational, and regulatory risks on more than 500k private companies, equities, infrastructure projects, and other entities. Auquan leverages retrieval augmented generation (RAG), a cutting-edge AI technique that addresses common pitfalls of using generative AI in the enterprise, to consistently produce timely, comprehensive, and accurate insights.

 


DISCLAIMER: The document herein and the information encapsulated within are provided on a confidential basis, with the sole intent of probing potential business opportunities between the disclosing entity (Auquan Ltd.) and the recipient. This document is not to be shared with any third party, nor used for any other purposes, without the explicit written consent of the disclosing party. The information contained within this report should not be used as a replacement for, or be presumed to be, specific professional, particularly financial, advice. Auquan categorically disavows any responsibility for any loss or damage incurred by any individual or entity concerning any actions or lack thereof pertaining to the content of this report.

Auquan does not provide any express or implied warranty, endorsement, or representation regarding the information contained herein, the potential results deriving from the usage of the information, or any other matter. The role of Auquan is to gather information from public sources and disseminate it in the form of this report.

Auquan expressly rejects, and the reader hereby relinquishes, all implied warranties, inclusive of, but not limited to, warranties of originality, precision, completeness, merchantability, suitability for a particular use, and warranties associated with potential infringements of intellectual property rights, trademark rights, or any other rights belonging to a third party. This report may be cited or employed for business objectives, provided that the report is used in its entirety and Auquan is expressly acknowledged as the source. Auquan reserves all copyrights and rights of origin to the content of this report.
