Advancing AI Reasoning: The Synergy Between Knowledge Graphs and Large Language Models

Extracting valuable insights from unstructured text is a critical application in the finance industry. However, this task often goes beyond simple data extraction and necessitates advanced reasoning capabilities. A prime example is determining the maturity date in credit agreements, which usually involves deciphering a complex directive like "The Maturity Date shall fall on the last Business Day preceding the third anniversary of the Effective Date." This level of sophisticated reasoning poses challenges for Large Language Models (LLMs). It requires the incorporation of external knowledge, such as holiday calendars, to accurately interpret and apply the given instructions. Integrating knowledge graphs is a promising solution with several key advantages.
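To see why external knowledge matters here, consider a minimal sketch of the maturity-date computation, assuming a toy holiday calendar (the holiday, the dates, and the simple weekday rule are illustrative assumptions; real agreements depend on jurisdiction-specific business-day conventions):

```python
from datetime import date, timedelta

# Hypothetical holiday calendar; in practice this comes from an external,
# jurisdiction-specific source -- exactly the knowledge an LLM lacks.
HOLIDAYS = {date(2026, 6, 19)}  # e.g. Juneteenth 2026

def is_business_day(d: date) -> bool:
    # A business day here is a weekday that is not a listed holiday.
    return d.weekday() < 5 and d not in HOLIDAYS

def maturity_date(effective: date) -> date:
    # Third anniversary of the Effective Date (leap-day handling omitted).
    anniversary = effective.replace(year=effective.year + 3)
    # Walk backwards to the last Business Day strictly preceding it.
    d = anniversary - timedelta(days=1)
    while not is_business_day(d):
        d -= timedelta(days=1)
    return d

print(maturity_date(date(2023, 6, 20)))  # 2026-06-18: the 19th is a holiday, the 20th a Saturday
```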


The advent of transformers has revolutionized text vectorization, achieving unprecedented precision. These embeddings encapsulate profound semantic meaning, surpassing previous methodologies, and are why LLMs are so convincingly good at generating text.


LLMs also demonstrate reasoning capabilities, albeit with limitations: their reliability tends to diminish rapidly as reasoning chains grow deeper. However, integrating knowledge graphs with these vector embeddings can significantly enhance reasoning abilities. This synergy leverages the inherent semantic richness of embeddings and propels reasoning capabilities to unparalleled heights, marking a significant advancement in artificial intelligence.


In the finance sector, LLMs are predominantly utilized through Retrieval Augmented Generation (RAG), a method that infuses new, post-training knowledge into LLMs. This process involves encoding textual data, indexing it for efficient retrieval, encoding the query the same way, and employing similarity search to fetch relevant passages. These retrieved passages are then combined with the query, serving as a foundation for the LLM to generate the response.
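A minimal sketch of this pipeline, assuming the sentence-transformers library and an in-memory corpus (the passages, model choice, and prompt format are illustrative, and a plain matrix stands in for a vector store):

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding library

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is illustrative

# 1. Encode the textual data and index it for retrieval.
passages = [
    "The Maturity Date shall fall on the last Business Day preceding "
    "the third anniversary of the Effective Date.",
    "Interest accrues at the benchmark rate plus an applicable margin.",
]
index = model.encode(passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    # 2-3. Encode the query the same way and fetch the most similar passages.
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since embeddings are normalized
    return [passages[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    # 4. Ground the generation step in the retrieved passages; the prompt
    # would then be passed to any LLM completion API.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("When does the loan mature?"))
```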


This approach significantly expands the knowledge base of LLMs, making it invaluable for financial analysis and decision-making. But while RAG marks a major advancement, it has limitations.


A critical shortcoming lies in the passage vectors' possible inability to fully grasp the semantic intent of queries, leading to vital context being overlooked. This happens because embeddings might not capture the inferential connections essential for understanding a query's full scope.


Moreover, condensing complex passages into single vectors can result in the loss of nuances, obscuring key details distributed across sentences. 


Additionally, the matching process treats each passage separately, lacking a joint analysis mechanism that could connect disparate facts. This absence hinders the model's ability to aggregate information from multiple sources, which is often necessary for generating comprehensive and accurate responses.


Efforts to refine the RAG framework abound, from optimizing chunk sizes to employing parent chunk retrievers, hypothetical question embeddings, and query rewriting. While these strategies bring improvements, they do not lead to revolutionary changes in outcomes. An alternative is to bypass retrieval altogether by expanding the context window, as seen with Google Gemini's leap to a one-million-token capacity. However, this introduces new challenges, including non-uniform attention across the expanded context and a substantial, often thousandfold, increase in cost.
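Of the refinements mentioned above, the parent chunk retriever illustrates the pattern well: index small chunks for precise matching, but hand the LLM the larger parent passage. A minimal sketch, again assuming sentence-transformers (the corpus, chunk size, and section text are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# One illustrative parent passage; real documents would yield many.
parents = [
    "Section 2.01. The Maturity Date shall fall on the last Business Day "
    "preceding the third anniversary of the Effective Date, subject to the "
    "business-day conventions set out in Section 1.04.",
]

# Index small child chunks for precise matching, remembering their parent.
children, parent_of = [], []
for pi, parent in enumerate(parents):
    for i in range(0, len(parent), 80):  # naive fixed-size chunking
        children.append(parent[i:i + 80])
        parent_of.append(pi)

child_vecs = model.encode(children, normalize_embeddings=True)

def retrieve_parent(query: str) -> str:
    q = model.encode([query], normalize_embeddings=True)[0]
    best_child = int(np.argmax(child_vecs @ q))  # match at child granularity
    return parents[parent_of[best_child]]        # return the full parent context

print(retrieve_parent("When does the loan mature?"))
```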


Incorporating knowledge graphs with dense vectors is emerging as the most promising solution. While embeddings efficiently condense text of varying lengths into fixed-dimension vectors, enabling the identification of semantically similar phrases, they sometimes fall short in distinguishing critical nuances. For instance, "Cash and Due from Banks" and "Cash and Cash Equivalents" yield nearly identical vectors, suggesting a similarity that overlooks substantial differences. The latter includes interest-bearing instruments like "Asset-Backed Securities" or "Money Market Funds," while "Due from Banks" refers to non-interest-bearing deposits.
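This is easy to check empirically, assuming the sentence-transformers library (the model choice is an assumption, and exact scores vary by model):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(["Cash and Due from Banks", "Cash and Cash Equivalents"],
                   normalize_embeddings=True)
# Exact scores vary by model, but such closely related finance phrases
# typically score very high despite the substantive accounting difference.
print(util.cos_sim(emb[0], emb[1]).item())
```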


Knowledge graphs also capture the complex interrelations of concepts, fostering deeper contextual insight and surfacing distinguishing characteristics through the connections between them. For example, a US GAAP knowledge graph explicitly defines the sum of "Cash and Due from Banks" and "Interest Bearing Deposits in Banks" as "Cash and Cash Equivalents."
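Such a calculation relationship can be represented as a tiny graph fragment and used to validate reported figures. A minimal sketch with hypothetical values (the concept names follow the example above; the structure is illustrative, not the official taxonomy):

```python
# The calculation relationship above, expressed as a tiny graph fragment.
SUMS_TO = {
    "CashAndCashEquivalents": [
        "CashAndDueFromBanks",
        "InterestBearingDepositsInBanks",
    ],
}

def check_rollup(figures: dict[str, float], total: str) -> bool:
    # Validate a reported total against its components in the graph.
    return abs(figures[total] - sum(figures[c] for c in SUMS_TO[total])) < 1e-6

figures = {  # hypothetical reported values, in millions
    "CashAndDueFromBanks": 120.0,
    "InterestBearingDepositsInBanks": 80.0,
    "CashAndCashEquivalents": 200.0,
}
print(check_rollup(figures, "CashAndCashEquivalents"))  # True
```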


By integrating these detailed contextual cues and relationships, knowledge graphs significantly enhance the reasoning capabilities of LLMs. They enable more precise multi-hop reasoning within a single graph and facilitate joint reasoning across multiple graphs. 
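Multi-hop reasoning over such a graph reduces to path-finding between concepts. A minimal sketch with illustrative edges (the triples are assumptions for demonstration, not an official taxonomy):

```python
from collections import deque

# Illustrative (subject, relation, object) triples.
EDGES = [
    ("MoneyMarketFunds", "part_of", "CashAndCashEquivalents"),
    ("CashAndDueFromBanks", "part_of", "CashAndCashEquivalents"),
    ("CashAndCashEquivalents", "part_of", "TotalAssets"),
]

def hops(start: str, goal: str) -> list[str] | None:
    # Breadth-first search over the graph; the returned path is the chain
    # of concepts linking start to goal.
    graph: dict[str, list[str]] = {}
    for s, _, o in EDGES:
        graph.setdefault(s, []).append(o)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(hops("MoneyMarketFunds", "TotalAssets"))
# ['MoneyMarketFunds', 'CashAndCashEquivalents', 'TotalAssets'] (two hops)
```

Notably, the returned path doubles as a human-readable trace of the inference chain, which is precisely the transparency discussed next.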


Moreover, this approach offers a level of explainability that addresses another critical challenge of LLMs. The transparency in how conclusions are derived through visible, logical connections within knowledge graphs provides a much-needed layer of interpretability, making the reasoning process not only more sophisticated but also accessible and justifiable. 


The fusion of knowledge graphs and embeddings heralds a transformative era in AI, transcending the limitations of individual approaches to achieve a semblance of human-like linguistic intelligence. 


Knowledge graphs contribute symbolic logic and intricate relationships curated by humans, complementing neural networks' pattern-recognition prowess and ultimately yielding a superior hybrid intelligence.


Hybrid intelligence paves the way for AI that not only articulates eloquently but also comprehends deeply, enabling advanced conversational agents, discerning recommendation engines, and insightful search systems. 


Despite challenges in knowledge graph construction and noise management, integrating symbolic and neural methodologies promises a future of explainable, sophisticated language AI, unlocking unprecedented capabilities.


About Vahe Andonians

Vahe Andonians is the Founder, Chief Technology Officer, and Chief Product Officer of Cognaize. Vahe founded Cognaize to realize a vision of a world in which financial decisions are based on all data, structured and unstructured. As a serial entrepreneur, Vahe has founded several AI-based fintech firms and led them through successful exits. He is also a senior lecturer at the Frankfurt School of Finance & Management.


This article is featured on Datanami.