The benefits and shortcomings of LLM-powered search
Over the past few years, large language models (LLMs) have revolutionized document search capabilities. Compared to more traditional search methods, which are based on simple keyword phrase matching or some level of optical character recognition (OCR), LLM-powered search excels at understanding natural language questions and determining context and intent, surfacing helpful results beyond those that match the exact words in a query.
However, LLMs aren’t perfect. They can hallucinate, providing answers that aren’t actually in the documents being searched. And like traditional search methods, they don’t always understand the broader context around an individual document or piece of information, creating the risk of providing irrelevant or even misleading information.
To leverage the benefits of LLMs for search while mitigating the risks, Gryps uses knowledge graphs and retrieval-augmented generation (RAG) to provide the most useful, accurate information.
How knowledge graphs work
Knowledge graphs are data sets that use a graph-based structure to identify the relationships between different pieces of information. They map semantic relationships between different items in the data set, meaning that context is built into the data itself. While other data structures might store two separate pieces of information—e.g., a lobby renovation project and a building address—a knowledge graph goes a step further by also storing the relationship between them: this lobby renovation happened at this building address. Tracking these relationships is important for prediction, information retrieval, and analysis involving complex capital programs or projects with multiple types of connected information.
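At its simplest, a knowledge graph can be thought of as a set of subject–predicate–object triples. The following minimal sketch (entity names and predicate labels are hypothetical, not Gryps's actual schema) shows how the relationship itself becomes a queryable piece of data:

```python
# A knowledge graph as subject-predicate-object triples.
# Entity and predicate names below are illustrative only.
triples = [
    ("Lobby Renovation", "located_at", "123 Park Avenue"),
    ("Lobby Renovation", "performed_by", "Contractor B"),
]

def related(entity):
    """Return every (predicate, object) pair linked to an entity."""
    return [(p, o) for s, p, o in triples if s == entity]

print(related("Lobby Renovation"))
# [('located_at', '123 Park Avenue'), ('performed_by', 'Contractor B')]
```

Because the relationship ("located_at") is stored alongside the data, a query about the renovation can immediately recover the building it happened at, rather than leaving two disconnected records.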
Knowledge graphs can also be augmented with inferences, which identify additional relationships between items in the data set. For example, if the knowledge graph includes the fact that Contractor A was a subcontractor to Contractor B, and Contractor A utilized Supplier C, then you can infer that Contractor B utilized Supplier C on the project. Gryps created a construction industry-specific knowledge graph that is automatically populated with organizational data. We also built AI agents to extract structured information from unstructured data like documents and emails. When we combine both structured and unstructured data into a single knowledge graph, our AI-powered search tool is able to operate with more comprehensive information and more robust context.
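The subcontractor example above can be expressed as a simple inference rule over triples. This is a hedged sketch (the predicate names are invented for illustration), not an actual production rule engine:

```python
# Known facts, as triples. Names and predicates are hypothetical.
facts = {
    ("Contractor A", "subcontractor_to", "Contractor B"),
    ("Contractor A", "used_supplier", "Supplier C"),
}

def infer_supplier_links(facts):
    """Rule: if X is a subcontractor to Y, and X used supplier Z,
    then infer that Y also utilized supplier Z on the project."""
    inferred = set()
    for x, pred, y in facts:
        if pred != "subcontractor_to":
            continue
        for x2, pred2, z in facts:
            if x2 == x and pred2 == "used_supplier":
                inferred.add((y, "used_supplier", z))
    return inferred
```

Running `infer_supplier_links(facts)` yields the new triple `("Contractor B", "used_supplier", "Supplier C")`, which can then be added back into the graph so later queries see it as a first-class fact.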
Knowledge graphs and LLM-powered search
When LLMs are used to search a knowledge graph, they can better understand the context around a piece of information and surface the most relevant results. For example, if you search for the “Park Avenue elevator HVAC replacement,” an LLM without a knowledge graph might find several HVAC replacement projects in the data set and surface the wrong one, while the knowledge graph has the relational context needed to surface the Park Avenue project specifically.
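One way to picture this disambiguation step: graph edges let the system filter candidate projects by their relationships before any ranking happens. The project IDs, edge labels, and filtering logic below are all hypothetical stand-ins for a real graph query:

```python
# Hypothetical graph edges: (entity, predicate) -> value.
edges = {
    ("proj-1", "located_at"): "Park Avenue",
    ("proj-2", "located_at"): "Main Street",
    ("proj-1", "scope"): "HVAC replacement",
    ("proj-2", "scope"): "HVAC replacement",
}
projects = ["proj-1", "proj-2", "proj-3"]

def filter_by_location(candidates, location):
    """Keep only candidates whose located_at edge matches."""
    return [p for p in candidates if edges.get((p, "located_at")) == location]

# Keyword matching alone finds two HVAC projects...
hvac = [p for p in projects if edges.get((p, "scope")) == "HVAC replacement"]
# ...but the graph's relational context narrows it to the right one.
print(filter_by_location(hvac, "Park Avenue"))  # ['proj-1']
```

Without the `located_at` relationship, both HVAC projects look equally relevant to a keyword match; the graph supplies the context that distinguishes them.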
To help LLMs provide accurate information from knowledge graphs, developers also use a process called retrieval-augmented generation, or RAG. RAG protocols instruct LLMs to retrieve specific information from a database and ground their responses in it, rather than simply generating an answer that’s based on patterns in the data set but not necessarily present in the data itself. When you combine LLMs with both RAG and knowledge graphs, the models are better equipped to provide information that’s accurate and relevant to the question being asked, without hallucinations.
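The retrieve-then-generate loop at the heart of RAG can be sketched in a few lines. In this simplified example, a naive keyword-overlap scorer stands in for a real vector search, and the prompt is returned rather than sent to a model; the documents are invented for illustration:

```python
# Hypothetical document store; real systems use vector embeddings.
documents = {
    "doc-1": "The Park Avenue elevator HVAC replacement was completed in 2021.",
    "doc-2": "The Main Street lobby renovation began in 2022.",
}

def retrieve(query, docs, k=1):
    """Naive keyword-overlap retrieval standing in for a vector search."""
    def score(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(docs.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query):
    """Assemble a grounded prompt; in production this is sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When was the Park Avenue HVAC replacement completed?")
```

The key design choice is the instruction to answer only from the retrieved context: the model's response is anchored to text that actually exists in the data set, which is what suppresses hallucinated answers.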
At Gryps, we take this a step further by integrating structured and unstructured data. Gryps built a collection of agents that work together to create the owner’s centralized and unified knowledge graph. This includes data collection agents that understand where and how to get information from over 30 different source systems, document processing agents that classify and organize documents before ingesting them into the knowledge graph, and data enrichment agents that extract information from within the files to combine with structured data, enriching the data set for more meaningful outcomes.
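The three agent roles described above form a pipeline: collect, classify, enrich. The sketch below is purely illustrative (the function names, document text, and extraction rules are invented, and real agents use LLMs rather than string matching), but it shows how each stage adds structure before the record reaches the knowledge graph:

```python
# Illustrative three-stage pipeline; names and logic are hypothetical.
def collect(source):
    """Data collection agent: pull raw records from a source system."""
    return [{"source": source, "raw": "Change Order #12 filed by Contractor A"}]

def classify(record):
    """Document processing agent: tag the document type before ingestion."""
    record["type"] = "change_order" if "Change Order" in record["raw"] else "other"
    return record

def enrich(record):
    """Data enrichment agent: extract structured fields from the text."""
    record["contractor"] = "Contractor A" if "Contractor A" in record["raw"] else None
    return record

records = [enrich(classify(r)) for r in collect("email")]
```

Each stage's output feeds the next, so by the time a record lands in the knowledge graph it carries both the original unstructured text and the structured fields extracted from it.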
Conclusion
The proliferation of LLMs has created exciting opportunities to improve search capabilities across a range of industries. However, these search tools will only be as useful as they are accurate. For industries like construction and facility management—where a single piece of bad information can lead to costly issues—accuracy is especially critical. Combining LLMs with knowledge graphs and RAG is a powerful method for developing AI-powered search tools that can be counted on to provide the information the user needs.
Want to learn more about how AI-powered search can give your team instant access to the documents and information they need? Request a demo here.