Matches in Nanopublications for { ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?o ?g. }
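The quad pattern in the header binds a subject, its `rdfs:comment` literal, and the named graph holding the triple. A minimal, self-contained sketch of what that matching means (the toy quad store, IRIs, and literals below are illustrative stand-ins for the Nanopublications network, not real data from it):

```python
# The header's quad pattern:
#   { ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?o ?g . }
# corresponds in standard SPARQL 1.1 to a GRAPH clause:
#   SELECT ?s ?o ?g WHERE { GRAPH ?g { ?s rdfs:comment ?o } }

RDFS_COMMENT = "http://www.w3.org/2000/01/rdf-schema#comment"

# Quads: (subject, predicate, object, named graph). Illustrative only.
QUADS = [
    ("ex:FactFinder", RDFS_COMMENT, "FactFinder is a hybrid QA system ...", "ex:np1-assertion"),
    ("ex:FactFinder", "ex:hasAuthor", "ex:someone", "ex:np1-provenance"),
    ("ex:MedIKAL", RDFS_COMMENT, "The MedIKAL framework integrates ...", "ex:np2-assertion"),
]

def match_comments(quads):
    """Return (?s, ?o, ?g) bindings for quads whose predicate is rdfs:comment."""
    return [(s, o, g) for s, p, o, g in quads if p == RDFS_COMMENT]

for s, o, g in match_comments(QUADS):
    print(f"{s} comment {o!r} in graph {g}")
```

Each entry below is one such binding: the subject's local name, the comment literal, and (per nanopublication convention) the assertion graph it came from.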
- FactFinder comment "FactFinder is a hybrid QA system that leverages a KG to enhance the factual correctness and completeness of LLM-generated answers during the inference stage. It achieves this by having the LLM generate Cypher queries to retrieve domain-specific knowledge from the KG, which is then used by the LLM to formulate a more accurate and reliable natural language response. This approach directly improves the LLM's performance in knowledge-intensive tasks by providing access to external, structured, and up-to-date factual knowledge." assertion.
- MedIKAL comment "The MedIKAL framework integrates LLMs and KGs through a multi-step collaborative reasoning process. It begins by allowing LLMs to perform an initial diagnosis, whose results are then merged with KG search outcomes using a residual network-like approach. The KG further refines candidate diseases via path-based reranking, and its knowledge is specifically reconstructed into a semi-structured format. Finally, LLMs engage in collaborative reasoning with this reconstructed KG knowledge using a fill-in-the-blank prompt, thereby combining the strengths of both systems for enhanced clinical diagnosis." assertion.
- SynthKGQA comment "SynthKGQA is an LLM-powered framework for generating high-quality Knowledge Graph Question Answering (KGQA) datasets. It uses LLMs to propose questions, ground-truth answer subgraphs, seed entities, and SPARQL queries from an existing KG, effectively constructing structured data and associated natural language for KGQA tasks." assertion.
- Evokg comment "EVOKG is a noise-tolerant KG evolution module that incrementally updates and maintains a temporal knowledge graph from unstructured documents. It employs LLM capabilities (implicitly for extraction and explicitly for entity/relation alignment) to create a synergized knowledge representation that supports sophisticated temporal reasoning by LLMs." assertion.
- Evoreasoner comment "EVOREASONER is a multi-hop temporal reasoning algorithm that employs LLMs as agents to interact with evolving KGs. It uses multi-route decomposition, temporal-aware grounding, and local exploration to enhance the LLM's ability to reason over dynamic knowledge, representing a synergized approach to reasoning." assertion.
- PlanOnGraphPoG comment "PoG proposes a novel self-correcting adaptive planning paradigm for KG-augmented LLMs, specifically for Knowledge Graph Question Answering (KGQA). It leverages LLMs as agents to adaptively explore, remember, and reflect on reasoning paths in KGs to answer complex questions, addressing issues like fixed exploration breadth and erroneous paths encountered in existing KG-augmented LLM methods. The method enhances the LLM's ability to effectively perform KGQA tasks." assertion.
- REMINDRAG comment "REMINDRAG proposes an LLM-guided knowledge graph traversal featuring node exploration, node exploitation, and memory replay to enhance RAG systems. The memory replay component stores traversal experience by updating KG edge embeddings, which then guide future LLM traversals. This creates a synergized reasoning process where the LLM acts as an agent interacting with and learning from the KG, and the KG's representation is dynamically adapted to improve the LLM's reasoning efficiency and effectiveness." assertion.
- arkness comment "ARKNESS is an end-to-end hybrid framework that fuses zero-shot KG construction with retrieval-augmented generation. It uses an LLM for automated entity-relation extraction to build the KG and then employs a semantic search with beam search traversal over the KG to inject evidence-linked subgraphs into LLM prompts, enabling accurate and explainable reasoning for manufacturing process planning." assertion.
- DualReasoning comment "DualR is a novel framework that integrates a GNN-based module for explicit graph reasoning (System 2) with an LLM for knowledge-enhanced answer determination (System 1) in KGQA. It features an LLM-empowered GNN for semantic-aware knowledge exploration and refines extracted reasoning chains into a prompt to guide a frozen LLM, creating a synergistic reasoning process." assertion.
- NlpAkgConstructionMethod comment "This method focuses on constructing a novel knowledge graph (NLP-AKG) for NLP academic papers. It leverages LLMs for various tasks like entity extraction from text and tables, interpreting table hierarchies, generating code for data extraction, and summarizing innovation points. While XLNet and k-means are also used for cleaning and disambiguation, the core framework relies on LLMs to address entity discovery, coreference resolution, and relation extraction, making it an LLM-augmented approach to KG construction." assertion.
- SubGraphCommunitySummaryMethod comment "This method aims to enhance the question answering ability of LLMs by augmenting them with knowledge from the constructed NLP-AKG during the inference stage. It involves using LLMs for intent identification, querying the KG to form 'sub-graph communities' around relevant papers, and then feeding these structured community elements along with prompts to the LLM to generate more accurate answers. This directly improves LLM performance by providing external, structured knowledge at inference." assertion.
- Racoon comment "RACOON is a framework that uses a Knowledge Graph to augment the context provided to an LLM during the inference stage for Column Type Annotation. It retrieves, processes, and serializes KG knowledge into natural language to form an LLM-augmented prompt, thereby improving the LLM's performance without retraining." assertion.
- EdcWithRefinement comment "EDC+R is an extension of the EDC framework that incorporates an iterative refinement process. It uses a 'hint' composed of previously extracted entities/relations and relations retrieved by the Schema Retriever to enhance the quality of the LLM's open information extraction phase, thereby improving the overall KG construction." assertion.
- ExtractDefineCanonicalize comment "EDC is a three-phase LLM-based framework (Open Information Extraction, Schema Definition, Schema Canonicalization) proposed for automated Knowledge Graph Construction. It leverages LLMs for extracting relational triplets, generating schema definitions, and standardizing the schema to handle large or absent pre-defined schemas." assertion.
- SchemaRetriever comment "The Schema Retriever is a trained component designed to retrieve relevant schema elements for a given input text. It improves the LLM's extraction performance within the KGC framework by providing contextually appropriate knowledge from the KG schema, akin to retrieval-augmented generation." assertion.
- rje comment "RJE is a novel framework that synergizes KG retrieval with LLM reasoning for Knowledge Graph Question Answering. It integrates a multi-stage process of retrieval, judgment by an LLM, and conditional exploration, where LLMs act as agents interacting with the KG to conduct multi-hop reasoning and answer generation. This approach aims to mutually enhance both LLM performance (especially for smaller models) and KGQA efficiency." assertion.
- kgBertTailoredLlmIntegrationStrategy comment "This method proposes a specific architectural integration strategy for incorporating KG-BERT into diverse pre-trained LLMs (Claude, Mistral AI, GPT-4). It involves adding dedicated components such as a KG-dedicated attention layer, modularized cross-layers with lightweight aggregation, or a dedicated attention head. The goal is to enhance the LLMs' factual accuracy, reasoning, and consistency in knowledge-intensive tasks like question answering and entity linking during their inference phase." assertion.
- iText2KG comment "iText2KG is a zero-shot, plug-and-play method for incremental, topic-independent Knowledge Graph Construction from raw documents using LLMs. It leverages LLMs across four modules (Document Distiller, Incremental Entity Extractor, Incremental Relation Extractor, Graph Integrator) to address entity discovery, coreference resolution, and relation extraction, which are key tasks in KG construction." assertion.
- GroundingLLMReasoningAgent comment "This method proposes an LLM-based agent that interacts with a Knowledge Graph in an interleaved sequence of thought, action, and retrieved data. It uses predefined actions like RetrieveNode, NodeFeature, NeighborCheck, and NodeDegree to systematically ground LLM reasoning steps in structured knowledge, thus enabling synergistic reasoning." assertion.
- GroundingLLMReasoningAutomaticGraphExploration comment "This method integrates LLM language generation with structured graph retrieval by automatically extracting entities from LLM-generated thoughts. It then guides a multi-step search and prune pipeline over the KG, using LLM prompts to filter relevant relations and neighboring entities, enabling dynamic and structured exploration of the KG for reasoning." assertion.
- GatedPromptChainingWorkflowForKgDmlConstruction comment "This method utilizes an LLM-based workflow, comprising sequential LLM calls for summarization, entity recognition, JSON structuring, and Cypher generation, to automatically construct a Knowledge Graph (KG-DML) from unstructured system documentation. The workflow integrates LLM-based validation gates and feedback loops to ensure accuracy and consistency in the KG construction process. The LLM's primary role is to enhance the KG construction task by automating the extraction and structuring of domain-specific knowledge." assertion.
- LlmAgentForInteractiveKgDiagnostics comment "This method proposes an LLM agent that interprets natural language queries, acting as an orchestrator to select and execute external, structured KG reasoning tools (e.g., upward/downward propagation) for diagnostic tasks. For general interpretive queries, it employs a Graph-RAG approach by retrieving relevant KG segments and embedding them into the LLM's prompt. This represents a synergized reasoning approach where the LLM and KG mutually enhance diagnostic capabilities by combining LLM's natural language understanding and agentic behavior with the KG's structured knowledge and reasoning tools." assertion.
- Lpkg comment "LPKG is a novel framework that enhances the planning capabilities of LLMs for complex question-answering. It uses predefined KG patterns to construct accurate planning data, which is then used to fine-tune a planning LLM. The KG data directly improves the LLM's internal ability to generate multi-step plans, classifying it under KGEnhancedLLMPretraining as it modifies the LLM's core capabilities through a training phase." assertion.
- PoK comment "PoK is a unified framework designed for Temporal Knowledge Graph Question Answering (TKGQA). It synergizes LLMs by having them decompose complex temporal questions into structured sub-objectives (plan) and interact with a custom-built Temporal Knowledge Store (TKS) for fact retrieval and re-ranking. The LLM then uses this structured plan and retrieved temporal facts to conduct multi-hop reasoning and generate accurate answers, enhancing interpretability and factual consistency." assertion.
- AmarFramework comment "AMAR enhances LLMs' reasoning and factual output for KGQA by adaptively retrieving multi-aspect knowledge (entities, relations, subgraphs) from KGs, processing it through self-alignment and relevance gating modules, and converting it into prompt embeddings for the LLM during inference. This specifically aims to improve the LLM's ability to answer complex questions by providing accurate and filtered external knowledge at inference time, addressing hallucination and outdated knowledge issues." assertion.
- RPO-RAG comment "RPO-RAG is a KG-based RAG framework designed for small LLMs to improve their performance on Knowledge Graph Question Answering. It introduces a query-path semantic sampling strategy, relation-aware preference optimization at the relation level, and an answer-centered prompt design. These components collectively enable a synergistic reasoning approach where the LLM's reasoning process is deeply aligned with the structured knowledge and relational logic of the KG." assertion.
- ChainOfExplorations comment "This novel retrieval algorithm leverages LLM reasoning for planning, executing, and evaluating sequential traversals through a Knowledge Graph. The LLM acts as an agent, actively interacting with the KG to identify relevant nodes and relationships, which directly exemplifies synergized reasoning between LLMs and KGs." assertion.
- KgRagPipeline comment "This overarching framework is designed to enhance Large Language Model Agents (LMAs) during the inference stage. It integrates Knowledge Graphs (KGs) to provide structured external knowledge, aiming to reduce hallucinations and improve performance on knowledge-intensive Question Answering tasks." assertion.
- TripleHypernodes comment "This method introduces a novel, recursive structure for representing complex and nested relationships within a Knowledge Graph. An LLM is explicitly used in a few-shot learning approach to extract these hypernodes from unstructured text, thus synergizing LLM capabilities with KG structure to enhance knowledge representation." assertion.
- CoKTrialAndError comment "This method enhances the CoK framework's learning by introducing a trial-and-error mechanism with a symbolic agent. This trains the LLM to simulate human-like internal knowledge exploration and adapt its reasoning paths based on factual support, thereby fostering a more robust, KG-informed, and agent-like synergized reasoning process." assertion.
- ZhaoEtAl2025HunanCelebrityKGConstruction comment "This method proposes a supervised fine-tuning approach for Large Language Models to enhance information extraction for domain-specific Knowledge Graph construction. It involves defining a fine-grained schema, constructing an instruction fine-tuning dataset, and employing LoRA for parameter-efficient fine-tuning of LLMs. The fine-tuned LLMs are then used to extract structured entities, relations, and properties from text to build a Knowledge Graph." assertion.
- PersonaAgentWithGraphRAG comment "This framework enhances Large Language Models (LLMs) by integrating Knowledge Graphs (KGs) to enable personalized AI agents. The KG stores user interaction histories and community patterns, which a GraphRAG mechanism then retrieves and synthesizes to dynamically generate context-rich prompts for the LLM during its inference stage, thereby improving personalized content generation and decision-making." assertion.
- ArkV1 comment "ARK-V1 is an LLM-agent architecture designed for Knowledge Graph Question Answering. It leverages an LLM as a backbone to iteratively explore a KG through a sequence of steps including anchor selection, relation selection, and triple inference, all performed during the inference stage. This process enables the LLM to access and reason over external, structured knowledge from the KG to answer complex natural language queries, thereby enhancing the LLM's factual accuracy and domain-specific reasoning capabilities." assertion.
- SpatioTemporalGeoQASystem comment "This method describes a comprehensive GeoQA system that integrates a constructed spatio-temporal knowledge graph (KG) with large language models (LLMs). LLMs are utilized to convert natural language questions into SPARQL queries, perform query validation, decompose complex descriptive questions, and generate natural language answers by reasoning over KG-retrieved facts and additional contextual information. The core goal is to enhance question answering capabilities over historical geospatial KGs." assertion.
- SkewRoute comment "SkewRoute is a training-free LLM routing framework tailored for Knowledge Graph Retrieval-Augmented Generation (KG-RAG). It leverages the score skewness of retrieved KG contexts to dynamically assess query difficulty and route queries to different LLM scales (smaller for simple, larger for complex). This method optimizes LLM inference costs and performance in KG-RAG applications by using KG-derived signals during the inference stage." assertion.
- GraphJudge comment "GraphJudge is a framework designed to enhance Knowledge Graph Construction quality. It employs a closed-source LLM for initial entity-centric text denoising and triple extraction, then fine-tunes an open-source LLM to act as a 'graph judge' for verifying and filtering the correctness of the extracted triples, addressing issues like noise, domain-specific inaccuracies, and hallucinations in KG construction." assertion.
- ExKGLLM comment "ExKG-LLM is a novel framework that uses Large Language Models (LLMs) to automate the expansion of Cognitive Neuroscience Knowledge Graphs (CNKG). It achieves this by extracting new entities and relationships from scientific texts, assigning confidence scores to them, and integrating them into the existing graph. The method primarily focuses on entity discovery, relation extraction, and probabilistic link prediction to systematically build and grow the KG, directly addressing KG construction tasks." assertion.
- xpSHACL comment "xpSHACL is a novel, unified system that integrates rule-based justification trees, a Knowledge Graph for contextual enrichment and caching, and Large Language Models for natural language explanation generation. It synergizes symbolic reasoning with LLM capabilities, enhanced by KG-retrieved knowledge, to provide comprehensive and human-readable explanations for SHACL constraint violations. The KG actively supports the LLM in reasoning and ensuring consistency." assertion.
- TglLlm comment "TGL-LLM is a novel framework that integrates temporal graph learning into LLM-based temporal knowledge graph models. It achieves this through hybrid graph tokenization (using Temporal Graph Adapters and Hybrid Prompt Design) and a two-stage training paradigm (Data Pruning and Prompt Tuning). These components are designed to create a unified representation of knowledge from both temporal graphs and LLMs, addressing limitations in temporal pattern modeling and cross-modal alignment." assertion.
- KnowledgeGraphInfusedFineTuningFramework comment "This method proposes a fine-tuning algorithm framework that injects knowledge graph information into large language models during the fine-tuning stage. It uses a GNN to encode KG information, a fusion mechanism to combine KG embeddings with LLM representations, a gating mechanism to balance contributions, and a joint loss function. The primary goal is to improve the LLM's knowledge expression and structured reasoning capabilities." assertion.
- curiousLLM comment "CuriousLLM introduces an LLM agent that generates curiosity-driven follow-up questions to guide the traversal of a Knowledge Graph. This process aims to retrieve more relevant context efficiently, which then enhances the performance of a separate LLM (GPT-4o-mini) in generating answers for multi-document question answering tasks during inference." assertion.
- Compass comment "COMPASS is a novel plug-and-play framework that synergizes LLMs and KGs for explainable conversational recommendations. It employs a two-stage training (graph entity captioning pre-training and knowledge-aware instruction fine-tuning) to bridge the modality gap and enable the LLM to conduct cross-modal reasoning over dialogue histories and KG-augmented contexts to generate interpretable user preference summaries." assertion.
- GnnRagRa comment "GNN-RAG+RA further boosts LLM KGQA performance by employing retrieval augmentation, combining GNN-induced reasoning paths with LLM-based retrieved paths (e.g., from RoG). This method increases the diversity and recall of retrieved KG information, which in turn enhances the LLM's reasoning and answer accuracy during inference." assertion.
- HumanInTheLoopLLMCenteredArchitectureForKGQA comment "This framework introduces an interactive, LLM-centered architecture for Knowledge Graph Question Answering. It leverages LLMs to generate, explain, and iteratively refine Cypher graph queries through natural language feedback from users. This approach facilitates a synergized reasoning process by treating LLMs as agents that interact with KGs to conduct auditable and transparent reasoning, enhancing accessibility and control for non-experts." assertion.
- CCKGAugmentationForLLMInference comment "This method uses the previously constructed Cultural Commonsense Knowledge Graph (CCKG) to augment smaller LLMs. It involves retrieving relevant CCKG assertions or paths using semantic search and employing them as in-context exemplars during the LLM's inference stage to improve performance on cultural reasoning tasks (MCQA, sentence completion) and story generation." assertion.
- CulturalCommonsenseKnowledgeGraphConstructionFramework comment "This method proposes an iterative, prompt-based framework that leverages LLMs (GPT-4o) as 'cultural archives' to systematically extract culture-specific entities (actions), relations, and multi-step inferential chains. It involves an initial generation phase of assertions and an iterative expansion phase to elaborate paths, thereby constructing a Cultural Commonsense Knowledge Graph (CCKG)." assertion.
- MetaTox comment "MetaTox is a novel method that first utilizes LLMs to construct a meta-toxic knowledge graph through a three-step pipeline (rationale reasoning, triplet extraction, and entity resolution). Subsequently, it queries this graph via retrieval and ranking to supplement accurate, relevant toxicity knowledge. This knowledge is then used to prompt LLMs, boosting their capability in hatred and toxicity detection during inference, thus demonstrating a unified framework where LLMs act as agents both in building and interacting with the KG for reasoning." assertion.
- KgLlmFrameworkForSupplyChainVisibility comment "This framework proposes a novel zero-shot, LLM-driven approach to construct knowledge graphs from public data. It specifically leverages LLMs (e.g., GPT-4) with carefully crafted prompts for Named Entity Recognition (NER), Relation Extraction (RE), and Entity Disambiguation. The extracted and structured information is then used to build a comprehensive knowledge graph to enhance supply chain visibility." assertion.
- OkgLlm comment "OKG-LLM is a novel framework that integrates an Ocean Knowledge Graph (OKG) with Large Language Models (LLMs) for global Sea Surface Temperature (SST) prediction. The KG provides structured domain knowledge (structural and semantic embeddings) which is fused with numerical SST data. This knowledge-enhanced input is then fed into a pre-trained LLM, enabling the LLM to leverage domain-specific insights during its inference process to achieve more accurate predictions." assertion.
- ReasonAlignRespond comment "RAR is a novel framework that systematically integrates LLM reasoning with knowledge graphs for Knowledge Graph Question Answering (KGQA). It employs three fine-tuned LLM modules (Reasoner, Aligner, Responser) and an iterative Expectation-Maximization (EM) algorithm to jointly refine human-like reasoning chains and their corresponding KG paths, thereby reducing hallucinations and enhancing interpretability. This approach synergizes LLMs and KGs by using LLMs as agents to interact with and ground reasoning in KGs." assertion.
- kgContextualizedLlmExplanationGeneration comment "This method utilizes knowledge graphs (KGs) to provide structured factual context for LLM prompts during the inference stage. By extracting hierarchical information, semantic relations, and metadata from a KG, the method constructs a contextual part for the GPT-4 prompt, thereby guiding the LLM to generate more precise, relevant, and less hallucinatory explanations for learning recommendations in a sensitive domain." assertion.
- KGV comment "KGV integrates a paragraph-level Knowledge Graph (KG) with LLMs for Cyber Threat Intelligence (CTI) credibility assessment. The KG is utilized during the LLM's inference stage to provide factual knowledge, guide the LLM's multi-step reasoning process (including key point extraction, claim extraction, and triple extraction), and mitigate factual hallucinations, thereby enhancing the LLM's accuracy and response speed in verifying CTI claims." assertion.
- AgentiGraph comment "AGENTiGraph is a novel multi-agent platform designed to bridge the gap between LLMs and KGs for complex knowledge management tasks. It leverages LLM-based agents (e.g., User Intent Interpretation, Task Planning, Knowledge Graph Interaction, Reasoning, Dynamic Knowledge Integration) to collaboratively interpret user input, conduct multi-step reasoning by interacting with the KG, and even update the KG. This directly aligns with synergized models treating LLMs as agents to interact with KGs to conduct reasoning." assertion.
- ByokgRag comment "BYOKG-RAG is a novel framework that enhances Knowledge Graph Question Answering (KGQA) by synergistically combining LLMs with specialized graph retrieval tools. LLMs generate critical graph artifacts (entities, paths, queries) that are processed by graph tools for linking and retrieval. The retrieved context then enables the LLM to iteratively refine its graph linking and retrieval before generating the final answer." assertion.
- knowledgeReasoningLanguageModel comment "KRLM proposes a novel framework to 'achieve unified coordination between LLM knowledge and KG context throughout the KGR process' to alleviate knowledge distortion and hallucinations in LLM-based KGFMs for inductive KGR. It integrates LLM and KG knowledge through a specialized tokenizer, attention layers with dynamic knowledge memory, and a structure-aware next-entity predictor for enhanced reasoning, directly aligning with synergized models that effectively conduct reasoning with both LLMs and KGs." assertion.
- ClinicalKnowledgeGraphConstructionAndEvaluationWithMultiLLMsViaRAG comment "This method proposes an end-to-end framework for constructing and evaluating clinical knowledge graphs from free-text using a multi-agent LLM pipeline and a schema-constrained Retrieval-Augmented Generation (KG-RAG) strategy. It primarily leverages LLMs for entity discovery, attribute extraction, relation identification, ontology mapping, semantic encoding, and multi-LLM consensus validation, all contributing to the construction and refinement of knowledge graphs." assertion.
- AhmadEtAl2024UnveilingLLMs comment "This method proposes an end-to-end framework to decode factual knowledge embedded in LLM latent representations into a dynamic knowledge graph, utilizing an extended activation patching technique. This framework enables layer-wise interpretability analyses (both local and global) of LLMs' internal knowledge resolution process, thereby enhancing the understanding of LLM mechanisms through KG-based visualization and analysis." assertion.
- ValiK comment "VaLiK is a unified framework that first uses VLMs to generate and verify textual descriptions from images, then employs LLMs for symbolic structuring to construct Multimodal Knowledge Graphs (MMKGs), and finally leverages these MMKGs to enhance LLM reasoning during inference by augmenting prompts. This approach involves LLMs both in KG construction and in interacting with KGs for enhanced reasoning, demonstrating mutual enhancement and synergy." assertion.
- UagUncertaintyAwareKnowledgeGraphReasoning comment "UAG is a novel multi-hop knowledge graph reasoning framework that incorporates uncertainty quantification (UQ) into its reasoning process. It combines KG-based candidate retrieval and path formulation with LLM-based candidate evaluation and answer generation, all controlled by a global error rate controller using the Learn-Then-Test framework. This synergizes LLMs and KGs for trustworthy reasoning." assertion.
- M3KGRAG comment "M3KG-RAG is an end-to-end framework that constructs a multi-hop multimodal knowledge graph (M3KG) using a lightweight multi-agent pipeline (LLMs as agents for KG construction) and employs a novel Grounded Retrieval And Selective Pruning (GRASP) mechanism for retrieving and filtering knowledge. It is designed to enhance multimodal large language models (MLLMs) by supplying query-aligned, answer-supportive knowledge to improve their reasoning depth and answer faithfulness. This method integrates LLM agents for KG construction and KG-enhanced retrieval for MLLMs, thereby synergizing both aspects for reasoning." assertion.
- AtlasKV comment "AtlasKV is a parametric knowledge integration method that augments LLMs with billion-scale KGs. It is designed to overcome limitations of existing methods by improving LLM performance in terms of knowledge grounding, generalization, and scalability during the inference stage, without requiring external retrievers or retraining." assertion.
- HiKVP comment "HiKVP (Hierarchical Key-Value Pruning) is an algorithm that dramatically reduces computational and memory overhead during LLM inference by hierarchically clustering and pruning KG key-values (KGKVs). It maintains high knowledge grounding accuracy while enabling scalable integration of billion-scale KGs into LLMs at inference time." assertion.
- KG2KV comment "KG2KV is a pipeline that transforms KG triples into high-quality Q-K-V data, which serves as training data for LLMs. It enhances generalization performance and enables efficient knowledge integration by better injecting KGs into LLMs' parametric representations, thereby improving their knowledge expression." assertion.
- iterativeReasoningLlmAgentForWarehousePlanningAssistance comment "The method introduces a novel LLM-based agent designed with an iterative reasoning mechanism to diagnose operational bottlenecks by interacting with a Knowledge Graph (KG) derived from Discrete Event Simulation data. The agent employs a sophisticated dual-path architecture (QA Chain and Iterative Reasoning Chain) that generates sequential, conditioned sub-questions, formulates Cypher queries for KG interaction, retrieves evidence, and performs self-reflection, thus treating the LLM as an agent to conduct complex, multi-step reasoning over KGs." assertion.
- SelfReflectivePlanning comment "SRP is a novel framework that synergizes LLMs with KGs through iterative, reference-guided reasoning during the inference stage. It aims to enhance LLM reasoning reliability for question answering by enabling LLMs to plan and reflect on reasoning paths using structured knowledge from KGs, thereby addressing issues like hallucination and factual inconsistency." assertion.
- aStarRetrievalAlgorithm comment "This method is a specific graph traversal algorithm, based on A*, designed for the QA pipeline to extract relevant triples from the memory graph. It employs various heuristics (Inner Product, Weighted Shortest Path, Averaged Weighted Shortest Path) to optimize pathfinding, contributing to the synergized reasoning capabilities of the overall framework." assertion.
- beamSearchRetrievalAlgorithm comment "Inspired by LLM token generation, this retrieval algorithm constructs multiple semantically relevant paths from a starting vertex in the knowledge graph. It is a key component of the QA pipeline, governed by hyperparameters for depth, path count, and intersection rules, and contributes to the synergized reasoning process by finding optimal information retrieval paths." assertion.
- memorizePipeline comment "This pipeline describes the process of constructing and updating the hybrid memory graph from unstructured natural language texts. It leverages LLMs to perform key steps such as formulating vertices and edges, generating hyper-edges, and parsing LLM output into a structured format (e.g., subject-relation-object triples), thereby augmenting KG construction." assertion.
- mixedRetrievalAlgorithm comment "This algorithm integrates the A*, WaterCircles, and BeamSearch strategies within the QA pipeline to enhance extraction efficacy. By combining the outputs of these distinct traversal methods, it aims to improve overall recall and robustness in retrieving information from the knowledge graph, thus contributing to the synergized reasoning of the system." assertion.
- personalAiFramework comment "This framework introduces a flexible external memory architecture for LLM agents based on a novel hybrid knowledge graph design (supporting standard edges and two types of hyper-edges: thesis and episodic). It enables rich semantic and temporal representations by allowing LLMs to automatically construct and update this structured memory. This aims to synergize LLM and KG capabilities for a unified knowledge representation that enhances personalized, context-aware reasoning." assertion.
- qaPipeline comment "This pipeline handles information search in the memory graph for question answering. It involves an LLM for entity extraction from the natural language question and for generating the final answer, while graph traversal algorithms (like A*, WaterCircles, BeamSearch, and hybrids) are used to retrieve relevant triples from the KG. This exemplifies synergized reasoning by combining LLM capabilities with KG-based structured retrieval." assertion.
- waterCirclesRetrievalAlgorithm comment "This retrieval method is a modified breadth-first search (BFS) algorithm used within the QA pipeline to extract relevant knowledge. It maps query entities to graph vertices, expands outwards, and aggregates triples at intersections, with specific enhancements for handling thesis and episodic hyper-edges, thereby supporting synergized reasoning." assertion.
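The "expanding circles" idea in the WaterCircles entry can be sketched as a modified BFS. This is an assumption-laden sketch: `neighbors(v)` is a hypothetical graph interface, and the paper's specific handling of thesis and episodic hyper-edges is omitted.

```python
from collections import deque

def water_circles(seeds, neighbors, max_hops=2):
    """Expand a BFS 'circle' from each seed vertex; when circles grown from
    different seeds meet, collect the triples at the intersection."""
    owner = {s: s for s in seeds}   # vertex -> seed whose circle reached it first
    triples = set()
    frontier = deque((s, s, 0) for s in seeds)
    while frontier:
        vertex, seed, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for relation, nxt in neighbors(vertex):
            if nxt not in owner:
                owner[nxt] = seed
                frontier.append((nxt, seed, depth + 1))
            elif owner[nxt] != seed:        # two circles intersect here
                triples.add((vertex, relation, nxt))
    return triples
```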
- OntoSCPrompt comment "This method introduces a novel two-stage LLM-based KGQA system to generalize across heterogeneous KGs. It utilizes an ontology-guided hybrid prompt learning strategy, integrating KG ontology into prompts for semantic parsing and KG content population, and employs task-specific decoding strategies to ensure SPARQL query validity. The primary goal is to improve KGQA performance and generalization using LLMs." assertion.
- LLMAssistedKGCompletionForCurriculumAndDomainModelling comment "This method introduces a human-AI collaborative pipeline for Knowledge Graph completion in higher education. It leverages LLMs to automate topic extraction and classification from lecture materials and to identify semantic similarities between course contents, thereby generating new facts and relations to enhance the KG. The core goal is to enable personalized learning-path recommendations by connecting related courses through LLM-assisted KG completion." assertion.
- LlmSEmpoweredNodeImportanceEstimationLenie comment "LENIE is proposed to address semantic deficiencies in KGs for Node Importance Estimation (NIE). It uses LLMs to generate richer and more precise augmented node descriptions. These descriptions are then encoded into semantic embeddings, enriching the representations of KG entities, which subsequently boost the performance of downstream NIE models." assertion.
- AutomaticTextToCypherTaskGenerationPipeline comment "This pipeline automates the creation of a high-quality text-to-Cypher benchmark by first generating initial (question, Cypher) pairs from graph patterns using templates. LLMs are then employed in multiple stages to rewrite these template-generated questions into more natural-sounding language and to verify their semantic equivalence, effectively leveraging LLMs to generate text (questions) that describe facts or queries over KGs." assertion.
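The first stage of the pipeline above (template-based generation of (question, Cypher) pairs from graph patterns) might look roughly like the sketch below. The template and pattern fields are invented for illustration, and the subsequent LLM rewriting and semantic-equivalence verification stages are omitted.

```python
# Hypothetical templates pairing a question pattern with a Cypher pattern.
TEMPLATES = [
    ("Which {b_label} does {a_name} {rel_verb}?",
     "MATCH (a:{a_label} {{name: '{a_name}'}})-[:{rel}]->(b:{b_label}) RETURN b.name"),
]

def generate_pairs(graph_patterns):
    """Instantiate every template with every graph pattern, yielding
    initial (question, Cypher) pairs for later LLM rewriting."""
    pairs = []
    for pattern in graph_patterns:
        for q_tpl, c_tpl in TEMPLATES:
            pairs.append((q_tpl.format(**pattern), c_tpl.format(**pattern)))
    return pairs
```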
- TextToCypherRetrievalOverPropertyGraphViewsOfRdfKGs comment "This method proposes a novel approach to enable Large Language Models (LLMs) to efficiently and accurately retrieve information from full-scale modern RDF knowledge graphs. It involves transforming complex RDF graphs into domain-specific property graph views that are more amenable to LLM querying via Cypher, thereby enhancing the LLM's ability to access and utilize structured knowledge during inference." assertion.
- DemoGraph comment "DemoGraph proposes a black-box approach where LLMs are tasked with generating knowledge graphs (KGs) from text prompts to perform context-driven graph data augmentation. The method specifically focuses on using LLMs to construct KGs (identifying concepts/entities and their relationships) which are then dynamically merged into an original graph to enhance its quality for downstream graph representation learning tasks. Therefore, LLMs are used to improve the KG construction process to facilitate graph data augmentation." assertion.
- AutomatedRetrosynthesisPlanningAgent comment "This method proposes a comprehensive agent system that integrates LLMs for information extraction, entity alignment, and pathway evaluation/recommendation, with KGs for structured storage, efficient retrieval, and robust pathway construction and search. The system leverages the strengths of both to perform multi-step retrosynthesis planning, where LLMs and KGs collaboratively contribute to the complex reasoning process." assertion.
- MultibranchedReactionPathwaySearchAlgorithm comment "This is a novel algorithm explicitly designed to identify all valid multi-branched reaction pathways within a retrosynthetic pathway tree derived from a knowledge graph. It functions as a core component of the overall agent, specifically aimed at helping LLMs overcome their limitations in complex, multi-branched reasoning, thereby enabling a more comprehensive and accurate reasoning process within the synergized framework." assertion.
- KG2data comment "KG2data is a system that integrates knowledge graphs, LLMs, ReAct agents, and tool usage to improve the accuracy and robustness of LLM API calls in knowledge-intensive domains like meteorology. It uses the KG as a long-term memory module for the LLM-driven agent, providing domain-specific knowledge during the inference stage to overcome the LLM's limitations without retraining." assertion.
- AbeduEtAl2024SynergizedRepositoryRelatedQASystem comment "This method proposes a unified framework for software repository question answering by synergizing LLMs and KGs. It involves a Knowledge Graph Constructor, an LLM-based Query Generator to translate natural language questions into Cypher queries, a Query Executor, and an LLM-based Response Generator to synthesize answers using retrieved KG facts. The LLM acts as an agent interacting with the KG for reasoning and response generation." assertion.
- AbeduEtAl2024SynergizedRepositoryRelatedQASystemWithFewShotCoT comment "This method enhances the Synergized Repository-Related QA System by incorporating few-shot chain-of-thought prompting into the Query Generator LLM. This technique guides the LLM to generate step-by-step reasoning paths, significantly improving its ability to accurately interpret complex relationships within the knowledge graph and generate more precise Cypher queries, thereby boosting overall reasoning performance." assertion.
- Knowformer comment "Knowformer is a customized transformer architecture that introduces a knowledge projector to align and inject structured knowledge representations (obtained from a GNN encoder on the KG) into the LLM's FFN layers. This method synergizes the LLM's internal processing with KG knowledge, combining LLM and KG encoders for a unified reasoning process." assertion.
- QuestionGuidedKnowledgeGraphReScoring comment "Q-KGR proposes a method to re-score knowledge graph edges based on the semantic similarity to the input question. This process eliminates noisy pathways and focuses on pertinent factual knowledge from the KG, thereby enhancing the quality of knowledge accessible to the LLM during its inference stage for question answering." assertion.
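The re-scoring step described for Q-KGR can be sketched as similarity-based edge filtering. This is a minimal stand-in, assuming a generic `embed(edge)` encoder and a fixed threshold; the actual model's encoder and scoring are not specified here.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rescore_edges(question_emb, edges, embed, threshold=0.5):
    """Re-weight each KG edge by its cosine similarity to the question
    embedding and drop edges below `threshold` as noisy pathways."""
    kept = []
    for edge in edges:
        weight = cosine(question_emb, embed(edge))
        if weight >= threshold:
            kept.append((edge, weight))
    return kept
```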
- KnowledgeGraphOfThoughtsKGoT comment "KGoT is an AI assistant architecture that integrates LLM reasoning with dynamically constructed KGs. It treats LLMs as agents that iteratively construct, evolve, and interact with the KG to solve complex multi-step tasks. This combined, iterative approach to structured reasoning with both LLM capabilities and KG representation aligns with the mutual enhancement and unified framework definition of SynergizedReasoning." assertion.
- KcGenre comment "KC-GenRe is a novel knowledge-constrained generative re-ranking method for Knowledge Graph Completion (KGC). It uses LLMs to address the challenges of mismatch, misordering, and omission in KGC re-ranking by formulating it as a candidate identifier sorting generation problem, and integrating knowledge-guided interactive training and knowledge-augmented constrained inference." assertion.
- AriadneCognitiveArchitecture comment "Ariadne is an agentic cognitive architecture that combines the AriGraph memory model with LLM-based planning and decision-making modules (e.g., ReAct framework). It represents a unified framework where the LLM acts as an agent interacting with and reasoning over the dynamic knowledge graph (AriGraph) to learn, plan, and execute actions in unknown environments." assertion.
- ReGraM comment "ReGraM is a framework designed to improve the factual accuracy and consistency of Large Language Models (LLMs) in Medical Question Answering. It achieves this by constructing query-aligned subgraphs from a Knowledge Graph (KG) and constraining the LLM's reasoning exclusively within this localized region during the inference phase. This approach directly enhances the LLM's performance on a specific task (QA) by providing structured, relevant external knowledge and guiding its multi-hop reasoning." assertion.
- KaradimisEtAl2024LLMKGFusedOSC comment "This method introduces a novel multi-stage pipeline for zero-shot object state classification. It synergistically integrates LLM-generated domain-specific knowledge, fuses it with general-purpose embeddings, and then leverages Knowledge Graphs and Graph Neural Networks to project these combined semantic embeddings into a visual space. This unified framework enables effective reasoning by combining LLM-derived knowledge and KG structure to solve a complex vision task." assertion.
- eperm comment "EPERM is a three-stage framework that reformulates KGQA as a probabilistic graphical model for enhanced reasoning. It uses fine-tuned LLMs as agents to interact with KGs in stages: subgraph retrieval, evidence path finding (generating and scoring weighted plans/paths), and final answer prediction. This unified approach combines LLM capabilities with KG structural information for faithful reasoning, representing a synergized model for effective reasoning." assertion.
- KnowledgeGuidedLargeLanguageModel comment "The KG-LLM proposes a hybrid architecture that explicitly fuses the LLM's semantic representation with knowledge-grounded embeddings from a pediatric dental knowledge graph using a learnable gating parameter. This creates a blended representation, aligning with 'SynergizedKnowledgeRepresentation' by effectively combining knowledge from both sources to enhance record understanding and safe antibiotic recommendations." assertion.
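The gated blending described for KG-LLM can be sketched in one line of arithmetic. In the method the gate is a learnable parameter inside the network; here it is a fixed scalar, and the embeddings are plain lists, purely for illustration.

```python
def gated_fusion(llm_emb, kg_emb, gate):
    """Blend the LLM's semantic embedding with the knowledge-grounded KG
    embedding via a scalar gate g in [0, 1]: h = g * llm + (1 - g) * kg."""
    return [gate * a + (1 - gate) * b for a, b in zip(llm_emb, kg_emb)]
```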
- abcssd comment "test comment" assertion.
- assertion comment "A new nanopublication template is being used for declaring FORRT Replication Claims. See https://w3id.org/np/RAm4qKh1-lJO4AG6zOagD7YnvlUO0FnpLTkWE5MzuUrdE" assertion.
- assertion comment "contains too many errors" assertion.
- j.jag.2024.104034 comment "The Fire_cci51 dataset is available at https://dx.doi.org/10.5285/b1bd715112ca43ab948226d11d72b85e and MODIS MCD64A1 can be found at https://doi.org/10.5067/MODIS/MCD64A1.061" assertion.
- 3333 comment "test comment 123" assertion.
- 5.726791 comment "The system is based on a convolutional neural network called LeNet-5." assertion.
- edaa7ba8-2841-4ec1-a937-b5170d8a3629 comment "GPS time series data web service" assertion.
- 911a1b7c-c3f8-4a5d-a692-080237dbcf8d comment "GPS time series data web service" assertion.