Matches in Nanopublications for { ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?o ?g. }
- DeepDocumentModel comment "The Deep Document Model (DDM) is proposed to address limitations in traditional KG construction by providing a comprehensive and fine-grained structured representation of academic papers. It uses advanced NLP techniques (e.g., text segmentation, named entity recognition) to convert unstructured text into a hierarchical knowledge representation, enriching the Academic Scholarly Knowledge Graph (ASKG). This enhanced KG representation is explicitly designed to facilitate more sophisticated interactions with Large Language Models." assertion.
- KGenhancedQueryProcessing comment "The KG-enhanced Query Processing (KGQP) workflow is designed to mitigate "AI hallucination" in LLMs and optimize complex queries by providing KG-based context during question-answering interactions. It involves LLMs transforming user queries into triples, performing SPARQL queries for matching against the KG, and then leveraging LLMs for final triple selection and results ranking. This directly improves the LLM's accuracy and reliability during its inference stage by grounding responses with structured KG knowledge." assertion.
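The KGQP loop above (LLM turns the query into triples, SPARQL matches them against the KG, an LLM ranks the results) can be sketched end-to-end. A minimal illustration in Python, with both LLM steps mocked by simple heuristics and an in-memory triple list standing in for the SPARQL endpoint; all names and data here are hypothetical, not from the paper:

```python
# Toy KGQP-style pipeline: pattern extraction -> matching -> ranking.
KG = [
    ("Aspirin", "treats", "Headache"),
    ("Aspirin", "treats", "Fever"),
    ("Ibuprofen", "treats", "Headache"),
]

def query_to_pattern(question):
    """Stand-in for the LLM step: map a question to an (s, p, o)
    pattern, using None as a wildcard."""
    if "what does aspirin treat" in question.lower():
        return ("Aspirin", "treats", None)
    return (None, None, None)

def match(pattern, kg):
    """Stand-in for the SPARQL matching step over the KG."""
    s, p, o = pattern
    return [t for t in kg
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def rank(question, triples):
    """Stand-in for LLM-based final selection: naive word overlap."""
    words = set(question.lower().split())
    return sorted(triples,
                  key=lambda t: len(words & {x.lower() for x in t}),
                  reverse=True)

q = "What does Aspirin treat?"
answers = rank(q, match(query_to_pattern(q), KG))
```

The grounding effect comes from the `match` step: the LLM never invents an answer, it only reorders triples that actually exist in the KG.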
- KnowledgeGraphTuning comment "Knowledge Graph Tuning (KGT) is a novel approach for real-time LLM personalization. It leverages LLMs to extract personalized factual knowledge triples and evaluates their retrieval and reasoning probabilities, which then guide a heuristic optimization algorithm to modify an external Knowledge Graph (KG) by adding or removing triples. This continuous feedback loop where the LLM actively interacts with and updates the KG to enhance its reasoning aligns with the Synergized Reasoning category." assertion.
- DALK comment "DALK is a dynamic co-augmentation framework where LLMs are leveraged to construct an evolving, domain-specific Knowledge Graph (KG), and subsequently, a novel self-aware knowledge retrieval method selects pertinent knowledge from this KG to augment LLM inference and reasoning capabilities. This unified framework emphasizes mutual enhancement and interaction for reasoning between LLMs and KGs." assertion.
- LlmAutomaticOntologyExtractionAlgorithm comment "This algorithm leverages the generative capabilities of LLMs to automatically derive ontological knowledge, such as entity classes, relation composition, equivalence, and disjointness rules, directly from KGs. This process automates the design and enrichment of the KG's underlying ontology, thereby enhancing KG construction." assertion.
- OlKgc comment "OL-KGC is an LLM-based Knowledge Graph Completion (KGC) method. It integrates vectorized structural information and ontological knowledge from KGs, transforming them into textual form and prompt prefixes to enhance the LLM's logical capabilities and overall performance in KGC tasks." assertion.
- SynergizeLLMAgentAndKnowledgeGraphLearningModelSLAK comment "The SLAK framework synergizes LLM agents and a Location-Based Knowledge Graph (LBKG) for socioeconomic prediction. It leverages LLM agents' reasoning capabilities for automatic discovery and optimization of task-relevant meta-paths within the LBKG, and employs a semantic-guided knowledge fusion module for adaptive knowledge integration using LLM embeddings. Furthermore, it introduces a cross-task communication mechanism enabling both LLM agent-level collaboration for meta-path refinement and KG embedding-level fusion, positioning the LLMs as agents interacting with KGs for enhanced reasoning." assertion.
- MedKGent comment "MedKGent is a novel LLM agent framework designed to construct a temporally evolving medical knowledge graph. It comprises an Extractor Agent (which uses an LLM for relation extraction, sampling-based confidence estimation, and entity enrichment) and a Constructor Agent (which uses an LLM for incremental integration, confidence updates, and conflict resolution), leveraging LLMs to directly improve KG construction tasks." assertion.
- semanticConditionTuning comment "Semantic-Condition Tuning (SCT) proposes a novel framework for deep integration of KGs and LLMs for Knowledge Graph Completion. It features a Condition-Adaptive Fusion Module which acts as a KG fusion module, performing a feature-level affine transformation on the LLM's input embeddings based on a graph-derived semantic condition. This module is jointly trained with the LLM, thereby creating a unified, knowledge-infused representation that aligns with the SynergizedKnowledgeRepresentation category." assertion.
- UrbanKGent comment "UrbanKGent is a unified LLM agent framework proposed for automatic Urban Knowledge Graph (UrbanKG) construction. It integrates knowledgeable instruction generation, a tool-augmented iterative trajectory refinement module, and hybrid instruction fine-tuning to effectively address both relational triplet extraction and knowledge graph completion tasks, enhancing the creation of urban KGs." assertion.
- LLMAtKGE comment "LLMAtKGE is a unified framework leveraging LLMs to perform explainable adversarial attacks on KGEs. A core component, "High-order Adjacency Tuning LLM for Filtering," precomputes multi-hop KG structural information and transforms it into the LLM's embedding space via a dedicated adapter. This adapter, jointly trained with the LLM using LoRA on a triple classification task, acts as a KG fusion module that effectively represents knowledge from both sources, thus fitting the SynergizedKnowledgeRepresentation category." assertion.
- ContextualizationDistillation comment "Contextualization Distillation leverages LLMs to generate high-quality, context-rich descriptions for KG triplets. These LLM-generated contexts are then used in auxiliary tasks (reconstruction or contextualization) to train smaller KGC models, thereby enhancing their performance on Knowledge Graph Completion. The core idea is that LLMs augment the training data/signal for the KG completion task." assertion.
- R2KGDualAgentFramework comment "This method proposes a novel dual-agent system where LLMs (Operator for evidence collection, Supervisor for judgment) iteratively interact with a Knowledge Graph to perform general-purpose, reliable reasoning. It synergizes LLM's reasoning abilities with KG's knowledge by treating LLMs as agents navigating and interpreting KG evidence, addressing tasks like QA and fact verification." assertion.
- R2KGSingleAgentVersion comment "This is a cost-effective variant of the R2-KG dual-agent framework. It uses a single LLM (Operator) for both KG exploration and answer generation, enhancing reliability through a strict self-consistency strategy across multiple trials, maintaining the synergized reasoning approach with the KG." assertion.
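The strict self-consistency strategy described above amounts to accepting an answer only when enough independent trials agree, and abstaining otherwise. A minimal sketch; the function name and the agreement threshold are assumptions for illustration, not values from the paper:

```python
from collections import Counter

def self_consistent_answer(trial_answers, min_agreement=0.6):
    """Accept the modal answer only if a sufficient fraction of
    independent trials agree; otherwise abstain (return None),
    trading coverage for reliability."""
    if not trial_answers:
        return None
    answer, count = Counter(trial_answers).most_common(1)[0]
    return answer if count / len(trial_answers) >= min_agreement else None

# e.g. five independent Operator trials over the KG:
winner = self_consistent_answer(["Paris", "Paris", "Paris", "Lyon", "Paris"])
```

Abstention on disagreement is what makes the single-agent variant "reliable" without a separate Supervisor LLM.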
- MemoTime comment "MemoTime is a memory-augmented framework that enhances LLM temporal reasoning by integrating structured temporal grounding from TKGs, hierarchical decomposition, dynamic evidence retrieval, and a self-evolving experience memory. It enables LLMs to perform faithful multi-entity temporal reasoning and adapt retrieval strategies during the inference stage, without retraining, to answer complex temporal QA." assertion.
- CLAKGLLMRecommendationFramework comment "This closed-loop framework leverages the Case-Enhanced Law Article Knowledge Graph (CLAKG) to ground the recommendations generated by an LLM during the inference stage. By retrieving relevant historical case information and candidate law articles from the CLAKG, the framework enables the LLM to provide more accurate law article recommendations and effectively mitigates issues like hallucinations in LLM outputs, thereby enhancing LLM performance." assertion.
- LLMbasedCLAKGConstruction comment "This method uses a Large Language Model (LLM) to automatically extract nodes and relationships from law articles and judgments, integrating them to form the Case-Enhanced Law Article Knowledge Graph (CLAKG). The LLM's role is to reduce manual input and improve the scalability of the KG construction process, directly augmenting the KG construction task." assertion.
- SparqLLM comment "SparqLLM is a novel Retrieval-Augmented Generation (RAG) framework that integrates Large Language Models with Knowledge Graphs. Its core purpose is to automate SPARQL query generation from natural language questions and provide dynamic data visualization of the results, thereby significantly enhancing user-friendly interaction and accessibility for Knowledge Graph Question Answering tasks." assertion.
- KgSmile comment "KG-SMILE is introduced as a novel perturbation-based framework designed to provide token and component-level interpretability for GraphRAG systems. It uses KGs to explain how specific entities and relations influence the LLM's generated outputs by applying controlled perturbations and training surrogate models. This method aims to enhance the understanding and trustworthiness of LLM outputs by interpreting their reasoning process through KG interactions." assertion.
- LlmDa comment "LLM-DA proposes a unified framework where LLMs act as agents to analyze historical data, generate temporal logical rules, and dynamically update these rules based on current events within Temporal Knowledge Graphs (TKGs). The method then synergizes these LLM-derived rules with traditional graph-based reasoning (e.g., GNNs) to conduct interpretable and accurate Temporal Knowledge Graph Reasoning (TKGR). This constitutes a combined reasoning approach where LLMs actively interact with and enhance the KG reasoning process." assertion.
- LlmKerec comment "LLM-KERec is a novel recommendation system where Large Language Models are critically used in the "Complementary Graph Construction" module. Here, LLMs infer and determine complementary relationships between entities, thus performing a key task in building the knowledge graph. This direct use of LLMs for relation extraction within KG construction aligns with the LLMAugmentedKGConstruction category." assertion.
- KGProver comment "KG-Prover is a novel framework that uses a knowledge graph (KG) mined from mathematical texts to augment general-purpose LLMs during the inference stage for automated theorem proving. It involves iterative KG traversal for context retrieval, informal proof generation by the LLM, formal proof generation, and verification/refinement with Lean feedback. The method enhances LLM performance by providing relevant knowledge dynamically at test-time without requiring additional finetuning." assertion.
- RagKgIl comment "RAG-KG-IL is a novel multi-agent hybrid framework that deeply integrates LLMs with KGs and an incremental learning approach. It aims to enhance LLM reasoning and reduce hallucinations, while LLM agents actively generate, update, and reason over the Knowledge Graph, demonstrating mutual enhancement and combined reasoning capabilities. The explicit "KG Generation and Reasoning" component and evaluation of "Knowledge Creation and Causality Reasoning Capabilities" underscore its synergistic approach to reasoning." assertion.
- graphConstrainedReasoning comment "GCR is a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs to eliminate hallucinations and ensure faithful reasoning. It integrates KG structure into the LLM decoding process via a KG-Trie, leverages a lightweight KG-specialized LLM for graph-constrained decoding to generate paths, and a powerful general LLM for inductive reasoning over multiple generated paths. This approach deeply combines LLM and KG reasoning capabilities." assertion.
- drKGC comment "DrKGC is explicitly designed for Knowledge Graph Completion (KGC) by leveraging LLMs. It improves KGC performance by dynamically retrieving relevant subgraphs and enhancing structural embeddings using a GCN adapter, which are then integrated into the LLM's prompt. This approach directly uses LLMs to generate missing facts in KGs, thereby augmenting the KG completion task." assertion.
- KGCQR comment "KG-CQR is a novel framework that enhances the retrieval phase in RAG systems by enriching complex input queries with contextual representations from a corpus-centric KG. It comprises subgraph extraction, subgraph completion, and contextual generation modules, ultimately improving the context provided to LLMs during inference for tasks like question answering." assertion.
- logicQueryOfThoughts comment "Logic-Query-of-Thoughts (LQOT) is a novel method that integrates LLMs and fuzzy logic-based Knowledge Graph Question Answering (KGQA) to answer complex logic queries. It achieves mutual enhancement by having LLM outputs augment KG fuzzy logic reasoning to address incompleteness, and KGQA results refine LLM outputs to mitigate hallucination, within an iterative subquestion answering framework. The method synergizes reasoning processes by allowing LLMs to interact with and be guided by KG query structures and vice-versa." assertion.
- DynamicDecisionAlgorithm comment "This algorithm serves to optimally combine the answers obtained from the two parallel workflows of UniOQA (Translator and Searcher). By dynamically selecting the better answer based on an F1 score criterion, it represents a synergized approach to reasoning, integrating results derived from both LLM-generated queries and LLM-augmented KG retrieval to yield the final, most accurate answer." assertion.
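Token-level F1 of the kind used in this decision criterion can be computed as below. This sketch assumes the comparison is made against an available reference answer (e.g., on a validation set), which simplifies UniOQA's actual decision mechanism; the helper names are invented:

```python
def token_f1(prediction, reference):
    """Token-level F1, the usual KGQA answer-overlap metric."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def pick_answer(candidates, reference):
    """Select the candidate answer with the higher token-level F1,
    mirroring the 'choose the better of two workflows' idea."""
    return max(candidates, key=lambda c: token_f1(c, reference))

best = pick_answer(["eiffel tower", "the paris tower"], "Eiffel Tower")
```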
- EntityAndRelationReplacementAlgorithm comment "This algorithm uses an LLM (Baichuan2-7B with a crafted instruction) to select and replace entities and relations in the generated Cypher Query Language (CQL) to align them with the knowledge graph. Its purpose is to enhance the executability and accuracy of the CQL, directly improving the performance of Knowledge Graph Question Answering by refining the query before execution." assertion.
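As an illustration of this alignment step, the sketch below substitutes plain string similarity (`difflib`) for the Baichuan2-7B instruction, so the KG vocabulary, regex conventions, and helper names are all invented for the example:

```python
import difflib
import re

KG_ENTITIES = ["Aspirin", "Ibuprofen", "Headache"]
KG_RELATIONS = ["TREATS", "CAUSES"]

def align_cql(cql):
    """Replace near-miss entity and relation names in a generated
    Cypher query with the closest labels present in the KG, so the
    query becomes executable against the actual graph."""
    def closest(name, vocab):
        hit = difflib.get_close_matches(name, vocab, n=1, cutoff=0.6)
        return hit[0] if hit else name

    # entity names assumed to appear as {name: '...'}, relations as [:REL]
    cql = re.sub(r"name: '([^']*)'",
                 lambda m: "name: '%s'" % closest(m.group(1), KG_ENTITIES), cql)
    cql = re.sub(r"\[:(\w+)\]",
                 lambda m: "[:%s]" % closest(m.group(1), KG_RELATIONS), cql)
    return cql

fixed = align_cql("MATCH (d {name: 'Asprin'})-[:TREAT]->(x) RETURN x")
```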
- GRAGProcess comment "GRAG (Knowledge Graph Retrieval-Augmented Generation) is a process that employs an LLM, combined with traditional information retrieval from the knowledge graph (retrieving relevant subgraphs), to directly generate answers to natural language questions. This directly uses LLMs to improve the task of question answering over KGs." assertion.
- UniOQA comment "UniOQA is presented as a "unified framework" that integrates two parallel workflows (Translator for CQL generation and Searcher for direct retrieval) and a "dynamic decision algorithm" to synthesize their outputs. This explicit combination and optimization of answers from both LLM-derived queries and KG-based retrieval constitutes a synergized approach to reasoning for Knowledge Graph Question Answering." assertion.
- KgRetriever comment "KG-Retriever is a novel Retrieval-Augmented Generation (RAG) framework that leverages a Hierarchical Index Graph (HIG) to provide comprehensive and efficient knowledge to LLMs during the inference stage. Its goal is to improve the quality, credibility, and efficiency of LLM-generated responses by addressing challenges like multi-hop question answering and information fragmentation. This directly aligns with using KGs to enhance LLM performance during inference." assertion.
- OntologyGuidedReverseThinkingORT comment "ORT is a novel framework for KGQA that synergizes LLMs and KGs. It uses LLMs as agents for initial question understanding (extracting conditions and aims) and final answer generation, while leveraging KG ontology for structured reverse reasoning path construction and various pruning steps. This multi-stage interaction where LLMs and KGs collaboratively conduct reasoning for complex questions aligns with synergized reasoning." assertion.
- Llama2KGReasoning comment "This method uses Llama2-7B as a baseline for zero-shot graph reasoning by converting KG queries and subgraphs into textual questions. The LLM attempts to infer missing facts directly from the textual representation for KG completion." assertion.
- Prolink comment "PROLINK is a novel framework designed for low-resource inductive reasoning across arbitrary KGs. It utilizes a pre-trained LLM to generate a graph-structural prompt from relation semantics. This prompt graph is then calibrated and injected into the KG's relation graph to enhance a GNN-based reasoner, thereby improving the inference of missing facts (KG completion)." assertion.
- InferenceTimeKnowledgeGraphConstruction comment "This method proposes a novel four-stage pipeline that dynamically constructs and expands knowledge graphs (KGs) during the LLM's inference process. The KG is built by extracting initial triplets from the question, iteratively expanding them using the LLM's internal knowledge, and refining them through external retrieval from sources like Wikipedia and Google Search. The ultimate goal is to improve the factual accuracy and robustness of LLM-generated answers in Question Answering tasks." assertion.
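The extract→expand→refine loop above can be sketched with all three stages mocked; the triplets and the external source below are toy stand-ins for the LLM's knowledge and for Wikipedia/Google Search retrieval:

```python
def extract_seed_triplets(question):
    """Stand-in for the LLM's initial triplet extraction from the question."""
    return [("Marie Curie", "field", "physics")]

def expand_with_llm(triplets):
    """Stand-in for expansion from the LLM's internal knowledge."""
    new = [("Marie Curie", "award", "Nobel Prize")]
    return triplets + [t for t in new if t not in triplets]

def refine_with_retrieval(triplets, external):
    """Stand-in for refinement: keep only triplets confirmed by an
    external source (a fixed set here instead of live retrieval)."""
    return [t for t in triplets if t in external]

external = {("Marie Curie", "field", "physics"),
            ("Marie Curie", "award", "Nobel Prize")}
kg = extract_seed_triplets("Who was Marie Curie?")
for _ in range(2):  # a couple of expand/refine rounds
    kg = refine_with_retrieval(expand_with_llm(kg), external)
```

The refinement filter is what keeps LLM-internal expansion from drifting into unverifiable triplets.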
- reasoningPathStack comment "The Reasoning Path Stack is a specific mechanism within RTSoG's Answer Generation stage. It processes the KG-derived weighted reasoning paths in a structured manner, allowing the LLM to effectively utilize this external knowledge during inference to generate more accurate answers." assertion.
- rewardGuidedTreeSearchOnGraph comment "RTSoG is a training-free framework that enhances LLM performance in KGQA. It uses KGs to retrieve weighted reasoning paths, which are then used by the LLM for answer generation during inference. This directly falls under KGEnhancedLLMInference as KGs are used to provide contextual knowledge to LLMs at inference time." assertion.
- selfCriticMonteCarloTreeSearch comment "SC-MCTS is a core component of RTSoG designed to iteratively retrieve weighted reasoning paths from KGs. This method, guided by a reward model (LLM), directly enables the LLM to access and leverage knowledge from KGs during the inference stage for improved KGQA performance." assertion.
- KGRAR comment "KG-RAR is a novel iterative retrieve-refine-reason framework that deeply integrates step-by-step KG retrieval into an LLM's multi-step reasoning process. It features process-oriented KG construction, hierarchical retrieval, and a Post-Retrieval Processing and Reward Model (PRP-RM) with role-based prompting, enabling LLMs to act as agents interacting with KGs for guided reasoning and verification. This framework represents a unified, synergistic approach where LLMs and KGs mutually enhance the reasoning process." assertion.
- StructureFirstReasonNextFramework comment "This framework proposes an end-to-end pipeline where LLMs are used to construct Knowledge Graphs (KGs) from financial documents (leveraging LLM's understanding for triplet generation) and then these KGs, after filtering, are used to enhance the LLM's numerical reasoning capabilities. This bidirectional and integrated approach, where LLM capabilities are used to create structured knowledge that subsequently guides and improves the LLM's reasoning, aligns with a synergized model for reasoning." assertion.
- SubgraphRAG comment "SubgraphRAG is a KG-based RAG framework that retrieves relevant subgraphs from KGs using a lightweight MLP with Directional Distance Encoding (DDE) and parallel triple-scoring. It then employs unfine-tuned LLMs, guided by tailored prompts, to reason over the retrieved subgraphs, reducing hallucinations and improving answer grounding during the inference stage." assertion.
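A toy version of the distance signal behind DDE can be computed with two breadth-first searches per topic entity, one along out-edges and one along in-edges; the inverse-distance scorer below merely stands in for SubgraphRAG's trained MLP, so the whole scheme is a simplified sketch:

```python
from collections import deque

def bfs_dist(adj, start):
    """Hop distances from `start` following one edge direction."""
    dist, frontier = {start: 0}, deque([start])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

def dde_features(triples, topic):
    """Toy directional distance encoding: each node gets its hop
    distance from the topic entity along out-edges and in-edges."""
    out_adj, in_adj = {}, {}
    for s, _, o in triples:
        out_adj.setdefault(s, []).append(o)
        in_adj.setdefault(o, []).append(s)
    fwd, bwd = bfs_dist(out_adj, topic), bfs_dist(in_adj, topic)
    nodes = {n for t in triples for n in (t[0], t[2])}
    inf = float("inf")
    return {n: (fwd.get(n, inf), bwd.get(n, inf)) for n in nodes}

def score_triple(triple, feats):
    """Stand-in for the MLP scorer: prefer triples whose endpoints
    sit close to the topic entity in either direction."""
    s, _, o = triple
    d = min(feats[s] + feats[o])  # closest of the four distances
    return 1.0 / (1.0 + d)

triples = [("Q", "likes", "A"), ("A", "likes", "B"), ("C", "likes", "D")]
feats = dde_features(triples, "Q")
```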
- spikeSemanticProfilesIntoKnowledgeGraphsForEnhancedRecommendation comment "SPiKE is a recommender model that synergizes knowledge from LLM-generated semantic profiles and KG structural information. It uses a profile-aware KG aggregation mechanism to integrate profiles during message passing and a pairwise matching loss to align KG and LLM-generated profile embeddings, thereby creating a unified representation for recommendation." assertion.
- multimodalRagFramework comment "This framework integrates VAT-KG with Multimodal Large Language Models (MLLMs) for question answering across diverse modalities. It synergistically combines retrieval from VAT-KG with MLLM generation and includes a "Retrieval Checker" that uses the MLLM's text encoder to filter retrieved KG triplets, indicating a unified reasoning process where both LLM and KG representations are actively combined." assertion.
- vatkgConstructionPipeline comment "This method proposes a four-step pipeline for constructing the Visual-Audio-Text Knowledge Graph (VAT-KG). It explicitly leverages LLMs for "Knowledge-Intensive Recaptioning" to enrich textual data and "Multimodal Triplet Grounding" to extract knowledge-intensive triplets, directly supporting the construction of the knowledge graph. Thus, LLMs are used to improve a KG task, specifically KG construction." assertion.
- HealthGenie comment "HealthGenie is a novel interactive system that synergizes LLMs and KGs through a circular workflow for personalized dietary recommendations. The system treats LLMs as agents that interact with KGs to conduct reasoning, adapting recommendations based on user interaction with visualized graph data, thus enabling effective reasoning with both components in a unified manner." assertion.
- TRAIL comment "TRAIL is a unified framework that tightly integrates LLM thinking, reasoning, and incremental learning by coupling joint inference and dynamic KG refinement. The LLM acts as an agent to iteratively explore, update, and refine the KG during reasoning, using a confidence-driven mechanism to generate, validate, and prune new facts. This approach aims for mutual enhancement, making the reasoning process more adaptive and enabling the KG to evolve in real-time." assertion.
- GLTW comment "The GLTW method proposes a novel unified framework that integrates an improved Graph Transformer (iGT) for encoding KG structural information with Large Language Models. It explicitly fuses the embeddings from both iGT and the LLM via an "Embedding Fusion Module" and leverages a KG language prompt. This joint training and representation fusion allows for a synergistic approach where knowledge from both modalities is effectively combined to enhance Knowledge Graph Completion." assertion.
- PgAkv comment "PG&AKV is a framework designed to enhance LLM performance during inference, specifically for open-ended and precise question answering. It leverages LLMs to generate a preliminary knowledge framework (pseudo-graph) and then uses external KGs for atomic-level knowledge querying and verification. The refined knowledge from KGs mitigates LLM hallucinations and improves answer accuracy without retraining, aligning with the goal of KGEnhancedLLMInference." assertion.
- WayToSpecialistFramework comment "The WTS framework introduces the "LLM⟳KG" paradigm for "bidirectional enhancement" between specialized LLMs and evolving Domain Knowledge Graphs (DKGs). It integrates DKG-Augmented LLM (KG improving LLM reasoning) and LLM-Assisted DKG Evolution (LLM evolving KG for better knowledge support), forming a continuous feedback loop. This architecture allows the LLM's reasoning to inform KG evolution, and the evolved KG, in turn, enhances the LLM's future reasoning capabilities, exemplifying synergistic reasoning." assertion.
- Graphusion comment "Graphusion is a zero-shot Knowledge Graph Construction (KGC) framework that leverages LLMs for extracting, merging, and resolving knowledge triplets from free text. Its core fusion module provides a global view of triplets by incorporating entity merging, conflict resolution, and novel triplet discovery, addressing key challenges in scientific KGC." assertion.
- KGEnhancedModelForTutorQA comment "This pipeline enhances LLM interaction with a concept graph for various Question Answering tasks on the TutorQA benchmark. It operates in two steps: command query, where the LLM generates commands to retrieve relevant paths from the KG, and answer generation, where these paths are used as contextual prompts for the LLM to generate answers." assertion.
- KgDf comment "KG-DF is a unified framework where a Knowledge Graph (KG) is constructed with safety and general knowledge. An LLM performs semantic parsing of user input to extract keywords, which are then used to retrieve relevant KG triples. These triples are integrated into the LLM's prompt as a "warning", and the LLM then performs a judgment (reasoning) to decide whether to respond or reject, thereby enhancing LLM security against jailbreak attacks and improving general QA. This constitutes a synergistic reasoning process where the LLM acts as an agent interacting with KG-derived knowledge for decision-making." assertion.
- AutoMathKG comment "AutoMathKG is an overarching system that provides an automatically updatable mathematical knowledge graph. It integrates LLMs for KG construction and updates, and uses the KG and its vector database (MathVD) to enhance a specialized LLM's mathematical reasoning capabilities, thus mutually benefiting both components within a unified framework." assertion.
- AutomaticKnowledgeCompletion comment "This mechanism utilizes the specialized Math LLM to supplement incomplete proofs or solutions for new mathematical entities within the knowledge graph. By generating missing facts, it directly enhances the completeness and quality of the KG using LLM capabilities." assertion.
- AutomaticKnowledgeFusion comment "This mechanism employs MathVD for fuzzy search and LLMs (via in-context learning) to determine whether to merge new input entities with existing similar candidates or add them as new entities. This process directly contributes to the construction and maintenance of the KG by addressing entity discovery, coreference resolution, and relationship integration." assertion.
- MathLLM comment "Math LLM is a specialized LLM designed with task adapters, retrieval augmentation from AutoMathKG and MathVD, and self-calibration to address various mathematical problems. It exemplifies synergized reasoning by treating the LLM as an agent that interacts with the KG to conduct complex mathematical deductions and problem-solving." assertion.
- MathVD1 comment "MathVD1 is one of two proposed strategies for constructing a vector database (MathVD) from the AutoMathKG entities. It embeds a single long text concatenating all key information about an entity using SBERT, providing a vector representation of KG knowledge for similarity search, which is crucial for the system's synergized reasoning and knowledge fusion mechanisms." assertion.
- MathVD2 comment "MathVD2 is the second proposed strategy for constructing MathVD. It separately embeds each key information description of an entity using SBERT and then weights and sums these vectors based on their importance. This method contributes to a robust vector representation of KG knowledge, facilitating synergized knowledge representation and retrieval within the AutoMathKG system." assertion.
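The two MathVD construction strategies differ only in where the aggregation happens: MathVD1 concatenates all key information about an entity and embeds once, while MathVD2 embeds each field separately and takes a weighted sum. A sketch with a deterministic toy encoder in place of SBERT; the field names, weights, and example entity are invented:

```python
def toy_embed(text, dim=8):
    """Deterministic stand-in for SBERT: a normalized char histogram."""
    v = [0.0] * dim
    for ch in text.lower():
        v[ord(ch) % dim] += 1.0
    norm = sum(x * x for x in v) ** 0.5 or 1.0
    return [x / norm for x in v]

def mathvd1(fields):
    """Strategy 1: concatenate all key information, embed once."""
    return toy_embed(" ".join(fields.values()))

def mathvd2(fields, weights):
    """Strategy 2: embed each field separately, then weight and sum."""
    parts = {k: toy_embed(v) for k, v in fields.items()}
    dim = len(next(iter(parts.values())))
    return [sum(weights[k] * parts[k][i] for k in parts)
            for i in range(dim)]

entity = {"name": "Pythagorean theorem",
          "statement": "a^2 + b^2 = c^2 for right triangles"}
v1 = mathvd1(entity)
v2 = mathvd2(entity, {"name": 0.3, "statement": 0.7})
```

Both produce a single vector per entity suitable for similarity search; strategy 2 lets more important fields dominate the representation.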
- QMKGF comment "QMKGF is proposed to enhance Retrieval-Augmented Generation (RAG) by leveraging KGs to provide more relevant context to LLMs. It involves KG construction, multi-path subgraph creation, query-aware subgraph fusion, and KG-based query expansion, all aimed at improving the quality of LLM-generated responses during the inference stage." assertion.
- PyramidDrivenAlignment comment "PDA proposes a novel framework to align LLM and KG knowledge and optimize their reasoning capabilities for complex question-answering. It uses the Pyramid Principle to generate deductive knowledge from the LLM that aligns with KG reasoning paths, and then employs a recursive mechanism to leverage the KG's inherent reasoning abilities to retrieve accurate information, thereby combining the strengths of both systems." assertion.
- MemQ comment "MemQ is an LLM-based framework designed to improve Knowledge Graph Question Answering (KGQA). It enhances the LLM's reasoning capabilities by decoupling it from direct tool invocation using a query memory built by an LLM, natural language reasoning steps, and a query reconstruction strategy. The method explicitly aims to improve the LLM's performance on the KGQA task." assertion.
- CoLaKG comment "CoLaKG leverages LLMs to comprehend local and global knowledge graph (KG) information and user preferences, generating semantic embeddings for items and users. These LLM-derived embeddings enrich the representations of KGs by encoding their textual descriptions and connections, thereby addressing KG limitations like missing facts and semantic loss to improve recommendation performance." assertion.
- LLMSupportedKGConstructionPipeline comment "This method introduces a comprehensive semi-automated pipeline that leverages open-source LLMs to automate various stages of Knowledge Graph construction, from competency question generation and ontology creation to KG population and evaluation using a judge LLM. The core goal is to reduce human effort and expertise required in building KGs, thus making LLMs agents that directly improve the KG construction task." assertion.
- HybridFactVerificationPipeline comment "This method introduces a multi-stage pipeline for fact-checking that integrates Knowledge Graphs, Large Language Models, and a Web Search Agent. It is classified as SynergizedReasoning because LLMs act as agents for classification and query rewriting, interacting with KG-derived evidence and web snippets within a unified framework to conduct complex reasoning for claim verification, directly combining their strengths." assertion.
- KgRetrieverEntityLevelKgConstructionUsingLlms comment "This method describes the specific use of large language models (Qwen-72B) with in-context learning and designed prompts to extract entities and relations from unstructured text within documents. This process is crucial for constructing the entity-level knowledge graph layer of the Hierarchical Index Graph within the KG-Retriever framework, directly using LLMs to perform a core KG construction task." assertion.
- FusionAwareTemporalModule comment "This module deeply integrates natural language prompt embeddings (derived from the LLM) with time series data. It aligns pooled text features with temporal segments and uses multi-scale convolutions to combine them effectively. This process aims to create a unified representation by merging semantic knowledge from the LLM with temporal patterns, thereby enhancing the model's ability to learn complex patterns for time series tasks." assertion.
- KnowledgeDrivenTemporalPrompt comment "This module leverages a Knowledge Graph (KG) to enrich user-provided prompts with task-relevant semantics and descriptive insights for time series analysis. By incorporating retrieved knowledge into the prompt, it enhances the LLM's input at the inference stage, allowing the LLM to access external knowledge to improve its performance on time series tasks." assertion.
- LTM comment "LTM is a novel multi-task framework that integrates time series models, LLMs, and KGs for tasks like forecasting, imputation, and anomaly detection. It achieves a deep fusion of temporal and semantic features by combining KG-enhanced prompts with LLM processing and feature fusion modules, representing knowledge from both LLMs and KGs in a unified manner for time series analysis." assertion.
- ConceptFormer comment "ConceptFormer enhances LLM factual recall by injecting concept vectors, derived from KGs, directly into the LLM's input embedding space during the inference stage. This method avoids altering the LLM's internal architecture and efficiently leverages structured knowledge without retraining the base LLM, enabling access to current knowledge in a token-efficient manner." assertion.
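Mechanically, the injection is just a concatenation in the input embedding space: the KG-derived concept vectors behave like extra soft tokens prepended to the sequence, and the LLM's weights are never touched. A minimal sketch with toy 2-dimensional embeddings; the helper name is invented:

```python
def inject_concepts(token_embeds, concept_vectors):
    """ConceptFormer-style injection (sketch): prepend KG-derived
    concept vectors to the token-embedding sequence, leaving the
    LLM's architecture and weights unchanged."""
    assert all(len(v) == len(token_embeds[0]) for v in concept_vectors)
    return concept_vectors + token_embeds

tokens = [[0.1, 0.2], [0.3, 0.4]]   # embeddings of the input tokens
concepts = [[0.9, 0.9]]             # one pre-trained concept vector
augmented = inject_concepts(tokens, concepts)
```

Token efficiency comes from each concept vector occupying a single position, versus verbalizing the same KG facts as many text tokens.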
- LkdKgc comment "LKD-KGC is a novel framework for unsupervised domain-specific Knowledge Graph Construction. It leverages LLMs to infer knowledge dependencies, prioritize document processing, autoregressively generate entity schemas, and extract entities and relationships. The method directly enhances the KG construction task by employing LLMs in multiple stages, fitting the definition of LLM-Augmented KG Construction." assertion.
- GnnRag comment "GNN-RAG enhances LLM performance on KGQA by using a GNN to retrieve relevant multi-hop reasoning paths from a KG. These verbalized paths are then fed to the LLM as context for RAG, thereby improving the LLM's ability to answer complex questions during its inference stage and reducing hallucinations." assertion.
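The path-verbalization step described in the GNN-RAG entry above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the path format, joining text, and prompt template are all assumptions, and the GNN retrieval itself is out of scope here.

```python
def verbalize_path(path):
    """Turn a KG reasoning path [(subject, relation, object), ...] into a
    natural-language sentence the LLM can consume as RAG context."""
    steps = [f"{s} {r.replace('_', ' ')} {o}" for s, r, o in path]
    return ", and ".join(steps) + "."

def build_rag_prompt(question, paths):
    # Concatenate the verbalized paths into the context section of the prompt.
    context = "\n".join(verbalize_path(p) for p in paths)
    return f"Context:\n{context}\nQuestion: {question}\nAnswer:"

# A toy 2-hop path a GNN retriever might return for a multi-hop question.
path = [("Jamaica", "official_language", "English"),
        ("English", "spoken_in", "Canada")]
prompt = build_rag_prompt("Which language of Jamaica is spoken in Canada?", [path])
```

Grounding the LLM in such verbalized paths is what lets the approach reduce hallucinations: the answer can be checked against explicit hops rather than parametric memory alone.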
- PathMind comment "PathMind is a novel framework designed to enhance LLM performance in Knowledge Graph Reasoning (KGR). It achieves this by selectively guiding LLMs with important reasoning paths extracted and prioritized from KGs during the response generation (inference) phase, after fine-tuning the LLM to effectively utilize these paths. This mechanism enables the LLM to generate accurate and logically consistent answers for KGR tasks." assertion.
- AriGraph comment "AriGraph is a novel knowledge graph world model that integrates semantic and episodic memories, constructed and updated by an LLM agent during environment interaction. It provides structured knowledge for efficient retrieval and reasoning, enabling LLMs to access and utilize dynamic, up-to-date information at the inference stage to improve their performance in complex tasks." assertion.
- Wikontic comment "Wikontic is a multi-stage pipeline that uses LLMs to construct high-quality, ontology-consistent KGs from unstructured text. It leverages LLMs for candidate triplet extraction, ontology-aware refinement (entity typing, relation validation), and entity normalization/deduplication, which are core KG construction tasks. The integrated multi-hop QA component demonstrates the quality and utility of the constructed KG." assertion.
- knowledgeGraphProgrammingLanguageRepresentationForLlmReasoning comment "This method proposes and evaluates representing Knowledge Graphs using programming language (Python) code, both statically and dynamically. This representation is then used to either prompt or fine-tune Large Language Models, aiming to deeply integrate KG structures and semantics into LLM representations. The goal is to enhance LLM multi-hop reasoning by enabling the LLM to effectively represent and process knowledge from both its natural language understanding and the structured KG input in a synergistic manner." assertion.
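One way to render a KG as Python code, in the spirit of the entry above, is to emit executable statements that build the graph. This is a hedged sketch under assumed conventions (the paper's exact encoding may differ); the `kg_to_python` helper and the dict-of-relations layout are illustrative.

```python
def kg_to_python(triples):
    """Render KG triples as executable Python source: each entity becomes a
    dict whose keys are relations and whose values are lists of neighbors."""
    lines = ["graph = {}"]
    entities = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
    for e in entities:
        lines.append(f"graph[{e!r}] = {{}}")
    for s, r, o in triples:
        lines.append(f"graph[{s!r}].setdefault({r!r}, []).append({o!r})")
    return "\n".join(lines)

code = kg_to_python([("Alice", "works_at", "Acme"),
                     ("Acme", "located_in", "Berlin")])
namespace = {}
exec(code, namespace)  # the generated code really constructs the graph
```

Because the representation is runnable, it supports both the static use (prompting with the source text) and the dynamic use (executing it and inspecting the resulting structure) that the method contrasts.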
- FraudShield comment "FraudShield is a novel framework that leverages LLMs to dynamically construct and refine a fraud tactic-keyword knowledge graph. This KG is then used to augment LLM inputs with structured evidence (XML-tagged keywords and rationales) during inference, thereby enhancing the LLM's reasoning capabilities for robust fraud detection. This integrated approach, where LLMs contribute to KG creation and the KG in turn guides LLM reasoning, exemplifies a synergistic interaction focused on mutual enhancement for improved reasoning." assertion.
- CachingMechanism comment "This mechanism optimizes the retrieval of external medical KGs by storing previously retrieved KGs in a local knowledge base. It significantly reduces retrieval times during LLM inference, directly contributing to the efficiency of the overall KG-enhanced LLM system." assertion.
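The caching idea in the entry above reduces to memoizing external retrievals in a local store. The sketch below uses illustrative names (`KGCache`, `slow_external_lookup`) and toy triplets; the real system caches retrieved medical KGs, not these placeholders.

```python
class KGCache:
    """Local knowledge base that memoizes previously retrieved KG subgraphs."""

    def __init__(self, fetch_fn):
        self._store = {}        # entity -> triplets already retrieved
        self._fetch = fetch_fn  # slow external KG lookup (e.g. a remote endpoint)
        self.hits = 0
        self.misses = 0

    def retrieve(self, entity):
        if entity in self._store:
            self.hits += 1      # served from the local knowledge base
        else:
            self.misses += 1    # fall back to the external medical KG
            self._store[entity] = self._fetch(entity)
        return self._store[entity]


def slow_external_lookup(entity):
    # Stand-in for a remote medical-KG query; returns toy triplets.
    return [(entity, "is_a", "Disease")]


cache = KGCache(slow_external_lookup)
first = cache.retrieve("diabetes")   # miss: external retrieval happens once
second = cache.retrieve("diabetes")  # hit: served locally, no external call
```

Repeated questions about the same entities then pay the external retrieval cost only once, which is where the inference-time speedup comes from.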
- DeclarativeConversion comment "This mechanism converts raw medical KG triplets into LLM-digestible declarative sentences. It allows the LLM to better integrate and reason with external knowledge during inference, improving its ability to utilize the provided medical facts." assertion.
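The triplet-to-sentence conversion described above can be sketched minimally. The verbalization rule here (replace underscores, append a period) is an assumption for illustration; the actual system may use richer templates.

```python
def triplet_to_sentence(triplet):
    """Convert a raw (subject, relation, object) triplet into a declarative sentence."""
    subj, rel, obj = triplet
    return f"{subj} {rel.replace('_', ' ')} {obj}."

def verbalize(triplets):
    # Concatenate the declarative sentences into one LLM-digestible context string.
    return " ".join(triplet_to_sentence(t) for t in triplets)

facts = verbalize([("Metformin", "may_treat", "type 2 diabetes"),
                   ("Metformin", "has_drug_class", "biguanide")])
```

The payoff is that the LLM receives fluent statements it was trained on the distribution of, rather than bare `(s, r, o)` tuples.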
- MKGRank comment "This framework enhances English-centric LLMs to perform multilingual medical QA by integrating comprehensive external medical knowledge graphs during the inference stage. It bridges language gaps and provides relevant, up-to-date knowledge to LLMs for better reasoning." assertion.
- MultiAngleRanking comment "This strategy selects the most relevant medical triplets from retrieved KGs by ranking them based on similarity with the question (using UMLS-BERT embeddings) and further filtering with a MedCPT Cross Encoder. It ensures the LLM receives high-quality, pertinent information for accurate inference." assertion.
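The first ranking stage above (similarity between question and triplets) can be sketched with a toy bag-of-words embedding standing in for UMLS-BERT vectors; the MedCPT cross-encoder reranking is only indicated by a comment, since reproducing it here is out of scope. All names are illustrative.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the real system uses UMLS-BERT vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_triplets(question, triplets, top_k=2):
    q = embed(question)
    scored = sorted(triplets,
                    key=lambda t: cosine(q, embed(" ".join(t))),
                    reverse=True)
    # A cross-encoder (MedCPT in the method above) would rerank/filter here.
    return scored[:top_k]

top = rank_triplets("what treats diabetes",
                    [("Metformin", "treats", "diabetes"),
                     ("Aspirin", "treats", "headache"),
                     ("Insulin", "treats", "diabetes")])
```

The two-stage design is the usual retrieve-then-rerank pattern: a cheap bi-encoder similarity narrows the candidates, and an expensive cross-encoder judges only the survivors.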
- SelfInformationMining comment "When external KG retrieval is ineffective, this method employs the BM25 algorithm to extract relevant world knowledge from the LLM's own internal representations. It acts as a fallback strategy to ensure the LLM consistently has background information for medical QA, enhancing its robustness during inference." assertion.
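The BM25 scoring at the heart of the fallback above can be written out in full, since the formula is standard. The documents here are placeholders for whatever background passages the fallback scores; only the BM25 computation itself is faithful.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Minimal Okapi BM25: score each document against the query terms."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(docs)
    df = Counter()                       # document frequency per term
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)                  # term frequency in this document
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            denom = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = ["insulin regulates blood sugar in diabetes",
        "aspirin relieves mild pain"]
scores = bm25_scores("insulin diabetes", docs)
```

Documents sharing no query terms score zero, so the fallback naturally surfaces only passages lexically related to the medical question.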
- WordLevelTranslationMechanism comment "This mechanism, part of MKG-Rank, extracts medical entities from multilingual questions and options using an LLM, then translates them into English. This enables the LLM to leverage English-centric KGs for multilingual QA, enhancing its performance during inference." assertion.
- EfficientLlmKgConstruction comment "This method describes an LLM-based workflow for efficiently constructing domain-specific knowledge graphs. It involves fine-tuning an LLM to extract knowledge triples from unannotated corpora, followed by post-processing for error removal and entity resolution. This directly addresses the problem of knowledge mismatch by building a tailored KG." assertion.
- EnhancedLlmWithKnowledgePrelearningAndFeedback comment "ELPF is a modular, three-stage KG-LLM alignment framework designed to enhance an LLM's capability to utilize KG information and reduce hallucinations. It includes K-LoRA for pre-learning KG infusion and domain linguistic style, supervised fine-tuning with KG retrieval, and Alignment with Knowledge Graph Feedback (AKGF), where KGs act as automated evaluators for DPO-based fine-tuning. The framework synergizes LLMs and KGs to improve reasoning and factual correctness." assertion.
- LiEtAl2024KnowledgeDistillationForTKGRWithLLMs comment "This method proposes a two-stage knowledge distillation framework where LLMs act as teacher models to transfer structural and temporal reasoning capabilities to lightweight student models. The goal is to improve the performance and efficiency of temporal knowledge graph reasoning (a KG completion task) by leveraging the advanced reasoning signals from LLMs." assertion.
- LiEtAl2024KGIntegratedCollaborationScheme comment "This method proposes a training-free collaborative reasoning scheme where LLMs act as agents to iteratively explore KGs, retrieve task-relevant subgraphs, and then combine their inherent implicit knowledge with explicit KG knowledge for step-by-step, transparent reasoning. This synergistic approach aims to enhance LLM reasoning capabilities and interpretability by actively integrating KG interaction into the reasoning process." assertion.
- GLaMLLMSummarization comment "This method leverages the LLM's generative capabilities to rewrite or summarize the encoded graph statements into more coherent representations for fine-tuning. The goal is to enhance semantic alignment between the KG and the LLM's vocabulary, thereby improving the LLM's factual recall and multi-hop reasoning after training." assertion.
- GLaMNodeDescriptorsAdjacencySummarization comment "This method combines multiple encoding strategies, specifically using the LLM's zero-shot capabilities to create text-based node descriptors from the k-hop context subgraph, utilizing adjacency lists, and performing summarization. This comprehensive approach aims to instill robust graph-based reasoning capabilities into the LLM via fine-tuning." assertion.
- GLaMRelationalGrouping comment "This GLaM variant encodes the neighborhood subgraph by including the entire adjacency list of the central node or partitioning neighbors based on relation types. This strategy is used to fine-tune the LLM to better understand graph structure for improved knowledge expression." assertion.
- GLaMTriples comment "This method is a specific implementation within the GLaM framework where the neighborhood subgraph is encoded into (source, relation, target) triples for fine-tuning. It aims to improve the LLM's factual reasoning by embedding graph knowledge directly into its parameters during the training phase." assertion.
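The triple-encoding variant above can be sketched as a routine that serializes a node's neighborhood into `(source, relation, target)` statements and packages them as a fine-tuning example. The prompt template and field names are assumptions for illustration, not GLaM's exact format.

```python
def neighborhood_triples(kg, center):
    """Collect the 1-hop neighborhood of `center` as (source, relation, target) triples."""
    return [t for t in kg if t[0] == center or t[2] == center]

def to_finetuning_example(kg, center, question, answer):
    # Serialize the subgraph as triple statements the LLM is fine-tuned on.
    context = "\n".join(f"({s}, {r}, {t})"
                        for s, r, t in neighborhood_triples(kg, center))
    return {"prompt": f"Knowledge:\n{context}\nQuestion: {question}",
            "completion": answer}

kg = [("Paris", "capital_of", "France"),
      ("France", "located_in", "Europe"),
      ("Paris", "population", "2.1M")]
example = to_finetuning_example(kg, "Paris",
                                "What country is Paris the capital of?", "France")
```

Fine-tuning on many such examples is what embeds the graph knowledge into the model's parameters, as the entry above describes.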
- iracKgAssistedLlmPostTraining comment "This method proposes a comprehensive approach where a specialized IRAC (Issue, Rule, Analysis, Conclusion) Knowledge Graph is constructed and subsequently used to generate high-quality training data. This data is then employed for post-training (via Supervised Fine-Tuning and Direct Preference Optimization) of Large Language Models, thereby enhancing their intrinsic legal reasoning capabilities and knowledge expression. While this is 'post-training' rather than initial 'pre-training', the core mechanism involves fundamentally improving the LLM's internal knowledge through a structured training process, aligning with the goal of enhancing LLM knowledge expression in a training stage." assertion.

- deliberationOverPriorsDP comment "Deliberation over Priors (DP) is a framework designed to enhance the trustworthiness of LLM reasoning by leveraging KG structural and constraint priors. It employs a progressive knowledge distillation strategy to integrate structural priors into LLMs during an offline stage, preparing them for more faithful relation path generation. In the online (inference) stage, a reasoning-introspection strategy guides LLMs to verify reasoning paths against extracted constraint priors, improving the reliability of response generation. The primary goal is to improve LLM performance, specifically its reasoning and response generation at inference time, by explicit use of KG knowledge." assertion.
- ChattyKG comment "Chatty-KG is a multi-agent system where LLMs are employed as specialized agents to perform various tasks critical for conversational question answering over knowledge graphs. These tasks include contextual understanding, entity and relation linking, and SPARQL query generation and execution. The system's primary goal is to enhance the accessibility and performance of KG-based QA by leveraging LLMs to bridge the gap between natural language questions and structured KG data, thereby improving KG tasks." assertion.
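The SPARQL-generation step in the entry above can be illustrated with a template filled in after entity and relation linking, plus a toy matcher in place of a real endpoint. Everything here is a hedged sketch: the template, the `ex:` identifiers, and the matcher are assumptions, and the actual linking is performed by the system's LLM agents.

```python
import re

def build_sparql(entity, relation):
    # Template filled once LLM agents resolve the question to KG identifiers.
    return f"SELECT ?answer WHERE {{ <{entity}> <{relation}> ?answer . }}"

def run_on_toy_store(query, triples):
    """Minimal pattern matcher standing in for a real SPARQL endpoint."""
    m = re.search(r"<(.+?)> <(.+?)> \?answer", query)
    subj, rel = m.group(1), m.group(2)
    return [o for (s, r, o) in triples if s == subj and r == rel]

store = [("ex:Paris", "ex:capital_of", "ex:France"),
         ("ex:Paris", "ex:located_in", "ex:Europe")]
answers = run_on_toy_store(build_sparql("ex:Paris", "ex:capital_of"), store)
```

The hard part the LLM agents solve is upstream of this template: mapping free-form conversational questions onto the right entity and relation identifiers.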
- CKGLLMA comment "CKG-LLMA is a novel framework that integrates LLMs and KGs bidirectionally. It uses LLMs to augment KGs and then leverages the enriched KGs to guide LLMs in generating explanations, combining multiple components for a unified reasoning process in recommendations." assertion.
- ConfidenceAwareExplanationGeneration comment "This method uses a two-step chain-of-thought reasoning procedure, guided by the augmented KG information and confidence scores, to instruct LLMs. It enables LLMs to generate realistic and informative natural language explanations for recommendations, thereby enhancing interpretability." assertion.
- ConfidenceAwareMOEMessagePropagation comment "This mechanism introduces a learnable edge confidence for LLM-enhanced triplets, utilizing a Mixture-of-Experts layer to filter noisy information and effectively aggregate knowledge. It represents knowledge from both LLM augmentations and KG structures in a unified and robust manner." assertion.
- DualViewTwoStepContrastiveLearning comment "This method constructs a dual-view contrastive learning framework, combining differentiable KG augmentation and interaction graph enhancement. Its aim is to learn robust user and item representations by integrating knowledge from both original KGs and LLM-augmented data through self-supervised signals." assertion.
- LLMbasedSubgraphAugmenter comment "This method uses LLMs with specific multi-view prompts (User-view and Item-view) to refine and augment extracted knowledge subgraphs. Its primary goal is to enrich KGs by adding or deleting triplets, directly contributing to KG construction and quality improvement." assertion.
- ReliableReasoningPathRRP comment "RRP is proposed to enhance LLM reasoning for knowledge-intensive tasks by distilling effective guidance from Knowledge Graphs. It operates during the inference stage by generating, refining, and prioritizing relevant reasoning paths from KGs, which are then fed to the LLM. This framework aims to reduce hallucination and improve factual consistency in LLM outputs without requiring retraining." assertion.