Matches in Nanopublications for { ?s ?p ?o <https://w3id.org/np/RAVrgcBn3vWKbPnxAb24pbaEkK9nm9MPRUdzWSkECHz6c/assertion>. }
Showing items 1 to 44 of 44 (100 items per page).
- arXiv.2412.04690 type Entity assertion.
- AttrGNN type Workflow assertion.
- BERTINT type Workflow assertion.
- ChatEA type Workflow assertion.
- DERA type Workflow assertion.
- DERAR type Workflow assertion.
- GCNAlign type Workflow assertion.
- HMAN type Workflow assertion.
- LLMAlign type Workflow assertion.
- LLMEA type Workflow assertion.
- TEA type Workflow assertion.
- AttrGNN label "AttrGNN" assertion.
- BERTINT label "BERT-INT" assertion.
- ChatEA label "ChatEA" assertion.
- DERA label "DERA" assertion.
- DERAR label "DERA-R" assertion.
- GCNAlign label "GCN-Align" assertion.
- HMAN label "HMAN" assertion.
- LLMAlign label "LLM-Align" assertion.
- LLMEA label "LLMEA" assertion.
- TEA label "TEA" assertion.
- AttrGNN comment "AttrGNN is a representative method for modeling the topological structure of attribute triples using Graph Neural Networks. It is included as a baseline to explore the effectiveness of attribute information modeling through GNNs compared to LLMs." assertion.
- BERTINT comment "BERT-INT is an Entity Alignment method that uses the BERT language model for cross-graph interactive modeling of semantic information. It is used as a baseline to evaluate the performance of LLM-Align." assertion.
- ChatEA comment "ChatEA is an Entity Alignment approach that utilizes an existing EA model to generate alignment candidates and then leverages the reasoning capabilities of LLMs to predict the final results. It is discussed as related work and used as a baseline for comparison." assertion.
- DERA comment "DERA is an Entity Alignment method based on heterogeneous parsing with large language models, used as a state-of-the-art baseline for comparison against LLM-Align's performance." assertion.
- DERAR comment "DERA-R is a simplified version of DERA, serving as a base model for candidate alignment selection within the LLM-Align framework and also as a strong baseline for performance comparison." assertion.
- GCNAlign comment "GCN-Align is an Entity Alignment model that uses Graph Convolutional Networks (GCNs) to encode attribute information into entity embeddings. It serves as a base model for candidate entity selection within the LLM-Align framework and also as a baseline for performance comparison." assertion.
- HMAN comment "HMAN is an Entity Alignment method that employs fine-grained ranking techniques from information retrieval to re-rank candidate entities. It is used as a baseline for performance comparison in the experiments." assertion.
- LLMAlign comment "LLM-Align is a novel framework for Entity Alignment (EA) that leverages Large Language Models (LLMs) to improve EA performance. It combines candidate alignment selection using existing EA models with LLM-based reasoning, enhanced by heuristic attribute and relation selection for prompt construction and a multi-round voting mechanism to mitigate LLM hallucinations and positional bias. This method directly addresses a core task in Knowledge Graph construction by enabling LLMs to perform cross-KG entity resolution." assertion.
- LLMEA comment "LLMEA is an Entity Alignment method that integrates knowledge from both Knowledge Graphs and Large Language Models to predict entity alignments. It is discussed as related work and used as a baseline to compare the performance of LLM-Align." assertion.
- TEA comment "TEA is a classic method for Entity Alignment based on language model modeling, organized around textual entailment. It is included as a baseline to explore the impact of PLMs versus LLMs on EA tasks." assertion.
- LLMAlign subject LLMAugmentedKGConstruction assertion.
- arXiv.2412.04690 title "LLM-Align: Utilizing Large Language Models for Entity Alignment in Knowledge Graphs" assertion.
- arXiv.2412.04690 describes LLMAlign assertion.
- arXiv.2412.04690 discusses AttrGNN assertion.
- arXiv.2412.04690 discusses BERTINT assertion.
- arXiv.2412.04690 discusses ChatEA assertion.
- arXiv.2412.04690 discusses DERA assertion.
- arXiv.2412.04690 discusses DERAR assertion.
- arXiv.2412.04690 discusses GCNAlign assertion.
- arXiv.2412.04690 discusses HMAN assertion.
- arXiv.2412.04690 discusses LLMEA assertion.
- arXiv.2412.04690 discusses TEA assertion.
- LLMAlign hasTopCategory LLMAugmentedKG assertion.
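Every row above is a plain subject-predicate-object triple drawn from the same assertion graph. As a rough illustration only (not the actual nanopublication query API, and using shortened labels rather than resolvable IRIs), the kind of pattern match behind a listing like this can be sketched with in-memory tuples:

```python
# Illustrative sketch: a handful of the triples from the listing above,
# modeled as (subject, predicate, object) tuples. Labels are shortened
# stand-ins for the real IRIs in the assertion graph.
TRIPLES = [
    ("arXiv.2412.04690", "type", "Entity"),
    ("LLMAlign", "type", "Workflow"),
    ("LLMAlign", "label", "LLM-Align"),
    ("LLMAlign", "hasTopCategory", "LLMAugmentedKG"),
    ("arXiv.2412.04690", "describes", "LLMAlign"),
    ("arXiv.2412.04690", "discusses", "ChatEA"),
    ("arXiv.2412.04690", "discusses", "DERA"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching the pattern; None plays the role of a ?variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Analogue of the pattern { arXiv.2412.04690 :discusses ?o }:
discussed = [o for _, _, o in match(TRIPLES, s="arXiv.2412.04690", p="discusses")]
print(discussed)  # ['ChatEA', 'DERA']
```

The header pattern `{ ?s ?p ?o <…/assertion>. }` works the same way, except the wildcard covers all three positions and the fixed term is the graph the triples live in.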