Answer Candidate Type Selection: Text-To-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs
Mikhail Salnikov, Maria Lysyuk, Pavel Braslavski, Anton Razzhigaev, Valentin Malykh, Alexander Panchenko
Proceedings of KONVENS 2023, pp. 155–164
ACT Selection is a lightweight post-processing method that improves Knowledge Graph Question Answering (KGQA) by filtering and re-ranking answer candidates generated by pre-trained Text-to-Text models (e.g., T5, BART).
The key insight: even when a closed-book LM generates an incorrect answer, it often predicts the correct answer type. We leverage Wikidata's instance_of (P31) property to extract candidate types and rerank answers using a simple scoring function.
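As a minimal illustration of the type-extraction idea, `instance_of` (P31) QIDs can be read out of a Wikidata `wbgetclaims`-style response. The sample payload below is hand-made for illustration; in practice it would come from the Wikidata API:

```python
# Sketch: pull instance_of (P31) QIDs from a wbgetclaims-style response.
def instance_of_types(claims_response):
    claims = claims_response.get("claims", {}).get("P31", [])
    return [c["mainsnak"]["datavalue"]["value"]["id"] for c in claims]

# Hand-crafted example payload (shape follows the wbgetclaims format)
sample = {
    "claims": {
        "P31": [
            {"mainsnak": {"datavalue": {"value": {"id": "Q7889"}}}},  # Q7889 = video game
        ]
    }
}
print(instance_of_types(sample))  # ['Q7889']
```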
- Candidate Generation: Diverse Beam Search over a fine-tuned seq2seq model produces an initial list of answer candidates.
- Answer Type Extraction: Aggregate `instance_of` types from candidates; merge semantically similar types using Sentence-BERT.
- Entity Linking: Extract question entities via fine-tuned spaCy NER + mGENRE; enrich candidates with one-hop Wikidata neighbors.
- Candidate Scoring: Rank candidates using a weighted sum of four signals:
  - `S_type`: intersection between candidate types and predicted answer types
  - `S_neighbour`: binary score indicating whether the candidate is a one-hop neighbor of a question entity
  - `S_t2t`: rank from the original Text-to-Text model output
  - `S_property`: cosine similarity between the question and the candidate property (Sentence-BERT)
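The four signals can be combined as in the following sketch. The weights and the exact per-signal formulas here are illustrative stand-ins, not the paper's tuned definitions:

```python
# Sketch of the weighted-sum candidate scoring; weights and per-signal
# formulas are illustrative, not the paper's tuned values.

def score_candidate(cand, answer_types, question_neighbors,
                    weights=(1.0, 1.0, 1.0, 1.0)):
    w_type, w_nb, w_t2t, w_prop = weights
    # S_type: fraction of candidate types among the predicted answer types
    s_type = len(cand["types"] & answer_types) / max(len(cand["types"]), 1)
    # S_neighbour: 1 if the candidate is a one-hop neighbor of a question entity
    s_nb = 1.0 if cand["id"] in question_neighbors else 0.0
    # S_t2t: reciprocal rank in the original seq2seq beam output
    s_t2t = 1.0 / cand["beam_rank"]
    # S_property: precomputed Sentence-BERT cosine similarity (question vs. property)
    s_prop = cand["property_sim"]
    return w_type * s_type + w_nb * s_nb + w_t2t * s_t2t + w_prop * s_prop

# Rerank two hypothetical candidates (QIDs and scores are made up)
candidates = [
    {"id": "Q123", "types": {"Q4830453"}, "beam_rank": 2, "property_sim": 0.7},
    {"id": "Q456", "types": {"Q5"}, "beam_rank": 1, "property_sim": 0.2},
]
ranked = sorted(
    candidates,
    key=lambda c: score_candidate(c, {"Q4830453"}, {"Q123"}),
    reverse=True,
)
print([c["id"] for c in ranked])  # ['Q123', 'Q456']
```

Note how the type and neighbor signals let a lower-ranked beam candidate ("Q123", beam rank 2) overtake the model's top beam ("Q456") once its type matches the predicted answer type and it is a neighbor of a question entity.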
The full implementation is integrated into the M3M demo:
Main pipeline code:
app/pipelines/act_selection.py
| Endpoint | Description |
|---|---|
| `POST /pipeline/act_selection/ner` | NER + sentence insertion |
| `POST /pipeline/act_selection/mgenre` | Entity linking via mGENRE |
| `POST /pipeline/act_selection/seq2seq` | Raw Text-to-Text generation |
| `POST /pipeline/act_selection/main` | Full ACT Selection pipeline (with all scores) |
| `POST /pipeline/act_selection/simple_type_selection/` | Lightweight version (type + neighbor scores only) |
| `POST /pipeline/act_selection/simple_with_description_qustion_similarity_type_selection/` | Extended version with description-question similarity |
```python
import requests

response = requests.post(
    "http://localhost:8000/pipeline/act_selection/main",
    json={"text": "Who published Neo Contra?"},
)
result = response.json()
print(result["answers"][:5])  # Top-5 ranked Wikidata entities
```

```bibtex
@inproceedings{salnikov-etal-2023-answer,
    title = "Answer Candidate Type Selection: Text-To-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs",
    author = "Salnikov, Mikhail and
      Lysyuk, Maria and
      Braslavski, Pavel and
      Razzhigaev, Anton and
      Malykh, Valentin A. and
      Panchenko, Alexander",
    editor = "Georges, Munir and
      Herygers, Aaricia and
      Friedrich, Annemarie and
      Roth, Benjamin",
    booktitle = "Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)",
    month = sep,
    year = "2023",
    address = "Ingolstadt, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.konvens-main.16/",
    pages = "155--164"
}
```