
ACT Selection: Answer Candidate Type Selection


Answer Candidate Type Selection: Text-To-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs
Mikhail Salnikov, Maria Lysyuk, Pavel Braslavski, Anton Razzhigaev, Valentin Malykh, Alexander Panchenko
Proceedings of KONVENS 2023, pp. 155–164

πŸ“Œ Overview

ACT Selection is a lightweight post-processing method that improves Knowledge Graph Question Answering (KGQA) by filtering and re-ranking answer candidates generated by pre-trained Text-to-Text models (e.g., T5, BART).

The key insight: even when a closed-book LM generates an incorrect answer, it often predicts the correct answer type. We leverage Wikidata's instance_of (P31) property to extract candidate types and rerank answers using a simple scoring function.
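For illustration, candidate types can be read off a Wikidata SPARQL result for `instance_of` (P31). The helper below is a minimal sketch of that parsing step, assuming a standard SPARQL JSON response with a `?type` variable; the function name and the trimmed sample response are illustrative, not the repository's actual code.

```python
# Sketch: collecting instance_of (P31) type QIDs from a Wikidata
# SPARQL JSON response. Assumes the query bound each type IRI to ?type.

def extract_p31_types(sparql_json):
    """Return the QIDs bound to ?type in a SPARQL JSON result."""
    return [
        binding["type"]["value"].rsplit("/", 1)[-1]
        for binding in sparql_json["results"]["bindings"]
    ]

# Trimmed illustrative response for a video-game candidate entity:
# Q7889 is the Wikidata item for "video game".
sample = {
    "results": {
        "bindings": [
            {"type": {"value": "http://www.wikidata.org/entity/Q7889"}}
        ]
    }
}
print(extract_p31_types(sample))  # ['Q7889']
```

Types collected this way across all candidates are then aggregated and merged (step 2 of the pipeline below) before scoring.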

πŸ”§ Pipeline Components

  1. Candidate Generation: Diverse Beam Search over a fine-tuned seq2seq model produces an initial list of answer candidates.
  2. Answer Type Extraction: Aggregate instance_of types from candidates; merge semantically similar types using Sentence-BERT.
  3. Entity Linking: Extract question entities via fine-tuned spaCy NER + mGENRE; enrich candidates with one-hop Wikidata neighbors.
  4. Candidate Scoring: Rank candidates using a weighted sum of four signals:
    • S_type: Intersection between candidate types and predicted answer types
    • S_neighbour: Binary score indicating whether the candidate is a one-hop neighbor of a question entity
    • S_t2t: Rank from the original Text-to-Text model output
    • S_property: Cosine similarity between question and candidate property (Sentence-BERT)
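The weighted sum over these four signals can be sketched as follows. The weights, function signature, and candidate values are illustrative placeholders, not the tuned values or code from the paper.

```python
# Minimal sketch of the weighted-sum candidate scoring described above.
# Equal weights are an assumption; in practice the weights would be
# tuned on a development set.

def score_candidate(s_type, s_neighbour, s_t2t, s_property,
                    weights=(1.0, 1.0, 1.0, 1.0)):
    """Combine the four signals into a single ranking score."""
    signals = (s_type, s_neighbour, s_t2t, s_property)
    return sum(w * s for w, s in zip(weights, signals))

# Two hypothetical candidates: (S_type, S_neighbour, S_t2t, S_property)
candidates = {
    "candidate_a": (1.0, 1, 0.9, 0.62),  # right type, 1-hop neighbor
    "candidate_b": (0.0, 0, 0.8, 0.41),  # high T2T rank, wrong type
}
ranked = sorted(candidates,
                key=lambda c: score_candidate(*candidates[c]),
                reverse=True)
print(ranked[0])  # candidate_a
```

Note how the type and neighbor signals let a lower-ranked generation overtake a confident but wrong model output, which is the core effect of the re-ranking.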

πŸš€ Usage

The full implementation is integrated into the M3M demo:

πŸ”— Main pipeline code:
app/pipelines/act_selection.py

Available Endpoints (FastAPI)

| Endpoint | Description |
| --- | --- |
| `POST /pipeline/act_selection/ner` | NER + sentence insertion |
| `POST /pipeline/act_selection/mgenre` | Entity linking via mGENRE |
| `POST /pipeline/act_selection/seq2seq` | Raw Text-to-Text generation |
| `POST /pipeline/act_selection/main` | Full ACT Selection pipeline (with all scores) |
| `POST /pipeline/act_selection/simple_type_selection/` | Lightweight version (type + neighbor scores only) |
| `POST /pipeline/act_selection/simple_with_description_qustion_similarity_type_selection/` | Extended version with description-question similarity |

Example Request

```python
import requests

# Query the full ACT Selection pipeline for a ranked answer list.
response = requests.post(
    "http://localhost:8000/pipeline/act_selection/main",
    json={"text": "Who published Neo Contra?"},
)
result = response.json()
print(result["answers"][:5])  # Top-5 ranked Wikidata entities
```

Citation (BibTeX)

```bibtex
@inproceedings{salnikov-etal-2023-answer,
    title = "Answer Candidate Type Selection: Text-To-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs",
    author = "Salnikov, Mikhail  and
      Lysyuk, Maria  and
      Braslavski, Pavel  and
      Razzhigaev, Anton  and
      Malykh, Valentin A.  and
      Panchenko, Alexander",
    editor = "Georges, Munir  and
      Herygers, Aaricia  and
      Friedrich, Annemarie  and
      Roth, Benjamin",
    booktitle = "Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)",
    month = sep,
    year = "2023",
    address = "Ingolstadt, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.konvens-main.16/",
    pages = "155--164"
}
```
