About
The maturation of Semantic Web technologies cannot be separated from the need for high-quality, reusable ontologies. However, traditional ontology engineering is a notoriously difficult, time-consuming, and expert-driven process. The advent of LLMs presents a paradigm shift, promising to accelerate and democratize this process and to enable specialists from beyond computer science to develop their own models.
While the potential is enormous, automated ontology creation from LLMs presents significant challenges that the Semantic Web and knowledge engineering communities have yet to systematically address. If the foundational ontologies generated by LLMs are of poor quality, incoherent, or formally incorrect, the downstream AI systems built upon them will inherit these critical flaws.
This workshop focuses on the emerging and rapidly evolving intersection of Large Language Models (LLMs), semantic technologies, and ontology engineering. It addresses the challenges and opportunities associated with leveraging LLMs for ontology creation, refinement, and validation. The workshop encompasses both theoretical and practical aspects of LLM-based ontology construction, including semi-automated/fully automated ontology generation pipelines, evaluation methodologies, and strategies for reducing hallucinations.
Motivation: This workshop is motivated by three key observations: (1) the growing practical deployment of LLMs in knowledge graph and ontology construction projects with minimal formal evaluation frameworks; (2) the lack of systematic comparison between LLM-based and traditional ontology engineering approaches; and (3) the absence of community consensus on appropriate evaluation methodologies and quality metrics specific to LLM-generated ontologies.
Topics of Interest
We welcome contributions on topics concerning the development and assessment of high-quality ontologies, both manually engineered and automatically generated using large language models (LLMs). The main topics of interest include, but are not limited to, the following:
- LLM-to-KG with schema constraints: Enforcing structured templates and ontology schemas during triple generation.
- Education & UX: Developing LLM-driven tutors for CQ/axiom authoring and automated documentation.
- Evidence-linked triple extraction: Capturing direct evidence sentences and document sources for traceability.
- Hallucination benchmarking: Metrics and datasets for measuring hallucination severity in KG extraction.
- Post-hoc KG repair: Applying symbolic reasoners and neural consistency models to detect/correct errors.
- Calibration and abstention: Incorporating probabilistic calibration to allow abstention on uncertain links.
- Robustness and red-teaming: Stress-testing model robustness using adversarial inputs and perturbations.
- Domain applications: Deploying KGs within domains (e.g., biomedical, climate) to quantify decision impact.
- Lifecycle and Maintenance: LLM-assisted ontology evolution, versioning, CI/CD integration, and refactoring.
- Modular Evaluation & Benchmarks: Component-level metrics, task cards, and error taxonomies.
- Provenance & Governance: Designing evidence-traceable axioms, audit trails, and governance mechanisms.
- Neuro-symbolic Control: Reasoner-in-the-loop decoding and learned validators for logical soundness.
- Human-in-the-Loop Protocols: Role hierarchies and economic evaluation for expert participation.
- Domain Adaptation: Adapters, RAG with domain-specific KGs, and drift management.
- Multilingual & Multimodal OE: Pipelines translating text, tables, and figures into consistent axioms.
- Operational Efficiency: Measuring cost, energy efficiency, and latency of proprietary vs. open-source models.
- Standards & Community Alignment: Integration with FAIR principles, OBO/ODP standards, OAEI, and LLMs4OL.
Submission Instructions
Papers will be peer-reviewed (single-blind). The workshop proceedings will be published by CEUR-WS, and manuscripts must follow the CEUR-WS template. Authors will also have the option of not archiving their submissions on CEUR-WS.
Manuscripts will be submitted through EasyChair.
For all manuscript submissions, at least one author must agree to review another paper submitted to the workshop.
Paper Length:
Short papers: 5–9 pages (including references).
Regular papers: at least 10 pages (including references).
Presentation Requirement: At least one author of each accepted paper must register for the workshop and present the paper in person.
Important Dates
- Workshop paper submissions: March 9, 2026 (extended: March 16, 2026)
- Workshop program with list of accepted papers available online: April 20, 2026
- Camera-ready papers due: May 5, 2026
- Workshop date: May 11, 2026 (co-located with ESWC 2026)
Programme
The workshop runs as a full-day event on May 11, 2026. All times are local to the conference venue.
Session 1: Opening and Keynote (09:00 – 10:30)
- 09:00 – 09:45: Keynote (45 min; the session chair welcomes attendees in the introduction)
- 09:45 – 10:07: Paper 1: Ground-truth Construction and Evaluation of LLM Contribution to Life Sciences Tool Annotation, Ulysse Le Clanche et al. (15 min presentation + 7 min Q&A)
- 10:07 – 10:30: Paper 2: Benchmarking Resource-Efficient LLMs for Research Topic Ontology Generation in the Biomedical Field, Tanay Aggarwal et al. (15 min presentation + 8 min Q&A)
- 10:30 – 11:00: Coffee Break
Session 2: Four Papers (11:00 – 12:20)
- 11:00 – 11:20: Paper 3: Automating AI Risk Profiling with LLM-Engineered OWL 2 Axioms, Manas Goyal et al. (15 min presentation + 5 min Q&A)
- 11:20 – 11:40: Paper 4: Pitfalls in AI-Generated Ontologies: Strategies for Detection and Mitigation, Pasquale Lisena et al. (15 min presentation + 5 min Q&A)
- 11:40 – 12:00: Paper 5: PERSEUS: PERceptual Semantic Extraction & Unified System, Aryan Singh Dalal and Hande McGinty (15 min presentation + 5 min Q&A)
- 12:00 – 12:20: Paper 6: OG-NSD: Neuro-Symbolic Ontology Drafting from Natural-Language Requirements, Efstratios Skaperdas et al. (15 min presentation + 5 min Q&A)
- 12:20 – 14:00: Lunch Break
Session 3: Three Papers and Closing (14:00 – 15:30)
- 14:00 – 14:22: Paper 7: MASEO: A Multi-Agent System for Explainable Ontology Generation, Jiayi Li et al. (15 min presentation + 7 min Q&A)
- 14:22 – 14:45: Paper 8: IDEA2: Expert-in-the-loop competency question elicitation for collaborative ontology engineering, Elliott Watkiss-Leek et al. (15 min presentation + 8 min Q&A)
- 14:45 – 15:07: Paper 9: Towards Automated Ontology Generation from Unstructured Text: A Multi-Agent LLM Approach, Abid Talukder et al. (15 min presentation + 7 min Q&A)
- 15:07 – 15:30: Closing discussion and wrap-up
- 15:30: Workshop ends at the afternoon coffee break
Organization
General Chairs
- Aryan Singh Dalal (General Co-chair), Kansas State University, USA
- Kathleen Jagodnik (General Co-chair), Kansas State University, USA
- Maria Maleshkova (General Co-chair), Helmut Schmidt University, Germany
- Hande McGinty (General Co-chair), Kansas State University, USA
- Cogan Shimizu (General Co-chair), Wright State University, USA
Program Committee
- Maaike de Boer, TNO, Netherlands
- Robert Buchmann, Babeș-Bolyai University, Romania
- Alessandro Oltramari, Bosch Research, USA
- Nicole Obretincheva, King’s College London, United Kingdom
- Yihang Zhao, King’s College London, United Kingdom
- Christian Bizer, University of Mannheim, Germany
- Hanna Abi Akl, Inria, France
- Aldo Gangemi, University of Bologna, Italy
- Anna Sofia Lippolis, ISTC-CNR, Italy
- Daniel Garijo, Universidad Politécnica de Madrid, Spain
- María Poveda-Villalón, Universidad Politécnica de Madrid, Spain
- Miguel Ceriani, CNR, Italy
- Paul Groth, University of Amsterdam, Netherlands
- Simon Burbach, Helmut Schmidt University, Germany
- Vaibhav Gupta, Helmut Schmidt University, Germany