Description
This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, by proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, by presenting two international benchmarking activities on the interoperability of Semantic Web technologies, using RDF(S) as the interchange language in one activity and OWL in the other.
The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined: the experiments, the benchmark suites, and the software that supports the process.
Furthermore, the book encourages practitioners to pursue the continuous improvement of semantic technologies by means of their continuous evaluation, and it presents future lines of research.
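As an illustration of the kind of check these benchmarking activities rely on (this sketch is not part of the book and does not reproduce its IBSE tool), an interoperability experiment typically imports a benchmark ontology into a tool, exports it back to the interchange language, and compares the resulting graph with the original. A minimal sketch using Apache Jena, with placeholder file names, might look as follows:

// Minimal sketch (not the book's IBSE tool): checks whether an RDF(S)
// round trip preserves the original graph. File names are placeholders;
// assumes a current Apache Jena release on the classpath.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class RoundTripCheck {
    public static void main(String[] args) throws Exception {
        // 1. Load the benchmark ontology (the origin file of a benchmark).
        Model origin = ModelFactory.createDefaultModel();
        try (InputStream in = new FileInputStream("benchmark-origin.rdf")) {
            origin.read(in, null, "RDF/XML");
        }

        // 2. A real experiment would import this file into the tool under test
        //    and ask the tool to export it again; here the model is simply
        //    serialised back to RDF/XML to stand in for that exported file.
        try (OutputStream out = new FileOutputStream("benchmark-exported.rdf")) {
            origin.write(out, "RDF/XML");
        }

        // 3. Re-read the exported file and compare both graphs. Isomorphic
        //    graphs mean no knowledge was lost or altered in the interchange.
        Model exported = ModelFactory.createDefaultModel();
        try (InputStream in = new FileInputStream("benchmark-exported.rdf")) {
            exported.read(in, null, "RDF/XML");
        }
        System.out.println(origin.isIsomorphic(exported) ? "SAME" : "DIFFERENT");
    }
}

Graph isomorphism, rather than byte-level comparison, is used so that irrelevant serialisation differences (blank-node labels, attribute ordering) are not counted as interoperability failures.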
Table of Contents
Previous interoperability evaluations
Thesis goals and open research problems
Contributions to the state of the art
Work assumptions, hypotheses and restrictions
Benchmarking methodology for Semantic Web technologies
Selection of relevant processes
Identification of the main tasks
Task adaptation and completion
Analysis of task dependencies
Organizing the benchmarking activities
RDF(S) Interoperability Benchmarking
RDF(S) Import Benchmark Suite
RDF(S) Export Benchmark Suite
RDF(S) Interoperability Benchmark Suite
KAON RDF(S) import results
Protégé-Frames RDF(S) import results
WebODE RDF(S) import results
Corese, Jena and Sesame RDF(S) import results
Evolution of RDF(S) import results
Global RDF(S) import results
KAON RDF(S) export results
Protégé-Frames RDF(S) export results
WebODE RDF(S) export results
Corese, Jena and Sesame RDF(S) export results
Evolution of RDF(S) export results
Global RDF(S) export results
RDF(S) interoperability results
KAON interoperability results
Protégé-Frames interoperability results
WebODE interoperability results
Global RDF(S) interoperability results
OWL Interoperability Benchmarking
The OWL Lite Import Benchmark Suite
Benchmarks that depend on the knowledge model
Benchmarks that depend on the syntax
Description of the benchmarks
Towards benchmark suites for OWL DL and Full
Experiment execution: the IBSE tool
GATE OWL compliance results
Jena OWL compliance results
KAON2 OWL compliance results
Protégé-Frames OWL compliance results
Protégé-OWL OWL compliance results
SemTalk OWL compliance results
SWI-Prolog OWL compliance results
WebODE OWL compliance results
Global OWL compliance results
OWL interoperability results
OWL interoperability results per tool
Global OWL interoperability results
Evolution of OWL interoperability results
Conclusions and future research lines
Development and use of the benchmarking methodology
Benchmarking interoperability
RDF(S) and OWL interoperability results
Appendix A. Combinations of the RDF(S) components
A.1. Benchmarks with single components
A.2. Benchmarks with combinations of two components
A.3. Benchmarks with combinations of more than two components
Appendix B. The RDF(S) benchmark suites
B.1. RDF(S) Import Benchmark Suite
B.2. RDF(S) Export and Interoperability Benchmark Suites
Appendix C. Combinations of the OWL Lite components
C.1. Benchmarks for classes
C.2. Benchmarks for properties
C.3. Benchmarks for instances
Appendix D. The OWL Lite Import Benchmark Suite
D.2. Description of ontologies in DL
Appendix E. The IBSE ontologies