Benchmarking Semantic Web Technology (Studies on the Semantic Web)

Publication series: Studies on the Semantic Web

Author: García-Castro, R.

Publisher: IOS Press

Publication year: 2009

E-ISBN: 9781614993377

P-ISBN (Hardback): 9781607500537

Subject: TP18 Artificial Intelligence Theory

Keyword: Automation technology; computer technology

Language: ENG



Description

This book addresses the problem of benchmarking Semantic Web technologies from two perspectives. First, from a methodological point of view, it proposes a general methodology to follow in benchmarking activities over Semantic Web technologies. Second, from a practical point of view, it presents two international benchmarking activities concerned with the interoperability of Semantic Web technologies, using RDF(S) as the interchange language in one activity and OWL in the other. The book describes in detail how the different resources needed for these interoperability benchmarking activities were defined: the experiments, the benchmark suites, and the software that supports the process. Furthermore, it invites practitioners to pursue the continuous improvement of semantic technologies through continuous evaluation, and it presents future lines of research.

Contents

Benchmark suites

Previous interoperability evaluations

Conclusions

Work objectives

Thesis goals and open research problems

Contributions to the state of the art

Work assumptions, hypothesis and restrictions

Benchmarking methodology for Semantic Web technologies

Design principles

Research methodology

Selection of relevant processes

Identification of the main tasks

Task adaptation and completion

Analysis of task dependencies

Benchmarking methodology

Benchmarking actors

Benchmarking process

Plan phase

Experiment phase

Improvement phase

Recalibration task

Organizing the benchmarking activities

Plan phase

Experiment phase

RDF(S) Interoperability Benchmarking

Experiment definition

RDF(S) Import Benchmark Suite

RDF(S) Export Benchmark Suite

RDF(S) Interoperability Benchmark Suite

Experiment execution

Experiments performed

Experiment automation

RDF(S) import results

KAON RDF(S) import results

Protege-Frames RDF(S) import results

WebODE RDF(S) import results

Corese, Jena and Sesame RDF(S) import results

Evolution of RDF(S) import results

Global RDF(S) import results

RDF(S) export results

KAON RDF(S) export results

Protege-Frames RDF(S) export results

WebODE RDF(S) export results

Corese, Jena and Sesame RDF(S) export results

Evolution of RDF(S) export results

Global RDF(S) export results

RDF(S) interoperability results

KAON interoperability results

Protege-Frames interoperability results

WebODE interoperability results

Global RDF(S) interoperability results

OWL Interoperability Benchmarking

Experiment definition

The OWL Lite Import Benchmark Suite

Benchmarks that depend on the knowledge model

Benchmarks that depend on the syntax

Description of the benchmarks

Towards benchmark suites for OWL DL and OWL Full

Experiment execution: the IBSE tool

IBSE requirements

IBSE implementation

Using IBSE

OWL compliance results

GATE OWL compliance results

Jena OWL compliance results

KAON2 OWL compliance results

Protege-Frames OWL compliance results

Protege-OWL OWL compliance results

SemTalk OWL compliance results

SWI-Prolog OWL compliance results

WebODE OWL compliance results

Global OWL compliance results

OWL interoperability results

OWL interoperability results per tool

Global OWL interoperability results

Evolution of OWL interoperability results

OWL compliance results

OWL interoperability results

Conclusions and future research lines

Development and use of the benchmarking methodology

Benchmarking interoperability

RDF(S) and OWL interoperability results

Open research problems

Bibliography

Appendices

Appendix A. Combinations of the RDF(S) components

A.1. Benchmarks with single components

A.2. Benchmarks with combinations of two components

A.3. Benchmarks with combinations of more than two components

Appendix B. The RDF(S) benchmark suites

B.1. RDF(S) Import Benchmark Suite

B.2. RDF(S) Export and Interoperability Benchmark Suites

Appendix C. Combinations of the OWL Lite components

C.1. Benchmarks for classes

C.2. Benchmarks for properties

C.3. Benchmarks for instances

Appendix D. The OWL Lite Import Benchmark Suite

D.1. List of benchmarks

D.2. Description of ontologies in DL

Appendix E. The IBSE ontologies
