
Benchmarking Semantic Web Technology.

This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities, one on RDF(S) interoperability and one on OWL interoperability.


Bibliographic Details
Classification: Electronic book
Main author: García-Castro, R.
Format: Electronic eBook
Language: English
Published: IOS Press, 2009.
Online access: Full text
Table of Contents:
  • Title Page; Acknowledgements; Contents; Introduction; Context; The Semantic Web; Brief introduction to Semantic Web technologies; Semantic Web technology evaluation; The need for benchmarking in the Semantic Web; Semantic Web technology interoperability; Heterogeneity in ontology representation; The interoperability problem; Categorising ontology differences; Thesis contributions; Thesis structure; State of the Art; Software evaluation; Benchmarking; Benchmarking vs evaluation; Benchmarking classifications; Evaluation and improvement methodologies; Benchmarking methodologies
  • Software Measurement methodologies; Experimental Software Engineering methodologies; Benchmark suites; Previous interoperability evaluations; Conclusions; Work objectives; Thesis goals and open research problems; Contributions to the state of the art; Work assumptions, hypothesis and restrictions; Benchmarking methodology for Semantic Web technologies; Design principles; Research methodology; Selection of relevant processes; Identification of the main tasks; Task adaption and completion; Analysis of task dependencies; Benchmarking methodology; Benchmarking actors; Benchmarking process
  • Plan phase; Experiment phase; Improvement phase; Recalibration task; Organizing the benchmarking activities; Plan phase; Experiment phase; RDF(S) Interoperability Benchmarking; Experiment definition; RDF(S) Import Benchmark Suite; RDF(S) Export Benchmark Suite; RDF(S) Interoperability Benchmark Suite; Experiment execution; Experiments performed; Experiment automation; RDF(S) import results; KAON RDF(S) import results; Protege-Frames RDF(S) import results; WebODE RDF(S) import results; Corese, Jena and Sesame RDF(S) import results; Evolution of RDF(S) import results; Global RDF(S) import results
  • RDF(S) export results; KAON RDF(S) export results; Protege-Frames RDF(S) export results; WebODE RDF(S) export results; Corese, Jena and Sesame RDF(S) export results; Evolution of RDF(S) export results; Global RDF(S) export results; RDF(S) interoperability results; KAON interoperability results; Protege-Frames interoperability results; WebODE interoperability results; Global RDF(S) interoperability results; OWL Interoperability Benchmarking; Experiment definition; The OWL Lite Import Benchmark Suite; Benchmarks that depend on the knowledge model; Benchmarks that depend on the syntax
  • Description of the benchmarks; Towards benchmark suites for OWL DL and Full; Experiment execution: the IBSE tool; IBSE requirements; IBSE implementation; Using IBSE; OWL compliance results; GATE OWL compliance results; Jena OWL compliance results; KAON2 OWL compliance results; Protege-Frames OWL compliance results; Protege-OWL OWL compliance results; SemTalk OWL compliance results; SWI-Prolog OWL compliance results; WebODE OWL compliance results; Global OWL compliance results; OWL interoperability results; OWL interoperability results per tool; Global OWL interoperability results