LEADER 00000cam a2200000Ma 4500
001    EBOOKCENTRAL_ocn867820209
003    OCoLC
005    20240329122006.0
006    m o d
007    cr |n|||||||||
008    140110s2009 xx o 000 0 eng d
040 __ |a IDEBK |b eng |e pn |c IDEBK |d EBLCP |d OCLCQ |d DEBSZ |d OCLCQ |d ZCU |d MERUC |d OCLCQ |d ICG |d OCLCO |d OCLCF |d OCLCQ |d DKC |d AU@ |d OCLCQ |d OCLCO |d SGP |d OCLCQ |d OCLCO |d OCLCL
020 __ |a 1306284708 |q (ebk)
020 __ |a 9781306284707 |q (ebk)
020 __ |a 9781614993377
020 __ |a 1614993378
029 1_ |a DEBBG |b BV044065883
029 1_ |a DEBSZ |b 431596417
035 __ |a (OCoLC)867820209
037 __ |a 559721 |b MIL
050 _4 |a TK5105.88815 .G37 2009
082 04 |a 621.38
049 __ |a UAMI
100 1_ |a García-Castro, R.
245 10 |a Benchmarking Semantic Web Technology.
260 __ |b IOS Press, |c 2009.
300 __ |a 1 online resource
336 __ |a text |b txt |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
588 0_ |a Print version record.
505 0_ |a Title Page; Acknowledgements; Contents; Introduction; Context; The Semantic Web; Brief introduction to Semantic Web technologies; Semantic Web technology evaluation; The need for benchmarking in the Semantic Web; Semantic Web technology interoperability; Heterogeneity in ontology representation; The interoperability problem; Categorising ontology differences; Thesis contributions; Thesis structure; State of the Art; Software evaluation; Benchmarking; Benchmarking vs evaluation; Benchmarking classifications; Evaluation and improvement methodologies; Benchmarking methodologies
505 8_ |a Software Measurement methodologies; Experimental Software Engineering methodologies; Benchmark suites; Previous interoperability evaluations; Conclusions; Work objectives; Thesis goals and open research problems; Contributions to the state of the art; Work assumptions, hypothesis and restrictions; Benchmarking methodology for Semantic Web technologies; Design principles; Research methodology; Selection of relevant processes; Identification of the main tasks; Task adaption and completion; Analysis of task dependencies; Benchmarking methodology; Benchmarking actors; Benchmarking process
505 8_ |a Plan phase; Experiment phase; Improvement phase; Recalibration task; Organizing the benchmarking activities; Plan phase; Experiment phase; RDF(S) Interoperability Benchmarking; Experiment definition; RDF(S) Import Benchmark Suite; RDF(S) Export Benchmark Suite; RDF(S) Interoperability Benchmark Suite; Experiment execution; Experiments performed; Experiment automation; RDF(S) import results; KAON RDF(S) import results; Protege-Frames RDF(S) import results; WebODE RDF(S) import results; Corese, Jena and Sesame RDF(S) import results; Evolution of RDF(S) import results; Global RDF(S) import results
505 8_ |a RDF(S) export results; KAON RDF(S) export results; Protege-Frames RDF(S) export results; WebODE RDF(S) export results; Corese, Jena and Sesame RDF(S) export results; Evolution of RDF(S) export results; Global RDF(S) export results; RDF(S) interoperability results; KAON interoperability results; Protege-Frames interoperability results; WebODE interoperability results; Global RDF(S) interoperability results; OWL Interoperability Benchmarking; Experiment definition; The OWL Lite Import Benchmark Suite; Benchmarks that depend on the knowledge model; Benchmarks that depend on the syntax
505 8_ |a Description of the benchmarks; Towards benchmark suites for OWL DL and Full; Experiment execution: the IBSE tool; IBSE requirements; IBSE implementation; Using IBSE; OWL compliance results; GATE OWL compliance results; Jena OWL compliance results; KAON2 OWL compliance results; Protege-Frames OWL compliance results; Protege-OWL OWL compliance results; SemTalk OWL compliance results; SWI-Prolog OWL compliance results; WebODE OWL compliance results; Global OWL compliance results; OWL interoperability results; OWL interoperability results per tool; Global OWL interoperability results
520 __ |a This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:
590 __ |a ProQuest Ebook Central |b Ebook Central Academic Complete
650 _0 |a Semantic Web.
650 _6 |a Web sémantique.
650 _7 |a Semantic Web |2 fast
758 __ |i has work: |a Benchmarking Semantic Web technology (Text) |1 https://id.oclc.org/worldcat/entity/E39PCG9xXyyrkgTFM9vMx7QBxC |4 https://id.oclc.org/worldcat/ontology/hasWork
776 08 |i Print version: |z 9781306284707
856 40 |u https://ebookcentral.uam.elogim.com/lib/uam-ebooks/detail.action?docID=1589005 |z Texto completo
938 __ |a EBL - Ebook Library |b EBLB |n EBL1589005
938 __ |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis27261283
994 __ |a 92 |b IZTAP