The consistent evaluation of semantic technologies is critical not only for future scientific progress but also for their widespread industrial adoption. Such evaluation needs to address development qualities (such as correctness and robustness) as well as deployment qualities (such as interoperability and scalability).
In addition, information (data and ontologies) is heterogeneous,
distributed, and increasing exponentially in quantity and size.
Capturing static versions of information is not always possible because
of its highly dynamic nature or because of the associated costs. The
evolution of standards and the development of new ones also make
compatibility a major issue for legacy technology and information.
While the community is evolving towards a more thorough evaluation
of semantic technologies, the dynamism mentioned above makes this
evaluation a difficult task: as existing evaluation methods and
techniques become obsolete, new ones have to be developed as fast as
the technologies themselves evolve.
The aims of the IWEST workshop are twofold: firstly, to initiate
discussion about the current trends and future challenges of evaluating
semantic technologies; secondly, to support communication and
collaboration with the goal of aligning the various evaluation efforts
within the community and accelerating innovation in all the associated
fields, as has been the case with both the TREC benchmarks in
information retrieval and the TPC benchmarks in database research.
In line with the goals of the workshop, we will also incorporate the results of the 2nd International Evaluation Campaign for Semantic Technologies, a wide-ranging evaluation campaign organised by the Semantic Evaluation At Large Scale (SEALS) Initiative.
The IWEST 2012 proceedings are published in CEUR Workshop Proceedings Vol. 843.
Welcome and introduction

Multilingual Ontology Matching Evaluation - A First Report on using MultiFarm
Christian Meilicke, Cassia Trojahn, Ondrej Zamazal and Dominique Ritze

Automatic Conformance Test Data Generation Using Existing Ontologies in the Web
Irlán Grangel-González and Raúl García-Castro

Evaluating Semantic Search Systems to Identify Future Directions of Research
Khadija Elbedweihy, Stuart Wrigley, Fabio Ciravegna, Dorothee Reinhard and Abraham Bernstein

Building the WSMO-Lite Test Collection on the SEALS Platform
Liliana Cabral, Ning Li and Jacek Kopecky

Using WS-BPEL for Automation of Semantic Web Service Tools Evaluation
Serge Tymaniuk, Ioan Toma and Liliana Cabral
High-quality papers are invited from researchers interested in all
aspects of formal evaluation and benchmarking with reference to
semantic technologies. We also invite papers from participants of the
2nd International Evaluation Campaign for Semantic Technologies.
We invite contributions describing benchmarking approaches applied to semantic technologies including, but not limited to:
We encourage full papers (max 12 pages), short papers (max 6 pages) and short demo papers (max 2 pages) describing significant work in progress, late-breaking results, or ideas and challenges for the domain.
Submissions must be in PDF and must be submitted via EasyChair. Submissions must be formatted according to the Springer Lecture Notes in Computer Science (LNCS) style.
Accepted papers will be published in the workshop proceedings. A
selection of the best workshop papers will be published in an LNCS
volume together with the best papers from the other ESWC 2012 workshops.
The best evaluation paper will receive a "Best Evaluation Paper" award of 500 Euro.
Furthermore, we will waive TWO registration fees, each worth 540 Euro
(conference + workshop), for the lead authors of the best accepted
student papers. The lead author MUST be a Masters or PhD student at
their institution and must be able to cover flight and hotel costs.
Dr. Raúl García-Castro
Universidad Politécnica de Madrid, Spain
Dr. Stuart Wrigley
University of Sheffield, UK
Dr. Lyndon Nixon
STI International, Austria
This workshop is sponsored by the SEALS project.