Source Dataset

Recently, we have been working on the DBpedia / Wikipedia page links dataset, considering the English and the German language versions for this project. The current DBpedia 2014 page links datasets represent about 19 million entities for English and 7 million for German, whereas the core DBpedia datasets contain only about 4 million and 1 million distinct entities, respectively.
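The gap between these counts can be measured directly from a dump. Below is a minimal sketch of counting distinct entities (subject and object URIs) in a page-links file; it assumes an N-Triples serialization with one `<subject> <predicate> <object> .` triple per line, and the sample triples are illustrative, not taken from the actual 2014 dump.

```python
import re

# Assumed N-Triples layout: three URIs in angle brackets, terminated by " .".
TRIPLE = re.compile(r'^<([^>]+)> <[^>]+> <([^>]+)> \.$')

def count_entities(lines):
    """Return the number of distinct subject/object URIs in the triples."""
    entities = set()
    for line in lines:
        m = TRIPLE.match(line.strip())
        if m:
            # Both the linking page and the linked page count as entities.
            entities.update(m.groups())
    return len(entities)

# Hypothetical sample triples for illustration.
sample = [
    '<http://dbpedia.org/resource/A> <http://dbpedia.org/ontology/wikiPageWikiLink> <http://dbpedia.org/resource/B> .',
    '<http://dbpedia.org/resource/B> <http://dbpedia.org/ontology/wikiPageWikiLink> <http://dbpedia.org/resource/C> .',
]
print(count_entities(sample))  # 3 distinct entities: A, B, C
```

On a real dump one would stream the (compressed) file line by line rather than load it into memory; the counting logic stays the same.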

1st Workshop on Linked Data Quality (LDQ)

Since the start of the Linked Open Data Cloud, we have seen an unprecedented volume of structured data published on the web, in most cases as RDF and Linked (Open) Data. The quality of these datasets can hardly be better than that of the original data sources. They range from crowdsourced sources such as Wikipedia and OpenStreetMap to highly curated ones, e.g. from the library domain. Quality is, of course, fitness for use: DBpedia may currently be appropriate for a simple end-user application, but it could never be used in the medical domain for treatment decisions.

Workshop "From Pixels to Semantics - Semantic Analysis meets Visual Analysis" (PIXSEM)

Social web communities such as Flickr, YouTube, and Facebook have collected a huge amount of valuable multimedia content. How this data can be successfully exploited has been the subject of research in both communities, computer vision and knowledge mining. The aim of this workshop is to bring together outstanding research results that show how social multimedia data can be used to bridge the gap between semantic analysis and visual analysis, to exploit synergies, and to enable higher-quality results as well as better efficiency.
