Publication Details
Information Extraction from Web Sources based on Multi-aspect Content Analysis
document modeling, information extraction, page segmentation, content classification, ontology, RDF
Information extraction from web pages is often recognized as a difficult task, mainly due to the loose structure and insufficient semantic annotation of their HTML code. Since web pages are primarily created to be viewed by human readers, their authors usually do not pay much attention to the structure, or even the validity, of the HTML code itself. The CEUR Workshop Proceedings pages are a good illustration of this: their code ranges from invalid HTML markup to fully valid and semantically annotated documents, while preserving a kind of unified visual presentation of the contents. In this paper, as a contribution to the ESWC 2015 Semantic Publishing Challenge, we present an information extraction approach based on analyzing the rendered pages rather than their code. The documents are represented by an RDF-based model that allows the results of different page analysis methods, such as layout analysis and visual and textual feature classification, to be combined. This makes it possible to specify a set of generic rules for extracting particular information from a page independently of its code.
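The core idea of the abstract, modeling rendered-page content as triples and extracting information with generic rules over visual features rather than over HTML structure, can be illustrated with a minimal sketch. This is not the authors' actual model; the vocabulary (`Box`, `fontSize`, `text`) and the rule `pick_title` are hypothetical, and a plain triple list stands in for a real RDF store.

```python
# Hedged sketch: rendered-page boxes as subject-predicate-object triples,
# plus one generic extraction rule based on a visual feature (font size).
# All names here (Box, fontSize, text, pick_title) are illustrative only.

triples = [
    ("b1", "type", "Box"), ("b1", "fontSize", 24),
    ("b1", "text", "Information Extraction from Web Sources"),
    ("b2", "type", "Box"), ("b2", "fontSize", 12),
    ("b2", "text", "some body text"),
]

def prop(subj, pred):
    """Return the object of the first (subj, pred, ?) triple."""
    return next(o for s, p, o in triples if s == subj and p == pred)

def pick_title():
    """Generic rule: the title is the box rendered with the largest font."""
    boxes = [s for s, p, o in triples if p == "type" and o == "Box"]
    return prop(max(boxes, key=lambda b: prop(b, "fontSize")), "text")

print(pick_title())  # prints the heading-like box's text
```

Because the rule inspects only rendered properties, it would apply unchanged to pages whose underlying HTML differs wildly, which is the point the abstract makes about the heterogeneous CEUR Workshop Proceedings markup.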
@INPROCEEDINGS{FITPUB10840,
  author = "Martin Mili\v{c}ka and Radek Burget",
  title = "Information Extraction from Web Sources based on Multi-aspect Content Analysis",
  pages = "81--92",
  booktitle = "Semantic Web Evaluation Challenges, SemWebEval 2015 at ESWC 2015",
  series = "Communications in Computer and Information Science",
  volume = "548",
  year = 2015,
  location = "Portoro\v{z}, SI",
  publisher = "Springer International Publishing",
  ISBN = "978-3-319-25517-0",
  ISSN = "1865-0929",
  doi = "10.1007/978-3-319-25518-7\_7",
  language = "english",
  url = "https://www.fit.vut.cz/research/publication/10840"
}