Matches in Nanopublications for { ?s <https://github.com/LaraHack/linkflows_model/blob/master/Linkflows.ttl#hasCommentText> ?o ?g. }
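The matches below come from evaluating that triple pattern against the nanopublication graphs. As a minimal sketch (the prefix label "linkflows", the lack of ordering, and the endpoint are assumptions for illustration, not part of the listing), an equivalent SPARQL query could look like this:

```sparql
# Minimal sketch of a query matching the pattern above; the prefix label
# "linkflows" and the absence of ordering/limits are assumptions.
PREFIX linkflows: <https://github.com/LaraHack/linkflows_model/blob/master/Linkflows.ttl#>

SELECT ?s ?o ?g
WHERE {
  GRAPH ?g {                          # ?g is the graph (assertion) holding the comment
    ?s linkflows:hasCommentText ?o .  # comment resource ?s and its text ?o
  }
}
```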
- comment hasCommentText "This paper introduces the very interesting and powerful concept of "lenticular lenses"." assertion.
- comment hasCommentText "This paper introduces the very interesting and powerful concept of "lenticular lenses"." assertion.
- comment-1 hasCommentText "This paper proposes new crowdsourcing techniques to identify errors in linked data by combining expert judgements with data obtained from crowdsourcing platforms." assertion.
- comment-2 hasCommentText "The paper addresses very valuable research questions." assertion.
- comment-3 hasCommentText "The paper contains all necessary definitions of crowdsourcing terminology making it self-contained and understandable for a reader from the semantic web community." assertion.
- comment-4 hasCommentText "Authors compare different combinations of worker/expert answers and propose different types of workflows to identify errors in linked data. The paper focuses on 3 specific types of errors. Authors focus exclusively on error identification as fixes can be best applied by correcting the automatic extraction process rather than the generated data." assertion.
- comment-5 hasCommentText "Authors make all data and results available on-line for others to re-use." assertion.
- comment-6 hasCommentText "Results are discussed and analysed in detail comparing crowd and expert performance also including an error analysis." assertion.
- comment-7 hasCommentText "originality: The addressed problem of linked data quality is important and the proposed solution is novel and reasonable" assertion.
- comment-8 hasCommentText "significance of the results: The results show how to best combine experts and crowd for the proposed linked data quality problems. This does not solve all linked data quality problems, but it certainly contributes to bring this field forward." assertion.
- comment-9 hasCommentText "quality of writing: The paper is very well written and structured. It is easy to follow and presents a detailed description of the approach and of the experimental results also including error analysis." assertion.
- comment-10 hasCommentText "A controversial point is that the ground truth was created by experts as the results which are evaluated against this. I agree that there is no way around this, but a small discussion on how the authors believe the ground truth data is of better quality than the expert answers would help (e.g., experts did not necessarily put enough effort in the task as the ground truth creators did also by resolving conflicts and discussing difficult triples together etc.)." assertion.
- comment-11 hasCommentText "In section 2 it is unclear to me how the 4 dimensions relate to the 3 error types addressed here. Expanding this section would make the paper easier to understand and more self-contained. Discussing automatic approaches on how to identify errors in linked data could also be discussed at this point to motivate the need for human computation approaches." assertion.
- comment-12 hasCommentText "The scalability of the approach is unclear: It seems to me that the proposed approach need each single triple in a linked dataset to be manually checked. This would limit the scalability of the approach. Thus, while the focus of this paper is clearly different, it would be useful to briefly discuss the possibility of hybrid human-machine approaches to scale the approach to large amounts of triples (e.g., the English DBpedia 3.9 has 500M triples). Related to this is “Proposition 2” which sounds very much not scalable." assertion.
- comment-13 hasCommentText "At the end of section 4.3 it is unclear whether the incorrect links are errors present in Wikipedia or are generated by the wrappers." assertion.
- comment-14 hasCommentText "The impression from reading the paper is that the payment of crowd workers is extremely low (e.g., 0.04USD for 5 triples or 0.06USD for 30 triples). It would be interesting to report the hourly rate by considering the time spent by workers in completing the tasks to get a better idea of the adopted payment level." assertion.
- comment-15 hasCommentText "Probably section 5.4 could be presented before the proposed approaches instead of after them. " assertion.
- comment-16 hasCommentText "Section 5.4.1 seems not to be a relevant baseline as it looks for different types of errors." assertion.
- comment-17 hasCommentText "It would be good to add a final paragraph in section 7 stating how this paper compares to the two described areas of research." assertion.
- comment-1 hasCommentText "This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing." assertion.
- comment-2 hasCommentText "The authors present a study where they examine the applicability of crowdsourcing to Linked Data Quality problems with DBpedia as an example. They show the general feasibility of the approach and continue to investigate whether and for which tasks in particular unskilled laymen instead of experts can also be employed to solve LDQ problems. Furthermore, they address the problem of optimal or better crowdsourcing workflows tom employ experts and laymen for Linked Data curation." assertion.
- comment-3 hasCommentText "The problem addressed is a rather interesting one, giving first insights in how to adapt crowdsourcing to LDQ issues. From my point of view, I would have liked to see a more concise comparison of the two crowdsourcing approaches also with the sophisticated state-of-the-art automated tools. " assertion.
- comment-4 hasCommentText "The proposed RDFUnit approach in the way the authors conducted their experiments has some flaws and is too limited (for details cf below). Thus, I have some (significant) issues with the evaluation, which should be addressed by the authors." assertion.
- comment-5 hasCommentText "In the end of the introduction (p.3) the limitations of automated methods for Linked Data quality assurance are mentioned with referring to checking ontological inconsistencies only. Besides, there also exist approached based on statistics (as e.g. outlier detection, etc.) which should not be neglected or at least mentioned as such in the related work section.[1,2,3] [1] Heiko Paulheim and Christian Bizer. 2014. Improving the Quality of Linked Data Using Statistical Distributions. Int. J. Semant. Web Inf. Syst. 10, 2 (April 2014), pp. 63-86. [2] Didier Cherix, Ricardo Usbeck, Andreas Both, Jens Lehmann. 2014. CROCUS: Cluster-based Ontology Data Cleansing. WASABI 2014 at Extended Semantic Web Conference 2014. [3] Daniel Fleischhacker, Heiko Paulheim, Volha Bryl, Johanna Völker, and Christian Bizer. 2014. Detecting Errors in Numerical Linked Data Using Cross-Checked Outlier Detection. In Proc. 13th Int. Semantic Web Conference (ISWC '14), pp. 357-372." assertion.
- comment-6 hasCommentText "In section 2) Linked Data Quality Issues, you focus on three RDF-tripel level quality issues only out of a larger set of Linked Data Quality issues referred to by your previous work in that area. Unfortunately, you do not explain why the 3 categories of quality issues you focus on are representative either for LDQ issues in general and crowdsourcing in particular. What about the other quality issues concerning their importance, representativeness, suitability for crowdsourcing etc? A more detailed discussion would be helpful." assertion.
- comment-7 hasCommentText "In section 3.2) you give background information on the Find-Fix-Verify pattern (2nd paragraph). This information (in which scenario it was used first, etc.) is not really necessary for the rest of the paper." assertion.
- comment-8 hasCommentText "On page 9 you state that your function "prune" discards all RDF triples of which an URI could not be dereferenced. Was there a significant amount of these RDF triples? Did it only concern triples with relation to an external website that could not be dereferenced or did it also concern DBpedia URIs? " assertion.
- comment-9 hasCommentText "In p.11, Fig 3, the comparison of DBpedia RDF triples (middle column) shown in your tool is compared with wikipedia infobox values (left column) for which you implemented an extractor. How do you ensure that your extractor does not create extraction errors like they are about to occur in the DBpedia infobox which resulted in the RDF triple under consideration? What if both - your wikipedia infobox extractor and the DBpedia RDF triple - show the very same error? Many problems of the DBpedia extractors arise from wrong infobox information, because many wikipedia authors don't care about infobox conventions. Can these kind of errors be detected at all with your crowdsourcing tool (based on the very same mechanisms to give a hint to the crowdworker)?" assertion.
- comment-10 hasCommentText "In your evaluation, p.17, section 5.2.6, you present an "analysis" of expert misclassifications. The analysis only states in an aggregated form what kind of misclassifications occurred, but does not give any explanation or details why these occurred." assertion.
- comment-11 hasCommentText "Also in your evaluation, p.21, section 5.3.5, you claimed that rdf:type information could not be evaluated correctly by users because yago classes do not provide self-speaking labels or other textual information. But, the URI of yago classes usually consists of a self-speaking name and some numerical information, such as e.g. yago:AerospaceEngineer109776079 , which is perfectly readable for humans." assertion.
- comment-12 hasCommentText "Why don't you consider (also) the Open World Assumption (p.21) for your baseline approach. Please explain." assertion.
- comment-13 hasCommentText "The rules you provide as constraints to be checked for your baseline approach (p.22) are sometimes questionable. As e.g., Persons without a birthdate (even if they sometimes have a deathdate) - this holds for many historical persons born a long time ago, simply because their birthdate is not known." assertion.
- comment-14 hasCommentText "In your experiments you should give the LD experts always full schema/ontology information to consider the correctness of an RDF triple." assertion.
- comment-15 hasCommentText "Baseline evaluation (p.22, section 5.4.2). In addition to the foaf:name, you should also take into account alternative labels (from redirects and interlanguage links) of the entity under consideration, if you want to find out automatically, whether the external web page refers to this entity. Otherwise you might not detect it. Why have you set the threshold to ">1"? Why is two times sufficient? Often in natural language texts, it is avoided to name a subject more often with the same name, but synonyms and pronouns are used instead." assertion.
- comment-16 hasCommentText "In p.23, table 9: I doubt the ability of the baseline to detect, whether the baseline is able to identify if a "thumbnail" or a "depiction" refers to the correct image for this entity. Please justify and explain how you ensure this." assertion.
- comment-17 hasCommentText "In the related work section (p.24/25) Games with a Purpose are mentioned. There exist also games with the dedicated purpose of DBpedia quality check that have not been mentioned/compared [4]. [4] J. Waitelonis, N. Ludwig, M. Knuth, H. Sack: Whoknows? - Evaluating Linked Data Heuristics with a Quiz that cleans up DBpedia. International Journal of Interactive Technology and Smart Education (ITSE), Emerald Group, Bingley (UK), Vol. 8, 2011 (3)." assertion.
- comment-18 hasCommentText "In the related work section (p. 25), Tools for Linked Data Quality Assessment that are able to automatically extract/create ontology constraints from available data, to further use these constraints to assess the quality of the remaining data have been neglected.[5,6] [5] Jens Lehmann, Lorenz Bühmann: ORE - A Tool for Repairing and Enriching Knowledge Bases, Proc. of the 9th Int. Semantic Web Conference 2010, Lecture Notes in Computer Science, Springer, 2010 [6] G. Töpper, M. Knuth, and H. Sack: DBpedia ontology enrichment for inconsistency detection. In Proc. of the 8th Int. Conf. on Semantic Systems (I-SEMANTICS '12). ACM, New York, NY, USA, pp. 33-40." assertion.
- comment-1 hasCommentText "Generally speaking, I like the paper and its topic and I think that it could worth publishing in SWJ, because it is strongly in line with the special issue CfP." assertion.
- comment-2 hasCommentText "I have some major remarks that should be addressed before acceptance. My observations are mainly related to two aspects: the global scope of the paper and of the presented results, and the experiments design." assertion.
- comment-3 hasCommentText "Regarding the paper scope, I am not convinced that the authors provided results that can be considered valid for Linked Data at large, nor for LD quality issues of any kind. This is quite adequately indicated in the paper title, but it is not fully reflected in the paper text itself." assertion.
- comment-4 hasCommentText "The authors stated that they focused on DBpedia "as a representative data set for the broader Web of Data"; I largely disagree with that for the following reasons: (1) not all LD sources are produced by following a transformation/mapping process like DBpedia one, and the types of errors that happen in a specific LD source heavily depend both on the intrinsic quality of the source and on the possible translation process to RDF; (2) DBpedia is very general in coverage of topics, while LD sources (and their possible quality issues) can be very specific to a given domain; as a consequence, the capability of an experts' or workers' crowd to identify and assess LD quality issues is highly influenced by the domain/coverage of the source. Therefore, I'd recommend to soften the claims of generality of the presented results and to clearly state that they are "proved" only on DBpedia. I'm sure the authors can speculate to what extent those results can be considered general, but at the present moment I believe they cannot affirm that they fully addressed the research questions as they are introduced." assertion.
- comment-5 hasCommentText "The experiments were focused on some specific LD quality issues and not to the whole list of possible issues (which are comprehensively listed in the authors' previous works). While this is fine per se - I didn't expect the authors to make experiments on the whole set of issues - it makes the presented results even less general. It should be also valuable if the authors add an explanation on why those specific quality issues (instead of others) were selected for the experiments." assertion.
- comment-6 hasCommentText "As a global recommendation, I suggest the authors to honestly rephrase the parts of the paper that would try to convince the readers about a possible general validity of the paper results for any LD source and/or for any LD quality issue." assertion.
- comment-7 hasCommentText "Regarding the experiments design, my impression is that a number of results are not fully related to the intended characteristics of the experiments (expert vs. laymen crowdsourcing, find vs. verify stage, etc.) but are the collateral effect of non-optimal design choices, in terms of (1) choice of triples/data and quality issues to be tested, (2) user interface and support information provided to participants and (3) reported indicators and baselines." assertion.
- comment-8 hasCommentText "I have a number of concerns on the employed triples. The experts were given the opportunity to find quality issues in triples that were (i) random, (ii) instances of some class or (iii) manually selected; while this can appear reasonable, the effects are that the workers' crowds (in both experimental workflows) were presented with information "chosen" by somebody else, thus possibly making the task hard or even impossible because of the triples' domain. I would have expected the authors to make a *controlled experiment*, i.e. selecting a general-purpose subset of DBpedia that - at least from the point of view of the content - was at the same "difficulty" level for all the involved crowds." assertion.
- comment-9 hasCommentText "Restricting to a set of selected subjects, I think that not all triples were suitable for the intended experiments; indeed, some specific cases emerged that are not related to the intrinsic characteristics of quality assurance; while it is generally ok to let the experimenters find out problems, it is also reasonable to think that, when preparing an experiment, the obvious things that can lead to problems are avoided. Some examples: - specific datatype objects (like dates vs. numbers, which are definitely ok to be mistaken) - owl:sameAs links (which maybe were interpreted in a "purist" way by LD experts who can be careful in accepting those triples because of their logical implications) - rdf:type triples among the incorrect link issues (apparently unclear for the MTurk workers, but partially also to me: why did rdf:type triples were considered among the "links" instead of the "values"?) - DBpedia translation-specific triples (which do not make any sense in such an evaluation setting, and should have been filtered out in the first place)." assertion.
- comment-10 hasCommentText "The two authors who created the "ground truth" got quite low values of inter-rater agreement." assertion.
- comment-11 hasCommentText "The tested quality issues are somehow unbalanced: while the incorrect object extraction or the incorrect links can be related to the entities' "meaning", the incorrect datatypes or language tags are more "structural" mistakes; as a consequence, it is not surprising that the latter is the case in which the paid workers performed worst." assertion.
- comment-12 hasCommentText "Some experiment design flaws come also from the user interface and the information provided to the involved crowds." assertion.
- comment-13 hasCommentText "The quality issues to be identified are not presented with the same granularity level to experts and laymen, since the experts got quite a detailed taxonomy of issues, while the MTurk users only three possibilities." assertion.
- comment-14 hasCommentText "Regarding the MTurk-based Find stage, I personally find the screenshot in Figure 3 very confusing, since it seems that the Wikipedia column (which should provide the "human readable" information) is less complete that the DBpedia column: how were the workers expected to interpret this fact? were they instructed to click on the Wikipedia page link (if provided) to check?" assertion.
- comment-15 hasCommentText "Regarding the MTurk-based Verify stage, Figure 5 is also problematic since it doesn't display the Wikipedia preview (explained in the text) and seems anyway to require quite an effort or some knowledge to be judged; it would have been interesting to know if the authors were able to trace if and how many times the workers actually clicked on the Wikipedia link or how much time it took to them to make a decision." assertion.
- comment-16 hasCommentText "Some of the examples given in the text are misleading and therefore not fully suitable to be offered to the crowds as explanations (if they were); e.g. is Elvis Presley's name language-dependent?" assertion.
- comment-17 hasCommentText "I have some doubts about the choice of evaluation metrics." assertion.
- comment-18 hasCommentText "Regarding the tables, the authors would have better used sensitivity and specificity rather than TP and FP, because rates are more easily compared and interpreted than counts. This last comment also applies to bar charts, which are hard to judge because of the different value ranges: using rates would improve readability and better convey the message." assertion.
- comment-19 hasCommentText "I am not at all convinced about the significance of keeping track of the first answer, even less of the comparison between the first answer and the majority voting: while I understand the cost consideration, it would have been more meaningful to compare 3-worker majority voting vs. 5-worker majority voting, since 1 single worker cannot express any kind of answer "agreement" or "variance"." assertion.
- comment-20 hasCommentText "I found the baseline section quite weird, since the authors describe the interlinks approach that perfectly makes sense (even if it regards only one of the tested quality issues), but they also introduce the TDQA assessment which cannot be compared to the experiment results (and thus cannot be considered a baseline approach). The authors would better create a baseline (e.g. by using SPIN or ShEx-based constraint checks) to try to identify datatype/language and object values issues (w.r.t. all those cases in which such checks can be implemented, of course); that would be a reasonable baseline to compare to." assertion.
- comment-21 hasCommentText "I would suggest the authors to make a final summary table that compares the two workflows as well as the comparable baselines, so to support the final discussion." assertion.
- comment-22 hasCommentText "page 4, 1st column, postal code example: in some countries postal codes contain letters, so it is not necessarily true that it should be an integer" assertion.
- comment-23 hasCommentText "sections 3.1.1 and 3.1.2 do not provide any reference" assertion.
- comment-24 hasCommentText "section 3.2 is clearly related to reference [3], so there is no need to include the citation several times" assertion.
- comment-25 hasCommentText "page 6, definition 1: why is it 2^Q? can all the quality issues happen at the same time?" assertion.
- comment-26 hasCommentText "page 6, beginning of 2nd column: this is very specific to DBpedia, so it is in contradiction to the generality claims of the paper" assertion.
- comment-27 hasCommentText "page 8, end of section 4.1: the authors explain the redundancy during the Find stage by experts; if an agreement is already achieved, is the Verify stage useful at all?" assertion.
- comment-28 hasCommentText "page 9, 1st column: it seems that the prune step is specific to the experimental setting, rather than to the general case (non-dereferenceable URIs should have been discarded in the first place...)" assertion.
- comment-29 hasCommentText "page 9, 2nd column: reference to Figure 1 should probably be Figure 3" assertion.
- comment-30 hasCommentText "page 10, footnote 8: it is not simply for sake of simplicity, since datatype and language tag cannot happen together; furthermore, for the laymen probably there is not much difference between "value" and "link" either" assertion.
- comment-31 hasCommentText "section 4.4 is not completely necessary in the paper" assertion.
- comment-32 hasCommentText "page 15, end of 1st column: why were the DBpedia Flickr links filtered out? if there was some doubt about their validity or relevance to the tests, why not filtering them out before the Find stage?" assertion.
- comment-33 hasCommentText "page 15, section 5.2.4: the example triple is totally unclear, what does it mean? why is is correct?" assertion.
- comment-34 hasCommentText "table 3: from the text 1512 seems to be the number of the "marked" triples rather than the evaluated ones" assertion.
- comment-35 hasCommentText "table 4: the caption does not explain that the results refer to the "ground truth" sample (same for table 6); why the LD expert inter-rater agreement was computed for all the triples together?" assertion.
- comment-36 hasCommentText "page 16, beginning of 1st column: the need for specific technical knowledge about datatypes seems to be yet another experiment design flaw" assertion.
- comment-37 hasCommentText "page 17, list in the 1st column: what are Wikipedia-upload entries? what does it mean w.r.t. the misclassification discussion?" assertion.
- comment-38 hasCommentText "page 18, section 5.3.2: the text says 30k triples while table 5 almost 70k triples, so what's the correct number? why was the sample selected on the basis of "at least two workers" and not by majority voting? the sample contains the "exact same number of triples" or exactly the same triples? why did this Verify stage take more time than in the case of the other workflow?" assertion.
- comment-39 hasCommentText "page 18, end of 2nd column: the geo-coordinates example seems yet another symptom of an ill-designed experiment" assertion.
- comment-40 hasCommentText "table 5: the sample used for the Verify task does not have the same distribution of triples for the quality issues than the Find stage; can the authors elaborate of the possible effects of those different proportions in terms of loss of information?" assertion.
- comment-41 hasCommentText "page 19, 2nd column: the problem with non-UTF8 characters seems another sign of sub-optimal design of the user interface for the experiments" assertion.
- comment-42 hasCommentText "page 20, 2nd column: possible design flaw also in the case of proper nouns" assertion.
- comment-43 hasCommentText "figure 7(b): TP+TN are complementary w.r.t. FP+FN; rates would be more meaningful than total counts" assertion.
- comment-44 hasCommentText "page 21, 1st column: there are a couple of "Find" that are more probably "Verify"; it would be interesting to know if the rdf:type triples correctly classified were done by the same worker(s)" assertion.
- comment-45 hasCommentText "page 21, 2nd column: it is not clear on how many triples the 5146 tests were run, on the 509 "ground truth" triples? what exactly is a success/failure in the tests?" assertion.
- comment-46 hasCommentText "page 22, 2nd column: were only the foaf:name links used or also the rdfs:label ones? the listing is somewhat useless, the text was clear enough; also clear it is unclear what the "triples subject to crowdsourcing" were, since different datasets were used in the previous tests" assertion.
- comment-47 hasCommentText "page 23, 1st column: I didn't get what the following consideration refers to: "workers were exceptionally good and efficient at performing comparisons between data entries, specially when some contextual information is provided" " assertion.
- comment-48 hasCommentText "page 24, footnote 21: the link is broken" assertion.
- comment-49 hasCommentText "page 24, 2nd column: "fix-find-verify workflow" is probably Find-Verify" assertion.
- comment-50 hasCommentText "page 25, end of 1st column: the authors write "Recently, a study [18]..." but the paper was published in 2012" assertion.
- comment-1 hasCommentText "The paper “N-ary Relation Extraction for Joint T-Box and A-Box Knowledge Base Augmentation” describes a pipeline for database enrichments. It builds on top of frame semantics." assertion.
- comment-2 hasCommentText "The introduction comprehensively explains the need and usefulness for knowbase-completion and enumerates different efforts for making knowledge publicly available in a structured manner. It makes very clear what the main contributions of this paper are: It is a whole framework/pipeline from text to database entries. The actual information extraction technology is not very sophisticated, what the authors present as an advantage of the approach (and I agree, if this is of sufficient performance, of course)." assertion.
- comment-3 hasCommentText "The paper explains and partially focuses on technical details. It mentions the use of XML/Wikisyntax and actually explains how these documents are preprocessed. It explains the technology in terms of CPU and RAM used to process the data. I am not fully convinced that this level of detail is helpful in this article. It might be considered to move such detailed description to the homepage where the system can be downloaded and contribute to a more concise description." assertion.
- comment-4 hasCommentText "The mathematically motivated methods, like TF-IDF-standard deviation-based ranking of entities could be introduced more formally." assertion.
- comment-5 hasCommentText "The evaluation seems to be sound and is interesting." assertion.
- comment-6 hasCommentText "This paper could be improved by adding discussions for the different design decisions made when developing the pipeline. For instance, it remains unclear why it has been chosen to use a POS tagger, but no chunking, no dependency parsing. If such steps do not contribute to the overall performance, or if the pipeline would be to slow, that’s fine. But I would prefer to read about these trade-offs and possible impacts of other decisions." assertion.
- comment-7 hasCommentText "The work is built on top of a manually annotated data set. I am wondering how that would scale to future applications of the pipeline. How is this approach working when new roles or entities are added? Do you expect reannotation to be necessary?" assertion.
- comment-8 hasCommentText "The confidence score calculation is very important when it comes to KB completion, however, section 9.1 is only describing that the pipeline outputs such scores. More formal description of properties and distributions of these scores would be interesting." assertion.
- comment-9 hasCommentText "The related work section does not mention distributional methods and matrix factorization-based approaches for relation extraction. A short discussion of advantages and disadvantage would be interesting.." assertion.
- comment-10 hasCommentText "The developed system shares properties with semantic role labeling systems. How does your development compare to existing systems? Could an SRL-system contribute to a strong baseline?" assertion.
- comment-11 hasCommentText "* T-Box and A-Box are not terms which are well known in all communities which might be interested in this work. These terms should be introduced early in the paper." assertion.
- comment-12 hasCommentText "* Figure 1 is very text focused. I understand that this is a screenshot (or print) from an actual system. However, the layout makes it quite difficult to understand just from the depiction what is the purpose. Perhaps a graphical annotation might help here." assertion.
- comment-13 hasCommentText "* The use of English is not perfect and could be improved, however, this does not lead to any problems with understanding the content of the paper." assertion.