Ambiguity and Hesitancy in Quality Assessment: The Case of Image Segmentation
Title: Ambiguity and Hesitancy in Quality Assessment: The Case of Image Segmentation
Conference: plenary talk at the Symposia on Mathematical Techniques Applied to Data Analysis and Processing (SMATAD), Fuengirola (Spain), 2017.
Abstract: Hesitancy and ambiguity are recurrent motivations for using so-called soft-computing techniques. However, the data in which they are supposed to appear is frequently highly precise. In this talk, we tackle this apparent incoherence. Moreover, we focus on analyzing the ambiguity induced by the most ambiguous thinking machine ever constructed: the human being. In this sense, we take as a guiding topic image segmentation, a field of image processing in which humans are routinely employed to produce ground truth. Repeated questioning of the same human (or singular questioning of various humans) leads to ambiguous, imperfect, multivalued, imprecise ground truth. How can we handle that? What knowledge can we extract from image segmentation? Is such knowledge portable to other environments in which hesitancy and ambiguity are meant to occur?
Keywords: Hesitancy; Imperfect ground truth; Boundary detection; Quality evaluation; Fuzzy Set Theory.
Cite as: C. Lopez-Molina, "Ambiguity and Hesitancy in Quality Assessment: The Case of Image Segmentation", plenary talk at the Symposia on Mathematical Techniques Applied to Data Analysis and Processing (SMATAD), Fuengirola (Spain), 2017.
Automatic information processing routinely deals with many sources of uncertainty. Some of them arise from the data gathering itself, while others stem from imprecise computations or algorithmic needs. In general, they can all be mitigated with advanced machinery, dedicated mathematical models or extra computational resources. In this effort, Fuzzy Set Theory has played a relevant role over the past 40 years. However, there is one source of ambiguity and hesitancy that cannot be removed from information processing: that due to the ambiguous, human-like definition of information processing problems.
Human beings generally make variable interpretations of the goals and needs of an information processing task, in whatever context it is carried out. Hence, the perceived quality of one single result will be heterogeneous, depending on the human expert evaluating it. This poses significant problems at various stages of information processing. For example, it is damaging when it comes to algorithm configuration or optimization, since the perceived improvement (for one human) might be coupled with a perceived quality loss (according to another human). It also becomes damaging when scientists intend to select the best-performing algorithm for a given task, as the opinions of different human experts might differ. In general, we find that the perceived quality of one single result is a conglomerate of opinions, often hesitant or contradictory.
The problem of multivariate (and hence ambiguous) data for quality assessment is recurrent in the literature. Depending on the specific task, some tailor-made solutions can be applied, but the literature lacks general solutions; some generalist approaches, such as multi-objective optimization techniques, focus on dealing with multiple, hypothetically orthogonal dimensions of quality, rather than with human ambiguity. In this work, we propose a general framing of the possible solutions for tackling ambiguity and hesitancy, taking the problem of image segmentation as a reference.
Image segmentation consists, in short, of labelling the area occupied by each of the visible objects in an image. Although this output is of limited use by itself, it can feed subsequent, more complex tasks, including object recognition or semantic indexing. Image segmentation lacks a mathematical definition, and hence automatic methods are bound to be evaluated according to how similar their results are to human-made solutions. Unfortunately, different humans routinely produce different interpretations of an image. As a consequence, the evaluation of a segmented image becomes a comparison against a list of human-made, spatially imprecise segmentations. This calls for a significant mathematical apparatus able to cope with multivariate data involving hesitancy, ambiguity and contradiction.
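The situation above can be illustrated with a minimal, hypothetical sketch: one candidate segmentation is compared against several human-made ground truths using the Jaccard index (intersection over union), a standard similarity measure for binary masks. The masks and the aggregation shown here are illustrative assumptions, not data or methods from the talk.

```python
def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

# Three humans segment the same (flattened 4x4) image; they disagree
# slightly on the extent of the object.
ground_truths = [
    [0, 0, 0, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0],
    [0, 0, 0, 0,  0, 1, 1, 1,  0, 1, 1, 1,  0, 0, 0, 0],
    [0, 1, 1, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0],
]
# One automatic segmentation to be evaluated.
candidate = [0, 0, 0, 0,  0, 1, 1, 0,  0, 1, 1, 1,  0, 0, 0, 0]

# The "quality" of the candidate is not one number but a list of
# (possibly contradictory) opinions, one per human ground truth.
scores = [jaccard(candidate, gt) for gt in ground_truths]
print(scores)
```

Collapsing `scores` into a single figure (minimum, mean, maximum, etc.) is precisely the kind of aggregation choice whose semantics the talk puts into question.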
In this talk we analyze, from a historical perspective, the problem of quality evaluation for image segmentation. Specifically, we focus on how to handle the variable interpretations made by different humans. This leads to an analysis of the general quality evaluation problem in the presence of multivariate ground truth, including its semantics, the technical challenges it poses, and its relationship with some of the mathematical disciplines involved in its solution.
Code (in the KITT): This work has no associated code whatsoever.
- The slideshow will be uploaded to this page soon. Until then, please email the main author (C. Lopez-Molina) directly;
Related works (in the KITT):
- [LopezMolina16a] C. Lopez-Molina, H. Bustince and B. De Baets, "Separability criteria for the evaluation of boundary detection benchmarks", IEEE Trans. on Image Processing, 25 (3), 1047-1055 (2016).
Related works (web):