Abstract
Given a large number (N) of images sourced from various places and in
different contexts, but having at least "something" in common, the
extraction of the object common to all of these images is known as
co-segmentation. The first part of the talk will explain how this can be
solved using the concept of maximum common subgraph (MCS) matching.
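To make the graph-matching view concrete, the sketch below shows one plausible way of turning each image into a region adjacency graph over super-pixel segments; the SLIC segmentation, the mean-colour node descriptor, and the adjacency rule are illustrative assumptions rather than the talk's exact pipeline. MCS matching then operates on these graphs.

```python
import numpy as np
import networkx as nx
from skimage.segmentation import slic

def region_adjacency_graph(image):
    """One node per super-pixel, an edge between spatially adjacent super-pixels."""
    labels = slic(image, n_segments=200, compactness=10)   # illustrative parameters
    g = nx.Graph()
    for lab in np.unique(labels):
        # mean colour of the segment as a simple node descriptor (assumption)
        g.add_node(int(lab), feature=image[labels == lab].mean(axis=0))
    # pixels that differ from their right/bottom neighbour mark a region boundary
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        for u, v in zip(a[a != b], b[a != b]):
            g.add_edge(int(u), int(v))
    return g

# Co-segmentation in this view: the object shared by all N images corresponds to
# the maximum common subgraph of the N region adjacency graphs built this way.
```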
This problem becomes far more challenging if only an unknown number (say
M < N) of these images actually contain the co-segmentable common object.
This is quite natural, as not everyone in a crowd is looking at the same
object. Unfortunately, MCS matching cannot be applied to co-segment such an
outlier-ridden image set. We introduce a new concept called maximally
occurring common subgraph (MOCS) matching, which is capable of handling such
outliers in the data.
However, an exhaustive search for the MOCS becomes prohibitively expensive.
We provide a greedy solution to the MOCS matching problem by defining an
intermediate graphical representation called the latent class graph. This
requires only O(N) matching operations and ensures globally consistent
matching of the common object regions across images.
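A minimal sketch of this greedy scheme is given below; the pairwise `match` routine, the node-identity scheme, and the occurrence-count bookkeeping are illustrative stand-ins for the actual latent class graph construction.

```python
import networkx as nx

def greedy_mocs(image_graphs, match):
    """Greedy sketch of MOCS matching via an intermediate latent class graph.

    `match(latent, g)` is an assumed pairwise matcher returning a dict that maps
    nodes of the image graph `g` onto nodes of the latent graph (empty if nothing
    matches). Each image graph is matched once against the latent graph, so only
    O(N) matching operations are needed."""
    latent = nx.Graph()
    for g in image_graphs:
        mapping = match(latent, g)
        for node, data in g.nodes(data=True):
            if node in mapping:
                # region corresponds to an existing latent node: count the occurrence
                latent.nodes[mapping[node]]["count"] += 1
            else:
                # unseen region: add it as a new latent class node
                mapping[node] = (id(g), node)
                latent.add_node(mapping[node], count=1, **data)
        for u, v in g.edges():
            latent.add_edge(mapping[u], mapping[v])
    # nodes with the highest occurrence count form the maximally occurring
    # common subgraph; images contributing few such nodes behave as outliers
    peak = max(nx.get_node_attributes(latent, "count").values())
    return [n for n, c in latent.nodes(data="count") if c == peak]
```

Because each image graph is compared only against the evolving latent class graph rather than against every other image, the number of matching operations stays linear in N.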
In the second part of the talk, we shall discuss how the same problem can
also be solved using statistical mode detection in the multi-dimensional
feature space of all super-pixel segments in the image set. We assume that
the dominant mode and its neighbors in the feature space correspond to the
image segments that partially constitute the common object in every image.
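As a rough illustration of this mode-seeking step, the sketch below pools segment descriptors over the whole image set and treats the largest mean-shift cluster as the dominant mode; mean shift as the mode detector, the feature pooling, and the variable names are assumptions, not necessarily the talk's exact estimator.

```python
import numpy as np
from sklearn.cluster import MeanShift

def partial_object_segments(features, image_ids):
    """`features` is an (S, d) array of descriptors for all S super-pixel segments
    pooled over the image set; `image_ids` is an (S,) array saying which image
    each segment came from."""
    ms = MeanShift()                      # kernel-density mode seeking
    labels = ms.fit_predict(features)
    # dominant mode = the mode whose basin of attraction holds the most segments
    dominant = np.bincount(labels).argmax()
    seeds = np.flatnonzero(labels == dominant)
    # group the seed segments by image: they partially cover the common object
    return {img: seeds[image_ids[seeds] == img]
            for img in np.unique(image_ids[seeds])}
```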
We obtain the complete objects using LDA-based region growing of these
partial object regions. We shall present the details of both methods in the
talk and illustrate the efficacy of the proposed solutions in solving the
co-segmentation problem.