Disadvantages of robust spectral unmixing.

  • When the sets of pixels that represent the pure and the mixed classes are large, the number of possible n-tuples that have to be considered may become exponentially large, and the process therefore becomes very slow. To overcome this problem, Bosdogianni, Kälviäinen, Petrou and Kittler (``Robust unmixing of large sets of mixed pixels'', Pattern Recognition Letters, Vol 18, pp 415-424, 1997) proposed the use of the randomised Hough transform. In this approach only a random sub-sample of the data is used, and the accumulator array is continuously monitored for any emerging peaks, so the process can stop as soon as a clear peak appears.
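
    The idea can be sketched as follows. This is not the cited implementation, but a minimal illustration assuming two-band pixel vectors and three pure classes, so that each randomly drawn quadruple of pixels (one mixed, one from each pure class) yields a 2x2 linear system for the proportions (a,b); each solution casts one vote, and voting stops when a cell first reaches a preset count:

    ```python
    import random

    def solve_proportions(w, x, y, z):
        """Solve w - z = a*(x - z) + b*(y - z) for (a, b).

        Pixels are 2-band tuples, so one quadruple gives a 2x2 system.
        Returns None when the system is (near-)singular."""
        a11, a12 = x[0] - z[0], y[0] - z[0]
        a21, a22 = x[1] - z[1], y[1] - z[1]
        b1, b2 = w[0] - z[0], w[1] - z[1]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            return None
        return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

    def randomised_hough_unmix(mixed, pure_x, pure_y, pure_z,
                               bins=20, stop_count=25, max_iter=5000, seed=0):
        """Randomised Hough transform sketch: draw random quadruples,
        vote in a coarse (a, b) accumulator, and stop as soon as any
        cell's count reaches stop_count (the 'emerging peak')."""
        rng = random.Random(seed)
        acc = {}
        for _ in range(max_iter):
            sol = solve_proportions(rng.choice(mixed), rng.choice(pure_x),
                                    rng.choice(pure_y), rng.choice(pure_z))
            if sol is None:
                continue
            a, b = sol
            if not (0.0 <= a <= 1.0 and 0.0 <= b <= 1.0):
                continue  # proportions must be physically plausible
            cell = (min(int(a * bins), bins - 1), min(int(b * bins), bins - 1))
            acc[cell] = acc.get(cell, 0) + 1
            if acc[cell] >= stop_count:  # emerging peak detected: stop early
                return ((cell[0] + 0.5) / bins, (cell[1] + 0.5) / bins)
        best = max(acc, key=acc.get)  # fall back to the highest cell
        return ((best[0] + 0.5) / bins, (best[1] + 0.5) / bins)
    ```

    Because most draws agree on the true proportions, the winning cell usually emerges long before all possible n-tuples have been examined, which is the source of the speed-up.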

  • When the number of pure classes is large, the accumulator array may become very big and require a lot of memory. To overcome this problem, Bosdogianni, Petrou and Kittler (``Classification of sets of mixed pixels with the hypothesis testing Hough transform'', IEE Proceedings Vision, Image and Signal Processing, Vol 145, pp 57-64, 1997) proposed the use of the hypothesis testing Hough transform. According to this method, the accumulator space is not discretised to form an accumulator array, but is treated as a continuous space which is sampled. The sampling may be sparse at first and become progressively denser around the regions of interest, in a hierarchical approach. Every sample point of the accumulator space generates a null hypothesis: "This point represents the true mixing proportions." The hypothesis is tested by considering n-tuples of pixels. For example, in the case of three pure classes, a certain sample point (a,b) and a certain quadruple of pixels will leave the equation wj = a xj + b yj + (1-a-b) zj unbalanced by a certain amount ej. The null hypothesis receives support inversely proportional to the value of ej. The sample point that receives the most support wins, and the region around it may be sampled more densely to improve the accuracy.
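
    The scheme can be sketched as follows. This is not the cited implementation, but a minimal illustration under the same assumptions as before (two-band pixels, three pure classes): each quadruple contributes support that decreases with its residual ej, and the (a,b) window is re-sampled more densely around the current winner at each level. The support function 1/(1+ej^2) is an assumed choice, standing in for any function that decreases with ej:

    ```python
    import random

    def support(a, b, quads):
        """Total support for the null hypothesis that (a, b) are the true
        mixing proportions. Each quadruple (w, x, y, z) leaves the mixing
        equation unbalanced by a residual e, and contributes support that
        decreases with e (here 1/(1 + e^2), an illustrative choice)."""
        s = 0.0
        for w, x, y, z in quads:
            e = sum((wi - (a * xi + b * yi + (1 - a - b) * zi)) ** 2
                    for wi, xi, yi, zi in zip(w, x, y, z)) ** 0.5
            s += 1.0 / (1.0 + e * e)
        return s

    def hypothesis_testing_hough(quads, levels=3, grid=11):
        """Sample the continuous (a, b) space hierarchically: a coarse grid
        first, then progressively denser grids centred on the best point."""
        lo_a, hi_a, lo_b, hi_b = 0.0, 1.0, 0.0, 1.0
        best = (0.5, 0.5)
        for _ in range(levels):
            candidates = [(lo_a + i * (hi_a - lo_a) / (grid - 1),
                           lo_b + j * (hi_b - lo_b) / (grid - 1))
                          for i in range(grid) for j in range(grid)]
            # the sample point with the most support wins this level
            best = max(candidates, key=lambda p: support(p[0], p[1], quads))
            # shrink the window to one grid step around the winner
            step_a = (hi_a - lo_a) / (grid - 1)
            step_b = (hi_b - lo_b) / (grid - 1)
            lo_a, hi_a = best[0] - step_a, best[0] + step_a
            lo_b, hi_b = best[1] - step_b, best[1] + step_b
        return best
    ```

    Note that no accumulator array is ever allocated: only the current list of sample points and their support values are held in memory, which is what makes the approach tractable when the number of pure classes, and hence the dimensionality of the accumulator space, is large.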