Topic: Multi-modal Sensor Fusion
By: John Fisher (MIT), Alan Willsky (MIT)
Array processing problems are typically cast assuming either a single sensing modality or that fusion of different modalities occurs at the decision level. The first leads to correlation-based approaches, while the second implicitly assumes some degree of independence between modalities. Our approach addresses fusion of multiple modalities (e.g., acoustic/video) at the signal level, using mutual information as the optimization criterion for projecting the observations into a maximally informative subspace. We show that this approach is consistent with a multiple-independent-cause model of the observed signals.
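As a concrete illustration of the idea, consider the special case where the projected signals are modeled as jointly Gaussian: maximizing mutual information between one-dimensional projections of the two modalities then reduces to canonical correlation analysis (CCA), with MI given by -0.5 log(1 - rho^2) for canonical correlation rho. The sketch below is an assumption-laden simplification, not the method of the paper (which uses a nonparametric MI estimate); the synthetic data, feature dimensions, and function names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a shared latent "cause" drives both modalities,
# each observed with independent noise (the multiple-cause picture).
n = 2000
latent = rng.standard_normal(n)
X = np.outer(latent, [1.0, -0.5, 0.3]) + 0.5 * rng.standard_normal((n, 3))  # stand-in acoustic features
Y = np.outer(latent, [0.8, 0.2]) + 0.5 * rng.standard_normal((n, 2))        # stand-in video features

def cca_first_pair(X, Y, reg=1e-6):
    """First pair of canonical directions (wx, wy) and their correlation rho."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / len(X)
    # Whiten each modality, then take the top singular pair of the
    # whitened cross-covariance: that pair maximizes correlation,
    # hence Gaussian mutual information, of the 1-D projections.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    wx = np.linalg.solve(Lx.T, U[:, 0])
    wy = np.linalg.solve(Ly.T, Vt[0])
    return wx, wy, s[0]

wx, wy, rho = cca_first_pair(X, Y)
# For jointly Gaussian projections, MI = -0.5 * log(1 - rho^2) nats.
mi = -0.5 * np.log(1.0 - rho**2)
print(f"canonical correlation: {rho:.3f}, Gaussian MI: {mi:.3f} nats")
```

Because both modalities are driven by the same latent cause, the learned projections recover a strongly correlated (high-MI) pair; with independent modalities, rho and the MI estimate would both be near zero.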