With the spread of physical sensors and social sensors, we are living in a world of big sensor data. Although the two generate heterogeneous data, they often provide complementary information. Combining these two types of sensors enables sensor-enhanced social media analysis, which can lead to a better understanding of dynamically occurring situations.
We utilize event-related information detected from physical sensors to filter and then mine geo-located social media data, in order to obtain high-level semantic information. Specifically, we apply a suite of visual concept detectors to video camera feeds to generate "camera tweets" and develop a novel multi-layer tweeting cameras framework. We fuse "camera tweets" and social media tweets via a unified matrix factorization model: we factorize a spatio-temporal situation matrix formed from physical sensor signals, incorporating the surrounding social content to extract a set of latent topics that can explain the concept signal strengths. We have tested our method on large-scale real data, including PSI (Pollutant Standards Index) station readings, traffic CCTV camera images, and tweets, for situation prediction as well as for filtering noise from events across diverse situations. The experimental results show that the proposed approach is effective.
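The abstract does not spell out the exact form of the unified factorization, so the following is only a minimal sketch of one plausible setup: a joint nonnegative matrix factorization in which a camera-concept signal matrix X (spatio-temporal cells x visual concepts) and a tweet term matrix Y (same cells x vocabulary) share a common cell-topic factor W, so that each latent topic jointly explains concept signal strengths and surrounding social content. All names, the weighting parameter lam, and the multiplicative-update solver are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def joint_nmf(X, Y, k=10, lam=1.0, n_iter=200, eps=1e-9, seed=0):
    """Illustrative joint NMF (not the paper's exact model).

    Minimizes ||X - W Hx||_F^2 + lam * ||Y - W Hy||_F^2 over
    nonnegative factors, using standard multiplicative updates.
    W  : cells x topics  (shared latent topics)
    Hx : topics x visual concepts
    Hy : topics x tweet vocabulary
    """
    assert X.shape[0] == Y.shape[0], "X and Y must cover the same cells"
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = rng.random((n, k))
    Hx = rng.random((k, X.shape[1]))
    Hy = rng.random((k, Y.shape[1]))
    for _ in range(n_iter):
        # Update topic descriptions for each modality.
        Hx *= (W.T @ X) / (W.T @ W @ Hx + eps)
        Hy *= (W.T @ Y) / (W.T @ W @ Hy + eps)
        # Update shared topic weights using both modalities.
        W *= (X @ Hx.T + lam * Y @ Hy.T) / (
            W @ (Hx @ Hx.T + lam * Hy @ Hy.T) + eps)
    return W, Hx, Hy

# Toy usage: 50 spatio-temporal cells, 8 visual concepts, 300 tweet terms.
X = np.random.default_rng(1).random((50, 8))
Y = np.random.default_rng(2).random((50, 300))
W, Hx, Hy = joint_nmf(X, Y, k=5)
# Row t of W gives the topic mixture for cell t; rows of Hx and Hy
# describe each topic in camera-concept and tweet-vocabulary space.
```

Sharing W across both factorizations is what lets the social content regularize the physical-sensor signals: a topic must account for the tweet text as well as the concept strengths, which gives an interpretable explanation for each detected signal.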
Presented at the 11th Bay Area Multimedia Forum (BAMMF), 13 December 2016 (bammf.org)