Joint Modeling & Querying of Social Media & Video

As the amount of user-generated data increases, it becomes more challenging to effectively search this data for useful information. There has been work on searching text-based social media posts, such as tweets, and on searching videos. However, searching these sources with separate tools is ineffective because the information links between them are lost; for instance, one cannot automatically match social network posts with activities seen in a video. As an example, consider a set of tweets and videos (which may be posted on Twitter or other media) generated during a riot. A police detective would like to jointly search this data to find material related to a specific incident, such as a car fire. Some tweets (with no contained video) may comment on the car fire, while a video segment from another tweet shows the car during or after the fire. Linking the videos with the relevant social media posts, which is the focus of this project, can greatly reduce the effort of searching for useful information. The successful completion of this project has the potential to improve the productivity of people who search social media, such as police detectives, journalists, or disaster management authorities. This project will also strengthen and extend the ongoing undergraduate research and high school outreach activities of the investigators.

The objective of this project is to address the fundamental research tasks that would allow joint modeling of social network and video data. Given a set of posts, the system would find relevant video segments, and vice versa, by defining a common feature space for social media and video data. This proof-of-concept project will be evaluated on posts and videos shared on the Twitter platform. This is the right time to tackle this problem given the recent advances in deep learning and big data management technologies. A key risk is that the semantics of a tweet may not be enough to map it to a video segment; in that case, the context of the tweet (e.g., tweets from closely related users) may need to be leveraged.
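
As a concrete illustration of the common feature space idea, the sketch below (in PyTorch) projects precomputed tweet-text features and video-segment features into a shared embedding space, trains the projections with a standard triplet ranking loss, and ranks video segments for a query tweet by cosine similarity. The feature dimensions, loss, and architecture are illustrative assumptions for exposition, not the project's actual implementation.

    # Minimal sketch of a joint tweet-video embedding for cross-modal retrieval.
    # Assumes precomputed tweet-text and video-segment features from any
    # off-the-shelf encoders; all dimensions below are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointEmbedding(nn.Module):
        """Projects text and video features into a shared, L2-normalized space."""
        def __init__(self, text_dim=300, video_dim=2048, joint_dim=256):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, joint_dim)
            self.video_proj = nn.Linear(video_dim, joint_dim)

        def forward(self, text_feats, video_feats):
            t = F.normalize(self.text_proj(text_feats), dim=-1)
            v = F.normalize(self.video_proj(video_feats), dim=-1)
            return t, v

    def triplet_ranking_loss(t, v, margin=0.2):
        """Hinge ranking loss using all other in-batch pairs as negatives."""
        sims = t @ v.t()                    # cosine similarities (unit-norm embeddings)
        pos = sims.diag().unsqueeze(1)      # matching tweet-video pairs on the diagonal
        cost_t2v = (margin + sims - pos).clamp(min=0)      # tweet -> video direction
        cost_v2t = (margin + sims - pos.t()).clamp(min=0)  # video -> tweet direction
        mask = torch.eye(sims.size(0), dtype=torch.bool)
        cost_t2v = cost_t2v.masked_fill(mask, 0.0)         # ignore the positives themselves
        cost_v2t = cost_v2t.masked_fill(mask, 0.0)
        return cost_t2v.mean() + cost_v2t.mean()

    # Toy usage: random vectors stand in for real tweet/video features.
    model = JointEmbedding()
    text_feats = torch.randn(8, 300)    # 8 tweets
    video_feats = torch.randn(8, 2048)  # their 8 matching video segments
    t, v = model(text_feats, video_feats)
    loss = triplet_ranking_loss(t, v)
    loss.backward()

    # Retrieval: rank all video segments for the first tweet by cosine similarity.
    with torch.no_grad():
        t, v = model(text_feats, video_feats)
        ranking = (t[0] @ v.t()).argsort(descending=True)

Once trained, the same shared space supports both directions of the search scenario above: retrieving video segments for a tweet, or retrieving tweets that comment on a given video segment.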


People


Publications

Conferences/Workshops/Journals

  • Niluthpol Chowdhury Mithun, Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury. Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval. ACM International Conference on Multimedia Retrieval, 2018.
  • Alexander Gorovits, Ekta Gujral, Evangelos Papalexakis, and Petko Bogdanov. LARC: Learning Activity-Regularized Overlapping Communities Across Time. ACM KDD, 2018.
  • Ravdeep Pasricha, Ekta Gujral, and Evangelos Papalexakis. Identifying and Alleviating Concept Drift in Streaming Tensor Decomposition. ECML-PKDD, 2018.
  • Niluthpol Chowdhury Mithun, Rameswar Panda, Evangelos Papalexakis, and Amit Roy-Chowdhury. Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval. ACM International Conference on Multimedia, 2018.
  • Ekta Gujral, Ravdeep Pasricha, and Evangelos Papalexakis. SamBaTen: Sampling-based Batch Incremental Tensor Decomposition. SIAM SDM, 2018.
  • Mike Izbicki, Evangelos E. Papalexakis, and Vassilis J. Tsotras. Exploiting the Earth's Spherical Geometry to Geolocate Images. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), Würzburg, Germany, September 16-20, 2019.
  • Mike Izbicki, Evangelos Papalexakis, and Vassilis Tsotras. Geolocating Tweets in any Language at any Location. CIKM, Beijing, China, November 3-7, 2019.

Broader Impact

High-School Outreach, supported by NSF


NSF Link

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1746031&HistoricalAwards=false


This material is based upon work supported by the National Science Foundation under Grant No. 1746031.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.