Traditional Emergency Management Systems (EMS) mainly focus on the institutional warning response and do not fully exploit the active participation of the citizens involved. During emergency events, citizens are usually regarded as people to be rescued rather than as active participants. Today, the widespread adoption of digital media and the production of content by ordinary people have marked a significant change in the study of disaster contexts and have made it possible to analyse an event from a completely new perspective: that of the citizens involved.
Through blogs, social networking sites, and video/photo-sharing applications, a large number of citizens are able to produce, upload, and share content related to the impact of a disaster, the emergency response, search and rescue operations, the restoration phase, and so on. All this social content can be exploited to provide more accurate situational awareness of the event from below, complementing traditional EMS. This thesis focuses on a Smart Multimedia User Generated Content Retrieval system (SMR) expressly conceived for event detection and situational awareness applications. Based on state-of-the-art clustering algorithms, it is able to locate an event and extract the most significant multimedia content. Unlike existing EMS, the proposed SMR system analyses not only the textual content posted by users during an event, but also the visual content. To perform this task, specific computer vision algorithms are exploited to evaluate images retrieved from social platforms.
Retrieved images are then displayed to emergency operators through a user-friendly graphical interface. Significant results have been obtained by testing the system on over 60 events that occurred in 2015: more than 130K images were retrieved and analysed by the proposed SMR system. The results are promising and demonstrate both the feasibility and the value of the proposed SMR system.
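As a purely illustrative sketch (not the thesis implementation, whose clustering algorithms are not detailed here), event localisation from geotagged posts can be approximated by a simple density-based grouping: posts that lie within a small radius of each other are merged into one candidate event, and isolated posts are discarded as noise. All function names and thresholds below are hypothetical.

```python
import math

def cluster_posts(posts, eps_km=5.0, min_pts=3):
    """Greedy density-based grouping of geotagged posts (DBSCAN-like sketch).

    posts: list of (lat, lon) tuples. Returns a list of clusters, each a
    sorted list of post indices. Groups with fewer than min_pts members
    are treated as noise and dropped.
    """
    def dist_km(a, b):
        # Equirectangular approximation; adequate at city scale.
        dlat = math.radians(b[0] - a[0])
        dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
        return 6371.0 * math.hypot(dlat, dlon)

    unvisited = set(range(len(posts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        members = {seed}
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            # Pull every still-unvisited post within eps_km into the cluster.
            near = [q for q in unvisited if dist_km(posts[p], posts[q]) <= eps_km]
            for q in near:
                unvisited.remove(q)
                members.add(q)
                frontier.append(q)
        if len(members) >= min_pts:
            clusters.append(sorted(members))
    return clusters
```

For example, three posts around Udine and three around Naples would yield two clusters, while a lone post elsewhere would be discarded; each cluster centroid then marks a candidate event location.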
For more information on my PhD thesis, do not hesitate to contact me.
This work addresses the problem of video surveillance of outdoor environments with unmanned aerial vehicles (UAVs). Specifically, it proposes a two-step approach, with an initial offline stage in which a mosaic of the zone to be monitored is built from video sequences. The second step tackles the problem of online detection of relevant differences between the acquired images and the mosaic model. A GPS-assisted approach is proposed to deal with efficiency issues in this online step. Experimental results show that the proposed approach can detect relevant changes in the specific case of road-safety assurance in dangerous zones.
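The GPS-assisted idea above can be sketched as follows: the UAV's GPS fix indexes directly into a tile of the pre-built mosaic, so each new frame is compared only against its local reference patch rather than registered against the whole mosaic. The tiling scheme, data layout, and threshold here are hypothetical, not the paper's actual method.

```python
def detect_change(mosaic, gps, frame, threshold=20.0):
    """GPS-assisted change detection sketch (hypothetical data layout).

    mosaic: dict mapping a coarse GPS key (lat_idx, lon_idx) to a reference
    grayscale patch (list of rows of pixel values). The GPS fix selects the
    reference patch in O(1), which is where the efficiency gain comes from.
    Returns True if the frame differs from the reference beyond threshold.
    """
    # Quantise the fix to roughly 100 m tiles (illustrative resolution).
    key = (round(gps[0] * 1000), round(gps[1] * 1000))
    ref = mosaic.get(key)
    if ref is None:
        return True  # unmapped area: flag it for the operator
    # Mean absolute pixel difference between reference patch and frame.
    diffs = [abs(p - q) for rrow, frow in zip(ref, frame)
             for p, q in zip(rrow, frow)]
    return sum(diffs) / len(diffs) > threshold
```

In practice the comparison would be preceded by local image alignment, since GPS alone is too coarse for pixel-accurate registration; the sketch only shows the lookup-then-compare structure.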
C. Piciarelli, C. Micheloni, N. Martinel, M. Vernier, G.L. Foresti, Outdoor environment monitoring with unmanned aerial vehicles, International Conference on Image Analysis and Processing (ICIAP), Naples, Italy, September 9-13, 2013.
This work introduces a novel method for person re-identification using embedded smart cameras. State-of-the-art methods address the re-identification problem using global and local features, metric learning, and feature transformation algorithms. Such methods require advanced systems with high computational capabilities. Nowadays, there is a growing interest in security applications using embedded cameras. Motivated by this, we propose a new system that addresses the challenges posed by the re-identification problem using devices with limited resources (e.g. smartphones). In this work we introduce a novel client-server system that exploits a feature learning method to achieve a two-fold objective: (i) maximize the re-identification performance over time and (ii) reduce the required computational costs. In the training phase, state-of-the-art features are selected considering both the device capabilities and the re-identification performance. During the detection phase, the re-identification performance is maximized by selecting the best features for a given input image. To demonstrate the performance of the proposed method, we conduct experiments on different mobile devices. Statistics about feature extraction and feature matching are presented together with re-identification results.
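The trade-off described above, choosing features by their re-identification benefit under a device's compute budget, can be sketched with a simple greedy selection. The feature names, accuracy gains, and per-feature costs below are invented for illustration and are not taken from the paper.

```python
def select_features(features, budget_ms):
    """Greedy feature selection under a device compute budget (sketch).

    features: list of (name, accuracy_gain, cost_ms) tuples. Features are
    ranked by gain per millisecond and added while the budget allows,
    mirroring the idea of matching the feature set to device capabilities.
    Returns the list of selected feature names.
    """
    ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    chosen, spent = [], 0.0
    for name, gain, cost in ranked:
        if spent + cost <= budget_ms:
            chosen.append(name)
            spent += cost
    return chosen
```

A low-end phone with a tight budget would then end up with cheap descriptors only, while a powerful device could afford the more expensive ones as well.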
M. Vernier, N. Martinel, C. Micheloni, G.L. Foresti, Remote Feature Learning for Mobile Re-Identification, International Conference on Distributed Smart Cameras (ICDSC), Palm Springs, CA, USA, October 29 – November 1, 2013.
This work introduces a novel information visualization technique for mobile devices based on Augmented Reality (AR). A painting boundary detector and a feature extraction module have been implemented to compute painting signatures. The computed signatures are matched using a linear weighted combination of the extracted features. The detected boundaries and features are then exploited to compute homography transformations, which are used to introduce a novel user interaction technique for AR. Three different user interfaces have been evaluated using standard usability methods.
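The linear weighted combination used for signature matching can be sketched as follows: each painting signature is a set of feature vectors, a distance is computed per feature, and the weighted sum picks the best gallery match. The feature names, distance metric, and weights here are illustrative assumptions, not the paper's actual choices.

```python
def match_signature(query, gallery, weights):
    """Match a painting signature via a linear weighted combination of
    per-feature distances (sketch; feature names are hypothetical).

    query: dict feature_name -> vector.
    gallery: dict painting_id -> signature (same dict-of-vectors layout).
    weights: dict feature_name -> weight in the linear combination.
    Returns the gallery id with the smallest combined distance.
    """
    def l1(a, b):
        # L1 distance between two equal-length feature vectors.
        return sum(abs(x - y) for x, y in zip(a, b))

    def combined(sig):
        return sum(weights[k] * l1(query[k], sig[k]) for k in weights)

    return min(gallery, key=lambda pid: combined(gallery[pid]))
```

Once the best-matching painting is identified, its stored boundary points and the detected boundary in the camera frame provide the correspondences from which the homography can be estimated.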
N. Martinel, M. Vernier, C. Micheloni, C. Piciarelli, Image Processing Supports HCI in Museum Application, International Conference on Computer Vision Theory and Applications (VISAPP), Barcelona, Spain, 2012.