2021 marks a growing trend of putting analytics directly into space. Remote sensing researchers used to download "raw" satellite images of the Earth from centralized websites to their own computers for further analysis (and, back in the 1950s, even caught physical film canisters dropped from a satellite ironically named Corona1); now their work is getting much easier. Initiatives like the one from the European Space Agency2 use artificial intelligence on board satellites to process images into ready-to-use products and send those to the ground. In the context of my research, disaster monitoring and assessment, that could mean no more hours spent working with raw images and building my own algorithms to derive the extent of disaster damage from space. Instead, the focus shifts towards using downloaded image products, such as flood masks, in more complex computer models for various applications, and towards integrating them with other datasets.
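To make the idea of an on-board "flood mask" product concrete, here is a minimal sketch using normalized difference water index (NDWI) thresholding, a standard water-detection technique. The band arrays, function name, and the threshold of 0.0 are illustrative assumptions, not the actual algorithm any agency flies:

```python
import numpy as np

def flood_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Return a boolean water/flood mask via NDWI thresholding.

    NDWI = (green - nir) / (green + nir). Water pixels tend to have
    NDWI > 0 because water reflects green light and absorbs near-infrared.
    The 0.0 threshold is a common starting point, not a calibrated value.
    """
    # Clip the denominator to avoid division by zero on dark pixels.
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

# Toy 2x2 scene: left column water-like (high green, low NIR),
# right column land-like (low green, high NIR).
green = np.array([[0.30, 0.05], [0.28, 0.04]])
nir   = np.array([[0.05, 0.30], [0.04, 0.35]])
print(flood_mask(green, nir))
```

Sending users this compact boolean mask instead of the full multi-band scene is exactly the bandwidth saving that makes on-board processing attractive.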
My current research is about urban disaster damage assessment that goes beyond simple from-above physical damage identification with satellite images. I am interested in linking pixels to people3 and understanding the impact of past and ongoing disasters on society. This large-scale analysis is, again, only possible thanks to the computational progress described above, the availability of usable data, and the emergence of big data for better understanding our changing environment. For disaster assessment, that means gathering and combining large volumes of near-real-time information from individuals together with field surveys, remotely sensed images of the area, and longer-term census data. For example, Fig. 1 (below) shows this "layer cake" of disparate data as implemented by a team of Ohio State researchers for hurricane flood monitoring. Their computer platform ingests many sensor feeds, including data from individuals on Twitter. Another interesting example is the FloodFactor platform5, which, rather than assessing the extent and damage of ongoing disasters, offers predictions of future damage risk for a given residential address. The methodology relies on deriving maps of flood probability and putting them in the context of each building's type and historical losses in the area. All in all, while merging datasets for damage assessment is not new, several methodological challenges remain and key datasets still need to be explored. The one I am focusing on right now is incorporating economic and social geospatial data as "proxies" for physical damage measures in situations of missing data.
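The "layer cake" idea of joining a satellite-derived flood layer with longer-term census data can be sketched in a few lines. Everything here is hypothetical: the tract names, field names, and numbers are invented, and the people-weighted exposure score is just one crude example of a proxy, not my actual methodology:

```python
# Per-tract flooded fraction, as might be derived from a satellite flood mask.
flooded_fraction = {
    "tract_a": 0.40,
    "tract_b": 0.05,
}

# Longer-term socioeconomic data per census tract.
census = {
    "tract_a": {"population": 3200, "median_income": 41_000},
    "tract_b": {"population": 1500, "median_income": 68_000},
}

def exposure_proxy(tract: str) -> float:
    """People-weighted flood exposure for one tract.

    A crude stand-in for physical damage when direct damage
    surveys are missing: flooded area fraction times population.
    """
    return flooded_fraction[tract] * census[tract]["population"]

for tract in flooded_fraction:
    print(tract, exposure_proxy(tract))
```

In practice the join key would be a geographic boundary (zonal statistics over tract polygons) rather than a dictionary key, but the principle of enriching a physical layer with social data is the same.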
Most importantly, I would like to position my work within the context of smart cities. Disaster damage assessment is an integral part of future smart cities that "use connected technology and data to improve the efficiency of city service delivery, enhance quality of life for all, and increase equity and prosperity for residents and businesses"6. That matters all the more once one recognizes the worsening realities of climate change and the social vulnerabilities in cities across the globe that lead to disasters. The risk and damage assessment of the future is one that relies on interconnected sensors (satellites, social media, field data, etc.) and merged data, and is exercised by local municipalities for better decision-making. Accordingly, I seek to contextualize my future findings from local case studies within a broader narrative of smart city development and disaster risk reduction initiatives.

PhD Student, Department of Geography
References
1. http://heroicrelics.org/info/corona/corona-overview.html
2. https://www.nature.com/articles/s41598-021-86650-z
3. https://www.nap.edu/catalog/5963/people-and-pixels-linking-remote-sensing-and-social-science
4. https://dl.acm.org/doi/pdf/10.1145/3331184.3331405
5. https://floodfactor.com
6. https://smartcitiesconnect.org/what-a-smart-city-is-and-is-not/
Wow, this is fascinating and really exciting stuff! What biases are there (or could there be) in these new products, e.g., is the European satellite data uniformly useful in all contexts? Can they incorporate regionally specific contextual information in different types of cities?
These are all great questions. If we are considering satellites that send users products already processed on board (like flood masks), then the flood delineation algorithm itself is not necessarily where bias arises. What could be biased is whether the satellites cover all continental areas with equal revisit time. The overall approach these days seems to be launching constellations of nanosatellites to provide more rapid data across the globe, even if, for example, it is a European agency launching them. Some other initiatives in this field provide satellite image datasets covering a diverse set of geographic regions, which helps machine learning models overcome training biases and become more accurate. The latter could be a PhD project on its own: how to make machine learning/AI less biased.