Quantifying disruptions to human activity in near-real-time: A tutorial
A new use case for Facebook’s Movement Range Maps
This work has been done entirely using open data, and was co-authored with Kai Kaiser. All errors and omissions are those of the author(s).
A few months into the pandemic, Facebook made a new product, Movement Range Maps, publicly available to the humanitarian community: data that illustrates how cities, municipalities, and towns have responded to physical distancing measures. Because of the striking similarities in how natural disasters and pandemic-induced lockdowns restrict human movement, we find a new use case for Movement Range Maps: city-level disaster analytics.
Movement Range Maps provide two mobility metrics, updated daily: Change in Movement (how much people are moving compared to a baseline established before Feb 2020) and Stay Put (the proportion of Facebook users that have stayed in the same location all day).
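Once the data is downloaded (see the Data Processing section below), these two metrics appear as columns in a tab-separated file. The snippet below is a quick, illustrative peek at those columns; the file path and column names are taken from the public Movement Range Maps release and may differ between versions.
import pandas as pd

# Peek at the first few rows of the downloaded file. The path and column
# names are assumptions based on the public Movement Range Maps release;
# verify them against the README shipped with the download.
cols = ['ds', 'country', 'polygon_id',
        'all_day_bing_tiles_visited_relative_change',  # Change in Movement
        'all_day_percent_single_tile_users']           # Stay Put
mobility = pd.read_csv('data/mobility_data/movement-range.txt', sep='\t',
                       usecols=cols, nrows=5)
print(mobility)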
In two other blogs, we evaluate the representativeness of Facebook user data relative to the underlying population, and draw insights from Movement Range Maps for the 2020 typhoon season in Vietnam and the Philippines during the pandemic. In this tutorial, we walk through the typhoon analysis step by step in Python.
Full code for this tutorial can be found in this GitHub repo.
Setup
This analysis used hdx, os, sys, shutil, webbrowser, urllib, and zipfile for downloading, parsing, unzipping, and saving data; numpy, pandas, and geopandas for data processing; and matplotlib, datetime, and seaborn for visualization.
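For reference, a minimal import block covering these libraries might look like the following; the hdx imports come from the hdx-python-api package, and the Configuration import path has moved between versions, so adjust it to match your install.
import os
import sys
import shutil
import datetime
import urllib.request
import webbrowser
import zipfile

import numpy as np
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
import seaborn as sns

# hdx comes from the hdx-python-api package; in older versions the
# Configuration import lives at hdx.hdx_configuration instead.
from hdx.api.configuration import Configuration
from hdx.data.dataset import Dataset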
Data Processing
We defined a few utility classes for initial preprocessing. Below we explain how each of them works.
- The read_mobility() class prepares country boundaries and Facebook data for this analysis. read_mobility() has a download_from_gadm() method that takes a list of ISO alpha-3 country codes and downloads the corresponding GADM boundary shapefiles. Here, we instantiate read_mobility and pass ['VNM','PHL'] to download data for Vietnam and the Philippines. Unless the countries of interest change, this step only needs to be run once: if boundary data already exists for these countries under the data/boundaries filepath, the function skips the download.
r = read_mobility()
r.download_from_gadm(['VNM','PHL'])
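For intuition, a hypothetical sketch of what this download step could look like is below. The standalone download_from_gadm() function, the GADM 3.6 URL pattern, and the filenames are assumptions for illustration, not the repo's actual implementation.
import os
import urllib.request
import zipfile

def download_from_gadm(country_codes, out_dir='data/boundaries'):
    """Download and unzip GADM 3.6 shapefiles for each ISO alpha-3 code."""
    os.makedirs(out_dir, exist_ok=True)
    for iso3 in country_codes:
        # Skip countries whose level-2 shapefile is already on disk
        if os.path.exists(os.path.join(out_dir, f'gadm36_{iso3}_2.shp')):
            continue
        url = f'https://biogeo.ucdavis.edu/data/gadm3.6/shp/gadm36_{iso3}_shp.zip'
        zip_path = os.path.join(out_dir, f'gadm36_{iso3}_shp.zip')
        urllib.request.urlretrieve(url, zip_path)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(out_dir)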
- Movement Range Maps data is available on the Humanitarian Data Exchange (HDX), which provides a Python API, hdx, that lets us download the data within the same script as our analysis (and even set up automations to keep it up to date). Below, we instantiate the read_from_hdx() class and call its read_facebook_data() method, which saves the data under the data/mobility_data/ filepath in the working directory. This step should be re-run as frequently as required (or even automated), since the data is updated daily. If Facebook data already exists in your working directory, the function skips the download.
r = read_from_hdx()
data = r.read_facebook_data()
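Under the hood, the HDX step could be implemented roughly as follows. This is a sketch, not the repo's actual read_from_hdx() code; the 'movement-range-maps' dataset slug and the Configuration import path are assumptions to verify against the HDX catalogue and your installed hdx-python-api version.
from hdx.api.configuration import Configuration
from hdx.data.dataset import Dataset

# Connect to HDX in read-only mode and pull every resource in the
# Movement Range Maps dataset into data/mobility_data/.
Configuration.create(hdx_site='prod', user_agent='mobility-tutorial',
                     hdx_read_only=True)
dataset = Dataset.read_from_hdx('movement-range-maps')
for resource in dataset.get_resources():
    url, path = resource.download(folder='data/mobility_data')
    print(f'Downloaded {url} to {path}')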
- To differentiate affected areas from unaffected areas during a disaster, we define a new class called mobility_analytics, which takes the same ISO alpha-3 country codes. The mobility_analytics.find_affected() method takes a dictionary of custom shapefiles, with country codes as keys and shapefiles as values. For any country passed to the mobility_analytics class without a custom shapefile, the function will prompt you to select admin regions for the analysis.
m = mobility_analytics(['VNM','PHL'])
vnm_custom = gpd.read_file('vnm_extent.shp')
phl_custom = gpd.read_file('phl_extent.shp')
affected_dict = {'VNM':vnm_custom,'PHL':phl_custom}
affected = m.find_affected(affected_dict)
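For intuition, find_affected() could be implemented along these lines using a geopandas spatial join between each custom disaster extent and the GADM level-2 boundaries downloaded earlier. The boundary filenames, the standalone function signature, and the vnm_affected variable are illustrative assumptions, not the repo's actual code.
import geopandas as gpd

def find_affected(affected_dict, boundary_dir='data/boundaries'):
    """Return the GADM level-2 polygons intersecting each disaster extent."""
    affected = {}
    for iso3, extent in affected_dict.items():
        admin2 = gpd.read_file(f'{boundary_dir}/gadm36_{iso3}_2.shp')
        extent = extent.to_crs(admin2.crs)  # align coordinate systems
        # Keep admin-2 polygons intersecting the extent (use op= on older geopandas)
        hits = gpd.sjoin(admin2, extent, how='inner', predicate='intersects')
        affected[iso3] = admin2[admin2['GID_2'].isin(hits['GID_2'].unique())]
    return affected

vnm_affected = find_affected(affected_dict)['VNM']  # used in the plots below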
That’s it! We are done with setup and processing. Below are two plots that illustrate mobility changes for the Change in Movement and Stay Put metrics in Vietnam, respectively.
[Plot: Change in Movement, affected vs. unaffected regions in Vietnam]
[Plot: Stay Put, affected vs. unaffected regions in Vietnam]
Code used to generate the Stay Put plot is as follows. Change 'all_day_percent_single_tile_users' to 'all_day_bing_tiles_visited_relative_change' to plot the Change in Movement metric instead.
fig, ax = plt.subplots(figsize=(14,3))
affected = data[data['polygon_id'].isin(vnm_affected['GID_2'].unique())].groupby('ds').mean().reset_index()
unaffected = data[~data['polygon_id'].isin(vnm_affected['GID_2'].unique()) & (data['country']=='VNM')].groupby('ds').mean().reset_index()
sns.lineplot(data=affected, x='ds', y='all_day_percent_single_tile_users', ax=ax)
sns.lineplot(data=unaffected, x='ds', y='all_day_percent_single_tile_users', ax=ax)
This analysis was conducted by the World Bank in partnership with Facebook Data for Good. Special thanks go to the Australian Government Department of Foreign Affairs and Trade (DFAT) for their support of this work through the Vietnam Big Data Observatory for COVID-19 Socio-Economic Response, Recovery, and Resilience as part of the Australia World Bank Strategic Partnership in Vietnam, Phase 2. All errors and omissions are those of the author.
Please email us for questions or if you would like to discuss the use of this data for other natural disaster contexts! Full code for this tutorial can be found in this GitHub repo.