In 1981, urban design expert Allan B. Jacobs noticed a significant detail while visiting a housing development in Tangshan, China: handmade grates covering many windows and porches. This observation led him to comment that, in the United States, such grates would signal to visitors that the neighborhood was considered unsafe. When his Chinese colleague confirmed the same interpretation held in China, Jacobs wondered why urban planners did not make use of these simple visual clues to understand neighborhoods.
This is how Jacobs begins his influential book, Looking at Cities, which focuses on the art of urban observation and detection of urban clues. Fast forward four decades, and Jacobs' framework remains pertinent despite drastic transformations in our tools for studying and visualizing cities. We no longer need a notebook and comfortable shoes to comprehend an urban environment. Instead, with just a computer and internet connection, we can analyze any neighborhood worldwide using Jacobs' approach.
Technological advancements now allow us to efficiently review images, identify urban clues, and reveal spatial patterns across entire neighborhoods or cities. By applying machine learning algorithms to high-resolution aerial imagery and street view imagery, we can make comprehensive multi-view urban observations from both sky and street level. The resulting georeferenced datasets and accompanying visualizations offer enormous potential for creating resilient, healthy communities by aiding spatial planning and disaster risk management.
This data enables planners and engineers to analyze the built components of blocks, neighborhoods, and cities by providing an overview of specific urban characteristics such as building size, use, masonry type, vintage, roof material and condition, wall material and condition, and overall condition (derived from the roof and wall conditions). This rapid, high-resolution screening across multiple square kilometers makes pre- and post-disaster planning cheaper and more efficient.
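To make the attribute-extraction step concrete, here is a minimal sketch of one common way such characteristics can be predicted from imagery: fine-tuning a pretrained image classifier on labeled building crops. This is an illustration of the general technique, not the exact pipeline used in the projects described here; the folder layout, class names, and training settings are assumptions for the example.

```python
# Sketch: fine-tune a pretrained classifier to label one attribute
# (e.g. roof condition) from street-view building crops.
# Assumed folder layout: roof_condition/{good,fair,poor}/*.jpg
import torch
from torch import nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("roof_condition", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the final layer with one output
# per attribute class (here: good / fair / poor).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same recipe can be repeated for each attribute of interest (wall material, building use, vintage), with predictions then joined back to building footprints by location to produce the georeferenced datasets described above.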
A newly published paper supported by the Global Facility for Disaster Reduction and Recovery’s (GFDRR’s) Global Program for Resilient Housing outlines how this approach allows planners to identify, at scale, specific buildings that need improvement or strengthening. It also shows promise as a proxy for social vulnerability, an essential aspect of disaster risk management. The approach can further inform region-wide traffic and infrastructure management decisions and scan for buildings’ structural vulnerabilities in earthquake-prone areas.
This method is applicable in almost any environment, particularly in neighborhoods and cities with limited recent data describing their built environment or those prone to natural hazards. The street view imagery, once collected, can be uploaded to Mapillary, a crowdsourced platform that blurs faces and license plates before making the imagery publicly available.
However, when leveraging artificial intelligence (AI), machine learning (ML), or deep learning (DL), it is crucial to follow ethical guidelines and to mitigate potential biases in the training data. Local experts and other users can then review the resulting datasets to verify the accuracy of the predictions and classifications made by the algorithms. This approach has proven effective in various countries, including Colombia, Guatemala, Indonesia, Mexico, Paraguay, Peru, Saint Lucia, and Sint Maarten.
In these projects, urban imagery and machine learning results are navigable in a browser interface. This accessibility allows local planners and officials with an internet connection to verify the machine learning predictions against the imagery and to conduct simple analyses that visualize patterns in the built environment.
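As a hedged example of the kind of simple analysis this makes possible, the short sketch below aggregates per-building predictions into neighborhood-level shares and maps them as a choropleth. The file names and column names (predictions.geojson, neighborhoods.geojson, roof_condition, name) are hypothetical placeholders, not the actual project data.

```python
# Aggregate building-level predictions into neighborhood shares and map them.
import geopandas as gpd

buildings = gpd.read_file("predictions.geojson")        # one record per building
neighborhoods = gpd.read_file("neighborhoods.geojson")  # polygons with a "name" column

# Flag buildings whose roof condition was predicted as poor.
buildings["poor_roof"] = (buildings["roof_condition"] == "poor").astype(int)

# Spatially join buildings to neighborhoods and compute the share of
# poor-condition roofs in each one.
joined = gpd.sjoin(buildings, neighborhoods, how="inner", predicate="within")
shares = joined.groupby("name")["poor_roof"].mean().rename("poor_roof_share")
neighborhoods = neighborhoods.merge(shares, on="name", how="left")

# A choropleth makes the spatial pattern visible at a glance.
neighborhoods.plot(column="poor_roof_share", legend=True, cmap="OrRd")
```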
By combining classic observational principles with machine learning algorithms to evaluate buildings, neighborhoods, and urban areas, we can now make educated, efficient deductions about vast, complex built environments. Granular, georeferenced information and visualizations are available to save lives, protect assets, and shield economies from increasing disaster risks.
In conclusion, by 2024 we have finally realized Jacobs' vision from 1981: widespread, simple visual analysis that informs urban development, using machine learning and imagery collected from the sky and the street, and ultimately improving people's lives.