As technology improves, there are new and better ways to capture the data that informs design. Years of historical data are available from a variety of sources, and sensor technology is becoming a major part of design, promising a new layer of insight.

In 1969, William H. “Holly” Whyte decided to analyze, and eventually decode, New York City’s rambunctious street life. Whyte, a famed author, was recruited along with a handful of collaborators by the city’s planning commission to set up cameras and surreptitiously track human activity.

Whyte and his team spent countless afternoons filming parks, plazas, and crosswalks, and even more time counting, crossing out, analyzing, and quantifying the footage. Notations were made for how people met and shook hands. Pedestrian movement was mapped on pads of graph paper. To get accurate assessments of activity at a street corner, Whyte’s researchers screened footage by hand, counting people caught waiting for lights to change. Imagine how much time it took to figure out that, at the garden of St. Bartholomew’s Church, the average density at lunchtime was 12 to 14 people per 1,000 square feet.

Observe a city street corner, crosswalk, or plaza long enough, and eventually, energy and entropy give way to understanding. The public greeted Whyte’s work with curiosity and amusement. “One thing he has discovered is where people schmooze,” deadpanned a 1974 New York Times article. “The other thing he has discovered is that they like it.”

Whyte’s Street Life Project was a revelation. He offered nuggets not of gold but of actionable data that helped shape city policy: peak versus off-peak activity, average densities, walking patterns. Called “one of America’s most influential observers of the city,” Whyte produced insights and hard-earned wisdom that informed New York’s 1969 city plan, helped revise its zoning code, and helped turn once-squalid Bryant Park into a prized public space.

What’s inspiring, and a little mind-boggling, about Whyte’s process is that until relatively recently, planners still practiced that kind of time-consuming manual observation. Infrared cameras and other technologies that make data-gathering easier have existed for years, but going beyond surveys, personal observations, and educated guesses often still required hand counts and film study.

With smartphones in our pockets, and smart city technology increasingly embraced by local leaders, it may seem like we’re already awash in a flood of urban data. But that’s a drizzle next to the oncoming downpour that may radically transform our understanding of cities and how they function. Two rapidly rising technologies—computer vision and machine learning—offer the potential to revolutionize understanding of urban life.

“The ability to transmit images into data, without human intervention, is the single most powerful thing,” says Rohit Aggarwala, chief policy officer at Sidewalk Labs, the Google urban technology spinoff that is building its own smart neighborhood in Toronto.

With the advent of ever-cheaper cameras, computer vision analysis to turn images into data, and machine learning to turn data into patterns, predictions, and plans, suddenly every city is on the verge of being able to do what William H. Whyte did, without the staff. Technological advancement seems guaranteed: In 2016 alone, venture capital firms invested half a billion dollars in computer vision companies, while estimated global spending on machine learning ranged between $4.8 billion and $7.2 billion. The Cities of Data project at New York University expects the urban science and informatics field to grow into a $2.5 billion enterprise by 2030, and at the Consumer Electronics Show in Las Vegas earlier this month, more vendors identified themselves as selling “smart city” tech than gaming or drone products. As a younger generation of digitally native city planners steps into office, with automation and autonomous vehicles on the horizon, the hunger and sense of urgency for improving municipal technology have never been greater.
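To make the “images into data” idea concrete, here is a minimal illustrative sketch, not drawn from Whyte’s methods or any vendor’s system, of how a fixed camera feed could be reduced to the kind of counts Whyte’s team tallied by hand. It assumes a hypothetical video file named plaza.mp4 and uses OpenCV’s stock HOG pedestrian detector; a production pipeline would use far more accurate models, but the shape is the same: frames in, counts out.

```python
# Illustrative sketch: turn a fixed-camera video into rough pedestrian counts,
# the sort of data Whyte's researchers recorded on pads of graph paper.
# Assumes a hypothetical input file "plaza.mp4".
import cv2

# OpenCV ships a pretrained HOG + linear-SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("plaza.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if metadata is missing

counts = []                 # pedestrians detected in each sampled frame
frame_index = 0
sample_every = int(fps)     # sample roughly one frame per second

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % sample_every == 0:
        # detectMultiScale returns bounding boxes for likely pedestrians
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        counts.append(len(boxes))
    frame_index += 1

cap.release()

if counts:
    print(f"Sampled {len(counts)} frames; "
          f"average pedestrians per frame: {sum(counts) / len(counts):.1f}; "
          f"peak: {max(counts)}")
```

The per-frame counts could then feed the second half of the pipeline, where a statistical or machine learning model looks for patterns such as peak versus off-peak activity or average densities by time of day.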
