



R03 → Learning from Great Streets: A Pix2Pix-Based Approach for Street Plan Generation
Keywords: #Pix2Pix #Generative Design #Urban Streets #Street Plan Generation · Year: 2023
This project was completed by Hehan Zhou and Wang Xin as an assignment for the research-based master's course "Frontiers in Artificial Intelligence and Digital Design".


A Pix2Pix-based approach for street plan generation
This project explores the use of Pix2Pix, a generative adversarial network, to create high-quality urban street plans inspired by the principles in Great Streets by Allan B. Jacobs. By translating qualitative street qualities into labeled visual elements—such as sidewalks, greenbelts, trees, and buildings—the study constructs a dataset of street plan samples and trains a model to generate full layouts from base road inputs. Through two rounds of model training and annotation refinement, the Pix2Pix network not only learns the spatial relationships between elements but also exhibits early signs of design logic and spatial awareness. The research demonstrates the potential of AI-driven generative tools to support urban design at the street scale, particularly in the context of public space planning and design prototyping.
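One way to read "labeled visual elements" concretely: each element class is drawn as a flat color in the annotated plan, so Pix2Pix learns a pixel-aligned mapping from the base road drawing to the color-coded layout. The legend below is only an illustrative sketch; the project's actual classes and colors are not specified here, so every value is an assumption.

```python
import numpy as np

# Hypothetical color legend for the annotated street plans. The class
# names follow elements mentioned in the text (sidewalks, greenbelts,
# trees, buildings, markers); the RGB values are illustrative assumptions.
LEGEND = {
    "road":      (200, 200, 200),
    "sidewalk":  (244, 164,  96),
    "greenbelt": (152, 251, 152),
    "tree":      ( 34, 139,  34),
    "building":  (105, 105, 105),
    "marker":    (255,   0,   0),   # entry/exit markers
}

def uses_only_legend_colors(annotation: np.ndarray) -> bool:
    """Sanity-check that an H x W x 3 annotation map contains only
    legend colors, keeping the paired dataset consistent before training."""
    palette = set(LEGEND.values())
    pixels = annotation.reshape(-1, 3)
    return all(tuple(int(v) for v in p) in palette for p in pixels)
```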

Motivation & Objectives
Most AI studies in urban design focus on streetscapes, leaving street plan generation underexplored. Given the critical role of plan layouts in public space design and urban renewal, this project asks whether AI can learn to generate street plans that embody the qualities defined in Great Streets, with potential applications in rapid prototyping, scenario planning, and intelligent design assistance.

Data & Method
A set of 39 street plan cases from Great Streets was redrawn and annotated with key spatial elements such as trees, sidewalks, markers, and driveways, then augmented to 195 pairs through flipping and mirroring (see the sketch below). The Pix2Pix model was trained in two rounds: the first used the full feature set; the second applied label simplification, task separation (e.g., training shadows independently), and parameter tuning to improve the clarity and spatial consistency of the outputs.
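As a sketch of the augmentation step: each redrawn case yields the original plus four mirrored variants, which accounts for the 39 × 5 = 195 pairs. The exact transform set is not stated, so the reflections chosen below are an assumption; the essential point is that input and target receive the identical transform, preserving the pixel alignment Pix2Pix depends on.

```python
import numpy as np

def augment_pair(inp: np.ndarray, tgt: np.ndarray):
    """Yield five variants of an aligned (input, target) street plan pair:
    the original plus four flip/mirror transforms. Both images get the
    same transform so their pixel alignment is preserved. Assumes square
    tiles (e.g. 256 x 256, the usual Pix2Pix input size)."""
    transforms = [
        lambda im: im,                        # original
        lambda im: np.fliplr(im),             # horizontal mirror
        lambda im: np.flipud(im),             # vertical mirror
        lambda im: np.rot90(im, 2),           # 180-degree turn (both mirrors)
        lambda im: np.fliplr(np.rot90(im)),   # mirror across a diagonal
    ]
    for t in transforms:
        yield t(inp), t(tgt)

# 39 annotated cases x 5 variants = 195 aligned training pairs.
```

The resulting pairs can then be saved side by side in single images, the aligned-dataset format expected by, e.g., the reference pytorch-CycleGAN-and-pix2pix implementation of Pix2Pix.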
Results & Analysis
The second training round significantly improved the model's spatial reasoning. Generated plans displayed clearer boundaries between buildings, sidewalks, and greenbelts. 3D model reconstructions revealed strong learning of walkability, edge clarity, and greening, while more abstract qualities such as entry/exit markers and spatial symbolism remained challenging. This suggests varying levels of learnability across design features.

Implications & Outlook
This study shows that generative models can support design at the street plan level—not just in visual representation but in embedded spatial logic. Future work may expand datasets across cities and climates to train transferable design models. Additionally, integrating elevation, section data, or behavioral patterns could further enhance model richness and real-world applicability.
Sample Library Construction – Case Selection



Data Standardization Processing


Model Training

Testing Results