San Francisco locals who think they know their neighborhood like the back of their hand will find their match in Waymo’s neural networks, which have not only studied millions of photos but also recreated parts of the city in 3D.
Building 3D visualizations from 2D images isn’t new, of course, but the artificial intelligence technique that enables them, called Neural Radiance Fields (or NeRFs), is typically limited to reconstructing a single scene at a time.
Waymo, an autonomous driving tech firm owned by Google parent Alphabet that previously powered Google’s self-driving car project, is amplifying this technology with a technique called Block-NeRF, in which multiple NeRF reconstructions are stitched together to form an immersive virtual world.
The main attraction is a full-fledged recreation of San Francisco’s Alamo Square neighborhood, built with the help of 2.8 million photos taken by cameras on self-driving vehicles over three months.
The Alamo neighborhood. Video via Waymo
Other areas that have been visualized include Grace Cathedral, Lombard Street, the Moscone Center, the Bay Bridge and the Embarcadero, and the downtown area. These, too, are simulations generated from millions of still images.
As the researchers point out, constructing large-scale 3D environments comes with its fair share of challenges: scenes are subject to moving obstructions like cars and pedestrians, in addition to the sheer rendering demands of such scale.
To combat this, the team segmented the environment into individually trained “blocks” (giving the project the name Block-NeRF) and then stitched them back together.
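To give a rough sense of how stitching separately trained blocks might work, here is a toy sketch in Python. It is not Waymo’s implementation: the block centers, coverage radius, and constant-color “NeRFs” are all invented stand-ins. Real blocks would be neural networks queried along camera rays; here each block just returns a fixed color, and nearby blocks are blended with inverse-distance weights.

```python
import math

# Hypothetical block centers mapped to per-block "NeRF" color functions.
# Each stands in for a separately trained Block-NeRF; the colors and
# coordinates are invented for illustration.
BLOCKS = {
    (0.0, 0.0): lambda x, y: (0.9, 0.2, 0.1),   # block A: reddish scene
    (10.0, 0.0): lambda x, y: (0.1, 0.3, 0.9),  # block B: bluish scene
}

RADIUS = 8.0  # only blocks within this distance of a point contribute

def render_point(x, y):
    """Blend the colors predicted by nearby blocks, weighted by inverse
    distance to each block's center -- a simplified stand-in for how
    overlapping block reconstructions can be interpolated."""
    weights, colors = [], []
    for (cx, cy), nerf in BLOCKS.items():
        d = math.hypot(x - cx, y - cy)
        if d < RADIUS:
            weights.append(1.0 / (d + 1e-6))
            colors.append(nerf(x, y))
    total = sum(weights)
    if total == 0:
        return (0.0, 0.0, 0.0)  # point lies outside every block's coverage
    return tuple(
        sum(w * c[i] for w, c in zip(weights, colors)) / total
        for i in range(3)
    )

# Midway between the two block centers, both contribute equally,
# so the result is an even mix of the two colors.
print(render_point(5.0, 0.0))
```

Inverse-distance blending is just one simple choice here; the key idea it illustrates is that each block only needs to be valid locally, and a renderer composites whichever blocks cover the queried region.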
Waymo’s researchers note that full-scale 3D reconstructions would be especially handy for autonomous vehicles and aerial surveys. Peer at some impressive scenarios below.
Grace Cathedral. Video via Waymo
Bay Bridge. Video via Waymo
[via DesignTaxi: http://www.designtaxi.com/news/418306/AI-Builds-Virtual-Reconstruction-Of-San-Francisco-From-Millions-Of-Photos/]