
Syracuse, NY USA {Abartosh, rgu104}@syr.edu

ABSTRACT

Urban environments are composed not only of physical objects like buildings, infrastructure, and landscapes, but also of invisible yet critically influential information like traffic patterns, economic values, and energy use. This intangible overlay of quantifiable urban behavior is essential to understanding how cities function. Vast quantities of urban data are now widely available through online open source data repositories, but the raw data remains of limited value for informed decision-making unless it can be synthesized and represented in a meaningful fashion. This paper describes in-progress research exploring the spatialization and representation of urban data using virtual reality (VR). The research uses Manhattan as a test case for enabling users to access urban data immersively and interactively from multiple vantage points and scales. It describes the process of visualizing the city in VR, representing urban data three-dimensionally, and creating a user interface for data interaction within the virtual environment. The paper identifies initial steps towards creating an immersive representation of urban data that can effectively inform future urban planning initiatives and design decisions.

Author Keywords

Data visualization; urban data; mixed reality; smart cities; virtual environments; user interfaces.

ACM Classification Keywords

J.2: Physical Sciences & Engineering

1 INTRODUCTION

The industrial adoption of virtual reality technology extends beyond its obvious application for gaming and recreation.

Initial industrial implementation was in defense and medical training, but architectural practices have been quick to take advantage of the rapid rendering capabilities of VR to facilitate communication between designers, consultants, stakeholders, and clients. Management, operations, and business administration are also using VR for project demonstration and simulation. Industry-relevant applications that act as precedents for this project include the Roames project, developed by Fugro and presented at the Unite 2017 international conference for virtual reality software developers. This platform uses sophisticated artificial intelligence and cloud computing technology to map geographic space. Complex three-dimensional objects, such as buildings, bridges, poles, wires, trees, streetlights, and other urban fixtures, can be reconstructed to centimeter-level accuracy and rendered at a visible scale in the virtual world.

Intended to help facilitate managerial understanding of complex systems, the Roames platform enables users to remotely check system status and realistically simulate infrastructure and environmental conditions. In this way, users can view the status of the entire network in real time [1].

Figure 1 Fugro Roames 3D Simulation Virtual World

Roames is a data visualization and analytics platform with certain advantages that might be translated to the visualization of open source urban data. First, Roames uses a video-game interface within which a 3D virtual world can be freely navigated by mouse or controller (Figure 1). Second, the location and form of its data icons are similar to those used by Google Maps and other geographic information system (GIS) software, and the user can toggle relevant data on and off with a simple click, which facilitates large-scale information loading and layering.

Despite its sophisticated modeling and video-game-based interface, the Roames system is still intended to be used through a 2D screen, which potentially limits its communicative impact. Users interacting with 3D data visualizations within virtual reality platforms have demonstrated that immersion provides better retention of perceived relationships and more intuitive data understanding than traditional desktop visualization tools [2]. Perhaps the most pervasive VR application for urban visualization is Google Earth VR, which gives the user the ability to zoom from a global perspective down to a bird’s-eye view and, in specific locations and for limited distances, even allows the user to virtually walk through the city at street level [3]. Google Earth VR is impressive in the volume and detail of its model, but it does not integrate additional data beyond the city’s visual appearance and massing.

Other relevant work has attempted to link data visualization to VR representation, with varying degrees of success.

Notable work merging urban data with VR includes the Urban Fusion project conducted at the Future Cities Laboratory at the Singapore-ETH Centre, which has an impressive user interface that tracks hand movement and allows gradient control of data representations, but does not provide a multi-scalar view of the city [4]. Other VR urban data visualizations remove the form of the city altogether [5].

With this project the authors have created a multi-scale representation of urban data in VR that couples the geo-physical aspects of the city with the invisible but relevant urban data related to it. The representation is immersive by nature of the head-mounted VR display and is designed to be interactive using video game platform mechanics.

2 METHOD

The process of creating our test case for interactive and immersive urban data representation required modeling the city digitally, collecting and spatializing urban data, and creating a VR user interface.

2.1 City Modeling and VR Integration

Digital city models are becoming increasingly easy to find or generate thanks to the development of GIS tools such as ESRI CityEngine. Once generated, the city model was exported from 3D digital modeling software as an .fbx file.

This file was then imported into the video game engine Unity, where the open-source asset package “Virtual Reality Tool Kit” (VRTK) and SteamVR were used to visualize the model in VR. An HTC Vive system was used as the virtual reality display system. It includes a room-scale “play space” of approximately 9’ by 15’ with infrared lighthouses, a head-mounted display, and hand controllers.
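To make the modeling step concrete, the following sketch shows how a simple untextured massing model (like the one used for the aerial view described below) might be generated from building footprints. It is a minimal illustration under stated assumptions, not the project’s actual toolchain: the footprints and heights are toy data, and the shapely and trimesh libraries are this sketch’s choices.

```python
# Minimal sketch: extrude 2D building footprints into an untextured
# massing model for import into a game engine. Footprint data, heights,
# and the library choices (shapely, trimesh) are illustrative assumptions.
import trimesh
from shapely.geometry import Polygon

# (footprint polygon in meters, building height in meters): toy data
footprints = [
    (Polygon([(0, 0), (30, 0), (30, 20), (0, 20)]), 95.0),
    (Polygon([(40, 5), (70, 5), (70, 25), (40, 25)]), 60.0),
]

meshes = [trimesh.creation.extrude_polygon(poly, height=h)
          for poly, h in footprints]
city = trimesh.util.concatenate(meshes)

# trimesh exports .obj natively; the project pipeline itself used .fbx.
city.export("massing_model.obj")
```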

The VR system allows for a multi-scale view of the city: a zoomed-out aerial perspective and a full-scale street view. The initial system prototype allows the user to choose among three scenes of Manhattan: the aerial overview, a street view of the Columbus Circle neighborhood, and a street view of the World Trade Center area. Different degrees of modeling complexity were implemented with the change in scale between scenes. Due to both computer processing constraints and the need for a simpler backdrop for data visualization when zoomed out, the aerial overview used a simple untextured massing model to represent the physical aspects of the city. For the street view scenes, however, more detailed modeling of the surrounding buildings was both visually appropriate and necessary for user orientation (Figure 2).

2.2 Urban Data

Open source urban data, like that found on the NYC Open Data website [x], is often one-dimensional in nature: a collection of numerical values without a compelling or obvious mode of graphic interpretation. Data that is mapped two-dimensionally, like most of the data from OpenStreetMap (OSM) or GIS, is more visually interpretable, but is still subject to the limitations and biases of Cartesian logic. The representation of such data in a 3-dimensional virtual environment requires that the data be “spatialized,” or given additional dimensionality. This data spatialization required translation and interpretation by the design team. Tested means of spatializing the data for 3D visualization included piping, extrusion, point clouds, and point markers. Spatialized data included public transportation locations (subway and metro), noise levels, and energy use by block.

Figure 2 Diagram of workflow from digital model to data-integrated virtual urban environment
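As one concrete example of such spatialization, the sketch below aggregates scattered 2D samples (e.g., noise readings) into a regular grid whose per-cell values can drive both the height and the color of a topographic surface. The gridding scheme and the numpy-based implementation are assumptions of this sketch, not the authors’ exact method.

```python
# Sketch: average scattered (x, y, value) samples into an n x n grid;
# each cell's mean value can drive both the height and the color of a
# 3D "height map" surface. The gridding scheme is an assumption.
import numpy as np

def height_map(points, values, n=100):
    """Bin scattered (x, y) samples into an n x n grid of mean values."""
    pts = np.asarray(points, dtype=float)
    vals = np.asarray(values, dtype=float)
    # Normalize coordinates to integer grid indices in [0, n - 1].
    span = np.maximum(np.ptp(pts, axis=0), 1e-9)
    idx = ((pts - pts.min(axis=0)) / span * (n - 1)).astype(int)
    total = np.zeros((n, n))
    count = np.zeros((n, n))
    np.add.at(total, (idx[:, 1], idx[:, 0]), vals)
    np.add.at(count, (idx[:, 1], idx[:, 0]), 1)
    # Mean value per cell; cells with no samples stay at zero height.
    return np.divide(total, count, out=np.zeros_like(total), where=count > 0)

# Example: three noise readings at (x, y) positions in meters, values in dB.
grid = height_map([(0, 0), (50, 80), (51, 82)], [55.0, 70.0, 72.0], n=10)
```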

2.3 User Interface (UI)

Using precedent UI systems, like that of Google Earth VR, as a model, the user interface was designed to take advantage of the video game inclination of Unity to create an interactive UI that allows the user to toggle data sets on and off to create novel overlays of information. Two key aspects of an effective UI are that it reduces user memory load (the effort the user must repeatedly expend to understand the interface) and that it supports clear interaction with the “game’s” intent. User memory load was kept low through hand controller button labels visible in the virtual environment and a menu/instruction panel that could be accessed at the touch of a button (Figure 4). Users toggle data sets on and off by pointing a laser-pointer-like extension from the end of one of the hand controllers at menu options and pulling the trigger to select. Similar mechanics are used to move between the aerial perspective and the two street view scenes.
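The state logic behind this toggling is engine-independent and can be sketched compactly. The class below is a hypothetical stand-in for the VRTK-wired menu, not code from the project: a trigger pull on a menu entry flips that data layer’s visibility.

```python
# Engine-agnostic sketch of the menu's toggle logic; in the project this
# behavior was wired through VRTK in Unity. Layer names are examples.
class DataLayerMenu:
    def __init__(self, layers):
        self.visible = {name: False for name in layers}

    def on_trigger_pull(self, layer):
        """Called when the controller's laser pointer selects a menu entry."""
        self.visible[layer] = not self.visible[layer]
        return self.visible[layer]

menu = DataLayerMenu(["subway", "noise", "energy_by_block"])
menu.on_trigger_pull("noise")            # noise overlay on
menu.on_trigger_pull("energy_by_block")  # energy overlay on, layered with noise
```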

Figure 4 Controller labels are part of the user interface

3 ITERATION & IMPLICATIONS

Varying data spatialization methods were tested for graphic clarity and comprehension. Traffic noise, for example, was visualized as a height map, in which noise volume was represented by both the color and the height of the resulting topographic surface (Figure 3). Extrusion, piping, and height map visualization methods worked sufficiently well in the aerial perspective. However, there is no uniform standard for the 3D representation of urban data, which makes user interpretation of the data difficult to predict. After the initial prototyping phase, iterative evolution of the data representations showed that those with both color and formal differentiation gave mappable data the most visual clarity in the aerial view. By adding transparency, multiple data sets could be layered on one another, adding novel comparative possibilities while remaining individually coherent (Figure 5).
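The layering described above behaves like standard alpha compositing; a minimal sketch of the “over” operator illustrates why two semi-transparent data layers remain individually legible.

```python
# Sketch: standard "over" alpha compositing of two semi-transparent data
# overlays. Colors are (r, g, b, a) tuples with components in [0, 1].
def over(top, bottom):
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta + ba * (1.0 - ta)
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1.0 - ta)) / a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a)

# A half-transparent red layer over a half-transparent blue layer: both
# hues contribute to the result, so each data set stays visually legible.
print(over((1, 0, 0, 0.5), (0, 0, 1, 0.5)))  # (0.667, 0.0, 0.333, 0.75)
```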

Figure 5 Layers of data visualized as overlays on the city

However, at street level, the same data visualization methods resulted in misinterpretation of the data as architectural objects. Therefore, a new dashboard method was developed for data visualization in the full-scale perspectives. This dashboard is linked to the data in a manner that allows color and indicator changes tied to the user’s position in space. Even though the data itself is static, the user’s positional relation to it changes, and visualizing this relation produces a dynamic display (Figure 6).
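The position-driven readout can be sketched as a lookup keyed to the user’s location. The nearest-sample strategy, sample data, and color thresholds below are illustrative assumptions rather than the project’s implementation.

```python
# Sketch of the dashboard's position-driven readout: the data samples are
# static, but the displayed value and indicator color track the user.
import math

# (x, y) position in meters -> noise level in dB; toy data.
samples = {(0.0, 0.0): 55.0, (100.0, 0.0): 72.0, (0.0, 100.0): 63.0}

def dashboard_reading(user_xy):
    nearest = min(samples, key=lambda p: math.dist(p, user_xy))
    value = samples[nearest]
    color = "red" if value > 70 else "yellow" if value > 60 else "green"
    return value, color

print(dashboard_reading((90.0, 10.0)))  # -> (72.0, 'red')
```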

Figure 3 Traffic noise in ArcGIS, Rhinoceros & Unity3D

Figure 6 Street Level Perspective Data Dashboard

Additionally, data representations were limited by the rendering capacity of the program and the computer. Overly complex geometries, or representations with high face counts or heavy material textures, would degrade rendering in VR and result in a flickering image inside the headset, which could be detrimental to the user’s experience. Special attention was thereafter given to optimizing the performance of both the city and data models.
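One simple discipline this optimization implies is a per-scene face budget. The sketch below, using an assumed threshold and assumed face counts, flags the heaviest models as candidates for simplification.

```python
# Sketch: per-scene triangle budgeting to protect the headset frame rate.
# The budget value and the model face counts are illustrative assumptions.
SCENE_FACE_BUDGET = 1_000_000  # triangles; would be tuned per GPU and headset

models = {
    "city_massing": 420_000,
    "noise_height_map": 680_000,
    "subway_markers": 35_000,
}

total = sum(models.values())
if total > SCENE_FACE_BUDGET:
    # List models heaviest-first as decimation or retexturing candidates.
    for name, faces in sorted(models.items(), key=lambda kv: -kv[1]):
        print(f"consider simplifying {name}: {faces:,} faces")
else:
    print(f"scene within budget: {total:,} / {SCENE_FACE_BUDGET:,} faces")
```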

Lastly, while the full-scale capacity of the HTC Vive allows user movement to be matched one-to-one between the physical play space and the virtual environment, the scale of the city neighborhood was so large that users defaulted to “teleporting” through the environment virtually while standing fairly still in the physical space. Such observations suggest that the perception of extent, or the understanding of potential travel distance, within the 1:1 scale experience is undermined by the fact that the user is not actually moving their body in a manner that replicates a city pedestrian.

4 CONCLUSION

This paper presents in-progress research aiming to contribute to a body of explorations conducted collaboratively by architects, engineers, urban planners, and computer scientists to represent, understand, and engage the urban condition through virtual modeling and simulation.

This specific project used the gaming engine Unity3D as a platform for providing both VR visualization and an interactive user interface. Key contributions of the work in its present state include a multi-scalar visualization of data for both aerial and street-view perspectives and an exploration of methods for translating data from 2D to 3D.

Future work will apply and test this representation tool as an urban planning communication device between designers and community leaders. Next iterations will add more data sets related to energy and environmental conditions and experiment with in situ augmented reality data representations which would blend the 1:1 street scale data representations with the actual street.

ACKNOWLEDGMENTS

Funding and support for this project were provided by the Syracuse Center of Excellence and the Syracuse University School of Architecture.

REFERENCES

1. Roames. Fugro. Accessed December 1, 2018. https://www.fugro.com/our-services/asset-integrity/roames-power.

2. C. Donalek et al., "Immersive and collaborative data visualization using virtual reality platforms," Proc. IEEE Int. Conf. Big Data (Big Data), USA, pp. 609-614, Oct. 2014.

3. Joanna Kim. "Get a Closer Look with Street View in Google Earth VR." Google, September 14, 2017. Accessed October 22, 2018. https://www.blog.google/products/earth-vr/get-closer-look-street-view-google-earth-vr/.

4. Jan Perhac, Wei Zeng, Shiho Asada, Stefan Mueller Arisona, Simon Schubiger, Remo Burkhard, and Bernhard Klein. "Urban Fusion: Visualizing Urban Data Fused with Social Feeds via a Game Engine." 2017 21st International Conference Information Visualisation (IV), 2017. doi:10.1109/iv.2017.33.

5. Al Bondakji, Louna, Anna-Maria Chatzi, Minoo Heidari Tabar, Lisa-Marie Wesseler, and Liss C. Werner. "VR-visualization of High-dimensional Urban Data." Proceedings of the 2018 Symposium on Simulation for Architecture & Urban Design (SimAUD), 2018.
