Building Virtual Worlds, Part 5

Asynchronous loading is finally working…

After about a month, I’ve finally got the loading of 3D content into the data cache working asynchronously. It took a lot of work to get right, as it had to be implemented using double message queues: background workers run on a separate thread while the main thread continues with the rendering. Once the 3D content is loaded into the main cache, a message gets pushed back to the rendering thread signalling that the new block of buildings is ready for display.
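
For anyone wondering how the double message queue pattern fits together, here’s a rough sketch of the idea. The queue class and the LoadRequest/LoadResult message types are placeholders rather than the actual engine code:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <string>

// A thread-safe message queue shared between the render and loader threads.
template <typename T>
class MessageQueue {
public:
    void push(T msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    // Non-blocking pop used by the render thread so it never stalls.
    std::optional<T> tryPop() {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return std::nullopt;
        T msg = std::move(q_.front()); q_.pop();
        return msg;
    }
    // Blocking pop used by the background loader thread.
    T waitPop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T msg = std::move(q_.front()); q_.pop();
        return msg;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

struct LoadRequest { std::string tileUri; };                      // hypothetical message types
struct LoadResult  { std::string tileUri; /* plus mesh data */ };

MessageQueue<LoadRequest> requests;   // render thread -> loader thread
MessageQueue<LoadResult>  results;    // loader thread -> render thread

void loaderThreadMain() {
    for (;;) {
        LoadRequest req = requests.waitPop();
        // ...fetch and parse the tile, build the mesh data in CPU memory...
        results.push(LoadResult{req.tileUri});
    }
}

void renderFrame() {
    // Drain any completed loads without blocking, then draw as normal.
    while (auto done = results.tryPop()) {
        // ...create the GPU buffers for done->tileUri and add it to the scene...
    }
    // ...render the frame...
}
```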

It’s a pity that the YouTube clip ends up so blurred, but it might be that it doesn’t like my 21:9 aspect ratio monitor. The original clip is 2560×1080 pixels, so it started out OK.

Now that I’ve got the message queues sorted out I’m going to go back and revisit some of the rendering. The Earth textures also need to load on demand, as you can see from how I’ve compromised the level of detail above. Also, the buildings don’t yet load into buffers on the background thread. The SSD in my machine is so fast that it looks like they’re loading from RAM when they’re actually coming from the disk. Running on my iMac at work shows what a big difference it makes.


3D Buildings

I’ve been experimenting with 3D buildings in my virtual globe project and it’s now progressed to the point where I can demonstrate it working with dynamically loaded content. The following YouTube clip shows the buildings for London, along with the Thames. I didn’t have the real heights, so buildings are extruded up by a random amount and, unfortunately, so is the river, which is why it looks a bit strange. The jumps as it zooms in and out are me flicking the mouse wheel button, but the YouTube upload seems to make this, and the picture quality, a lot worse than the original:

Performance is an interesting comparison, as these movies were all made on my home computer rather than my iMac in the office. The green numbers show the frame rate. My machine is a lot faster as it’s using a Crucial SSD, so the dynamic loading of the GeoJSON files containing the buildings is fast enough to run in real time. The threading and asynchronous loading of tiles hasn’t been completed yet, so, when new tiles are loaded, the rendering stalls briefly.

On-demand loading of building tiles is a big step up from using a static scene graph. The way this works is to calculate the ground point that the current view is over and render a square of nine tiles centred on the viewer’s ground location. Calculation of latitude, longitude and height from 3D Cartesian coordinates is an interesting problem that ends up having to use the Newton-Raphson approach. This still needs some work, as it’s obvious from the movies that not enough content ahead of the viewer is being drawn. As the view moves, the 3×3 grid of ground tiles is shuffled and any new tiles that are required are loaded into the cache.
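
For reference, the conversion from Cartesian coordinates back to latitude, longitude and height on the WGS84 spheroid looks something like the sketch below, using the common iterative scheme (a Newton-Raphson update can be dropped into the same loop); the function and constants here are illustrative rather than the code in the engine:

```cpp
#include <cmath>

struct Geodetic { double latRad, lonRad, height; };

// Convert an Earth-centred Cartesian point (metres) to geodetic coordinates.
Geodetic ecefToGeodetic(double x, double y, double z) {
    const double a  = 6378137.0;            // WGS84 semi-major axis (m)
    const double f  = 1.0 / 298.257223563;  // WGS84 flattening
    const double e2 = f * (2.0 - f);        // first eccentricity squared

    double lon = std::atan2(y, x);
    double p   = std::sqrt(x * x + y * y);
    double lat = std::atan2(z, p * (1.0 - e2));  // initial guess
    double h   = 0.0;

    for (int i = 0; i < 10; ++i) {               // converges in a few iterations
        double sinLat = std::sin(lat);
        double N = a / std::sqrt(1.0 - e2 * sinLat * sinLat);  // prime vertical radius
        h   = p / std::cos(lat) - N;
        lat = std::atan2(z, p * (1.0 - e2 * N / (N + h)));
    }
    return {lat, lon, h};
}
```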

Working on the principle that tiles are going to be loaded from a server, I’ve had to implement a data cache based on the file URI (just like MapTubeD does). When tiles are requested, the GeoJSON files are moved into the local cache, loaded into memory, parsed, triangulated using Poly2Tri, extruded and converted into a 3D mesh. Based on how long the GeoJSON loading is taking on my iMac, a better solution is to pre-compute the 3D geometry to take the load off the display software. At the moment I’m using a Java program I created to make vector tiles (GeoJSON) out of a shapefile for the southeast of England. I’ve assumed the world to be square (-180,-180 to 180,180 degrees) and then cut the tiles using a quadtree system so that they’re square in WGS84. Although this gives me a resolution problem and non-square 3D tiles, it works well for testing. The next step is to pre-compute the 3D content and thread the data loading so it works at full speed.
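
As a rough illustration of the quadtree addressing, the sketch below maps a longitude and latitude to tile indices under the same square-world assumption and builds a Z_X_Y style key for the cache filename (the function names and key format are just for illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <string>

// Map a longitude/latitude to (x, y) tile indices at zoom level z, treating the
// world as a square from -180 to +180 degrees on both axes so tiles stay square.
void lonLatToTile(double lonDeg, double latDeg, int z, int& tx, int& ty) {
    int n = 1 << z;                              // tiles per axis at this zoom level
    double u = (lonDeg + 180.0) / 360.0;         // 0..1 from west to east
    double v = (180.0 - latDeg) / 360.0;         // 0..1 from top to bottom of the square world
    tx = std::min(n - 1, std::max(0, (int)std::floor(u * n)));
    ty = std::min(n - 1, std::max(0, (int)std::floor(v * n)));
}

// Build a Z_X_Y key that can be used for the cache filename or tile request.
std::string tileKey(int z, int tx, int ty) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%d_%d_%d", z, tx, ty);
    return std::string(buf);
}
```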

Finally, this was just something fun I did as another test. Earth isn’t the only planet; other planets are available (and you can download the terrain maps)…


Tiling the Blue Marble

Following on from my last post about level of detail and Earth spheroids, here is the NASA Blue Marble texture applied to the Earth:

The top level texture is the older composite Blue Marble, which shows blue oceans and a very well-defined North Pole ice sheet. Once the view zooms in, all subsequent levels of detail show the next generation Blue Marble from January 2004 with topography [link]. Incidentally, the numbers on the texture are the tile numbers in the format Z_X_Y, where Z is the zoom level, so the higher the number, the more detailed the texture map. The green lines show the individual segments and textures used to make up the spheroid.

In order to create this, I’ve used the eight Blue Marble tiles, which are each 21,600 pixels square, resulting in a full resolution texture of 86,400 x 43,200 pixels. Rather than try to handle this all in one go, I’ve added the concept of “super-tiles” to my Java tiling program. The eight 21,600-pixel Blue Marble squares are the “super-tiles”, which themselves get tiled into a larger number of 1024-pixel quadtree squares that are used for the Earth textures. The Java class that I wrote to do this can be viewed here: ImageTiler.java. As you can probably see from the GitHub link, this is part of a bigger project which I was originally using to condition 3D building geometry for loading into the globe system. You can probably guess from this what the chunked LOD algorithms are going to be used for next.
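
To give an idea of the bookkeeping involved, here is a small sketch of how a global pixel coordinate maps onto one of the eight super-tiles and an offset inside it; the resampling into 1024-pixel output tiles (which can straddle super-tile edges) is left out, and the names are illustrative rather than taken from ImageTiler.java:

```cpp
#include <cstdio>

const int SUPER_SIZE = 21600;          // super-tile edge length in pixels
const int SUPER_COLS = 4;              // the eight Blue Marble squares are
const int SUPER_ROWS = 2;              // arranged four across by two down

struct SuperTileRef {
    int col, row;   // which of the eight super-tiles to open
    int px, py;     // pixel offset within that super-tile
};

// Locate a global pixel coordinate (0..86399, 0..43199) in the super-tile grid.
SuperTileRef locate(long long gx, long long gy) {
    SuperTileRef r;
    r.col = (int)(gx / SUPER_SIZE);
    r.row = (int)(gy / SUPER_SIZE);
    r.px  = (int)(gx % SUPER_SIZE);
    r.py  = (int)(gy % SUPER_SIZE);
    return r;
}

int main() {
    // Global pixel (30000, 25000) falls in super-tile column 1, row 1.
    SuperTileRef r = locate(30000, 25000);
    std::printf("super-tile (%d,%d), offset (%d,%d)\n", r.col, r.row, r.px, r.py);
    return 0;
}
```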

Finally, one thing that has occurred to me is that tiling is a fundamental algorithm. Whether it’s cutting a huge texture into bits and wrapping it around a spheroid, or projecting 2D maps onto flat planes to make zoomable maps, the need to reduce detail to a manageable level is always there. Even the 3D content isn’t immune from tiling, as we end up cutting geometry into chunks and using quadtree or octree algorithms. Part of the reason for this rests with the new graphics cards, which mean that progressive mesh algorithms like ROAM (Duchaineau et al.) are no longer effective. Those algorithms spent CPU cycles optimising a mesh before passing it on to the graphics card; with modern GPUs, spending a lot of CPU time on a small improvement to the mesh before sending it to a powerful graphics card doesn’t produce a significant speed-up. Chunked LOD works better, with blocks of geometry being loaded in and out of GPU memory as required. Add to this the fact that we’re working with geographic data and spatial indexing systems all the time, and solutions to the level of detail problem start to present themselves.
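
As a rough sketch of what the chunked LOD selection looks like: each quadtree node carries a block of pre-built geometry plus a geometric error, and a node is drawn when its projected screen-space error is small enough, otherwise its children are visited. The structure and error test below are illustrative, not the engine’s actual code:

```cpp
#include <cmath>
#include <vector>

struct Chunk {
    double centre[3];            // chunk centre in world space
    double geometricError;       // max deviation of this LOD from full detail (m)
    std::vector<Chunk> children; // empty for leaf chunks
    // ...plus a handle to the GPU buffers holding this chunk's mesh...
};

double distanceTo(const Chunk& c, const double eye[3]) {
    double dx = c.centre[0] - eye[0];
    double dy = c.centre[1] - eye[1];
    double dz = c.centre[2] - eye[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Draw the chunk if its screen-space error is below the tolerance, else recurse.
void renderChunk(const Chunk& c, const double eye[3],
                 double screenFactor, double maxPixelError) {
    double d = distanceTo(c, eye);
    double pixelError = screenFactor * c.geometricError / (d > 1.0 ? d : 1.0);
    if (pixelError <= maxPixelError || c.children.empty()) {
        // ...issue the draw call for this chunk's buffers...
    } else {
        for (const Chunk& child : c.children)
            renderChunk(child, eye, screenFactor, maxPixelError);
    }
}
```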


Links:

NASA Blue Marble: http://visibleearth.nasa.gov/view_cat.php?categoryID=1484

Image Tiler: https://github.com/maptube/FortyTwoGeometry/blob/master/src/fortytwogeometry/ImageTiler.java

ROAM paper: http://dl.acm.org/citation.cfm?id=267028 (and http://www.cognigraph.com/ROAM_homepage/)

Programmable Maps

GeoGL_Tube

I’ve been getting increasingly frustrated with the tools available to visualise the tube, bus and train data I’ve been collecting, so I’ve ended up creating my own. If you’re wondering why some of the lines don’t have any tubes in the above diagram, it’s because I’ve got the speed set very high and I’m not picking the new route correctly when there is a choice.

The image above is an OpenGL visualisation in C++, built on my experience with the AgentScript (2D) and Three.js (3D) browser-based visualisations. Essentially, I wanted something that would allow me to create the animations that I’ve been using 3DS Max for, but in a much simpler way. The following is a bus animation that I built for a recent presentation:

This was created using 3DS Max, with some custom MaxScript code to load the bus positions and create the animation key frames. The main problem with this is the scale of the data, which is why I had to limit it to between 09:00 and 12:00. Art tools generally don’t like to handle this quantity of data and I’ve also had issues with packages like Unity and Lumion.

Increasingly, I’ve been moving towards the idea of “Programmable Maps”, where the visualisation is built through a series of stages which load the data, apply behaviours to the elements of the scene that move, and produce an impressive 2D or 3D visualisation with advanced lighting or tilt-shift in the same way as ViziCities. The use of APIs and third-party libraries to obtain the real-time data, along with the temporal aspect, makes it very difficult to fit this type of visualisation into a conventional GIS framework.
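
Sketched very roughly, the staged idea looks something like the code below, with a data-loading stage, a behaviour stage for the moving elements and a rendering stage; the interfaces are hypothetical and only meant to show the shape of the pipeline:

```cpp
#include <memory>
#include <vector>

struct Scene;  // holds the moving elements (tubes, buses, trains) and static map data

// One step in the visualisation pipeline.
struct Stage {
    virtual ~Stage() = default;
    virtual void apply(Scene& scene, double timeSeconds) = 0;
};

struct LoadRealTimeData : Stage {      // e.g. query a data API every few minutes
    void apply(Scene& scene, double t) override { /* ...fetch and update positions... */ }
};

struct MoveAgents : Stage {            // apply behaviours: interpolate / forecast movement
    void apply(Scene& scene, double t) override { /* ...advance the agents... */ }
};

struct Render : Stage {                // draw the 2D or 3D output
    void apply(Scene& scene, double t) override { /* ...render the frame... */ }
};

// The "programmable map" is then just the ordered list of stages run every frame.
void runFrame(std::vector<std::unique_ptr<Stage>>& pipeline, Scene& scene, double t) {
    for (auto& stage : pipeline) stage->apply(scene, t);
}
```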

The example above is built around a C++ and OpenGL graphics engine, but one that is linked with geospatial libraries so it’s more than just a game engine rendering 3D assets as artwork. The experience with the Xbox tubes demo (C#, XNA) and the Chrome Three.js example showed that it’s a nightmare to get the geometry in the correct place and orientation unless it’s properly georeferenced. Working with the live tube data, where the API can only be queried every 3 minutes, leads to a real-time visualisation where positions are effectively being forecast between data updates. Putting all this together results in a requirement for a geospatially aware graphics engine linked with an agent-based modelling package that allows us to code behaviours for the elements that are moving.
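
The forecasting between updates amounts to dead reckoning along the known route: the last update gives a position and an estimated speed, and the renderer advances the vehicle each frame until the next update arrives and resets it. A minimal sketch, with illustrative types:

```cpp
#include <algorithm>

struct Vec3 { double x, y, z; };

struct TubeState {
    Vec3   fromStation;     // station behind the last reported position
    Vec3   toStation;       // next station on the route
    double linkLengthM;     // length of the link between the two stations (m)
    double distAlongLinkM;  // distance travelled along the link at the last update
    double speedMps;        // speed estimated from the last two updates
};

// Advance the forecast position by dt seconds; called every frame until the
// next API update arrives and overwrites the state.
Vec3 forecastPosition(TubeState& s, double dt) {
    s.distAlongLinkM = std::min(s.linkLengthM, s.distAlongLinkM + s.speedMps * dt);
    double t = (s.linkLengthM > 0.0) ? s.distAlongLinkM / s.linkLengthM : 0.0;
    return { s.fromStation.x + t * (s.toStation.x - s.fromStation.x),
             s.fromStation.y + t * (s.toStation.y - s.fromStation.y),
             s.fromStation.z + t * (s.toStation.z - s.fromStation.z) };
}
```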

The programmable maps part doesn’t really come into play until you increase the level of sophistication and start to layer additional levels of processing. For example, bus positions are calculated based on arrival times at the next stop. This is a graph technique where you interpolate the time between nodes, but, in order to visualise the positions correctly, the interpolated position along the link needs to be applied to a road network to find the real position on the ground. Otherwise you get buses driving through the river instead of using the bridges.
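
A minimal sketch of that last step: take the fraction of the way along the link given by the graph interpolation and walk it along the road polyline joining the two stops, so the bus follows the road (and the bridge) rather than the straight line between stops. The types and function are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point2 { double x, y; };

// Walk 'fraction' (0..1) of the total length along the polyline of road points.
Point2 pointAlongRoad(const std::vector<Point2>& road, double fraction) {
    double total = 0.0;
    for (size_t i = 1; i < road.size(); ++i)
        total += std::hypot(road[i].x - road[i - 1].x, road[i].y - road[i - 1].y);

    double target = fraction * total;
    for (size_t i = 1; i < road.size(); ++i) {
        double seg = std::hypot(road[i].x - road[i - 1].x, road[i].y - road[i - 1].y);
        if (target <= seg || i == road.size() - 1) {
            double t = (seg > 0.0) ? std::min(1.0, target / seg) : 0.0;
            return { road[i - 1].x + t * (road[i].x - road[i - 1].x),
                     road[i - 1].y + t * (road[i].y - road[i - 1].y) };
        }
        target -= seg;
    }
    return road.empty() ? Point2{0.0, 0.0} : road.back();
}
```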

What I’m describing is a workflow for geospatial data to go from the raw data through to visualisation, using library building blocks and web services where appropriate. There is one final trick to this approach, though, as we could use it to make a visualisation directly from a NetLogo agent-based model. A while ago I showed how to run a NetLogo program inside a Java program and capture the positions of the agents, which can then be loaded into 3DS Max. Exactly the same thing could be applied here, with a NetLogo simulation driving the 3D engine.