The Swörd - January 2009

Copyright notice: The screenshots on this page show a number of textures that are borrowed from various commercial projects, including Jedi Knight II, Quake 3 Arena, and Doom 3. All of these textures are the property of their respective owners, and are used here for non-commercial testing purposes only.

Building Better Worlds

I'll admit to a certain degree of obsession concerning the technology that id Software keeps churning out, ever since the days of the original Doom. In particular, I became very fascinated with the way that id chose to build levels for the Quake series of games, a workflow that continued in the Doom 3 engine as well as the Half-Life series of games from Valve Software.

For me, using Constructive Solid Geometry to build 3d levels is a no-brainer, simply because it makes logical sense to build with solids instead of surfaces (i.e. polygons). Lots of people disagree with me here, from artists to heads-of-programming at big game companies, but both id and Valve seem to prefer this method. Given these companies' success commercially, not to mention with me personally, I lean towards following their example.

I have been dabbling in 3d since my days at Massive Entertainment, but since my responsibilities there didn't directly include low-level 3d programming, I never fully immersed myself in the field. It wasn't until I found myself the only programmer at Jungle Peak Studios in 2006 that I made a concentrated effort to learn DirectX9 and HLSL. In the process I found that a lot had changed since consumer-grade 3d accelerators became available around 1996-1997.

At Jungle Peak we built an editor that enabled us to create worlds quite reminiscent of the stuff I'd been involved with at Massive, i.e. heightfield-based outdoor maps, but with the twist of using mesh-based cliffs similar to what Blizzard Entertainment did for Warcraft III. Working on this I found that I could get away with batching lots and lots of static geometry on the GPU, sorted by shader. Since draw calls were the major bottleneck (as I'd always been taught), we did some really cool stuff with instances of trees, for example: we skewed the bottoms of the models (the roots) on a per-instance basis to follow the ground, creating a nice organic look, and then just baked all of those meshes together into a single giant world-wide mesh.

Since then I've been very interested in doing some kind of indie 3d project myself, but several things have constantly been holding me back. I did lots of work, following in the footsteps of what I'd done at Jungle Peak, based on generating 3d worlds from a simpler dataset (data amplification). Most of it was based around a 2d grid, and as a preprocessing step the world mesh would be generated and cached.

In a regular grid environment you can get pretty good results with simple vertex-based lighting, since you have very good control over the regularity of the vertices. With everything equally spaced, things look good. I did a lot of work on pre-baked lighting with occlusion, and this got me closer to the look of games like Quake. Eventually however, I started feeling limited by the grid-feel of the levels, and my interest in the technology waned.

Since then I've realized that what I really wanted was to have a toolset that resembled Quake 3 Arena. I've probably been working towards that goal for the better part of a year, starting off with axis-aligned boxes, and then slowly moving towards the arbitrary convex polytopes that Quake uses. It's been a long and sometimes very frustrating experience, but now I feel like I'm very close to having a good basis for future game projects.

Of course, I'm the first guy to caution against building "engines" and "technology for the sake of technology", but in this case I really needed to learn a whole bunch of math and basically really polish up my 3d skills to make this happen.

From Axis-Aligned Boxes to Convex Polytopes

I started out with a simple brush editor very much like Q3Radiant, just using axis-aligned boxes. Even here there was a lot of work involved in getting the editor up and running, implementing all the tools I wanted, etc. I liked the automatic world-space texture mapping that Quake uses, and once that was implemented I got the feel of being able to build with "solid blocks of material".
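
As an aside, below is a minimal sketch of this kind of world-space mapping as I understand the general technique; the names (Vec3, worldSpaceUV, texelsPerWorldUnit) are illustrative rather than anything from the actual editor.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Derive uv coordinates directly from world position by projecting onto
    // the axial plane that the face normal points along the most. Faces that
    // share a material thus tile seamlessly, regardless of brush layout.
    void worldSpaceUV(const Vec3& position, const Vec3& normal,
                      float texelsPerWorldUnit, float& u, float& v)
    {
        const float ax = fabsf(normal.x);
        const float ay = fabsf(normal.y);
        const float az = fabsf(normal.z);
        if (ax >= ay && ax >= az) { u = position.y; v = position.z; }
        else if (ay >= az)        { u = position.x; v = position.z; }
        else                      { u = position.x; v = position.y; }
        u *= texelsPerWorldUnit;
        v *= texelsPerWorldUnit;
    }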

I had some basic light tracing stuff from earlier projects, and implementing collision detection with axis-aligned boxes wasn't all that hard, so basic vertex lighting wasn't far off. However, as you can see in the screenshot above, vertex lighting in this context showed a lot of artifacts, simply because there were fewer vertices than before, and the ones that were there weren't uniformly spaced.

I didn't want to get in completely over my head, so I opted to continue using vertex lighting and instead look into other ways of tessellating the world. Obviously, placing boxes on top of boxes led to situations with lots of hidden surfaces that I didn't really need. I wanted to get rid of those to keep down the total vertex and triangle counts, as well as get the tessellation to the point where there would always be vertices where boxes met, hoping that this would improve the lighting situation.

What followed was one of the harder aspects of the whole thing, namely tessellation of the box sides. This eventually led me to discover Delaunay triangulation, which became a whole side project. Around the same time, I had started messing around with another goal, namely using convex polytopes instead of axis-aligned boxes. Both of these areas of research were quite difficult, but in the end the Delaunay stuff and the polytope stuff came together faster than the axis-aligned tessellation, simply because the polytopes forced me to think more in terms of arbitrary planes and local coordinate systems on those planes.
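
For reference, the predicate at the heart of Delaunay triangulation is the in-circumcircle test; a sketch of it in plain doubles (a robust implementation would want exact arithmetic) looks like this:

    struct Vec2 { double x, y; };

    // Returns true if d lies strictly inside the circumcircle of the
    // counter-clockwise triangle (a, b, c): the classic 3x3 determinant,
    // with all points translated so that d sits at the origin.
    bool inCircumcircle(const Vec2& a, const Vec2& b, const Vec2& c,
                        const Vec2& d)
    {
        const double ax = a.x - d.x, ay = a.y - d.y;
        const double bx = b.x - d.x, by = b.y - d.y;
        const double cx = c.x - d.x, cy = c.y - d.y;
        const double det =
            (ax * ax + ay * ay) * (bx * cy - by * cx) -
            (bx * bx + by * by) * (ax * cy - ay * cx) +
            (cx * cx + cy * cy) * (ax * by - ay * bx);
        return det > 0.0;  // assumes (a, b, c) is counter-clockwise
    }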

From Vertex Lighting to Lightmaps

As things stand today, I'm reasonably happy with how the tessellation is working out. There are still issues related to getting rid of all hidden surfaces, but currently I'm working around those by having a special material that marks a brush (polytope) side as "nodraw". But I'm getting ahead of myself.

However, by this point it had become obvious that vertex lighting wasn't going to be good enough, unless I tried some kind of insane tessellation tricks to get the lighting samples uniform enough across surfaces. The vertex counts involved seemed like a bad idea, so I started looking into what lightmaps would entail.

What worried me the most was how the whole shader setup would work. Since I basically needed a unique texture per triangle for the lighting, I was worried that I would end up with way too many textures. Luckily, I had earlier found a site detailing how to pack arbitrary rectangles onto a square using a 2d bsp tree, and when I implemented that I was surprised at how well the packing worked. Of course, my luxel-per-world-unit ratio is much lower than my texel-per-world-unit ratio, so in the end all the lightmaps for a mid-sized level easily fit into a 256x256 texture; at this point I'm no longer worried about that.
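
The packing scheme is simple enough to sketch: each node of the 2d bsp covers a free rectangle, and inserting a rectangle either fills a leaf exactly or splits the leaf into the used part and the remainder. The code below is an illustrative reconstruction of that idea, not the actual implementation.

    #include <memory>

    // One node of the 2d bsp used for lightmap packing.
    struct PackNode {
        int x, y, w, h;                     // free region covered by this node
        bool used = false;                  // leaf already holds a rectangle?
        std::unique_ptr<PackNode> child[2]; // the two halves after a split

        PackNode(int x_, int y_, int w_, int h_) : x(x_), y(y_), w(w_), h(h_) {}

        // Try to place a rw x rh rectangle; returns where it landed, or null.
        PackNode* insert(int rw, int rh)
        {
            if (child[0]) {                 // interior node: try both halves
                if (PackNode* n = child[0]->insert(rw, rh)) return n;
                return child[1]->insert(rw, rh);
            }
            if (used || rw > w || rh > h) return nullptr;
            if (rw == w && rh == h) { used = true; return this; }

            // Split along the axis with the most leftover space; the
            // rectangle ends up in child[0], the remaining space in child[1].
            if (w - rw > h - rh) {
                child[0] = std::make_unique<PackNode>(x, y, rw, h);
                child[1] = std::make_unique<PackNode>(x + rw, y, w - rw, h);
            } else {
                child[0] = std::make_unique<PackNode>(x, y, w, rh);
                child[1] = std::make_unique<PackNode>(x, y + rh, w, h - rh);
            }
            return child[0]->insert(rw, rh);
        }
    };

From there, packing a level is just a root node covering the whole texture (say 256x256) and one insert() call per lightmap, ideally largest-first.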

The hardest thing about the lightmaps was figuring out a good way to uv-map the various brushsides. I had been doing a lot of local-space work (on the side's plane) concerning tessellation, and had some good tools for that, so that was my first approach. The problem was that angled (non-axis-aligned) brushsides would get mapped diagonally onto the lightmap, making really bad use of texture real-estate as well as showing significant filtering artifacts.

The second approach (which is currently in use) is similar to the world-space texture mapping mentioned earlier. I basically figure out the closest axial plane for a brushside (based on its normal), and map the surface to the lightmap as projected onto that plane (figuring out the size in 2d). To get from there back to 3d (in order to position the actual patches to trace for lighting), I simply project the axial points onto the brushside again.
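
To make the round trip concrete, here is a sketch of the back-projection step, assuming planes are stored as n.p = d (the names are illustrative):

    #include <cmath>

    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 n; float d; };  // points p on the plane satisfy n.p = d

    // Given lightmap-space (u, v) on the closest axial plane of a brushside,
    // recover the corresponding 3d point on the side itself by solving the
    // plane equation for the dropped coordinate. Dividing by the dominant
    // normal component is safe, since by construction it's the largest one.
    Vec3 axialToWorld(const Plane& side, float u, float v)
    {
        const float ax = fabsf(side.n.x);
        const float ay = fabsf(side.n.y);
        const float az = fabsf(side.n.z);
        Vec3 p;
        if (ax >= ay && ax >= az) {   // dominant x: (u, v) live on the yz plane
            p.y = u; p.z = v;
            p.x = (side.d - side.n.y * u - side.n.z * v) / side.n.x;
        } else if (ay >= az) {        // dominant y: (u, v) live on the xz plane
            p.x = u; p.z = v;
            p.y = (side.d - side.n.x * u - side.n.z * v) / side.n.y;
        } else {                      // dominant z: (u, v) live on the xy plane
            p.x = u; p.y = v;
            p.z = (side.d - side.n.x * u - side.n.y * v) / side.n.z;
        }
        return p;
    }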

As always, texture filtering makes small adjustments of uv-coordinates necessary, and I ended up expanding each lightmap by a few luxels in order to get rid of artifacts, which were especially apparent on small features.

Materials and the Beam Light Model

Mixed up in all of this stuff was the implementation of materials. As mentioned, I liked the idea of "building with materials", the way the Quake and Half-Life games work. This tied heavily into how the fundamental rendering of the world would work, as materials map closely onto the actual HLSL shaders I use to render everything.

Right now the materials control practically everything, and are the basis for batching all polygons in the world. Based on their settings they are either "lightmapped" (diffuse plus lightmap textures) or "prelit" (light-emitting materials), with some variations on these two main cases depending on whether I want additive / energy-like stuff.

In the material editor you can specify whether a material has a base, that is, a pre-lit uv-animated texture which is rendered beneath the normal diffuse texture, using the diffuse alpha as a mask. This is basically a hard-coded test of Quake 3 Arena style shaders. The diffuse in this case uses the lightmap, while the base is fully lit.
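
In terms of data, a material presumably boils down to something like the struct below; this is a guess at the overall shape of the settings, not the actual format.

    // Roughly the settings a material carries; everything the renderer needs
    // to batch and shade a polygon hangs off of this.
    enum class MaterialKind { Lightmapped, Prelit };

    struct Material {
        MaterialKind kind = MaterialKind::Lightmapped;
        bool additive = false;      // additive / energy-like blending variant
        bool hasBase = false;       // pre-lit uv-animated layer under the diffuse
        bool nodraw = false;        // marks a brushside as hidden (see above)
        float emitIntensity = 0.f;  // > 0 means the material emits light
        // ...plus texture handles, uv animation rates, etc.
    };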

After some initial tests with point lights (à la Quake), I wanted to implement the feature from Quake 3 Arena in which you can specify that a material emits light (I really wanted to avoid having to place "invisible lights" manually). If I understand correctly, this is implemented by taking into account the area of the brushside onto which the material is placed, and then basically placing a bunch of implicit point lights that all contribute to the lighting from that material.

Needless to say, this is expensive due to the explosion of individual point lights. I have built a ton of Quake 3 Arena levels over the years, and I was always severely frustrated by how long the lighting took to calculate. All of this came back to me at this point, and I understood that I needed to be creative in order to get around it.

On a fundamental level, I sort of felt that the subdivision into many point lights seemed like a hack, and an expensive one at that. I reasoned that one should be able to calculate lighting from a whole brushside at once (a sort of surface or area light). I speculated that everything at a given distance directly in front of the light emitting surface should be uniformly lit, and things outside the edges should fall off in a fashion similar to what happens with a point light.

After messing around with this for a while I got some pretty good results. In this area the choice to use convex polytopes to model world features really paid off, because I could use the convex properties of each brush to help out with the lighting model, which I am calling the Beam Light Model. Calculating lighting for an entire level is very very fast (with occlusion disabled), simply due to the fact that there are fewer total light emitting entities (as a single beam replaces many point lights). Also, since beams automatically backface cull themselves, the lighting preview is very very close to the final (occluded) result.
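
A sketch of what the unoccluded beam contribution might look like is below. insideBeam() and distanceToBoundary() stand in for the convex point-in-polygon and nearest-boundary tests (this is where the convexity of the brushsides pays off), and the inverse-square falloff is a placeholder guess rather than the actual model:

    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 n; float d; };

    float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Assumed helpers: test a point on the emitting plane against the convex
    // polygon of the emitting side, and find the distance to its boundary.
    struct Polygon;
    bool  insideBeam(const Polygon& side, const Vec3& p);
    float distanceToBoundary(const Polygon& side, const Vec3& p);

    float beamLightIntensity(const Vec3& point, const Plane& plane,
                             const Polygon& side, float brightness)
    {
        // Signed distance to the emitting plane; nothing behind the surface
        // is lit, which is why beams backface cull themselves automatically.
        const float dist = dot(plane.n, point) - plane.d;
        if (dist <= 0.0f)
            return 0.0f;

        // Project the point onto the emitting plane.
        const Vec3 onPlane = { point.x - plane.n.x * dist,
                               point.y - plane.n.y * dist,
                               point.z - plane.n.z * dist };

        if (insideBeam(side, onPlane)) {
            // Directly in front of the beam: uniform across the surface,
            // falling off with perpendicular distance only (no clamping
            // near the surface, for brevity).
            return brightness / (dist * dist);
        }

        // Outside the edges: fall off from the nearest boundary point,
        // much like a point light placed there.
        const float e = distanceToBoundary(side, onPlane);
        return brightness / (dist * dist + e * e);
    }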

While working on getting the occlusion to work, something which I had as an absolute requirement since I fell in love with the shadows in Quake a long time ago, I tried a little cheat that ended up working very well. Since I didn't want to resort to tracing lines of sight from many points along a light emitting surface to the patch/luxel in question (similar to having lots of point lights), I reasoned that I could fake it by limiting the points traced to the 4 corners and the center of the beam. Based on the success of these 5 traces, I then just contributed the corresponding percentage of the total beam light (which is fundamentally based on material settings and the area of the beam surface) to the luxel being lit.
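
The cheat itself is only a few lines. A sketch, assuming a traceLine() that returns true when the segment between two points is unobstructed (a bsp-based sketch of such a trace appears further down):

    struct Vec3 { float x, y, z; };

    bool traceLine(const Vec3& from, const Vec3& to);  // assumed, see below

    // Trace only from the beam's four corners and its center, and scale the
    // beam's full contribution by the fraction of traces that get through.
    // Partial visibility (1-4 out of 5) is what softens the shadow edges.
    float beamOcclusionFactor(const Vec3 corners[4], const Vec3& center,
                              const Vec3& luxel)
    {
        int visible = 0;
        for (int i = 0; i < 4; ++i)
            if (traceLine(corners[i], luxel)) ++visible;
        if (traceLine(center, luxel)) ++visible;
        return visible / 5.0f;
    }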

Surprisingly this worked very well, and is reasonably fast. Also, it sort of looks like a radiosity solution (which it definitely is not on a technical level), as there is some bleeding of lights. Additionally, I found lower resolution lightmaps to look better in the final image (with textures on), probably because the fake-radiosity combined with texture filtering tends to mix things together nicely.

All in all, I personally think the lighting looks better than Quake 3 Arena's, which was basically the look I was aiming for. I think this is due to the combination of not placing any "invisible" point lights anywhere and the characteristics of the beam lights. All light is emitted from surfaces that "look like they would emit light", and I think this helps a lot to sell the final image.

And oh yeah, the bsp tree...

In order to make the tracing of all the beams (hundreds) against all the patches (about fifty thousand) in my biggest test level feasible, I ended up creating a bsp tree from the brushsides to speed up the occlusion calculations. This is still semi-buggy, but ideally I would like to use it for all traces / collision detection in any game built on top of this stuff. In any case, it brought the lighting calculations down from over 3 minutes to about 11 seconds in my big test level.
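
For reference, the classic recursive segment trace through a bsp looks roughly like the sketch below, assuming a well-formed tree with solid/empty leaves; this is the textbook version rather than my own (still semi-buggy) implementation.

    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 n; float d; };

    float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    struct BspNode {
        Plane plane;
        BspNode* front = nullptr;  // both null in a leaf
        BspNode* back = nullptr;
        bool solid = false;        // only meaningful in leaves
    };

    // True if the segment from a to b passes through empty space only.
    bool traceLine(const BspNode* node, const Vec3& a, const Vec3& b)
    {
        if (!node->front && !node->back)  // leaf: solid blocks the trace
            return !node->solid;

        const float da = dot(node->plane.n, a) - node->plane.d;
        const float db = dot(node->plane.n, b) - node->plane.d;

        if (da >= 0.0f && db >= 0.0f)     // entirely in front of the plane
            return traceLine(node->front, a, b);
        if (da < 0.0f && db < 0.0f)       // entirely behind it
            return traceLine(node->back, a, b);

        // The segment straddles the plane: split it at the crossing point
        // and require both halves to be clear.
        const float t = da / (da - db);
        const Vec3 mid = { a.x + (b.x - a.x) * t,
                           a.y + (b.y - a.y) * t,
                           a.z + (b.z - a.z) * t };
        const BspNode* nearSide = (da >= 0.0f) ? node->front : node->back;
        const BspNode* farSide  = (da >= 0.0f) ? node->back  : node->front;
        return traceLine(nearSide, a, mid) && traceLine(farSide, mid, b);
    }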

Again, as I learned at Jungle Peak, there isn't much I could reasonably do to reduce the number of total draw calls or the amount of geometry that the GPU deals with. I don't use the bsp for anything related to culling; I simply don't cull. I have instead opted to cache everything statically on the GPU and just render the entire world all the time. This works fine; the nice fellows over at nVidia really knew what they were talking about when they argued for hardware-accelerated transform and lighting back in 1998 (at Microsoft Meltdown in London), even though all the game devs in the room insisted that they would never use it even if nVidia implemented it... :)

Below is the current state of things. Observe the lightmap that is displayed in 2d in the final image.

Future directions

At this point I feel pretty happy with the basic "state of the technology". Again, I really don't like to hammer away at technology without having a specific game project in mind, and by this point I feel that I basically have everything I need to get started on an actual game.

There is of course a lot more that could be said about the general design of the application, as all of the editing tools are built into the actual "game"; i.e. you can freely switch between editing the world, applying materials and aligning them, as well as editing materials, tweaking lights, and finally simply running around in the level.

Another thing that I really want to use the bsp for is figuring out the volumes of the "inside" and "outside" of the level. This is something that I understand the Quake tools to do, and it would help me a lot with culling hidden surfaces, as well as with discarding lighting patches that end up outside the visible level.

However, this is really not that important, as the number of triangles, vertices, and ultimately draw calls involved here is very low. I expect any game project to have lighting done as an offline process, storing the lightmap textures as part of the level data on disk. With that setup the load times are basically insignificant, meaning that I can split up levels in any way I want without anything really getting out of hand.

So now I guess it's time to put my game design hat on instead... :)
