home > the swörd > september 2009


The Swörd - September 2009

Ambient Occlusion

Since January I've been messing around on and off with the technology, and one of the things that I've implemented is a variation on ambient occlusion. The main driver has been laziness: even though the Beams already make lighting a level much simpler than placing point lights, anything that makes level design easier and, above all, faster is always of interest.

Not really knowing much about ambient occlusion, I forged ahead rather blindly. The basic approach I'm using is to trace lots of rays in a spherical shape (a "vector ball") from each lightmap patch. These are little rays that only travel for a certain length, and depending on the ratio of rays that collide with level geometry to ones that don't, I calculate a lighting value. This is all just white light; I haven't figured out a way to take colored light sources into account here, so this is just light coming from no specific source, i.e. "everywhere".
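The basic loop can be sketched like this (a rough illustration only: the Vec3 type, the plane-only occluder test, and the ray count are simplified stand-ins, not the real engine code):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

// Occluders here are just horizontal planes of the form "y = height,
// solid below", to keep the sketch self-contained.
struct FloorPlane { float height; };

// True if a ray of length maxLen from 'origin' along 'dir' would cross
// the plane from above.
static bool rayHitsFloor(const Vec3& origin, const Vec3& dir,
                         float maxLen, const FloorPlane& p)
{
    if (dir.y >= 0.0f) return false;          // moving away from the floor
    float t = (p.height - origin.y) / dir.y;  // distance to the plane
    return t >= 0.0f && t <= maxLen;
}

// Crude uniform-ish direction sampling: rejection-sample points in the
// unit cube and normalize the ones that land inside the unit sphere.
static Vec3 randomDirection()
{
    for (;;) {
        Vec3 v = { rand() / (float)RAND_MAX * 2.0f - 1.0f,
                   rand() / (float)RAND_MAX * 2.0f - 1.0f,
                   rand() / (float)RAND_MAX * 2.0f - 1.0f };
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0.001f && len <= 1.0f)
            return Vec3{ v.x / len, v.y / len, v.z / len };
    }
}

// Ambient value for one patch: the fraction of rays that escape.
static float ambientOcclusion(const Vec3& patch, float rayLength,
                              const std::vector<FloorPlane>& occluders,
                              int rayCount = 512)
{
    int clear = 0;
    for (int i = 0; i < rayCount; ++i) {
        Vec3 dir = randomDirection();
        bool hit = false;
        for (const FloorPlane& p : occluders)
            if (rayHitsFloor(patch, dir, rayLength, p)) { hit = true; break; }
        if (!hit) ++clear;
    }
    return clear / (float)rayCount;  // 1 = fully open, 0 = fully enclosed
}
```

A patch hovering close to a floor gets roughly half its rays blocked, while one far above it (farther than the ray length) stays fully lit, which is the whole effect in miniature.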

One of the cool things about all this is playing around with the length of the rays. Making them "long" in relation to your level geometry / scale has the effect of making smaller rooms darker, and bigger rooms lighter. This is of course just an exaggeration of the basic effect, but to me it looks sort of like a radiosity solution, and I think it's pretty viable for a general lighting scheme. Perhaps a little colorless though...

I personally like the look without textures more...

In the ever-present search for a game project to actually base on this stuff, I've been thinking that the basic ambient occlusion look with the addition of some "fully bright" details would be pretty cool and look reasonably original. However, to really achieve that I would need to get away from the "Quakey" dark and hi-frequency albedo textures...

Subtractive Geometry

I recently re-played the original Unreal game from 1998, a game that I've always liked the look of. I managed to install the final patch and enable all the D3D goodness, including the cool detail textures they have going. While playing through it I happened to read up about the technology on Wikipedia, and thus remembered that the level construction process of UnrealEd allowed for both subtractive brushes as well as Quake-style additive / solid brushes.

The big plus here is supposedly that you couldn't cause "leaks to the void" in UnrealEd (since you are carving the level out of solid mass), something that is easy to do in Quake level editors (where you are building the level in the "void"). I also noticed that Wikipedia stated that build times for Unreal levels were (at the time) several orders of magnitude faster than for Quake levels. Obviously both companies (Epic and id) had proprietary technologies, but it seemed to me that brush counts in Unreal levels would probably be much smaller, and that should in some fundamental way account for the quicker processing times.

For my part I don't have any requirement for "sealed" levels as Unreal and Quake seem to have had, but as a result I've implemented the stop-gap measure of the "no-draw" material, which I use to mark all the surfaces I don't need. In even moderately sized levels this quickly becomes quite tedious and really goes against my implicit goals of keeping things quick and productive.

Notice the sheer volume of brushes that get stripped away here.
In general fewer than half of the faces of any given brush are actually used.

So basically I wanted subtractive brushes too! I made some quick and dirty changes to my tessellation (inverted the winding order) and found that things looked pretty promising. The basic idea was to be able to get away with building an entire room with a single brush, thereby utilizing all 6 faces for actual visible walls. However, the tessellator needed work to get rid of the unwanted faces between the brushes, and that proved to be reasonably challenging.
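The winding-order flip itself is the easy part and can be sketched as follows (the types and helper names are illustrative, not my actual engine code): reversing each face's vertex order flips its geometric normal, so the same tessellation that faces outward for an additive brush faces inward for a subtractive one.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

// Geometric normal of a triangle (not normalized; only direction matters here).
static Vec3 faceNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    Vec3 e1{ v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 e2{ v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    return cross(e1, e2);
}

// Reversing each triangle (i0,i1,i2) -> (i2,i1,i0) flips its winding,
// and thereby its normal: outward-facing walls become inward-facing ones.
static std::vector<int> invertWinding(std::vector<int> indices)
{
    for (size_t t = 0; t + 2 < indices.size(); t += 3)
        std::swap(indices[t], indices[t + 2]);
    return indices;
}
```

The genuinely hard part, as noted above, is not this flip but removing the unwanted faces where neighboring subtractive brushes overlap.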

In the end the tessellator turned out to be better than before, even for the old additive brushes, but it was a bit slower. When run on the most complex level (the one that sparked all of this in the first place), the slowdown looked like it could become a real issue (due to the high brush count of that level, which was built "additively"). Based on that it seems to me that subtractive brushes are validated even more.

I proceeded to build a semi-extensive level with the new subtractive brushes.
In the top-view you can see the dominance of subtractive brushes (blue) over additive (red).
The colors in the 3d-view show the mapping of the lightmaps, but the actual lighting isn't supported for subtractive brushes yet.

There were tons of options when it came to implementing the support for both kinds of brushes, and the collision detection was especially complicated by this. As things stand right now I have managed to implement support for brush based collision (used for the player right now), but I really need to get the bsp-tree working again in order to get the lighting calculations to run at reasonable speeds.

3d Character HELL!

One of the absolutely most difficult and frustrating things about being an indie coder / developer is the issue of 3d content, and in particular this applies to the issue of getting your hands on 3d characters. This has had me literally pulling my hair out at times. I know a number of professional 3d modellers / animators, and I can't seem to get them to even lend me a character that could work in the context of my games.

To be fair, this is a complex issue. During my time at Jungle Peak Studios we ended up outsourcing the creation of a character animation system and pipeline. This system, called Skeletor, ended up being very good from a coder point of view, but at the same time it was also quite fragile. The artists had to build things in a particular way (we used Maya 2007 at the time), watching out for the kinds of transforms they applied to the skeleton, the maximum number of bone weights per vertex (3), as well as a number of naming constraints, in order to get the skin and skeleton to export at all.

The system was designed to support simultaneous animations on different parts of the skeleton, as well as animation blending, but for some reason the artists never managed to build ANY character with more than a single animation for the full body. My personal theory here is that modeling, texturing, rigging, and animating good characters is SIMPLY VERY HARD, even though most artists I've met won't admit to that.

In the years since I left Jungle Peak Studios I've constantly been thinking about the issue of getting 3d characters into a game engine. Using Skeletor would have been very preferable, largely because it supported both everything that I could want as a coder (being fully Immediate Mode) and everything that the artists wanted. However, as Maya evolved to new versions the APIs also evolved, and in order to both compile and run the Skeletor exporter you need an installed version of Maya.

Personally I'm not into stealing software, so this effectively killed Skeletor's usefulness for me. I know that Daniel over at Spell of Play Studios has done some work trying to get Skeletor to work with content created in Milkshape, but I don't think that panned out too well. Finally, it honestly seems to me that exporting skinned meshes and skeletons out of a commercial 3d package and into a custom codebase is a luxury that we as indies simply cannot afford. I know from first-hand experience at Massive Entertainment, DICE, Jungle Peak, and MindArk that these kinds of systems are a major investment and a very important part of each company's proprietary codebase, often with more than one dedicated full-time coder assigned to them.

Do It Yourself!

Things started looking rather dim for a while, at least to me, but I was determined to solve the various problems separating me from making games in 3d, and to me the first one to tackle was that of access to ANY kind of content at all. As a result I have spent some time off and on during the past 2 years learning how to model.

I reasoned that one of the main reasons I've been able to create games at all is that I can find my way around a 2d paint program reasonably well and use it to create art that basically looks like the thing I'm trying to draw. Also I don't like being forced to wait for someone to complete something so that I can push forward to the next thing, so having some level of modeling skill myself seemed like a good idea.

Learning to model... it's been really really painful! After messing around with all of the freeware programs I could find, I finally fell for Wings 3D. One of the main reasons for this is the fact that it is a solid modeler, and that kind of thing just clicks conceptually for me. After a few days learning Wings I was able to create basic mechanical-looking things, but characters continued to elude me.

I think one of the main reasons for this is simply the need to learn some basic anatomy, which I slowly but surely did. During all of this I realized that I not only wanted to build A character, I wanted a character with CHARACTER. My preferences in graphics have always been very retro, and since I was also semi-consciously thinking about how to animate and actually get this stuff into a game of mine, low-poly / lo-fi was always a big focus.

I finally found a really cool forum thread on the subject of low-poly and have collected some of my favorite images here. Some of the stuff there I find to be really brilliant, in particular Rooster's use of a low-res texture on a low-res mesh; that just seems like a great conceptual fit to me!

I love Rooster's stuff!

As things stand I'm still a fledgling 3d-artist, but lately I stumbled upon a character design that is based on a very simple premise while still being expandable into something with style. Also I've been trying to do some very simple animation on this design, and a plan has slowly been forming in my mind...

Who needs arms?

The PixelMesh

During all of this painful growth, I've been considering the pipeline issue; both how to animate characters and how to get them into my "engine". I think Wings has worked out very well for the modeling and uv-mapping stages (it was really just yesterday I figured out how to uv-map properly in Wings...)

My plan is this:

I've talked to a lot of artists about this, and the discussions are universally very lengthy. The gist is that artists know all about vertex blending and generally don't like it, since you can't properly do movement in an arc when linearly blending between static frames of animation. As a result they say that they "require" mesh skinning based on skeletal animation, i.e. what Skeletor did.
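The artists' objection is easy to show with a tiny calculation (this is my own illustration of their point, not anything from Skeletor): lerping between two keyframes of a point rotating on a circle moves it along the chord, not the arc, so the halfway pose sits closer to the pivot than either keyframe, and a swinging limb visibly shortens mid-swing. Skeletal skinning rotates the bone instead and keeps the radius constant.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Plain linear blend between two keyframe positions.
static Vec2 lerp(const Vec2& a, const Vec2& b, float t)
{
    return Vec2{ a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

static float length(const Vec2& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y);
}

// Distance from the pivot at the halfway point of a 90-degree swing,
// for a limb of the given length. Skinning would keep this equal to
// limbLength; vertex blending cuts the corner (0.707 * limbLength here).
static float blendedRadiusAtHalfway(float limbLength)
{
    Vec2 frameA{ limbLength, 0.0f };   // keyframe 0: limb pointing right
    Vec2 frameB{ 0.0f, limbLength };   // keyframe 1: limb pointing up
    return length(lerp(frameA, frameB, 0.5f));
}
```

The fix on the content side is simply more keyframes: the chord error shrinks quickly as the angle between keyframes gets smaller.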

I retort:

Even though I do not count on any artist being interested in creating content for my games, I think that this is a case of "if you build it, they will come". If they ever were inclined to come at all...

So, in order to get started I dug around and got my hands on some content from Quake II, which is exactly the right kind: one single mesh that is animated by changing the vertex positions, i.e. you have positions for each vertex for each frame. This boils down to having normals for each vertex and frame as well, which Quake II incidentally compressed to all hell by having the meshes reference a global table of normal vectors, but I digress... :)
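That compression scheme can be sketched like this (a simplified stand-in, not a real MD2 loader: the real format's normal table has 162 precomputed directions, and the three-entry table and struct names below are illustrative):

```cpp
#include <cassert>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Stand-in for the global normal table shared by every mesh; the real
// Quake II table has 162 entries.
static const Vec3 kNormalTable[] = {
    { 1.0f, 0.0f, 0.0f },
    { 0.0f, 1.0f, 0.0f },
    { 0.0f, 0.0f, 1.0f },
};

// One compressed vertex: a quantized position plus a normal-table index.
struct PackedVertex {
    uint8_t pos[3];
    uint8_t normalIndex;
};

// Per-frame dequantization parameters.
struct FrameHeader {
    Vec3 scale;
    Vec3 translate;
};

// Positions are stored as bytes and expanded with the frame's
// scale and translate.
static Vec3 unpackPosition(const PackedVertex& v, const FrameHeader& f)
{
    return Vec3{ v.pos[0] * f.scale.x + f.translate.x,
                 v.pos[1] * f.scale.y + f.translate.y,
                 v.pos[2] * f.scale.z + f.translate.z };
}

// Normals are just a single-byte lookup into the shared table.
static Vec3 unpackNormal(const PackedVertex& v)
{
    return kNormalTable[v.normalIndex];
}
```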

Since I speculated that doing all of this transformation "by hand" (on the CPU) would be a bad thing, I implemented a system I call PixelMesh. In this scheme, I convert the positions and normals for each frame into pixels, encoding the data in a texture map, and animate everything on the GPU using a HLSL shader. As it turns out, this requires shader model 3 in order to read the texture in the vertex shader, as well as requiring floating point textures. I don't really know if requiring this shader model is reasonable in a game these days, but it doesn't seem completely crazy either. I guess I could always do a software fallback, but as things stand I can render LOTS of dudes at a VERY high frame rate, all with individually timed animations. All of this completely done on the GPU.
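The lookup the shader performs can be mirrored on the CPU to show the idea (the class layout and names below are an illustrative sketch, not the actual shader or engine code): positions for every vertex and frame live in a numVerts-by-numFrames grid (the float texture), and sampling fetches a vertex's two neighboring keyframes and lerps between them.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

class PixelMeshSketch {
public:
    PixelMeshSketch(int numVerts, int numFrames)
        : verts_(numVerts), frames_(numFrames),
          texels_(numVerts * numFrames) {}

    // Write one "pixel": the position of vertex v at keyframe f.
    void setPosition(int v, int f, const Vec3& p)
    {
        texels_[f * verts_ + v] = p;
    }

    // Sample vertex v at a fractional animation time, wrapping at the end.
    // This is the same two-fetch-and-lerp the vertex shader would do with
    // vertex texture fetch.
    Vec3 sample(int v, float time) const
    {
        float wrapped = std::fmod(time, (float)frames_);
        int f0 = (int)wrapped;
        int f1 = (f0 + 1) % frames_;
        float t = wrapped - f0;
        const Vec3& a = texels_[f0 * verts_ + v];
        const Vec3& b = texels_[f1 * verts_ + v];
        return Vec3{ a.x + (b.x - a.x) * t,
                     a.y + (b.y - a.y) * t,
                     a.z + (b.z - a.z) * t };
    }

private:
    int verts_, frames_;
    std::vector<Vec3> texels_;
};

// Convenience for the test below: one vertex animated from (0,0,0) at
// frame 0 to (span,0,0) at frame 1, sampled at the halfway point.
static float midpointX(float span)
{
    PixelMeshSketch m(1, 2);
    m.setPosition(0, 0, Vec3{ 0.0f, 0.0f, 0.0f });
    m.setPosition(0, 1, Vec3{ span, 0.0f, 0.0f });
    return m.sample(0, 0.5f).x;
}
```

On the GPU the per-instance animation time just becomes a shader constant (or instance attribute), which is what makes the individually timed crowds cheap.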

625 guys on screen (almost) at over 100 fps.
Ok sure, they have broken polygons but this is an early version... :)

The big tradeoff here is texture memory. I need 2 RGBA floating point textures (one for positions, one for normals), and these need to be numVerts X numFrames pixels big. However, I will be running relatively low-poly (the test dude is 1400 vertices and can be severely optimized) and SEVERELY low-frame. By this I mean that I will use a very low number of key-frames and then simply linearly interpolate between them. I mean come on, my guys are SUPPOSED to look like Lego-dudes, and they don't have any knees. And they don't need arms either... :)
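Putting rough numbers on that tradeoff (the 8-keyframe count here is my assumption for illustration; the vertex count is the test dude's): each texture is numVerts x numFrames RGBA texels, and at 32-bit float components a texel is 16 bytes.

```cpp
#include <cassert>

// Bytes for one numVerts x numFrames RGBA floating point texture.
// 16 bytes per texel = 4 channels x 4-byte floats.
static long textureBytes(long numVerts, long numFrames,
                         long bytesPerTexel = 16)
{
    return numVerts * numFrames * bytesPerTexel;
}
```

For 1400 vertices and 8 keyframes that is 1400 x 8 x 16 = 179,200 bytes per texture, and two textures (positions and normals) are needed, so the budget stays well under a megabyte per character.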

The disk size for the current test PixelMesh dude is about 50kb for tris and control info, and about 200kb each for the floating point position and normal textures.

As mentioned, the Lego-inspiration is probably very obvious, but that's because the design of those Lego-dudes is brilliant. Lots of personality built on something that is essentially very limited articulation-wise. One of the things that I would like to try with this is to have interchangeable heads, feet and hands / weapons / things to hold. Also I like the idea of animating the face by animating uvs; just have a bunch of different painted (2d) faces on a texture map and swap them around to do different expressions and dialog.
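The uv-swap idea boils down to very little code (a sketch under assumptions: the 4x4 atlas grid and the row-major cell ordering are illustrative choices, not anything decided yet): paint all the faces in a grid on one texture, then pick an expression by offsetting the face polygons' uvs to the right cell.

```cpp
#include <cassert>

struct UV { float u, v; };

// Top-left uv of grid cell 'expression' in a gridSize-by-gridSize face
// atlas, counting cells left to right, top to bottom. Adding this offset
// to the face polygons' base uvs selects the expression.
static UV expressionOffset(int expression, int gridSize = 4)
{
    float cell = 1.0f / gridSize;
    return UV{ (expression % gridSize) * cell,
               (expression / gridSize) * cell };
}
```

Since the offset is uniform across the few face polygons, this could even be a single shader constant per character, i.e. no mesh data changes at all.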

As things stand right now I'm seriously considering building my own animation tools into The Swörd. This is partly because the idea of learning Blender or something similar just to do rigging and skinning completely turns me off, and partly because I've been getting results that are absolutely good enough for my purposes by just manually animating vertices in Wings. Sure this is sooo 1990's, but I don't care.

So in the end this is all just a mix of the old and the new. I'm having lots of fun and learning tons of stuff, and hope to have a little game showing most of this stuff off within the next... oh say 6 months.

I mean let's face it, it's time for me to get over the 640 x 640 x 16 bits-per-pixel love-affair that I've had for so long... :)

 contact: johno(at)johno(dot)se