Archive for October, 2010

GDC Online 2010 [Austin]

October 16th, 2010 No comments

So I attended GDC Online this year. I missed the David Perry talk, where they showed a new technology that lets users play game demos streamed over the internet. The kicker: the game actually runs entirely on the server, using cloud computing technology they call Gaikai. It differs from OnLive in that the game developer is charged rather than the gamer.

I attended a talk where Ubisoft discussed bringing their MMOs to the East (China) and the challenges they faced. The talk was very informative and drove home many familiar points:

A) It’s easier to start off as a Free-to-Play title and then go subscription-based
B) Eastern gamers are more eager to embrace F2P models

Sadly, I missed CCP’s talk about the benefits of bringing territory takeover to the MMO space. They gave that talk on a different day than I had planned to attend :(

There was a ton of buzz about Unity. I’d heard about it before and had been meaning to check it out. Apparently, they’ve made their tech very accessible, and the pricing structure is flexible enough that many small teams can hit the ground running. We played a game (Ruined Online) developed with the tech that ran right in a web browser; Unity has a web browser plugin. Very impressive indeed.

I also helped man the Vigil Games booth this year (I was there in person Friday) and talked to several game developers and future game devs. It was nice to see all the talent gearing up to enter the industry. Hopefully I was able to give out some good advice on getting started.

Categories: Games Industry, MMO Tags:

Using a SQL Database for Tools

October 4th, 2010 No comments

[Updated Oct 7, 2010]

I think I’ve blogged on this subject a little here and there, but I figured I’d dedicate a post to it again.

SQL databases can be very useful on the tools side of game development. For runtime, you export the data from SQL into a binary format your engine understands. The benefits should outweigh the drawbacks. I know devs hate reinventing the wheel, but using 3D Studio Max to build levels in their entirety can lead to much gnashing of teeth, since only one artist has access to the level at a time. Don’t get me wrong, I’m actually a fan of writing a plugin/script to leverage a 3D modeling package for small teams, or for teams whose levels are built from components. However, what about the other content, like the data designers tweak?
Read more…

Categories: Programming Tags:

Deferred Rendering Notes

October 2nd, 2010 2 comments

Like I’ve said before, Deferred Rendering is the new buzz term these days. The basic advantage is that you can render a lot of dynamic light sources that uniformly affect the scene. You can write one that uses a single render target by making multiple passes, or just use MRT (multiple render targets) straight up.

If you are thinking about writing one, I can offer some advice:

1. When you write to your depth buffer, most tutorials will suggest the z/w method. However, there are better techniques, such as the ones employed by Blizzard and Crytek, which make use of view-space rays. MJP wrote an excellent blog post on this method here (which is what I do). You can also try accessing the hardware depth buffer directly.
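The view-space ray idea can be sketched as follows (Python standing in for per-pixel HLSL; the point, far-plane value, and ray here are made-up numbers): store linear view-space depth divided by the far clip in the G-buffer, then at lighting time scale the interpolated eye-to-far-plane ray by that stored depth to recover the view-space position.

```python
def reconstruct_view_pos(stored_depth, far_plane_ray):
    """stored_depth: view-space z / far_clip, written in the G-buffer pass.
    far_plane_ray: the interpolated vector from the eye through this pixel
    to the far plane (its z equals far_clip). Their product is the
    view-space position of the shaded point."""
    return tuple(stored_depth * c for c in far_plane_ray)

# G-buffer pass: a surface point at view-space (3.0, -2.0, 50.0), far plane at 100.
far_clip = 100.0
view_pos = (3.0, -2.0, 50.0)
stored_depth = view_pos[2] / far_clip  # 0.5, fits nicely in a [0,1] R32F target

# Lighting pass: the full-screen pass interpolates the ray through this pixel
# that reaches the far plane, i.e. view_pos scaled so its z equals far_clip.
scale = far_clip / view_pos[2]
far_plane_ray = tuple(c * scale for c in view_pos)  # (6.0, -4.0, 100.0)

reconstructed = reconstruct_view_pos(stored_depth, far_plane_ray)
```

Because the stored value is linear, it also interpolates and compares more predictably than z/w.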

2. Not all video cards support render targets with mismatched bit depths, so many folks go either all 64-bit or all 32-bit. Currently, we render all of our targets at 32 bits per pixel: the depth buffer is D3DFMT_R32F, and every other render target is D3DFMT_A8R8G8B8 except for normals, which get D3DFMT_A2B10G10R10. In the StarCraft 2 tech paper, Blizzard goes with D3DFMT_A16B16G16R16F (64 bits per pixel), which is what I do for high-quality settings.
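To see what uniform bit depth costs in memory, a quick back-of-the-envelope calculation (the 1280×720 resolution and four-target count are my assumptions for illustration, not from the post):

```python
def gbuffer_bytes(width, height, bytes_per_pixel, num_targets):
    """Total memory for a set of uniformly sized render targets."""
    return width * height * bytes_per_pixel * num_targets

# 32-bit path: R32F depth, A8R8G8B8 targets, A2B10G10R10 normals --
# every format is 4 bytes per pixel.
low_quality = gbuffer_bytes(1280, 720, 4, 4)    # ~14.1 MB

# 64-bit path: the same four targets as A16B16G16R16F, 8 bytes per pixel.
high_quality = gbuffer_bytes(1280, 720, 8, 4)   # exactly double
```

Doubling the per-pixel size doubles both the memory footprint and the bandwidth consumed every frame, which is why the 64-bit layout is worth gating behind a high-quality setting.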

3. Some people resort to bit packing and such. MJP wrote a post which goes over the method here:

Packing multiple values into a single component is pretty ugly in SM3.0 and below, because you don't have integer ops. So essentially what you need to do is use floating point math to determine the final integer value of a component, and then divide that value by the max integer value to normalize back to the [0,1] range. So for instance, let's say you wanted to pack two [0,1] values into an 8-bit integer. The lower 4 bits can hold 0-15, so you simply multiply the first value by 15 to get its integer value. The upper 4 bits contribute in multiples of 16, so you multiply the second value by 15 and then by 16 to shift it into place. Then you add the two values (the equivalent of a bitwise OR) to get your final 0-255 value. Finally, you divide that sum by 255 to get the final [0,1] value to be written out to the render target.
float Pack8Bits(float val0, float val1)
{
    float lower = val0 * 15.0f;
    float upper = val1 * 15.0f * 16.0f;
    float sum = lower + upper;
    return sum / 255.0f;
}

To adjust for 16-bit you would just expand the range to 0 - 65535 instead.
float Pack16Bits(float val0, float val1)
{
    float lower = val0 * 255.0f;
    float upper = val1 * 255.0f * 256.0f;
    float sum = lower + upper;
    return sum / 65535.0f;
}

[Richard] Note, you will also have to unpack these values in the shader if you go this route.
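For reference, here is what the matching unpack looks like, sketched in Python (mirroring the float-only math an SM3.0 shader would use; `math.floor` maps to the HLSL `floor` intrinsic, and I've added rounding steps to keep the integer values exact under float precision):

```python
import math

def pack8(val0, val1):
    """Mirror of Pack8Bits above: two [0,1] values into one 8-bit component.
    round() snaps each value to its 4-bit integer before combining."""
    lower = round(val0 * 15.0)
    upper = round(val1 * 15.0) * 16.0
    return (lower + upper) / 255.0

def unpack8(packed):
    """Recover the two [0,1] values from the packed component.
    The +0.5 before floor() guards against float precision loss."""
    value = math.floor(packed * 255.0 + 0.5)
    upper = math.floor(value / 16.0)   # high 4 bits, 0..15
    lower = value - upper * 16.0       # low 4 bits, 0..15
    return lower / 15.0, upper / 15.0
```

Note that packing is lossy: each value survives only at 4-bit precision, so a round trip reproduces the inputs exactly only when they were already multiples of 1/15.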

4. During the deferred lighting pass, you ‘accumulate’ the contribution of every light in the scene. Thus, you can get by with just one shadowmap, which gets reused while rendering the shadows for each shadow-casting light. Below is an explanation I posted once:

What I do is use a light accumulation buffer (see deferred rendering). During the lighting phase I do this:

– For every light in the scene, render from its perspective into a shadowmap
– Now switch to the deferred point/spotlight shader. While rendering the light (using additive blending), sample the shadowmap and project the shadows into the light accumulation buffer

From what I gather, most techniques involve having all of the lights contribute to a light accumulation buffer; later, this render target is composited into the final scene.
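The loop above can be sketched like this (a toy Python simulation of the accumulation, not renderer code; the light dictionaries and per-pixel intensity model are made up purely for illustration):

```python
def lighting_phase(lights, num_pixels):
    """Accumulate every light's contribution additively, reusing a single
    shadow map render target across all shadow-casting lights."""
    accumulation = [0.0] * num_pixels   # the light accumulation buffer
    shadow_map = None                   # the one reusable shadow target
    shadow_passes = 0
    for light in lights:
        if light["casts_shadows"]:
            # Render scene depth from the light's point of view,
            # overwriting whatever the previous light left behind.
            shadow_map = "depth for " + light["name"]
            shadow_passes += 1
        for i in range(num_pixels):
            # Additive blend: each light stacks its contribution on top,
            # attenuated by the (omitted) shadow map lookup.
            accumulation[i] += light["intensity"]
    return accumulation, shadow_passes
```

The key property is that the shadow map never needs to outlive a single light: it is filled, consumed by that light's blending pass, and then recycled.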

Categories: Programming Tags: ,