Blog

Windows Phone Code Release

November 26th, 2012

About a year back I wrote a couple of classes to aid rapid Windows Phone app development in my development group. I've decided to release them on my site and announce them here in the hope that others might be able to make use of them.

The release includes three files. The static TMicrophone and TAccelerometer classes wrap the existing microphone and accelerometer APIs and greatly simplify their use; most common functionality is available through simple wrapper methods. The third file contains Sprite and SpriteSheet classes that provide simple, lightweight spritesheet loading and sprite creation, with variable-speed playback and circle-to-circle and box-to-box collision.
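
As a rough illustration of how the classes are meant to be used, here is a hypothetical sketch; the actual method and property names are documented on the download page and may differ from these:

    // Hypothetical usage sketch; these member names are my illustration,
    // not necessarily the real API.
    TAccelerometer.Start();
    Vector3 tilt = TAccelerometer.Reading;   // current acceleration vector

    SpriteSheet sheet = new SpriteSheet(Content.Load<Texture2D>("player"), 32, 32);
    Sprite player = new Sprite(sheet);
    player.PlaybackSpeed = 0.5f;             // variable-speed playback
    bool hit = player.CollidesWith(enemy);   // box-to-box or circle-to-circle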

For more detailed information and downloads, visit this page: https://www.brianmacintosh.com/code/code.php?id=3



Dynamically Lighting 2D Scenes with Normal Maps

April 13th, 2012

My random inspiration this week occurred when one of my professors, Dan Frost, mentioned basic lighting techniques in lecture. It turns out that for any surface, the final color of that surface under a light can be calculated quite simply: multiply the surface's color by the color of the light, scaled by the dot product of the surface normal and the normalized vector from the surface to the light (clamped to zero, so surfaces facing away from the light receive none of it). I thought to myself, "I could totally do that." And about an hour later, I had this demo up and running.
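
In code, that per-pixel calculation is only a few lines. Here is a minimal C#/XNA sketch of the formula (the demo actually evaluates it in a pixel shader, but the math is the same); the variable names are mine:

    // Lambertian diffuse lighting for one point light; names are illustrative.
    // surfaceNormal is assumed to be unit length.
    Vector3 toLight = Vector3.Normalize(lightPos - surfacePos);
    float diffuse = MathHelper.Max(0f, Vector3.Dot(surfaceNormal, toLight));
    Vector3 litColor = surfaceColor * lightColor * diffuse;  // componentwise tint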

Normal mapping has traditionally been used in 3D games on the surface textures of 3D models. In that context, it can make an extremely low-poly object appear much more detailed at a low processing cost by lighting its textures so that their features appear to "pop out". In this demo, I have instead used it to fake a 3D effect on a completely flat 2D plane.

The process is fairly straightforward. First, the standard scene is rendered to a render target cleared to black, with nothing special done. Then, the scene is rendered a second time to a second render target, but this time each image's normal map, if it has one, is drawn in place of the standard image. Normal maps encode the surface normal, the direction the surface faces, at each pixel of an image. In most implementations, the X, Y, and Z components of the normal vector, each remapped from [-1, 1] to [0, 255], are stored as the R, G, and B components of the corresponding pixel. This can be seen toward the end of the demo video. The second render target is cleared to RGB(128, 128, 255), which decodes to the normal (0, 0, 1) pointing straight out of the screen, so images without normal maps are lit as flat planes. Finally, both renders are passed to a shader that calculates the final color of each pixel using the colors from the first image, the normals from the second, and the locations and colors of all the lights in the scene.
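
Put together, the render loop might look something like the following XNA 4.0 sketch. It assumes the usual Game boilerplate (GraphicsDevice, spriteBatch) plus a hypothetical lightingEffect; the type and parameter names are mine, not necessarily the demo's:

    // Two-pass setup described above (XNA 4.0). Assumes standard Game fields.
    RenderTarget2D colorTarget = new RenderTarget2D(GraphicsDevice, width, height);
    RenderTarget2D normalTarget = new RenderTarget2D(GraphicsDevice, width, height);

    // Pass 1: draw the scene normally onto a target cleared to black.
    GraphicsDevice.SetRenderTarget(colorTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    foreach (SceneSprite s in scene)               // SceneSprite is hypothetical
        spriteBatch.Draw(s.Texture, s.Position, Color.White);
    spriteBatch.End();

    // Pass 2: draw each sprite's normal map (if any) onto a target cleared to
    // (128, 128, 255), i.e. the flat normal (0, 0, 1).
    GraphicsDevice.SetRenderTarget(normalTarget);
    GraphicsDevice.Clear(new Color(128, 128, 255));
    spriteBatch.Begin();
    foreach (SceneSprite s in scene)
        if (s.NormalMap != null)
            spriteBatch.Draw(s.NormalMap, s.Position, Color.White);
    spriteBatch.End();

    // Final pass: combine both targets in a pixel shader that applies the
    // lighting equation per pixel.
    GraphicsDevice.SetRenderTarget(null);
    lightingEffect.Parameters["NormalMap"].SetValue(normalTarget);
    lightingEffect.Parameters["LightPosition"].SetValue(lightPosition);
    lightingEffect.Parameters["LightColor"].SetValue(lightColor);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                      null, null, null, lightingEffect);
    spriteBatch.Draw(colorTarget, Vector2.Zero, Color.White);
    spriteBatch.End();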

I plan to post the code from this demo soon, once I've added support for ambient light, directional light, and multiple point lights, so look out for it!

