
Welcome to brianmacintosh.com. I'm Brian MacIntosh, and I am a game programmer in the Orange County area of Southern California. This site serves to host and distribute some of my games and my blog, below.

I have developed games and apps for the Xbox 360, Windows PC, iPad, Amazon Alexa, and Windows Phone 7. I'm particularly interested in procedural generation, pixel art, and emergent gameplay, and I'm looking forward to developing more games with these techniques.


You are filtering on tag 'random'.
12 total posts | page 3 of 3

Normal Mapping Code Released


April 22nd, 2012 @ 22:59
Tags: code, random

I was working on adding specular mapping (a technique that imitates the "shininess" of a surface) to the normal mapping demo, but I ultimately decided to release the basic code now and perhaps create an updated library later. Enjoy :).
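The released code doesn't include that specular work, but for the curious, the idea is a view-dependent highlight term layered on top of the diffuse lighting. Here's a rough Blinn-Phong sketch in Python; the function and parameter names are mine for illustration, not anything from the demo:

```python
import math

def _normalize(v):
    """Scale a vector to unit length (returns v unchanged if zero-length)."""
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def specular_term(normal, to_light, to_viewer, shininess=32.0):
    """Blinn-Phong specular highlight: (N . H) ^ shininess, where H is the
    half-vector between the light and view directions. A specular map would
    scale this term per pixel to vary how "shiny" each part of a surface is.
    """
    n = _normalize(normal)
    h = _normalize(tuple(a + b for a, b in
                         zip(_normalize(to_light), _normalize(to_viewer))))
    return max(0.0, sum(a * b for a, b in zip(n, h))) ** shininess
```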

Download Here



Dynamically Lighting 2D Scenes with Normal Maps


April 13th, 2012 @ 12:45
Tags: random, camera obscura

My random inspiration this week struck when one of my professors, Dan Frost, mentioned basic lighting techniques in lecture. It turns out that for any surface, the final color of that surface under a light can be calculated quite simply: multiply the surface's color by the color of the light, scaled by the dot product of the surface normal and the normalized vector from the surface to the light. I thought to myself, "I could totally do that." And about an hour later, I had this demo up and running:

Normal mapping has traditionally been used in 3D games on the surface textures of 3D models. In that context, it can make an extremely low-poly object appear far more detailed at low processing cost by lighting its textures so that they appear to "pop out". In this demo, I have instead used it to fake a 3D effect on a completely flat 2D plane.
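That diffuse (Lambertian) formula is small enough to show directly. Here's a minimal Python sketch of the per-pixel math described above; the actual demo does this in a shader, and all the names here are mine for illustration:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lit_color(surface_color, light_color, normal, to_light):
    """Diffuse (Lambertian) shading: surface * light * max(0, N . L).

    surface_color, light_color: RGB tuples in [0, 1]
    normal: surface normal at this pixel
    to_light: vector from the surface point toward the light
    """
    n_dot_l = max(0.0, dot(normalize(normal), normalize(to_light)))
    return tuple(s * l * n_dot_l for s, l in zip(surface_color, light_color))

# Example: a red surface facing the camera, lit by a white light
# 45 degrees off to the side -> roughly (0.707, 0.141, 0.141).
print(lit_color((1.0, 0.2, 0.2), (1.0, 1.0, 1.0), (0, 0, 1), (1, 0, 1)))
```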

The process is fairly straightforward. First, the standard scene is rendered to a render target cleared to black, with no special processing. Then the scene is rendered a second time to a second render target, but this time the normal map for each image, if it has one, is drawn in place of the standard image. Normal maps encode the surface normal, or the direction the surface faces, at each pixel of an image; in most implementations, the X, Y, and Z components of the normal vector are stored as the R, G, and B components of the pixel's color. This can be seen toward the end of the above video. The second render target is cleared to RGB(128, 128, 255), which decodes to the normal (0, 0, 1), pointing straight out of the screen, so images that do not have normal maps are lit as flat planes. Finally, both of these renders are passed to a shader that calculates the final color of each pixel using the colors from the first image, the normals from the second, and the locations and colors of all the lights in the scene.
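To make that combine step concrete, here's a hedged Python sketch of what the final shader pass does for a single pixel with a single point light. The decode convention matches the RGB encoding described above, but the function names and example values are my own:

```python
import math

def decode_normal(r, g, b):
    """Map an RGB-encoded normal from [0, 255] back into [-1, 1] per axis.

    Note that the "flat" clear color RGB(128, 128, 255) decodes to roughly
    (0, 0, 1): a normal pointing straight out of the screen, which is why
    sprites without normal maps light up as flat planes.
    """
    v = (r / 127.5 - 1.0, g / 127.5 - 1.0, b / 127.5 - 1.0)
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def shade_pixel(diffuse_rgb, normal_rgb, pixel_pos, light_pos, light_rgb):
    """Combine one pixel of the color pass with one pixel of the normal pass."""
    normal = decode_normal(*normal_rgb)
    to_light = tuple(l - p for l, p in zip(light_pos, pixel_pos))
    length = math.sqrt(sum(c * c for c in to_light)) or 1.0
    n_dot_l = max(0.0, sum(n * c / length for n, c in zip(normal, to_light)))
    return tuple(d * l * n_dot_l for d, l in zip(diffuse_rgb, light_rgb))

# A pixel with the flat clear color, lit by a white light directly in front
# of the screen plane at the same XY position -> roughly (0.5, 0.5, 0.5).
print(shade_pixel((0.5, 0.5, 0.5), (128, 128, 255),
                  (100.0, 100.0, 0.0), (100.0, 100.0, 50.0), (1.0, 1.0, 1.0)))
```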

I plan to post the code from this demo soon, once I've added ambient light, directional light, and multiple point lights, so look out for it!
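As a rough preview of how those extensions compose (purely my own sketch, not the forthcoming code), per-pixel lighting with several lights is just a sum of contributions: an ambient term plus one diffuse term per point light, clamped at the end.

```python
import math

def shade_multi(diffuse_rgb, normal, pixel_pos, ambient_rgb, point_lights):
    """Ambient light plus the summed diffuse term of several point lights.

    normal: decoded unit normal for this pixel
    point_lights: list of (position, color) pairs
    Channels are clamped to [0, 1] at the end.
    """
    total = [a * d for a, d in zip(ambient_rgb, diffuse_rgb)]
    for light_pos, light_rgb in point_lights:
        to_light = tuple(l - p for l, p in zip(light_pos, pixel_pos))
        length = math.sqrt(sum(c * c for c in to_light)) or 1.0
        n_dot_l = max(0.0, sum(n * c / length for n, c in zip(normal, to_light)))
        for i in range(3):
            total[i] += diffuse_rgb[i] * light_rgb[i] * n_dot_l
    return tuple(min(1.0, c) for c in total)
```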



