Blog


How I make infinite procedural swords

January 26th, 2020 (edited November 3rd, 2022)

If you haven't seen my procedural sword generator, you can check it out at that link.

Four randomly-generated sword sprites.

This is a custom algorithm written specifically to produce swords - not using any kind of generic machine learning. I'm pretty happy with the amount of variety it can produce - each piece (the blade shape, the crossguard, the hilt, and the pommel) is generated from scratch from a variety of randomized parameters. So how does it work?

Process Note

I usually start a procgen project by trying to draw the thing I'm making by hand. This helps me analyze how it's made - in particular, how colors combine and where shadows and highlights go. After a quick web search for "pixel art sword" and some experimentation and iteration, I came up with this:

Simple, hand-drawn pixel art sword sprite.

Pretty basic, but it helped me figure out how to color the blade (a key part of the art I was unsure about) with a light half and a dark half, a gradient that starts dark at the hilt and runs up the blade, and a rim of very light pixels to hint at "sharpness".

The Algorithm

You can follow along in the Javascript source code (drawRandomBlade function) if you'd like to see more implementation details.

Off the top, I randomize and calculate a bunch of values within ranges I set (and tweaked over time), such as:

  • The pommel, hilt, crossguard, and blade lengths
  • The blade's start width and taper
  • The blade's chance to acquire a "jog" or a "curve", and the max magnitude of those
  • Parameters for a cosine wave that can vary the blade's width
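As a concrete sketch, the parameter setup might look something like this. The names and ranges here are my own illustrative guesses, not the values from the actual generator:

```javascript
// Hypothetical sketch of the parameter randomization described above.
// All names and ranges are illustrative assumptions.
function randRange(min, max) {
  return min + Math.random() * (max - min);
}

function randomBladeParams() {
  return {
    bladeLength: Math.floor(randRange(20, 48)), // in pixels
    startWidth: randRange(1.5, 4),
    taper: randRange(0.0, 0.06),                // width lost per step
    jogChance: randRange(0, 0.05),              // per-step chance of a sharp angle
    curveChance: randRange(0, 0.08),            // per-step chance to acquire a curve
    maxCurve: randRange(0, 0.03),               // max curvature, radians per step
    // cosine wave that can vary the blade's width along its length
    waveAmplitude: randRange(0, 0.8),
    wavePeriod: randRange(6, 16),
  };
}
```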

Next, I generate the shape of the blade. I start where the top of the crossguard is going to be with a forward angle of 45 degrees, then step forward one pixel at a time. Each time, I push the point into an array, along with some metadata like the current direction, how far along the blade it is, and the width at that point (which can vary based on the cosine wave I mentioned before). After recording the point, I randomly check to see if the blade should "jog", making a sharp angle, or acquire a curve. Then, move on to the next point.
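The walk above might be sketched like this. The parameter names, jog magnitude, and coordinate conventions are assumptions for illustration, not the actual drawRandomBlade code:

```javascript
function randRange(min, max) {
  return min + Math.random() * (max - min);
}

// Illustrative sketch of the point walk: start at the top of the crossguard
// at a 45-degree forward angle, step one pixel at a time, record metadata,
// and occasionally jog or acquire a curve.
function generateBladePoints(params) {
  const points = [];
  let x = 0, y = 0;
  let angle = Math.PI / 4;  // 45-degree forward angle
  let curve = 0;            // current curvature, radians per step
  for (let i = 0; i < params.bladeLength; i++) {
    const t = i / params.bladeLength;  // how far along the blade we are
    const width = params.startWidth
      - params.taper * i
      + params.waveAmplitude * Math.cos((2 * Math.PI * i) / params.wavePeriod);
    points.push({ x, y, angle, t, width: Math.max(width, 0.5) });
    if (Math.random() < params.jogChance) {
      angle += (Math.random() < 0.5 ? 1 : -1) * (Math.PI / 8);  // sharp jog
    } else if (Math.random() < params.curveChance) {
      curve = randRange(-params.maxCurve, params.maxCurve);     // acquire a curve
    }
    angle += curve;
    x += Math.cos(angle);
    y -= Math.sin(angle);  // screen y grows downward; the blade runs up
  }
  return points;
}
```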

A curved line showing the generated sword shape.

Now I actually render the blade. Walking along the control points drawing segments is intuitive, but it seemed tricky to make sure the space between each segment gets filled in properly. So instead, I actually iterate over every pixel in the image. I determine which control point is closest to that pixel, then check the distance to it. If it's within the blade's radius, we'll draw it.
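A brute-force version of that per-pixel pass could look like the following. It's O(pixels × points), which is cheap at sprite sizes; the point format is assumed from the walk described above:

```javascript
// Illustrative sketch: for each pixel, find the nearest control point, and
// fill the pixel if it lies within that point's blade radius.
function rasterizeBlade(points, width, height) {
  const filled = [];  // { x, y, point, dist } for each filled pixel
  for (let py = 0; py < height; py++) {
    for (let px = 0; px < width; px++) {
      let best = null, bestDist = Infinity;
      for (const p of points) {
        const d = Math.hypot(px - p.x, py - p.y);
        if (d < bestDist) { bestDist = d; best = p; }
      }
      if (best && bestDist <= best.width) {
        filled.push({ x: px, y: py, point: best, dist: bestDist });
      }
    }
  }
  return filled;
}
```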

Color: here's how I decide what color a blade pixel will be:

  1. In the point metadata, I previously saved a normal vector that points to the "left" of the control point. I can do a dot product between that and the vector between the pixel and the control point to figure out if it's on the left or right side of the blade. I pick the light color or the dark color based on that.
  2. Darken the color based on how far along the blade the control point is (another value I stored in the metadata).
  3. If the pixel is very close to the edge of the blade (i.e. the distance between it and the control point is close to the blade width), I blend in the very light "edge" color.

For most of these operations, I'm using linear interpolation to blend two different colors together.
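Steps 1 through 3, plus the lerp, might be sketched like this. The colors, the darkening factor, and the edge threshold are illustrative assumptions:

```javascript
// Linear interpolation between two [r, g, b] colors.
function lerpColor(a, b, t) {
  return [0, 1, 2].map(i => a[i] + (b[i] - a[i]) * t);
}

// Illustrative sketch of the per-pixel color decision.
function bladePixelColor(pixel, point, colors) {
  // 1. Dot the point's stored "left" normal with the pixel-to-point vector
  //    to decide which side of the blade we're on.
  const dx = pixel.x - point.x, dy = pixel.y - point.y;
  const side = dx * point.normalX + dy * point.normalY;
  let color = side >= 0 ? colors.light : colors.dark;
  // 2. Darken based on how far along the blade the control point is (point.t).
  color = lerpColor(color, [0, 0, 0], 0.4 * point.t);
  // 3. Near the edge (distance close to the blade width), blend the edge color.
  const edgeness = pixel.dist / point.width;
  if (edgeness > 0.75) {
    color = lerpColor(color, colors.edge, (edgeness - 0.75) / 0.25);
  }
  return color.map(Math.round);
}
```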

Only the blade of a random sword.
It's not the same sword, sorry, I don't have seeding yet.

Next, it's on to the hilt. This is much simpler: I generate another cosine wave to control the lines of the grip, a width, and a color. When randomizing colors, I usually work in HSV and convert to RGB before drawing, because I'm not so interested in controlling how red or green or blue the color is, but much more interested in controlling how bright (V) and how colorful (S) it is. In this case I make sure the color has a high V so it has room to get darker to make shadows.
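The HSV-to-RGB conversion itself is standard; a version of it, with a hypothetical high-V hilt color as an example, might look like:

```javascript
// Standard HSV-to-RGB conversion: h in [0, 360), s and v in [0, 1];
// returns [r, g, b] in [0, 255].
function hsvToRgb(h, s, v) {
  const c = v * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const m = v - c;
  let rgb;
  if (h < 60) rgb = [c, x, 0];
  else if (h < 120) rgb = [x, c, 0];
  else if (h < 180) rgb = [0, c, x];
  else if (h < 240) rgb = [0, x, c];
  else if (h < 300) rgb = [x, 0, c];
  else rgb = [c, 0, x];
  return rgb.map(ch => Math.round((ch + m) * 255));
}

// A hilt color with high V leaves room to darken for shadows
// (hue/saturation here are arbitrary example values):
const hiltColor = hsvToRgb(30, 0.7, 0.9);
```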

Then I just walk along the hilt drawing slices. There is some rounding trickery to make sure I get all the pixels filled in nicely. The color of each pixel is based on the value of a cosine wave, and darkened somewhat toward the right side.

Now the crossguard. The process for this is nearly the same as that for the blade - just in a different direction. I generate two different curves for the left and right parts, but with a high probability I just discard one of them and use the other, mirrored.

Finally, the pommel. I was getting itchy to share my work at this point, so I didn't do too much here. Just generate a random radius, and draw a circle in the same color as the crossguard that's shaded darker to the bottom-right and lighter to the top-left.

Oh, and the black border is added here at the very end. Any pixel that's empty or at the edge of the canvas, and orthogonally adjacent to a filled pixel, gets filled in black.
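That outline pass might be sketched like this (only the empty-pixel case is handled here; the canvas-edge case is analogous, and the image representation is an assumption):

```javascript
// Illustrative sketch of the border pass: any empty pixel orthogonally
// adjacent to a filled pixel gets filled black. `image` is a 2D array
// of [r, g, b] colors or null for empty.
function addBlackBorder(image) {
  const h = image.length, w = image[0].length;
  const isFilled = (x, y) =>
    x >= 0 && x < w && y >= 0 && y < h && image[y][x] !== null;
  const border = [];
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (image[y][x] !== null) continue;
      if (isFilled(x - 1, y) || isFilled(x + 1, y) ||
          isFilled(x, y - 1) || isFilled(x, y + 1)) {
        border.push([x, y]);
      }
    }
  }
  // Fill after scanning so new border pixels don't cascade outward.
  for (const [x, y] of border) image[y][x] = [0, 0, 0];
  return image;
}
```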

I hope that gives you some interesting ideas! Since you made it through all that, here's a gif of the process:

An animation of a sword being drawn pixel-by-pixel.




ASCII Art Generator

March 26th, 2018 (edited November 3rd, 2022)

A few years ago, I wrote a program that could produce ASCII art from raster input images. I was going to use it to generate paperdoll images for the equipment screen in an ASCII roguelike game. It worked great, but it was written using XNA and the images to convert had to be set at compile-time. Ick!

Now, I've finished converting the program to Javascript/HTML5, and made a number of other improvements. You can try it right here!

http://brianmacintosh.com/asciiart/

ASCII Art generator sample

You can find a number of ASCII art generators on the internet, but they usually operate simply by selecting characters with more or less "ink" based on the brightness of each character-sized cell in the image. This method requires pretty large output sizes for the image to be recognizable. My generator uses edge detection to find characters that actually follow the lines of the input image, which holds up a lot better at smaller sizes. The downside is that noisier images (such as photographs) can produce noisy output that requires a lot of cleanup, though there are some steps that try to mitigate this.

The source code is under the GPL (3.0), and images produced with the page are free to use. You can visualize what each step looks like by adjusting the Debug Stage in the Advanced settings, and there are a number of sliders to play with to tweak the results. Here is an overview of the steps in the algorithm.

  1. Greyscale the input image.
  2. Gaussian blur the image to reduce noise.
  3. Use Sobel edge detection to produce a greyscale image representing the edges present.
  4. Threshold the edge-detected image, leaving us with a black image with white lines.
  5. (Optional) Dilate the image, thickening the edges.
  6. (Optional) Erode the image, reducing the edges. When used in combination with equivalent dilation, this can help reduce noise.
  7. Blur the line image. This makes it easier for us to match up characters in the next step when the best match doesn't precisely line up with the grid.
  8. Split the image into a letter-sized grid. For each grid cell, overlay ("convolve" is the technical term) each ASCII character over the content. Whichever character best matches the content of that cell is the one we'll use. We also try the characters at small offsets to try to catch, say, a vertical line that doesn't land quite in the middle of a cell.
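The final matching step might be sketched like this. The glyph bitmaps and the scoring function (reward ink that lands on edges, penalize ink on empty background) are illustrative assumptions, not the generator's actual code:

```javascript
// Score one glyph bitmap against one cell of the edge image, both given as
// flat row-major greyscale arrays with values in [0, 1], at a small offset.
function scoreGlyph(cell, glyph, w, h, ox, oy) {
  let score = 0;
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const cx = x + ox, cy = y + oy;
      const c = (cx >= 0 && cx < w && cy >= 0 && cy < h) ? cell[cy * w + cx] : 0;
      // Ink over an edge pixel helps; ink over empty background hurts.
      score += glyph[y * w + x] * (2 * c - 1);
    }
  }
  return score;
}

// Try every glyph at every small offset; keep the best match for the cell.
function bestGlyphForCell(cell, glyphs, w, h) {
  let best = null, bestScore = -Infinity;
  for (const [ch, bitmap] of Object.entries(glyphs)) {
    for (let oy = -1; oy <= 1; oy++) {
      for (let ox = -1; ox <= 1; ox++) {
        const s = scoreGlyph(cell, bitmap, w, h, ox, oy);
        if (s > bestScore) { bestScore = s; best = ch; }
      }
    }
  }
  return best;
}
```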

ASCII Art steps: original image, Sobel edge detection, overlaid ASCII characters




Side Project: Random Music Generation

December 10th, 2012

This week's random side project was inspired by my friend Bryan Ploof and my Music and Technology class. We've been discussing music generated by computers with little or no human influence, including attempts to map anything from fractals to sorting algorithms to musical notes. Michael Matthews gave a lecture on his research in creating music using cellular automata. Cellular automata are sets of simple rules that define how a grid of "cells" evolves over time. They provide very interesting possibilities in the generation of music because their rules often generate musically interesting patterns and progressions. Michael's approach was to use a one-dimensional automaton to control the pitches and timbres of sound available to a player, who modulated the sound directly through motions picked up by a webcam.

My goal was to make a system that could act autonomously to create a unified tune with a strong melody. My approach was to create three separate automata to control different aspects of the music. One controls the rhythm of the song by selecting the timing of each note (quarter, eighth, etc). The other two control the notes that are played: one selects a chord from a pre-defined set, and the other selects which notes from the chord are activated.
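The original project is C++, but the note-selection idea can be sketched briefly in Javascript. This uses an elementary one-dimensional automaton (rule 90 here is an arbitrary illustrative choice, not necessarily what the project uses) whose live cells gate which tones of the current chord are played:

```javascript
// One step of an elementary 1D cellular automaton with wrap-around edges.
// `rule` is the standard Wolfram rule number (0-255).
function stepAutomaton(cells, rule) {
  const n = cells.length;
  return cells.map((_, i) => {
    const left = cells[(i - 1 + n) % n];
    const center = cells[i];
    const right = cells[(i + 1) % n];
    const pattern = (left << 2) | (center << 1) | right;  // 0..7
    return (rule >> pattern) & 1;
  });
}

// Each live cell activates one chord tone (cells past the chord wrap around).
function activeNotes(cells, chord) {
  return cells.flatMap((c, i) => (c ? [chord[i % chord.length]] : []));
}

// Example: evolve a one-hot state and pick notes from a C major chord.
let state = [0, 0, 0, 1, 0, 0, 0, 0];
state = stepAutomaton(state, 90);
const notes = activeNotes(state, ["C4", "E4", "G4"]);
```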

Here's a sample! This video uses a few chords selected from the Star Wars theme to generate a tune.

An executable and the full C++ source are available on my code page. Future plans for the project may include replacing the chord-selection mechanism with a Markov chain to make it more musically coherent, and implementing samples and ADSR so the notes can be things other than sine and square waves.



