Resurrected Entertainment

Archive for the 'Programming' category

The Revolutionary Guide to Bitmapped Graphics

December 29, 2009

This is another book from my library that I have decided to look back on to see if there are any useful tidbits for programmers today. As with most technical books more than ten years old, there is usually an abundance of information about specific technologies which are no longer in popular use, or which are still present in one form or another but whose means of access have changed dramatically. I personally believe that many of these books can give the novice programmer a background not taught in universities and colleges, and will certainly give them an edge when working on limited or older machines.

The book covers the video hardware of the period and delves deep into the programmatic underpinnings of accessing the display and creating custom video modes. I found some of the discussions noteworthy, but if you really want a thorough explanation, you may want to investigate the Zen of Graphics Programming or the Graphics Programming Black Book. It also includes a brief assembly language primer, which is typical for these books, since many of the routines were coded in assembly. The primer is short but may be a nice refresher for those who haven’t gotten their hands dirty in a couple of years.

I’ve made a list of what is still useful for work you may be doing today – unless you’re one of the lucky few who get to maintain software written in 1994. Your mileage will vary, as some of the techniques are really just short introductions to much larger fields like digital image processing (DIP) and morphing. It even has a short introduction to 3D graphics, which seems to have been slapped on at the end because the publisher wanted “something on 3D” to put on the cover.

  • It provided color space introductions, conventions, and conversions for the following spaces: CIE, CMY, CMYK, HSV, HLS, YIQ, and RGB. Most of the conversions go both ways (to and from RGB space), although CMY/K conversion calculations are only provided from RGB space.
  • Dithering and half-toning, followed by a chapter on printing. I think the authors mentioned Floyd–Steinberg in there somewhere, but it wasn’t a full discussion.
  • Fading in the YIQ and HLS color spaces. I’m not sure why they didn’t provide one for the RGB space, but it could very well be on the bundled CD-ROM.
  • It introduces the reader to a few algorithms for primitive shape drawing and clipping, like Bresenham line drawing and Sutherland-Cohen clipping. It also includes discussions and examples for ellipses, filled polygons, and B-spline curves.
  • Extensive discussions on graphics file formats for GIF, JPEG, TGA, PCX, and DIB, although these tend to be higher-level than what would be useful for someone implementing a decoder for any one of these formats (with the possible exception of PCX). Associated algorithms like LZW and RLE are also explained, as they are used by encoders of these formats.
  • The topic of fractals and chaotic systems was a little out of place, but was a little more extensive than the chapter on 3D. It explains the concept of an L-system fractal and even provides a generator for it; when supplied with a configuration file, it can produce fractals like the von Koch curve. It briefly touches on the Harter-Heighway dragon fractal and introduces the Mandelbrot and Julia sets, but doesn’t delve into chaos theory, even though I’m sure one of the authors desperately wanted to.
  • Related to the discussion of fractals was the section on generated landscapes via the midpoint displacement method. While not a landscape per se, the authors digressed a bit to talk about cloud generation as well.
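The RGB-to-CMYK direction that the book covers for its color space material reduces to a few lines of arithmetic. Here is a minimal sketch in Python of the standard textbook formulas (my own illustration, not the book’s code):

```python
def rgb_to_cmyk(r, g, b):
    """Convert RGB components in [0, 1] to CMYK using the standard formulas."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)          # amount of black shared by all three inks
    if k == 1.0:              # pure black: avoid division by zero below
        return 0.0, 0.0, 0.0, 1.0
    # Remove the black component from each channel and rescale
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)
```

Dropping the `k` extraction (and the final rescale) gives the simpler RGB-to-CMY conversion the book also presents.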
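Bresenham line drawing, one of the shape routines mentioned above, is compact enough to sketch here. This is the common integer-error formulation in Python rather than the book’s own listing:

```python
def bresenham(x0, y0, x1, y1):
    """Walk a line from (x0, y0) to (x1, y1) using only integer arithmetic.

    Returns the list of pixels touched; works in all octants.
    """
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                 # running error term
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:              # error says: step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:              # error says: step vertically
            err += dx
            y0 += sy
    return points
```

The appeal on 1990s hardware was exactly what this shows: no floating point, no multiplication, just adds and compares in the inner loop.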
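The midpoint displacement method behind those generated landscapes is easiest to see in one dimension: repeatedly insert midpoints between neighbouring samples and nudge each by a random offset whose range shrinks at every level. A small sketch, with names and parameters of my own choosing:

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    """Generate a 1-D height profile by recursive midpoint displacement.

    Returns a list of 2**depth + 1 heights running from `left` to `right`.
    Halving the displacement range at each level is what gives the
    profile its self-similar, mountain-ridge character.
    """
    rng = rng or random.Random()
    heights = [left, right]
    spread = abs(right - left) + 1.0
    for _ in range(depth):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            new += [a, mid]
        new.append(heights[-1])
        heights = new
        spread *= roughness       # smaller bumps at finer scales
    return heights
```

The 2-D version the book applies to terrain (the diamond-square variant) follows the same idea, displacing the midpoints of squares and diamonds instead of segments.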

The book finally managed to get around to the reason I bought it in the first place many years ago: the all too brief chapter on DIP techniques. It quickly introduces and provides code for algorithms like the Laplace filter, as well as popular effects like emboss, blur, diffuse, and interpolation. The treatment is very light, so the reader will not walk away with a solid understanding of any of the example code, other than trivial effects like pixelate or crystallize.
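Most of the effects named above (emboss and blur among them, and the Laplace filter too) boil down to sweeping a small convolution kernel across the image. A minimal pure-Python sketch on a grayscale image, using a sum-to-zero emboss kernel with a mid-grey bias; this is my own illustration, not the book’s code:

```python
def convolve3x3(image, kernel, bias=0):
    """Apply a 3x3 kernel to a grayscale image (a list of rows of ints).

    Edge pixels are left untouched for simplicity; results are clamped
    to the displayable 0-255 range.  A sum-to-zero emboss kernel plus a
    bias of 128 gives the classic grey relief look; a kernel of all 1/9
    is a box blur.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = bias
            for ky in (-1, 0, 1):
                for kx in (-1, 0, 1):
                    acc += kernel[ky + 1][kx + 1] * image[y + ky][x + kx]
            out[y][x] = max(0, min(255, int(acc)))
    return out

# Emboss: opposing weights across the diagonal; flat regions land on the bias.
EMBOSS = [[-1, -1, 0],
          [-1,  0, 1],
          [ 0,  1, 1]]
```

Swapping in a Laplacian kernel such as `[[0, 1, 0], [1, -4, 1], [0, 1, 0]]` (with no bias) turns the same loop into an edge detector.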

Racing the Beam

May 16, 2009

I just finished another great book the other day, entitled Racing the Beam: The Atari Video Computer System by Montfort and Bogost. It’s an insider’s look at some of the development challenges and solutions involved in writing games for the Atari VCS. This is a unique machine, often considered one of the most difficult for a programmer to cut their teeth on. With 128 bytes of RAM and an average ROM size of 2, 4, or 8K, you must fight tooth and nail for every byte used by your software. What lengths do some programmers go to in order to save bytes? Ever thought about using the same byte for both an opcode and a piece of data? Ever thought about feeding the opcodes and operands found in the code segment of your program into a pseudo-random number generator, or using them to produce a rendering effect, because you didn’t have the spare space in ROM to place this stuff in the data segment? Well, neither did I until I read this text. Along with little gems like these, the book has a number of interesting tips and tricks on the how and why of software development for the Atari 2600.

The book centres itself around the idea of a platform, and how the constraints and peculiarities of a system can affect how a game is presented. Game adaptation, especially when you’re trying to port software from one hardware architecture to another, is a very important topic when you’re trying to maintain the look or feel of a game. Sometimes neither is possible, and you’re forced to go your own way and come up with something completely different.

A word of caution, though. This book will not teach you how to write software for the 2600 system. It is not a technical reference by any means, nor does it advertise itself as one. However, I would heartily recommend this title to anyone thinking about producing a game for that system, or those of us with an inner geek needing to be satisfied.

I love the idea behind this series of “platform” books, as I have often wished for such books to be written and have even contemplated writing one myself just to fill the void. One of the most useful parts of this book is the reference section, which can lead you to all sorts of new and interesting articles, books, or projects. I do hope the next book contains a bit more technical detail while keeping the various bits of historical data and interesting character references which really help to tie the why and the how of the topics together.

Computer Virus Research

December 21, 2008

As part of being a well-rounded programmer, I dabble in all sorts of technical things. One of my areas of interest is computer virus research. In the last thirty years, I have witnessed a large number of changes to this industry, and I find myself compelled to write a little bit about it today after reading about a couple of courses offered at the University of Calgary.

As it exists today, computer virus defense is a wide collection of software programs and support networks offered to companies and users for the sole purpose of protecting their data from loss, damage, or theft by a myriad of small computer programs called computer viruses. These programs must have the ability to replicate (either as a copy of themselves or as an enhanced version), and they often carry a payload. The means by which a computer virus can replicate are complicated and often involve details of the operating system. In addition to preventing virus outbreaks, anti-virus software is also used to help prevent service outages and ensure a general level of stability. In other words, these companies are selling security, or at least one form of security, since security in general is a very large net which cannot be cast by only one program. As an aside, please be aware of the tools you are using for anti-virus protection. With some research and a little education, it’s often not necessary to purchase these programs in the first place.

I am currently reading Peter Szor’s book entitled The Art of Computer Virus Research and Defense (ISBN-10: 0321304543). I am almost finished with the text and I have found it incredibly informative, filled with illustrations and summaries of all sorts of computer virus deployment scenarios, technical information about individual strains, and historical notes on how the programs evolved and the mistakes made by both researchers and virus writers.

Even though I have the skills and the opportunities to do so, I have never written a computer virus for the purposes of deployment, nor do I ever wish to, but I can tell you that writing an original computer virus is challenging work; writing a simple virus is easy. Isolating, debugging, and analyzing a virus is also interesting work, albeit somewhat more tedious. Both jobs require similar skill sets: detailed knowledge of, and low-level access to, a specific system.

I used to posit that the best virus writers would be the people who have taken it upon themselves to write the anti-virus software. After all, the best way to ensure the success of a business built on computer virus defense is to construct viruses that can be easily and quickly disarmed by your software. Much to the disappointment of conspiracy theorists, this is probably not the case, since fellow researchers would easily link a premature inoculation with a future virus outbreak if it happened too often to be mere coincidence. However, if your business were based on quick and successful virus resolutions, then timely outbreaks followed by timely cures would seem to solidify the business model. Personally, I think anti-virus researchers are kept busy enough with “naturally” occurring strains that no manual jump start of the industry is necessary. That could change as users and technology platforms become more advanced, although the more probable route is the disappearance of the anti-virus industry altogether; we live in a messy world, and there may be opportunities for those wanting to leave their mark, even in the face of futuristic technology gambits.

Computer virus writers are plagued, somewhat ironically, by numerous problems with deploying their masterpiece. A computer virus can be written generically so that it can spread to a wider variety of hosts, or it can be written for a specific environment, which can include requirements on the hardware or software being used. Dependencies on software libraries, operating system components, hardware drivers, and even specific types of hard disks are all liabilities and advantages for a virus. They are liabilities because dependencies limit the scope of infection, so the virus spreads more slowly; at the same time, they often enable the virus to replicate, since it may be exploiting known vulnerabilities or opportunities within these pieces to deliver its payload or as a means to spread.

Virus research, writing, and defense is a fascinating topic. Unfortunately, I find the pomposity, and to some degree the absurdity, in various branches of the industry to be laughable and a little scary at times. In case you haven’t heard, the University of Calgary is offering a course on computer virus research. While I find this to be a refreshing take on education, my hopes were quickly dashed when I read the requirements and the Course Lab Layout (warning: PDF monster). Do they think their students are secret agents working in a top-secret laboratory? Of course they do; why else would there be security cameras installed in the room, and why else would they restrict access to the course syllabus? Well, I’ve got news for the committee who approved the layout of the lab, and who probably approves the students who can attend the course: computer viruses are just pieces of software. That’s right, they’re just software. They don’t have artificially intelligent brains, they can’t get into your computer through the power lines, and they are quite a bit less complicated than your average word processor. This means that any programmer with the desire and a development environment can write a virus, trojan, or any other form of malware. They don’t need to take your course and they don’t need access to your Big Brother Lab.

The absurdity of protecting information which is already publicly available, and has been for decades, makes me want to laugh out loud and strangle someone at the same time. It’s rather disturbing, and I really don’t like the idea of closing doors on knowledge, even if the attempt is futile. The University of Calgary’s computer science department should be ashamed of perpetuating such ignorance within a learning institution, and I am truly disappointed at how bureaucratic such systems have become.

Update 12-29-2008: To respond to a verbal conversation I had with a couple of people: I understand why the university placed the security restrictions in the program; they want to validate the program and make it appear legitimate to the community and their peers. That’s fine, but at the same time, it must be acknowledged that the secret to mounting a successful defense against viral software and Internet-based attacks is shared knowledge and open avenues for information. Understandably, this information will flow both ways, but the virus writer will gain nothing they do not already possess (except the knowledge that we know what they are doing), while the general public may be a little more aware of the problem than they would be without it.

Indeed, using viral kits and small customization programs can make viral programming easy for the layman or immature programmer, but we shouldn’t be locking away information about these techniques or programming practices simply because the result is something undesirable or easy to dispense. There are real opportunities to learn and disseminate this knowledge today, and the bigger the audience, the larger the opportunities for successful anti-viral software and general consumer awareness which will combine to create the most effective vaccine of all: knowledge.

Qt for Games – Niblet #1

July 19, 2008

A few days ago, I decided to experiment with Qt as it applies to game development. You haven’t heard of Qt? Well, it’s an SDK for cross-platform development which has really started to take off over the last couple of years. KDE, a desktop environment for Linux, is based on the Qt library; so is the VOIP application called Skype. There is even a mobile platform called Qtopia for those who are interested in small devices. You can find more information on the Trolltech website.

To begin the project and get the brain juices flowing (after the second cup of coffee, of course), I wanted to come up with a small, interactive demo which would make use of some of the classes related to QGraphicsScene and QGraphicsItem. I also wanted to play around with the layers a little bit and see how suitable they would be for games. The first program is a simple one indeed; in fact, it’s not really a game at all – unless of course you’re really bored and you start to chase invisible dots on your screen. Just run the demo, and you’ll understand what I’m talking about.

I am basing this project on a classic so that I don’t stray too far off topic. MS-DOS 5.0 shipped with a QBasic game called Nibbles. The game play revolves around a worm that you steer through a maze of walls while looking for food. If your worm collides with a wall or itself, you lose one life. The difficulty comes from the worm’s velocity; it gets faster as you advance. The idea is to use that basic game as a model at first and build on it a little as time goes on; this will help to keep the project focused, which can be a real challenge for those who are new to game programming. To get things started, I have decided to dub the codename for the project: Niblet. The real name will be decided, if and when this Qt game comes together.

The first problem is the worm, of course. It needs to snake around and grow to certain lengths when food pellets are ingested (or numbers, as was the case in the original game). I also like the idea of it shrinking in response to an item collected during the progression through a level. Anyway, the rules can be fleshed out later. For now, let’s just get a very basic framework up and running. To compile the little demo, just run qmake and then make if you’re on Linux, or open up the project in Xcode if you’re using Mac OS X; Windows users can open the project in Microsoft Visual Studio. If you’re using Mac OS X Leopard, I suggest you try out Qt 4.4, which will produce a proper project file for Xcode 3.1.

Dynamic Path Demo using Qt

As you can see, very little code is needed when using Qt. If you were to write the same program for Windows using DirectX, or even through the GDI, you would need considerably more code just to handle the application start-up process and surface initialization. But that’s the beauty of these SDKs: they handle a lot of the routine programming so you can concentrate on building your software faster and better.

Building Software with DOSBox?

May 12, 2008

I have written about DOSBox in the past and I have nothing but good things to say about it. It’s a great project whose people continue to push for greater compatibility, speed, and functionality. Over the weekend, I tried to use DOSBox as a development environment for compiling DOS applications using the DJGPP 32-bit compiler and associated development tools such as RHIDE, Allegro, the GRX libraries, etc.

Sadly, it didn’t work out so well, since a number of common tools simply didn’t function well in v0.72. I’m not terribly disappointed, since the emulator takes forever to compile a project in its environment and wouldn’t be suitable for long-term development. I’m raising the issues here in case anyone tries to use DOSBox in the same way.

First, let’s talk about what did work: DJGPP, for starters. It works just fine, albeit the compilation and linking process does take a while. Some might argue that this is all you need to develop a project, and while that might technically be true, it certainly doesn’t work out for anyone trying to build something more than a “Hello World!” application.

So, what didn’t work? Well, I tried four different editors: edit (the one which ships with your MS-DOS operating system), TDE (Thomson-Davis Editor – one of my favourites), MEL (Multi-Edit Lite – another great little editor), and RHIDE (the development environment which you can install alongside DJGPP). I didn’t try VI/VIM, but I might sometime later just because I’m curious.

First out of the gate, Edit simply wasn’t suitable in an environment where tab characters are important: it replaces them with spaces and continues on its way. Too bad, since the editor is simple and to the point. It also lacks one of the most basic development features, syntax highlighting, so it really isn’t a good solution anyway.

DJGPP makefiles will not tolerate spaces where there should be tab characters. No, make isn’t being quirky; the tab requirement is in the POSIX standard, after all, but that doesn’t make it any less annoying. So the solution is to switch to an editor which respects tab characters and does not try to convert them to nasty spaces. I jumped to a more sophisticated editor: TDE. Sadly, I was let down again, due to a bug (it could be within TDE or DOSBox) where the cursor was not visible while using the editor. It was a little like fumbling in the dark trying to find that blasted light switch.

After some research I found MEL, so I installed it and tried it out. The editor was well laid out, the cursor showed up, and it respected my tab characters. Super, finally something I can use! Alas, I soon discovered that our relationship would never last. You see, MEL doesn’t use tab characters out of the box; you need to turn the option on through one of the configuration dialogs. No problem, right? Well, it doesn’t seem to remember the setting once you save it, so you need to set it every… single… time you load the editor. Hrrmph.

“Well, it’s time to bring out the big guns!” I thought. I had used RHIDE in previous development projects and found the environment quite enjoyable for the larger ones. I typed in the command to fire it up and… whammo! Congratulations, it’s a bouncing baby SEGFAULT. Sigh. I don’t know about you, but four editors is enough experimentation for one day. If you know of an editor which works well and meets my humble requirements, please post it in the comments.

Is anything missing which should be added? Well, you’ll certainly want to add some basic DOS utilities such as XCOPY, DELTREE, ATTRIB, and whatnot. It will make your life much easier. It would also be great for DOSBox to incorporate a configuration reload command, so you don’t need to exit the emulator in order to reload the preferences file. For now, I’m going to use one of my older development boxes for this software project, and stick with DOSBox to run my games.

PGP: Source Code and Internals

April 28, 2008

My copy finally arrived the other day and I am elated. Now, before you make another hasty buying decision based solely on my opinion, and on a single line of text, there are a couple of things worth mentioning about the book. First and foremost, the book lacks a few structural elements, like plot, and little things like paragraphs. Having read about the history of Philip Zimmermann and his struggle with the U.S. government, I already knew this before I purchased the book.

It does contain source code. Lots of code. The entire PGP program, in fact, including the project files. The book was sent to print because the digital privacy laws the government was attempting to enact at the time did not cover the printed page. If all you want is the source code, you can simply find it on the Internet. Looking for a specific version, especially an older one, is more difficult and may be fraught with export restrictions.

The story behind the printing of the book is a fascinating history lesson and one we should all be concerned about, even if you live in another country, since we all know governments are not terribly adept at learning from their mistakes.

QBasic

December 5, 2007

When I mention QBasic to some people, they immediately think I’m talking about QuickBASIC. The two products, however, are a little different. They were both created by Microsoft, but QuickBASIC is basically a superset of QBasic. QBasic has an interpreter and an editor built as one package; I hesitate to call it an IDE, since your projects could only use one module at a time. It was also limited in the amount of memory available to the program and to the editor. I experienced the latter problem only once, while creating a game involving viruses and robots (I didn’t get around to naming it); the editor just started losing lines of code I had written and was behaving erratically. Eventually, I became frustrated and moved on to other, presumably smaller, projects.

QBasic made its first appearance with MS-DOS 5.0. It came with a few example programs and games. One of these games was called Nibbles. I love this simple game, even to this day. It’s a little similar to games like Centipede, although the game itself is far too simple to make a reasonable comparison. The goal for each level is to gobble up the numbers that appear in random locations. Each time one of those numbers gets consumed, your “snake” grows a little longer. You have to avoid running into the walls of the level, which get more complicated as you progress, and you must not run into yourself. As you attain higher levels, your snake becomes faster and faster. The game was never synchronized with the system clock, so if you play it on a machine made today, it moves around so quickly as to render it unplayable.

I have often thought a game like Nibbles would make an excellent game to practice your porting skills on other platforms. It is sufficiently interesting to make the project worthwhile and could be adapted to play well using almost any input device. It could also be rendered using a simple text mode, just like the QBasic version, or you could enhance it in a graphics mode using imagery, vector graphics, etc.
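Whatever platform such a port targets, the heart of the game is a single update step: advance the head one cell, test for collisions, and grow when food is eaten. A minimal sketch in Python, independent of any rendering layer (all names are my own, not from the original QBasic source):

```python
def step(snake, direction, food, walls, width, height):
    """Advance the worm one cell on a grid of width x height.

    `snake` is a list of (x, y) cells, head first.  Returns a tuple
    (new_snake, ate) on a successful move, or None if the worm died by
    leaving the play field, hitting a wall, or running into itself.
    """
    dx, dy = direction
    head = (snake[0][0] + dx, snake[0][1] + dy)
    hx, hy = head
    if not (0 <= hx < width and 0 <= hy < height):
        return None                      # ran off the play field
    # Simplification: moving into the tail cell also counts as death,
    # even though the tail is about to vacate it.
    if head in walls or head in snake:
        return None
    ate = head == food
    body = snake if ate else snake[:-1]  # grow only when food is eaten
    return [head] + body, ate
```

Everything else (level layouts, the speed ramp, rendering in text mode or with sprites) hangs off this one function, which is what makes the game such a tidy porting exercise.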

QBasic also introduced the concept of functions and other forms of structured programming. GW-BASIC would only remember a line of your program if it was prefixed by a number; unnumbered instructions were executed right away in immediate mode. As you added and removed lines from your code, there were commands to renumber your lines when you ran out of room.

The concept of a function didn’t really exist in GW-BASIC; instead, it allowed you to jump to a particular line number using commands like GOTO or GOSUB. The latter was more like a function, since you could jump to a specific region of code and then return when that code had finished. GW-BASIC also supported one-line functions, which were handy for calculations. Although QBasic could still use line numbers, it encouraged the use of named labels instead. Despite my work with the Amiga’s BASIC, I still preferred the old way, since I had been doing it for so long. It took me a while to adjust to the new program structure at first, so I purchased a new QBasic book after upgrading and essentially dove in head first.

Interrupts would become increasingly important for me in the future, but at the time I knew little about them. QBasic had no direct interface for handling interrupts, but it could handle interrupts used by the system’s timer:

ON TIMER(n) GOSUB MySubRoutine

Having no functionality to manipulate interrupts meant there were no functions to gather information about input devices like the mouse. Despite this seemingly major failing, all was not lost. While QBasic could not handle input from the mouse directly, you could make use of a machine-code subroutine to get the information you needed, like position and button states. You could use techniques like this to gather information from other devices as well.

QBasic also introduced me to one of the greatest time-saving features ever created: the debugger. A debugger can be a separate program or a feature within an IDE which allows you to trace through your program and examine variables and addressable data as the software executes. One of the core features of a debugger is the ability to set a breakpoint at a specific addressable location that corresponds to a precise line within your source code. Before debugging, I was tracing through my program by hand and using PRINT commands to dump the contents of a variable. Even today, there are professional programmers who don’t use a debugger, either by choice or lack thereof, and choose to examine how their software operates by sending information to a log file or an output stream of some sort.

GW-BASIC

November 29, 2007

I first discovered this little gem while poking around on my Tandy 1000 RL computer back in 1991. Because I was familiar with various versions of BASIC already, I was able to fire it up and immediately begin writing fairly simple applications. However, there were differences from Atari’s version of BASIC, and no discernible way to figure them out without a book or some other documentation. I poked around on a few of my favourite Bulletin Board Systems (BBSes) and found a small cache of programs written in GW-BASIC. I downloaded each of them, spending all of my download credits in the process. I pored over them line by line and found myself with more questions than answers.

Many new programmers today expect there to be some sort of documentation available when they learn a new technology. It’s an expectation that has evolved over the years. Today, it would be practically unheard of if you couldn’t find some resource describing the software on the Internet, some blurb in a book, or even documentation bundled with the product. However, when I started looking for a book on GW-BASIC (version 3.23, to be exact), it was darn near impossible. The people I spoke with had no understanding of programming, let alone a specific language. The computer sections in stores were spartan compared to the sections you find today. In fact, several years later, I still needed to special-order the books I wanted through the book store; even today, I usually order my titles through Amazon, since a store like Indigo usually doesn’t have them in stock.

I eventually wandered into a local computer store – I was attracted by a demo of Wing Commander II playing on one of their expensive new 486 machines – to ask if they knew where I could get my hands on some material. I spoke with the owner, and he seemed to recall seeing a book on BASIC in the back of the store. He left for a couple of minutes and returned with two books under his arm. One was for the exact version of GW-BASIC I was looking for and the other was a matching book for MS-DOS. I was ecstatic, and I nearly fainted when he gave them to me for free. I don’t remember what I said to the man that day, but I’m sure it wasn’t adequate.

I wrote so many programs in that environment. I think I still have a few of them today on floppy diskette. They were simple at first – word games such as hangman – but it wasn’t long before I discovered how to increase the resolution and colour depth and draw simple graphics. I had already written code for the Atari which made use of pixel-plotting routines and simple geometric shapes. The Atari provided pre-canned routines for drawing shapes, along with a few parameters for style, colour, and shape. GW-BASIC was similar but provided several more commands and options. With a smaller font and a higher resolution, debugging within BASIC became more feasible. I still couldn’t scroll, but at least I could view a larger portion of the code on the screen.

Sound was also possible through the PLAY command or, if you were so inclined, through a custom routine written in assembly language, which could generate all sorts of sounds – even noise or speech. I transcribed several songs from sheet music into their GW-BASIC equivalents. Not terribly exciting, but it did make for some creative demos which seemed to impress onlookers.

Some of the more interesting projects involved controlling external devices like printers. I must have programmed my software to use every conceivable option available on my Star NX-1000 II dot-matrix printer. I could instruct it to use standard type effects like bold and italics when printing text, but I could also program it to print graphical data like images and shapes. I even printed the Mandelbrot set by rendering the fractal to my printer instead of the screen, one pixel at a time.

Perhaps the most interesting device was a robotic arm which was controlled through a series of commands dispatched through the parallel port. I didn’t have access to such a device at home, but my school eventually purchased one, so one of the teachers decided to create a contest for students. Whoever could program the robotic arm first would win a special prize. The task was to pick up a piece of chalk and draw a picture consisting of at least four shapes. It was a fun contest, and I walked away with a pass which granted me as much lab time as I wanted.

Before I upgraded to a new machine and moved to a new version of MS-DOS, I was intimately familiar with nearly every command in that book, even the more obscure ones like CHAIN and PCOPY. The really obscure commands, though, were still a mystery: I didn’t know you could actually call assembly language routines from GW-BASIC using the DEF USR and USR commands until sometime later.
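For reference, the DEF USR mechanism let you point BASIC at a machine-code routine POKEd into memory and call it like a function. A hedged sketch of the pattern; the single opcode byte here (&HCB, a far return) is just a do-nothing stub, and a real routine would follow the interpreter’s conventions for reading its argument.

```basic
10 ' DEF SEG with no argument selects BASIC's data segment
20 DEF SEG
30 ' POKE a trivial machine-code routine into memory: RETF
40 POKE &H1000, &HCB
50 ' Tell BASIC where routine 0 lives, then invoke it
60 DEF USR0 = &H1000
70 X = USR0(0)
```

In practice this was how people escaped the interpreter’s speed limits for things like sound generation and fast screen updates.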

Neverball

September 28, 2007

While I’m waiting for OS installations or code compilations to finish, I’ve been playing a great game on Xandros called Neverball. It’s similar to Monkey Ball, one of my favourites on the GameCube. It’s very polished for an open source game and is essentially complete, aside from minor enhancements and bug fixes from time to time. Its only dependencies on Linux are the most common SDL libraries: the core library, SDL_image, SDL_ttf, and SDL_mixer. It doesn’t compile so well on Intel Macs, but I should get that working pretty soon. I believe there is an Xcode project file available too for those who choose to take the quick and easy path. I may soon walk the path of the dark side, depending on how quickly I can resolve these obtuse linker errors.

Game Development: Introduction

September 18, 2007

There are two sides to every coin. Game development can be hard or game development can be easy. These are relative terms, of course. Just because writing a game can be easy doesn’t mean your grandmother could do it (although I wouldn’t hedge any bets, especially with mine). Personally, I think writing any finished application is an achievement for a programmer working alone. With the help of my colleagues, I have produced several applications which have made it onto store shelves, but I won’t take the credit since my part was just one piece of the puzzle. Of course, some applications cannot be programmed by just one person. Oh sure, one software developer could work for years or decades to achieve their goal, but who has the time and talent required to design and engineer something as complex as Metroid Prime, Photoshop, or the KDE Desktop on their own? It just doesn’t happen in the real world because life inevitably steps in and takes over.

However, all is not lost for the eager programmer setting out to create a game. It all boils down to following a few simple rules and having a few good resources available. For the next couple of weeks, we’ll look at some of the tricks of the trade and present a list of books which I have found very useful while exploring game development. So, without further procrastination, let’s dive right into our first topic.

Game Design – The key difference between a finished game and an unfinished hack is managing expectations. Yes, we would all love to design and build a game like Halo, but that’s not terribly likely and will almost certainly lead to failure. Instead, try to reduce the scope of the project by carefully choosing the characteristics of the game you wish to include. Care must be taken to use only those features which are absolutely necessary. You should spell out how you want your game to work, what features need to be written, and any third party tools you plan to use which can save you time. In essence, you need to write a bit of documentation. Yes, I know, documentation is a bore and some of you may find it difficult. But you don’t need to write a best-selling novel; just outline what you want to do and how you want it to work.

By translating those exciting images and ideas you have into something concrete, you’ll be able to assess your goals and overall design much more easily. It also makes your game much easier to digest for other people. If you wanted to hire an artist, for example, you would otherwise need to describe the concept in fragmented conversations over the phone or via the Internet. It makes the artist’s job so much easier if you have the design for the game already completed. In fact, most artists (the smart ones anyway) won’t accept a contract without a working game design in place. Without this document, you would be unable to tell the artist which assets need to be produced now and which can wait until later. If you know all of the work which must be completed before the artist draws a single pixel, he/she will be able to commit to a schedule and a price.

Over the last few years, several good books have been written on designing (and managing) game development projects. However, these books tend to focus on many details which are only relevant or important to large scale projects. Naturally, there are excerpts which you will find useful, but don’t get overwhelmed by the sheer volume of information. Generally, you don’t need to concern yourself with all of those details; the object of this lesson is to finish a fun game you want to create in your spare time. If you followed some of those books to the letter, it would take ages to complete a project, and you may find your interest beginning to drift after a year or two.

Project Management – When you are creating a game by yourself and for free, I don’t recommend setting a schedule. Schedules are for people and companies who must release the game in a timely manner because their economic futures depend on it. The bottom line for you and me, however, is that schedules aren’t very accommodating for new game developers, and they aren’t very much fun when you’re ten weeks behind your target date. If you don’t need to release a game in six months, then why would you care that your line drawing algorithm is taking twice as long to craft as expected?

On the other hand, you could always use the project as a way to enhance your project management skills, which is admirable, but you may find the accuracy of your estimates varies considerably. This is partly due to life, which once again chooses to interfere with your project. As a programmer at work, I can count on having at least five or six hours a day for solid work. At home, my free time varies wildly. If you only consider the cost (in time units) and forget about dates, then you may find your estimates more useful. If you really insist on reading a book to brush up, then I recommend:

  • Joel Spolsky’s book entitled Joel on Software. It’s a compilation of articles for novice programmers and project managers. It has a number of good practical tips which can benefit almost any software project, including those undertaken by hobbyists.