Archive for the 'Programming' category
The first week
January 13, 2019

For some people, the hardest part of any project is just getting started. There are many reasons why this can be the case, but for me, here are the two areas where I normally get stuck:
- Priorities – I have a family. I have many hobbies. Deciding what to focus on in the latter bucket and how to juggle that with the former is a challenge, and perhaps it is my biggest challenge when it comes to starting a new project. Deciding to focus on one thing at the expense of other things can be very difficult, especially when your project gets hard, or when a bright shiny object comes into view. Keeping your goals realistic is probably the single best rule worth following. Plan for a singular, specific goal, and then try to manage your time around that. For example, today my goal was to get my development environment set up and fix a sample project to help get me started on my journey of building a small game in Love2D for my kids. As with any game, there are many steps, but I am not concerned about those at this point.
- Analysis Paralysis – It can be difficult to choose the right tools and technologies for a project. The reason behind the analysis is clear, but the cause of the paralysis is usually that I don’t know enough about the tools, technologies, or the project itself to make a decision. In this case, I suggest just jumping in and trying to decide on one or two evaluation points, where you can take a step back and apply what you have learned against what you have yet to build. Do you see any major problems with the tools and technologies you have chosen going forward? Do you need to make a pivot towards something else?
Categories: Game Development, Programming, Projects, Reflections
No Comments »
DOS Software Development Environment
December 12, 2014

I love writing software for Microsoft’s DOS. I didn’t cut my teeth programming on this platform; that was done on an Atari 800 XL machine. However, it was on this platform that I was first exposed to languages like C and assembly language, which sparked a torrid love affair with programming that lasts to this day. The focus of this post is DOS software development and remote debugging.
If you have done any development for iOS or Android, then you have already been using remote debugging — unless you are some kind of masochist who still clings to device logging even when it is not necessary. The basic concept is that a programmer can walk through the execution of a program on one machine via the debugger client, and trace the execution of that program through a debug server running on another machine.
The really cool part of this technology is that it’s available for all sorts of platforms, including DOS! Using the right tool chain, we can initiate a remote debugging session from one platform (Windows XP in this case), and debug our program on another machine which is running DOS! The client program can even have a relatively competent UI. For this project, the toolset we are going to use is available through the OpenWatcom v1.9 project, and the tools found inside that wonderful package will allow us to write 16-bit or 32-bit DOS applications and debug them on an actual DOS hardware target! In addition, we can apply similar techniques but this time our server can be hosted within a customized DOSBox emulator, which is also really cool since it allows you to debug your code more easily on the road.
The first scenario is the one I prefer, since it is the faster of the two approaches, but before we get into the details of how to set this up, let’s consider some of the broader requirements.
You’ll need two machines for scenario number one. The DOS machine will need to be network enabled, meaning it should have a network interface card and a working packet driver. I would recommend testing your driver out with tools like SSH for DOS, or the PC/TCP networking software originally sold by FTP Software. In order to use the OpenWatcom IDE, you’ll need a Windows machine. I use VirtualBox and a Windows XP Professional installation; my host machine is a MacBook Pro running Mac OS X 10.7.5 with 4 GB of RAM.
The second scenario involves using the same virtual machine configuration, but running the DOSBox emulator within that environment. You will need to use this version of the DOSBox emulator, which has built-in network card emulation. They chose to emulate an NE2000 compatible card for maximum compatibility, and also because the original author of the patch was technically familiar with it. After installation, you’ll need to associate a real network card with the emulated one, and then load up the right packet driver (it comes bundled with the archive).
For reference, the network interface card and the associated packet driver I am using on the DOS machines are listed below:
- D-Link DFE-538TX
These are the steps I have used to initiate a remote debugging session on the DOS machine:
- Using Microsoft’s LAN Manager, I obtain an IP address. For network resolution speed and simplicity, I have configured my router to assign a static IP address using the MAC address of my network card; below are the AUTOEXEC.BAT and CONFIG.SYS configurations for my network
AUTOEXEC.BAT

@REM ==== LANMAN 2.2a == DO NOT MODIFY BETWEEN THESE LINES == LANMAN 2.2a ====
SET PATH=C:\LANMAN.DOS\NETPROG;%PATH%
C:\LANMAN.DOS\DRIVERS\PROTOCOL\TCPIP\UMB.COM
rem - By Windows 98 Network - NET START WORKSTATION
LOAD TCPIP
rem - By Windows 98 Network - NET LOGON michael *
@REM ==== LANMAN 2.2a == DO NOT MODIFY BETWEEN THESE LINES == LANMAN 2.2a ====

CONFIG.SYS

DEVICEHIGH=C:\LANMAN.DOS\DRIVERS\PROTMAN\PROTMAN.DOS /i:C:\LANMAN.DOS
DEVICEHIGH=C:\LANMAN.DOS\DRIVERS\ETHERNET\DLKRTS\DLKRTS.DOS
DEVICEHIGH=C:\LANMAN.DOS\DRIVERS\PROTOCOL\TCPIP\NEMM.DOS
DEVICEHIGH=C:\LANMAN.DOS\DRIVERS\PROTOCOL\TCPIP\TCPDRV.DOS
- Load the D-Link Packet driver
- I load a TSR program, built from a Turbo Assembler module, which can kill the active DOS process. I do this because the TCP server provided with OpenWatcom v1.9 does not exit cleanly all of the time, and will often lock up your machine. In the end, your packet driver may not be able to recover anyway, and you will need to reboot the machine, unless you can find a way to unload it and reinitialize. Incidentally, the packet driver does have a means to unload it, but when I attempt to do so after the process has been killed, it reports that it cannot be unloaded. The irony of the situation will make you laugh too, I am sure.
- I navigate to my OpenWatcom project directory and start the TCP server, which uses the packet driver and the active IP address to offer the service. The service will wait for a client connection; in my case, the client is initiated from my Windows XP virtual machine using the OpenWatcom Windows IDE.
- Ensure that the values for “sockdelay” and “datatimeout” are both “9999”, and make sure the “inactive” value is “0” in your WATTCP.CFG file (a sketch follows this list). Even though the documentation says that a value of “0” for the “datatimeout” field is essentially no timeout, I did not find that to be the case. The symptom of the timeout can be observed when you launch the debug session from the OpenWatcom IDE and see the message “Session started” on your DOS machine, but then the IDE reports that the debug session terminated.
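For reference, here is a minimal sketch of the relevant WATTCP.CFG entries. The address values below are placeholders for my network, and the exact set of keys your build expects may differ slightly:

my_ip       = 192.168.1.50
netmask     = 255.255.255.0
gateway     = 192.168.1.1
sockdelay   = 9999
datatimeout = 9999
inactive    = 0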
These are the steps for the DOSBox emulator running within the Windows XP guest installation:
- Install the special network enabled build of DOSBox mentioned above (see the configuration sketch after this list);
- Fire up the NE2000 packet driver  (c:\NE2000 -p 0x60);
- Start the TCP service
- Note that I configured a static IP address on my router using the Ethernet address reported by the packet driver. You will not be able to ping that address successfully until the TCP server is running in DOSBox. While the process worked, I found the time it took for the session to be established and the delay between debug commands to be monstrously slow (45-90 seconds to establish the connection, for example), which made this solution unusable for me.
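For completeness, the network enabled builds I have seen are configured through an extra section in dosbox.conf. The sketch below only shows the general idea; the key names and default values are assumptions on my part and vary between builds, so check the documentation bundled with the build you download:

[ne2000]
# enable the emulated NE2000 card and bind it to a real host adapter
ne2000=true
nicbase=300
nicirq=3
macaddr=AC:DE:48:88:99:AA
realnic=1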
While working on a project, it can be really useful to create the assets on a modern machine and then automatically deploy them to the DOS machine without needing to perform a lot of extra steps. It can also be useful to have the freedom to edit or tweak the data on the DOS machine without needing to manually synchronize it. The solution which came immediately to my mind was a Windows network share. This is possible in DOS via the Microsoft LAN Manager software product and was discussed in a previous post.
Categories: DOS, Game Development, Programming, Retro
No Comments »
Building Wolfenstein 3D Source Code
June 16, 2014

Way back on Feb 6, 2012, id Software released the source code to Wolfenstein 3D — 20 years after it had been written. The source code release does not come with any support or assets from the originally released game. In fact, id Software is still selling this title on various Internet stores like Steam. I played around with a DOS port of the DOOM source code quite some time ago, but I had never bothered to try to build its ancestral project. Until now!
As it turns out, it’s actually quite straightforward, with only a minor hiccup here and there. The first thing you’ll need is a compiler, that almighty piece of software that transforms your poorly written slop into a form that the operating system can feed to the machine. For this project, the authors settled on Borland C++ v3.0, but the project is 100% compatible with v3.1. I don’t know if more recent compilers from Borland are compatible with the project files, or whether the code in the project produces viable targets with them, so good luck if you decide to make your own roads.
As per the details in the README file, there are a couple of object files you will want to make sure don’t get deleted when you perform a clean within the IDE:
- GAMEPAL.OBJ
- SIGNON.OBJ
You can open up the pre-built project file in the Borland IDE, and after tweaking the locations for the above two files, you should be able to build without any errors. The resulting executable can then be copied into a working test directory where all of the originally released assets are located; I believe my assets were from the 1.2 release.
There are also a few resource files you must have in order for the compiled executable to find all of the right resources. According to legend, the various asset files were pulled from a sprinkling of source formats and assembled into “WL6” resource files. A utility called I-Grab, available alongside the TED5 editor utility, produced header (.H) and assembler (.EQU) definition files from that resource content, which allowed the game to refer to assets by constant indices once the monolithic WL6 resource files were built. The generated comment at the top of those definition files confirms part of that legend.
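As a rough illustration of the idea (the identifiers below are invented for this example and are not the actual names from the Wolfenstein 3D release), an I-Grab style generated header boils down to an enumeration that maps each graphic chunk to its index within the assembled resource file:

/* Illustrative only: each constant is an index into the monolithic
 * WL6 resource file, so the game can request assets by number. */
typedef enum
{
    H_TITLEPIC = 3,      /* title screen           */
    H_MENUPIC,           /* main menu background   */
    H_STATUSBARPIC,      /* in-game status bar     */
    /* ... one entry per graphic chunk ... */
    ENUMEND
} graphicnums;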
The tricky part in getting the game to run properly revolves around which resource files are being used by the current code base. The code refers to specific WL6 resource files, but locating those resource files using public releases of the game can be very tricky because those generated files have changed an unknown number of times. Luckily, someone has already gone through the trouble of making sure the graphics match up with the indices in the generated files. The files have conveniently been assembled and made available here:
After unpacking, you’ll need to copy those to the test directory holding the registered content for the game. Note that without the right resource files, the game will not look right and will suffer from a variety of visual ailments, such as B.J. Blazkowicz’s head being used as a cursor in the main menu, or failing to see any content when a level is loaded.
Categories: DOS, Game Development, Games, Programming, Retro
No Comments »
Space Invaders in SVG!
May 29, 2014

I wrote a little space invaders game in SVG (Scalable Vector Graphics) several years ago. It plays sound effects and has a few of the essential features within the game. I have recently modified it to work on Google Chrome browsers, since it was originally built to run within Adobe’s SVG Viewer plug-in which could only run on Windows 98 or XP platforms.
It’s a great game for people who are new to programming to get their feet wet and hack around with. Some of the core concepts needed to modify the game to any great extent are a rudimentary knowledge of JavaScript, SVG, and DOM level programming. You can run the game in the browser directly; there is no need to embed it within an HTML page, although that is certainly possible, and you can also download it here. You can move the player using the ‘A’ and ‘D’ keys, and fire with the ‘S’ key. The space bar pauses and unpauses the game.
Here is a little excerpt taken from the Space Invaders Atari 2600 manual:
Welcome to Space Invaders! Before you can begin playing, the first step is to place your cursor over the docking rectangle in the upper-left hand corner. Once the cursor has been positioned over the rectangle, click it and it should change color. Lift your hand from the mouse and you’re ready to play!
You are a recent enlistee in the Earth Defense Corps. For the past six weeks you’ve undergone grueling and intensive training. Now you stand at attention, nervously anticipating the most critical section of your training…
“Okay, kid, you’re on!” barks your commanding officer.
Quickly you climb into a laser tank. A second enlistee follows you. You each settle into deep, leather seats. With a soft whirring sound, the automatic hatch cover closes overhead. As your eyes adjust to the dim light of the laser capsule, you begin to make out the controls. Mentally, you check off each knob, dial, button, and display. For the next several hours you and the other enlistee will operate these controls to defend your planet in an attack simulation. The screen in front of you lights up. A column of bomb-dropping aliens advances toward you. What next? For a second your mind goes blank. Have you learned your lessons well? No time to refer to the manual now. Your commanding officers are watching and it’s your show.
Your tasks are to stop the invaders from landing on your territory; avoid enemy bombs; and score as many points as possible. The simulation ends when you lose all your lives or when any invader lands on your planet. If you destroy all 36 space invaders before they touch your planet, a new set of invaders will appear on the screen. Each new set of invaders will move a little faster than the previous set.
You begin each game with three shields. Initially, you are safe behind the shields. But as you and the enemy hit the shields with lasers and bombs, they become damaged and eventually disappear altogether. As the space invaders come close to the shields in their descent toward you, the shields will be destroyed and your only hope is to destroy the remaining invaders before it’s too late…
Categories: Games, Programming, Retro
No Comments »
Borland Turbo C++ v3.0
February 13, 2013

In many ways I found my move from C to C++ to be less than stellar. Sure, it brought to the table new paradigms and new capabilities, all of which were bright and shiny to new and experienced programmers alike, but hidden away in a pocket covered in lint was an even greater number of difficulties, obscure errors, and buggy or non-standard compilers.
Despite these problems, C++ still managed to shine, and eventually its features began to rub off on me. Without a doubt, the three most important features of the language were encapsulation, inheritance, and polymorphism. Using these new capabilities, programmers everywhere found new ways to leak memory, produce bugs, and bloat their code; moreover, and somewhat less sarcastically, they also found new design patterns, complex adaptive software architectures, and spiffy new data structures that just made everything taste better. Where would the software world be without indecipherable meta-programming techniques and obscure job interview questions? Sorry, more sarcasm coming through.
Borland’s Turbo C++ was a fast 16-bit compiler and was essentially a cheaper and less functional version of Borland C++. Compared against the other inexpensive tools of the time, it had many of the same capabilities as Microsoft’s QuickC compiler and provided a few new ones too. Most importantly, it could compile both C++ and C source code, while QuickC could only handle the latter. Like QuickC, it had a built-in debugger, but Turbo C++ was more feature rich than Microsoft’s incarnation. To be fair, Microsoft had a C++ compiler too, and it would not be a stretch to say it was one of the most popular compilers in the industry; however, it was also not the cheapest compiler to be had, and the Microsoft version didn’t support a lot of the C++ standard until recently, but exactly which standard and to what degree is a hot topic which I won’t dive into here. Borland provided an implementation of the AT&T C++ 2.1 specification with their product.
I remember the Turbo C++ compiler having more support for templates than most of the competition at the time. According to Wikipedia, the Borland product line was instrumental in the development of the Standard Template Library. I was wary of templates when I first encountered them back in 1992. The problem was mostly one of documentation and compatibility. Many C++ books never even touched upon templates, since many of the major compilers, including Microsoft’s, either didn’t support them or supported them so poorly as to render them unusable. Professional programmers probably weren’t pushing for the technology either, since support was so haphazard. Eventually this all led to poor interoperability between compilers, even on the same operating system.
One major limitation of Borland’s product at the time was the inability to produce 32-bit executables. This feature was necessary if your program needed to use 32-bit protected mode for access to extended or expanded memory (there was a 286 16-bit protected mode available in Turbo C++, but it didn’t interest me). Because of this unfortunate limitation, I didn’t use the program for as long as I otherwise would have, and opted instead for DJGPP, DJ Delorie’s famous DOS port of the GCC compiler.
The Borland C++ line of products is now distributed by Embarcadero Technologies, which acquired all of Borland’s compiler tools with the purchase of its CodeGear division in 2008.
Categories: DOS, Programming
No Comments »
Microsoft Quick C Compiler
December 21, 2010

When I first came in contact with this compiler, I was just starting high school and eager for the challenges ahead (except for the material which didn’t interest me — basically non-science courses). When I went to pick my courses for the year, I noticed a couple which taught computer programming. The first course, which was a prerequisite for the second, taught BASIC, while the second course taught C programming. At this point in my life, I was an old hand at BASIC, so I basically breezed through the first course. The second course intrigued me much more. I was familiar with C programming from my relatively brief experience with the Amiga, but I had a lot left to learn. My high school didn’t use the Lattice C compiler, but a Microsoft C compiler instead. I located the gentleman who taught the course and he pointed me to a book called Microsoft C Programming for the PC written by Robert Lafore, and to the Microsoft QuickC Compiler software. I had a job delivering newspapers at the time, so I could just barely afford the book using salary and tips saved from two weeks of doing hard time ($50 at the time), but the compiler was just too expensive. So I did what any highly effective teenager would do: I dropped really big hints around the house (including the location and price of the compiler package I wanted) until my parents purchased a copy for me on my birthday.
There are a number of differences between the BASIC and C programming languages. One of the more obscure differences lies in how the C programming language deals with special variables that can hold memory addresses. These variables are called pointers and are an integral part of the syntax and functionality of the language. BASIC did have a few special functions which could accept and address locations in memory – I’m thinking of the CALL and USR functions specifically, although there were others. However, a variable holding an address was the same as one holding any other number, since BASIC lacked the concept of strong types. The grammar of the C language is also much more complex than BASIC’s; C has special characters and symbols to express program scope and perform unary operations, which introduced me to the concept of coding style. When a programmer first learns a particular style of coding, it can turn into a religion, but I hadn’t really been exposed to the language long enough to form an opinion. That would come later, and then be summarily discarded once I had more experience.
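To make the contrast concrete, here is a small sketch in C of the idea that trips up so many BASIC programmers: a pointer is a typed variable whose value is a memory address, and the type tells the compiler how to interpret whatever lives at that address:

#include <stdio.h>

int main(void)
{
    int score = 100;
    int *p = &score;        /* p holds the address of score, not its value */

    *p = 250;               /* writing through the pointer changes score   */
    printf("score = %d, stored at %p\n", score, (void *)p);
    return 0;
}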
There were libraries of all sorts which provided functionality for working with strings, math functions, standard input and output, file functions, and so on. At the time, I thought C’s handling of strings (character data) was incredibly obtuse. Basically, I thought the need to manage memory was a complete nuisance. BASIC never required me to free strings after I had declared them; it just took care of it for me under the hood. Despite the coddling I received, I was familiar with the concept of array allocation, since even BASIC had the DIM command which dimensioned array containers; re-allocation was also somewhat familiar because of REDIM. However, there were many more functions and parameters in C related to memory management, and I just thought the whole bloody thing was a real mess. The differences between heap and stack memory confused me for a while.
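A minimal sketch of the kind of bookkeeping that seemed so obtuse to me back then; unlike BASIC, every allocation in C must be paired with an explicit free:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *name = "Blazkowicz";

    /* allocate room for the characters plus the terminating '\0' */
    char *copy = malloc(strlen(name) + 1);
    if (copy == NULL)
        return 1;

    strcpy(copy, name);
    printf("copied string: %s\n", copy);

    free(copy);             /* BASIC did this for me; C makes it my problem */
    return 0;
}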
There were many features of the language and compiler I did enjoy, of course. Smaller and snappier programs were a huge improvement over the somewhat sluggish software produced by the QuickBASIC compiler and the BASIC interpreter. The compiled C programs didn’t have dependencies on any run-time libraries either, even though there was probably a way to statically link the QuickBASIC modules together. Pointers were powerful and were loads of fun to use in your programs, especially once I learned the addresses for video memory, which introduced me to concepts like double buffering when I began learning about animation. Writing directly to video memory sounds pretty trivial to me right now, but it was so intoxicating at the time. I was more involved in game programming by then and these techniques allowed me to expand into areas I had never considered. They allowed for flicker-free animation, lightning fast ASCII/ANSI window rendering via my custom text windowing library, and special off-screen manipulations that let me easily zip buffers around on the screen. A number of interesting text rendering concepts came from a book entitled Teach Yourself Advanced C in 21 Days by Bradley L. Jones, which is still worth reading to this day.
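For the curious, here is a hedged sketch of the sort of direct video memory write I mean, aimed at a 16-bit real-mode DOS compiler such as QuickC or Turbo C; the far keyword and the B800h text-mode segment are specific to that environment and will not build on a modern toolchain:

/* Color text mode video memory sits at segment B800h; each character
 * cell is one byte for the ASCII code and one byte for its attribute. */
#define VIDEO_MEM 0xB8000000L
#define COLUMNS   80

static void put_char_direct(int row, int col, char ch, unsigned char attr)
{
    unsigned char far *video = (unsigned char far *)VIDEO_MEM;
    unsigned int offset = (row * COLUMNS + col) * 2;

    video[offset]     = ch;       /* character                    */
    video[offset + 1] = attr;     /* foreground/background colour */
}

int main(void)
{
    put_char_direct(10, 40, '*', 0x1E);   /* yellow star on blue */
    return 0;
}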
At around this time, I also started to learn about serial and network communications. The latter didn’t happen until my last year of high school. Basically, I wanted to learn how to get my computers to talk to one another. It all started when I became enchanted by the id Software game called DOOM, which allowed you to network a few machines together and play against each other in vicious, winner-takes-all death-match style combat. Incidentally, games like DOOM, Wolfenstein 3D, and Blake Stone: Aliens of Gold led me down another long, winding path: 3D graphics, but that didn’t happen until a few months later. Again, the book store came to the rescue by providing me with a book entitled C Programmer’s Guide to Serial Communications by Joe Campbell. I was somewhat familiar with programming simple software which could use a modem for communication, since BASIC supported this functionality through the OPEN statement, but I knew very little about the specifics. Once I dug into the first few chapters, I knew that was all going to change.
Categories: DOS, Programming, Reflections, Software
No Comments »
Dissecting DOSBox
December 20, 2010

If you’re a gamer and have been for years, then you’ve probably heard of and quite possibly used DOSBox. If you haven’t, then let me introduce it to you. DOSBox is a great little program for running all of your favorite classic games: games which were originally built for monitors and video cards which have since been retired, and for legacy audio systems like the Sound Blaster 16, Audio Galaxy, or Gravis Ultrasound. Specifically, it supports games and programs which were written for MS-DOS or compatible operating systems. Although the software specializes in supporting games, you may have success in running other programs. While it doesn’t make any guarantees regarding these legacy applications, which can require features provided by unsupported drivers, I have had success in running complex software like the DJGPP compiler, but ran into a bit of trouble when running an older version of the TDE (Thomson-Davis Editor).
I’ll wait here while you go and download your copy of the source code…
Now that you’ve downloaded the DOSBox source and presumably unpacked it, you are ready to get your hands dirty. This isn’t going to be an article about how to use DOSBox, but rather, how does it work under the hood, exactly? What are the major software gears and wheels used for handling such programs? Why don’t all of your favourite games have 100% compatibility?
The first thing to understand about DOSBox is that it’s an emulator for x86 CPU instructions, floating point unit instructions, and various functions within MS-DOS compatible operating systems. Specifically, it emulates functions around interrupt 21H (hexadecimal) and a couple around 20H, 25H, 26H, and 27H. It also installs null handlers for interrupts 28H and 29H, which do nothing. But before we get ahead of ourselves, let’s take a step back and look at the various modules that make up the program.
CPU Emulation. At the heart of any emulation program is the CPU emulation core. Every program (.EXE or .COM file) on your DOS powered computer contains machine code and data. When using DOSBox, it’s the CPU emulator’s job to process that machine code; therefore, each program that is teased apart and executed by DOSBox arrives at this sacred chunk of memory sooner or later.
DOSBox can be configured to emulate the x86 core in a few different ways: either the emulator interprets each instruction found in the program one at a time, or it batch processes these instructions and operands and translates them into native instructions for direct execution on the host CPU (they call this mode ‘dynamic’). Direct execution can translate into better performance on some machines, but may be slower on others, so the original interpretive mode is still available.
Emulation can be a little confusing if you’ve never heard of it before, so let’s go over it once again. An executable program contains a bunch of binary data representing machine operation codes (or opcodes), operands (arguments to an opcode), and data. This information can be fed to your CPU by your operating system, or it can be read by another program, just like any other file, and interpreted one byte at a time. In the latter case, it is the emulation program which decides how to implement the functionality of the CMPSB instruction (used for comparing bytes in memory), for example, rather than the CPU providing the hardware implementation. It’s this act of interpretation that differentiates emulation from direct execution on the host processor.
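To make that concrete, here is a heavily simplified sketch of what an interpreting loop looks like. This is a toy illustration of the fetch-decode-execute idea, not DOSBox’s actual dispatch code, and the opcodes are invented for the example:

#include <stdint.h>
#include <stdio.h>

/* A toy machine: one register, a tiny program, and made-up opcodes. */
enum { OP_LOAD_IMM = 0x01, OP_ADD_IMM = 0x02, OP_HALT = 0xFF };

int main(void)
{
    uint8_t program[] = { OP_LOAD_IMM, 10, OP_ADD_IMM, 32, OP_HALT };
    uint8_t acc = 0;      /* emulated accumulator register */
    size_t  ip  = 0;      /* emulated instruction pointer  */

    for (;;) {
        uint8_t opcode = program[ip++];                  /* fetch   */
        switch (opcode) {                                /* decode  */
        case OP_LOAD_IMM: acc  = program[ip++]; break;   /* execute */
        case OP_ADD_IMM:  acc += program[ip++]; break;
        case OP_HALT:     printf("acc = %d\n", acc); return 0;
        default:          printf("illegal opcode %02X\n", (unsigned)opcode); return 1;
        }
    }
}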
FPU emulation. In the world of electronic gaming, the software powering those fantastic explosions, shattering those fragile glass windows, and hurling those flying projectiles often needs to do a series of calculations to determine values for acceleration and direction, and to work through various bits of trigonometry. These calculations can involve irrational numbers like PI (3.141592654…), which I’m sure you all remember from school. In programming terms, these numbers are often stored in variables which follow a standard method of encoding the significand and the exponent (along with the sign of the number); one such standard is IEEE 754. Let’s not get too mired in the intricacies of how floating point numbers are stored, which is quite boring after all and not conducive to an interesting read on a Friday afternoon. Instead, let’s push on to the operation codes in which floating point numbers are used as operands (parameters or arguments to a function). These opcodes may need to be emulated just like the ones used by the CPU. Only this time, if the encoding standard differs from the host processor (in other words, if it doesn’t use IEEE 754 and you’re using a PC to run DOSBox), then the emulator will need to convert that format into something it can use natively, which is an expensive operation (in terms of adding processing cycles and increasing execution time).
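As a quick aside on what that encoding looks like in practice, the small sketch below pulls a 32-bit float apart into its sign, exponent, and significand fields; it assumes the host stores float in the usual IEEE 754 single-precision layout:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float value = 3.141592654f;
    uint32_t bits;

    /* copy the raw bit pattern out of the float (assumes 32-bit IEEE 754) */
    memcpy(&bits, &value, sizeof bits);

    unsigned sign        = bits >> 31;            /* 1 bit                   */
    unsigned exponent    = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127   */
    unsigned significand = bits & 0x7FFFFF;       /* 23 bits, implicit 1.xxx */

    printf("%f -> sign=%u exponent=%u (unbiased %d) significand=0x%06X\n",
           value, sign, exponent, (int)exponent - 127, significand);
    return 0;
}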
The take away piece of information for all this is that floating point emulation can be slow in the worst case scenario, and fast in the best case. In either case, it’s not a good topic at parties so let’s just slowly walk away from it…
Hardware emulation. This is certainly one of the more interesting layers in DOSBox and also the one most prone to hacks and tweaks within the source code. The science of taking an analog output and reproducing it digitally is prone to approximations, since the nature of analog is approximate and digital is exactly the opposite. Most of the analog operations come from sound cards, where the device is capable of producing a variety of analog waveforms which are used to create music and sound effects.
The operating system layer. Before the days of operating systems employing graphical user interfaces to shield the user from arcane console commands, and providing a host of time-wasting games like Solitaire and Minesweeper, a sizable chunk of the PC market used DOS. Whether that was PC-DOS, MS-DOS, or DR-DOS is not really that important, since the other versions typically remained closely compatible with MS-DOS. The DOS platform offered a host of utility programs to the user, along with a few drivers which included support for specific implementations of memory management, mouse drivers, and access to generic CD-ROM drives. Users could always install a specific version of a driver for a piece of hardware they bought, like a Sound Blaster card, and after setting a jumper or two to configure interrupt and DMA channels, they would be off to the races. Unless something went horribly wrong…
Unbeknownst to many users but knownst to a few geeks around the Megaverse, the motherboard BIOS code provides a set of default drivers so that the boot sequence and the operating system can access a few essential devices like the hard drive, floppy drive, and keyboard when they start up. These drivers are usually ignored or replaced by the operating system so that it can provide its own, more advanced versions, but some of them are still used when your Windows operating system boots into safe mode, for example. The kernel, which is a core component of any operating system, provides mechanisms for switching between drives, accessing disks and partitions, and in the case of DOS, providing fixed names for devices like “LPT1:” (printer), “COM1:” (communications port), or “NUL:” (the abyss). These device names, and indeed the drivers themselves, provided a level of abstraction for the user and for higher-level programs. The user could print a text file, for example, by issuing a command like “TYPE FILE.TXT > LPT1:” directly as a shell command, but they could also use a program like WordPerfect, which has its own set of specialized printer drivers so that it can handle tasks requiring more advanced printing, like graphics and italicized or bold text.
DOSBox provides limited support for a few of these commands, but really it’s only enough to get your games up and running since that is its modus operandi after all. These commands can take one of two forms: an executable program or a keyword command available in the shell. I provide the list of available keyword commands in the Shell section below.
The interrupt layer. Much of the hard work in creating DOSBox probably arose from the requirements around CPU/FPU emulation, DOS and hardware abstraction support, and the support for interrupts. The bulk of the core system code for the DOS layer lies in supporting every required interrupt function. Interrupts are specific routines which can accept parameters from the calling program and then return the results of the function in special variables. All of this happens by loading up certain registers and invoking the interrupt CPU instruction. If you were to embed a small assembler routine into a C function, it might look like this:
void mouse_hide(void)
{
    asm {
        mov ax,02h;    /* function 02H: hide the mouse cursor */
        int 33h;       /* invoke the mouse services interrupt */
    }
}
In this example, the value “02H” loaded into the AX register selects the interrupt function that hides the mouse cursor, and the interrupt “33H” is an entry in the interrupt vector table used to access the available mouse functions (it acts like an index). DOSBox supports many interrupt functions, but the focus is on the ones necessary to run your favourite games. The important thing to remember about interrupts is that they do exactly that: they interrupt the CPU and force it to run the requested function.
Without going into too many technical details, interrupt authors generally follow two design rules: the functions must execute quickly, and you shouldn’t call an interrupt from within another interrupt. The programmers working on DOSBox need to implement those interrupt functions in whatever way makes the most sense on the host platform. So, if the game invokes an interrupt requesting a change of screen resolution and color mode, then the DOSBox emulator needs to adjust the resolution of the game window and invoke software support for VGA and EGA video modes, or a nice CGA video mode with a four-colour palette. Pretty.
The abstract front-end layer. This would be the charming side of DOSBox, if the project actually provided a graphical user interface out of the box. Instead, they have designed it one level deeper and abstracted the program’s front-end so that it could use different media and windowing libraries provided by the host’s operating system. By default, it uses the SDL library (SDL stands for Simple DirectMedia Layer) to handle the creation of the application container, window frame, sound, input functions and graphics modes. And lucky for them, SDL is available for Linux, Mac OS X, and Windows (and varied support for other platforms too), so there may never be a reason to move to a different library… until the SDL project is retired or if their senior programmer gets hit by a bus. If the project didn’t use a library like SDL, then the application would be tied to a specific set of operating system libraries, or it would need to provide an assortment of implementation modules for each OS target that followed a nice, clean little interface… like the ones provided by SDL.
I’ll pause while you give a big hug to the people working on that project. Don’t you feel better now? Wait, it wouldn’t be right to ignore DOSBox, since they are the stars of this little side show. Let’s spread the love around and try not to get too messy in the process.
The scaler layer. Strictly speaking, I wouldn’t really call this a layer, but the architecture does abstract it somewhat, and a lot of people like this feature, so it’s worth discussing a bit. When you fire your favourite game up in DOSBox, you may notice the window it creates is a little on the small side depending on your host’s current resolution. Wouldn’t it be nice if you could make the window bigger and still have it look good? That’s the job of the scaling routines. They take what would normally be a pixelated image (assuming you don’t like that sort of thing) and smooth out some of the rough spots. As with most scaling routines (other than piecewise-constant algorithms like nearest neighbour), there can be a bit of blurring, but if the algorithm uses a small set of surrounding pixels for its sample set, like an EPX or Scale2x routine, then the result looks quite good and still maintains an acceptable level of sharpness and detail.
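For the curious, here is a compact sketch of the Scale2x (AdvMAME2x) rule, which expands each source pixel into a 2x2 block by comparing its four direct neighbours; it is a minimal illustration assuming a simple 32-bit pixel buffer, not the routine DOSBox actually ships:

#include <stdint.h>

/* Scale one source pixel at (x, y) into a 2x2 block in the destination.
 * src holds w*h pixels; dst holds (2*w)*(2*h) pixels. Edges reuse E itself. */
static void scale2x_pixel(const uint32_t *src, uint32_t *dst,
                          int w, int h, int x, int y)
{
    uint32_t E = src[y * w + x];
    uint32_t B = (y > 0)     ? src[(y - 1) * w + x] : E;   /* above */
    uint32_t D = (x > 0)     ? src[y * w + (x - 1)] : E;   /* left  */
    uint32_t F = (x < w - 1) ? src[y * w + (x + 1)] : E;   /* right */
    uint32_t H = (y < h - 1) ? src[(y + 1) * w + x] : E;   /* below */

    uint32_t E0 = (D == B && B != F && D != H) ? D : E;
    uint32_t E1 = (B == F && B != D && F != H) ? F : E;
    uint32_t E2 = (D == H && D != B && H != F) ? D : E;
    uint32_t E3 = (H == F && D != H && B != F) ? F : E;

    dst[(2 * y)     * (2 * w) + 2 * x]     = E0;
    dst[(2 * y)     * (2 * w) + 2 * x + 1] = E1;
    dst[(2 * y + 1) * (2 * w) + 2 * x]     = E2;
    dst[(2 * y + 1) * (2 * w) + 2 * x + 1] = E3;
}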
The shell layer. The shell provides a sub-set of the total native keyword commands available to DOS: DIR, CHDIR, ATTRIB, CALL, CD, CHOICE, CLS, COPY, DEL, DELETE, ERASE, ECHO, EXIT, GOTO, HELP, IF, LOADHIGH, LH, MKDIR, MD, PATH, PAUSE, RMDIR, RD, REM, RENAME, REN, SET, SHIFT, SUBST, TYPE, and VER. It also provides an execution environment for running batch files.
Hopefully, when you fire up your next DOSBox powered game (a number of products use this software, including game services like Steam), you’ll think of the long hours and tedious bits of programming that went into developing this stellar product, and maybe choose to send a bit more love their way this Christmas season.
Categories: DOS, Programming, Software
No Comments »
The Revolutionary Guide to Bitmapped Graphics
December 29, 2009

This is another book from my library that I have decided to take a look back on and see if there are any useful tidbits to be used by programmers today. As with most technical books which are more than ten years old, there is usually an abundant amount of information about specific technologies which are no longer in popular use, or perhaps the technologies are still present in one form or another but the means to access them have changed dramatically. I personally believe that many of these books can give the novice programmer a background not taught in universities and colleges and will certainly give them an edge when working on limited or older machines.
The book does talk about the video hardware used in that time period and delves deep into the programmatic underpinnings of accessing the display and creating custom video modes. I found some of the discussions to be noteworthy, but if you really want a thorough explanation, you may want to investigate the Zen of Graphics Programming or the Graphics Programming Black Book. It also includes a bit of an assembly language primer, which is very typical for these books since many of the routines were coded in that language. The introduction is short but may be a nice refresher for those who haven’t gotten their hands dirty in a couple of years.
I’ve made a list of what was still useful for work you may be doing today – unless you’re one of the lucky few who get to maintain software written in 1994. Your mileage will vary as some of the techniques are really just short introductions to a much larger field like digital image processing (DIP) and morphing. It even had a short introduction to 3D graphics, which seemed to be slapped on at the end because the publisher wanted “something on 3D” so they could put it on the cover.
- It provided color space introductions, conventions, and conversions for the following spaces: CIE, CMY, CMYK, HSV, HLS, YIQ, and RGB. Most of the conversions go both ways (to and from RGB space), although the CMY/K conversion calculations are only provided from RGB space; a quick sketch of that direction appears after this list.
- Dithering and half-toning, followed by a chapter on printing. I think the authors mentioned Floyd-Steinberg in there somewhere, but it wasn’t a full discussion.
- Fading in the YIQ and HLS color spaces. I’m not sure why they didn’t provide one for the RGB space, but it could very well be on the bundled CD-ROM.
- It introduces the reader to a few algorithms for primitive shape drawing and clipping, like Bresenham line drawing and Sutherland-Cohen clipping. It also included discussions and examples for ellipses, filled polygons, and b-spline curves.
- Extensive discussions on graphics file formats for GIF, JPEG, TGA, PCX, and DIB, although these tended to be higher-level than what would be useful for someone implementing a decoder for any one of these formats (with the possible exception of PCX). Associated algorithms like LZW and RLE are also explained as they are used by encoders of these formats.
- The topic on fractals and chaotic systems was a little out of place, but was a little more extensive than the chapter on 3D. It did explain the concept of an L-system fractal, and even provided a generator for it. When supplied with a configuration file, it could produce fractals like the von Koch curve. It briefly touched on the Harter-Heighway dragon fractal and introduced the Mandelbrot and Julia sets, but didn’t delve into chaos theory, even though I’m sure one of the authors desperately wanted to do so.
- Related to the discussion of fractals was the section on generated landscapes via the midpoint displacement method. While not a landscape per se, the authors digressed a bit to talk about cloud generation as well.
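As promised above, here is a minimal sketch of the RGB-to-CMYK direction using the common black-extraction form; the book’s exact formulation may differ slightly:

#include <stdio.h>

/* Convert normalized RGB (0..1) to CMYK using simple black extraction. */
static void rgb_to_cmyk(double r, double g, double b,
                        double *c, double *m, double *y, double *k)
{
    double c1 = 1.0 - r, m1 = 1.0 - g, y1 = 1.0 - b;

    *k = (c1 < m1) ? (c1 < y1 ? c1 : y1) : (m1 < y1 ? m1 : y1);  /* min */
    if (*k >= 1.0) {                /* pure black: avoid dividing by zero */
        *c = *m = *y = 0.0;
    } else {
        *c = (c1 - *k) / (1.0 - *k);
        *m = (m1 - *k) / (1.0 - *k);
        *y = (y1 - *k) / (1.0 - *k);
    }
}

int main(void)
{
    double c, m, y, k;
    rgb_to_cmyk(0.25, 0.50, 0.75, &c, &m, &y, &k);
    printf("C=%.3f M=%.3f Y=%.3f K=%.3f\n", c, m, y, k);
    return 0;
}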
The book finally managed to get around to the reason I bought it in the first place many years ago, which was the all too brief chapter on DIP techniques. It quickly introduced and provided code for algorithms like the Laplace filter, as well as popular effects like emboss, blur, diffuse, and interpolation. The treatment was very light, so the reader will not walk away with a solid understanding of any of the example code, other than trivial effects like pixelate or crystallize.
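To give a flavour of what those filter routines boil down to, here is a small sketch of a 3x3 convolution over an 8-bit grayscale buffer with a Laplacian-style kernel; it illustrates the general technique rather than reproducing the book’s code:

#include <stdint.h>

/* Apply a 3x3 convolution kernel to a w*h grayscale image.
 * The one-pixel border is left untouched for simplicity. */
static void convolve3x3(const uint8_t *src, uint8_t *dst,
                        int w, int h, const int kernel[3][3])
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    sum += kernel[ky + 1][kx + 1] * src[(y + ky) * w + (x + kx)];

            if (sum < 0)   sum = 0;     /* clamp to the 8-bit range */
            if (sum > 255) sum = 255;
            dst[y * w + x] = (uint8_t)sum;
        }
    }
}

/* One common Laplacian-style edge detection kernel; emboss, blur, and
 * sharpen are just different 3x3 weights fed through the same loop. */
static const int laplace[3][3] = {
    {  0, -1,  0 },
    { -1,  4, -1 },
    {  0, -1,  0 },
};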
Categories: Books, Graphics, PC, Programming
3 Comments »
Racing the Beam
May 16, 2009
I just finished another great book the other day, entitled Racing the Beam: The Atari Video Computer System by Montfort and Bogost. It’s an inside look at some of the development challenges and solutions encountered when writing games for the Atari VCS. This is a unique machine and is often considered one of the most difficult machines for a programmer to cut their teeth on. With 128 bytes of RAM and an average ROM size of 2, 4, or 8K, you must fight tooth and nail for every byte used by your software. What lengths do some programmers go to skimp and save on bytes? Ever thought about using the same byte for both an opcode and a piece of data? Ever thought about using the opcodes and operands found in the code segment of your program as data, fed to a pseudo-random number generator or used to produce a rendering effect, because you didn’t have the spare space in ROM to place this stuff into the data segment? Well, neither did I until I read this text. Along with little gems like this, the book has a number of interesting tips and tricks about the how and why of software development for the Atari 2600.
The book centres itself around the idea of a platform, and how the constraints and peculiarities of a system can affect how a game is presented. Game adaptation, especially when you’re trying to port software from one hardware architecture to another, is a very important topic when you’re trying to maintain the look or feel of a game. Sometimes, neither is possible and you’re forced to go your own road and come up with something completely different.
A word of caution, though. This book will not teach you how to write software for the 2600 system. It is not a technical reference by any means, nor does it advertise itself as one. However, I would heartily recommend this title to anyone thinking about producing a game for that system, or those of us with an inner geek needing to be satisfied.
I love the idea behind this series of “platform” books, as I have often wished for such books to be written and have even contemplated writing one myself just to fill the void. One of the most useful parts of this book is the reference section, which can lead you to all sorts of new and interesting articles, books, or projects. I do hope the next book contains a bit more technical detail while keeping the various bits of historical data and interesting character references which really help to tie the why and the how of the topics together.
Categories: Atari 2600, Books, Programming
2 Comments »
Computer Virus Research
December 21, 2008

As part of being a well-rounded programmer, I dabble in all sorts of technical things. One of my areas of interest is computer virus research. In the last thirty years, I have witnessed a large number of changes to this industry, and I find myself compelled to write a little bit about it today after reading about a couple of courses offered at the University of Calgary.
As it exists today, computer virus defense is a wide collection of software programs and support networks which are offered to companies and users for the sole purpose of protecting their data from loss, damage, or theft by a myriad of small computer programs called computer viruses. These programs must have the ability to replicate (either as a copy of themselves or an enhanced version) and often carry a payload. The means by which a computer virus can replicate are complicated and often involve details of the operating system. In addition to preventing virus outbreaks from occurring, anti-virus software is also used to help prevent service outages and ensure a general level of stability. In other words, vendors are selling security, or at least one form of security, since security in general is a very large net which cannot be cast by only one program. As an aside, please be aware of the tools you are using for anti-virus protection. With some research and a little education, it’s often not necessary to purchase these programs in the first place.
I am currently reading Peter Szor’s book entitled The Art of Computer Virus Research and Defense (ISBN-10: 0321304543). I am almost finished with the text and I have found the book to be incredibly informative, filled with illustrations and summaries for all sorts of computer virus deployment scenarios, technical information about individual strains, and historical pieces of information as to how the programs evolved and the mistakes made by both researchers and virus writers.
Even though I have the skills and the opportunities to do so, I have never written a computer virus for the purposes of deployment, nor do I ever wish to do so, but I can tell you that writing an original computer virus is challenging work; writing a simple virus is easy. Isolating, debugging, and analyzing a virus is also interesting work, albeit somewhat more tedious. Both jobs require a similar skill set: detailed knowledge of, and low-level access to, a specific system.
I used to posit that the best virus writers would be the people who have taken it upon themselves to write the anti-virus software. After all, the best way to ensure the success of a business built on computer virus defense is to construct viruses that can be easily and quickly disarmed by your software. Much to the disappointment of conspiracy theorists, this is probably not the case, since fellow researchers would easily link a premature inoculation with a future virus outbreak if it happened too often to be mere coincidence. However, if your business was based on quick and successful virus resolutions, then timely outbreaks followed by timely cures would seem to solidify the business model. Personally, I think anti-virus researchers are kept busy enough with “naturally” occurring strains to make a manual jump start of the industry unnecessary. That could change as users and technology platforms become more advanced, although the more probable route is the disappearance of the anti-virus industry; we live in a messy world and there may be opportunities for those wanting to leave their mark, even in the face of futuristic technology gambits.
Computer virus writers are plagued, somewhat ironically, by numerous problems when deploying their masterpiece. A computer virus can be written generically so that it can spread to a wider variety of hosts, or it can be written for a specific environment, which can include requirements on the hardware or software being used. Dependencies on software libraries, operating system components, hardware drivers, and even specific types of hard disks are all both liabilities and advantages for a virus. They are liabilities because dependencies limit the scope of infection so the virus spreads more slowly, but at the same time, they often enable the virus to replicate, since the virus may be using known vulnerabilities or opportunities within these pieces to deliver the payload or as a means to spread.
Virus research, writing, and defense is a fascinating topic. Unfortunately, I find the pomposity, and to some degree the absurdity, in various branches of the industry to be laughable and a little scary at times. In case you haven’t heard, the University of Calgary is offering a course on computer virus research. While I find this to be a refreshing take on education, my hopes are quickly dashed when I read the requirements and the Course Lab Layout (warning: PDF monster). Do they think their students are secret agents working in a top secret laboratory? Of course they do; why else would there be security cameras installed in the room, and why else would they restrict access to the course syllabus? Well, I’ve got news for the committee who approved the layout of the lab, and who probably approves the students who can attend the course: computer viruses are just pieces of software. That’s right, they’re just software. They don’t have artificially intelligent brains, they can’t get into your computer through the power lines, and they are quite a bit less complicated than your average word processor. This means that any programmer with the desire and a development environment can write a virus, trojan, or any other form of malware. They don’t need to take your course and they don’t need access to your Big Brother Lab.
The absurdity of protecting information which is already publicly available, and has been for decades, makes me want to laugh out loud and strangle someone at the same time. It’s rather disturbing, and I really don’t like the idea of closing doors on knowledge, even if the attempt is futile. The University of Calgary’s computer science department should be ashamed of perpetuating such ignorance within a learning institution, and I am truly disappointed at how bureaucratic such systems have become.
Update 12-29-2008: To respond to a verbal conversation I had with a couple of people: I understand why the university placed the security restrictions on the program; they want to validate the program and make it appear legitimate to the community and their peers. That’s fine, but at the same time, it must be acknowledged that the secret to mounting a successful defense against viral software and Internet based attacks is shared knowledge and open avenues for information. Understandably, this information will flow both ways, but the virus writer will gain nothing they do not already possess (except the knowledge that we know what they are doing), while the general public may be a little more aware of the problem than they would be without this information.
Indeed, using viral kits and small customization programs can make viral programming easy for the layman or immature programmer, but we shouldn’t be locking away information about these techniques or programming practices simply because the result is something undesirable or easy to dispense. There are real opportunities to learn and disseminate this knowledge today, and the bigger the audience, the larger the opportunities for successful anti-viral software and general consumer awareness which will combine to create the most effective vaccine of all: knowledge.
Categories: Programming, Reflections
No Comments »