14. March 2017 · Comments Off on TheF#WORD – Crossword Generator · Categories: Programming

Bitbucket repository


This article discusses an algorithm for constructing crossword puzzles. Several techniques and heuristics that can reduce search time are described. The algorithm is implemented in F#.


Crossword construction is a constraint satisfaction problem which belongs to the NP complexity class. To reduce the search space, the algorithm employs the following techniques:

  • most constrained variable heuristic – for choosing the next pattern to be filled
  • least constrained value heuristic – for choosing the word with which the selected pattern will be filled
  • forward checking – for keeping the list of available words consistent with the current state of the crossword
  • backjumping – for going back to the pattern that is causing the inconsistency
  • tree pruning – for removing words that are known to lead to dead ends

All of these subjects are discussed in the following sections.


For illustration purposes, a simple dictionary is constructed to demonstrate certain behaviors of the algorithm.


The following image shows a single run of the algorithm with the simple dictionary. Random behaviors are also disabled to make the example reproducible. This example will be used to show different aspects of the algorithm.

run of the algorithm with the simple dictionary

Crossword Representation

The crossword is represented as a two-dimensional array of fields. Each field is described by several properties:

  • letter – the letter currently occupying the field (special cases: '_' – empty field and '#' – blocked field)
  • fill count – stores the number of the field’s patterns that are currently filled
  • conflict marking – indicates that it was not possible to fill one of its patterns on the latest try due to an inconsistency
  • pattern references – indices of the horizontal and vertical patterns to which the field belongs

Fields are grouped into patterns, on which the algorithm operates. Patterns have two important states:

  • options – pool of words that are valid for the current state of the crossword and can be used to fill the pattern
  • stamp – step number of the algorithm in which the pattern was filled

Horizontal and vertical patterns are kept in separate lists, for faster lookup when searching for connected patterns. A pattern reference is actually an index into these lists. Patterns also have globally unique IDs, so the algorithm can avoid using/processing the same pattern more than once in certain steps.

Pattern Selection

The first thing in each step of the algorithm is to decide which pattern should be filled next. The algorithm selects the pattern which has the least number of possible words that can fill it. This is done so the algorithm hits imminent dead ends sooner rather than later and prunes bad search paths early. This heuristic is called most constrained variable.

As already mentioned, the algorithm maintains a list of valid words for each pattern. This list is called the pattern’s options. So, to select the most constrained pattern, it is enough to find the pattern with the least number of options. A side effect of this approach is that successively filled patterns do not have to be connected, which unfortunately makes backtracking more complicated, but it is worth the hassle.
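The selection rule itself is tiny. Here is an illustrative sketch in Python rather than the article’s F#; pattern IDs and option lists are invented for the example:

```python
# Most constrained variable: pick the unfilled pattern with the fewest options.
def select_pattern(patterns):
    """patterns: list of (pattern_id, options) pairs for unfilled patterns."""
    return min(patterns, key=lambda p: len(p[1]))

unfilled = [
    ("h0", ["AAAAA", "BBBBB", "CCCCC"]),
    ("v1", ["AB", "BA"]),                 # fewest options -> selected first
    ("h2", ["AAA", "BBB", "CCC", "ABC"]),
]
print(select_pattern(unfilled)[0])        # -> v1
```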

Word Selection

After the pattern has been selected, the next decision is which word will be used to fill it. This time the opposite approach is used: the word which excludes the fewest options from the connected patterns is chosen, as it leaves greater flexibility for further search. Least constrained value is the name of this heuristic.

The weight of a word is calculated as:

weight(w) = ∏ᵢ |Pᵢ|

where |Pᵢ| is the number of remaining options for the i-th unfilled connected pattern under the assumption that word w is selected. A product instead of a summation is used to rule out words which would cause one of the unfilled patterns to become inconsistent (|Pᵢ| = 0).

Weighting all options of the selected pattern would be too expensive, especially in the early phase of the search, so only a limited number of words is considered. The algorithm takes only several words from the pool and selects the one with the greatest weight. Once the word is selected, it is removed from the pattern’s options.
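The weighting formula can be sketched as follows (Python, not the article’s F#; the crossing-pattern data layout is invented for illustration):

```python
from math import prod

def matches(word, pattern):
    """pattern is a string of fixed letters and '_' for empty fields."""
    return len(word) == len(pattern) and all(
        p in ('_', w) for p, w in zip(pattern, word))

def weight(word, crossings):
    """crossings: (index in word, crossing pattern, index in crossing, options).
    Product of the remaining option counts of all crossing unfilled patterns,
    assuming `word` is placed; 0 if any crossing pattern becomes empty."""
    return prod(
        sum(1 for w in options if matches(w, cross[:j] + word[i] + cross[j+1:]))
        for i, cross, j, options in crossings)

crossings = [(0, "__", 0, ["AB", "BA", "AA"])]
print(weight("ABC", crossings))   # 2 -- "AB" and "AA" still fit the crossing
print(weight("CBC", crossings))   # 0 -- no option starts with 'C': ruled out
```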

Forward Checking

Once the word is selected, the constraints imposed by the selection are propagated to connected unfilled patterns, so that only valid words remain as options for those patterns. This is called forward checking.

This step is necessary not only for inconsistency checks, but also to obtain the option counts needed by the most constrained variable heuristic described earlier, so these two things go hand in hand. It is possible to implement more sophisticated consistency checks such as arc consistency (the AC-3 algorithm), but these might be expensive operations.

A new set of options for the unfilled patterns is already produced by the word selection phase, since it is needed to calculate the words’ weights, so it is reused by this phase. The only thing left to do is to save the new pattern options.
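Forward checking can be sketched like this (a Python sketch with invented data layout; the actual implementation is F#):

```python
def matches(word, pattern):
    """pattern is a string of fixed letters and '_' for empty fields."""
    return len(word) == len(pattern) and all(
        p in ('_', w) for p, w in zip(pattern, word))

def forward_check(word, crossings):
    """After placing `word`, narrow every crossing pattern's options to the
    words still consistent with the newly fixed letter.  Returns a dict of
    new option lists, or None when a crossing pattern becomes empty
    (an inconsistency, which triggers a backjump)."""
    result = {}
    for i, cross_id, cross, j, options in crossings:
        fixed = cross[:j] + word[i] + cross[j+1:]
        remaining = [w for w in options if matches(w, fixed)]
        if not remaining:
            return None
        result[cross_id] = remaining
    return result

crossings = [(0, "v0", "__", 0, ["AB", "BA", "AA"])]
print(forward_check("ABC", crossings))   # {'v0': ['AB', 'AA']}
print(forward_check("CBC", crossings))   # None
```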


Backjumping

Once the algorithm finds a pattern that it cannot fill, it has to go back to one of the previously filled patterns and fill it with another word, in the hope that the issue will be resolved. This is the most complex part of the algorithm.

A naive backtracking approach would be to pick another word for the most recently filled pattern, but that is far from optimal. The reason for the inefficiency of simple backtracking is the order in which the patterns are filled. Since the algorithm fills the most constrained pattern first, patterns filled in two successive steps might not be connected at all. If that is the case, then changing the word of the first pattern will not resolve the inconsistency in the second; it will only waste CPU cycles.

A better approach is to go back only to the patterns that are connected to the one that is inconsistent. This technique is called backjumping.

To solve this issue, the algorithm stamps each filled pattern with the number of the step in which it was filled. This addition makes it possible to recover the order in which patterns were filled. Now it can be determined which pattern should be changed, as well as the list of all patterns affected by that change.

The steps performed by a backjump are the following:

  1. Mark all fields of the inconsistent pattern as conflicted

    The algorithm simply sets the conflict marking for each field of the inconsistent pattern.

  2. Identify the target pattern to which the algorithm needs to backjump

    The target is the most recently filled pattern that was filled earlier than, and is connected to, the inconsistent one. To determine which pattern satisfies this condition, the algorithm uses the previously described stamps. If no such pattern is found, the most recently filled pattern overall is used as the target.

    backjump to pattern that is not connected

    Steps #8 and #9 demonstrate this situation. If that fails as well, it means the search is back at the beginning with no more options available. This indicates the end of the search and failure to construct the crossword.

  3. Undo all patterns that depend on the target of the backjump

    Once the target is located, all patterns depending on it must be undone. To generate the list of these patterns, the algorithm starts from the target and adds all connected patterns that were filled after it. The process is repeated recursively for each new pattern added to the list. Once the list is complete, the fields of the patterns in the list are updated with new fill counts and cleared if necessary.

  4. Restore options for all affected patterns

    The list of possible words for each undone pattern must be regenerated, based on the values of the underlying fields which remained filled. The undo process described above also affects unfilled patterns connected to those that are undone, so the final list of affected patterns is the union of the cleared patterns and the unfilled patterns connected to them. For each pattern in the list, the algorithm forms the list of words by matching the words from the dictionary against the state of the pattern’s fields.

  5. Prune the options of the target pattern

    The pruning process is described in the following section.
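Step 2 above, locating the backjump target via stamps, can be sketched as follows (Python, with invented data structures; the real code is F#):

```python
def backjump_target(inconsistent, stamps, connected):
    """stamps: pattern -> step at which it was filled (filled patterns only).
    connected: pattern -> set of patterns connected to it.
    Prefer the most recently filled pattern connected to the inconsistent
    one; fall back to the most recently filled pattern overall.  None means
    the search is exhausted."""
    linked = [p for p in stamps if p in connected.get(inconsistent, set())]
    pool = linked or list(stamps)
    return max(pool, key=lambda p: stamps[p]) if pool else None

stamps = {"h0": 1, "v0": 2, "h1": 3}
print(backjump_target("v1", stamps, {"v1": {"h0", "v0"}}))  # v0
print(backjump_target("v1", stamps, {"v1": set()}))         # h1 (fallback)
print(backjump_target("v1", {}, {}))                        # None
```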

Tree Pruning

Once the algorithm backtracks, it is possible to determine which parts of the target pattern caused the problem, so options that would repeat the same mistake can be removed from the list of valid options.

To prune the options, the algorithm constructs a filter based on the conflict markings of the pattern’s fields. As already mentioned, each time an inconsistent pattern is found, all of its fields are marked as conflicted. These markings are preserved and accumulated across backjumps, and they are cleared only when the pattern is filled successfully.

mark fields of inconsistent pattern as conflicted
clear conflict mark after filling the pattern

When a backjump reaches its target, all it needs to do is remove the words that match the pattern of marked fields.

pruning in action

The effects of pruning can be seen between steps #9 and #10. After the word AAAAA had failed to produce a solution, the algorithm selected DDDBB even though there was BBBAAA before it in the dictionary. The approach is rather simple, and it is possible to develop a more sophisticated algorithm that would prune even more options by constructing less restrictive filters, but this was not implemented.

non-optimal prune filter

This potential sequence of tries would create the ___AA filter for pruning, but since those two inconsistent patterns are not dependent, a much better option would be to create two filters, ___A_ and ____A, as this pair would remove a greater number of options.
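Applying a conflict filter can be sketched like this (a Python sketch; the conflict-mark layout is invented for the example):

```python
def prune(options, letters, conflicted):
    """Drop every word that matches the failed attempt at all conflicted
    positions -- such a word would repeat the same mistake."""
    def hits(word):
        return all(word[i] == letters[i]
                   for i in range(len(letters)) if conflicted[i])
    return [w for w in options if not hits(w)]

# Filter ___AA: positions 3 and 4 of the failed word "AAAAA" are conflicted.
options = ["AAAAA", "BBBAA", "CCCCC"]
print(prune(options, "AAAAA", [False, False, False, True, True]))
# -> ['CCCCC'] -- every word ending in "AA" is removed
```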


A field of the crossword is represented by the CwField record:

type CwField = 
    { Letter : char
      HorRef : int
      VerRef : int
      Count : int
      Conflicted : bool }

Patterns are represented by CwPattern class:

type CwPattern(grid : CwField [,], dictionary : string array,
    direction : CwDirection, no : int, xs : int, ys : int, length : int) = 
    // prune current options according filter produced
    // using conflicted markings on underlying fields
    member m.PruneOptions()
    // restore all options according to state of underlying fields
    member m.RestoreOptions()
    // sets new options for the pattern
    // options generated by the algorithm during word selection phase
    member m.UpdateOptions(opts : string array)
    // updates underlying fields with letters and increment their fill count
    // removes the word from remaining options
    member m.Fill(word : string, step : int)
    // clears letters of underlying fields and decreases their fill count,
    // but it keeps remaining options
    member m.Undo()
    // marks all underlying fields of pattern as conflicted
    member m.MarkConflicted()
    // array of letters stored in underlying fields
    member m.Letters
    // tests whether any underlying field is marked as conflicted
    member m.Conflicted
    // string of letters stored in underlying fields
    member m.Word
    // unique ID of the pattern in the crossword
    member m.No
    // X position of the first field
    member m.X
    // Y position of the first field
    member m.Y
    // number of letters in the pattern
    member m.Length 
    // direction of the pattern: horizontal or vertical
    member m.Direction
    // array of all remaining options
    member m.Options 
    // number of options remaining 
    member m.Count 
    // step number of the algorithm in which the pattern was filled
    member m.Stamp

Crossword is represented by Crossword class:

type Crossword(filePath : string, dictionary : CwDictionary) = 
    // minimal pattern length
    // shorter patterns are not considered during the construction
    let minPatternLength
    // maximal number of words that are taken into account
    // when selecting word for a pattern
    let maxWordPoolLength
    // two-dimensional array of fields
    let grid

    // horizontal and vertical patterns
    let hPatterns 
    let vPatterns 

    // returns list of crossword's patterns that are orthogonal
    // to specified pattern 
    let orthogonal (pattern : CwPattern)
    // selects pattern from queue of unfilled patterns
    let selectPattern queue
    // selects word for specified pattern
    let selectWord (pattern : CwPattern)
    // finds the target of backjump, clears the patterns
    // and return them to queue
    let backjump (start : CwPattern)
    // main loop of the algorithm
    member x.Solve(refreshRate : int)
    // checks whether all patterns are correctly filled
    member x.Check()
    // prints crossword on console
    member x.Print()


23. July 2014 · Comments Off on Commodore 64 Emulator · Categories: Programming


(Check out a version of Commodore 64 emulator for Windows Phone)

This article describes the implementation details of a Commodore 64 emulator written in C#.

The performance is not that great, for several reasons, such as cycle-based and real drive emulation, as well as the way some things are implemented in the source code. I re-implemented the emulator in C++ for other purposes, and that implementation yielded a great increase in performance. In the wake of these results I abandoned the C# implementation. Since it would be a waste of time and effort not to use the available code in some way, I decided to write an article about the subject.

Unfortunately there are things that will remain incomplete, missing or implemented incorrectly, but it is still a good base to explain the basic concepts of emulation.

In this article I will not try to explain how the actual hardware works. There is a huge amount of great resources that cover these subjects better than I could. Instead I will focus my attention on emulation – what should be done and how it was done in this implementation.

Source code is available at Bitbucket. Before you try to build the solution or run the emulator, you should read the section about ROM files.


The parts of Commodore 64 that are visible to and can be used by the programmer are:

  • 6510 chip – the CPU (same as the 6502 chip but with one additional IO port)
  • VIC-II chip – responsible for graphics
  • SID chip – responsible for audio
  • 2x CIA chips – responsible for timers and IO (like serial bus, keyboard, joystick…)
  • Color RAM – 4-bit memory used by the VIC-II chip to generate colors of the displayed graphics
  • RAM chips – 64 kilobytes available to programs
  • 3x ROM chips – storing the Basic interpreter, the OS (KERNAL) and the character generator used by the VIC-II chip

The implementation covers most of the hardware needed to run games. The one major thing that is missing is SID chip emulation. REU and tape emulation are also not available. With appropriate tools it should be possible to convert games that are available in T64 format to D64 (disk) format, so those games should be playable too.

The emulator is cycle-based, which allows perfect emulation of chip timings and synchronization among the different chips in the system. It also implements real-drive emulation of the 1541-II drive. All these techniques provide better compatibility with different software, but they come with a performance cost.

Commodore 64 Block Diagram

Project Structure

Emulation code is separated into several assemblies so that some things, like CPU, memory and IO port emulation, can be reused for other projects like VIC, PET or NES emulators.

  • c64_common – clock, memory, IO port and interrupt line emulation
  • c64_cpu – 6502 and 6510 emulation
  • c64_av – VIC-II and SID chips emulation
  • c64_io – CIA chip and IEC (serial) bus emulation
  • c64_system – C64 board that connects all emulated hardware as well as keyboard/joystick emulation
  • c64_1541 – 1541-II drive emulation
  • c64_environment – definition of interfaces used by the emulator to interact with the external world (like outputting graphics and audio, reading files…)
  • c64_win_gdi – implements GUI and connects everything together (C64 board, 1541-II drive and environment)


Everything in the system is synchronized by the CPU clock. Clock cycles are separated into phases. These phases define the order of chip emulation in each cycle, and each chip takes a single phase of the cycle. In each phase the emulator executes a single clock operation scheduled for the chip. There can be any number of phases in a cycle, depending on the number of chips that are emulated.
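The phase scheme can be sketched as follows (an illustrative Python sketch, not the C# classes; the per-phase assignment and operation names are made up):

```python
from collections import deque

class Clock:
    """Sketch: one operation queue per phase; each cycle executes the head
    operation of every phase in phase order.  An operation may return a
    follow-up operation, which is queued for a later cycle."""
    def __init__(self, phases):
        self.queues = [deque() for _ in range(phases)]
    def queue_op(self, phase, op):
        self.queues[phase].append(op)
    def run_cycle(self):
        for queue in self.queues:
            if queue:
                follow_up = queue.popleft()()
                if follow_up:
                    queue.append(follow_up)

trace = []
def vic_op(): trace.append("vic"); return vic_op   # cyclic: re-queues itself
def cpu_op(): trace.append("cpu"); return cpu_op

clock = Clock(phases=2)
clock.queue_op(0, vic_op)   # VIC-II takes phase 0 of every cycle
clock.queue_op(1, cpu_op)   # CPU takes phase 1
clock.run_cycle()
clock.run_cycle()
print(trace)                # ['vic', 'cpu', 'vic', 'cpu']
```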

System Clock Block Diagram

ClockOp defines the interface for a clock operation, where Execute should contain the logic that should be executed in the clock cycle.

The Clock class represents the clock of the system; it keeps a queue of clock operations that should be executed for each phase. Clock does not work with ClockOp directly but through the ClockEntry and ClockEntryRep classes. These classes provide additional context to ClockOp, like what is the next operation that should be executed and whether that operation should be executed in the same cycle. The ClockEntryRep class is used for operations that take multiple cycles to complete and contains information about their length – the number of cycles it takes to execute them.

The QueueOpsStart and QueueOps methods are used for queuing operations for the specified phase of the clock. The QueueOpsStart method is used when the first operations are queued after the emulation is started; for every subsequent request QueueOps should be used. For most chips, calling QueueOpsStart once at the beginning, with all clock operations specified in a cyclic list, is usually enough. Only the CPU uses the QueueOps method, to queue the operations that should be executed after it decodes an instruction.

The Stall method can be used to stall execution of queued operations in a certain phase (i.e. holding the CPU while the VIC-II chip is accessing RAM), or Prolong if additional cycles are required to complete an operation (i.e. executing a branch instruction that causes a cross-page jump).

Once everything is ready to start/resume the emulation, the Run method of the Clock should be called. The Halt method can be used for halting execution of the operations queued in the clock. This is used for holding emulation after a frame is completed and before a new one should be rendered, or for stopping drive emulation when the drive is not active, to save resources.

Once the last phase of the current cycle is completed, the OnPhaseEnd event is raised. This event can be used to chain an additional clock of other boards, like the 1541-II drive.

Clock emulation class diagram

Memory Model

This chapter describes how the emulation of the memory bus, ROM and RAM is implemented. It also explains what memory mapped devices are and how the memory map works in this implementation.

Memory mapped devices

Each device that can be accessed through the memory bus is a memory mapped device (including RAM and ROM). Devices are mapped to a certain address range, which is specified by a start address and a size (number of memory locations).

The MemoryMappedDevice class is the base for all memory mapped devices. These devices should implement the Read and Write methods, which are called when the CPU or some other chip uses the memory bus to read or write the content of an address to which the device is mapped.

Memory map

A memory map represents a memory configuration. This configuration defines which devices are mapped and where they are mapped in the address space of the system. An important thing to notice is that write operations can trigger a different chip than read operations for the same memory location, even when the same memory map is used. Due to this requirement, this implementation has two separate maps: one for writes and one for reads.

Memory Map Block Diagram

Commodore 64 can have different memory maps that can be selected by the programmer. To accomplish this, the emulator creates multiple memory maps that are pre-configured for each possible configuration.

Each location is represented by the MemoryMapEntry class, which has two entries – the device that should be called for writes and the device that should be called for reads.

The MemoryMap class handles memory mappings, and it’s just an array of memory map entries. It has Map and Unmap methods that can map a device into a specified address space and remove it from there. Another important pair of methods is Read and Write. These methods find which device is mapped at the specified location for the requested memory operation and invoke the appropriate method on the device.
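The separate read/write dispatch can be sketched like this (Python, not the C# classes; names and the simplified `map` signature are invented). It also shows why writes to a ROM-mapped range can still reach the RAM underneath, a behavior described in the ROM section:

```python
class RAM:
    def __init__(self, size): self.data = bytearray(size)
    def read(self, addr): return self.data[addr]
    def write(self, addr, value): self.data[addr] = value

class ROM:
    def __init__(self, base, data): self.base, self.data = base, data
    def read(self, addr): return self.data[addr - self.base]
    def write(self, addr, value): pass   # ROM itself ignores writes

class MemoryMap:
    """Separate read and write entries per location, so the device invoked
    for a write may differ from the one invoked for a read."""
    def __init__(self, size):
        self.read_dev = [None] * size
        self.write_dev = [None] * size
    def map(self, device, start, size, reads=True, writes=True):
        for addr in range(start, start + size):
            if reads:  self.read_dev[addr] = device
            if writes: self.write_dev[addr] = device
    def read(self, addr): return self.read_dev[addr].read(addr)
    def write(self, addr, value): self.write_dev[addr].write(addr, value)

ram, rom = RAM(16), ROM(8, bytes([0xEA] * 8))
mm = MemoryMap(16)
mm.map(ram, 0, 16)                   # RAM everywhere, reads and writes
mm.map(rom, 8, 8, writes=False)      # ROM shadows RAM for reads only
mm.write(8, 0x41)                    # lands in the RAM under the ROM
print(hex(mm.read(8)), hex(ram.data[8]))   # 0xea 0x41
```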

Memory emulation class diagram


RAM emulation is as simple as it gets. Memory is just an array of bytes which can be read or written by the CPU or some other chips, like the VIC-II.

The RAM class implements emulation of RAM. Since the RAM is obviously mapped into the address space, this class is based on the MemoryMappedDevice class. The Read and Write methods are straightforward and do as they say – read or write a memory location.


ROM emulation is even simpler; the implementation just provides reading of bytes that were loaded from a predefined file.

Due to the hardware implementation of the memory bus in Commodore 64, stores to locations which are currently mapped to ROM result in the values being written to RAM at those locations. This feature is realized by the memory map of the emulator and not by the ROM emulation, which simply ignores writes.

ROM emulation is implemented by the ROM class, which is similar to the RAM class. It also provides an additional method, Patch, which allows the content of the ROM to be changed; this can be useful for skipping various checks performed by the firmware.

Color RAM

Commodore 64 has additional RAM dedicated to storing the colors of all characters that are currently being displayed; it is used by the VIC-II chip to generate video output. Only the lower 4 bits of each byte are used by the VIC-II.

Emulation of Color RAM is implemented in the ColorRAM class, which is based on the RAM class.


In some cases a single device can be mapped to multiple locations, but this is not supported by the current implementation. Also, chips are usually mapped to a much larger space than their registers occupy. For instance, the VIC-II address size is 1024 bytes, but it only has 64 registers. In these cases it is up to the chip to determine what it should do with reads/writes to locations beyond the available registers, but most often only the first few bits of the address are significant and the others are assumed to be 0. This makes the memory map look as if those 64 registers were mapped multiple times in the allotted address space, filling the whole 1024 bytes.
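The mirroring effect amounts to masking the low address bits; a sketch (VIC-II register addresses per the standard C64 memory layout, function name invented):

```python
# VIC-II's 64 registers appear mirrored through its 1024-byte window at
# $D000-$D3FF: only the low address bits select the register.
REG_COUNT = 64                  # power of two -> simple bit mask

def register_index(addr, base=0xD000):
    return (addr - base) & (REG_COUNT - 1)

print(register_index(0xD000))   # 0  -- first register
print(register_index(0xD040))   # 0  -- same register, first mirror
print(register_index(0xD3FF))   # 63 -- last register, last mirror
```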

Unmapped address space

Certain memory configurations allow address ranges that do not have any device mapped. In real hardware, writes to these locations would be ignored and reads would return predefined values. In this implementation an exception occurs instead, which can cause problems with some existing software.

IO ports

IO ports are available in several chips: the VIA, the CIA and the 6510 CPU. Pins on a port can be either input or output, and the direction of each pin can be controlled programmatically. These ports are used for connecting hardware that is not connected to the memory bus directly – they are not memory mapped. These are devices like the serial bus, keyboard and joystick in Commodore 64, or the drive head controller in the 1541-II disk drive…

The logic for controlling IO ports is provided by the IOPort class, since it is the same for all chips.

The properties Input, Output and Direction provide access to the pins of an IO port and allow control of their direction as well as their state. These properties set the state of all pins to the specified values, which is not always desirable. The SetSingleInputFast method can be used when there is a need to change the state of a single input pin.

In addition to methods that can read or set the state of pins, or change their direction, this class also exposes an event raised when the state of the output pins changes – OnPortOut.

How these ports are controlled and exposed to the rest of the system depends on the logic of the chip of which they are part.
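The per-pin direction logic can be sketched as a bit-mask combination (a Python sketch, not the C# IOPort class; the idle-high default is an assumption for the example):

```python
class IOPort:
    """Per-pin direction: a set bit in `direction` marks an output pin.
    Reading returns the output register for output pins and the externally
    driven level for input pins."""
    def __init__(self):
        self.direction = 0x00    # all pins input
        self.output = 0x00       # latched output levels
        self.input = 0xFF        # external lines idle high (assumed)

    def read(self):
        return (self.output & self.direction) | (self.input & ~self.direction & 0xFF)

port = IOPort()
port.direction = 0x0F    # low nibble output, high nibble input
port.output = 0x05
port.input = 0xA0
print(hex(port.read()))  # 0xa5: 0x05 from the outputs, 0xA0 from the inputs
```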

IOPort class


The Commodore 64 computer is powered by the MOS 6510 CPU. This version of the CPU is exactly the same as the MOS 6502 chip, but with an extra general purpose IO port that is mapped into the address space. This IO port is used for controlling the tape drive and the memory configuration of the system.

Only documented opcodes are implemented in the emulator. Execution of illegal opcodes (not documented by MOS) will cause the emulator to crash.

The MOS6502 class connects all the individual components of the CPU emulation. The MOS6510 class extends MOS6502 and only implements emulation of the additional IO port available on that chip.

CPU Registers

The 6502 has a set of 6 registers that are visible to the programmer. This set is represented by the CPUState class.

Each register implements IRegister generic interface, where the underlying type of the interface depends on the size of the register.

Accumulator (A) and index (X and Y) registers are implemented by GpRegister class. Stack register and program counter are implemented by StackRegister and ProgramCounter classes, respectively.

Processor status register is implemented by StatusRegister class where each status flag is exposed as property.

CPU registers class diagram


Fetching and decoding the opcode is the first step of instruction execution. In the case of the 6502 CPU, all opcodes are one byte long, and that byte is enough to decode the whole instruction – so we know which instruction it is and what addressing mode it uses.

The decoding process uses a pre-created lookup table which maps opcodes to a structure that indicates which addressing mode should be invoked and which instruction should be executed, as well as their timings – the number of cycles it takes to address, execute and store the result. This table is implemented by the DecodingTable class.

The decoder then creates a list of clock operations that will be queued for execution, based on the information obtained from the decoding table.

The decoder itself is implemented as a clock operation (the DecodeOpcodeOp class), and it is attached at the end of the clock operation list for every decoded instruction.
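The decoding table amounts to an opcode-indexed lookup; a sketch using a few well-documented 6502 encodings (Python, not the C# DecodingTable; the tuple layout is invented):

```python
# Opcode -> (mnemonic, addressing mode, base cycle count).
# These four encodings and cycle counts are per the official 6502 opcode matrix.
DECODING_TABLE = {
    0xA9: ("LDA", "immediate", 2),
    0xAD: ("LDA", "absolute",  4),
    0x8D: ("STA", "absolute",  4),
    0xEA: ("NOP", "implied",   2),
}

def decode(opcode):
    entry = DECODING_TABLE.get(opcode)
    if entry is None:
        # undocumented opcode: this emulator makes no attempt to run it
        raise ValueError(f"illegal opcode ${opcode:02X}")
    return entry

print(decode(0xA9))   # ('LDA', 'immediate', 2)
```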


The next step, after the instruction has been decoded, is to determine the operands. Operands tell the source and/or destination of the instruction.

6502/6510 CPU has several addressing modes, which are not going to be discussed here, since there are far better sources on the subject, including the official datasheet.

Each addressing mode is implemented as a separate class which implements the AddressingMode interface. The only method defined is Decode, which is responsible for calculating the target address depending on the current state of the processor.

Class diagram of addressing modes

The address decoding process creates and sets the active target of the CPU depending on the addressing mode. A target can be writable or only readable; it can be memory or a register. A target is defined by the AddressedTarget interface; it should implement the logic for retrieving and storing values of the targeted location, and each addressing mode has its own kind.

The ReadableTarget class is used by addressing modes that only require reading a memory location, while the WritableTarget class is used by those modes that also require storing values to the targeted location. IndirectTarget is used by addressing modes which do not reference the target directly but through another, intermediate, memory location that stores the actual target location.

There are also targets specific to certain addressing modes, like accumulator addressing, which targets the accumulator register of the CPU, and immediate addressing, which specifies that the target is part of the decoded instruction.

It is important to note that some addressing modes require fetching additional bytes from the memory referenced by the program counter to decode the target location, and each fetch moves the program counter to the next address.

Class diagram of addressing targets

Addressing mode                         Addressing class name   Target class name
Accumulator                             AccAddressing           specific target (WritableTarget as base)
Immediate                               ImmAddressing           specific target (ReadableTarget as base)
Absolute                                AbsAddressing           WritableTarget
Zero Page                               ZeroAddressing          WritableTarget
Indexed Zero Page (using X register)    ZeroXAddressing         WritableTarget
Indexed Zero Page (using Y register)    ZeroYAddressing         WritableTarget
Indexed Absolute (using X register)     AbsXAddressing          WritableTarget
Indexed Absolute (using Y register)     AbsYAddressing          WritableTarget
Implied                                 ImpAddressing           none
Relative                                RelAddressing           WritableTarget
Absolute Indirect                       IndAddressing           IndirectTarget
Indexed Indirect (using X register)     IndXAddressing          IndirectTarget
Indirect Indexed (using Y register)     IndYAddressing          IndirectTarget

Address decoding is wrapped in the DecodeAddressOp clock operation class so it can be queued for execution.

How each of these addressing modes works is described in 6502’s datasheet.


Each class that represents an instruction needs to implement the Instruction interface, which exposes the Execute method. This is the method responsible for executing the logic of the instruction.

The execution phase of the instruction is wrapped into the DecodeAddressOp class, which represents a queueable clock operation. This wrapper provides all the necessary information to the Execute method, such as the CPU that executes the instruction, the current cycle for instructions that take multiple cycles, and so on…


6502/6510 CPUs have two interrupt lines – a standard one that is maskable (IRQ) and a non-maskable one (NMI). The IRQ line is level triggered – meaning the CPU will detect the interrupt for as long as the line is held at the active level, while the NMI line is edge triggered – meaning the CPU will recognize the interrupt only when the level of the line changes from inactive to active.

Interrupt line emulation is realized by the IrqLine and NmiLine classes. Unfortunately, NmiLine does not handle multiple interrupt sources correctly.

The CPU is the owner of these lines, but chips that are attached to them can get references to them and use them to raise or lower the level of the line.
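The two triggering disciplines can be sketched as follows (a Python sketch; class and method names differ from the C# code, and the IRQ sketch merges multiple sources via a set, which the article notes the real NmiLine does not do correctly):

```python
class IrqLine:
    """Level triggered: the interrupt stays pending for as long as at least
    one source holds the line active."""
    def __init__(self): self.sources = set()
    def set_active(self, source): self.sources.add(source)
    def set_inactive(self, source): self.sources.discard(source)
    def pending(self): return bool(self.sources)

class NmiLine:
    """Edge triggered: only a transition to the active level arms the
    interrupt, and acknowledging it consumes the flag."""
    def __init__(self): self.level = False; self.edge = False
    def set_level(self, level):
        if level and not self.level:
            self.edge = True
        self.level = level
    def acknowledge(self):
        pending, self.edge = self.edge, False
        return pending

irq = IrqLine()
irq.set_active("cia1"); irq.set_active("cia2"); irq.set_inactive("cia1")
print(irq.pending())       # True -- the second source still holds the line

nmi = NmiLine()
nmi.set_level(True)
print(nmi.acknowledge())   # True  -- the edge was seen once
print(nmi.acknowledge())   # False -- no new edge while the line stays active
```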

The clock operation responsible for instruction decoding (DecodeOpcodeOp) checks the interrupt lines before it fetches the next instruction. If any of the lines is active, it enqueues the clock operation responsible for interrupt handling. In the case of a maskable interrupt (IRQ), this operation checks the processor status register before it starts interrupt handling.

Interrupt handling clock operation is implemented by InterruptOp class.

Class diagram of CPU’s different clock operations

6510 IO port

The only addition to the 6510 is the IO port. The port is always mapped to addresses $0000 (pin direction register) and $0001 (pin state register). The exact function of each pin is provided in the references.

MOS 6502/6510 emulation Class diagram

Commodore 64

The following section discusses emulation of individual chips and hardware components that are present on Commodore 64’s main board.

VIC-II chip

This chip is responsible for generating graphics in Commodore 64. Although there is no official datasheet for the VIC-II chip, there is an article with a detailed description of the internal workings of the chip, which is a product of reverse engineering. A link to the article is provided in the section with references. It would be good for a reader to get familiar with the subject before getting into the details of the actual implementation.

VIC-II emulation is implemented by the VIC class. Since the chip is mapped into the address space and accessed through the memory bus, the VIC class has to extend MemoryMappedDevice. The Read and Write methods are responsible for retrieving the state of the chip’s registers and updating them.

RasterLine class is responsible for managing the well-defined sequence of operations performed by the chip in each cycle.

Each clock cycle is separated into two steps – a memory access operation and a graphic generation operation. Some cycles have both operations defined, others have only one or none. To cover all the cases there are several implementations of clock operations for the VIC-II chip:

  • VicNop – cycles where the whole chip is idle
  • VicGraphOp – cycles where only the graphic generator is active and no memory access is needed by the chip
  • VicReadOp – cycles where the graphic generator is idle, but memory access is required by the chip
  • VicGraphReadOp – cycles that require memory access and have the graphic generator active
  • VicReadOp62 – special case of VicReadOp which is used only for the last cycle of a raster line

A raster line has two pre-defined lists of clock operations. One list holds the operations performed when graphic output is active; the other holds the operations performed when graphic output is idle – while borders are rendered or when the display has been deactivated programmatically.
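The dispatch between the two lists can be sketched like this (names, counts and types are illustrative, not the emulator's actual classes):

```cpp
// Sketch of the two pre-built per-cycle operation lists: the raster line
// simply indexes into one of them depending on whether graphic output is active.
struct VicClockOpSketch
{
  virtual ~VicClockOpSketch() { }
  virtual void Execute() = 0;
};

// test helper operation that just counts how many times it was executed
struct CountingOp : VicClockOpSketch
{
  int _executed;
  CountingOp() : _executed(0) { }
  virtual void Execute() { _executed++; }
};

struct RasterLineSketch
{
  static const int CYCLES_PER_LINE = 63; // PAL VIC-II has 63 cycles per line
  VicClockOpSketch* _activeOps[CYCLES_PER_LINE]; // graphic output enabled
  VicClockOpSketch* _idleOps[CYCLES_PER_LINE];   // borders / blanked screen
  bool _displayActive;

  void ExecuteCycle(int cycle)
    { (_displayActive ? _activeOps : _idleOps)[cycle]->Execute(); }
};
```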

Raster line emulation class diagram

As already mentioned, each cycle performs a strictly defined set of actions. The logic of each cycle's memory access and graphic generator operations is implemented as separate methods of the RasterLine class. These methods are responsible for updating the state of the chip, performing memory reads and invoking the active graphic mode to generate graphics.

Graphic modes are responsible for generating all graphic output except sprite graphics. Each graphic mode has its own class that implements the GraphiceMode interface. The only method of the interface is render, which generates a pixel for the current position according to the current state of the chip.

Class diagram of VIC-II’s graphic modes

RasterLine class is also responsible for generating graphic output for sprites.

The Output method of the IVideoOutput interface is called when the chip needs to output a pixel to the screen.

The VIC-II object also keeps a collision matrix to implement the collision detection feature of the chip. This matrix stores, for each pixel, whether that pixel can cause a collision with a sprite. The information is stored when a background pixel is output and checked when a sprite pixel is output.
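A minimal sketch of that idea (illustrative names and screen dimensions, not the emulator's actual code): background pixels mark the matrix when they belong to foreground graphics, and sprite pixels check it.

```cpp
#include <cstring>
#include <cstdint>

const int SCREEN_W = 403, SCREEN_H = 284; // illustrative dimensions

// Sketch of the collision matrix used for sprite-to-background collision detection
struct CollisionMatrix
{
  uint8_t _canCollide[SCREEN_H][SCREEN_W];

  CollisionMatrix() { std::memset(_canCollide, 0, sizeof(_canCollide)); }

  // called when a background pixel is output; foreground pixels are collidable
  void MarkBackground(int x, int y, bool foreground)
    { _canCollide[y][x] = foreground ? 1 : 0; }

  // called when a sprite pixel is output; true means a collision occurred
  bool CheckSprite(int x, int y) const
    { return _canCollide[y][x] != 0; }
};
```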

VIC-II chip emulation class diagram

CIA chip

The CIA chip has two IO ports (A and B), two timers (A and B), one clock (Time of Day, or TOD) and a serial pin which is unused in the Commodore 64. These chips are responsible for connectivity with IO devices: keyboard and joysticks, which are connected directly, and disk drives and printers, which are connected through the IEC bus.

The chips are attached to the interrupt lines so they can raise interrupts when required, e.g. when the state of an input pin changes or a timer reaches 0. CIA #1 is attached to the IRQ line while CIA #2 is attached to the NMI line, although in this implementation CIA #2 is incorrectly attached to the IRQ line as well.

Both CIAs are mapped into address space of Commodore 64, so they can be accessed through memory bus.

A link to the CIA datasheet, with details about each component of the chip, can be found in the references.

Emulation of the chip is implemented by the CIA class. This class extends MemoryMappedDevice since the CIAs are mapped into the memory space. The Read and Write methods read and update the registers of the chip.

The CIA class also implements the ClockOp interface, which is responsible for updating the timers' counters. In some cases the CIA chip is idle for a few cycles when a timer is reloaded, but this is not covered by the implementation.

Timer emulation is implemented by TimerState class.

The IncrementTod method, which is responsible for updating the Time of Day clock, is called at the end of each frame.

CIA chip emulation class diagram

SID chip

SID chip emulation is unfortunately not implemented and will not be discussed here. A link to the SID datasheet is provided in the references section of the article.

Memory Configurations

The Commodore 64 has several memory configurations, and the appropriate configuration is selected by setting the first 3 bits of the 6510's IO port. Details about each configuration can be found in the referenced articles. Each of the 8 possible configurations has its own instance of MemoryMap, pre-configured at startup. This improves performance, since it is enough to set a reference to the appropriate MemoryMap instance when the memory configuration is changed by writing to the IO port.
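The pre-built configuration idea can be sketched like this (illustrative types; the real MemoryMap wires up ROM, RAM and IO banks):

```cpp
// Sketch of pre-built memory configurations selected by the 6510 IO port
struct MemoryMapSketch
{
  int _id; // stands in for the real bank-switching tables
};

struct BoardSketch
{
  MemoryMapSketch _configurations[8]; // one instance per configuration
  MemoryMapSketch* _current;

  BoardSketch()
  {
    for (int i = 0; i < 8; i++)
      _configurations[i]._id = i; // real code would wire ROM/RAM/IO banks here
    _current = &_configurations[7]; // default: LORAM/HIRAM/CHAREN all high
  }

  // called when the CPU writes the 6510 IO port ($0001):
  // the first 3 bits select one of the pre-built maps
  void OnPortWrite(unsigned char portState)
    { _current = &_configurations[portState & 0x07]; }
};
```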

Keyboard and Joystick

In the Commodore 64, the keyboard and joysticks are connected to the IO ports (A and B) of the CIA #1 chip. The keyboard forms a matrix which can be decoded based on the state of output port A and input port B. The Restore key is attached directly to the NMI line (this is not implemented).

The Keyboard class is responsible for receiving key presses from the system, converting them according to the matrix and the state of output port A, and setting the state of input port B.
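A minimal sketch of the matrix scanning (illustrative names, not the actual Keyboard class; both ports are active low): a pressed key connects its row, driven on port A, to its column, read back on port B.

```cpp
#include <cstdint>

// Sketch of C64 keyboard matrix decoding via CIA #1 ports A and B
struct KeyboardMatrixSketch
{
  uint8_t _pressed[8]; // bit mask of pressed keys per row

  KeyboardMatrixSketch() { for (int i = 0; i < 8; i++) _pressed[i] = 0; }

  void SetKey(int row, int column, bool down)
  {
    if (down) _pressed[row] |= (uint8_t)(1 << column);
    else      _pressed[row] &= (uint8_t)~(1 << column);
  }

  // portA is the value the CPU wrote to output port A (0 bit = row selected);
  // the result is what the CPU reads from input port B (0 bit = key down)
  uint8_t ReadPortB(uint8_t portA) const
  {
    uint8_t result = 0xFF;
    for (int row = 0; row < 8; row++)
      if ((portA & (1 << row)) == 0) // this row is being scanned
        result &= (uint8_t)~_pressed[row];
    return result;
  }
};
```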

IEC bus

The IEC bus is used for communication with disk drives, such as the 1541-II, and printers, among other things. It has three lines: DATA, CLOCK and ATN. These lines are attached to IO port B of CIA #2. Even though the CIA chip has hardware support for serial communication, this feature is not used; the protocol is implemented in software instead.

The SerialPort class implements serial bus emulation, where each line is represented by the BusLine class. The state of a line can be controlled with the State property of the BusLine class. The class tracks how many devices keep the line low and raises an event when the state of the line changes.

Devices are not attached to a line directly but through an object of the BusLineConnection class, which is responsible for tracking the local state – the state of the line as set by that device.
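The open-collector behavior behind this design can be sketched as follows (illustrative names, not the actual BusLine/BusLineConnection classes): the line is low if any attached device pulls it low, so the line only needs to count how many devices currently hold it down.

```cpp
// Sketch of an open-collector IEC bus line
struct BusLineSketch
{
  int _pulledLowCount;

  BusLineSketch() : _pulledLowCount(0) { }

  bool IsLow() const { return _pulledLowCount > 0; }
};

// per-device connection that tracks the device's own contribution to the line
struct BusLineConnectionSketch
{
  BusLineSketch* _line;
  bool _localLow;

  BusLineConnectionSketch(BusLineSketch* line) : _line(line), _localLow(false) { }

  void SetLow(bool low)
  {
    if (low == _localLow)
      return; // no change in local state, line count is unaffected
    _localLow = low;
    _line->_pulledLowCount += low ? 1 : -1;
  }
};
```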

IEC bus emulation class diagram

1541-II Drive

The emulator also implements real drive emulation of the 1541-II disk drive. This means that emulation is not done at the IEC bus protocol level; instead, the real hardware of the drive is emulated. This type of emulation requires more resources, but provides much better compatibility.

1541-II drive block diagram

GCR Coding and D64 Format

GCR is the format used by Commodore to store information on disks. It defines how the data are laid out on the disk: track sizes, data synchronization, what sector headers look like and the coding of sector data. A link with a detailed explanation of the format is provided in the references section.

The most widespread format for storing disk images is D64, so it is the only one implemented here. Disk images in D64 format contain the sector data from an actual disk. These are the sector data that the Commodore 64 receives from the disk drive; they are not GCR encoded. In addition to sector data, there are versions of the D64 format that provide additional data for emulation of bad sectors, which improves compatibility with some copy protection mechanisms. Unfortunately, these extensions are not implemented here. The D64 format is also covered by the referenced material.
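For illustration, here is the standard Commodore 4-bit-to-5-bit GCR code table together with a small encoding helper (the helper is a sketch, not the GCRImage implementation; the table itself is documented in the referenced material):

```cpp
#include <cstdint>

// The standard Commodore 4-bit-to-5-bit GCR table
static const uint8_t GCR_TABLE[16] =
{
  0x0A, 0x0B, 0x12, 0x13, 0x0E, 0x0F, 0x16, 0x17,
  0x09, 0x19, 0x1A, 0x1B, 0x0D, 0x1D, 0x1E, 0x15
};

// encode one data byte into 10 GCR bits (high nibble first)
uint16_t EncodeGcrByte(uint8_t data)
{
  return (uint16_t)((GCR_TABLE[data >> 4] << 5) | GCR_TABLE[data & 0x0F]);
}
```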

The GCRImage class is responsible for loading disk images in D64 format and creating an in-memory GCR-encoded image that can be used by the rest of the 1541-II emulator.

GCRImage class

VIA chip

The VIA is an IO port controller like its successor, the CIA chip. It provides two general-purpose IO ports as well as two programmable timers. In addition to its 8 IO pins, each port has two control lines that can be used as interrupt inputs or as handshake outputs.

The drive head and motor, as well as the IEC bus lines, are attached to the ports of the two VIA chips installed in the drive.

The VIA class implements emulation of the chip. It extends MemoryMappedDevice since the VIAs are mapped into the memory space of the drive's CPU. The Read and Write methods read and update the registers of the chip.

This class also implements the ClockOp interface, which is responsible for updating the timers' counters as well as the ports' control lines.

The PortA and PortB properties of the class expose the two IO ports of the VIA chip. The CA1, CA2, CB1 and CB2 properties provide access to the control lines of the IO ports.

VIA and Drive head emulation class diagram

Drive head

The drive head is the electro-mechanical subsystem that controls the drive's motors, the position of the head, and the head itself which reads/writes data from/to the disk. It is attached to the IO ports of VIA #2, where port B is responsible for head control and port A is responsible for transferring data.

The head is also attached to the control lines of the VIA chip, as well as to the SO pin of the CPU, which sets the overflow flag to notify it that data are ready for reading or have been written to the disk.

The DriveHead class, as its name suggests, implements emulation of the drive's head. This class implements the ClockOp interface, so logic for reading or writing data is executed in each clock cycle. If the head is active, which is controlled by the VIA chip, data become available for a read operation, or are stored to disk for a write operation, after a certain number of cycles. To increase the capacity of disks, tracks have different densities depending on how far they are from the center of the disk, so the number of cycles after which a pending read/write operation completes depends on the disk track on which the head is currently positioned.

Persisting Emulator State

Each emulated device has certain state. These states should be persisted to a file if we want to be able to save and load an emulation session, which is a very useful feature of an emulator and makes playing old games much easier.

The emulator saves state only at the end of the frame during which the save command is issued. The reason for this is simplification of the code responsible for handling the state of the VIC-II chip.

Saving and loading the emulator's state is done through the IDeviceState interface. The class of each emulated component that needs to save its state should implement this interface. The ReadDeviceState and WriteDeviceState methods of the interface are called when state needs to be persisted or restored. An emulated component can read or write the state file using the reference to the file object that is provided as a parameter of these two methods.

If a component is composite and contains other components that implement the IDeviceState interface, the parent component should call the appropriate methods on all of its subcomponents. For instance, the C64 board calls the store/load methods on the components that represent memory, the CPU and all other chips.

IDeviceState interface

Structure of Emulator State


The Environment project contains interfaces whose purpose is to isolate external systems from the emulator. Currently there are two interfaces defined: IVideoOutput, which abstracts video output, and IFile, which abstracts file operations.

ROM Files

ROM files are not included in the source code, due to legal reasons, but you can get them from this location:

Files that are required by the emulator are:

  • kernal.rom – content of the ROM chip hosting the OS
  • basic.rom – content of the ROM chip hosting the Basic interpreter
  • chargen.rom – character ROM
  • d1541.rom – firmware of the CBM 1541-II drive

They should be copied to the .\c64_roms folder of the project. If you just want to run the emulator binaries, copy the ROM files into the same folder as the executables.


This section provides links to official documentation and datasheets for the hardware. It also has links to very useful resources that are the product of reverse engineering of Commodore 64's hardware and software, such as memory maps, illegal opcodes and ROM disassemblies.

Hardware datasheets and documentation

  • MOS 6510 Datasheet – CPU [PDF]
  • MOS 6502 Illegal Opcodes – CPU [TXT]
  • MOS 6526 Datasheet – CIA [PDF]
  • MOS 6522 Datasheet – VIA [PDF]
  • MOS 6581 Datasheet – SID [PDF]
  • MOS 6567 Description – (VIC-II) [TXT]

Memory Maps

  • C64 Memory Map [HTML]
  • CBM 1541-II Drive Memory Map [HTML]

ROM Disassemblies

  • BASIC and KERNAL ROMs Disassemblies [HTML]
  • CBM 1541-II Firmware Disassembly (DOS 2.6 ROM) [HTML]

Hardware Schematics

  • C64 Schematics [GIF, GIF]
  • CBM 1541-II Drive Schematics [PNG]


  • D64 file format [TXT]
  • IEC bus [PDF]
07. August 2013 · Comments Off on Genetic Algorithm for Bin Packing Problem · Categories: Programming · Tags: , ,

This article demonstrates an implementation of the Hybrid Grouping Genetic Algorithm (HGGA), proposed by Falkenauer for solving grouping problems such as bin packing, using the GALex library.

Bin Packing Problem

The bin packing problem belongs to the class of NP-hard problems, like the others discussed in previous articles. The task is to pack a set of items of different sizes into bins of fixed size in such a way that a minimal number of bins is used. Depending on the requirements, bin packing can be single-dimensional (1D) or multi-dimensional (2D, 3D…). The genetic algorithm described in this article is designed for solving the 1D bin packing problem.

Hybrid Grouping Genetic Algorithm (HGGA)

Solution representations and genetic operations used in standard and ordering genetic algorithms are not suitable for grouping problems such as bin packing. Genetic operations such as crossover and mutation used in these algorithms are not aware of groups (bins). These operations often disrupt the groups, so they are not able to pass meaningful information and good groups to offspring chromosomes. The Hybrid Grouping Genetic Algorithm solves this problem by changing the representation so that individual genes represent groups, not individual items. Crossover and mutation operations are also changed so they are aware of groups. The original article discusses the Hybrid Grouping Genetic Algorithm in more detail and compares it to ordering genetic algorithms. This article is focused on implementation of the algorithm using the GALex library and provides some additional explanations.

Chromosome Representation

In the Hybrid Grouping Genetic Algorithm, the representation is designed with bins in mind, not individual items, so each gene in a chromosome represents a single bin (a group of items), not an individual item. This allows crossover operations to handle bins correctly and pass whole bins to offspring chromosomes instead of cutting them in the middle and disrupting good bins.

Not only do standard/ordering genetic algorithms disrupt good bins, but the items copied from the other parent also have a completely different meaning, as their membership in the bins depends on their position in the encoding and on the items that come before them. The items copied from parent to offspring are therefore out of context. This problem reduces the chances that the crossover operation will pass useful information to future generations.

The following diagram illustrates chromosome representations of a few solutions in Hybrid Grouping Genetic Algorithm:

bin packing chromosome representation


The chromosome configuration block is implemented by the BinConfigBlock class:

class BinConfigBlock : public Chromosome::GaChromosomeConfigBlock
{
public:
  struct Item
  {
    std::string _label;
    float _size;

    Item() : _size(0) { }

    Item(const std::string& label,
      float size) : _label(label),
      _size(size) { }

    Item(const Item& src) : _label(src._label),
      _size(src._size) { }

    inline Item& operator =(const Item& rhs)
    {
      _label = rhs._label;
      _size = rhs._size;
      return *this;
    }
  };

private:
  Common::Data::GaSingleDimensionArray<Item> _items;
  Common::Data::GaSingleDimensionArray<int> _indices;
  float _binCapacity;

public:
  BinConfigBlock(
    const Common::Data::GaSingleDimensionArray<Item>& items,
    float binCapacity) : _binCapacity(binCapacity) { SetItems( items ); }

  BinConfigBlock(const BinConfigBlock& rhs) :
    _binCapacity(rhs._binCapacity) { SetItems( rhs._items ); }

  virtual GaChromosomeConfigBlock* GACALL Clone() const
    { return new BinConfigBlock( *this ); }

  inline const Common::Data::GaSingleDimensionArray<Item>& GACALL
    GetItems() const { return _items; }

  void GACALL SetItems(
    const Common::Data::GaSingleDimensionArray<Item>& items);

  inline const Common::Data::GaSingleDimensionArray<int>& GACALL
    GetIndices() const { return _indices; }

  inline float GACALL GetBinCapacity() const { return _binCapacity; }

  inline void GACALL SetBinCapacity(float binCapacity)
    { _binCapacity = binCapacity; }
};

Item class handles information about item such as its size and label.

As already discussed, a single gene in the chromosome represents a single bin. The chromosome gene is represented by the Bin class:

class Bin
{
public:
  typedef Common::Data::GaList<int> ItemList;

private:
  ItemList _items;
  float _capacity;
  float _fill;

public:
  Bin(float capacity) : _capacity(capacity),
    _fill(0) { }

  Bin(const Bin& src) : _items(src._items),
    _capacity(src._capacity),
    _fill(src._fill) { }

  /* getters, setters and other methods */
};

The class keeps track of the items the bin contains and how much of it is filled, as well as its capacity (all bins have the same capacity). It also provides various supporting methods for handling items (adding, replacing, removing and transferring them to other bins).

After the gene is defined, the representation can be completed by choosing the appropriate chromosome class from the library. In this case it is the GaListChromosome class, so the definition of the chromosome is this:

typedef Common::Data::GaList<Bin> BinList;
typedef Chromosome::Representation::GaListChromosome<Bin>::GaType

Now it is possible to define the chromosome initializator, which is responsible for creating random chromosomes and filling the initial population.

Initialization is simple: all items are shuffled and then added sequentially, each to the first bin that can accommodate it. If there is no such bin, a new one is created and the item is added to it.

The initializator is implemented by the BinInitializator class:

class BinInitializator : public Chromosome::GaInitializator
{
public:
  virtual Chromosome::GaChromosomePtr GACALL operator ()(
    bool empty,
    const Chromosome::GaInitializatorParams& parameters,
    Common::Memory::GaSmartPtr<Chromosome::GaChromosomeConfigBlock>
      configBlock) const;

  virtual Common::GaParameters* GACALL CreateParameters()
    const { return NULL; }
};

operator () implements the creation and initialization of a new chromosome according to the provided chromosome configuration block.

CreateParameters does not create a new object for operation parameters, as they are not required.

Fitness Operation

The most obvious fitness function for this problem is the number of bins used by the solution, but it does not create a smooth search space that the genetic algorithm can explore. To make the search space smooth, a function that takes the fill of the bins in the solution into account is used, and it looks like this:

bin packing fitness function

F – fitness of the solution, n – number of bins, fi – fill of the i-th bin, c – capacity of the bin, k – constant greater than 1

The k constant controls whether more-filled bins or equally filled bins are preferred. Larger values should be used if more-filled bins are preferred. This example sets the value of k to 2. More details about this constant are given in the original article.

The genetic algorithm should maximize this value.
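Assuming the formula from Falkenauer's article that the image above shows – the sum of (fill/capacity)^k over all bins, divided by the number of bins – the computation can be sketched like this (a sketch, not the BinFitnessOperation implementation):

```cpp
#include <cmath>
#include <vector>

// Sketch of the bin packing fitness function:
// F = (sum over bins of (fill_i / capacity)^k) / n, to be maximized
float BinPackingFitness(const std::vector<float>& fills, float capacity, float k)
{
  float sum = 0;
  for (size_t i = 0; i < fills.size(); i++)
    sum += std::pow(fills[i] / capacity, k);
  return sum / fills.size(); // fewer, fuller bins yield values closer to 1
}
```

Note how two solutions using the same number of bins are distinguished by how full the bins are, which is what smooths the search space.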


The fitness function has a single objective, so a simple fitness object can be used to represent it:

typedef Fitness::Representation::GaSVFitness<float> BinFitness;

Parameters of the fitness operation are handled by the BinFitnessOperationParams class:

class BinFitnessOperationParams :
  public Fitness::GaFitnessOperationParams
{
private:
  float _kParam;

public:
  BinFitnessOperationParams() : _kParam(2) { }
  BinFitnessOperationParams(float kParam) : _kParam(kParam) { }
  BinFitnessOperationParams(const BinFitnessOperationParams& params) :
    _kParam(params._kParam) { }

  virtual GaParameters* GACALL Clone() const
    { return new BinFitnessOperationParams( *this ); }

  inline float GACALL GetKParam() const { return _kParam; }
  inline void GACALL SetKParam(float kParam) { _kParam = kParam; }
};


The operation parameters object stores the k constant described in the previous section.

The fitness operation, which is responsible for calculating the fitness value of a chromosome, is implemented by the BinFitnessOperation class:

class BinFitnessOperation :
  public Chromosome::GaChromosomeFitnessOperation
{
public:
  virtual Fitness::GaFitness* GACALL CreateFitnessObject(
    Common::Memory::GaSmartPtr<const Fitness::GaFitnessParams>
      params) const { return new BinFitness( params ); }

  virtual void GACALL operator ()(const GaObjectType& object,
    Fitness::GaFitness& fitness,
    const Fitness::GaFitnessOperationParams& operationParams) const;

  virtual Common::GaParameters* GACALL CreateParameters() const
    { return new BinFitnessOperationParams(); }
};


The CreateFitnessObject method is called by the library to create an appropriate object for storing the fitness value of a chromosome.

operator () implements the calculation of a chromosome's fitness value and stores it in the provided fitness object.

The CreateParameters method creates the object for storing parameters required by the fitness operation. In this case it is an object of the previously discussed BinFitnessOperationParams class.

Crossover Operation

The first thing the crossover operation does to produce offspring is create clones of the selected parents. These clones are used as the base of the offspring chromosomes. Then the operation chooses two crossover points per clone.

bin packing crossover

The next step is transferring bins between parents. Crossover copies the bins between the crossover points of the first parent and inserts them into the second parent's clone at the first crossover point. Then it swaps the parents and copies bins from the second parent into the first parent's clone at the second crossover point.

bin packing crossover

Copying bins like this produces invalid solutions, as the image illustrates: there are now items that appear twice in a solution. Because of this, the algorithm has to perform corrections to make the solution valid. To do so, it finds all bins that were not copied from the other parent and that contain duplicates, and removes them. In this way the genes transferred from the other parent are preserved, which is the point of the crossover operation.

bin packing crossover

Removing these bins also causes removal of some items that are not duplicates, because they were stored in the same bins as the duplicates. The algorithm needs to create a list of these items (unassigned items) and reinsert them back into the solution.

bin packing crossover

Inserting unassigned items back into the solution is a process with two steps. The first step is replacement of items, which will be discussed later in the section that explains mutation.

When the replacement step is finished and there are still unassigned items, the crossover operation sorts the list of remaining items by size in descending order and inserts each into the first bin that has enough free space. If there is no bin which can accommodate an item, a new bin is created and the item is stored in it. This technique is called the first-fit descending heuristic.

bin packing crossover
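The first-fit descending step can be sketched as follows (plain vectors stand in for the library's Bin objects; names are illustrative):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct SimpleBin
{
  float _fill;
  std::vector<float> _items;
  SimpleBin() : _fill(0) { }
};

// Sketch of first-fit descending reinsertion of unassigned items
void FirstFitDescending(std::vector<float> items,
  std::vector<SimpleBin>& bins, float capacity)
{
  // sort remaining items by size in descending order
  std::sort(items.begin(), items.end(), std::greater<float>());
  for (size_t i = 0; i < items.size(); i++)
  {
    // find the first bin with enough free space
    size_t target = bins.size();
    for (size_t b = 0; b < bins.size(); b++)
      if (bins[b]._fill + items[i] <= capacity) { target = b; break; }
    if (target == bins.size())
      bins.push_back(SimpleBin()); // no bin can accommodate the item
    bins[target]._fill += items[i];
    bins[target]._items.push_back(items[i]);
  }
}
```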


The crossover operation is implemented by the BinCrossoverOperation class, which inherits GaCrossoverOperation:

class BinCrossoverOperation : public Chromosome::GaCrossoverOperation
{
public:
  virtual void GACALL operator ()(
    Chromosome::GaCrossoverBuffer& crossoverBuffer,
    const Chromosome::GaCrossoverParams& parameters) const;

  virtual int GACALL GetParentCount(
    const Chromosome::GaCrossoverParams& parameters) const { return 2; }

  virtual int GACALL GetOffspringCount(
    const Chromosome::GaCrossoverParams& parameters) const { return 2; }

  virtual Common::GaParameters* GACALL CreateParameters() const
    { return new Chromosome::GaCrossoverPointParams(); }
};

operator () implements crossover operation.

GetParentCount returns the number of parent chromosomes required to perform the crossover operation (in this case it is always 2), and the GetOffspringCount method returns the number of offspring chromosomes that the crossover operation natively produces (also always 2).

The CreateParameters method creates the parameters object required by the operation. The type of the parameter object is GaCrossoverPointParams, which holds the number of crossover points in addition to the crossover probability.

Mutation Operation

The mutation operation is simple: a few bins are selected randomly and destroyed. Destruction of bins allows the items that were in them to be rearranged after reinsertion. This, hopefully, leads to improved bin space usage.

bin packing mutation

The items that were in removed bins are preserved in the list of unassigned items.

bin packing mutation

The next step of the mutation algorithm is to reinsert those unassigned items into bins in the same fashion as in the crossover operation.

This is a good opportunity to explain the replacement of items mentioned during the discussion of the crossover operation. Replacement works in the following way: if there is a currently unassigned item U and a set of items P in a single bin B such that size(U) > size(P) and size(B) – size(P) + size(U) <= capacity(B), then the items from P are removed from B and U is inserted in their place. The number of items the algorithm considers for set P is limited to three for performance reasons. The idea behind this technique is discussed in the original article.

bin packing mutation

Here, item 10 replaced items 6 and 5 in bin 2, as the sum of these two items is less than the size of item 10 and it still fits the bin. Then item 1 replaced item 5 in bin 3 for the same reason.

Just as with the crossover operation, when the algorithm cannot find any more items that satisfy the replacement criteria, it switches to the first-fit descending heuristic to insert the remaining unassigned items into bins.

bin packing mutation

In this case, no item could fit into an existing bin, so a new bin had to be created.


The mutation operation is implemented by the BinMutationOperation class, which inherits GaMutationOperation:

class BinMutationOperation : public Chromosome::GaMutationOperation
{
public:
  virtual void GACALL operator ()(Chromosome::GaChromosome& chromosome,
    const Chromosome::GaMutationParams& parameters) const;

  virtual Common::GaParameters* GACALL CreateParameters() const
    { return new Chromosome::GaMutationSizeParams(); }
};

operator () implements mutation operation.

The CreateParameters method creates the parameters object required by the operation. The type of the parameter object is GaMutationSizeParams, which holds the number of genes that should be mutated in addition to the mutation probability. The number of genes can be absolute, meaning it is fixed regardless of the size of the chromosome on which the operation is performed, or relative, meaning that a certain percentage of genes are changed.

Genetic Algorithm

The population size is set to 100 individuals. In each generation, tournament selection is used to select the parents that will produce 50 offspring chromosomes. For each parent, two rounds of roulette wheel selection are performed and the candidate with better fitness is selected.

Crossover probability is 100%, so each offspring is produced by crossover operation and none is cloned.

In the original article, mutation is performed on 33 of 50 individuals, which is a 66% probability of mutation occurring, and that is the probability used in this implementation.

mutation probability: 66%
mutation size: 2 genes
only accept mutations that improve fitness: yes
crossover probability: 100%
crossover points: 2 (implicit)
algorithm type: incremental (overlapping population)
population size: 100 chromosomes
sorted population: yes
fitness sorting: maximization
selection type: tournament (roulette wheel)
tournament selection rounds: 2
selection size: 2
coupling type: simple coupling
number of offspring to produce: 50
scaling type: no scaling
replacement type: replace worst
chromosomes to replace: 50
stop criterion type: fitness change
stop criterion depth: 100 generations


The code that puts all the pieces together to create genetic algorithm looks like this:

// mating setup:
// - crossover: 100% probability, produces 2 children
// - mutation: 66% probability, 2 genes are modified, improvements accepted
Problems::BPP::BinCrossoverOperation crossover;
Problems::BPP::BinMutationOperation mutation;
Chromosome::MatingOperations::GaBasicMatingOperation mating;
Chromosome::GaMatingConfig matingConfiguration(
  Chromosome::GaCrossoverSetup(
    &crossover, &Chromosome::GaCrossoverParams( 1.0f, 2 ), NULL ),
  Chromosome::GaMutationSetup(
    &mutation, &Chromosome::GaMutationSizeParams( 0.66f, true, 2L ), NULL ) );

// initializator setup
Problems::BPP::BinInitializator initializator;
Chromosome::GaInitializatorSetup initializatorSetup( &initializator, NULL,
  &Chromosome::GaInitializatorConfig(
    &Problems::BPP::BinConfigBlock( items, binSize ) ) );

// fitness comparator setup: maximize fitness value
Fitness::Comparators::GaSimpleComparator fitnessComparator;
Fitness::GaFitnessComparatorSetup fitnessComparatorSetup( &fitnessComparator,
  &Fitness::Comparators::GaSimpleComparatorParams(
    Fitness::Comparators::GACT_MAXIMIZE_ALL ), NULL );

// create population statistics trackers 
// for fitness values and population size 
Population::GaPopulationSizeTracker sizeTracker;
Population::GaRawFitnessTracker rawTracker;
Population::GaScaledFitnessTracker scaledTracker;
Algorithm::Stubs::GaSimpleGAStub::GaStatTrackersCollection trackers;
trackers[ Population::GaPopulationSizeTracker::TRACKER_ID ] =  &sizeTracker;
trackers[ Population::GaRawFitnessTracker::TRACKER_ID ] =  &rawTracker;
trackers[ Population::GaScaledFitnessTracker::TRACKER_ID ] =  &scaledTracker;

// selection setup:
// 2 tournament rounds using roulette wheel method, selects 2 parents
Population::SelectionOperations::GaTournamentSelection selection;
Population::GaSelectionSetup selectionSetup( &selection,
  &Population::SelectionOperations::GaTournamentSelectionParams( 2, -1, 2, 2,
    fitnessComparatorSetup, Chromosome::GaMatingSetup() ) );

// coupling setup:
// produces 50 offspring chromosomes using defined mating operation
Population::CouplingOperations::GaSimpleCoupling coupling;
Population::GaCouplingSetup couplingSetup( &coupling,
  &Population::GaCouplingParams( 50, 1 ),
  &Population::GaCouplingConfig(
    Chromosome::GaMatingSetup( &mating, NULL, &matingConfiguration ) ) );

// replacement setup:
// replaces 50 of the worst chromosomes
Population::ReplacementOperations::GaWorstReplacement replacement;
Population::GaReplacementSetup replacementSetup( &replacement,
  &Population::GaReplacementParams( 50 ), &Population::GaReplacementConfig() );

// scaling setup: no scaling
Population::ScalingOperations::GaNoScaling scaling;
Population::GaScalingSetup scalingSetup( &scaling, NULL,
  &Population::GaScalingConfig() );

// fitness setup: individual based evaluation, k = 2
Problems::BPP::BinFitnessOperation fitnessOperation;
Population::GaCombinedFitnessOperation populationFitnessOperation(
  &fitnessOperation );
Population::GaPopulationFitnessOperationSetup fitnessSetup(
  &populationFitnessOperation, &Problems::BPP::BinFitnessOperationParams( 2 ),
  &Fitness::GaFitnessOperationConfig( NULL ) );
// incremental genetic algorithm with:
//  - population: size 100 individuals, sorted by raw fitness
Algorithm::Stubs::GaSimpleGAStub simpleGA(
  Population::GaPopulationParams( 100, 0,
    Population::GaPopulationParams::GAPFO_FILL_ON_INIT ),
  Population::GaFitnessComparatorSortingCriteria( fitnessComparatorSetup,
    Population::GaChromosomeStorage::GAFT_RAW ) );

// 2 workflow branches will execute algorithm
simpleGA.SetBranchCount( 2 );

// create workflow
Common::Workflows::GaWorkflow workflow( NULL );
workflow.RemoveConnection(
  *workflow.GetFirstStep()->GetOutboundConnections().begin(), true );

// connect algorithm stub to workflow 
Common::Workflows::GaWorkflowBarrier* br1 =
  new Common::Workflows::GaWorkflowBarrier();
simpleGA.Connect( workflow.GetFirstStep(), br1 );

Common::Workflows::GaBranchGroup* bg1 =  (Common::Workflows::GaBranchGroup*)
  workflow.ConnectSteps( br1, workflow.GetLastStep(), 0 );

// create stop criteria that will stop the algorithm if: 
// raw fitness of the best chromosome in the population 
// has not been changed for the last 100 generations. 
Algorithm::StopCriteria::GaStatsChangesCriterion stopCriterion;
Algorithm::StopCriteria::GaStopCriterionStep* stopStep =
  new Algorithm::StopCriteria::GaStopCriterionStep(
    Algorithm::StopCriteria::GaStopCriterionSetup( &stopCriterion,
      &Algorithm::StopCriteria::GaStatsChangesCriterionParams(
        Population::GADV_BEST_FITNESS, 100 ),
      NULL ),
    workflow.GetWorkflowData(), WDID_POPULATION_STATS );

Common::Workflows::GaBranchGroupTransition* bt1 =
  new Common::Workflows::GaBranchGroupTransition();

// connect stop criterion to workflow and algorithm stub
bg1->GetBranchGroupFlow()->SetFirstStep( stopStep );
bg1->GetBranchGroupFlow()->ConnectSteps( stopStep, bt1, 0 );
workflow.ConnectSteps( bt1, simpleGA.GetStubFlow().GetFirstStep(), 1 );

// subscribe handler to event raised before new generation cycle begins
Common::Workflows::GaDataCache<Population::GaPopulation> population(
  workflow.GetWorkflowData(), WDID_POPULATION );
population.GetData().GetEventManager().AddEventHandler(
  Population::GaPopulation::GAPE_NEW_GENERATION, &newGenHandler );

// start algorithm and wait for it to finish
workflow.Start();
workflow.Wait();

Source code of the example is available with the rest of GALex project at Bitbucket.


The following section presents the results of a single run where the bin size is set to 150, the number of items is 500, and their sizes are chosen randomly between 20 and 100. It shows the progress of the chromosomes' fitness values and the number of bins used. The X-axis represents generations of the population and the Y-axis represents the fitness value or the number of bins used by the solution.

fitness progress graph
Progress of fitness values during generations

bin count progress graph
Number of bins used by the best solution in the generation