Palindraw

Palindraw is the third game I created for Android and released on Google Play. Palindraw™ is a challenging new type of line puzzle game. Complete each puzzle by twisting colorful lines from colored dots through the empty squares. As you draw a line in one direction, a second line flows from the colored dot in the exact opposite direction. Colorful lines wrap around each other in a symmetric pattern. Colliding with an existing line will break it, so be careful to keep an eye on the opposite end!

Android Development – Sudoku Forever

Sudoku Forever™ is a free Sudoku puzzle game available on Google Play. Sudoku Forever features a random puzzle generator that creates each new game on the fly. This allows for a virtually limitless number of fun and challenging Sudoku puzzles.

‘eregi’ is deprecated errors

The other weekend I was setting up Plogger for my daughter to share her photos. She’s been really getting into photography and I think she’s getting quite good at it. If you’re interested (and she starts uploading stuff and takes down the pics I posted) you can check it out here: lexie.fourthwoods.com.


Anyway, apparently I’m running a newer PHP than Plogger was originally written for because I got a bunch of errors along the lines of:

Deprecated: Function eregi() is deprecated in blah....

It seems the whole class of ereg functions has been deprecated as of PHP 5.3.0. Fortunately there is an easy fix. For instance:

eregi("NIKON", $make)

should be changed to:

preg_match('/NIKON/i', $make)

Note the regular expression is wrapped in ‘/’ characters, which delimit the pattern to search. After the closing ‘/’ is the ‘i’ modifier flag, which makes the match case insensitive, just as eregi() does.


Other functions that can be replaced similarly are:

ereg()          // replace with preg_match()
ereg_replace()  // replace with preg_replace()
eregi()         // replace with preg_match() with the 'i'
                // modifier
eregi_replace() // replace with preg_replace() with the
                // 'i' modifier

undefined reference to IID_IPicture

I was writing a small test application to display some pictures and ran into a linker error complaining about an undefined reference to IID_IPicture. In COM, interfaces are identified by a globally unique identifier string. To get the linker to pick them up you need to link to uuid.lib.
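
If you’re using Visual C++ you can either add uuid.lib to the linker’s input libraries in the project settings, or pull it in directly from source with a pragma:

// Links uuid.lib, which defines IID_IPicture and the other interface GUIDs.
// This pragma is Visual C++ specific; with other toolchains add uuid.lib
// to the linker command line instead.
#pragma comment(lib, "uuid.lib")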

Facebook Fail!

It seems someone at Facebook has broken the BigPipe. Currently, the site is broken for me on both Firefox and IE8. Looks like this problem has been around for a while. Something seems to get screwed up client side but I can’t tell what. Clearing the cache and other temporary data doesn’t seem to fix it. So far, the only thing that seems to work is to reboot the machine.

Viewtron: From the AT&T Archives

Do your shopping and banking and get messages right in your living room. You can also play games and get up-to-the-minute stock quotes and recipes. Why waste time driving to the store or library when you could be spending that time with your family? School work is a breeze with instant access to books and encyclopedias at the touch of a button. In 1983, the future is here NOW!

ITworld had a great article about the Viewtron system, which gave people access to information, online shopping and banking, and much of what the Internet is today. This was in 1983. Cool stuff!

Failed to load the JNI shared library "C:\glassfish3\jdk\jre\bin\client\jvm.dll"

My laptop at work crashed this week so I had to set up a replacement until it’s fixed. After reinstalling my development tools and firing up Eclipse I ran into this error:

Failed to load the JNI shared library "C:\glassfish3\jdk\jre\bin\client\jvm.dll"

It turns out this was caused by a mismatch between the version of Java installed and the version of Eclipse. In particular, our project requires us to use the 32-bit Java environment and I, out of habit, grabbed the 64-bit Eclipse package. Downloaded the 32-bit Eclipse and all is good again.
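
As an aside, if you have several JVMs installed you can pin Eclipse to a particular one with the -vm entry in eclipse.ini. It must appear before -vmargs, and the JVM’s bitness still has to match the Eclipse build; the path here is just an example:

-vm
C:\glassfish3\jdk\jre\bin\client\jvm.dll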

Playing MIDI Files in Windows (Part 5)

Jump to: Part 1, Part 2, Part 3, Part 4, Part 5

In the last article we created some code capable of playing most (but not all) MIDI files out there. We did most of the grunt work of decoding and playing MIDI events, including processing time values, waiting a specific amount of time and synchronizing events from multiple tracks.

Related source and example files can be downloaded here: mididemo.zip

We created our usleep() function to get mostly accurate MIDI timing, which seemed to work well. Unfortunately, the usleep() function does not actually sleep but spins in a tight loop. This is a big waste of processor time and still isn’t perfectly accurate. The Windows multimedia API supplies some mid-level APIs that can be used to make things easier on us and possibly our processor.

The midiStream*() functions take a stream of MIDI messages and time values and take care of processing the time values and playing the messages. This eliminates our need for usleep() and lets Windows take care of timing and playing individual messages. We still need to decode the MIDI events and time values and format them into the stream of MIDIEVENT structures that the API expects. However, the API will either provide its own timing and message processing or, if the device supports it, hand the buffer off to the MIDI device itself, freeing up our CPU for other tasks. (Much better than our tight usleep() loop!)

Anyway, our get_buffer() function showed how to decode the MIDI file and pack the events into our own buffer format. The format I chose is not much different from the format the midiStream*() functions expect. The buffer is an array of MIDIEVENT structures. The MIDIEVENT structure looks like this:

typedef struct {
  DWORD dwDeltaTime;
  DWORD dwStreamID;
  DWORD dwEvent;
  DWORD dwParms[];
} MIDIEVENT;

The first field is the delta-time value. This is the same delta-time value we used in all our previous examples. It is an unsigned 4-byte integer in little-endian format. The second field is a stream ID. What it is for I don’t really know, but MSDN says it is reserved and must be set to 0. The third field is the MIDI event. This event is the same format as the events we passed to midiOutShortMsg(). The last field is a variable length field and is only used for long messages such as System Exclusive messages. Up to this point we had no need for long messages and we still don’t. We do not need to do anything with this last field and can essentially eliminate it altogether.

So, our buffer is an array of these structures that looks like this:

delta-time
0
event
delta-time
0
event
...

This is nearly identical to our previous buffer format, except it has an extra value between the delta-time and event.

delta-time
event
delta-time
event
...

Yeah, Microsoft is notorious for adding extra “reserved” fields to their data structures; for what reason, nobody really knows.

Another important thing to note is that the buffer used by the midiStream*() functions must be less than 65k. When we are processing the MIDI file and constructing the stream buffer, we must do it in 65k or less chunks. For large MIDI files we may need to call get_buffer() several times.

The get_buffer() function is nearly identical to our previous examples; it now just needs to add that extra 0 and limit the size of the returned buffer. In this example we limit the buffer to a maximum of 512 MIDIEVENT structures:

#define MAX_BUFFER_SIZE (512 * 12)

unsigned int get_buffer( struct trk* tracks, unsigned int ntracks, unsigned int* out, unsigned int* outlen) {
  MIDIEVENT e, *p;
  unsigned int streamlen = 0;
  unsigned int i;
  static unsigned int current_time = 0; // remember the current time from the last time we were called.

  if(tracks == NULL || out == NULL || outlen == NULL)
    return 0;

  *outlen = 0;

  while(TRUE) {
    unsigned int time = (unsigned int)-1;
    unsigned int idx = -1;
    struct evt evt;
    unsigned char c;

    if(((streamlen + 3) * sizeof(unsigned int)) >= MAX_BUFFER_SIZE)
      break;

    // get the next event
    for(i = 0; i < ntracks; i++) {
      evt = get_next_event(&tracks[i]);
      if(!(is_track_end(&evt)) && (evt.absolute_time < time)) {
        time = evt.absolute_time;
        idx = i;
      }
    }

    // if idx == -1 then all the tracks have been read up to the end of track mark
    if(idx == -1)
      break; // we're done

    e.dwStreamID = 0; // always 0

    evt = get_next_event(&tracks[idx]);

    tracks[idx].absolute_time = evt.absolute_time;
    e.dwDeltaTime = tracks[idx].absolute_time - current_time;
    current_time = tracks[idx].absolute_time;

    if(!(evt.event & 0x80)) { // running mode
      unsigned char last = tracks[idx].last_event;
      c = *evt.data++; // get the first data byte
      e.dwEvent = ((unsigned long)MEVT_SHORTMSG << 24) |
                  ((unsigned long)last) |
                  ((unsigned long)c << 8);
      if(!((last & 0xf0) == 0xc0 || (last & 0xf0) == 0xd0)) {
        c = *evt.data++; // get the second data byte
        e.dwEvent |= ((unsigned long)c << 16);
      }

      p = (MIDIEVENT*)&out[streamlen];
      *p = e;

      streamlen += 3;

      tracks[idx].buf = evt.data;
    } else if(evt.event == 0xff) { // meta-event
      evt.data++; // skip the event byte
      unsigned char meta = *evt.data++; // read the meta-event byte
      unsigned int len;

      switch(meta) {
      case 0x51: // only care about tempo events
        {
          unsigned char a, b, c;
          len = *evt.data++; // get the length byte, should be 3
          a = *evt.data++;
          b = *evt.data++;
          c = *evt.data++;

          e.dwEvent = ((unsigned long)MEVT_TEMPO << 24) |
                  ((unsigned long)a << 16) |
                  ((unsigned long)b << 8) |
                  ((unsigned long)c << 0);

          p = (MIDIEVENT*)&out[streamlen];
          *p = e;

          streamlen += 3;
        }
        break;
      default: // skip all other meta events
        len = *evt.data++; // get the length byte
        evt.data += len;
        break;
      }

      tracks[idx].buf = evt.data;
    } else if((evt.event & 0xf0) != 0xf0) { // normal command
      tracks[idx].last_event = evt.event;
      evt.data++; // skip the event byte
      c = *evt.data++;  // get the first data byte
      e.dwEvent = ((unsigned long)MEVT_SHORTMSG << 24) |
                ((unsigned long)evt.event << 0) |
                ((unsigned long)c << 8);
      if(!((evt.event & 0xf0) == 0xc0 || (evt.event & 0xf0) == 0xd0)) {
        c = *evt.data++; // get the second data byte
        e.dwEvent |= ((unsigned long)c << 16);
      }

      p = (MIDIEVENT*)&out[streamlen];
      *p = e;

      streamlen += 3;

      tracks[idx].buf = evt.data;
    }
  }

  *outlen = streamlen * sizeof(unsigned int);

  return 1;
}

Just like in the last two examples we will support multi-track MIDI files. The difference is that now, instead of locating and processing all the tracks in the get_buffer() function, we set up some variables in our main function to not only locate the individual tracks within the MIDI file but also remember where we left off each time we need to call get_buffer().

HANDLE event;

unsigned int example9() {
  unsigned char* midibuf = NULL;
  unsigned int midilen = 0;

  struct _mid_header* hdr = NULL;

  unsigned int i;

  unsigned short ntracks = 0;
  struct trk* tracks = NULL;

  unsigned int streambufsize = MAX_BUFFER_SIZE;
  unsigned int* streambuf = NULL;
  unsigned int streamlen = 0;

  ...

  hdr = (struct _mid_header*)midibuf;
  midibuf += sizeof(struct _mid_header);
  ntracks = swap_bytes_short(hdr->tracks);

  tracks = (struct trk*)malloc(ntracks * sizeof(struct trk));
  if(tracks == NULL)
    goto error1;

  for(i = 0; i < ntracks; i++) {
    tracks[i].track = (struct _mid_track*)midibuf;
    tracks[i].buf = midibuf + sizeof(struct _mid_track);
    tracks[i].absolute_time = 0;
    tracks[i].last_event = 0;

    midibuf += sizeof(struct _mid_track) + swap_bytes_long(tracks[i].track->length);
  }

  streambuf = (unsigned int *)malloc(sizeof(unsigned int) * streambufsize);
  if(streambuf == NULL)
    goto error2;

  memset(streambuf, 0, sizeof(unsigned int) * streambufsize);

  event = CreateEvent(0, FALSE, FALSE, 0);

Once we have the file open and our track structures set up, we open the MIDI device for streaming by calling midiStreamOpen().

HMIDISTRM out;
unsigned int device = 0;
midiStreamOpen(&out, &device, 1, (DWORD)example9_callback, 0, CALLBACK_FUNCTION);

The first parameter is a variable to hold the opened MIDI stream handle. The second is a variable that contains the device ID to open. I’m not sure why this needs to be a pointer to a variable holding the ID rather than passed by value like in midiOutOpen(). The third parameter is “reserved” and must be 1. (Does anyone actually know why Microsoft does things like that?) The fourth parameter is a pointer to a callback function that will be called during MIDI playback. The fifth parameter is data that is passed to the callback. Our callback doesn’t use any extra data so this parameter is set to 0. The final parameter is a flag that specifies we are using a callback function to receive playback information, as opposed to an event, thread or window.

MIDIPROPTIMEDIV prop;
prop.cbStruct = sizeof(MIDIPROPTIMEDIV);
prop.dwTimeDiv = swap_bytes_short(hdr->ticks);
midiStreamProperty(out, (LPBYTE)&prop, MIDIPROP_SET|MIDIPROP_TIMEDIV);

Once the stream is open we need to set the time division (PPQN) value, which together with the tempo controls playback speed. If not set, the default tempo is 120 beats per minute (or 500,000 microseconds per quarter note) and the default time division is 96 ticks (pulses) per quarter note. The PPQN value is read from the ticks field of the MIDI file header and set here. Note that we swap the byte order since the ticks value is stored in big-endian format and dwTimeDiv is little-endian.

MIDIHDR mhdr;

mhdr.lpData = (char*)streambuf;
mhdr.dwBufferLength = mhdr.dwBytesRecorded = streambufsize;
mhdr.dwFlags = 0;
midiOutPrepareHeader((HMIDIOUT)out, &mhdr, sizeof(MIDIHDR));

The next thing we need to do is prepare the buffer for processing by midiStreamOut(). We set lpData to the buffer we allocated earlier. Many examples I’ve seen elsewhere show this buffer populated with MIDI data before midiOutPrepareHeader() is called; however, this is not necessary. Next, we set dwBufferLength to the size of the buffer, as well as dwBytesRecorded. It is not really important to set dwBytesRecorded here as we will be overwriting it again later anyway. We clear dwFlags by setting it to 0. Windows will use this to return state information if we want it. Finally, we call midiOutPrepareHeader() passing our open stream handle, a pointer to our header structure and its size. From this point on, we can keep repopulating streambuf and passing this header to midiStreamOut().

That’s almost it. By default, when a stream is opened it is in stopped mode. If we cue a buffer it won’t start playing until midiStreamRestart() is called. We’ll call it here so we don’t have to worry about it later.

midiStreamRestart(out);

We can now start buffering MIDI events and playing them. We’ll grab the first buffer full before we enter our loop. The number of bytes read will be our exit condition on the loop. Once this value is 0 we’ve hit the end of the MIDI score and we can exit.

get_buffer(tracks, ntracks, streambuf, &streamlen);
while(streamlen > 0) {
  mhdr.dwBytesRecorded = streamlen;
  midiStreamOut(out, &mhdr, sizeof(MIDIHDR));
  WaitForSingleObject(event, INFINITE);
  get_buffer(tracks, ntracks, streambuf, &streamlen);
}

Here we call get_buffer(), which was described above, passing our tracks and the streambuf to read the events into. The streamlen parameter will receive the number of bytes read into streambuf. Once this value returns 0 we can exit.

After we have our first buffer and enter the loop, we need to remember to update dwBytesRecorded with the number of bytes in streambuf that are valid. If we don’t, midiStreamOut() may try to play garbage (or possibly old events) left at the end of the stream buffer.

I had previously skipped over the event object we created earlier. It is used to pause our loop while the cued buffer is playing. When we opened the stream we passed a pointer to a callback function that will be called when the buffer has finished playing. That callback looks like this:

void CALLBACK example9_callback(HMIDIOUT out, UINT msg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2) {
  switch(msg) {
  case MOM_DONE:
    SetEvent(event);
    break;
  case MOM_POSITIONCB:
  case MOM_OPEN:
  case MOM_CLOSE:
    break;
  }
}

The callback is fired for various events during playback. The only one we care about is MOM_DONE. This message indicates the cued stream has finished playing. When we receive this message, we signal the loop to continue by calling SetEvent().

In the main loop we pause by calling WaitForSingleObject() on the event. The loop will wait here until the event is signaled. Once signaled, we call get_buffer() to buffer the next chunk of MIDI data and loop again.

  midiOutReset((HMIDIOUT)out);
  midiOutUnprepareHeader((HMIDIOUT)out, &mhdr, sizeof(MIDIHDR));
  midiStreamClose(out);
  CloseHandle(event);

  free(streambuf);
  free(tracks);
  free(hdr);
  return(0);
}

Once the loop exits we are done and can clean up. The midiOutReset() function stops any notes that are playing and silences the device. The midiOutUnprepareHeader() cleans up after our header and anything Windows might have been doing with it. Finally, midiStreamClose() closes the stream handle.

A few things to note here: there is a very small delay between when the previously cued buffer stops and the next one is buffered and starts. It is not really noticeable, but from looking at the code it is obviously there.

midiStreamOut() can cue multiple buffers at a time; when one finishes, the next one starts immediately. It is normal for applications to cue two or more buffers at a time, which completely eliminates the delay.

In the MIDI library I created for some of my projects I use a double buffer. I cue the first buffer and while it is playing I process the next chunk and cue it. Once the first buffer completes the next cued buffer begins playing immediately and the callback is fired, at which time I reuse the first buffer and cue up another chunk.
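
A minimal sketch of that double-buffer idea, assuming two stream buffers and two headers each prepared with midiOutPrepareHeader() as shown above (the streambuf, len and mhdr arrays are illustrative names, not from the demo code):

// A sketch only: assumes MIDIHDR mhdr[2], each prepared with
// midiOutPrepareHeader() and backed by its own buffer streambuf[2],
// with get_buffer(), event and out as defined above.
unsigned int len[2];
int cur = 0;

// prime and cue both buffers before entering the loop
get_buffer(tracks, ntracks, streambuf[0], &len[0]);
get_buffer(tracks, ntracks, streambuf[1], &len[1]);
if(len[0] > 0) { mhdr[0].dwBytesRecorded = len[0]; midiStreamOut(out, &mhdr[0], sizeof(MIDIHDR)); }
if(len[1] > 0) { mhdr[1].dwBytesRecorded = len[1]; midiStreamOut(out, &mhdr[1], sizeof(MIDIHDR)); }

while(len[cur] > 0) {
  WaitForSingleObject(event, INFINITE); // buffer 'cur' finished playing (MOM_DONE)
  get_buffer(tracks, ntracks, streambuf[cur], &len[cur]);
  if(len[cur] > 0) {
    // re-cue this buffer while the other one is still playing
    mhdr[cur].dwBytesRecorded = len[cur];
    midiStreamOut(out, &mhdr[cur], sizeof(MIDIHDR));
  }
  cur ^= 1; // the next MOM_DONE will be for the other buffer
}
// Note: if both buffers could finish while we are refilling, a counting
// semaphore would be more robust than the auto-reset event used here.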

That’s it for our MIDI tutorials unless there is interest in some other topics. When it’s ready I’ll be posting my multimedia library that implements the ideas discussed in these articles. Let me know by email or in the comments below what you think of these articles and if you have any questions or ideas you’d like to learn about.


Playing MIDI Files in Windows (Part 4)

Jump to: Part 1, Part 2, Part 3, Part 4, Part 5

In the previous article we developed an algorithm for combining multiple MIDI tracks into a single stream. In this article we will build on that example and add some more advanced MIDI concepts such as META events, running mode and a better timing mechanism. Related source and example files can be downloaded here: mididemo.zip

META Events

The first code example below expands on the previous get_buffer function and adds support for META events. Once we’ve gotten the next event to process we have to determine what kind of event it is. The MIDI event byte 0xff indicates the event is a META event.

META events provide extra data used by processors and applications. They are not typically sent to the output device but they may affect how the MIDI file is processed. The wiki page discusses each META event in great detail.

For our purposes the only thing we need to understand is that META events begin with the MIDI event byte 0xff, followed by the META event byte, a length byte, and that many data bytes. All of the META events may be ignored except one, the tempo event. As shown below, skipping META events is easy: read the META event byte, read the length byte, then skip over that number of data bytes.

#define TEMPO_EVT 1

if(evt.event == 0xff) { // meta-event
  evt.data++; // skip the event byte
  unsigned char meta = *evt.data++; // read the meta-event byte
  unsigned int len;

  switch(meta) {
  case 0x51:
    {
      unsigned char a, b, c;
      len = *evt.data++; // get the length byte, should be 3
      a = *evt.data++;
      b = *evt.data++;
      c = *evt.data++;
      msg = ((unsigned long)TEMPO_EVT << 24) |
              ((unsigned long)a << 16) |
              ((unsigned long)b << 8) |
              ((unsigned long)c << 0);
      while(streamlen + 2 > streambuflen) {
        unsigned int* tmp = NULL;
        streambuflen *= 2;
        tmp = (unsigned int*)realloc(streambuf, sizeof(unsigned int) * streambuflen);
        if(tmp != NULL) {
          streambuf = tmp;
        } else {
          goto error;
        }
      }
      streambuf[streamlen++] = time;
      streambuf[streamlen++] = msg;
    }
    break;
  default: // ignore any of the other META events
    len = *evt.data++; // get the data length
    evt.data += len; // skip over the data
    break;
  }

  tracks[idx].buf = evt.data;
}

The tempo META event is used to adjust the tempo of the score. The tempo is expressed in microseconds per quarter note. This value is used to calculate beats per minute (BPM).

The default BPM if no tempo is specified is 120. There are 1000 microseconds per millisecond and 1000 milliseconds per second which equals 1,000,000 microseconds per second.

120 BPM = 60 * 1,000,000 / 120 = 500,000 microseconds per quarter note.

In previous examples we discussed the PPQN and PPQN clock. PPQN stands for pulses per quarter note. This is the number of clock ticks per quarter note.

In the MIDI header is the ticks field that specifies the PPQN. The tempo value (defaulted to 500,000 if none is specified) divided by the PPQN will give you the number of microseconds per tick.

As an example, 500,000 microseconds per quarter note / 96 PPQN = 5208 microseconds per tick. Note that 5208 microseconds = 5.208 milliseconds. This is why the Sleep() function is not precise enough for accurate MIDI score timing. A more accurate method will be described later.

The tempo META event is the value 0x51 followed by a length byte which is always the number 3, and then the 3 data bytes.

The 3 data bytes form a 24-bit integer representing the number of microseconds per quarter note as described above. Again these bytes are in big-endian format and will need to be converted to little-endian before they can be used correctly.

The tempo event is passed to the caller in the stream in the same way the other events are passed. Because the most significant byte in the stream message is always ignored for normal events (and set to 0) we use it to pass a special value to the caller indicating that the message should be treated as a tempo change rather than a normal message to be passed to midiOutShortMsg().

We define TEMPO_EVT to be 1 (because it’s not 0) and place it in the most significant byte. The remaining data bytes are placed into the lower three bytes of the message in the correct (little-endian) format. The main loop processing the stream buffer will be able to find the tempo change event and process it accordingly.

Any of the other META events, if encountered, will be skipped. Because they provide their own length information, there is no need to actually decode the event. Instead, the length is read and that number of bytes is skipped.

Running Mode

If the event is not a META event, we check to see if running mode is being used. Running mode is a method of compression used by the MIDI file format where the event byte may be omitted if the current event is the same as the previous event.

Running mode is determined by checking the most significant bit of the event byte. If the bit is set then the byte is the actual event and the following bytes are the data bytes.

If the bit is not set (0), the byte is actually the first data byte (data bytes always have a most significant bit of 0) and the previous event byte is assumed. For example, to play a C-major chord you might use the following commands:

 0x90, 0x3c, 0x7f
 0x90, 0x40, 0x7f
 0x90, 0x43, 0x7f

Because they are all the same note-on event, the note-on (0x90) byte for the last two events may be omitted:

 0x90, 0x3c, 0x7f
 0x40, 0x7f
 0x43, 0x7f

To decode running mode we need to be able to remember previously encountered event bytes. The last_event field of our trk structure is used for this purpose.

When a normal event is encountered, it is recorded in the last_event field. If running mode is detected, the event is pulled from the last_event field and the data bytes are processed accordingly. Obviously, there must be one normal event specified first before running mode can be used. If not, the MIDI file is malformed.

else if(!(evt.event & 0x80)) { // running mode
  unsigned char last = tracks[idx].last_event;
  msg = ((unsigned long)last) |
            ((unsigned long)*evt.data++ << 8);
  if(!((last & 0xf0) == 0xc0 || (last & 0xf0) == 0xd0))
    msg |= ((unsigned long)*evt.data++ << 16);
  while(streamlen + 2 > streambuflen) {
    unsigned int* tmp = NULL;
    streambuflen *= 2;
    tmp = (unsigned int*)realloc(streambuf, sizeof(unsigned int) * streambuflen);
    if(tmp != NULL) {
      streambuf = tmp;
    } else {
      goto error;
    }
  }

  streambuf[streamlen++] = time;
  streambuf[streamlen++] = msg;
  tracks[idx].buf = evt.data;
}

This last example is exactly like the normal mode from the previous article. However, we now check to make sure the output stream is big enough to hold the parsed MIDI data. If the stream buffer is not large enough, it is resized to accommodate more data. This should be able to hold arbitrarily large MIDI files.

Another type of event is the System Exclusive event. These do not appear in most MIDI files (but they can) and we are not supporting them yet. If we find something that is not one of our previously described events, we’ll bail out.

else if((evt.event & 0xf0) != 0xf0) { // normal command
  tracks[idx].last_event = evt.event;
  evt.data++; // skip the event byte
  msg = ((unsigned long)evt.event) |
            ((unsigned long)*evt.data++ << 8);

  if(!((evt.event & 0xf0) == 0xc0 || (evt.event & 0xf0) == 0xd0))
    msg |= ((unsigned long)*evt.data++ << 16);

  while(streamlen + 2 > streambuflen) {
    unsigned int* tmp = NULL;
    streambuflen *= 2;
    tmp = (unsigned int*)realloc(streambuf, sizeof(unsigned int) * streambuflen);

    if(tmp != NULL) {
      streambuf = tmp;
    } else {
      goto error;
    }
  }

  streambuf[streamlen++] = time;
  streambuf[streamlen++] = msg;
  tracks[idx].buf = evt.data;
} else {
  // not handling sysex events yet
  printf("unknown event %2x", evt.event);
  exit(1);
}

MIDI Score Timing

The usleep() function provides a more precise delay for MIDI score timing. It offers better precision than Sleep(), but its accuracy still depends on the high-resolution performance counters of an individual system. This can and will vary from system to system.

void usleep(int waitTime) {
  LARGE_INTEGER time1, time2, freq;
  if(waitTime == 0)
    return;
  QueryPerformanceCounter(&time1);
  QueryPerformanceFrequency(&freq);
  do {
    QueryPerformanceCounter(&time2);
  } while((time2.QuadPart - time1.QuadPart) * 1000000ll / freq.QuadPart < waitTime);
}

QueryPerformanceCounter() returns the current value of the high-resolution performance counter of the system. This counter ticks at a frequency based on the CPU clock (but is not necessarily the same as the CPU clock). QueryPerformanceFrequency() returns how many times the performance counter ticks every second.

This function does not sleep but instead spins in a tight loop until a specified number of counter ticks has passed. To convert ticks to microseconds, we calculate the difference in ticks, multiply by 1,000,000 (microseconds/second) and divide by the frequency of ticks per second.

When this value is greater than or equal to the number of microseconds passed in, we return.

This function is not exact but is pretty good. Some things may throw it off; for instance, the precision of the high-resolution counter is not likely to be down to a single microsecond. If you want to wait 10 microseconds and the counter ticks once every 6 microseconds, your 10 microsecond wait will actually be no less than 12 microseconds. Again, this frequency is system dependent and will vary from system to system.

Also, Windows is not a real-time operating system. A process may be preempted at any time and it is up to Windows to decide when the process is rescheduled. The application may be preempted in the middle of usleep() and not restarted again until long after the expected wait time has elapsed. There really isn’t much you can do about it. If you’ve ever heard MIDI music stutter in games this is likely the reason why.

unsigned int example8() {
  unsigned char* midibuf = NULL;
  unsigned int midilen = 0;
  unsigned int* streambuf = NULL;
  unsigned int streamlen = 0;
  unsigned int err, msg;
  HMIDIOUT out;
  unsigned int PPQN_CLOCK;
  unsigned int i;
  struct _mid_header* hdr;

  err = midiOutOpen(&out, 0, 0, 0, CALLBACK_NULL);
  if (err != MMSYSERR_NOERROR)
    printf("error opening default MIDI device: %d\n", err);
  else
    printf("successfully opened default MIDI device\n");

  midibuf = load_file((unsigned char*)"example8.mid", &midilen);
  if(midibuf == NULL) {
    printf("could not open example8.mid\n");
    return 0;
  }

  hdr = (struct _mid_header*)midibuf;
  PPQN_CLOCK = 500000 / swap_bytes_short(hdr->ticks);

  get_buffer_ex8(midibuf, midilen, &streambuf, &streamlen);

  i = 0;
  while(i < streamlen) {
    unsigned int time = streambuf[i++];
    usleep(time * PPQN_CLOCK);
    msg = streambuf[i++];

    if(msg & 0xff000000) { // tempo change
      msg = msg & 0x00ffffff;
      PPQN_CLOCK = msg / swap_bytes_short(hdr->ticks);
    } else {
      err = midiOutShortMsg(out, msg);

      if(err != MMSYSERR_NOERROR)
        printf("error sending command: %08x error: %d\n", msg, err);
    }
  }

  midiOutClose(out);
  free(streambuf);
  free(midibuf);
  return 0;
}

Finally, this last function retrieves the stream buffer and plays the score. It is still much like the previous examples with the exception of the added tempo adjustments. First, instead of the PPQN_CLOCK value being hard-coded at 5 milliseconds, we calculate it based on the default 500,000 microseconds per quarter note and the PPQN ticks from the MIDI header.

Second, we replace the Sleep() function with our improved usleep() function passing in the delay time in microseconds instead of milliseconds.

Third, if we encounter a tempo change META event in the stream, we pull out the new microseconds per quarter note value and recalculate the PPQN_CLOCK value. Otherwise, the message is treated as before.

These last few articles focused on the low-level details of parsing and playing MIDI files. By this time you should have a pretty good understanding of how that works. In the next article I’ll talk about some of the mid-level APIs for offloading some of that work to Windows and possibly the sound card if supported. We’ll still need to parse the files, but things like timing the events we won’t have to worry about as much.

In the near future, when it is completed (enough for my liking), I’ll post a fully functioning set of APIs for manipulating MIDI files. I’m also working on the same for playing MUS files as found in DOOM and Hexen, which I’ll post as well.

UPDATE: I’ve actually posted this library to SourceForge as part of my DooM port. If you’re interested you can check it out here! As always, if you find any errors, omissions, have suggestions or improvements, please comment below or email me. I want these tutorials to be as correct as possible.

Playing MIDI Files in Windows (Part 3)

Jump to: Part 1, Part 2, Part 3, Part 4, Part 5

In the last article we demonstrated playing a very simple single track MIDI file. In this article we will build on that and add support for multiple tracks. This example will still only support very small MIDI files and is only guaranteed to work correctly for the provided example MIDI file. Related source and example files can be downloaded here: mididemo.zip

In the previous example, each event was preceded by a delta-time value. This delta-time value is the amount of time to wait before executing the MIDI event. This time is relative to the previous event, so a delta-time of say 10 ticks means “wait 10 ticks from the last event before executing the next event”. In the single track example, this was pretty straightforward.

In multiple track MIDI files this becomes a little more complicated. The delta-time values are relative to the previous event in the same track. So for example, if one track executes an event every 10 ticks and another simultaneous track executes an event every 25 ticks, they will need to be merged (or mixed) into a single stream and each delta-time will need to be adjusted based on the events from both tracks. For example:

 Ticks   |   Track 1    |   Track 2    |   Merged
---------------------------------------------------
  0      |   0 - e1     |   0 - e1     | 0  - t1e1
         |              |              | 0  - t2e1
  5      |              |              |
  10     |   10 - e2    |              | 10 - t1e2
  15     |              |              |
  20     |   10 - e3    |              | 10 - t1e3
  25     |              |   25 - e2    | 5  - t2e2
  30     |   10 - e4    |              | 5  - t1e4
  35     |              |              |
  40     |   10 - e5    |              | 10 - t1e5
  45     |              |              |
  50     |   10 - e6    |   25 - e3    | 10 - t1e6
         |              |              | 0  - t2e3
  ...

These two tracks begin simultaneously, executing their first events at tick 0. Ten ticks later an event from the first track is executed, followed by another after another 10 ticks.

Now it starts to get interesting. Because the second event from track 2 is executed after 25 ticks, we must take into account the events from track 1 that have been executed so far and subtract that value from the delta-time for the track 2 event. Therefore, when the tracks are merged, the second track 2 event will need to be executed only 5 ticks after the third track 1 event. Also, since the second event from track 2 is executed 5 ticks after the third event from track 1, the delta-time for the fourth event from track 1 must be adjusted to account for that and is executed 5 ticks later.

At this point in the example things get back on track until the 50 tick mark, when two events must be executed simultaneously again. A total of 10 ticks must pass before executing one of the events, and because they are to be executed simultaneously, 0 ticks must pass before executing the second event.

This gets seemingly more complicated as more tracks are added, however the algorithm is relatively simple. A counter is needed to keep track of the absolute time from the beginning of the score. Another counter for each track is also needed to keep track of the absolute time processed so far within the track. The algorithm goes something like this:

  1. Loop through each track.
    1. If a track is at the end-of-track marker, skip the track.
    2. If all the tracks are at the end-of-track marker, we are done.
  2. Select the event closest to the current absolute time.
    1. Extract the event.
    2. Advance the track pointer to the next event in the track.
    3. Advance the absolute time for the track by the extracted event’s delta-time.
  3. The difference between the absolute time for the track and the absolute time for the score is used as the new delta-time for the event.
  4. The absolute time for the score is advanced by the new delta-time.
  5. The delta-time and event are added to the stream as in the previous example.
  6. Continue from step 1.

To put this into practice we’ll introduce some new structures and helper functions.

struct trk {
	struct _mid_track* track;
	unsigned char* buf;
	unsigned char last_event;
	unsigned int absolute_time;
};

The trk structure will keep track of the processing of each MIDI track. The track field is a pointer to the Track header as described in the last article. buf will be initialized to point to the first delta-time of the first event. The absolute_time field will be initialized to 0 and will keep track of the absolute time of all processed events. The last_event field we won’t worry about for now. It will be used in the next article.

struct evt {
	unsigned int absolute_time;
	unsigned char* data;
	unsigned char event;
};

The evt structure will be used to represent the extracted event data from the track. The absolute_time field will represent the absolute time of the event from the beginning of the track. The data field will point to the beginning of the event (first byte past the delta-time value). The event field will store the event byte.

unsigned short swap_bytes_short(unsigned short in)
{
	return ((in << 8) | (in >> 8));
}
unsigned long swap_bytes_long(unsigned long in)
{
	unsigned short *p;
	p = (unsigned short*)&in;
	return (  (((unsigned long)swap_bytes_short(p[0])) << 16) |
				(unsigned long)swap_bytes_short(p[1]));
}

Because numeric data in MIDI files is stored in big-endian format, swap_bytes_short() and swap_bytes_long() are helper functions used to convert the numeric data to little-endian format.

struct evt get_next_event(const struct trk* track)
{
	unsigned char* buf;
	struct evt e;
	unsigned int bytesread;
	unsigned int time;
	buf = track->buf;
	time = read_var_long(buf, &bytesread);
	buf += bytesread;
	e.absolute_time = track->absolute_time + time;
	e.data = buf;
	e.event = *e.data;
	return e;
}

The get_next_event() helper function simply reads the next event from the track. It does not advance the track pointer or absolute time, it simply returns the next event available. If the returned event is the one that will be selected, the track pointer will have to be advanced later. This function uses read_var_long() as discussed in the previous article to read the delta-time from the track buffer. That value is added to the track’s absolute_time to get the event’s absolute_time value. The data field is set to the next byte past the delta-time in the track buffer and the event byte is copied into the event field.
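
read_var_long() was covered in the previous article; for reference, a minimal sketch of the variable-length-quantity decoding it performs, consistent with how it is called here, might look like this:

// A sketch of the variable-length decoding performed by read_var_long().
// MIDI stores delta-times 7 bits per byte, most significant group first;
// the high bit of each byte marks a continuation byte.
unsigned int read_var_long(unsigned char* buf, unsigned int* bytesread)
{
	unsigned int var = 0;
	unsigned char c;

	*bytesread = 0;
	do {
		c = *buf++;
		(*bytesread)++;
		var = (var << 7) | (c & 0x7f);
	} while(c & 0x80);

	return var;
}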

int is_track_end(const struct evt* e)
{
	if(e->event == 0xff) // meta-event?
		if(*(e->data + 1) == 0x2f) // track end?
			return 1;
	return 0;
}

Finally, the is_track_end() helper function determines if the returned event is the end-of-track marker.

unsigned int get_buffer_ex7(unsigned char* buf, unsigned int len, unsigned int** out, unsigned int* outlen)
{
	struct _mid_header* hdr = NULL;
	struct trk* tracks = NULL;
	unsigned short nTracks = 0;
	unsigned int i;
	unsigned int* streambuf = NULL;
	unsigned int streambuflen = 1024;
	unsigned int streamlen = 0;
	unsigned char* tmp = buf;
	unsigned int currTime = 0;
	streambuf = (unsigned int*)malloc(sizeof(unsigned int) * streambuflen);
	memset(streambuf, 0, sizeof(unsigned int) * streambuflen);
	hdr = (struct _mid_header*)tmp;
	tmp += sizeof(struct _mid_header);
	nTracks = swap_bytes_short(hdr->tracks);
	tracks = (struct trk*)malloc(nTracks * sizeof(struct trk));
	for(i = 0; i < nTracks; i++)
	{
		tracks[i].track = (struct _mid_track*)tmp;
		tracks[i].buf = tmp + sizeof(struct _mid_track);
		tracks[i].absolute_time = 0;
		tracks[i].last_event = 0;
		tmp += sizeof(struct _mid_track) + swap_bytes_long(tracks[i].track->length);
	}
	while(TRUE)
	{
		unsigned int time = (unsigned int)-1;
		unsigned char cmd;
		unsigned int msg = 0;
		unsigned int idx = -1;
		struct evt evt;
		// get the next event
		for(i = 0; i < nTracks; i++)
		{
			evt = get_next_event(&tracks[i]);
			if(!(is_track_end(&evt)) && (evt.absolute_time < time))
			{
				time = evt.absolute_time;
				idx = i;
			}
		}
		if(idx == -1)
			break; // we're done
		evt = get_next_event(&tracks[idx]);
		tracks[idx].absolute_time = evt.absolute_time;
		time = tracks[idx].absolute_time - currTime;
		currTime = tracks[idx].absolute_time;
		cmd = *evt.data++;
		if((cmd & 0xf0) != 0xf0) // normal command
		{
			msg = ((unsigned long)cmd) |
				  ((unsigned long)*evt.data++ << 8);
			if(!((cmd & 0xf0) == 0xc0 || (cmd & 0xf0) == 0xd0))
				msg |= *evt.data++ << 16;
			streambuf[streamlen++] = time;
			streambuf[streamlen++] = msg;
			tracks[idx].buf = evt.data;
		}
	}
	*out = streambuf;
	*outlen = streamlen;
	free(tracks);
	return 0;
}

This function first parses the MIDI header to determine how many tracks are in the file. It then allocates an array of that number of trk structures. Each trk structure is initialized with the beginning of the track data for each track within the buffer.

Once the trk structures are initialized, processing begins. We loop through each trk structure looking for the next event that is not the end-of-track marker and has the lowest absolute_time value. If more than one event has the same absolute_time, the first event is chosen. (The next iteration will choose the next event, and so on.) If all the tracks are at the end-of-track marker, our stream buffer is filled and we can return to the caller.

With the next event in hand, we update the absolute time for the track from which the event came to be the absolute time of the event. We calculate the new delta-time for the event by subtracting the score absolute time value from the track absolute time value. Finally, we update the absolute time for the score to be the absolute time of the event (track).

The event is processed in the same way as described in the previous article. However, the data field in the evt structure is used to read from the track buffer to determine where the data bytes are and find the beginning of the delta-time for the next event in the track. Once the event is processed, evt.data is pointing to the beginning of the next delta-time. The buf field of the trk structure is advanced to this location.

unsigned int example7()
{
	unsigned char* midibuf = NULL;
	unsigned int midilen = 0;
	unsigned int* streambuf = NULL;
	unsigned int streamlen = 0;
	unsigned int err;
	HMIDIOUT out;
	const unsigned int PPQN_CLOCK = 5;
	unsigned int i;
	err = midiOutOpen(&out, 0, 0, 0, CALLBACK_NULL);
	if (err != MMSYSERR_NOERROR)
		printf("error opening default MIDI device: %d\n", err);
	else
		printf("successfully opened default MIDI device\n");
	midibuf = load_file((unsigned char*)"example7.mid", &midilen);
	if(midibuf == NULL)
	{
		printf("could not open example7.mid\n");
		return 0;
	}
	get_buffer_ex7(midibuf, midilen, &streambuf, &streamlen);
	i = 0;
	while(i < streamlen)
	{
		unsigned int time = streambuf[i++];
		Sleep(time * PPQN_CLOCK);
		err = midiOutShortMsg(out, streambuf[i++]);
		if(err != MMSYSERR_NOERROR)
			printf("error sending command: %d\n", err);
	}
	midiOutClose(out);
	free(streambuf);
	free(midibuf);
	return 0;
}

Because the multiple tracks have been merged into a single stream, the code that sends the messages to midiOutShortMsg() is exactly the same as in the previous example.

In this article we looked at merging multiple MIDI tracks into a single stream. In the next article we will look at supporting more advanced features such as running mode and META events. We will also look at a more precise mechanism for timing MIDI events instead of using the Sleep() function. The result will be a (mostly) complete example capable of playing (most) MIDI files thrown at it.
