avatar
Tarvis: It makes sense, but the trick is to figure out the logic behind how the game chooses which one. Because I've heard CR halfway through missions, and others with AX right off the bat. It might have to do with the number of craft around or other such things.
Might it be distance- and ship-type based? I mean:
CR: No enemy or neutral ships of any kind in range (6 km? the whole sector?). I think it happens a lot in those missions about protecting a rendezvous of rebel ships before any enemy arrives.
AX: There are enemy or neutral ships in range, but they aren't fighters. If I remember right, this one happens a lot in minefield missions.
DG: Enemy fighter closer than 6km.
CH: Enemy fighter targeted and in front of us.

The last two seem to follow a pattern like DGxy, where x goes from 0 to 3, forming a complete melody, and the y values are alternatives for each x. For example, all DG0y sound like beginnings, DG1y sound like continuations of the previous part, and DG3y sound like conclusions of the melodies and connections to the next one.
Maybe the loop tables customize this random musical construction into presets that the musician especially liked?
I've done some testing and I've determined that CR kicks in when you are 25km or more from enemies. That trumpet track I mentioned loses more and more notes the closer you get to 40km from enemies, at which point the trumpet track is completely gone.

My guess then is that the presence of neutral (but not hostile) craft is what causes AX to play. I'm pretty sure that's it. I'll go back to confirm, but having just guessed it, it makes sense, since AX tends to play in rendezvous missions or missions with cargo in them. Perhaps mines don't count as "hostile craft"; they don't count for laser hits in the debriefing, so that would make sense.

So, it works like this:
CR - base calm track. We'll call it "cruise" to match the filename.
AX - neutral craft within 25km. Let's call it "anxious", since you usually expect hostiles to show up later when it's used.
DG - "battle" state - plays when hostiles are within 25km
CH - "chase" state. Targeted craft near sight. Probably a degree cone. Will have to check the distance requirement. (My guess is it's laser range)
FL - "No shield" state. Seems to play for a bit then return to CR when out of battle. Might just check at certain time intervals if you have shields or not.
SU - "Success" state, but it only seems to play when hypering out. I would be fine with it playing in-mission like in TIE Fighter. I did some testing and it doesn't play when there are no hostiles, either.
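To make the guessed rules concrete, here's a quick sketch in Python. The function name, inputs, and the priority ordering are all just my reading of the notes above, not confirmed game logic:

```python
# Hedged sketch of the inferred music-state rules. The 25km threshold and
# the priority ordering are guesses from in-game testing, nothing more.

def pick_music_state(hostiles_in_25km, neutrals_in_25km,
                     target_in_front, shields_up, hypering_out):
    """Return the guessed iMuse state tag for the current situation."""
    if hypering_out:
        return "SU"           # success fanfare, only on hypering out
    if not shields_up:
        return "FL"           # "no shield" state (momentary in-game)
    if hostiles_in_25km:
        if target_in_front:
            return "CH"       # chase: hostile targeted and in sight
        return "DG"           # battle: hostiles within range
    if neutrals_in_25km:
        return "AX"           # anxious: neutral craft nearby
    return "CR"               # cruise: nothing within range

print(pick_music_state(False, True, False, True, False))  # AX
```

Obviously the real game keeps more state (FL returns to CR after a while, for instance), but this captures the selection logic as described.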

As for DG0y, DG1y, DG2y, etc. It's completely arbitrary, defined only by that loop table. So yes, the musician is in charge of what parts form complete melodies.
Post edited November 12, 2014 by Tarvis
avatar
Azrapse: The original missions use an index-based list of ships. So for example, ship type 19 is a B-Wing, and ship type 1 is an X-Wing. We could add an unlimited number of ships to that list, but then custom missions that used to replace old ships with new ones would need to be rewritten to account for the new indices of every ship.
For the future, it would be better to create a new mission file format that doesn't use an index-based lookup table for these things, and instead uses some other unique identifier like "xwing" or "bwing" instead of numbers. All of this is totally possible but, for now, I think it's better if we focus on having the original assortment of ships working. :)
With a MAPINFO-like system of extra data, you could have mission-specific translation of indices into ships. Something like this:

ships
{
1 = xwing
2 = bwing
3 = tiefighter
4 = freighter
}

(Or whatever values are appropriate.)

Then you don't need to rewrite the entire custom mission, only a small text file. Especially small if you only need to list the ships that have actually been changed from the default.
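A loader for that kind of override block could be tiny. Here's a rough Python sketch of parsing it (the block syntax and function name are hypothetical, of course):

```python
import re

def parse_ship_overrides(text):
    """Parse a MAPINFO-style 'ships { 1 = xwing ... }' block into a dict
    mapping mission ship indices to ship identifiers. Hypothetical format."""
    match = re.search(r"ships\s*\{([^}]*)\}", text)
    if not match:
        return {}
    overrides = {}
    for line in match.group(1).splitlines():
        line = line.strip()
        if "=" in line:
            index, name = line.split("=", 1)
            overrides[int(index)] = name.strip()
    return overrides

block = """
ships
{
1 = xwing
2 = bwing
}
"""
print(parse_ship_overrides(block))  # {1: 'xwing', 2: 'bwing'}
```

Any index not listed would simply fall through to the default table.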

As the saying goes, any problem can be solved by adding another layer of indirection. :)
Post edited November 12, 2014 by Gaerzi
Frankly, I think making a new mission format would be best. It would both surpass the old restrictions and allow a more modern, more understandable format. A translation table like that would be best for compatibility with currently existing missions, but using the new mission format would be encouraged.

Also, there are already 4-character short names (T/I, X-W, CORT, FRG, STD, etc.) defined for all ships, right? Those could be used instead of long names.
Interested in your project. Will follow your progress.
avatar
Tarvis: Okay, I have the full inflight loop table deciphered. I have narrowed the inflight music down to 9 states. Essentially, I noticed every table had 9 lines after it that had no definition name, and many had the same referenced name on the same line. So, the best explanation is that there are 9 States defined in order, and each State has a transition set for every possible other State it could have come from.

[...]

As for the beginning silent measures, I can probably just have a program run through each file and remove them. They could be an artifact from an imperfect conversion, and not defined in the song piece itself.
I just want to say that you did awesome work there.
I spent the evening yesterday trying to write a little program that resembles the in-flight music system:
Basically, I have entered your table into a two-level graph, as attached.
The higher level graph has a main node for each of the main tables or themes: Event, Cruise, Dogfight, Anxiety, Chase, Success, Failing, Training, and Death Star.

The game moves between these main nodes following certain game events, as you detailed. When no transition is specified, the game just finishes playing the currently active chunk in the current main node, moves to the destination main node, and starts playing from the first chunk listed in that theme table.
If a transition is specified between two nodes, the game instead plays the chunk indicated in the transition. In the graph I have written the transitions as [theme].[chunk].
For example, when transitioning from Anxiety to Chase, there is a transition defined, CH.AX-D, which means the game plays the AX-D chunk found in the Chase table.
By contrast, when transitioning from Chase to Death Star, there is no transition defined, so the game will play the first chunk defined in the Death Star table.
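To illustrate, the two-level lookup could be sketched like this in Python. Everything in the tables besides CH.AX-D is made up for the example:

```python
# Sketch of the two-level transition lookup: a per-theme chunk list, plus an
# optional (from, to) transition table that overrides the default entry point.
# Only the Anxiety -> Chase transition (CH.AX-D) is taken from the real data.

THEME_CHUNKS = {
    "Chase":     ["CH00", "AX-D"],   # first chunk is the default entry point
    "DeathStar": ["DS00", "DS01"],
}
TRANSITIONS = {
    ("Anxiety", "Chase"): ("Chase", "AX-D"),  # play AX-D from the Chase table
    # no ("Chase", "DeathStar") entry: fall back to the first DeathStar chunk
}

def next_chunk(from_theme, to_theme):
    """Return (theme, chunk) to play when moving between two main nodes."""
    if (from_theme, to_theme) in TRANSITIONS:
        return TRANSITIONS[(from_theme, to_theme)]
    return to_theme, THEME_CHUNKS[to_theme][0]

print(next_chunk("Anxiety", "Chase"))    # ('Chase', 'AX-D')
print(next_chunk("Chase", "DeathStar"))  # ('DeathStar', 'DS00')
```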

The Event node is special and works differently. As you described, the Event table lists a chunk of what to play for each of the 9 different events that the game can throw. Event chunks are played without a "coming in" transition.
However, after the event chunk is played, they have a "going back" transition. The game transitions back to whatever theme table is the active one following the transition chunk indicated in every theme table for each of the 9 event types.

This seems to be fine and I think it actually mimics what happens in the original game. If you see any mistake, please tell me.

The problem that I am finding now is that when actually trying to get the program to play the music, it is incredibly frustrating because:
- The .mid files have a 1920-tick period of pure silence at the beginning. That obviously disrupts the whole playback. I guess the .mid data could be edited on the fly to remove that silence.
- The MIDI synthesis depends on what kind of soundfont is used or installed on the user's machine. Soundfonts are copyrighted and I don't think we should include one. The default Microsoft GM sound tables are just okay, but it still somehow sounds like crap in my example, and I don't know why.
- I have experimented with converting the MIDIs into MOD, IT, S3M or XM module files to ensure they sound good. It works, and it works nicely. But either I am a noob (which I am with respect to music and music files) or I don't know how to keep all the sound samples in one single file instead of in every chunk. Currently every file contains its own copy of the sound samples, which pushes the file size to about 250KB each.
- I have also experimented with rendering the MIDI files to lossy wave formats, in particular OGG. It sounds nice, and it is smaller than the MOD files, around 80KB per chunk.
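For the first point, the silence-stripping is simple if you think of a track as a list of (delta-ticks, event) pairs. A rough Python sketch (not a real .mid parser, just the delta-time arithmetic; the 1920-tick figure comes from the observation above):

```python
# Sketch of stripping the leading 1920-tick silence from a MIDI track,
# modelled as a list of (delta_ticks, event) pairs. Assumes the gap is pure
# delta time before the first note; events inside the gap are kept so that
# any setup messages still fire, just shifted to tick 0.

def strip_leading_silence(track, silent_ticks=1920):
    """Remove up to silent_ticks of delta time from the start of a track."""
    remaining = silent_ticks
    out = []
    for delta, event in track:
        cut = min(delta, remaining)
        remaining -= cut
        out.append((delta - cut, event))
    return out

track = [(1920, "note_on C4"), (480, "note_off C4")]
print(strip_leading_silence(track))  # [(0, 'note_on C4'), (480, 'note_off C4')]
```

A real tool would of course have to parse the .mid chunk structure first; this only shows the timing adjustment.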

The more time I spend on this the more I realize that I am not the right person to decide over this. Music is not my field, and I should instead focus on the flight sim. But I want to have this ready because we are almost there with the iMuse system simulator.
What is your opinion?
I still don't know if there would be a legal problem with us creating wave files from the original MIDI files and distributing them along with the XWVM. I'm pretty sure sheet music and music recordings are copyrighted, but I'm not so sure what happens if we make our own recording based on a MID file.

PS: Thinking twice about the MOD files, maybe we could keep all the chunks in a single MOD file, one after another, then record somewhere the offsets of the start and end points of every chunk. The music player would then just need to jump to the correct offset during playback. That would let us keep a single copy of all the sound assets. But I am not sure whether jumping around in a MOD file is as easy as it is in a wave file.
Some notes about transitions:

Every "node" has a transition defined for ANY other possible source "node." That's what those 9 lines at the end of each table graph are. They are the transitions for each node in order. So the first line is the transitions from "Cruise", the second the transitions from "Battle", and so on.

This means, for Battle (DG) for example, the following transitions are defined:

From Cruise: CR-D
From Battle: Randomly picks DG01 or DG03
From Chase: Randomly picks DG12, DG13, DG15, or DG18
From Shields Down: Randomly DG12, DG13, DG15, or DG18
From Anxious: AX-D
From Deathstar: DG13
From Success: DG13
From Training: DG13
From an event cue: DG13

Some transitions don't make much sense because they will never happen (e.g. training to battle, or deathstar, or battle to battle for example) but they are defined anyway.
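In code, that "into Battle" row of the table could look like this (Python sketch; using random.choice for the multi-entry rows matches the observed behavior as far as I can tell):

```python
import random

# The "into Battle (DG)" transition row, transcribed from the table above.
# Rows with several entries are picked from at random, as observed.
INTO_BATTLE = {
    "Cruise":      ["CR-D"],
    "Battle":      ["DG01", "DG03"],
    "Chase":       ["DG12", "DG13", "DG15", "DG18"],
    "ShieldsDown": ["DG12", "DG13", "DG15", "DG18"],
    "Anxious":     ["AX-D"],
    "DeathStar":   ["DG13"],
    "Success":     ["DG13"],
    "Training":    ["DG13"],
    "EventCue":    ["DG13"],
}

def battle_entry_chunk(from_state, rng=random):
    """Pick the chunk that plays when entering Battle from from_state."""
    return rng.choice(INTO_BATTLE[from_state])

print(battle_entry_chunk("Cruise"))  # always 'CR-D'
```

The full system would have one such row per destination state, which is exactly what those 9 lines per table encode.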

The one thing left in the tables that I can't figure out is how it handles duplicate music piece definitions. You might notice there are two separate tables for the Training block, for example. I think definitions lower in the list override earlier ones. For example, the second one has TR00 defined while the first does not, yet TR00 always plays at the start of the Training track. So those duplications can just be cleaned up manually.

As for playing the MIDIs properly without the gap, we can get away with just editing the .MID files to remove the silence as a stopgap for now. But eventually the goal would be to play them directly from the .GMID files in GMIDI.LFD, which will probably involve constantly streaming the data instead of sending whole files to the MIDI player. I don't know much more about how to do this, sorry...

Also, that is why I wanted to advocate for supporting the Adlib/MT-32 music: both have GPLed open-source code/libraries that can emulate them, and they will sound the same on any system, unlike MIDI configurations. In fact, I think X-Wing was composed for the MT-32 (more specifically the CM-64) first and ported to other devices later. For General MIDI, there are several public domain soundfonts that could be distributed, though I think the soundfont should be left up to the user's own system MIDI setup rather than bundled with the program.

I've also gotten further on the concourse music. The only thing to be aware of is that the playback method must support MIDI loop points, because most of them have an "intro" part and then a "looping" part all within the same track. For digital music/.ogg implementation, those sections could just be made as separate files entirely in the case of modding.
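The intro-then-loop playback is just modular arithmetic once you know the loop points. A quick Python sketch (the tick values are invented for the example):

```python
# Minimal sketch of intro + loop playback using tick positions. The loop
# point values here are made up; real ones would come from markers in the
# MIDI file (or, for .ogg, from a separate definition).

def playback_position(elapsed_ticks, loop_start, loop_end):
    """Map elapsed time to a position in an intro+loop track: play through
    once, then cycle the [loop_start, loop_end) region forever."""
    if elapsed_ticks < loop_end:
        return elapsed_ticks
    loop_len = loop_end - loop_start
    return loop_start + (elapsed_ticks - loop_end) % loop_len

# Intro is ticks [0, 960); loop region is [960, 4800).
print(playback_position(500, 960, 4800))   # 500 (still in the intro)
print(playback_position(4800, 960, 4800))  # 960 (wrapped to loop start)
```

For the split-files approach mentioned above, the same arithmetic just decides when to switch from the intro file to the looping file.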

As for the legality of recording the MIDI music, well, several mods (TF conversion for XWA) and the TIE Fighter Reconstructed do that and they haven't had any legal trouble, in fact the latter is pinned in GoG's TIE Fighter forum. I think it's fine as long as it isn't being sold. But in my opinion such a thing should be a separate addon pack instead of being bundled.

Failing that, we can sorta cheat and just make a definition file that tells the engine where the loop points are for each track.

Edit: Also, I've looked at TIE Fighter's loop table. It's a little different in that each song reference name and filename is 8 characters long instead of 4, and the transitions corresponding to each State are referenced by number instead of being all defined for each State.

Final news is that I've done more research on the triggers for events. I've confirmed that Anxious is definitely when neutral craft are present, even if they are "hostile" such as Blue pirates. But if an Imperial shows up (red) then it goes to Battle instead.

Also, craft type seems to affect the range for when the Battle track kicks in. For cruisers and larger it seems to be within 25km, but for fighters it's something low like 2km. I'll get the exact number on that later.
Post edited November 13, 2014 by Tarvis
It's important to keep in mind that you have to approach an "in-flight-iMuse-simulator" for digital audio differently than a MIDI one. In MIDI you just put each snippet of the flight music back to back and the MIDI device just plays it back as one seamless piece of music.

With digital audio it's a bit different, because you have to take release trails into account. A release trail is basically the sound of instruments stopping to play plus any reverb, which is never instantaneous. So just placing the digital audio bits back to back like the MIDI ones will result in very harsh cuts in the audio between the snippets of music. The solution would be to allow the different pieces of music to overlap where they change from one bit to the next.

For example, let's assume each snippet of music is four measures long. The engine knows that after 4 measures (or so and so many ms) the next piece of music will start playing. Now we can simply make each snippet of music 5 measures long, with the 5th one containing the release trail of the instruments stopping. But the next piece of music still starts after 4 measures, so that each snippet overlaps the next one by one measure, creating a much more fluid and seamless playback.

Now I know nothing about programming, so I don't know how difficult this is. I guess stuff like preloading the music and constantly checking the timing needs to be addressed as well, but in theory I think this might work.
avatar
Tarvis: Some notes about transitions:

Every "node" has a transition defined for ANY other possible source "node." That's what those 9 lines at the end of each table graph are. They are the transitions for each node in order. So the first line is the transitions from "Cruise", the second the transitions from "Battle", and so on.

This means, for Battle (DG) for example, the following transitions are defined:

From Cruise: CR-D
From Battle: Randomly picks DG01 or DG03
From Chase: Randomly picks DG12, DG13, DG15, or DG18
From Shields Down: Randomly DG12, DG13, DG15, or DG18
From Anxious: AX-D
From Deathstar: DG13
From Success: DG13
From Training: DG13
From an event cue: DG13

Some transitions don't make much sense because they will never happen (e.g. training to battle, or deathstar, or battle to battle for example) but they are defined anyway.
Hmm. I think you are right.
But then, the meaning of the transition list changes for the Events table?
If I understood it right:
- In all the other tables, the transition list selects which reference in the table to jump to when coming into that table, depending on which other table we come from.
- In the Event table, the transition list selects which reference in the table to jump to depending on the kind of event that was triggered. Afterwards, it goes back to the table it was in before the event happened.

Am I right?

avatar
Tarvis: As for playing the MIDIs properly without the gap, we can get away with just editing the .MID files to remove the silence as a stopgap for now. But eventually the goal would be to play them directly from the .GMID files in GMIDI.LFD., which will probably involve constantly streaming the data instead of specifically sending whole files to the MIDI player. I don't know much more about how to do this, sorry...
Well, I think that, thanks to your efforts, we are at a sweet spot with the musical system now: we know how almost all of it works, but there are several difficult decisions to take with respect to how to implement it.
I would opt for the least-effort option, even if it's not the best, so that we keep our momentum and get something working fast and nice, even if it could be made better at a later point, when we have more time and hands helping.
I'd propose generating small wave files, .ogg or similar, and stream them on the fly.

They can be read as individual sound data arrays and added to the playback buffer in real time with no effort.
I think it's the most straightforward approach and the simplest to program. Also, it will sound as good as the best soundfont we can find to generate the oggs.

I don't really feel like figuring out how to stream live MIDI sound while keeping good sound quality and low CPU use, or how to synthesize MT-32 or Adlib, while the 3D flight engine sits there waiting to be developed.
Later on we can come back to this and polish it further.
If someone could please export the MIDIs to ogg or mp3 or wav after removing the silences at the beginning, we could have something running in no time.

Meanwhile I will go back to the flight simulation. I am working on having the "Rescue Ackbar" historical mission working, as a working template. It has several interesting concepts to test the AI, while keeping the number of different ships small. Only 3 Y-W, 3 squadrons of T/F, 4 SHU and a frigate. I would like to have it ready to be seen from the map perspective flown by the rudimentary AI by next week.
Here's some info about iMUSE, gleaned from its patent specs: http://www.google.com/patents/US5315057

This is actually quite helpful, because it tells us exactly the commands that are expected to be given outside of the standard MIDI commands:


Jumping/looping
md_jump(sound, chunk, beat, tick)
md_set_loop(sound, count, start_beat, start_tick, end_beat, end_tick)
md_clear_loop(sound)

The jump and set loop points are probably defined by SysEx messages in the MIDI file. Clear-loop is probably given by the engine to proceed in a song to a later part.


md_scan(sound, chunk, beat, tick)

I think this is how those 2 seconds of opening silence are skipped. The patent specs say this skips silence in the file until a musical note is found, while ensuring that MIDI system commands in that silent gap are handled. It was probably used so that the composers had plenty of room to edit in commands at the beginning without having to worry about overwriting or pushing back the actual music data. So, this would be called every time a new file is played.


Enabling/disabling instruments like the concourse doors do
md_set_part_enable(sound, chan, state)
md_set_part_vol(sound, chan, vol)

Seems straightforward. Chan tells which MIDI channel to enable/disable. In the main concourse song, the Training door flute is Channel 4, the Historical Mission trumpet is Channel 7 and the TOD Desk trumpet is Channel 6. For recorded music implementation, we can just split those channels into separate music files.


md_set_hook(sound, class, val, chan)
md_enqueue_trigger(sound, marker_id)
md_enqueue_command(param1, ..., param7)
md_clear_queue()
md_query_queue(param)

This is probably how the inflight state transitions and event interludes are handled. State changes are placed into a queue and played at an appropriate time (such as the end of a measure). So momentary event cues like Ship Arrival are probably most simply done by queuing the Event song and then queuing the current state after it, so it goes back.
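A minimal sketch of that queueing idea in Python (the chunk and state names are illustrative, not taken from the game data):

```python
from collections import deque

# Sketch of the guessed event handling: a momentary event cue is queued,
# immediately followed by the current state, so after the event interlude
# the music returns to where it was.

def queue_event_cue(queue, event_chunk, current_state):
    """Queue an event interlude followed by a return to the current state."""
    queue.append(("event", event_chunk))
    queue.append(("state", current_state))

queue = deque()
queue_event_cue(queue, "EV-ShipArrival", "Battle")
print(list(queue))  # [('event', 'EV-ShipArrival'), ('state', 'Battle')]
```

The player would then consume this queue at musically appropriate points, such as measure boundaries.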


The ones we probably don't need to care about:
md_set_vol(sound, vol)
md_fade_vol(sound_number, vol, time)

These are just standard volume changes. We can define them on our own easily.


md_set_speed(sound, speed)

I don't think X-Wing ever calls for speed changes outside of the tempo changes defined regularly in the MIDI files, so set_speed can probably be ignored.


md_set_transpose(sound, rel_flag, transpose)
md_set_detune(sound, detune)
md_set_pan(sound, pan)

I don't think X-Wing even uses these. It was probably used a lot in the Monkey Island games, but that's not our problem!


md_set_priority(sound, priority)

Used for playing sound effects and music on the same soundcard, accommodating their limited channels. We don't need to worry about that today!


Also, the .GMID files are actually nothing more than MIDI Type 2 with some extra information in front of the "MThd" MIDI header. So, if you wanted to parse one directly with the MIDI player, just skip everything until "MThd" is encountered, at least until we figure out what the extra header data signifies. So special format interpretation beyond figuring out what each SysEx command does is probably not necessary. Note that Type 2 MIDIs can contain several songs in one. For example, the main concourse MIDI file contains 4 tracks: the main track, the transition from the Register room, taking the shuttle to a new ship, and one other one I can't quite place.
Post edited November 13, 2014 by Tarvis
avatar
Laserschwert: It's important to keep in mind that you have to approach an "in-flight-iMuse-simulator" for digital audio differently than a MIDI one. In MIDI you just put each snippet of the flight music back to back and the MIDI device just plays it back as one seamless piece of music.

With digital audio it's a bit different, because you have to take release trails into account. A release trail is basically the sound of instruments stopping to play plus any reverb, which is never instantaneous. So just placing the digital audio bits back to back like the MIDI ones will result in very harsh cuts in the audio between the snippets of music. The solution would be to allow the different pieces of music to overlap where they change from one bit to the next.

For example, let's assume each snippet of music is four measures long. The engine knows that after 4 measures (or so and so many ms) the next piece of music will start playing. Now we can simply make each snippet of music 5 measures long, with the 5th one containing the release trail of the instruments stopping. But the next piece of music still starts after 4 measures, so that each snippet overlaps the next one by one measure, creating a much more fluid and seamless playback.

Now I know nothing about programming, so I don't know how difficult this is. I guess stuff like preloading the music and constantly checking the timing needs to be addressed as well, but in theory I think this might work.
Good point. I guess it depends on how the digital parts are generated. If some reverb and trailing is there after the end of the last measure, then it will indeed sound cut off.
I don't think, though, that we would have many problems. We need to know how many measures there are in every piece and how long a measure is. Then, as you say, we need to glue the next piece onto the current one exactly at the point where the current one is supposed to end, not where its audio actually ends because of its trailing.
If I am not mistaken, digital audio consists of a stream of float values modelling the waveform at a certain sampling frequency. So for 44.1kHz digital audio, there are 44,100 samples per second.
It is just some easy arithmetic to find out at which particular point the next piece should start. The overlapping audio data between the two pieces can be merged by just adding the values in the audio buffer.
Again, if I am not mistaken.

It would help if most audio pieces would have similar length. Otherwise, we will need to have a file somewhere stating the length of every piece to be able to know where the next piece is supposed to be merged.
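The arithmetic really is easy. A Python sketch, assuming float sample data; the measure lengths are made up for the example:

```python
# Sketch of the overlap arithmetic: find the sample index where the next
# piece should start (the musical end of the current piece, excluding its
# release trail) and mix by simple addition of the float sample values.

SAMPLE_RATE = 44100  # 44.1 kHz: 44,100 samples per second

def musical_end_sample(measures, seconds_per_measure):
    """Sample index of the musical end of a piece (trailing excluded)."""
    return int(measures * seconds_per_measure * SAMPLE_RATE)

def mix_overlap(current, next_piece, start_sample):
    """Mix next_piece into current starting at start_sample, summing the
    overlapping samples and extending the buffer as needed."""
    pad = max(0, start_sample + len(next_piece) - len(current))
    out = list(current) + [0.0] * pad
    for i, sample in enumerate(next_piece):
        out[start_sample + i] += sample
    return out

# 4 measures at 2 s each end at sample 352800, inside a 5-measure file.
print(musical_end_sample(4, 2.0))  # 352800
print(mix_overlap([0.5, 0.5, 0.5, 0.5], [0.25, 0.25], 2))  # [0.5, 0.5, 0.75, 0.75]
```

A real mixer would also clamp or normalize the summed values to avoid clipping, but the principle is just addition at an offset.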
There's a few ways I can think of to handle that.

Method 1: Definition
There's a definition file for every song and every song piece that tells the engine things like how many measures there are, or where the real end-of-song position (not including trailing) is. But the number of measures would not be helpful if the tempo changes within a segment (as in many transition tracks), because unlike with MIDI data, that information cannot be fed back in real time to keep track of tempo in-engine.

Method 2: ID loop tags
There's an ID3 tag or something similar that .ogg supports for specifying loop points. Specifically, we could use it to mark the musical start and end of every track, to avoid needing a separate definition file.

Method 3: Cheating
I think this is the most elegant. Arbitrarily declare that the last 1 or 2 seconds or so of every digital music file is to be used for trailing. So, when a file reaches 1 second to the end (or whatever we declare the trailing time to be), overlap the next file in the queue.

Also, different segments can have different numbers of measures, so we can't just assume it's 4.

As for handling overlapping, you can probably just get away with having 2 music channels and alternate between them for every music segment. I do not think any of them at all are short enough to finish before the previous segment trailing ends.


And finally, here's the simple logic for handling, with the same queue, both transitions to other music States and continuation of the current one.

1. When the end of the current song segment is reached, check the queue.
2. If it is empty, use RNG to pick from the defined "next" segments using that loop table, and queue that.
3. If it is NOT empty, there's a state transition. Do not queue anything yet.
4. If the next queued segment is an Event Cue, queue the From Event transition segment for the current State after the upcoming Event Cue.
5. Otherwise, it's a regular State change, do not queue anything.
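Those five steps fit in a few lines of Python. A sketch, with an invented stand-in for the decoded loop table:

```python
import random
from collections import deque

# Sketch of the five-step segment-selection logic. NEXT_SEGMENTS and
# FROM_EVENT are illustrative stand-ins for the real decoded loop table.

NEXT_SEGMENTS = {"DG01": ["DG10", "DG11"], "DG10": ["DG20"], "DG11": ["DG20"]}
FROM_EVENT = {"Battle": "DG13"}  # per-state "From an event cue" transition

def on_segment_end(queue, current_segment, current_state, rng=random):
    """Decide what plays next when the current segment finishes."""
    if not queue:
        # Step 2: empty queue, continue the current state at random.
        return rng.choice(NEXT_SEGMENTS[current_segment])
    kind, name = queue.popleft()
    if kind == "event":
        # Step 4: play the event cue, and queue the current state's
        # From Event transition so the music comes back afterwards.
        queue.appendleft(("segment", FROM_EVENT[current_state]))
    # Steps 3/5: a queued state change plays as-is; queue nothing extra.
    return name

queue = deque([("event", "EV-ShipArrival")])
print(on_segment_end(queue, "DG01", "Battle"))  # 'EV-ShipArrival'
print(list(queue))  # [('segment', 'DG13')]
```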
Post edited November 13, 2014 by Tarvis
avatar
Tarvis: There's a few ways I can think of to handle that.

Method 1: Definition
There's a definition file for every song and every song piece that tell the engine things like how many measures there are, or what the real end-of-song (not including trailing) time position is. But, how many measures there are would not be helpful if the tempo changes within a segment (like in many transition tracks) because unlike with MIDI data that information cannot be fed back in real time to keep track of tempo in-engine.

Method 2: ID loop tags
There was an ID3 tag or something similar that .ogg supports that specify where loop points are. Specifically we can use them to define the physical start and end of every track to avoid having to have a separate definition file.

Method 3: Cheating
I think this is the most elegant. Arbitrarily declare that the last 1 or 2 seconds or so of every digital music file is to be used for trailing. So, when a file reaches 1 second to the end (or whatever we declare the trailing time to be), overlap the next file in the queue.

Also, different segments can have different numbers of measures, so we can't just assume it's 4.

As for handling overlapping, you can probably just get away with having 2 music channels and alternate between them for every music segment. I do not think any of them at all are short enough to finish before the previous segment trailing ends.
Thanks, by the way, for the study of the patent.
The third method sounds easy to program. However, it requires that the digital files be generated with that in mind.
So, what do we do?
All three methods would involve someone (or something) having to figure out where the real end of the track is, so I don't think any of them is particularly harder than the others.

After removing the beginning delays I can probably get the time length of each MIDI programmatically, then apply those lengths to the digital files, whichever method is used.

If I'm not busy this weekend I'll try recording every segment. I don't have a $1000 setup, but it's something to use for now to work with implementing the music system. I'll also upload the complete MIDI set, including concourse, cutscenes, and inflight music, in case anybody else wants to do it.
Post edited November 13, 2014 by Tarvis
I'd love to take care of converting the flight music, but I'm not yet sure when I'll find time for it (and right now I'm already working on quick versions of the menu and cutscene music, and those will already take some time). For implementation it should be enough to just create some quick WAVs from the MIDIs and replace them later on with fancier versions. As long as we decide on the "format" of the WAVs (like 1 second of release trail), that should work.