Submissions will be made via Microsoft CMT. The preliminary submission entry for generators is now available; please follow the instructions below to submit your generators.
The preliminary round has two submission deadlines. The first is for generators to submit 200 music phrases for each composer. The second, set about one week later, is for judges to submit their results: the human-composed scores (HCS), which serve as feedback on the phrases produced by the generators. Note that any valid submission in either of these two preliminary rounds counts as completing registration for this challenge, but the submitted content will not affect the final score.
The last few days before the final submission deadline form the evaluation stage, when 200 pairs of beginnings and endings will be released for each composer in the list. Within 48 hours, generator systems should complete these music phrases and submit them as their final submission. These phrases will then be compiled and made available to the judge systems, which in turn have 48 hours to compute the HCS for each phrase and make their final submission. For each composer, a certain number of extra phrases extracted from human-composed pieces will be mixed into the input.
A generator's score is its average HCS across all judges.
Judge performances are ranked by the sum of the following two scores:
- HCS for human-composed phrases.
- 1 − HCS for phrases produced by generators.
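The ranking rule above can be made concrete with a short sketch. The function and variable names below are illustrative, not part of any official toolkit; only the scoring rule itself (HCS for human phrases, 1 − HCS for generated ones, summed) comes from the rules above.

```python
def judge_score(hcs, is_human):
    """Sum HCS over human-composed phrases and (1 - HCS) over generated ones.

    hcs:      list of predicted human-composed scores, each in [0, 1]
    is_human: parallel list of ground-truth flags (True = human-composed)
    """
    return sum(s if human else 1.0 - s for s, human in zip(hcs, is_human))

# A perfect judge scores 1.0 on human phrases and 0.0 on generated ones:
perfect = judge_score([1.0, 0.0, 1.0], [True, False, True])  # 3.0
# A judge that always answers 0.5 earns exactly half the maximum:
chance = judge_score([0.5, 0.5, 0.5], [True, False, True])   # 1.5
```

Note that a judge answering a constant 0.5 gains nothing over the two terms combined, so confident and correct predictions are what separate the rankings.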
For generator models, no supplementary training data is officially provided. Participants should use the compositions of the corresponding composers to develop their systems.
The evaluation dataset generator_evaluation_dataset.zip, designated for the final submission, will be released one week before the deadline in the following format:
```
# Filename: generator_evaluation_dataset.zip
generator_evaluation_dataset.zip/
├── composer1/
│   ├── 001.mid
│   ├── 002.mid
│   ├── ...
│   └── 200.mid
├── composer2/
│   ├── 001.mid
│   ├── 002.mid
│   ├── ...
│   └── 200.mid
├── ...
└── composer8/
    └── ...
```
Each composer folder in this dataset contains 200 pairs of beginning and ending bars, one pair per music phrase. The composer folders keep the same alphanumeric order as the released violin composer list, but the MIDI files inside are either extracted from the composer's compositions (positive samples) or composed by others (negative samples).
```
# Filename: `generator_output_preliminary.zip` or `generator_output_final.zip`
generator_output*.zip/
├── composer1/        # 200 `.mid` files in each folder
│   ├── 001.mid       # 3 digits, zero-padded, starting from 001
│   ├── 002.mid
│   ├── ...
│   ├── 099.mid
│   └── 200.mid
├── composer2/
│   ├── 001.mid
│   ├── 002.mid
│   └── ...
├── ...
└── composer8/
    └── ...
```
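The archive layout above can be produced programmatically. The sketch below, using only the Python standard library, writes eight composer folders of 200 zero-padded `.mid` entries each; the function name is illustrative, and the empty byte strings stand in for real MIDI data.

```python
import zipfile

def build_generator_output(phrases, archive_path="generator_output_final.zip"):
    """phrases: dict mapping a composer folder name to a list of 200 MIDI
    files, each given as raw bytes."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for composer, midi_blobs in phrases.items():
            for i, blob in enumerate(midi_blobs, start=1):
                # File names are 3 digits, zero-padded, starting from 001.
                zf.writestr(f"{composer}/{i:03d}.mid", blob)

# Placeholder bytes stand in for real generated MIDI content.
build_generator_output({f"composer{n}": [b""] * 200 for n in range(1, 9)})
```

The same helper works for both the preliminary and the final submission by changing `archive_path`.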
Whether or not input is given (it is for the final submission; for the preliminary submission it is unspecified), generators are expected to submit their files in a zip archive with the file structure shown above. If the beginning and ending bars are provided (for the final submission), the MIDI metadata (key signature and tempo) should remain the same as provided, even if they hold default values. Otherwise (for the preliminary submission), the generated MIDI files should follow the constraints listed in the tables below.
After each generator submission, all generator outputs will be mixed together by the system and randomly sampled into a new dataset made available to the judges (team information will be anonymized). In addition, a certain number of music phrases extracted from the listed composers' compositions will be added to the corresponding folders. No label will indicate whether a melody was composed by a human, since labeling them is the judges' task.
```
# Filename: `judge_evaluation_dataset_preliminary.zip` or `judge_evaluation_dataset_final.zip`
judge_evaluation_dataset*.zip/
├── composer1/
│   ├── 001.mid
│   ├── 002.mid
│   ├── 003.mid
│   └── ...
├── composer2/
└── ...
```
Judges should estimate the probability that each given music phrase was composed by a human. The output for each phrase is a number between 0 (computer) and 1 (human), with 3 significant digits. For each composer, the text file should have the same number of lines as there are MIDI files for that composer.
```
# Filename: `judge_output_preliminary.zip` or `judge_output_final.zip`
judge_output_*/
├── composer1.txt
├── composer2.txt
└── ...               # up to composer8.txt

# Each line of composerN.txt contains one number; use `\r\n` for line breaks.
# The line count should equal the number of MIDI files in the corresponding
# composer directory. Example:
0.693
0.081
0.973
0.765
0.648
...
```
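A minimal way to emit files in this layout is sketched below. The helper name and the fixed three-decimal formatting are assumptions; the rules only require 3 significant digits and `\r\n` line breaks.

```python
import zipfile

def write_judge_output(scores, archive_path="judge_output_final.zip"):
    """scores: dict mapping a composer name to its list of probabilities,
    one per MIDI file, each in [0, 1]."""
    with zipfile.ZipFile(archive_path, "w") as zf:
        for composer, probs in scores.items():
            # One number per line, three digits after the point, \r\n breaks.
            body = "".join(f"{p:.3f}\r\n" for p in probs)
            zf.writestr(f"{composer}.txt", body)

write_judge_output({"composer1": [0.693, 0.081, 0.973]})
```

Writing the line breaks explicitly (rather than relying on the platform default) keeps the output identical on Windows, macOS, and Linux.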
MIDI files used as input or output by either role should meet the following requirements. The table below lists the requirements on MIDI metadata.
| # | Item | Requirement |
|---|------|-------------|
| 1 | Track | Single track (MIDI format 0). |
| 2 | Instrument | Violin (instrument 41 of the 128 standard GM1 instruments). |
| 3 | Tempo | Constant tempo; no changes. |
| 4 | Pitch | No pitch lower than G3 (note 43), the lowest pitch of the violin. |
| 5 | Note velocity | Constant velocity; no changes. |
| 6 | MIDI CC | No pitch-bend CC or any special articulation such as sustain (CC 64). |
| 7 | Time signature | 3/4 or 4/4; no changes. |
| 8 | Key signature | One key signature specified, otherwise treated as C major; no changes. |
In addition, the MIDI files should also satisfy the following constraints (not in the metadata):
| # | Item | Requirement |
|---|------|-------------|
| 9 | Bar | Number of bars: 8 to 16. |
| 10 | Voice | At most 4 voices, which should also be playable on the violin. |
| 11 | Beginning & end | For the generator model's output, the beginning and ending bars should remain the same as given. |
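Most of these constraints can be checked mechanically before submitting. The sketch below validates an already-extracted metadata summary; the dict keys and function name are assumptions, and a real checker would first parse the MIDI file (for example with a library such as mido) to produce such a summary.

```python
def check_phrase(meta):
    """meta: dict summarizing one MIDI file. Returns a list of violations."""
    errors = []
    if meta["n_tracks"] != 1:
        errors.append("must be a single track (MIDI format 0)")
    if meta["time_signature"] not in {(3, 4), (4, 4)}:
        errors.append("time signature must be 3/4 or 4/4")
    if min(meta["pitches"]) < 43:
        errors.append("pitches must not be lower than G3 (note 43)")
    if len(set(meta["tempos"])) > 1:
        errors.append("tempo must be constant")
    if len(set(meta["velocities"])) > 1:
        errors.append("note velocity must be constant")
    if not 8 <= meta["n_bars"] <= 16:
        errors.append("number of bars must be 8 to 16")
    if meta["max_voices"] > 4:
        errors.append("at most 4 simultaneous voices")
    return errors

# A phrase that satisfies every rule yields an empty error list.
result = check_phrase({
    "n_tracks": 1, "time_signature": (4, 4), "pitches": [55, 62, 69],
    "tempos": [500000], "velocities": [80], "n_bars": 12, "max_voices": 2,
})
```

Playability on the violin (constraint 10) is harder to verify automatically and is left out of this sketch.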
Subjective data labelling is not allowed. Generator models may use the genre information indicated by the folder names.