Posts by musicalman
1 and 2. I know mostly how they work but it's difficult for me to explain. I'm sure someone else could do it way better than I. If nobody does, I'll write an explanation here later.
3. Can you provide an example? I'm a little lost.
4. You can switch between drum sounds with the @ command, as you described earlier with your ties question. So if @30 is a kick and @31 is a snare, you can switch between the kick and snare on one channel. You can do this with any instrument(s). If you want to play them at once, you may need two channels, but for kick and snare you can normally get away with one at a time. Cymbals I'll leave up to you; they're always tricky. You can spend a whole channel on them, or you can compromise the cymbal's sustain and put them on a channel that already has other material. A lot of games do this, so you'll sometimes hear a cymbal get cut off because another sound interrupts it. If you end up having to do that in your music, as long as it isn't painfully, ear-mutilatingly noticeable, you can just do that and save a channel.

--------------------
Make more of less, that way you won't make less of more!
MarioFanGamer, that's a great explanation. Maybe a little hard to grasp the first time, but it makes a lot of sense. I'll use it as a refresher when I need it. :)

As for the FIR filter: after some digging around a few years back, I found an explanation which made some sense. It's not an overly technical or mathematical explanation; if you want that, look elsewhere! Basically, a FIR filter mixes the signal with itself using very, very short time delays, called taps or coefficients, which create filtering effects. You can hear the effect in an audio editor too if you use a very, very small delay or echo, generally less than 0.0001 sec.

A more detailed way to understand a little bit about why a FIR filter works is to try the following procedure (a short code sketch of the same experiment follows the list).
1. Go into an audio editor and open a sound. Resample it to 32000 Hz, because that's the rate the SPC works at.
2. Delete one sample from the beginning of your sound.
3. Mix this back in with the original. Make sure both are at exactly the same volume. You'll notice the highs are reduced.
4. Now take this newly filtered sound and do the same thing again: remove a sample from it and mix it back in. Don't do anything with the original sound, just the filtered one. If you did it right, the highs will be reduced even more. The more times you do this, the more obvious the change, but the louder the sound will become too, because you're mixing multiple copies together.
5. Go back to the original, but instead of removing one sample each time, try two or three. With three especially, notice how the highs are still audible but now there's a weird dip in the upper mids.
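Here's that sketch: a minimal numpy version of the delay-and-mix experiment. It assumes the soundfile library for reading and writing WAVs, and the file names are just placeholders, so treat it as an illustration rather than a recipe.

Code
import numpy as np
import soundfile as sf  # assumed available; any WAV reader/writer works

# Load a sound; for the demo, assume it's already at 32000 Hz like the SPC.
signal, rate = sf.read("input.wav")          # placeholder file name
if signal.ndim > 1:
    signal = signal.mean(axis=1)             # mono keeps the demo simple

delay = 1                                    # samples removed from the start
delayed = np.concatenate([signal[delay:], np.zeros(delay)])

# Equal-volume mix of the original and the delayed copy.
# delay=1 rolls off the highs; delay=3 leaves highs but notches the upper mids.
mixed = 0.5 * (signal + delayed)

sf.write("filtered.wav", mixed, rate)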

So, that's exploring a single coefficient of a FIR filter. The difference between that and SPC echo is that the SPC has 8 coefficients, so up to 8 delays can be mixed to create various filter responses. The 8 delay times correspond to individual samples, similar to what we were doing above: the first delay IIRC is 0 samples, the second is 1, the third is 2, etc. The values you specify in the text file control the volume of these delays. From $00 to $7f mixes the delays like we did above, which filters high frequencies well. From $80 to $ff inverts them, which removes low frequencies well. It's similar to how the echo's surround sound effect works, so think of those values as negative numbers again. With careful coefficient setup you can also remove both high and low frequencies to create bandpass filters.

You just have to be careful, because the echo buffer passes through the FIR on each cycle of the feedback loop. Normally this makes the FIR filter more intense as the echo fades, which is kind of cool, but if the coefficients or global feedback are too high, you can create an infinite feedback loop. If this happens mainly in the first few coefficients, it'll usually just make really, really loud echoes which quickly build up and may produce a load of subsonic noise along with them. More distant coefficients tend to make ringing sounds which can again go into an infinite feedback loop and overload things quickly. Neither is pretty, and you could be in for a shock if you're not prepared.
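To make the "values from $80 up act like negative numbers" idea concrete, here's a rough numpy sketch of my own (not from the post) that interprets the eight coefficient bytes as signed values and applies them as an 8-tap FIR. The exact scaling the DSP uses is glossed over; the point is only how positive and inverted taps combine.

Code
import numpy as np

def apply_fir(signal, coeff_bytes):
    """Apply an 8-tap FIR whose taps are given as bytes ($00-$ff).

    $00-$7f are treated as positive taps, $80-$ff as negative (two's
    complement), mirroring how the SPC echo coefficients behave.
    Scaling is approximate; this shows the shape of the filter, not
    bit-exact DSP output.
    """
    taps = np.array([(b - 256 if b >= 0x80 else b) / 128.0
                     for b in coeff_bytes])
    out = np.zeros_like(signal, dtype=float)
    for i, t in enumerate(taps):          # tap i hears the signal i samples ago
        out[i:] += t * signal[:len(signal) - i]
    return out

noise = np.random.randn(32000)

# Two equal positive taps: the "mix with a one-sample delay" experiment above,
# i.e. a gentle low-pass.
lowpass = apply_fir(noise, [0x40, 0x40, 0, 0, 0, 0, 0, 0])

# Flip the second tap ($c0 = -0x40) and the lows cancel instead: a high-pass.
highpass = apply_fir(noise, [0x40, 0xc0, 0, 0, 0, 0, 0, 0])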

By setting the echo delay time to $00, and the feedback to $00, you can use the FIR as an EQ. Donkey Kong Country games tended to do this for some subtle EQ, particularly the second and third games. But you'll be sacrificing the notorious Snes reverb to do it.

For experimenting with FIRs, there's a JS plug-in included with Reaper called 8-tap Fir. I've also been told that FIR filters are easier to work with visually, because you can see the filter response curve, but I have no experience with this. I haven't messed with the SPC FIR in a while because I sort of made a filter I liked, and every time I need a nice unobtrusive-sounding echo I just stick to that, or else use the SMW FIR or no FIR at all. Lol

--------------------
Make more of less, that way you won't make less of more!
1. As Decoy said, midi to MML converters are not AMK compatible, so you have to change commands around. This isn't limited to the AMK setup (channels, tempo, etc.); it also covers other commands: setting instruments, getting octaves right, doing pitch bends, vibrato, and so on, if you want to use those things. Often, bad instrument choices in the text file or bad octave placement is all that's audibly wrong, and fixing that alone will salvage a converted MML.
2. Most midi2mml converters are simplistic in nature. They simply define channels and put in notes, AFAIK. TinyMM wasn't even able to convert triplets; that's something PetiteMM added. So there are things these converters simply can't do yet, or if they can, they notate them erroneously, requiring you to fix them. Part of that can be put down to a lack of development of the tools, but part of it also has to do with the midi itself, which leads to my next point.
3. If the midi isn't simple and well made, the quality of the conversion deteriorates. If I recall, you used to export midis from a tracker. It's very possible that tracker exports midi data which is a complete structural mess. I'm not saying it does in your case, but it's very possible, as I've seen trackers and other old video-game-music-to-midi converters do it. As an example of a structural mess: imagine you have a track at 127 bpm, but because the converter is just spitting out data and doesn't know the original properties, it writes a file at 100 bpm or whatever speed it decides to use, while the notes still sound at 127 bpm. What does this mean? The note lengths and times will not line up with visible beats, and it is almost like looking through the wreckage of a natural disaster. Doable, but only just, depending on how long you want to spend on it. Try running that through PetiteMM or another more simplistic tool and it will choke on it. Hard. Even if the midi is made better, you are not out of the woods in terms of making things sound good and work well in AMK. While the tools have gotten better about conversions, there are still many tools and midi files out there which are a wreck, and you can normally tell by the midi's sound alone: broken timings, ridiculous instrument choices, weird articulation that isn't supposed to be there, etc. So some familiarity with both midi and MML is useful if you are going to use midi2mml converters!

4. Using tools like SPC2MML. I briefly tried it, and as amazing as it is, it also has its fair share of problems. The main ones, IMHO, are twofold. First, the MML it produces is sometimes weird to read visually; more importantly, it is so poorly optimized that sometimes it will not even compile, or it will try to do things that freeze the NSPC engine used in SMW. SPC2MML works by converting every DSP register event to some AMK representation. While it knows how to handle or intelligently guess at certain things, other things it will just blindly plow through, and this often results in things breaking. For example, if something triggers just a few milliseconds earlier or later than expected, that delay is represented, but AMK cannot work quite that finely, so the minute timing inaccuracies that are barely audible in the original often translate to wild, sloppy, loose-sounding playback when converted. Some sound drivers tend to produce better SPC2MML conversions than others; it all depends on how the driver writes to the DSP registers. I haven't played with or heard much about NintSPC, but it might do better since I think it reads straight from the sequence data instead of the DSP registers. Someone can inform me if I'm wrong.
So, in conclusion: there is no automatic MML solution. While we can help you, nobody is going to be able to magically show you how to make good ports. It is imperative that you have some basic knowledge of MML so that you aren't relying exclusively on automatic tools and step-by-step guides. While such things are exceedingly useful, they don't often address what can happen when things go wrong. As you do more manual work, you'll get a better understanding of how the MML works, but saying we are not helpful because your MMLs don't work does nothing constructive.

Start off simple. Try to learn how the basics of MML are meant to work. There is a severe shortage of tutorials out there for beginners, but there is enough scattered information on these forums that you could read if you had the patience to look, and you can always ask; I'm sure somebody will step in and try to help. And even though you don't want to, read the readme, but not like a chapter in a textbook: skim through it until you find something that interests you or that you need at a particular time. I write all my MMLs by hand and I still use the readme at least a few times in every single project I make, including the ports I've helped you with, because I just can't remember all the commands, or even how they work.

But don't just put l16 in there, for example, because somebody simply told you it is important. That doesn't help the learning process. What does l16 do? Do you even need it? Etc. It will not come overnight. Take it slow and develop a feel for what you are doing as you do it. This isn't meant to be obscure rocket science or programming or anything like that. It just has a learning curve.

--------------------
Make more of less, that way you won't make less of more!
Looking at the MML, I see several things:
1. In your echo definition, the $f1 command, placed right before #0, is done wrong. It needs three values after the $f1, but you only have two.
2. There are spots where you have incomplete hex commands, such as $ee without a value, when in fact this command needs two values IIRC. For now I've just deleted the $ee at that spot. It might adversely affect the sound, but I won't know what the values should be until I've gotten it to compile and can give it a listen.
3. Once those things are fixed, there are sections where AMK will report that a note's pitch is too high or low. I don't have time to sort through all of these.
4. There are a lot of = commands used. AFAIK this isn't even documented; it's another way of specifying note lengths, and you shouldn't use it unless you know what you're doing, as incorrect usage breaks things. There are spots where AMK seems to think you are using this command erroneously, but I can't look into it now.
5. The organization of the MML is strange. There are spots with blank lines everywhere, particularly in channel 0 where there is a string of at least a few dozen blank lines at the beginning of the channel. This shouldn't affect AMK stuff but obviously affects reading and file size.

--------------------
Make more of less, that way you won't make less of more!
1. The $f1 command. You do have to specify the FIR filter, either $00 or $01. In your case I'd use $00 since you don't want it.
2. The $ee command. There's only one bad one that I saw, but in general I'd not use $ee at all unless you have a specific need for it.
3. Sorting out which notes are too high or low, and getting the octaves right, will be the hardest part.
4. Using = is perfectly fine in most cases; to be honest, I'm not sure why it's breaking here. I'll look at it more later, maybe. In any case, the NSPC engine has 48 ticks per beat, and the = command lets you specify note lengths in ticks. Thus, =48 is a quarter note, =24 is an eighth note, =12 is a sixteenth, etc. (there's a quick conversion sketch after this list). The one that AMK got stuck on was =33, somewhere in the depths of that file, and there are probably others as well. I saw some rather strange ones in there, and in general, unless you know you need such odd lengths, they are red flags that something funny is going on. My guess is that the file actually needs =32, which would be a quarter-note triplet, and you could notate that with conventional triplet commands.
5. I don't think blank lines mess with AMK. I put them in occasionally to separate things, but I wouldn't recommend letting them pile up too much; it just adds more lines and makes scrolling through the script harder. AMK couldn't care less though, AFAIK.
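Here's that conversion sketch, a little helper of my own based on the 48-ticks-per-quarter figure above:

Code
TICKS_PER_QUARTER = 48  # per the NSPC figure quoted above

def length_to_ticks(denominator, triplet=False, dots=0):
    """Convert a note length (4 = quarter, 8 = eighth, ...) to ticks."""
    ticks = TICKS_PER_QUARTER * 4 / denominator
    if triplet:
        ticks *= 2 / 3          # three triplet notes fit in the space of two
    for _ in range(dots):
        ticks *= 1.5            # each dot adds half the previous value
    return ticks

print(length_to_ticks(4))                # 48.0 -> =48, a quarter note
print(length_to_ticks(8))                # 24.0 -> =24, an eighth note
print(length_to_ticks(16))               # 12.0 -> =12, a sixteenth
print(length_to_ticks(4, triplet=True))  # 32.0 -> =32, a quarter-note triplet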

--------------------
Make more of less, that way you won't make less of more!
Out of curiosity, what does Notepad++ do that aids in porting? I've not used it much so I don't know much about it.

--------------------
Make more of less, that way you won't make less of more!
Actually a lot of my Snes stylings are based, not surprisingly, on Snes RPG music. Though that isn't always the case. A lot of it has this shiny finesse, and a lot of that comes from how samples are made and echo is used etc. I've always been fascinated with sample chips more than other kinds, so I spend more time on the sound aesthetic than music or composition, which is really backwards I know. I just can't work any other way it seems...

Compositionally I have a lot of influences but I've never been good at picking them out. All I can really say is that a lot of it comes from video game music. There are of course many exceptions to that though! And while I have confidence in my composition abilities to make something I like, I do feel that my compositions are sometimes a little generic-sounding by the time they're finished. Like I'm not being adventurous or expansive enough or something.

--------------------
Make more of less, that way you won't make less of more!
Composing for specific game scenarios is something I've thought about, but I honestly feel more comfortable with transcription. With that said, making some sort of fake snes ost would be a cool challenge if I had the motivation to do it, just to explore how I can evoke different moods with the different capabilities of the chip.

Separating the string notes between the two channels in the orchestral thing was not fully my idea; the friend who first composed the tune had done it as well so I followed his lead. I also didn't want the orchestral hits to be cut off so I tried to separate those channels as well. All in all it was a huge, huge pain. And, whether surprisingly or not, I haven't been highly driven to work on anything complex after that, besides the Super Mario thing which was complex in a different way. It was sort of refreshing to be honest, but not particularly inviting. I don't technically count it as a complex work either, because the roadmap for it was already laid out and all I had to do was translate, if you will.

I've had ideas about what to do next, but none have really taken off. Some are in various stages of completion though so perhaps I'll post something soon. I do know that there's a high chance that it'll be something that's not orchestral. Maybe more rock oriented, or electronic.

--------------------
Make more of less, that way you won't make less of more!
First, I'll try to talk about compiling an SPC, though I may not be the best person to help you with this since I only make SPCs and ports, not romhacks. I've found the easiest way to test single songs is to use the command line, which is discussed in the hackers section of the readme, on the advanced usage page. I've set up a couple of quick-and-dirty batch files to make the process as efficient as possible. Whether you use the command line or the GUI, making SPCs is simple enough. You can PM me for help if you'd like, or reply to this thread. Either is cool.

Now, for the echo effect.

It is, so far as I know, impossible or highly discouraged to change echo values in the middle of a song, apart from perhaps turning it on and off. To demonstrate how the echo effect works, you'd have to make a bunch of SPCs, render them as audio files and merge them in an audio editor.

To use the echo effect, you need these two commands, as you know:
$ef $xx $yy $zz
$f1 $xx $yy $zz
Before we start: you should have an understanding of how counting in hexadecimal works, and some understanding of binary counting is useful as well, though not strictly necessary. If you don't have a basic grasp of these things, you'll have to brush up on them first.

For the $ef command, $xx corresponds to which channels have echo and is a two-digit hex value. It involves a lot of fun math, but we can simplify it with the Windows calculator. Let's say you want channels 3, 2 and 0 to have echo. Go into the calculator, press Alt+3 for scientific view, then F8 for binary, and type the following:
00001101
What you are doing is entering 0s and 1s (binary digits) to decide which channels have echo. A 0 turns echo off and a 1 turns it on, and the digits go in order from channel 7 down to channel 0. Studying the two lines below should make this clearer. Remember, for this example we want echo on channels 3, 2 and 0, but not the rest.
76543210
00001101

Hopefully that made some sense. Once you've entered the binary value into the calculator, press F5 and it will convert it to a hex value, which in this case is D. Hex values in AddmusicK must be two digits, so we add a zero to the beginning, giving 0d. Hex values also need a $ prefix in AddmusicK, so put $0d in place of $xx and you should have the desired effect. (There's also a small code sketch after the tips below that does the same conversion.)
Tip: Setting this value to $00 turns echo off for all channels, and $ff turns it on for all channels. Good to know for reference.

Tip: In the above example, there were four 0s at the beginning. 0s at the start of a binary value do not need to be entered into the calculator (remember in math class when they told you that zeroes only change the value if they're in the middle or at the end of a number, not the beginning?).

Tip: If the calculator outputs a 2-digit number when you press F5, there is no need to add a 0 to it for your MML because there are already two digits. If it outputs a 3 or more digit number, however, something went wrong.
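Here's that sketch: a few lines of Python (mine, not from the post) that build the same channel mask with bit operations instead of the calculator.

Code
def echo_channel_mask(channels):
    """Build the $xx byte for the $ef command from a list of channel numbers."""
    mask = 0
    for ch in channels:        # each channel number (0-7) sets one bit
        mask |= 1 << ch
    return f"${mask:02x}"

print(echo_channel_mask([3, 2, 0]))   # $0d, matching the worked example
print(echo_channel_mask([]))          # $00, echo off everywhere
print(echo_channel_mask(range(8)))    # $ff, echo on all channels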

Now let's tackle $yy and $zz. These two values are simply the echo volume for the left and right speakers respectively. They range from $00 to $ff, again in hexadecimal. Half of this range, from $00 to $7f, plays the echo normally for that speaker, while the other half, $80 to $ff, inverts the phase of the effect on that speaker, creating an interesting result often called the surround sound effect. You can use the surround sound effect on either of the speakers, but not on both, as the inversions will cancel out and it will just sound normal. Also, it's worth noting that the volume increases as you go up from $00 to $7f, but decreases as you go up from $80 to $ff. So $7f is the loudest normal setting, and $80 is the loudest surround setting. Think of everything from $80 up as negative numbers (the higher you seem to go, the less seems to happen). I hope that makes sense.
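Here's a quick sketch of that "negative numbers" idea. It's my own illustration, and it assumes the value really is a two's-complement signed byte, which is how I read the description above rather than something quoted from the readme.

Code
def echo_volume_byte(volume, surround=False):
    """Turn a volume from 0-127 into the $yy/$zz byte for the $ef command.

    surround=True asks for the phase-inverted ("surround") version, which
    lands in the $80-$ff half of the range.
    """
    volume = max(0, min(127, volume))
    value = (256 - volume) % 256 if surround else volume   # two's complement
    return f"${value:02x}"

print(echo_volume_byte(0x4f))                 # $4f, moderate normal volume
print(echo_volume_byte(0x4f, surround=True))  # $b1, same loudness, inverted
print(echo_volume_byte(127, surround=True))   # $81, near the loudest inverted setting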

Here are a few examples of the $ef command. To listen to them, you'll need a $f1 command to go with it, and a sample phrase, so I've put that below.
Code
#amk 2

$f1 $05 $4f $01

?
#0 l4 @3 c d e f g f e d c


Copy the above code into a new text file, then copy one of the $ef lines below and paste it above the $f1 line. Only one $ef should be used in a file! A semicolon ; starts a comment giving a brief description of what the line should sound like. It can be ignored, and there is no need to delete it; in fact, using comments is a good habit to get into.

Code
$ef $ff $4f $4f;echo at moderate volume for both speakers
$ef $ff $00 $4f;echo only on the right
$ef $ff $af $4f;Surround sound enabled on left speaker.
$ef $ff $af $af;Surround sound on both speakers. Just sounds normal.


Now, let's tackle the $f1 command. It also takes three values. $xx sets the delay for the echo, and the range is from $00 to $0f. $00 is only useful in special circumstances, as it makes the delays instantaneous. $01 sounds like a small room, and the higher you go, the larger the space seems to sound. The longer the delay, the more RAM the echo uses, and the less room you have for sequence data, samples and whatnot.
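To give a rough sense of that RAM trade-off, here's a back-of-the-envelope sketch. The figures (16 ms and roughly 2 KB of audio RAM per delay step, out of 64 KB total) are my understanding of the SPC hardware rather than anything stated in the post, so treat them as approximate.

Code
TOTAL_ARAM = 64 * 1024          # bytes of audio RAM on the SPC700 side
BYTES_PER_STEP = 2048           # echo buffer grows ~2 KB per delay step
MS_PER_STEP = 16                # each delay step is roughly 16 ms

def echo_cost(delay_value):
    """Estimate delay time and buffer size for a $f1 $xx value (0-15)."""
    ms = delay_value * MS_PER_STEP
    buffer_bytes = delay_value * BYTES_PER_STEP
    return ms, buffer_bytes

for value in (0x01, 0x04, 0x0f):
    ms, size = echo_cost(value)
    print(f"$f1 delay ${value:02x}: ~{ms} ms, ~{size // 1024} KB "
          f"({100 * size / TOTAL_ARAM:.0f}% of audio RAM)")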

$yy adjusts feedback. While it's not speaker dependent, I think you can put the surround sound effect on it as discussed above, though in practice I've not found this very effective, so I normally use a range from $00 to $7f. At $00, the echo only repeats once, while at $7f it repeats practically forever.

$zz selects which FIR filter to use, and at present there are only two values, $00 and $01. $00 uses the SMW FIR filter, $01 turns the FIR off. You can also create your own FIR filter with the $f5 command, which I've talked about in the last post of this topic; you can read that if you want yet another dissertation of mine. Lol

Phew, I hope that's helped! Don't be afraid to ask further questions!

--------------------
Make more of less, that way you won't make less of more!
Not sure how well known this is around here, but I found this damn good example of the SPC700 imitating FM synthesis. The SPC is available for download. From what I can tell, it's done using well-made BRR samples, so nothing exotic. Not sure what was used to make the SPC either.

--------------------
Make more of less, that way you won't make less of more!
Leod you are indeed right and I was confused about that. Thanks for clarifying!

--------------------
Make more of less, that way you won't make less of more!
Sorry for the late reply; I only found out about the AddmusicK 1.1 beta today and gave it a try.
I have several questions and a bug report. Questions first.
I mainly use AddmusicK to make SPC music, not SMWC ports, so I would be using the new musician mode a lot. My first question: what should I expect in AMK 1.1 for this purpose alone? I know the engine has been optimized for smaller size and less slowdown. And there's some PWM command in the works; I have no clue what it's meant to do, as I really have no knowledge of the inner workings of processors or the SPC700. What would it allow me to do? Is it some sort of synth, kind of like the variable pulse width on the C64 or similar analog synths? If so, I have no clue how the SPC would do that, and I haven't the inclination to learn how either. Regardless, I think I need clarification, because I feel like I'm off track. Is there anything else I should know at present?

Second question. Because I often just write SPCs, I had a special copy of AMK set aside for this. In 1.0x, I cleared out the sound effects and removed all local songs. Since you need at least one song in globals AFAIK, I made a file called null.txt which has almost nothing in it, to save a hair's breadth of space. Then I would run AddmusicK from the command line with the norom option and the name of my text file; for the sake of discussion, say my file was test.txt. The result would be an SPC in the spcs folder called test.spc, and to my knowledge the sequence for null.txt was just dumped in the SPC, minding its own business and doing nothing significant. In AMK 1.1, my guess is that the trackmusic list makes one SPC per entry? That would make the most sense. However, it seems the songs to compile must be specified in that list; if you specify a file on the command line, it will not be taken. Is this intended behavior?

Now for a bug report. I have no clue how this is happening, but it seems that loading custom samples, in particular, is bugged. I tested one of my compositions with the musician mode, and noticed that in one of my custom samples, an area around my loop points was mildly disturbed. I'm using homemade samples here so that might be the problem, though I don't suspect that to be the case because the BRRs sound fine everywhere else, including old AddmusicK with the same text file and samples. Even stranger, depending on the setup of my custom samples, the defect either disappears, or comes back somewhere else with varying degrees of severity. This leads me to believe that the problem is with the engine rather than the brrs themselves. I'm also using the #amm group here to leave maximal room for samples. Using another group such as #optimized also changes when/if the defect occurs, but that also loads an overhead of samples! I also compared the brrs of the two spcs, and soundwise they are fine. I haven't tried doing any scientific comparison though. And I didn't test AMK1.1 in its normal mode either.

I put together a zip with everything needed to reproduce the problem. Everything is stripped down for easy testing. Included are: the text file illustrating the problem along with the necessary samples, a trackmusic list, and two spcs to compare. Remove the unplayed bells.brr, or flip the order of the two samples around, or put them in #optimized, and recompile. You'll get a different-sounding result than if you run them the way I have them initially set up in the .zip. I'd be intrigued to know what's going on!

Thanks for bringing AddmusicK back and overhauling it! I'll post suggestions at some point, as I have a few ideas for commands. I look forward to its further progress!

--------------------
Make more of less, that way you won't make less of more!
Thanks for your response!

Originally posted by Codec
Nothing stops you from writing directly to the samples. Thus, it's possible to generate algorithms onto samples even if they are actively being played. For three major examples, the F-Zero engine sounds, the Super Mario All-Stars wind and the lead synth used in this song starting at 0:50 generate FM, FM and PWM, respectively. Such manipulation is quite powerful, and can be done in a variety of creative ways. Thus, it will be possible to perform both PWM and FM using software-based code on the SPC700 using AMK, as other developers have already achieved this.

Whoa, that's cool! I do remember hearing things like the above examples and thinking it was some clever sample trick, though I wasn't sure what. Admittedly I was too lazy to do any looking. It seems you plan to adopt this sort of behavior into AMK, which would allow for the creation of these sounds plus other things, and it won't be painfully restricted? In any case, it will probably have a learning curve. It seems a little intimidating, but I look forward to it.

As a side note, updating the documentation might be... difficult, given the fact that so much has been included in AMK1.1. But as I'm sure you know, it's pretty necessary, so I hope it's not that bad. :)

Originally posted by Codec
It seems bell.brr is corrupted in some way, putting a different sample in the place of bell.brr produced a result where softhorns.brr plays normally (tried with several other samples), as did removing #amm. I'll try to look closely at the sample later today to figure out what could cause this behavior, as I've not had it happen in other songs.

Hmm, well for the record I did have problems with bell as well, though while testing to make a simple reproduction scenario, that one started sounding okay and softhorns inherited the problem. It's perhaps possible that one or even both are corrupted, but I still wonder why the behavior is so erratic, and why they work everywhere else for me but here. Hopefully your investigation will prove successful!

Not sure if it helps any, but I used BRRTools to make the brrs and a hex editor to insert the AMK loop header. I also remember having to restrict the compression filters used by BRRTools especially on the bell sample, because bells are hard to compress without making them too noisy, so I messed with the -f command until I got it as clean as I could. I believe I still have the original wav someplace, so I could experiment with that. Now I'm worried that all my custom samples are at risk. Maybe I should try some of my others and see if they break.

--------------------
Make more of less, that way you won't make less of more!
Originally posted by Ultima
Well, technically you can achieve the same effect by having two channels play the same thing, except one of them being slightly detuned and some vibrato (this is called a chorus effect IIRC)...

It's true that you can get a chorusing effect by copying a channel and detuning the second copy, and in fact this is a useful strategy in many instances. You could even do PWM like this if you use two separate detuned saw waves, with the second one inverted and at a different phase from the first. This takes up minimal sample space; however, it requires two channels, and it's not quite what's being described here. What's being described above seems to be a method of using an algorithm to write directly to a sample in real time, independently of the note sequence data. So you don't have to deal with the effect being reset every time a new note is keyed, or with the effect slowing down or speeding up relative to note pitch, both of which would happen if you did the effect with detuned samples. You'll see what I mean if you look at the Romancing Saga SPC referenced above. Channel 1 has a killer PWM lead solo which you could never recreate so faithfully with AMK 1.0x.
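For the curious, here's a small numpy sketch (my own illustration, not from the thread) of the two-saw trick: a saw plus an inverted, phase-offset copy collapses into a pulse wave, and detuning the copy makes the pulse width drift, which is the classic PWM sound.

Code
import numpy as np

RATE = 32000                       # SPC-style sample rate
t = np.arange(RATE * 2) / RATE     # two seconds of audio

def saw(freq, time):
    """Naive sawtooth in the range -1..1 (aliasing ignored for the demo)."""
    return 2.0 * ((freq * time) % 1.0) - 1.0

freq = 110.0
detune = 0.3                       # Hz; controls how fast the pulse width drifts
# A saw minus a slightly detuned saw is a pulse wave whose duty cycle
# slowly sweeps as the two phases slide past each other.
pwm = 0.5 * (saw(freq, t) - saw(freq + detune, t))
# Write pwm out with your favorite WAV writer at RATE to have a listen.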

Originally posted by Ultima
From what I seen, a PWM command actually makes a sample sound more distorted, but the principle's still there). I also see that's what most people use for SMW samples to give it more flair, but I just never really saw the point in using a PWM command as much as a chorus effect to be honest.

I think you might? be confusing PWM and channel modulation. The latter does FM across channels AFAIK, and that does produce a gritty distortion, which is not often associated with PWM. With that said, I suspect you could do gritty sounds with PWM if the modulator is fast enough (think FM synthesis). In any case, I think what is being described above is also different from channel modulation, as it confines the effect to one channel and is a lot more flexible-sounding. Like I say, I don't really understand the full scope of the possibilities yet, and I may never really grasp it, but it's exciting for me.

--------------------
Make more of less, that way you won't make less of more!
Originally posted by Ultima
I would actually want that to be extended to other commands too, like vibrato and tremolo, since those'll really benefit from the fact that you dont have to write a million of those commands if you're using multiple of those values (unless there's a way to do that already). I may be misunderstanding here,

I think you are. Are you asking about using vibrato/tremolo across multiple notes? If so, those commands apply to every note until you change their value, turn them off, etc. Also, thirding the idea of an easier implementation of pitch bend, and wondering if there's a way to allow pitch bends that aren't strictly from one note to another. I've thought of dividing semitones, or maybe just adding an offset to a pitch register. I really don't know, but since the engine does seem to have linear pitch slides (you can hear as much with the vibrato command AFAIK), using those to do non-semitone pitch bends seems like it might work in a musical way without too much strain, though I would be the last person to know.

I did say earlier that I had ideas for commands, and I've finally decided to try describing them. Nothing deal-breaking; it would just make things a whole lot more awesome for my ambitions :).

I think the best way I can put it is: more flexible remote codes. I'm not sure whether they'd still be called remote codes under my ideas or not, but a remote code could have instructions to modify pitch, volume, gain, pan, and anything else that would be useful. Maybe with existing commands; it doesn't matter really. After each instruction, there could be an optional instruction to wait x ticks until the next one (useful for making multi-stage envelopes, perhaps). There could also be a loop command (useful for LFOs). Remote codes could continue to be called as normal, though an optional loop-break argument could be included so you can decide whether a loop you specified earlier will still take effect in the next code. You could get really scary with this if you wanted, especially if it's done in a similar vein to FamiTracker, which allows for relative or absolute offsets of a lot of things... That's way beyond my thought process at the moment. But I'll admit, using a remote code in such a way could be both awesome and extremely abusable. Even now you have to be careful with them, from what I've heard, so...

Also, a command to unkey a note, preserve its release, and restore its settings for the next note seems to be lacking. r just cuts the note, and using the q command feels stiff to me; it has its uses, but I don't like depending on it for all note-offs. I think having r cut notes should stay, with another command that would not cut them. And of course, while there are ways to manage the things I'm mentioning, they're tedious to set up, and you have to sort of bend how you do things to deal with the current limitations.

Here are a few examples of how I would personally want to use such a remote code system: drawing a custom vibrato to get closer to something Tim Follin's guitar work would use (listen starting at 1:40), or getting a harsher square LFO like in this Final Fantasy track. A different usage, this time for gain, could be this; watch the flute releases.

I realize I'm being picky and maybe more than a little wishful, since it's not like everyone here will be after such things, but now's as good a time as any to throw them out there.

--------------------
Make more of less, that way you won't make less of more!
What do you mean by changing values? Do you want different vibratos for each note? That's not a terribly common thing, I don't think, and I don't know how you'd implement it in a simpler way directly. At present, you'd have to use find/replace commands within AMK to set up preset vibratos and make that easier.

If you just want the same vibrato on each note, you can just put in one vibrato command and the vibrato will take effect until another vibrato command.

--------------------
Make more of less, that way you won't make less of more!
Yes, you can use the $e3 $xx $yy hex command. $xx is the duration of the tempo fade and $yy is the target tempo; IIRC it's just a normal t value but converted to hex. You can find slightly more info in the hex commands section of the readme.
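If the hex conversion is the annoying part, a throwaway helper like this (mine, and it assumes the above is right that $yy is just the t value in hex) spits out the command for you:

Code
def tempo_fade(duration, target_tempo):
    """Format a $e3 tempo-fade command; both arguments are plain decimal."""
    return f"$e3 ${duration:02x} ${target_tempo:02x}"

print(tempo_fade(48, 51))   # -> $e3 $30 $33  (fade over 48 units to t51)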

--------------------
Make more of less, that way you won't make less of more!
Just wanted to come here and thank Coopster as well as everybody who has posted here, or will post here in future, for the kind words.
One thing I didn't mention in the interview is that I've posted some of my music here; at present it's only my Snes work. You can find it here. I hope you enjoy it. Feel free to make suggestions for other music ideas or something. I doubt they'll get done promptly, if at all, but maybe I'll get inspired. When idle time comes around I'll need the ideas :)
Also don't hesitate to contact me if you want to talk or something; I don't bite. Not hard anyway.

--------------------
Make more of less, that way you won't make less of more!
The issue you describe is on channel 2 (#1). The spot where you want the loop to start is a place where #1 has an echo, which can be heard a 16th note after #0. The problem in this case is that you've set the loop on #1 to happen right on the onset of its echo, not a 16th note earlier to match #0. Furthermore, to create that 16th-note offset initially, there is a rest that ties over the bar: the r8 right before the / in #1 sits half in the previous bar and half in the next. So you somehow need to put a loop point in the middle of that rest to line it up with everything else's loops.

To do this, go to the / in #1. Above it, where the r8 is, change it to r16. Then below the /, add an r16 before the notes. This splits the previously mentioned rest into two separate rests, letting the loop point sit right between them where it belongs.

This is unfortunately something you have to deal with, especially when using midi to mml converters. They obviously don't know what you're trying to do or where your loop is, so they'll represent the material as succinctly as is feasible. But if something ties over a loop boundary, you'll have to split the command that's tied over into pre- and post-loop segments so that you can put the loop point between them. This, I think, is what causes a lot of the out-of-sync loop issues people run into.

I hope this has helped and you've been able to follow my explanation!

--------------------
Make more of less, that way you won't make less of more!
Hi all,
Here's a huge wall of text, but I encourage you to read it because I really want community input on this before I run away with it.

As some of you may know, I am member of the month for June 2017, for making a home for myself in the custom music forum and trying to help people out. I only bring this up because the post below will make a lot more sense if you read my interview.

So, here is my idea, and how it came about, along with my questions.

Even before I joined SMWC, I had a few friends who wanted to make Snes music. They already knew a lot about MML with 8-bit sound chips, so all they needed help with was how to use AddmusicK, and maybe a bit of advice here and there. At first I decided to write a tiny little setup tutorial, but the more I experimented with things, the more I realized that the readme for AddmusicK is really little more than a function reference.

While the readme was enough to get me going on many fronts, I still had to spend days on certain things trying to figure out how to use them. Because I was trying to do advanced things, though, I figured my frustrations, while lengthy and, well, frustrating, were to be expected. But I came to realize over time that these frustrations are even worse for people who are new to this sort of thing. Doing the simplest of things seems so complicated, not because of the complexity of the program, but because they are shooting in the dark most of the time. I don't want to knock the AMK readme at all, but it's hard to argue with the fact that newcomers need something simpler. This became instantly clear to me when I joined SMWC and started posting here as well as having private conversations. The wide range of proficiency from amateur to expert naturally creates frustration, and there are few people in the middle. Those who are had to travel a hard road to get there.

At some point I became compelled to expand my initial idea of a quick setup guide into a full-blown tutorial explaining in some detail everything I know. My plan for a while has been to cover the basics, from explaining what a sound engine actually does in a practical sense, to dipping your feet into the waters of MML. I would then move on to more stuff like making your own samples and doing fairly advanced tricks. I know absolutely nothing about the assembly and programming and stuff, and even if I did, I wouldn't want to fill the tutorial with that knowledge. Instead it would be written for the geeky musician. Anything technical would always be explained in a musical or sound-related context. The only real prerequisites to using the tutorial effectively would be basic knowledge of music theory and fairly solid PC usage skills. That's the idea anyway.

Needless to say, I've set myself a daunting task. I'm known for doing that and consequently it falls flat. But I did actually start my tutorial. So far, I have a section dedicated to what you can expect to do with the SPC700 in terms of effects, a quick setup for AddmusicK, and some basic MML examples showing how it works. I'm writing all of it like I'm trying to explain it to someone who has never used a tool like this. Part of me wants to post it in its current state but I feel I could improve it first. Besides, I don't want to really make it public until there's more content.

As one might expect, I have quite a number of decisions to make, and this is where I need your input. I don't care if you've smashed 50 computers while trying to make music, or if you're a hardcore Snes musician. I will value your input equally, but please do keep the other end of the spectrum in mind and try not to make assumptions about what other people might know.

Because I am visually impaired:
1. I don't know how to insert music into a rom, nor how to ensure it works as intended in the rom. Instead I've focused all my efforts on turning AMK into a musician's tool so that's all I can presently cover. Input on how to tackle this is welcome!

2. People have tricks to look at MML files in a visual way. They also have a visual way of editing samples by looking at the waveform and whatnot. They sometimes use tools that I am unable to use; one example of such a tool is C700, which I know is a common one. Because of my limitations, I've come up with my own crazy methods which drive me mad at times but produce results that even the perfectionist in me will tolerate. Along the way I picked up a lot of knowledge which has helped others, even though I do things radically differently. I almost want to think that anything written about some of these things would be more productive if it's done by someone who works more similarly to the general audience. Then I could throw my own thoughts in too, but they would have much better ground. To those of you who have read my few posts on BRRs and such especially, what do you think?

Other Stuff:
3. Format? Because I plan to make it extensive, how should I format it? I've thought about making several separate volumes, if you will. Right now it's like a big chapter book in a single file, but I'm not sure if that's the most appealing. Lol

4. Tools like PetiteMM or SPC2MML, etc. I personally have never been driven to use them. While I have no problem playing with them to learn more about them, I wouldn't be the best person to explain how to make the most of them at the moment. Maybe that will change, but for now I write MML manually. I do see a purpose for these tools, though, and I don't want them to be overlooked, or conversely, depended on.

Phew, that's it! I realize I'm being kinda vague, but I'm trying to build a framework for the project before I get too deep into it. Please feel free to reply to this thread, send me a PM/e-mail, or whatever really, with any thoughts you might have. I'm also fully open to collaboration and/or extensive discussion. In fact, I don't think I can do this without at least a little teamwork. Let's make something happen!

--------------------
Make more of less, that way you won't make less of more!