Prime|Evil wrote:If this is possible, then is it possible to load vocals in and fuck around with them in a synth? Or maybe using the nnxt and loading a vocal sample into that and playing with it via there? Always thought it would be cool to make some weird grinding noise with my voice and a mic, record it and then screw around with it.
I'm not sure if you get the point here? (You're welcome to prove otherwise, of course.)
Basically, under normal circumstances, Reason doesn't allow any live audio input. While it does accept incoming MIDI when ReWired, that isn't as good as being able to send audio into the program's effects devices.
The current way of working with vocals in Reason is to import them into one of the samplers (NN-19, NN-XT, Redrum or Dr.Rex). There are several fatal limitations to this.
1> It means artists can't hear what the final version of their recordings will sound like, as they have to record their audio before they can use Reason's effects.
2> Samples always play from the start (or end). This means that if your sample is 3 minutes long and you want to hear the last 20 seconds of it, you have to trigger it and wait for the whole sample to play through.
The way audio works in most DAWs is that playback occurs from the position of the play marker/cursor.
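Just to illustrate what cursor-based playback buys you (a rough sketch of the general idea, not how Reason or Reaper are actually implemented): the DAW simply maps the playhead position to an offset into the recorded audio, so nothing before that point ever has to play.

```python
def playback_start_frame(playhead_seconds, sample_rate=44100):
    """Map the transport playhead position to an offset into the audio data."""
    return int(playhead_seconds * sample_rate)

# The 3-minute sample from above: audition just its last 20 seconds.
total_frames = 180 * 44100                 # 3 minutes at 44.1 kHz
start = playback_start_frame(180 - 20)     # jump straight to 2:40
frames_to_play = total_frames - start      # only 20 s of audio gets rendered
```

A sampler that can only trigger from the start has no equivalent of that offset, which is exactly why you end up waiting through the whole 3 minutes.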
The way Reason handles recorded audio is flawed, and is one of the main reasons I've decided to focus on using Reaper instead of Reason 4. (DZA & Darkmatter)
If you can rig up ReWuschel correctly and get it working without any hitches, there's no need to use any of the samplers to play back recordings. You simply press record in your audio recorder and perform live. Any effects used on your live input are heard and recorded instantly.
Going back to your first question, Reason also doesn't allow user-created waveforms to be used in the available synths' oscillators. There are synths out there that do this, including Absynth, Surge, Chamellion, Alchemy & Vertigo, as well as any additive synth capable of resynthesis.
It's a crime that Malstrom doesn't support user waveforms, because its mix of granular and wavetable synthesis is second to none!

(Note to self, petition Propellerheads to provide this feature in an upcoming version of Reason)
Hope this all makes sense?
Edit:
What could be cool would be to use a voice sample as the vocoder carrier signal, then modulate it with your own voice. For example, use any of the choir patches that come with the NN-19 & NN-XT as the carrier, then make them sound like they're 'singing' when you modulate the signal with your words.
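For anyone curious what's going on under the hood there: a vocoder follows the amplitude envelope of the modulator (your words) and imposes it on the carrier (the choir patch). Here's a crude single-band Python sketch of that idea; a real vocoder does this in many frequency bands at once, and all the numbers and signals below are made up purely for illustration:

```python
import math

def envelope(signal, attack=0.01, release=0.05, sample_rate=44100):
    """Follow the modulator's amplitude: the 'words' a vocoder extracts."""
    a = math.exp(-1.0 / (attack * sample_rate))
    r = math.exp(-1.0 / (release * sample_rate))
    env, out = 0.0, []
    for s in signal:
        level = abs(s)
        coeff = a if level > env else r   # fast attack, slower release
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

def vocode_one_band(carrier, modulator):
    """Impose the modulator's envelope onto the carrier signal."""
    return [c * e for c, e in zip(carrier, envelope(modulator))]

# Toy signals: a steady 220 Hz 'choir' carrier, a pulsed 'voice' modulator
# that switches on and off every 0.1 s, so the choir only sounds while
# the 'voice' is speaking.
sr = 44100
carrier = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
modulator = [math.sin(2 * math.pi * 110 * n / sr)
             * (1.0 if (n // 4410) % 2 == 0 else 0.0) for n in range(sr)]
out = vocode_one_band(carrier, modulator)
```

When the modulator goes silent, the envelope decays and the carrier fades with it, which is exactly the 'make the choir sing your words' effect.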
