Thanks for this Dan. I've been having a ton of fun in P1 and presets are going to speed up workflow so much.
Great demonstration, and the repetition really helps it stick.
Very helpful Dan. Thanks for making these videos showing us how to use the features!
Thanks for sharing!
So helpful! I'll have to go through all my sounds and adjust them now, but it means things will be much faster in future. Thanks so much :)
Hey, this is really awesome! One request, though: please include "Preset" in the asset type so we don't have to append "_Preset" to every MetaSound. Thanks :)
Hey Dan, thanks for demonstrating this. You mentioned a Discord server where Aaron posted a video. Could you share a link to that community? Many thanks!
Nice!
Dan, how would you actually change a preset of a specific MSS instance at run-time?
Like changing modulation parameters of a polyphonic synth while it is running and instantiated in the level?
Great videos!
Cheers
There are still some features we'd probably need; for example, it'd be nice if we could query the preset values on the MetaSound. But if you're just swapping the MetaSound Source voice, and you have a Blueprint or code doing your polyphonic voice management, then I would update the sound to play for all new voices.
@DanReynoldsAudio Cool. Yes, I thought of going through all the created instruments on BeginPlay and reassigning a new preset.
Thanks for the video. I don't quite follow what the frequency multiplier to semitones does. If you don't have that node, won't it already output the sound at a random semitone range that you set in the Random Get? What is undesirable about this, and what does the frequency multiplier change? Thank you!
The Frequency Multiplier/Scalar is a linear expression of frequency change: 2.0 means twice the frequency, 0.5 means half the frequency. In semitones, those would be +12 and -12 respectively. They are different ways to express a playback-rate change, and when receiving data from gameplay, unless I know that gameplay will be dealing with musical formats, I prefer to receive normalized, linear inputs.
In other words: normalized, linear values are easier for gameplay; semitones are easier for music.
@DanReynoldsAudio Ah, I see. Thanks for clarifying.
This is fantastic 👍
Does the Wave Player stream the audio files from disk, load them into a buffer when they are needed (after they have been chosen randomly), or load them into a buffer at init (when the sound source is loaded)?
Great question. In Unreal Engine 5, the default loading scheme for audio is Stream Caching, which loads according to the designated loading behaviors (there are several). The default loading behavior with Stream Caching is to load sounds when they're needed to play, not when their UObject is loaded by reference.
Thanks, Dan, this is super helpful. Could you make a video on how to affect the inputs on a MetaSound graph when it's used as its own object in a game, e.g. a music player where I want to trigger a sound or change the amplitude of something? Especially how to do it if I want to drive the changes from a trigger box somewhere else in the game? Thanks!
Yeah, for sure! I'll probably show how to get triggers and continuous float data from gameplay using the Dorian's Run example I recently uploaded.
@DanReynoldsAudio Thanks, that would be awesome! Especially if there's a way of bypassing casting via a Blueprint and messaging the MetaSound graph directly!
@nightonmars The Blueprint method is how you message the MetaSound graph directly, because each spawned sound is its own instance of the MetaSound graph.
@DanReynoldsAudio Thank you, that's super helpful! Seems I'm on the right track then. Looking forward to a video on the subject!
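As a rough C++ illustration of the pattern described above: hold on to the spawned sound's UAudioComponent (each one is its own MetaSound instance) and set the graph's inputs on it by name. This is a sketch under assumptions: AMyTriggerBox, MusicMetaSound, and the input names "Amplitude" and "Play" are hypothetical stand-ins for whatever your graph actually defines, using UE5's audio parameter setters on UAudioComponent.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Components/AudioComponent.h"

// Hypothetical trigger-box actor; MusicMetaSound is assumed to be a
// USoundBase* UPROPERTY pointing at your MetaSound Source asset.
void AMyTriggerBox::NotifyActorBeginOverlap(AActor* OtherActor)
{
    Super::NotifyActorBeginOverlap(OtherActor);

    // Spawn the sound; the returned AudioComponent addresses this
    // particular instance of the MetaSound graph.
    UAudioComponent* Comp = UGameplayStatics::SpawnSoundAtLocation(
        this, MusicMetaSound, GetActorLocation());

    if (Comp)
    {
        // Set a continuous float input named "Amplitude" (illustrative name).
        Comp->SetFloatParameter(TEXT("Amplitude"), 0.5f);

        // Fire a trigger input named "Play" (illustrative name).
        Comp->SetTriggerParameter(TEXT("Play"));
    }
}
```

The same calls exist as Blueprint nodes, so the trigger box can message the graph without any casting beyond holding the AudioComponent reference.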
Thank you, sir!