Late response, I know, but you can replace the sound file with your mic, and if you are using Windows you can pass your PC's sounds (every sound on your PC: music, error message sounds, etc.) through Stereo Mix. Then you can visualize your music. p5.js looks a lot like Processing, which is built on Java, so you don't really need JS knowledge; I don't have it either.
Is it possible to do this in Processing? If it is, it could be interesting to use this along with an Arduino, connect it to some LEDs, and make a VU meter or a frequency analyzer.
Question: I was trying to make a beep (yes, beep) detection program that detects the number of beeps in a sound. I got it to log the values of the spectrum when there was a beep, but I am still not sure how I can count the beeps; it logs multiple arrays each time there is a beep. Any idea how I could do that? At least point me in the right direction. Thanks.
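If it helps, the counting problem is usually solved with edge detection rather than by looking at the raw arrays: track whether the beep band's energy is above a threshold, and only count the frames where it crosses from below to above. A minimal sketch (the threshold and energy values here are made up; in practice the energies would come from summing the fft.analyze() bins around the beep's frequency, one value per frame):

```javascript
// Count beeps by detecting rising edges: a beep is counted only when the
// energy crosses the threshold upward, not on every frame it stays loud.
function countBeeps(energies, threshold) {
  let count = 0;
  let above = false;
  for (const e of energies) {
    if (e >= threshold && !above) {
      count++;        // rising edge: a new beep started
      above = true;
    } else if (e < threshold) {
      above = false;  // beep ended; ready for the next one
    }
  }
  return count;
}

// Two beeps separated by silence, each spanning several frames:
console.log(countBeeps([0, 5, 200, 210, 4, 3, 220, 215, 2], 100)); // → 2
```

The key design choice is the `above` flag: without it, a beep lasting ten frames would be counted ten times.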
I am creating an Android application that plays a beep sound whenever it detects a pre-defined sound. Put another way: I have three drum/piano/vocal sequence mp3 files stored with the APK assets. After installation, the three files are moved to internal memory. Whenever I play any song/video in any video/audio player (e.g. YouTube/VLC) on my phone, if the music sequence of the playing audio matches any of the three stored mp3 files, a beep sound should be produced. That is my aim. But I am finding it hard to find a library that can compare music playing inside the phone from another app. Google Assistant and Shazam just listen to audio from the mic and compare that; in my case, I need to compare audio that is playing on the device itself.
I need a simple application that can tell me the frequency of a sound outside. The audio input will be my microphone. I want to make it myself and implement a formula that calculates the speed of the sound source. I need this because I am preparing my lecture about the Doppler effect, and I want an app that automatically calculates the speed of the sound source. Can you or somebody else help me with this? I have watched a lot of your tutorials and they help me a lot, but what I want to achieve is not covered in them. :) XD
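For what it's worth, the Doppler arithmetic itself is only a couple of lines once you have the source and observed frequencies; detecting the observed frequency from the mic is the harder part. A minimal sketch, assuming a source moving straight toward a stationary observer and a speed of sound of about 343 m/s:

```javascript
// Doppler effect: a source moving toward a stationary observer at speed v
// is heard at f_observed = f_source * c / (c - v). Solving for v gives the
// source speed from a measured frequency shift. c is the speed of sound
// in air (~343 m/s at 20 °C).
const C = 343;

function observedFrequency(fSource, speed) {
  return fSource * C / (C - speed);
}

function sourceSpeed(fSource, fObserved) {
  return C * (1 - fSource / fObserved);
}

const fObs = observedFrequency(1000, 34.3);      // ≈ 1111.1 Hz
console.log(sourceSpeed(1000, fObs).toFixed(1)); // → "34.3"
```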
Actually, I want to reverse this process. What if you already have the array of everything (beats per millisecond or second, bars, timbres, everything): can you generate the sound based on that? I'm getting this kind of response from the Spotify audio analysis API, which gives me tons of arrays for a song; based on that, can I make real sound?
Dear sir, how do you make a comparison between one person's voice and another's? For example: an English teacher models the pronunciation of a sentence for a kindergarten student ("I am reading a book"), and the student repeats it ("I am reading a book"). How do you compare them to check that the student's pronunciation matches the teacher's? Best regards
For anyone interested, one of the reasons why his frequency spectrum has a 'gap' in the highest frequencies is that he probably used an MP3 audio file. He briefly mentioned there isn't enough frequency information, which is absolutely correct, but a gap in the highest frequencies is often also the result of using an MP3 file. MP3 compression (depending on the settings used when rendering an audio file to MP3) will often cut off all frequencies above a specific high frequency (usually around 16kHz) all the way up to the audio file's sample rate / 2. This is one of the techniques MP3 compression uses to reduce the size of the file. So if the audio file he used had a sample rate of 44.1kHz, the MP3 encoder probably cut off the very high frequencies from roughly 16,000Hz all the way to 22,050Hz. That's why you see a clear gap in the circle towards the end of the video. If you're using an MP3 audio file, bear this in mind. You might have to manually adjust the angle you pass into the sine and cosine functions to make it fit into a perfect circle by adding a little extra to it :).
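To make the "adding a little extra" concrete: one way is to spread only the bins below the MP3 cutoff around the full circle, so the dead zone above ~16 kHz leaves no gap. A minimal sketch (the function name and the 16 kHz cutoff are illustrative; measure your own file's cutoff from its spectrum):

```javascript
// Map spectrum bin i to an angle, spreading only the bins below the MP3
// cutoff over the full circle. usableFraction = cutoffHz / (sampleRate / 2),
// e.g. 16000 / 22050 ≈ 0.726 for a 44.1 kHz file cut at 16 kHz.
function binToAngle(i, numBins, usableFraction) {
  const usableBins = Math.floor(numBins * usableFraction);
  return (i / usableBins) * 2 * Math.PI;
  // in p5: x = cx + r * cos(angle), y = cy + r * sin(angle)
}

// With 1024 bins and a 16 kHz cutoff at 44.1 kHz, the last audible bin
// lands back at a full turn instead of ~73% of the way around:
const frac = 16000 / 22050;
const last = Math.floor(1024 * frac) - 1; // hypothetical last usable bin
console.log(binToAngle(last, 1024, frac) / (2 * Math.PI)); // just under 1.0
```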
Years ago, when you had few enough viewers to respond to every question in the comments, I requested videos on sound analysis. When you uploaded this series I didn't have time for sound projects, but now I'm going through them; thank you so much for having made them in the meantime. FFT is very interesting. I got an idea for a way to compress sound that could make music files take less memory while keeping high quality. I don't know if it will work yet, but I'm just about to try it out.
Look forward to hearing how it goes!
Just amazing the things you do with p5.
It took me some of today's wee hours to watch and understand all 11 videos of this series of lessons, but it was very worth it.
The way you teach is so original and fun that I was hooked during all 11 videos.
Thank you very much for teaching all this interesting stuff, which I'm also learning in my first programming course at the University of London.
Cool! It took me months to program something similar to this (a 3D waterfall display version) on Windows back in 1996. Just finding out which OS/SDK/libs communicate with the sound card was half the battle. I even used my own 3D maths at first but eventually used the first version of DirectX, which had its own learning curve. And if it weren't for some geniuses posting their quality FFT source, I would have been looking at a whole other project requiring a stack of books and even more weeks or months, and it would have outstretched my abilities as well. Now you can get just about the same thing with only a few lines of simple code in any browser. That really is some nice progress, IMO (for the app/front-end developers, anyway). I sure as hell wouldn't want to have to write these interpreters and/or be responsible for all of the sloppy (for lack of a better word) layers that make all of this possible!
You old guys have some serious skills
@@ciph3r836 fr
Priceless! It's crazy how long something like this could take in the old days.
This video was unbelievably helpful!!! Been doing research all afternoon and finally stumbled across this series! Now I'm going to try hooking an FFT script to an LED grid on my wall to make a custom graphic EQ for a cool party effect. Thanks for the awesome content, I learn so much watching these :)
The "This dot" song was my jam by the way XD
wow that sounds amazing! do you have a link where I could check this out?
Did you ever try it with a microphone?
Here's how, with a simple sketch. But you MUST use HTTPS (the browser asks permission to use the microphone) AND add p5.sound.js (p5.min.js doesn't include it, apparently).
If you're on a laptop like me, don't forget to connect a headset, otherwise you'll get feedback (the Larsen effect)!
let mic;
let fft;
let spectrum;
const canvas = { width: window.innerWidth, height: window.innerHeight };

function setup() {
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic); // route the mic into the FFT; mic.connect() also works but sends the mic to the speakers too
  createCanvas(canvas.width, canvas.height);
}

function draw() {
  background(0);
  spectrum = fft.analyze();
  for (let f = 0; f < spectrum.length; f++) {
    stroke(spectrum[f], 0, 0); // beautiful reds :-)
    line(f, 0, f, spectrum[f]);
  }
}
A true hero in YT comments, what an elegant example!
I like how even when part of the code is put in wrong, it creates an effect that makes me think "you know what I could do with that?"
Also, hearing that "This Dot" song is hilarious at 1.75 speed lol
I've been watching your videos for a while now and love your content more than any other programming educator I know of. I'd like to see a series where you help us understand all the facets of web development. Not necessarily going into expert-level depth, but giving us an idea of what it's all about: which pieces of software are used and why, how databases work, etc.
Your videos are wonderful, keep making more! Some of the best coding videos I've seen online, and so creative too.
Your vector field visualization made me learn how to do lots of lines in one drawCall: beginShape(LINES); then put both the start and end point in your for loop.
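To sketch that trick: build the start/end pairs first, then emit both endpoints of each segment inside a single beginShape(LINES)/endShape(). The helper below is a hypothetical stand-in (the field function and grid spacing are made up), but the pairing logic is the important part:

```javascript
// Batch a vector field into one draw call: collect [x1, y1, x2, y2] tuples
// and emit them inside a single beginShape(LINES)/endShape() pair.
function fieldSegments(cols, rows, spacing, field) {
  const segs = [];
  for (let i = 0; i < cols; i++) {
    for (let j = 0; j < rows; j++) {
      const x = i * spacing, y = j * spacing;
      const [dx, dy] = field(x, y);       // field() is a stand-in
      segs.push([x, y, x + dx, y + dy]);  // start point, then end point
    }
  }
  return segs;
}

// In p5's draw():
//   beginShape(LINES);
//   for (const [x1, y1, x2, y2] of fieldSegments(40, 30, 10, myField)) {
//     vertex(x1, y1);
//     vertex(x2, y2); // both endpoints go in the same loop iteration
//   }
//   endShape();

console.log(fieldSegments(2, 2, 10, () => [5, 0]).length); // → 4
```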
So cool, this library. I used it to draw a heatmap like in Spectroid and to create different filters.
Lol, I have just started the video and I know I'm in for a good one. I love the way you explain things; it really makes sense to me! Thinking to myself: I should watch more of your videos!
The most realistic coding session
Forget the FFT tutorial, let’s talk about the jam called: “this dot” that’s dance music at its finest!
Pretty sure the frequencies are inverted. After the "this-dot" intro, when the kick starts, you see the freqs on the right start moving; they should be on the left. Also, if I had to guess, the range only goes from ~2kHz to ~100Hz. It would look way better if you inverted the order of the array and also expanded the range to something like 20Hz to 20kHz. Great job though!
between loving the music
thank you for the videos, makes learning p5 on my own less hair ripping outy
I use p5 because of you man I never would've found it if it wasn't for you
Omg, I literally thought about sound visualization and this video comes out!
@Nick Bravo look at the description.
Ironically, what you did previously with the sound rotating in a circle is very, very close to what the Fourier transform is really doing. 3Blue1Brown has a series explaining this.
thank you for making this
Choosing good music is the key to following the p5.sound.js tutorial. Mark this necessary tip, everybody.
Omg, you can combine your blobby coding challenge with this
+Pepa489 good idea!
I think it would be pretty interesting if you tried writing the algorithm as a challenge.
I would really like to see that program you mentioned that takes the sound and makes a three-dimensional visualization like a landscape. Thanks for any info and for the videos 😍
Fantastic! Love your channel so much!
Holy cow, man. I'm actually working on a visualizer and this just got interesting. Thank you very much, I didn't know you could do that. Is the language JavaScript, if I may ask?
To continue Daniel's last sentence in this video: "future" becomes "today" after 6 years. Yes, I'm still here, the 6th day in a row starting from the last day of 2022. I can see from the 100-video list that the next is 12.1, about Node. Bye, sounds.
Wow, really helps for understanding p5.FFT
Yeah, the frequency spectrum and time domain sounded like hocus-pocus to me before.
Yeah, thank you!
Perfect!!
Teacher, I need to record and listen at the same time, then use a filter to cancel the noise at certain frequencies. Do you know if that is possible?
Instead of analyzing it we should assign the sound stem cell we needed base on alphabet and use 3d surrounding sound paint background needed to reproduce same stem no matter what terrain to make that happen then use tone and pich of command officer so if same word of different voice had different affect and later each profession are assigned different stem base on functional and responsibility
Hey, great video. I was wondering if you had a video on how to get the frequency of a single note if I play it on a guitar, say. I would love to decipher the frequency of each note I play.
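Not the video you asked for, but a rough starting point: find the loudest bin in fft.analyze()'s output and convert its index to Hz. A minimal sketch (this only works well for a clean sustained note; real guitar tones often need autocorrelation instead, since a strong harmonic can out-shout the fundamental):

```javascript
// Rough single-note pitch estimate: find the loudest bin in the spectrum
// and convert its index to Hz. Bin i covers roughly i * nyquist / numBins.
function peakFrequency(spectrum, sampleRate) {
  let peak = 0;
  for (let i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[peak]) peak = i;
  }
  const nyquist = sampleRate / 2;
  return peak * nyquist / spectrum.length;
}

// Synthetic spectrum: 1024 bins at 44.1 kHz with a spike at bin 10,
// i.e. 10 * 22050 / 1024 ≈ 215 Hz:
const spec = new Array(1024).fill(0);
spec[10] = 255;
console.log(Math.round(peakFrequency(spec, 44100))); // → 215
```

Note the coarse resolution: with 1024 bins each bin spans ~21.5 Hz, which blurs neighboring low notes together; a bigger FFT size or autocorrelation sharpens that.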
@Patrick O'Toole this is exactly why I'm here also, (kind of) I created one of these with the native WebAudio API and as he mentions at 7:44 we have skewed frequency levels. About 75% of sound resides at the lowest end band bars which is not accurate and not really even useful.
Built an Angular 11 component at github.com/BenRacicot/ng-audio/blob/main/src/app/sys-sound/sys-sound.component.ts
I'm tracking the question at stackoverflow.com/questions/64877198/webaudio-db-visualization-not-reflecting-frequency-bands-as-expected?noredirect=1#comment114708083_64877198
Thank you!
The ear is concerned with loudness, which tracks a signal's average RMS value; hence why smoothing is handy. Peak values or transients, not so much...
Good afternoon, my name is Alexander! I have a little question: can you tell me how to export an audio file to an array of amplitude numbers, in the form of a txt file for example? I can't seem to find a simple solution. Maybe, how can this be done with your library and a browser?
I knew you would help me one day
What a great series! Thank you so much. I have one question though:
Did you ever end up making that video with many sound files and many buttons?
I've been trying and just can't seem to figure it out.
Good work, sir. Which editor are you using? I want to use it too. Also, is there any way we can remove the low-amplitude frequencies using this library? I want to use it to remove background noise in audio.
Take me to the frequency domain baby!
How can we utilize p5.js to determine the tone (at different octaves) that is playing from an instrument (e.g. piano)?
Can we visualize speech sounds in such a way that one would be able to tell which sound is being played just by looking at the visualization? Do you think it's possible?
not with just FFT
Thanks so, so much (times 1000) for this, my teacher. I'm an Arabic speaker following you. :)
I know this is an older video, and my comment might honestly have nothing to do with what's talked about in this video, but I've been trying to find information on something and I haven't been able to come across anything useful anywhere. I have chromesthesia (a form of synesthesia), so I interpret music as color. I'm also a musician, and until recently I thought that everyone saw/felt color when listening to music. Now that I know what I "have", I've started paying attention to it more, and I've thought it might be cool to try and visualize it for other people, and I think it would be really awesome to somehow even incorporate that visualizer into a live show scenario for my shows as a musician. I know visualizers exist, but what I'm thinking of would be different: each note from a guitar, for instance, would be represented by a light wave on a wall projection or on a screen. Depending on the note, the light wave-form would change "shape" and color. So for me, I could use this and actually reverse engineer it, in a sense, so that I'd be writing music with a light show in mind. I'd know that an A Minor chord is royal blue, and some notes from a G major chord are green, so I'd compose a song with that in mind to create certain shapes that flow into other shapes and colors that blend to form what I'M seeing when I play music. Anyway, sorry for the long explanation. Basically, I guess I want to know where I could start with all of this? Is there a program that already exists that I've just not been able to find? Am I going to have to learn code and create a program myself that does this? I know in general the program would need to be able to detect musical notes as frequencies, and then it would need to convert that into some sort of computer data that could be read by another program (the visualizer I'm assuming) that could convert that computer data into the color that I choose. Anybody's help would be greatly appreciated! 
If you're a coder, I'd even be willing to work WITH you to make this little idea of mine come to life on a small scale.
Kalab Buzan I used to experience something similar, but only during the hypnagogic state (right when you’re falling asleep). I would see colored shapes that would correspond to the music. It doesn’t happen anymore, but I would like to create a program that does something similar. That’s one of the reasons I started to learn Processing.
Have you seen Kaitlyn Hova’s videos?
phenylalanine Nope, I haven't
Hi, I am following your steps to create a player for my website, but my website failed to load the mp3 file. I called loadSound in the preload function and the website just stopped at the "Loading" page, and nothing happened. I tried different mp3 files and changing the access rights with chmod, but still nothing. T_T Help me
Epic video
Very nice video! I'm curious whether this method (the Fourier transform) can be used to extract data out of mp3 files and create minimalistic MIDI versions of those mp3s. Would it be possible (at least in theory) to extract the individual notes from, for example, a chord? (Probably best to start with simple songs in unison: only one voice or instrument.)
Example: you take a song in mp3 format and get the sheet-music notes.
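The frequency-to-note half of that idea is genuinely simple; it's separating a polyphonic mp3 into per-note peaks that is the hard research problem (polyphonic transcription). A sketch of the easy half, using the standard MIDI mapping:

```javascript
// Frequency-to-note conversion, one small building block of transcription:
// MIDI note = 69 + 12 * log2(f / 440), where 440 Hz is A4 (MIDI 69).
const NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function freqToNote(freq) {
  const midi = Math.round(69 + 12 * Math.log2(freq / 440));
  const octave = Math.floor(midi / 12) - 1; // MIDI 60 is C4
  return NAMES[midi % 12] + octave;
}

console.log(freqToNote(440));    // → "A4"
console.log(freqToNote(261.63)); // → "C4"
console.log(freqToNote(329.63)); // → "E4"
```

In principle you would peak-pick several strong bins from the FFT and run each through this; in practice, harmonics of one note masquerade as other notes, which is exactly where it gets hard.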
Did you try to do it yet ?
Do you do this anywhere with regular Processing?
This is great, but how about a source audio node from live streaming audio?
Windows Media Player did this before it was cool
Oooooh yes, pls combine the Perlin noise field with the FFT!
You should make a tutorial on rendering waveforms of sounds? It's a representation like the one on SoundCloud.
Should be very challenging
Is there a processing version?
I don't know if you saw my earlier pleas for an audio themed code challenge but thank you so much regardless!!
You're welcome!
Hello:)
Great video, this is really helpful, thank you!
Do you have a Processing version? Or does someone know of any good references for doing something like this in Processing?
Hi Daniel, do you rest your wrist on the laptop while typing?
It seems like it would be fun if the music were ska.
Thank you so much for this awesome series, but I have an issue: if I run it from the editor it works, but if I open my index.html in any browser it gives me an error ("The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page"). Please help, what should I do? Is there a way to fix that with p5.js?
Lol that 2nd song, he was a dj in a former life
You could take some FFT numbers to animate 3D terrain, or fractal trees :D
Great idea!
Thanks for the awesome vid! How would one go about animating said terrain/objects?
Yeah, but what would be the depth dimension?
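One common answer to the depth question is time: each frame, push the newest spectrum as a row and drop the oldest, so the terrain scrolls. A minimal sketch of that rolling history (the row contents here are stand-ins for fft.analyze()):

```javascript
// Rolling spectrum history for a waterfall/terrain display:
// rows = time, columns = frequency, height = amplitude.
function pushRow(history, spectrum, maxRows) {
  history.push(spectrum.slice()); // copy, in case the analyzer reuses its array
  if (history.length > maxRows) history.shift(); // drop the oldest row
  return history;
}

const hist = [];
for (let frame = 0; frame < 5; frame++) {
  pushRow(hist, [frame, frame], 3); // stand-in for fft.analyze()
}
console.log(hist); // → [[2,2],[3,3],[4,4]] — the three most recent frames
```

In p5's WEBGL mode you would then draw one strip of vertices per row, offsetting each row along z.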
I've noticed that the spectrum height always seems to be lower in the higher frequency ranges. I think it would be nice to normalize the height a bit across the spectrum. Maybe by having a var that gets incremented by a certain value and added to the length of the line for each index of the spectrum array. I haven't tested this out yet; if anyone has a better idea of how to achieve this, I'd love to hear it.
Also, is there a way to access the RMS (root mean square) values of the audio?
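On the RMS question: if you can get at raw time-domain samples (fft.waveform() in p5 returns such an array), RMS is three lines of math. And the incrementing-gain idea is exactly a linear tilt, one crude way to lift the quiet high end. A sketch of both, with an arbitrary 3x maximum boost:

```javascript
// RMS from raw time-domain samples: square, average, square-root.
function rms(samples) {
  const sumSq = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSq / samples.length);
}

// Linear tilt: multiply each bin by a gain that grows with its index,
// from 1x at bin 0 up to maxBoost at the last bin. The 3x cap is arbitrary.
function tilt(spectrum, maxBoost = 3) {
  return spectrum.map((v, i) =>
    v * (1 + (maxBoost - 1) * i / (spectrum.length - 1)));
}

console.log(rms([1, -1, 1, -1])); // → 1
console.log(tilt([100, 100], 3)); // → [100, 300]
```

A logarithmic tilt usually matches hearing better than a linear one, but the structure is the same.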
James Block: the RMS of a full-scale sine wave should be equal to 2^(1/2)/2, i.e. about 0.707.
I have a question: how can I make the frequencies static? I can't find it anywhere. Please)))
Hi, I want to light up LEDs with the FFT. How can I compare the sounds in the input?
Great! I went to the link for the source code but can't find anything on sound there!? How do I download the source for your tutorials?
Thanks
Jeff Marc
Hi, very nice job. I'm looking to use Processing as an audio visualizer, taking my two main audio output signals (the signal before my speakers) and visualizing them. Is it possible with Processing? I mean, I would like to use another application or software to perform music, and I would love to use Processing as a visualizer. I've seen many people with this problem throughout the web. Thx in advance.
Would you mind asking at discourse.processing.org? It's a better platform for Processing and p5.js related code questions. You can share code there easily! Feel free to link from here to your post.
How would you access the sounds via the microphone?
as always, I always forget the "this dot" :-(
Bonus question: How to detect any browser sound or computer sound? Does it only work with loading files?
I think I found something :) developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API
Helped me program a custom player/visualizer for iOS! Thanks
Is there any way we could put axes on our canvas (x axis frequency, y axis amplitude)? Thanks
Love the tutorials. (It's pronounced Four-ee-ay) :)
Keep up the good work!
Is there any smart way to determine the bassline from the frequencies spectrum?
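p5's fft.getEnergy("bass") preset covers roughly 20–140 Hz per the p5 docs, which is a quick way to get at the bassline. If you want finer control, you can average the bins of any band yourself. A plain-JS sketch (the 44100 Hz sample rate is an assumption; the bin math maps the last bin to the Nyquist frequency, sampleRate/2):

```javascript
// Average the FFT bins that fall inside a frequency band.
// spectrum is the array from fft.analyze(); bin i corresponds
// to roughly (i / spectrum.length) * (sampleRate / 2) Hz.
function bandEnergy(spectrum, loHz, hiHz, sampleRate = 44100) {
  const nyquist = sampleRate / 2;
  const lo = Math.floor((loHz / nyquist) * spectrum.length);
  const hi = Math.ceil((hiHz / nyquist) * spectrum.length);
  const band = spectrum.slice(lo, hi);
  return band.reduce((a, b) => a + b, 0) / band.length;
}
```

Calling bandEnergy(spectrum, 20, 140) each frame would then track the bass band over time.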
How do I estimate the total amount of operations (floating point operations) needed by the model to process ten seconds of audio?
This is old af, but I still want to ask how the FFT values are "dynamic".
From signal theory I learned that you calculate the FFT for a certain window of time, returning fixed values that depend on the length of the window, the smoothing and so on. So what about this?
I guess that the window of time is how often the FFT is updated?
it would be great if you could make tutorials for Sound Visualization in Processing! I feel like it's so much more confusing in Java :(
Please suggest here! github.com/CodingTrain/Rainbow-Topics/issues
Laura Scopazzo Java isn't that bad! Compared to C/C++ that's as easy as it gets if you want to go more technical than JavaScript tbh
sweet! ...now, how do I see what MY music looks like without learning how to JS?
Late response I know, but you can replace the sound file with your mic, and if you are using Windows you can pass your PC's sounds (every sound on your PC: music, error message sounds, etc.) through Stereo Mix. Then you can visualize your music.
P5.js looks a lot like Processing, which is built on Java. So you don't really need JS knowledge; I don't have it either
FYI - *THIS IS A DOT* --> .
Teacher, is there any way we can choose to drill down and make a detailed measurement?
Is it possible to do this in Processing? If it is, it could be interesting to use this along with an Arduino, connect it to some LEDs, and make a VU meter or a frequency analyzer.
Yes indeed, check out the Processing sound library! processing.org/reference/libraries/sound/
Question - I was trying to make a beep - yes, beep - detection program, which detects the number of beeps in the sound. I got it to log the values of the spectrum when there was a beep, but I am still not sure how I can count the beeps. It logs multiple arrays each time there is a beep. Any idea on how I could do that? At least point me in the right direction.
Thanks.
I'm not entirely sure what you're trying to do, but I think PeakDetect will be useful: p5js.org/reference/#/p5.PeakDetect
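To turn PeakDetect-style levels into a count, one common approach is a rising-edge counter: count once when the level first crosses above a threshold, then wait for it to drop back below before counting again, so one long beep isn't counted many times. A hedged plain-JS sketch (the 0.3 default threshold is made up and would need tuning):

```javascript
// Counts beeps from a stream of level readings (e.g. one per frame
// from p5.Amplitude's getLevel()). A beep is counted on the rising
// edge of the threshold crossing only.
function makeBeepCounter(threshold = 0.3) {
  let above = false;
  let count = 0;
  return {
    update(level) {
      if (!above && level > threshold) { above = true; count++; }
      else if (above && level < threshold) { above = false; }
      return count;
    },
    get count() { return count; }
  };
}
```

In draw() you'd call counter.update(amplitude.getLevel()) once per frame and display counter.count.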
Why didn't you try that with a microphone input instead of a song?
I am creating an Android application that plays a beep sound whenever it detects a pre-defined sound. Put another way: I have three drum/piano/vocal sequence mp3 files stored with the APK assets. After installation, the three files are moved to internal memory. Whenever I play any song/video in any video/audio player (e.g. YouTube/VLC) on my mobile, if the music sequence of the playing audio matches any of the three already stored mp3 files, then a beep sound should be produced. This is my aim. But I am finding it hard to find a library to compare music which is playing inside the mobile from another app. Google Assistant and Shazam just listen to audio from the mic and compare the audio. But in my case, I need to compare audio which is playing on the device itself.
How can I update those parameters with input from the HTML to the code in p5? I'm new to p5.
Oo I want to do this in Pure Data.
How are you able to start this on page load without an event?
I just can't get it to run like that. Where do I have to put the files?
that music tho
I need a simple application that can tell me the frequency of the sound outside. The audio input will be my microphone. I want to make it myself and implement a formula or something that will calculate the speed of the sound source. I need this because I am making my "lecture" about the Doppler effect. I want to make an app that can calculate the speed of the sound source automatically. Can you or somebody else help me with this? I watched a lot of your tutorials and they help me a lot, but what I want to achieve is not included in the tutorial. :) XD
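For the physics part: with a stationary observer and a source moving straight toward them, the observed frequency is f' = f · v/(v − v_source), which you can rearrange to get the source speed from the measured pitch. A plain-JS sketch (343 m/s assumes air at about 20 °C, and the formula ignores angle and wind; you'd still need to know the source's true frequency f):

```javascript
// Doppler effect, stationary observer, source approaching head-on:
//   fObserved = fSource * v / (v - vSource)
// Rearranged to solve for the source speed:
//   vSource = v * (1 - fSource / fObserved)
const SPEED_OF_SOUND = 343; // m/s in air at ~20 °C (assumption)

function sourceSpeed(fSource, fObserved, v = SPEED_OF_SOUND) {
  return v * (1 - fSource / fObserved);
}
```

A negative result means the observed pitch is lower than the source pitch, i.e. the source is moving away.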
mans the bob ross of code
Actually I want to reverse this process. What if you already have the array of everything - beats per millisecond or second, bars, timbres, everything - can you generate the sound based on that? Meaning: I'm getting the response from the Spotify audio analysis API, which gives me tons of array input for songs. Based on that, can I make real sound?
hi, can I measure STI in a recorded file? :-)
Can you update the GitHub link for the code? I can't find it
I can't find the source code on GitHub either
Is it possible to make this in Processing instead of p5.js?
hey dan, I don't quite understand how the map function works, can you explain it a bit plz
See: th-cam.com/video/nicMAoW6u1g/w-d-xo.html
Thanks a lot mate
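For anyone else who lands here: p5's map(value, start1, stop1, start2, stop2) just re-scales a number linearly from one range to another. A plain-JS equivalent:

```javascript
// Re-scale value from the range [start1, stop1] to [start2, stop2].
// e.g. map(128, 0, 255, 0, width) turns a spectrum amplitude
// (0..255) into an x position across the canvas.
function map(value, start1, stop1, start2, stop2) {
  return start2 + ((value - start1) / (stop1 - start1)) * (stop2 - start2);
}
```

Note p5's built-in version also takes an optional sixth argument to clamp the result to the output range; this sketch skips that.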
Hi, how can we use FFT with VB or another language to get the frequencies and plot them?
Dear sir,
how do I make a comparison between one person's voice and another?
For example: an English teacher gives an example of the pronunciation of a sentence to a kindergarten student: "I am reading a book",
then the kindergarten student repeats: "I am reading a book".
How do I compare them so that what the kindergarten student says matches the teacher?
Best regards
I apologize, I'm new to this... can this work on Windows?
How does one turn the volume up and down?
Dear sir, can you help me with an Android-based ReSound smart hearing aid?