Build a Deep Audio Classifier with Python and Tensorflow

  • Published Oct 25, 2024

Comments • 240

  • @sheikhshafayat6984
    @sheikhshafayat6984 2 years ago +13

    This is exactly what I was looking for for the past month, and it suddenly popped up in my recommendations!
    Can't thank you enough for this. You saved my semester!!

  • @captainlennyjapan27
    @captainlennyjapan27 2 years ago +15

    1 minute into the video, absolutely amazed by the high, high quality of this video. You are my favorite programming YouTuber along with FireShip and NomadCoders! Thanks so much, Nicholas!

  • @captainlennyjapan27
    @captainlennyjapan27 2 years ago +2

    41 minutes into the video. Not even for a second was I bored. Amazing.

  • @IronChad_
    @IronChad_ 2 years ago +5

    You’re the absolute best with these tutorials

  • @guillaumegalante
    @guillaumegalante 2 years ago +22

    Thanks so much for all these great tutorials! I discovered your channel a few days ago; your way of teaching makes it really easy to understand and learn. I was wondering if you'd be able to do a series or video around recommender systems: building a recommendation engine (content-based, collaborative filtering), whether Netflix (movie) recommendations, Spotify's music recommendations (could include audio modeling), or Amazon (purchase) predictions. Many thanks! Keep up the amazing tutorials :)

    • @NicholasRenotte
      @NicholasRenotte  2 years ago +3

      Definitely! I’m doing my own little deep learning challenge atm, will add it to the list!

    • @prajiltp8852
      @prajiltp8852 1 year ago +1

      Can we use the same approach if I wanted to separate my bpos call recording from conversation files? Like, if I train it based on my bpos recordings and after that give it an audio file, will it separate the bpos sound?? Please help

    • @dwiechannel3196
      @dwiechannel3196 1 year ago

      @@NicholasRenotte please answer my question, I really need some direction.🙏🙏🙏

  • @Maxwell-fm8jf
    @Maxwell-fm8jf 2 years ago +2

    I worked on a similar audio classification project three months ago, hooked up to a Raspberry Pi with some sensors, but using an RCNN and librosa. A different approach from yours, but basically the same steps. Thumbs up, mate!!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago

      Woahhh, nice! What was the latency like on the rpi? Noticed when I started throwing more hardcore stuff at it, it kinda struggled a little.

    • @farhankhan5951
      @farhankhan5951 1 year ago

      What have you developed in your project?

    • @ellenoorcastricum
      @ellenoorcastricum 8 months ago

      What were you using the Pi for, and do you have any tips on how to make a system that recognizes certain sounds in real time?

  • @adarshd249
    @adarshd249 2 years ago +6

    More great content from Nick. Thrilled to do a project on this!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago +1

      Yess! Let me know how you go with it!!

  • @gaspardbos
    @gaspardbos 10 months ago

    Mc Shubap is spinning the decks in your memory palace 😆 Great tutorial so far.

  • @abrh2793
    @abrh2793 2 years ago +5

    Nice one!
    Looking forward to a multi-label text classification one if you can!
    Thanks

    • @NicholasRenotte
      @NicholasRenotte  2 years ago +2

      Yup, the code is ready; it should be out this week or next!

    • @abrh2793
      @abrh2793 2 years ago +2

      @@NicholasRenotte Yo, thanks a lot!
      The way you get input from the community and interact is nice to see.

  • @enzy7497
    @enzy7497 2 years ago

    Just discovered this channel in my recommendations. Really awesome stuff, man! Thanks for the great content.

  • @rachitjasoria9041
    @rachitjasoria9041 2 years ago +16

    A much needed tutorial!! BTW, can you make a tutorial on TTS synthesis? Not with pyttsx3... train a model to speak from provided voice data of a human.

  • @lakshman587
    @lakshman587 1 year ago

    This video is awesome!!!
    I learned from this video that we convert audio data to image data to approach audio-related tasks in ML!!!

  • @ChrisKeller
    @ChrisKeller 8 months ago

    Super, super helpful in getting my project off the ground!

  • @luisalmazan4183
    @luisalmazan4183 1 year ago

    Thank you so much for these tutorials, Nicolas. A tutorial about few-shot learning would be great. Greetings from México!

  • @henkhbit5748
    @henkhbit5748 2 years ago

    Awesome sound classification project👍 I need a cappuccino break after hearing the capuchin bird sound😎

  • @davidcastellotejera442
    @davidcastellotejera442 2 years ago +2

    Man, these tutorials are amazing. Congrats on creating such great content. And thanks!!

  • @sederarandi1507
    @sederarandi1507 4 months ago

    Bro, you are absolute gold; thank you so much for all the effort you put into your videos and teachings.
    +1 subscriber

  • @stevew2418
    @stevew2418 2 years ago +1

    Amazing content and explanations. You have a new subscriber and fan!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago

      Welcome to the team @Steve, glad you liked it!

  • @ronaktawde
    @ronaktawde 2 years ago +1

    Very cool video, Nick bro!!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago

      Thanks homie! Good to see you @Ronak!

  • @DarceyLloyd
    @DarceyLloyd 1 year ago +3

    Great video. Would love to see a version of this done using the GPU, with multiple classifications, not just binary.

    • @0e0
      @0e0 11 months ago

      Tensorflow has GPU builds

  • @primaryanthonychristian2419
    @primaryanthonychristian2419 1 year ago

    Bro, great video and a very good, detailed explanation. 👍👍👍

  • @pedrobotsaris2036
    @pedrobotsaris2036 1 year ago +1

    Good tutorial. Note that the sample rate has nothing to do with the amplitude of an audio file; it is the number of times the audio is sampled per second.
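The distinction in the comment above can be shown numerically. A minimal sketch (a hypothetical 440 Hz tone, not from the video): the sample rate fixes how many samples represent one second, while amplitude is a separate property of the waveform.

```python
import numpy as np

sample_rate = 16000                       # samples per second (temporal resolution)
t = np.arange(sample_rate) / sample_rate  # one second's worth of sample times
wav = 0.5 * np.sin(2 * np.pi * 440 * t)   # amplitude 0.5 is chosen independently

duration = len(wav) / sample_rate         # sample count / rate = duration in seconds
print(duration)                           # 1.0
```

Changing the sample rate changes `len(wav)` for the same second of audio; it never changes the 0.5 amplitude.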

  • @GuidoOliveira
    @GuidoOliveira 2 years ago

    Incredible video, much appreciated. On a side note, I love your face cam; the audio is excellent too!

  • @gregoryshklover3088
    @gregoryshklover3088 1 year ago +2

    Nice tutorial. A few inaccuracies there though about stft() usage: abs() there is not for getting rid of negatives, but for taking the magnitude of complex values. And frame_length would probably be better as a power of 2...

    • @orange-dd5rw
      @orange-dd5rw 2 months ago

      How can I implement detection? For example, how can I tell when the initial capuchin call started and ended, and get this information in the end result? (Like, the result should show capuchin call - 2.3 s.)

    • @gregoryshklover3088
      @gregoryshklover3088 2 months ago

      The classification works on sliding windows of fixed size (3 s in this tutorial). One can slide the window with overlap to try to approximate the start of the matching sequence, or use other signal-processing methods to find the start of the sequence.
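The overlapping-window idea from the reply above can be sketched as follows. This is a toy stand-in, not Nicholas's code: `score_fn` here is a simple energy measure in place of the trained model, the 3 s window and 16 kHz rate match the tutorial, and the 0.5 s hop is an assumed overlap step.

```python
import numpy as np

SR = 16000            # sample rate used in the tutorial
WIN = 3 * SR          # 3-second classification window
HOP = SR // 2         # 0.5-second hop -> onsets resolved to ~0.5 s

def detect_onsets(wav, score_fn, threshold=0.2):
    """Slide a fixed window over wav with overlap; return the start times
    (in seconds) of every window whose score crosses the threshold."""
    onsets = []
    for start in range(0, len(wav) - WIN + 1, HOP):
        if score_fn(wav[start:start + WIN]) > threshold:
            onsets.append(start / SR)
    return onsets

# Toy stand-in for the trained classifier: mean absolute energy of the window.
energy = lambda w: float(np.mean(np.abs(w)))

wav = np.zeros(10 * SR)
wav[int(2.5 * SR):int(4.0 * SR)] = 1.0   # a loud "call" from 2.5 s to 4.0 s
print(detect_onsets(wav, energy))
```

The earliest flagged window start approximates the call onset; a real setup would swap `energy` for `model.predict` on the window's spectrogram.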

  • @supphachaithaicharoen7929
    @supphachaithaicharoen7929 3 months ago

    Thank you very much for your hard work. I really enjoyed the video.

  • @Uncle19
    @Uncle19 2 years ago

    What an amazing video. Definitely earned my sub.

  • @ayamekajou291
    @ayamekajou291 1 year ago +2

    Hey Nicholas, this project is great, but how do I classify multiple animal calls using this model? I can classify the audio as capuchin or not capuchin this way, but if I included more audio classes, how could I classify the audio file by animal as well as the number of calls?

  • @urielcalderon1661
    @urielcalderon1661 2 years ago +1

    It's him, he is back.

    • @NicholasRenotte
      @NicholasRenotte  2 years ago +1

      Ayyyyy Uriel!! What's happening!! Thanks a mill!

    • @urielcalderon1661
      @urielcalderon1661 2 years ago

      @@NicholasRenotte Always faithful, man; as long as there are deep learning tutorials, we will be there.

  • @oaydas
    @oaydas 2 years ago +1

    Great content, keep it up man!

  • @NoMercy8008
    @NoMercy8008 2 years ago +3

    LET'S GO, NICK :D
    This is actually pretty awesome and once again one of those things that I feel you can do TONS of different things with.
    Voice commands and animal calls are obvious examples, but maybe you could build a device that listens for a human's breath and heart rate and detects irregularities? This could be used for diagnostic purposes, but also as a warning device for, for example, elderly people. The moment it hears weird heart or breath sounds, it gives a warning and tells them to see a doctor.
    The same thing can be applied in a bunch of different fields, I think. Listening for weird engine sounds, for example, to help diagnose engine problems before internal parts suddenly and violently become external parts.
    Also, astronomy! Listening for gravitational wave events and things like that, though I'm pretty sure they're already using tons of AI for this anyway, so it's probably being done already.
    By the way, you posted about crowdsourcing labels/labeled data the other day; I think that's a great idea, especially if you're sharing that labeled data with the public!
    Doing it this way is a much more manageable way to get labeled data, since hopefully the work is being done in parallel by distributed resources ("many humans"), and sharing it online means that this project essentially helps everyone wanting to play around with ML and use the data for something cool, learn about things, and maybe come up with awesome ideas to change our future.
    So, awesome idea and a great way to leverage the outreach you have here on YT, I love it! ❤
    What I really liked about this video in particular is the exploratory data analysis and the preprocessing alongside it.
    As we all know, it's very important to feed your data to your network in a way that makes it as easy to digest as possible, so learning more about this is absolutely essential and really fun as well! Much appreciated!
    As always, Nick, thanks a ton for your videos and for doing all this for us, much much appreciated!
    Really love this video and looking forward to the next one!
    All the best, have a great week! :)

    • @NicholasRenotte
      @NicholasRenotte  2 years ago +3

      Heyyy, I had a good laugh at this comment "before internal parts suddenly and violently become external parts" 😂 but yes definitely agree!
      I'm hoping the crowdsourced labelling can become a thing! I know there's datasets out there but for a lot of the niche and practical stuff I have in mind, I can't really seem to find anything. I figure if people are willing to help label then I can give back to the community by showing everyone how to use and build with it!
      Have an awesome weekend!!

  • @dimmybandeira7
    @dimmybandeira7 2 years ago

    Very smart! Can you identify a person speaking in the midst of others speaking more quietly?

  • @guruprasadkulkarni635
    @guruprasadkulkarni635 2 years ago +2

    Can I use this for classifying the audio of different guitar chords?

  • @mosharofhossain3504
    @mosharofhossain3504 1 year ago +2

    Thanks for such a great tutorial. I have a question:
    What happens when an audio file is resampled? Does its total time change, or its number of samples, or both, or does it depend on the specific algorithm?

    • @andycastellon919
      @andycastellon919 10 months ago

      Humans can hear up to roughly 20 kHz, and due to the Nyquist criterion you need to sample at twice the highest frequency, hence the 44100 Hz you may have seen. However, in audio analysis most of the useful information sits below 8000 Hz, so we resample down to 16000 Hz, losing the higher frequencies. The length of the audio does not change; what changes is the number of samples (and thus the bits) we need to store it.
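The point in the reply above, that resampling changes the sample count but not the duration, can be shown with a minimal sketch. This uses naive linear interpolation as a stand-in for `tfio.audio.resample` (the real op applies a proper anti-aliasing filter):

```python
import numpy as np

def resample_linear(wav, sr_in, sr_out):
    """Naive linear-interpolation resampler: duration in seconds is
    preserved; only the number of samples changes."""
    n_out = int(round(len(wav) * sr_out / sr_in))
    x_old = np.arange(len(wav))
    x_new = np.linspace(0, len(wav) - 1, n_out)
    return np.interp(x_new, x_old, wav)

one_second = np.random.randn(44100)              # 1 s of audio at 44.1 kHz
down = resample_linear(one_second, 44100, 16000)
print(len(down))                                 # 16000 samples, still 1 s of audio
```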

  • @akashmishrahaha
    @akashmishrahaha 8 months ago +1

    Why are we reducing the sample rate from 44 kHz to 16 kHz? It was not clear to me.

  • @chamithdilshan3547
    @chamithdilshan3547 2 years ago

    What a great video this is. Thank you so much!!! 😍

  • @周淼-k3u
    @周淼-k3u 1 year ago +1

    Thank you so much for these nice tutorials! They are quite helpful! I have a small question: I saw your process of building up models and training and testing them. If I wanted to spend less time on classification, do you think it's possible to bring in existing datasets such as ESC-10 or ESC-50 with your method?

  • @matthewcastillo8775
    @matthewcastillo8775 9 months ago +1

    I need help getting compatible versions of tensorflow and tensorflow-io. The latest release of tensorflow-io is 0.35.0; however, my OS is saying that only up to 0.31.0 is available. My tensorflow is updated to the latest version and I have Python 3.11.6.

  • @insidecode
    @insidecode 1 year ago

    Amazing job

  • @rajkumaraj6848
    @rajkumaraj6848 2 years ago +1

    @NicholasRenotte "The kernel appears to have died. It will restart automatically." I got this error while running model.fit. How can I solve it?

  • @ellenoorcastricum
    @ellenoorcastricum 8 months ago +1

    Is it possible to run this while my mic is always listening, and to do live processing on that? BTW, this will be my first project and I know it's a lot.

  • @vigneshm4916
    @vigneshm4916 2 years ago +1

    Thanks for a great video. Could you please explain why we need tf.abs in the preprocess function?

  • @benbelkacemdrifa-ft1xr
    @benbelkacemdrifa-ft1xr 1 year ago +1

    It's a very interesting video. But can we do the test using a sound sensor?

  • @empedocle
    @empedocle 1 year ago

    Amazing job, Nicholas!! I have just one question: why didn't you also calculate the standard deviation of the files' lengths, so as to have a more precise interval for your window?

  • @anthonylwalker
    @anthonylwalker 2 years ago +2

    Another great video. I love the setup and style of your videos now; you have to love a good whiteboard! I'd be interested in seeing a drone video. I've recently got a Ryze Tello and am enjoying the Python package to control it, and the computer vision capabilities that come with it!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago

      Definitely!! I've got one floating around somewhere, will get cracking on it! Thanks for checking it out @Anthony!

  • @kavinyudhitia
    @kavinyudhitia 1 year ago

    Great tutorial! Thanks!!

  • @vishalm2338
    @vishalm2338 2 years ago +1

    How do you decide the values of frame_length and frame_step in tf.signal.stft(wav, frame_length=320, frame_step=32)? Appreciate any help!

  • @zainhassan8421
    @zainhassan8421 2 years ago

    Awesome! Kindly make a video on a speech recognition model using deep learning.

  • @asfandiyar5829
    @asfandiyar5829 11 months ago +1

    Had a lot of issues getting this to work. You need Python 3.8.18 for this to work; I had that version of Python in my conda env.

  • @mayankt28
    @mayankt28 3 months ago +1

    If you're encountering a shape issue when calling model.fit and getting the error "cannot take length of shape with unknown rank," the solution might be to explicitly set the shape of your tensors during preprocessing.

    • @MahmoudSayed-hg8rb
      @MahmoudSayed-hg8rb 2 months ago +1

      Can you elaborate further?

    • @johndaniellet.castor7189
      @johndaniellet.castor7189 9 days ago

      ​@@MahmoudSayed-hg8rb great! @mayankt28, thank you very much for your insight!
      basically, we can add the line:
      spectogram = tf.image.resize(spectogram, [1491, 257])
      right before the "return spectogram, label" line in the preprocess() function
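The fix discussed in this thread can be assembled into a self-contained sketch. This is an assumption-laden reconstruction, not Nicholas's exact code: a zero waveform stands in for `load_wav_16k_mono`, the stft parameters (frame_length=320, frame_step=32) come from the tutorial, and `tf.image.resize` to [1491, 257] is what pins the static shape that model.fit needs.

```python
import tensorflow as tf

def preprocess(file_path, label):
    """Sketch of the tutorial's preprocess step with the explicit-shape fix.
    tf.zeros(44100) is a fake decoded waveform standing in for
    load_wav_16k_mono(file_path), so the sketch runs without audio files."""
    wav = tf.zeros(44100)
    wav = wav[:48000]                                        # clip to 3 s at 16 kHz
    padding = tf.zeros(48000 - tf.shape(wav)[0], dtype=tf.float32)
    wav = tf.concat([padding, wav], 0)                       # pad shorter clips
    spectrogram = tf.abs(tf.signal.stft(wav, frame_length=320, frame_step=32))
    spectrogram = tf.expand_dims(spectrogram, axis=2)
    spectrogram = tf.image.resize(spectrogram, [1491, 257])  # pins a static shape
    return spectrogram, label

spec, label = preprocess('dummy.wav', 1)
print(spec.shape)  # (1491, 257, 1)
```

With frame_length=320 and frame_step=32, a 48000-sample clip yields 1491 frames, and the default FFT length (512, the enclosing power of two) yields 257 bins, so the resize is effectively a shape annotation rather than a rescale.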

  • @ChristianErwin01
    @ChristianErwin01 8 months ago

    I've gotten to the part where you start testing the predictions, and my validation_data isn't showing up. The epochs run fine, but I have no val_precision or val_loss values. All I have are loss and precision_2.
    Any fixes?

  • @harsh9558
    @harsh9558 10 months ago

    This was awesome 🔥

  • @marioskadriu441
    @marioskadriu441 2 years ago

    Amazing tutorial. Really enjoyed that, Nick 🙏🏼
    I guess in case we wanted to detect multiple sounds from the same animal, the procedure would be the same, but we would need an equal number of samples to train the neural network?
    Furthermore, in case we wanted to detect sounds from multiple animals and categorize them, would we follow the same procedure, just with softmax at the end instead of sigmoid?

  • @tatvamkrishnam6691
    @tatvamkrishnam6691 2 years ago +1

    Tried to recreate the same. Somehow the program abruptly stops at
    hist = model.fit(train, epochs=4, validation_data=test)
    It has to do with using a lot of RAM. Any way around it for me? Thanks!

  • @TheOfficalPointBlankE
    @TheOfficalPointBlankE 8 months ago

    Hey Nicholas, I was wondering if there is a way to change the code to print the timestamps in the audio clip at which each sound is recognized?

  • @ahmedgon1845
    @ahmedgon1845 1 year ago

    Great video, thanks so much.
    I have a small question: in the line
    spectrogram = tf.signal.stft(...)
    why did you choose
    frame_length = 320
    frame_step = 32?
    Can someone explain the method of choosing these, please?

  • @Computer.Music.And.I
    @Computer.Music.And.I 4 months ago +1

    Hello Nicholas,
    I have been using this great video in my beginner courses, and last year everything was fine. Unfortunately, in today's lecture the code did not run on any of my machines or configurations... The load_16k_wav function is not able to resample the audio files, and much worse, the model.fit function complains about an input that could not be -1, 0 or 1.
    Are you willing to check and update your code? (Spent 6 hours now trying to find the error 😊)
    Thx jtm

    • @sanjay6013
      @sanjay6013 26 days ago +1

      Did you find the fix for this? Please help.

    • @johndaniellet.castor7189
      @johndaniellet.castor7189 9 days ago

      @@sanjay6013
      @MahmoudSayed-hg8rb made a great insight for this model.fit issue
      basically, we can add the line:
      spectogram = tf.image.resize(spectogram, [1491, 257])
      right before the "return spectogram, label" line in the preprocess() function
      this is to explicitly set the shape

    • @johndaniellet.castor7189
      @johndaniellet.castor7189 9 days ago

      I think I also had the issue of tensorflow-io not being able to be installed,
      so what we did is we used alternative code instead, in my case:
      original_length = tf.shape(wav)[0]
      new_length = tf.cast(16000 / tf.cast(sample_rate, tf.float32) * tf.cast(original_length, tf.float32), tf.int32)
      new_indices = tf.linspace(0.0, tf.cast(original_length - 1, tf.float32), new_length)
      new_indices = tf.cast(new_indices, tf.int32)
      wav = tf.gather(wav, new_indices)
      return wav
      credits to @shafagh_projects for this!

  • @Uebermensch03
    @Uebermensch03 2 years ago

    Thanks for uploading a great video, and one question! You sliced an audio file from the very start of the file. I think the position where we start slicing can affect the model accuracy. For instance, if you skip 1 s and start slicing, it may yield different wav data. Do you think slicing audio from the very start of the file is a golden rule?

  • @thoseeyes0
    @thoseeyes0 1 year ago +3

    If anyone gets an error at 22:14 for
    pos = tf.data.Dataset.list_files(POS+'\*.wav')
    neg = tf.data.Dataset.list_files(NEG+'\*.wav')
    just use / instead of \:
    pos = tf.data.Dataset.list_files(POS+'/*.wav')
    neg = tf.data.Dataset.list_files(NEG+'/*.wav')

    • @TheHearts567
      @TheHearts567 7 months ago

      thank you
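The backslash problem in the thread above is a Windows path-escaping quirk; building the glob pattern with `os.path.join` sidesteps it on any OS. A small sketch (folder names as in the tutorial's dataset layout):

```python
import os

# Dataset folders as used in the tutorial.
POS = os.path.join('data', 'Parsed_Capuchinbird_Clips')
NEG = os.path.join('data', 'Parsed_Not_Capuchinbird_Clips')

# os.path.join picks the correct separator on Windows and POSIX alike,
# so no literal backslashes (which Python may treat as escapes) are needed.
pos_pattern = os.path.join(POS, '*.wav')
neg_pattern = os.path.join(NEG, '*.wav')
print(pos_pattern)
```

The resulting patterns can be passed straight to `tf.data.Dataset.list_files`.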

  • @SaiCharan-ev8hu
    @SaiCharan-ev8hu 5 months ago

    Hey Nicholas, trying to execute this but facing an issue, as you haven't done any preprocessing on the training data. Looking for help from you.

  • @Varadi6207
    @Varadi6207 1 year ago

    Awesome explanation. Please help me create audio augmentation for health records without losing information. I worked with time_shift (-0.5 to 0.5 variation in the wav), but the model accuracy is not up to the mark.

  • @sarash5061
    @sarash5061 2 years ago +1

    This is amazing

  • @riyazshaik4006
    @riyazshaik4006 10 months ago

    Thanks so much sir. One request, sir: can you explain how to classify audio as positive, negative, and neutral?

  • @Lhkk28
    @Lhkk28 1 year ago

    Hello Nicholas, thanks for your video :)
    I have a question.
    I am aiming to build a model for sound event detection using deep learning (I am thinking about using an LSTM). For now I am done with the preprocessing step: I have the spectrograms of the sounds (generated using the short-time Fourier transform) and the labels (binary label arrays, 0s where there are no events and 1s where events are present). I am now confused about how to feed this data to the model. The shape of each spectrogram is (257, 626) and the shape of each label is (626,). How should I give this data to the LSTM? Can I build a model that takes the spectrograms in their current shape and outputs the labels as a sequence of ones and zeros, or do I have to segment the spectrograms and give each segment a label?

  • @mendjevanelle9549
    @mendjevanelle9549 5 months ago

    Hello sir!
    I installed TensorFlow as presented, but I don't understand the reason for the error message: no module named tensorflow.

  • @kundansaha2369
    @kundansaha2369 5 months ago

    I am getting an error and have tried to debug it so many ways, but it's not solved. The error is: "The procedure entry point could not be located in the dynamic link library C:\ProgramData\anaconda3\lib\site-packages\tensorflow_io\python\ops\libtensorflow_i0.50."

  • @konradriedel4853
    @konradriedel4853 2 years ago +1

    Hey Nick, I was rebuilding this project of yours on a local machine. I was struggling with the spectrogram size and memory allocation for my GPU (RTX 3070), so I downscaled to frame_length=160, frame_step=64 with input_shape=(748, 129, 1); so far so good. Now, regarding the .fit of the model, my training time is 50 ms per epoch and the final results tend toward yours... Did the Colab train on CPU for the sake of memory with "high-scale" spectrograms, maybe? Could you run it locally and give some info? I'm very unsure about the training time of 50 ms versus the 3 mins in the vid. Thanks anyway for the great tutorial, man!!!

    • @GGBetmen
      @GGBetmen 2 years ago

      Hello, can I ask you about this?

  • @asimbhaivlog108
    @asimbhaivlog108 2 years ago

    Hi,
    I have to detect tree-cutting sounds in a forest using IoT. Can you make a detailed video on it, and on which hardware sensors and modules can be used?
    Illegal tree-cutting detection

    • @farhankhan5951
      @farhankhan5951 1 year ago

      Are you done with your project?

  • @thewatersavior
    @thewatersavior 2 years ago

    58:00 - Another great one, thank you; already looking forward to applying it. Quick question: why mix the audio signals on the MP3? I get that it gets us to one channel, but is there a way to just process one channel at a time? I'm imagining that would allow for some spatial awareness in the model? Or perhaps too many variables, because we are just looking for the one sound? Thinking that it would be useful to associate density with directionality... but not sure that's accurate if the original recordings were not set up to actually be directional...

    • @cadsonmikael9119
      @cadsonmikael9119 1 year ago

      I think this might also introduce distortion in the result, since we have to deal with stereo microphone separation, ideally about 100-150mm for human perception. I think the best idea is to just look at one channel in case of stereo, at least if the microphone separation is high or unknown.

  • @plazon8499
    @plazon8499 2 years ago +1

    Dear Mr. Renotte,
    I'm trying to use this tutorial as a basis to build a classifier over several music genres. The only thing that I don't know how to adapt is the last layer of the CNN. How should I modify it so that it can give me, let's say, 10 different labels as output? Should the labeling be modified upstream?
    (I want to have 10 outputs instead of 1 at the last Dense layer, but I can't just modify it like that, so I'm wondering how I should do it.)
    Thanks a lot!

    • @armelayimdji
      @armelayimdji 2 years ago

      Since the time you asked the question, you have probably solved it. However, my guess is that for your multi-class problem, you should first have data for the 10 classes (samples of each of 9 music genres, plus an unclassified genre), and the last layer should be a Dense layer with 10 neurons activated by a softmax function (instead of sigmoid) that gives the predicted probability of each class. You also have to change the loss function to one of the 'categorical crossentropy' losses available in tf.keras.

    • @plazon8499
      @plazon8499 2 years ago +1

      @@armelayimdji Hey Armel, thanks a lot for the advice! Obviously I'm done with my project and I went for something else: instead of taking the spectrograms as input to my CNN, I extracted features from the sound wave and all the physical aspects of the music to get an input vector of features that I passed through an MLP, and it worked well!

    • @farhankhan5951
      @farhankhan5951 1 year ago

      I have a similar kind of problem, can you help me?
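The multi-class changes discussed in this thread (softmax output, categorical crossentropy loss) might look roughly like the sketch below. Assumptions: the (1491, 257, 1) spectrogram input from the tutorial, 10 hypothetical classes, and global pooling in place of the tutorial's Flatten purely to keep the sketch small.

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical: e.g. 9 genres plus an "other" class
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1491, 257, 1)),   # spectrogram shape from the tutorial
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(4),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.GlobalMaxPooling2D(),          # stand-in for Flatten, keeps params small
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),  # was Dense(1, sigmoid)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # integer labels 0..9
              metrics=['accuracy'])
print(model.output_shape)  # (None, 10)
```

Labels then become integers 0..9 (or one-hot vectors with plain `categorical_crossentropy`) instead of the binary 0/1 used in the video.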

  • @NuncNuncNuncNunc
    @NuncNuncNuncNunc 2 years ago

    Maybe a basic question, but what does zero padding do when getting the frequency spectrum?

  • @eggwarrior5630
    @eggwarrior5630 11 months ago

    Hi, I am working with a new audio dataset which does not require the audio slicing part. What should I modify to loop through the folder for the last part? Any help would be greatly appreciated.

  • @tims.4396
    @tims.4396 1 year ago

    I'm not sure about the batch and prefetch part; for me it generates empty training sets afterwards, and it only takes 8 prefetched files for training?

  • @alfasierra95
    @alfasierra95 1 month ago +1

    It gives me an error when I try model.fit

  • @jawadmansoor2456
    @jawadmansoor2456 1 year ago

    Thank you for the great content. How do you classify multiple sounds in a file and get time information as well? Like, a sound was made 5 seconds into the audio file and another was made at 8 seconds; how do we get the time and class?

  • @carlitos5336
    @carlitos5336 2 years ago +1

    Thank you so much!!

  • @Kishi1969
    @Kishi1969 2 years ago +1

    Always giving inspiration and new knowledge... You are great, thanks!
    Advice please:
    My question is, can I buy a 2 GB NVIDIA graphics card for starting computer vision? My PC is too slow when I'm using my CPU..

    • @NicholasRenotte
      @NicholasRenotte  2 years ago +1

      I would be looking at something with more RAM if you can swing it @Benya; 2 GB is a little too small for most image-based computer vision tasks.

    • @unteejo3678
      @unteejo3678 2 years ago

      Use Google Colab's TPU (Tensor Processing Unit).
      It's very fast if your model uses only CNNs.

    • @Kishi1969
      @Kishi1969 2 years ago

      @@NicholasRenotte Thanks 🙏

  • @tatvamkrishnam6691
    @tatvamkrishnam6691 2 years ago

    23:30 What is the significance of that len(pos) or len(neg)?
    When len(pos) is replaced with 2, I expect only the first 2 samples to have the '1' label.
    However, when I run -> positives.as_numpy_iterator().next(), I get the '1' label not only for the first 2 samples but also for the rest.

  • @eranfeit
    @eranfeit 1 year ago

    Thank you
    Eran

  • @dumi7177
    @dumi7177 5 months ago

    What computer specifications do you have? Training the model took me 8 hrs.

  • @malice112
    @malice112 2 years ago +1

    Thanks for the great video! Is it possible to use .mp3 files in Python instead of .wav to save disk space?

  • @Sachinkenny
    @Sachinkenny 2 years ago

    What happens when there are multiple birds in the dataset? How good is a CNN model on that kind of dataset? Also, the source training audio samples can vary in length, sometimes by minutes. How can we do the preprocessing in such cases?

  • @GArvinthKrishna
    @GArvinthKrishna 5 months ago

    What approach is best to find the number of blows in a recording of a jackhammer?

  • @SA-oj3bo
    @SA-oj3bo 2 years ago +1

    What if I want to count how many times 1 specific dog barks per day? Then it is clear that samples of this dog barking are needed, but how many? And what other sounds must be sampled, and how many, if at the same place many other sounds can be heard? Thx!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago

      Heya, I would suggest starting out with 100+ samples of the dog, you can then augment with different background sounds and white noise to build out the dataset. This is purely a starting point though, would need to do error analysis from there to determine where to put emphasis next!

    • @SA-oj3bo
      @SA-oj3bo 2 years ago +1

      @@NicholasRenotte It would be very interesting if an accurate counter could be made. This is a case of animal abuse (a dog in a small cage, neglected by the owner and barking for attention for over 2 years), so an accurate counter would be very helpful and useful for other projects.
      What I don't understand is why 1 sample/spectrogram of the barking dog would not work well enough to detect it in a recording of, for example, 24 h, because there must be very few or no sounds that have the same spectrogram. I understand it will always be different, but can 2 different sounds (a cat and a dog, for example) have 2 spectrograms that are very similar? So my question is why it is not possible to identify a specific sound in a recording by comparing the spectrogram of the sound to detect against all possible spectrograms in the recording, ms after ms? If you accept paid projects I would love your help, because this is all new to me. Regards!

    • @NicholasRenotte
      @NicholasRenotte  2 years ago

      @@SA-oj3bo I think you could. If there were multiple dogs in the sample you would probably need a ton of data though to be able to clearly identify which dog is barking. This would allow the net to pick up the nuances for that particular bark.

  • @tando90
    @tando90 2 years ago

    Hey Nick, I tried to do like you did but with a different dataset, and I had an error: "Bad bytes per sample, expected 2 but got 4". I checked my data and it's a wav file at 16000 Hz. What should I do?

  • @gangs0846
    @gangs0846 5 days ago

    Thank you. Can you make a real-time audio classification video?

  • @Djsong4u
    @Djsong4u 1 year ago

    Can I use this to re-create a singer's voice and then use this model in SVC?

  • @toni3124
    @toni3124 2 years ago

    Hey, I have a question. I am working with the Mozilla Common Voice dataset and I converted the audio files to wav files. Now here comes my problem: I want an MFCC of the files with the shape (128,), but it is not possible for me to get it to this shape. I always get a shape like (128, and here a random number).
    My code is:
    y, sr = librosa.load(os.path.splitext(f"{base_name}\\{f[0]}")[0] + ".wav")
    y = librosa.to_mono(y)
    y = librosa.resample(y, orig_sr=sr, target_sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=128)
    f is the filename extracted from the csv file.
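For context on the question above: `librosa.feature.mfcc` returns a matrix of shape (n_mfcc, n_frames), one column per analysis frame, which is why the second dimension varies with clip length. If a fixed (128,) vector is wanted, one common approach is to pool over the time axis. A numpy-only sketch (random values stand in for a real MFCC matrix so no audio is needed):

```python
import numpy as np

# Stand-in for librosa.feature.mfcc(y=y, sr=sr, n_mfcc=128) on some clip:
# 128 coefficients by 431 frames (the frame count depends on clip length).
mfcc = np.random.randn(128, 431)

fixed = mfcc.mean(axis=1)   # average across frames -> one value per coefficient
print(fixed.shape)          # (128,)
```

Mean pooling discards timing information; whether that is acceptable depends on the task.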

  • @HananAhmed0311
    @HananAhmed0311 25 days ago

    @NicholasRenotte
    Can we use the same thing for human voice classification?

  • @raktimdey3154
    @raktimdey3154 2 years ago

    Hey Nick. I'm getting a UnicodeDecodeError when I'm trying to grab a single batch of training data using the numpy iterator. Can you please help?

  • @paulj9833
    @paulj9833 1 year ago

    In the cell 'hist = model.fit(train, epochs=1, validation_data=test)' the kernel crashes in my case. It seems to be a TensorFlow problem. I tried installing different versions of TensorFlow, but it didn't work. Does anyone have any advice?

  • @leventelcicek6445
    @leventelcicek6445 2 months ago

    You are wonderful, sir.

  • @prxninpmi
    @prxninpmi 2 years ago +1

    VERY COOL!!

  • @gauranshluthra7520
    @gauranshluthra7520 4 months ago

    How did you upload the files? Colab does not support folder upload unless it is in zip format.

  • @iPhoneekillerr
    @iPhoneekillerr 8 months ago

    Please help me: why doesn't Colab open this code? How should it be changed so that it can be opened in Colab?

  • @mrsilver8151
    @mrsilver8151 6 months ago

    Thanks for the great tutorial as always, sir.
    In case I want to make voice recognition to identify which person a voice belongs to, will these steps help me reach that, or do I need to look for something more specific to this task?

  • @Ankur-be7dz
    @Ankur-be7dz 2 years ago +1

    data = data.map(preprocess)
    In this part I'm getting an error -----------------> TypeError: tf__decode_wav() got an unexpected keyword argument 'rate_in', although rate_in is a parameter of tfio.audio.resample

    • @Yy.Srinivasan
      @Yy.Srinivasan 1 month ago

      How do I sort it out? I'm getting the same error.

  • @dzulgonzalezmarcosadalbert6511
    @dzulgonzalezmarcosadalbert6511 1 year ago

    Good video, new subscriber. But is there a way to export that trained model, to be able to use it from a Python file that can do sound detection through the use of said model?

  • @UzairKhan-gs3nq
    @UzairKhan-gs3nq 1 year ago

    How can we use Linear predictive coding in the preprocessing function of this code?

  • @abdullahalhammadi2940
    @abdullahalhammadi2940 7 months ago

    Thank you so much for this. However, I have one question.
    I am facing a compilation error when I execute this line (wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)). It states "unable to open file: libtensorflow_io.so". Could you help me, please?