OpenAI Whisper: Robust Speech Recognition via Large-Scale Weak Supervision | Paper and Code
- Published Jul 21, 2024
- ❤️ Become The AI Epiphany Patreon ❤️
/ theaiepiphany
👨👩👧👦 Join our Discord community 👨👩👧👦
/ discord
In this video I cover Whisper, an ASR system from OpenAI's "Robust Speech Recognition via Large-Scale Weak Supervision" paper.
Trained on a huge multilingual, multi-task, weakly supervised dataset, it achieves very high effective robustness and accuracy, closing the gap with the human baseline while using only an off-the-shelf Transformer.
I walk you through both the paper as well as the actual code. Let me know whether the code part helped!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✅ Paper: cdn.openai.com/papers/whisper...
✅ Code: github.com/openai/whisper
✅ Nice explanation of mel spectrograms: • Mel Spectrograms Expla...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00:00 Intro
00:02:05 Paper overview
00:07:30 Collecting a large scale weakly supervised dataset
00:13:55 Evaluation metric issues (WER)
00:16:05 Effective robustness
00:18:40 Scaling laws in progress
00:26:30 Decoding is hacky
00:28:30 Code walk-through
00:30:25 Model architecture (diagram vs code)
00:33:30 Transcription task
00:34:10 Loading the audio, mel spectrograms
00:37:50 Language detection
00:45:00 Transcription task continued
00:47:35 Suppressing token logits
00:52:00 Voice activity detection
00:53:35 Decoding and heuristics
01:01:56 Outro
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany - / theaiepiphany
One-time donation - www.paypal.com/paypalme/theai...
Huge thank you to these AI Epiphany patreons:
Eli Mahler
Petar Veličković
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💼 LinkedIn - / aleksagordic
🐦 Twitter - / gordic_aleksa
👨👩👧👦 Discord - / discord
📺 YouTube - / theaiepiphany
📚 Medium - / gordicaleksa
💻 GitHub - github.com/gordicaleksa
📢 AI Newsletter - aiepiphany.substack.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#whisper #openai #asr
Let me know whether the code part helped! :) Is it adding any value for you guys? Or am I just rambling and it's too hard to follow unless you play with the code yourself? Would really appreciate some feedback!
It definitely did! Is the debugger your first choice when it comes to figuring out how some new codebase works, or did you fire it up for the occasion as a demonstration tool?
@@xl0xl0xl0 thanks!! I have a whole series where I do just that.
And as for your question: it depends. If I am playing with something on my own, then yes, always! It's by far the best way to understand every single detail of the code.
@@TheAIEpiphany I missed that series! I actually struggle with debugging ML code with vscode, so I'll check it out!
This code walkthrough has made this paper walkthrough one of the best I've seen. Thanks for that, and please keep doing it!
@@leobeeson1 wow nice, thanks for telling me that! If I get more feedback I might keep doing this in every paper walkthrough!
I just found this channel and I’m SO THANKFUL for a great walkthrough and explanation. It’s super fun. This is gold!!! Thanks Aleksa!
Thanks for walking through Whisper code together, enjoyed the journey!
Mmmaaannnnn!! What a good video. I was searching for exactly something like this, where even a noob like me can understand the entire paper, because you take us through it step by step!
I knew this was going to be a great video when you stopped to explain log-mel spectrum as well!
Thanks Aleksa
Thanks Aleksa! Really appreciate the effort you put into these videos. Quality content, keep it up.
Thank you so much for doing these videos. You helped me so so so so much.
Very informative and authoritative, thank you!
This is super cool man! Thanks for diving deep into it
Thanks Alex!
Thanks for making this great video!
So well explained! Thx!
Great video!!
Loved it.. need more such videos
Rewatching the stream
You are amazing sir!
@Aleksa Gordic, thanks for sharing this valuable information. Apart from the AI content, I would love to see how you use VS Code so effectively to move between the code and debug it. Would really appreciate it if you could cover that in more detail in a video.
Hi Aleksa! Great video! I just wanted to know: what would the loss function be for these models? Would it be something like cross-entropy, since the model predicts tokens?
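For reference, the decoder predicts text tokens in a standard sequence-to-sequence setup, so the objective would ordinarily be token-level cross-entropy over the vocabulary. A minimal illustrative sketch (placeholder shapes and vocab size, not OpenAI's actual training code, which hasn't been released):

```python
# Illustrative only: token-level cross-entropy for a decoder that predicts text tokens.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 51865, 10                       # placeholder vocabulary size and length
logits = torch.randn(2, seq_len, vocab_size)          # decoder outputs for a batch of 2
targets = torch.randint(0, vocab_size, (2, seq_len))  # ground-truth transcript tokens

loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```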
I wonder if we can use the attention map (of how much each audio token contributes to the prediction of each transcript token) to back out timestamps instead?
Hey, really nice video. Can we fine-tune the Whisper model on our own dataset? If yes, can you show us how?
This is great, ty!
🚀
I have watched your video and it was great! But I'm not sure whether the translation and transcription tasks share the same decode parameters.
@TheAIEpiphany, how do you see the effect of the "best_of" parameter on the quality of the transcription? Any insight would be helpful. Thanks
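For context, in the OpenAI implementation best_of is the number of independently sampled candidates collected when temperature > 0; at temperature 0 it is unused and beam_size applies instead. A hedged sketch of comparing values, assuming whisper is installed and an audio.mp3 file exists:

```python
# Hedged sketch: comparing best_of values (assumes "audio.mp3" exists on disk).
import whisper

model = whisper.load_model("base")
for best_of in (1, 5):
    # best_of only matters when sampling, i.e. when temperature > 0
    result = model.transcribe("audio.mp3", temperature=0.7, best_of=best_of)
    print(best_of, result["text"])
```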
impressive work!
Can the model be run locally? How much compute is needed to run this model for inference?
Sir, I have read your roadmap to Reinforcement Learning and I want to do research in RL. 1) Should I still follow your roadmap? 2) Do I need to know all the math derivations behind supervised, unsupervised, and deep learning algorithms? 3) How can I start doing research in RL as an undergraduate at a non-research institute?
Can someone explain how the embeddings are learnt?
Hey, can you provide the source code that you have written in VS Code?
Can you make a video on how the decoder works?
Can you put a little emphasis on how the timestamps are generated for the transcription?
Can you do an install video on this?
Welsh is an outlier. Never would have guessed. Anyway, gotta go, heading out to Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch this afternoon.
One can find in the paper that this is because a lot of Welsh was misclassified as English during the data collection process. Imagine them finding out.
Hi. I've found your channel and the videos are totally mind-blowing. I have a question regarding Whisper: I currently want to return a list of the probabilities of all transcribed words. I think it has something to do with the update function inside decoding.py. Can you give me some help on how to do it? I would really appreciate it!
You can modify the update function to return the logprobs over all tokens; the max of those logprobs corresponds to the selected token's probability.
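A minimal sketch of that idea (not the actual whisper code): the logits that the decoder produces at each step can be converted to log-probabilities, and the chosen token's entry is its probability.

```python
# Minimal sketch (not the actual whisper code): reading off the selected token's probability
# from the per-step logits, as the update function in decoding.py could be modified to return.
import torch
import torch.nn.functional as F

logits = torch.randn(1, 51865)                   # placeholder logits over the vocabulary
logprobs = F.log_softmax(logits.float(), dim=-1)
next_token = logprobs.argmax(dim=-1)             # greedy choice
token_prob = logprobs[0, next_token].exp()       # probability of the token that was emitted
print(next_token.item(), token_prob.item())
```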
Is it possible to find the timestamp of each transcribed word? Great work!
Not in OpenAI's version, but a port of it has this feature. It is called whisper.cpp.
Hey guys, please can anyone help me with this issue? I am trying to run Whisper on my machine and I am getting this warning in cmd:
UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
I use Windows 10 with an RTX 2060 GPU, and it seems to run on my CPU instead of the NVIDIA GPU. For more details: I created a Python virtual environment and pip-installed Whisper in that environment.
Try the parameter --device cuda
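A rough sketch of that fix, assuming a CUDA-enabled PyTorch build and an audio.mp3 file; the CLI equivalent would be: whisper audio.mp3 --device cuda

```python
# Hedged sketch: forcing Whisper onto the GPU when CUDA is available.
import torch
import whisper

device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base", device=device)

# fp16 only applies on GPU; on CPU whisper falls back to fp32, hence the warning above.
result = model.transcribe("audio.mp3", fp16=(device == "cuda"))
print(result["text"])
```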
It's not Chinese, it's Korean (the script is called Hangul). Thanks for the tutorial!
Any guesses on the name of company B?
Hah, hard to infer without repeating the research
1:30 Yes, it's Korean.
That's not even Mandarin... it's Korean BTW 😅
Would be helpful if you could put these models in history a bit. I’m not as familiar with how things were done in the past vs. today SOTA.
Thanks for the feedback! For these models, the history you mostly care about is transformers.
"Attention is all you need" paper