Great work Shanullah brother, keep it up.
Just a tip for everyone doing the X normalization: Cut your data set in half, your RAM will thank you
@Pradeep Verma Can you share the code?
@Pradeep Verma i also need the code
How to do it, I'm facing that issue
please share the code
😂 literally bro RAM screaming
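For everyone in this thread hitting the MemoryError: here is a minimal sketch of a more memory-friendly normalization (training_images is a placeholder name for the list of loaded images, not the exact variable from the video):

import numpy as np

# Build the array as float32 from the start: 4 bytes per value instead of
# float64's 8, which roughly halves the RAM needed for the (N, 224, 224, 3) array.
X = np.array(training_images, dtype=np.float32)  # training_images: assumed name for the image list

# Normalize in place, chunk by chunk, so NumPy never allocates a second full-size copy.
chunk = 1000
for start in range(0, len(X), chunk):
    X[start:start + chunk] /= 255.0

Cutting the dataset in half, as suggested above, or resizing the images smaller also helps.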
Thank you for this! It was very helpful.
Could you please make a video on psychological signals as well, followed up with a hybrid model between facial and psychological emotion detection =D
Thank you for your interest, it's a nice idea, and I have also worked on the fusion of bio-signal and vision (video) based techniques in a multimodal setup.
Your valuable suggestion is noted; I look forward to publishing a video on the topic.
Till then, please keep sharing my videos and keep supporting us like this.
@@deeplearning_by_phdscholar6925 thank you dear,
What are the hardware specifications needed to execute this program? Please.
@@deeplearning_by_phdscholar6925 have you uploaded the video?
It was a very successful presentation of the subject. Thanks a lot.
Hi, thanks for the well-explained videos. Our request to you is to please make a video on emotion recognition from bio-signals. Also, I haven't seen a Realtime Face Emotion Recognition with Deep Face video.
Hello Vincent,
DeepLearning_by_PhDScholar posted a video talking about Face Emotion Recognition with Deep Face, here is the link: th-cam.com/video/fkgpvkqcoJc/w-d-xo.html
Sir, we are planning to implement sub-emotions within a particular major emotion.
For example, for the major emotion SAD there are different sub-emotions: the sadness can be due to heartbreak, trouble, losing something, or even hopelessness, yet the person will still be in the sad emotion.
To implement this we need some guidance, so we request you to please accept our request.
We are waiting for your reply, sir.
Hello sir, can we get the code and images that you wrote in this video?
Thank you for your videos, they really are very useful.
Thank you so much for this great explanation.
Sir, how would we do group facial expression recognition? What changes do we have to make in this code?
Thank you very much, you explain very well, keep up the good work!
Well explained, very clear
Impressive Dr. Shan. Keep it up
Which model do you prefer for FER, considering training complexity and implementation over the test dataset?
Can we have a GitHub link for this code? There isn't any link in the description.
write the code
Thank you so much, you should be teaching programming!
@@sashinkakumarage4739 how ?
Do you have Telegram?
Great explanation. It would be great to share the code.
Hello Dr.Shan, is it possible for you to share the validation accuracy and validation loss instead of the training accuracy and loss? TIA.
Hello, can you also make a tutorial on bio-signal/physiological-based emotion detection? I'm highly interested in that.
If your model is taking a long time to train,
switch to Google Colab and select GPU as the runtime type; your model will then train much more quickly.
I guess this will be for the Android: "I detect sadness Dr Shan, would you like a back massage?"
Bro, where is the pre-trained model which you used for the new model load??
Thank you for the tutorial! How can I cite your work in my writing?
Hello sir. I'm making a model for detecting faces with a mask, with a face shield, and without either. I used this video, but I have overfitting problems. What did I do wrong?
Sorry sir, I have a question: what are the feature and label (at the x and y append)? I can't run them. Or is it because the dataset I used is different? Thanks.
Hello sir,
While training the model, i.e., using the fit method, it started running and then gave a ResourceExhaustedError.
Please help me figure out how to solve this error.
Why don't we consider the class imbalance in the dataset classes? I'm sure that affects the model training (let me know if I'm wrong).
Thanks for the well-explained videos.
Hi, I have a problem: when I try to fit the model I get this error: MemoryError: Unable to allocate 16.1 GiB for an array with shape (28709, 224, 224, 3) and data type float32
I always see the same output from the neural network. How many input and hidden layer units do I have to use?
Sir, can you explain how to test the model using the whole testing dataset?
Hello sir, at 32:00 in the video, after downloading the dataset, you create a "Training" folder again. Do we need to extract all the "train" files of the archive into the Training folder?
Even after applying transfer learning, the per-epoch accuracy starts at 0.1-0.2 and doesn't increase even after 30 epochs. What should I do?
At the end of the video you run it and it reads your emotion in real time. Does any part of the code in the video actually allow it to detect emotion in real time?
Good lecture
How can we do this not from a webcam but from a streamed feed of data? E.g., I set up my model as an API endpoint, incoming video data (say from a security camera) is sent to the API, and the endpoint receives it and sends the result back to the user as one word (angry, sad, etc.).
Would be interested in seeing content on bio-signals as well.
The length of the training data is 0. What could be the possible issue, and how do I resolve it? Can you share the GitHub link for the code?
Hello sir, I ran the code and got the error below; please help me:
error: OpenCV(4.8.1) D:\a\opencv-python\opencv-python\opencv\modules\objdetect\src\cascadedetect.cpp:1389: error: (-215:Assertion failed) scaleFactor > 1 && _image.depth() == CV_8U in function 'cv::CascadeClassifierImpl::detectMultiScale'
Very interesting!
I have a message: tensorflow/core/platform/cpu_feature_guard: this TensorFlow binary is optimized with oneAPI Deep Neural Network Library to use the following CPU instructions in performance-critical operations: AVX AVX2
Please help me sir 🙏
Thank you so much, this video was very helpful for me. Is there a chance you could share the code and/or your pre-trained model? Keep up the good work!
Did you get the trained files already? If yes please share with me.
@@mehmetfidanci8943
@@mehmetfidanci8943 I have done transfer learning with a checkpoint, but I want to use the webcam to detect.
Is there any way to get the "Anaconda Prompt" terminal on Mac? I've only ever seen it open on Windows.
Can you also make a video on domain adaptation and transfer learning? For example, I have a dataset of emotions from a Western population and I want to apply the model to an Asian population.
+1
I can't get further with the epochs; I've tried a lot and I'm facing this error:
ValueError: Shapes (None, 1) and (None, 1000) are incompatible
Friend, congratulations on your video. I'm also working on this issue. Do you have a tip on how to store these emotions in a database?
Hi sir, did you get your answer?
I am getting this error, please help me:
MemoryError: Unable to allocate 32.2 GiB for an array with shape (28709, 224, 224, 3) and data type float64
Assalamu alaikum!
Sir, I'm new to this field and our teacher assigned this project. I followed your instructions from the start of the video, but at the beginning of the implementation, where you set the image shape, an error occurs which I cannot identify. Kindly help me, sir; I have no teacher to guide me.
What algorithm did you use to implement this?
How do I get an image dataset of differently abled children, and how can we classify them? Each person expresses emotions differently.
Hey there, great work! For some reason, even though the code runs, the live video shows the rectangle around my face and constantly writes "happy", while Jupyter writes "face not detected"... Maybe you can help me out :/
I'm also facing a similar problem where it shows "surprise" constantly and Jupyter keeps writing "face not detected".
@@indiananime3125 did you find a solution for this?
@@babayaga2358 No
Please, can someone help? I am also getting the same thing even though I followed exactly what is shown in the video.
new_model.fit(X, Y, epochs=25)
I'm facing a ResourceExhaustedError with this line. What should I do?
When I run the code, the live video shows a rectangle around my face and the "happy" expression is displayed by default. The emotion doesn't change even when I change my expression, and the Jupyter notebook displays "face not detected". Please, can someone help me with this?
Hey, can you help me out? While using the same code I am getting a memory error during normalization of the NumPy array, for a dataset of size 12k. What should I do?
I guess you should also make a video on human emotion recognition.
Please make a video of emotion recognition based on bio/physiological data
I have done the tutorial in multiple notebooks and with different CPU and GPU setups, but the accuracy is just 14% for the first 5 epochs.
55:57 you got this!!
is "create_training_Data()" will run lots of time?
Thank you so much for all this. I had a doubt: X = X/255.0 is giving a memory error, so I tried a for loop for the iteration. Is there any difference between scaling each element, like X[0] = X[0]/255.0 (iterating over all elements), and X = X/255.0?
can u give the code for this?
@@austinsaldanha8884 i used the same code he's shown in this video
@@saurabhjadhav3680 I am getting memory error that it cannot allocate 32 gb so how did u fix it? i was asking the code for the for loop
@@austinsaldanha8884 I'll share in sometime
@@saurabhjadhav3680 bro?
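Since the for-loop version was asked for a couple of times, here is a sketch of normalizing in place (assuming X is already a float NumPy array of images). Numerically it gives the same result as X = X/255.0; the only difference is that no second full-size array is allocated:

import numpy as np

# X is assumed to already be a float NumPy array of shape (N, 224, 224, 3).
# In-place division, one image at a time, avoids the giant temporary array
# that X = X / 255.0 would create.
for i in range(len(X)):
    X[i] /= 255.0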
Could you connect this code you've done in TensorFlow to a web application, or would you have to code it in a whole different IDE?
Yes, of course. You can connect it to a web application using Flask or any other framework. You don't need to code it in another IDE; the reason to use a Jupyter notebook is for better runtime visualization.
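If anyone wants a starting point, here is a rough Flask sketch (not the code from the video; the model file name, route, and emotion class order are assumptions and must match how you actually trained):

import numpy as np
import cv2
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("final_model.h5")  # assumed saved-model file name
# Class order must match the label order used during training (this is just a guess).
emotions = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an image file in the multipart form field "image".
    data = np.frombuffer(request.files["image"].read(), np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    img = cv2.resize(img, (224, 224)) / 255.0          # match the model's input size
    pred = model.predict(np.expand_dims(img, axis=0))  # add batch dimension
    return jsonify({"emotion": emotions[int(np.argmax(pred))]})

if __name__ == "__main__":
    app.run(port=5000)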
You are so intelligent, God bless you.
Assalamu Alaikum
Could you please make a video on food ingredients recognition ?? Please
Thank you sir.
Can you make videos about yolo?
Thank you for your interest; we do have one video on object detection using SSD-MobileNetV2.
I will try to upload a video tutorial on YOLO soon. Keep supporting us like this. Stay blessed.
@@deeplearning_by_phdscholar6925 thank you sir..
@@deeplearning_by_phdscholar6925 I am a student at Chung Ang University Seoul Campus, could you share with me how I can reach you (email address or phone#) thank you so much
@@deeplearning_by_phdscholar6925 I need this, can you provide it to me?
The code is running, but in the camera the emotion is stuck on Fear and is not changing along with my emotion. Can anyone help me?
33:00 Anaconda Prompt won't allow me to run the command saying that it isn't a runnable program
Sir, I don't know how your code runs without throwing a variable error, as you are using the 'face_roi' variable inside the loop scope and then trying to access that variable after that scope.
Hi Rohit, can you please suggest a solution for this? How do I solve this error?
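One common fix, as a sketch (not the exact code from the video): define face_roi before the loop and only use it afterwards if a face was actually found:

import cv2

def get_face_roi(frame, face_cascade):
    """Return the detected face region from a frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

    face_roi = None  # defined before the loop, so it exists even when faces is empty
    for (x, y, w, h) in faces:
        face_roi = frame[y:y + h, x:x + w]  # keep the last detected face
    return face_roi

Then only run the prediction when get_face_roi(...) returns something other than None, which also avoids the "face not detected but still predicting" behaviour mentioned above.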
I'm getting a memory error every time after executing this line: new_model.fit(X, Y, epochs=25)
did you fix it ? I have the same error
How do I run the real-time live video code in Google Colab? I'm doing all of this in Google Colab. I know how to open the webcam in Colab, but how do I connect the camera-capture code with my code?
For some reason, when I come to the actual training part, my accuracy starts only at about 17 percent. And then after training for 150 epochs, I've only been able to get the accuracy up to about 32 percent. Any suggestions of what could possibly be going wrong?
Sir, will you please tell me how to run the training epochs? I am not able to complete even 1 epoch.
@@anshkumar2848 Maybe it's because you are not limiting the number of trainable parameters: not locking (freezing) the previous layers makes the whole model trainable, which, for obvious reasons, your computer can't handle.
@@sumitsoni7559 ok sir got it
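To make the freezing point above concrete, here is a minimal sketch assuming a MobileNetV2 backbone (the exact base model in the video may differ); use categorical_crossentropy instead if your labels are one-hot:

import tensorflow as tf

# Pre-trained backbone without its classification head.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base_model.trainable = False  # "lock" the previous layers: far fewer trainable parameters

# Small trainable head for the 7 emotion classes.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels assumed
              metrics=["accuracy"])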
Can we have a video on malaria detection using machine learning?
Hi, I have tried your code, but at the end I am getting a "'break' outside loop" error. Could you help me with this?
Can you please provide the GitHub link or share the notebook with the code?
How can I access your source code?
Please share the video on bio-signals and the intensity of emotions.
Sir, when I execute the last part it says "face not detected". What should I do? Please help me.
MemoryError: Unable to allocate 26.8 GiB for an array with shape (23875, 224, 224, 3) and data type float64
!
42:01 After calling the function and printing the length of the training data, I am getting 0 as the output. What could be the possible issue?
did you get a solution for this
same
Check for any errors in the code like a typo..
It happened to me too, but the reason was that I forgot to provide the image size in the previous steps.
Please, which algorithm did you use to train on the data?
At 47:53, while executing that step, I got the error "Unable to allocate 27.2 GiB for an array with shape (24257, 224, 224, 3) and data type float64". How do I resolve it?
Try with a smaller number of images; it will work.
Where can I get the code?
Sir, which CNN model have you used?
If I wanted to present this project and say it is using AI, would that be correct?
Hi, thanks for your tutorial! I have a problem: I cannot crop the face like you did with the face_roi part of the code. When I run "plt.imshow(cv2.cvtColor(face_roi, cv2.COLOR_BGR2RGB))", the output is just some meaningless pixels, so I couldn't produce the required data for final_image. Can you help me, please?
have you got the solution?
Thank you for your video, but I got an error in the fit function: ValueError: Input 0 is incompatible with layer model_6: expected shape=(None, 224, 224, 3), found shape=(None, 50, 50, 1). I couldn't fix it.
img = np.expand_dims(img, axis=0)  # add a batch dimension: (H, W, C) -> (1, H, W, C)
img = img / 255.0                  # scale pixel values to [0, 1]
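Adding to the snippet above: the error says the model expects 224x224 RGB but got 50x50 grayscale, so resizing and converting to 3 channels before normalizing is probably the missing step. A rough sketch (the file name is just an example):

import cv2
import numpy as np

img = cv2.imread("some_face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
img = cv2.resize(img, (224, 224))             # match the model's expected spatial size
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)   # 1 channel -> 3 channels
img = np.expand_dims(img, axis=0)             # add batch dimension: (1, 224, 224, 3)
img = img / 255.0                             # scale to [0, 1]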
+rep man goodwork
Please help. How do I run the test to get its accuracy? Did someone complete it?
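If you have built test arrays the same way as the training data (X_test and y_test are placeholder names), one way to get a test accuracy, assuming the model was compiled with an accuracy metric:

# Evaluate on held-out data prepared exactly like the training arrays.
loss, accuracy = new_model.evaluate(X_test, y_test, verbose=1)
print(f"test loss: {loss:.4f}, test accuracy: {accuracy:.4f}")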
Hi, my model is taking around 10-12 hours to train one epoch; is there any particular reason for that? (I'm using a 2020 MacBook Pro.)
Same with me bro, I think it has something to do with the optimizer or the number of images in the dataset.
Hey, how did you solve it then? I am getting the same issue.
So did you manage to save the model? Cause i'm getting a tf issue with custom layers
@@sashinkakumarage4739 no I never solved it sorry! Wish u luck
Can you tell me which algorithm you have used in this project for facial emotion recognition?
While compiling the model, my loss and accuracy are both 0.
Please help me out.
While trying to train, I got this error
ValueError: Found unexpected instance while processing input tensors for keras functional model. Expecting KerasTensor which is from tf.keras.Input() or output from keras layer call(). Got:
Does anyone know how to fix this?
do you know how to fix this
I tried this tutorial now, and given that FER2013 is more than double the size it was when this video was made, the steps didn't work for me :/ The dataset is just too large, even with an RTX 3050 6GB. I was able to make it work by importing the data using tf.keras.utils.image_dataset_from_directory() though.
hey can you share the code please ?
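I don't have the full notebook handy, but roughly this is the loading part I swapped in; the "Training" folder layout (one sub-folder per emotion) is the same as in the video, while the batch size and label mode are my choices:

import tensorflow as tf

# Streams images from disk in batches instead of loading the whole array into RAM.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "Training",              # folder with one sub-folder per emotion class
    image_size=(224, 224),
    batch_size=32,
    label_mode="int",
)
# Normalize on the fly and prefetch for speed.
train_ds = train_ds.map(lambda x, y: (x / 255.0, y)).prefetch(tf.data.AUTOTUNE)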
Hello sir, how did you set img_array = imread("Training...)? It is showing an error on img_array.shape. Please, can anybody help solve this?
I've also been stuck on this error for 2 days.
Sir, my camera is not opening after execution of the code; it's only giving output in the form of data, for example: 1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 39ms/step
1/1 [==============================] - 0s 45ms/step
face not detected
1/1 [==============================] - 0s 36ms/step
1/1 [==============================] - 0s 34ms/step
Sir, how do I deal with this? Please reply.
Same for me too. Have you solved the issue?
Sir, why is validation accuracy fluctuating?
@@llqq1744 Have you managed to figure it out?
Hello kind sir, I am facing this error:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:967: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
Can I get some help on this error from any kind sir?
I am facing this error when I try to run my webcam.
@@Justin-yk1jk me too, have you managed to fix it?
Hi sir, I also have the same problem. If you solved it, please help me and send the information.
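For everyone hitting this imshow assertion: it usually means an empty frame was passed to cv2.imshow, i.e. the camera was not read successfully. A quick standalone check (camera index 0 is an assumption; try 1 or 2 if it picks the wrong device):

import cv2

cap = cv2.VideoCapture(0)            # assumed camera index; change if needed
if not cap.isOpened():
    raise RuntimeError("Cannot open webcam")

while True:
    ret, frame = cap.read()
    if not ret or frame is None:     # empty frame would trigger the size.width>0 assertion
        print("no frame received from camera")
        break
    cv2.imshow("Face Emotion Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()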
Hi. I tried your code on 9 thousand images. The accuracy is 93%, but it predicts incorrectly. Can you help me?
It's a case of overfitting: your model got too familiar with the dataset, hence the 93% accuracy, but it is not able to handle unseen data and predicts incorrectly.
See the validation accuracy.... Not the testing accuracy
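To actually see that gap during training, a minimal sketch using the X and Y arrays from the video (the 20% split is my choice, and the metric names assume the model was compiled with an accuracy metric):

# Hold out 20% of the data for validation so overfitting shows up during training.
history = new_model.fit(X, Y, epochs=25, validation_split=0.2)

# A large gap between the two numbers suggests overfitting.
print(history.history["accuracy"][-1], history.history["val_accuracy"][-1])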
Can I ask you which model you used?
How can one contact you, Professor?