Code for Data Collection:
clc
clear all
close all
warning off
c=webcam;
x=0;
y=0;
height=200;
width=200;
bboxes=[x y height width];
temp=0;
while temp<=300   % capture 300 images per gesture (adjust the count as needed)
    e=c.snapshot;                                  % grab a frame from the webcam
    IFaces=insertObjectAnnotation(e,'rectangle',bboxes,'Processing Area');
    imshow(IFaces);                                % show the frame with the capture region marked
    filename=strcat(num2str(temp),'.bmp');
    es=imcrop(e,bboxes);                           % crop the processing area
    es=imresize(es,[227 227]);                     % resize to AlexNet's input size
    imwrite(es,filename);                          % save into the current gesture folder
    temp=temp+1;
    drawnow;
end
When I run this, I am unable to load myNet1. How do I do that?
@@anandidatta9310 First create the database, which contains different subfolders where the different gestures are stored. Then run the code for training, and then the code for testing. Make sure the database and the trained model are all present in the same directory.
Can you help me, please?
How can I get the dataset?
@@prajwalitaborah9361 You can create the dataset on your own, as explained in this video 😊 If you want a more detailed explanation, you can refer to this link, th-cam.com/video/BU4NHgxPyLE/w-d-xo.html , where I explained the database generation from scratch.
Hope this will be helpful! Happy Learning
Good night, day or afternoon, depending on when you read my message. I am from Mexico. I carried out this project as practice for my school, but I get an error mentioning CUDA. I investigated, and it concerns which NVIDIA graphics cards are required. Is there a way to run the program without a dedicated graphics card?
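For reference: trainNetwork can be forced to run on the CPU, so no NVIDIA card is required. A minimal sketch; only the 'ExecutionEnvironment' option is the point here, the other settings just mirror the ones used in the video:

```matlab
% Train on the CPU instead of a CUDA GPU (slower, but needs no NVIDIA card)
opts = trainingOptions('sgdm', ...
    'InitialLearnRate',0.001, ...
    'MaxEpochs',20, ...
    'ExecutionEnvironment','cpu');   % default 'auto' picks the GPU if one exists
```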
It works smoothly thanks sir!
Great to hear Batuhan Mete! Happy Coding :-)
Bro can you share the explanation of the code?
Can I get the folder of hand gesture images?
Hello, when I run the training code it shows an error at line 6, i.e., the layers are not defined. What should I do to resolve this issue?
Hello.
Do you have any documentation of this, explaining the concept, the network and everything? That would be really helpful for me.
Thank you
Sir, do you have a paper for this, like some discussion/analysis? Thank you
Where should we add the dataset?
Is there anything I need to do to increase my accuracy? I don't know if it's from my dataset. I followed the instructions to capture using the data collection code, but my accuracy is still very low.
If I use this same program, what changes do I have to make?
Can we supply the pictures from a mobile camera?
Is any particular size required for the pictures?
Thank you for sharing. I need to classify eye directions, but first I need to find the bounding box of the eyes within the face. How can I do this? Can you make another video about this? Actually, I solved that problem and am now debugging my code:
clc
clear all
close all
fpath='C:\Users\tulip\OneDrive\Masaüstü\merve özdaş\gaze_tracking\veriler';
data=fullfile(fpath,'Datap');
tdata=imageDatastore(data,'IncludeSubfolders',true,'LabelSource','foldernames');
[trainingImages,testImages]=splitEachLabel(tdata,0.8,'randomize');
net=alexnet;
layers=[imageInputLayer([227 227 3]) % AlexNet's conv layers expect 3-channel 227x227 input
net.Layers(2:end-3)                  % a SeriesNetwork must be indexed via .Layers, not net(2:end-3)
fullyConnectedLayer(4)               % 4 gaze-direction classes
softmaxLayer
classificationLayer()
];
opt=trainingOptions('sgdm','MaxEpochs',20,'InitialLearnRate',0.0001,'MiniBatchSize',64);
training=trainNetwork(trainingImages,layers,opt);
predictedLabels=classify(training,testImages);
accuracy=mean(predictedLabels==testImages.Labels)
allclass=[];
% for i=1:length(testImages.Labels)
% I=readimage(testImages,i);
% class=classify(training,I);
% allclass=[allclass class];
% figure(1),
% subplot(17,17,i)
% imshow(I)
% title(char(class))
%
% end
detector = vision.CascadeObjectDetector(); % Create a detector for face using Viola-Jones
detector1 = vision.CascadeObjectDetector('EyePairSmall'); %create detector for eyepair
cam=webcam;
while true % Infinite loop to continuously detect the face
vid=snapshot(cam); %get a snapshot of webcam
vid = rgb2gray(vid); %convert to grayscale
img = flip(vid, 2); % Flips the image horizontally
bbox = step(detector, img); % Creating bounding box using detector
if ~ isempty(bbox) %if face exists
biggest_box=1;
for i=1:size(bbox,1) %find the biggest face (rank() computes matrix rank, not the row count)
if bbox(i,3)>bbox(biggest_box,3)
biggest_box=i;
end
end
faceImage = imcrop(img,bbox(biggest_box,:)); % extract the face from the image
bboxeyes = step(detector1, faceImage); % locations of the eyepair using detector
subplot(2,2,1),subimage(img); hold on; % Displays full image
for i=1:size(bbox,1) %draw all the regions that contain face
rectangle('position', bbox(i, :), 'lineWidth', 2, 'edgeColor', 'y');
end
subplot(2,2,3),subimage(faceImage); %display face image
if ~isempty(bboxeyes) %check if an eyepair is available
biggest_box_eyes=1;
for i=1:size(bboxeyes,1) %find the biggest eyepair
if bboxeyes(i,3)>bboxeyes(biggest_box_eyes,3)
biggest_box_eyes=i;
end
end
bboxeyeshalf=[bboxeyes(biggest_box_eyes,1),bboxeyes(biggest_box_eyes,2),bboxeyes(biggest_box_eyes,3)/3,bboxeyes(biggest_box_eyes,4)]; %keep only the left third of the eyepair width
eyesImage = imcrop(faceImage,bboxeyeshalf(1,:)); %extract the eye region from the face image
eyesImage=imresize(eyesImage,[227 227]);
eyesImage=cat(3,eyesImage,eyesImage,eyesImage); %img is grayscale, so replicate the channel to match the 3-channel input
out=classify(training,eyesImage);
subplot(2,2,2),subimage(eyesImage); hold on; %reuse the existing figure instead of opening a new one each frame
title(string(out))
end
end
end
I managed, but it still couldn't give exact results; for example, when I look up it can report down. Maybe my samples are not enough or not specific. If I send you my project, can you check it?
Error using load
Unable to read MAT-file ..\myNet1.mat. Not a
binary MAT-file. Try load -ASCII to read as text.
I keep getting this error message when I run the code. Any suggestions?
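For reference, that error usually means myNet1.mat was not actually written by MATLAB's save (e.g. an empty or renamed file). Re-saving the trained network explicitly avoids it; a sketch, assuming the trained network variable is named myNet1:

```matlab
% After training completes, write a proper binary MAT-file:
save('myNet1.mat','myNet1');

% In the testing script, restore the variable before calling classify:
load('myNet1.mat','myNet1');
```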
How can we do the same for an audio signal, so as to detect the language of the signal? I am taking English, Hindi, Tamil and Telugu. I downloaded the audio signals for each of the above, but how do I write the code?
Where can we find the dataset?
Hi @RAKSHITADSHIVAPOOJI, as explained in the video, you can generate your own dataset; please refer to the pinned comment for the Data Collection code.
Hello, the code is working; however, the title that should display the folder name is not being shown. How can I fix this?
We are getting an error in the testing code.
Hello sir, it is showing an unrecognized webcam.
Hello, I have a small doubt. If I increase the number of layers from 7 to 15, do I have to train the whole system again? Also, can the problem with identifying the fingers be sorted out by using a higher-clarity webcam and setting a threshold value?
Is there any change to the code if I want to use a phone camera? Because the webcam has a low resolution.
Hello Sir. This is my first time working on MATLAB and I want to ask you what am I doing wrong because when I click on "Evaluate Section" in the testing part I get an error that says: "Unrecognized function or variable 'myNet1'.". When I wrote the code for training I saved the script but I also saved the workspace as myNet1.mat in the right folder and it appears on the left just like in this tutorial. With the code for Data Collection I took the photos in the right folders just like you did and I didn't have any problems with that. And now in the end when I want to run the Testing code I get the error. I really hope you can help me with this, thank you in advance!
Check whether you have written load myNet1; or not.
@@KnowledgeAmplifier1 Same error; I have written the load command also.
@@KnowledgeAmplifier1 I am getting same error, please suggest what to do.
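If the load line is present but 'myNet1' is still unrecognized, the MAT-file may store the network under a different variable name. whos can list what is actually inside the file (a sketch):

```matlab
% List the variables stored in the MAT-file without loading them
vars = whos('-file','myNet1.mat');
disp({vars.name})   % the trained network must appear here as 'myNet1'
```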
Hi Sir, what function can I use to set a threshold for each label? Thanks.
I did not get your question. Can you explain in detail, please, so that I can help you?
Can you please give the code that is shown on the beginning of the video
The codes are already posted in the description box & in pinned comment SAWAJ :-)
Hello can I ask you something?
Is it possible to create an if condition inside of this program?
For example when I make "2" with my hand then an if condition works.
Or could we store which characters that we made since execution?
Could you help me pls?
Why not? This can be used as a base model; you can play with it :-)
@@KnowledgeAmplifier1 I tried to add this if condition inside of a while loop;
label = classify(MyNet1,es);
if label == 5
k = k + 1;
end;
imshow(IFaces);
but the program does not get the value or character inside the label variable.
Do you have any advice about how to do?
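For reference: classify returns a categorical label, so comparing it with the number 5 never matches. Comparing against the category name as text works; a sketch, assuming the gesture folders (and hence the labels) are named '1' to '5':

```matlab
label = classify(MyNet1,es);   % categorical, e.g. the category named '5'
if label == '5'                % compare with the category name, not the number 5
    k = k + 1;
end
imshow(IFaces);
```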
What method do you use?
@ustadzgt5891, I have used Transfer Learning (on AlexNet)
Thanks for the video can I get the dataset pls
Hello Candy TECH, I don't have the dataset as of now, but I have explained and shared the code to generate the dataset (check the pinned comment). Following that same process, you can generate the dataset yourself with your hand; all you need is a webcam (if MATLAB is not able to access the webcam, please turn off the antivirus protection on the webcam and try to execute the code again)...
Which algorithm is used?
AlexNet (in.mathworks.com/help/deeplearning/ref/alexnet.html )
I am getting an error at the g = alexnet line.
Hi @aMVK_shots, what error exactly are you getting ? If related to installation issue, then you can refer to the "Download AlexNet Support Package" section in the following documentation link:
in.mathworks.com/help/deeplearning/ref/alexnet.html#bvnyo6s
Sir can you please send me code for "Design of ECG simulator and compare the obtained datasets with physionet".
Thank you for your work. It is really helpful. Can I use these codes for my projects?
Yes you can! Happy Learning :-)
Can u please zip the hand dataset folder and share it over here...
Hello Swathi Vanneladas, I don't have the database folder currently, but you can regenerate the database as explained in the video 😊 If you want more clarity, you can check this video, where I explained how to create this kind of database from scratch -- th-cam.com/video/BU4NHgxPyLE/w-d-xo.html
Hope this will be helpful! Happy Learning
bro dataset link
Hello Galwin Madurai, please check the pinned comment , the code to generate the dataset is shared there :-)
There is an error in g=alexnet.
Did you solve it?
Hi @johnlaurencealcantara8417, what error exactly are you getting ? If related to installation issue, then you can refer to the "Download AlexNet Support Package" section in the following documentation link:
in.mathworks.com/help/deeplearning/ref/alexnet.html#bvnyo6s
How do I make the whole screen the 'processing area'?
You can remove the code that crops the captured image in that case, and train AlexNet on the snapshot image directly. But remember that the background might have different contrast variation in different situations; to get good accuracy then, you have to make sure that the training set covers a wide range of possible scenarios.
Happy Learning :-)
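A minimal sketch of that change inside the capture loop, assuming the same variable names as the pinned Data Collection code (the imcrop step is simply dropped and the whole snapshot is resized):

```matlab
e  = c.snapshot;              % full frame, no cropping
es = imresize(e,[227 227]);   % scale the whole view down to AlexNet's input size
% ...then save/classify es exactly as before
```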
@@KnowledgeAmplifier1 I'm trying to make gait recognition with this code. Thanks a lot!
How much time does it take to train on the samples? It's been 30 min since I began.
It will finish the 20 epochs in nearly 45 minutes.
Please can I get the dataset. Thanks
Hello Matthias , I don't have the dataset currently , you can generate your own dataset as I explained in the video using the code given in the pinned comment (refer Code for Data Collection)
Hope this will be helpful. Happy Learning :-)
@@KnowledgeAmplifier1 Thank you very much. I will do that
Please send dataset
Hello @rakshitashivapooji217, I don't have the dataset now; as explained in the video, you can easily generate your own dataset using the code provided in the pinned comment in the comment section.
@@KnowledgeAmplifier1 It is working, but it is not generating or capturing images.
Is it possible if I change it to dog body language?
Yes , you can :-)
@@KnowledgeAmplifier1 Good day sir, I'm back again with another concern: how can I deploy this on Android using Simulink?
Can I get the dataset
I currently don't have the dataset, but I have explained how to create it (just run the Code for Data Collection to take as many images as you want for the different gestures... since taking that many photos manually would be a boring job, the code is there to simplify the process).
Can you provide me with a report on this project?
Sir, I have not created any report, but the code is posted in the description box & in the comment section.
You might get some help from this documentation -- Transfer Learning Using AlexNet
www.mathworks.com/help/deeplearning/ug/transfer-learning-using-alexnet.html
Happy Coding :-)
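For readers following along, the transfer-learning step from that documentation amounts to swapping AlexNet's last layers for the new classes. A sketch, assuming 5 gesture classes stored as subfolders of a 'Database' folder (the folder name and class count are placeholders):

```matlab
g = alexnet;                          % pretrained network (needs the support package)
layers = g.Layers;
layers(23) = fullyConnectedLayer(5);  % replace the 1000-class fc8 with 5 gesture classes
layers(25) = classificationLayer;     % fresh output layer for the new labels

allImages = imageDatastore('Database', ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
opts = trainingOptions('sgdm','InitialLearnRate',0.001,'MaxEpochs',20);
myNet1 = trainNetwork(allImages,layers,opts);
save('myNet1.mat','myNet1');          % so the testing script can load it
```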
Hello, let me ask: the thousands of pictures above, did you take them with your phone, or were they auto-captured with the camera code in MATLAB?
With the code. Just go through the video; I have explained it clearly and given all the codes... Happy Coding :-)
@@KnowledgeAmplifier1 I am actually training the data using Google Colab, based on a CNN model, in Python... but the demo accuracy is not high... I haven't focused on the subject.
I don't know how to do the same in Python, but this video might help you do it in Python:
th-cam.com/video/dO5GU9pXoIY/w-d-xo.html&ab_channel=KrishNaik
Hope this will be helpful.
Happy Coding :-)