If voice is the future of computing, what about those who cannot speak or hear? I used deep learning with TensorFlow.js to make Amazon Echo respond to sign language.
Absolutely brilliant! That's what I call problem solving.
Hey Abhishek, you can get a text output of Alexa's speech from the developer panel. Could even bypass the Amazon Echo entirely - look up the process of building Alexa on a Pi.
a rare genius.
i really wish to see you opening your own 'idea lab' soon in the future.
It is nice to see someone finally heading in the right direction for the deaf and hard of hearing. However, the vast regional differences between sign languages, the speed of signers, and the processing power required could make it extremely difficult. Still, it's a nice start. It seems like tech companies would be extremely interested in this technology, but I suspect they are already working on it in secret.
You're a wiz! Excited to see upcoming projects
Great job! Progress is fast, but it does not mean we should leave anyone behind. Amazing thinking.
Hi Abhishek, your project is absolutely brilliant. Can you give us some technical details? How long did you train your model? Will you share your code? I'm sure that could help a lot of people all around the world, not only Alexa users.
Really great work, combining two developing technologies to solve a real problem. Really curious about the advances in visual interpretation of sign language as the data set grows.
Dude, you are genius, keep up the good work! 👍
Brilliant creation of accessible tech for those without speech!
you come up with some cool ideas man
Fantastic! Congrats on your work, very inspiring. Question: not sure how it works with English sign language, but here in Brazil there are many folks who can't read Portuguese, as they've been educated in Libras (Brazilian Sign Language). I'd love to know your thoughts on how to present Alexa's voice to these people. Thanks!
I was wondering if a braille display could be connected to the computer as well? This way a deaf-blind person can use the text to speech. Thanks for making things more accessible.
I am doing a similar kind of project. Great to know that you already did it with such perfection.
Abhishek, how did you increase the accuracy for hand gestures that are not static?
Did you use multiple cameras, and which software applications did you implement?
Hi Abhishek, this is an excellent idea. I am a deaf BSL user from the UK and also a software developer, and I would love to adapt this to recognise BSL, as well as help see if we can miniaturise it, such as by using a Raspberry Pi or other options. Please let me know if there is anything I can help out with on this, even if it's just letting us know that this has been open sourced :-)
You can link to Alexa directly from the laptop - there is no need for the Echo, just a decent audio output device (e.g. USB speakers). If you also want audio input for other users, get a new-generation laptop with built-in far-field microphones.
Hello, this is absolutely amazing. I'm trying to do something close to this for a project. Can I ask how you made Alexa respond to the app you created, and how long it took you to train it?
I can't wait for this! We got an Echo Dot for Xmas, and it has no value to me because of my hearing and speech impairment.
My husband is deaf, and we would like to get this app that works with Alexa. Will this work on the Amazon Echo Show, a tablet, or both? What is the name of the app? I want to get this for him for Christmas. Thank you.
Great idea. But as you are using American Sign Language, would it be able to read sign language from around the world, like British Sign Language?
Technically this project responds to whatever gestures (sign-language) you train it on. So you could use any Sign Language you want and it should work. This is all done directly through your web browser.
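To make the "train it on whatever gestures you want" idea concrete, here is a minimal sketch of the transfer-learning approach used in browser demos like this one: a pretrained network turns each webcam frame into a feature vector, and a k-nearest-neighbour classifier matches new frames against the examples you recorded for each sign. The real project runs in TensorFlow.js in the browser; this Python/numpy version, with hypothetical names throughout, is only an illustration of the classification step.

```python
from collections import Counter
import numpy as np

class GestureKNN:
    """k-nearest-neighbour classifier over image embeddings.

    In a browser demo the embeddings would come from a pretrained CNN
    (e.g. MobileNet activations via TensorFlow.js); here they are just
    plain vectors so the logic is easy to follow.
    """

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (embedding, label) pairs

    def add_example(self, embedding, label):
        # Called once per captured webcam frame while "training" a sign.
        self.examples.append((np.asarray(embedding, dtype=float), label))

    def predict(self, embedding):
        # Distance from the query frame to every stored example,
        # then a majority vote among the k nearest neighbours.
        q = np.asarray(embedding, dtype=float)
        dists = [(np.linalg.norm(q - e), lbl) for e, lbl in self.examples]
        dists.sort(key=lambda t: t[0])
        votes = Counter(lbl for _, lbl in dists[:self.k])
        return votes.most_common(1)[0][0]

# Toy usage: two made-up "signs" clustered in embedding space.
knn = GestureKNN(k=3)
for v in ([0.9, 0.1], [1.0, 0.0], [0.8, 0.2]):
    knn.add_example(v, "weather")
for v in ([0.1, 0.9], [0.0, 1.0], [0.2, 0.8]):
    knn.add_example(v, "music")

print(knn.predict([0.85, 0.15]))
```

Because the classifier only stores whatever examples you feed it, it is agnostic to which sign language (ASL, BSL, Libras, DGS, ...) the gestures come from.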
Absolutely brilliant
wow man, I am really impressed! keep up the good work!
Hi Abhishek, would it be possible to train this model with German Sign Language (slightly different gestures)?
Hi! Good project. I have a question: how can I change the language (to Russian)? Because the program detects only English words.
Thanks!
Brilliant Abhishek!
Superb piece of work. Keep it up
you could just type right?
Hey Abhishek! You are awesome! I'm your fan now.
Please share your process and code for the gesture-to-voice conversion... please! I want to make it 🙂
Can you share the code, or the approach you used to do it?
I'm trying to do a project similar to this. I was wondering how you trained it. Where did you get the data sets?
Great job Abhishek, I am highly interested to know about the technical details of this project. Do you have a LinkedIn profile where we can connect?
Wow... Tracking motion-based gestures is really hard. How did you do that?
Great idea, my friend! I am working with Watson AI, but I will try this AI tech!
Hi Abhishek, do you have the details of this project posted anywhere?
Did he use 2D or 3D convolution?
Any source code for this? As I want to setup something like this for my workshop. It can often be hard to control an alexa by voice over music, power tools, and while using a mask.
Please, could you explain to me how you did it?
This should go on shark tank to get funded
Cool Idea Bro
Hello, could I know how you did this?
Can I see the source code? I have this AI project for college; it'd be really helpful if you could help me with it.
Where can I get this app? It's a godsend!
Amazing and inspiring project!
Is there a place we can access this for people whose language is ASL?
Amazing work
After watching this I am moving into the AI field ❤️
Good job ... very interesting
My daughter is Deaf. Please, where can I buy your app? Thank you so much !
Hey! Hi, I am working on a similar project and need your help with it. Can you please help us?
great thought!
Just wonderful!!!
Do you know Saraj, just wondering?
Is it true that Abhishek used JavaScript machine learning for this project?
Yes Tensorflow.js was used for this awesome project. All of the ML tasks are processed on device through the browser with no need to send image data to a server.
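Running the model per-frame in the browser produces a noisy stream of predictions, so a demo like this typically debounces them into words before the text is spoken aloud to Alexa or shown on screen. The sketch below is a hypothetical version of that step in Python (the real project would do the equivalent in JavaScript): a sign counts only after it is seen for several consecutive frames, which filters out flicker between classes.

```python
def frames_to_command(predictions, min_run=3):
    """Collapse a stream of per-frame labels into a word sequence.

    A label is accepted only after `min_run` consecutive frames agree,
    and consecutive duplicate words are merged, so a long hold of one
    sign still yields a single word. Hypothetical debouncing logic,
    not the project's actual code.
    """
    words, run_label, run_len = [], None, 0
    for label in predictions:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        # Emit the word exactly once, at the moment the run is confirmed.
        if run_len == min_run and (not words or words[-1] != label):
            words.append(label)
    return " ".join(words)

# Simulated classifier output: one stray "what" frame is too short to count... 
# each sign held for several frames becomes one word in the command.
frames = ["alexa"] * 4 + ["what"] * 3 + ["weather"] * 5
print(frames_to_command(frames))
```

Since every step (embedding, classification, debouncing, text output) runs on the client, no image data ever needs to leave the device.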
Is this app available?
Very, very nice project. I'm so interested in this project; if you can share it with me, thank you in advance!
nice
Ready for steak night?