To anyone following this playlist: my recommendation is to please do the assignments. I was shocked at how little we learn by just watching. I did the assignment and, what can I say, I was stuck many times, but in the end I completed it, and now I regularly do text preprocessing by building my own datasets from Rapid APIs. It gives you so much flexibility to work on a dataset you created yourself.
Ma'am, can you explain, or refer me to some notes or videos on, using APIs to create my own DataFrame?
Hey Hari! The assignment links given above are not directing to the TMDB website, and if I search for TMDB directly on Google, it doesn't work either. Can you tell me how you did that?
Hello, have you saved that code? It's been removed and I need it urgently.
Could you please point me to resources for practice?
@@surajnikam3327 It is already mentioned in the ML playlist created by sir himself.
Again, sir, you are a great person on YouTube. Your explanation in every domain and for every topic is great. I followed your ML playlist A–Z and now I've started watching NLP. I hope you will complete your ML series soon, and this one too, and keep making great series for us on new and emerging things. Thanks a lot, sir!
The session was so good.
The assignment was so, so amazing to do.
Thanks for your hard work, sir.
Your lectures really helped me understand NLP text preprocessing. Thank you so much!
You are a rare gem , I can simply put that in clear short words❤️❤️
Exactly, rarest !!
Hi, could you please make the next video on the same IMDB dataset and show us how to analyze the linguistic features of the training data? I have recently gone through your previous NLP (Movie Review Sentiment Analysis) videos. However, I was quite interested in finding out how we can analyze the linguistic features, and what different algorithms we can apply apart from Naive Bayes on the same IMDB dataset. PS: your videos are amazing! The way you teach the concepts has helped me understand the basics of NLP. Thank you so much!
Your way of explaining shows your conceptual clarity and the effort you put into preparing this topic. Keep it up.
Very good explanation. You explain every single detail; it's very helpful for beginners, and the assignments are also very interesting.
I wonder why I didn't find your channel earlier, but I'm lucky to have it now.
56:58 Can we use the spelling corrector together with stemming? We could get better results with correct spellings and no mistakes.
This series is amazing!
Thank you, you are just awesome. Much waited for this video. You explain things better than other youtubers. Keep it up...!!!
You are really a great teacher, thank you so much for coming up with such informative videos, Thanks a lot
Sir, you are a lifesaver. Thank you so much!
An easy way to remove punctuation:

import string
import re

def remove_punctuation(text):
    # The set of punctuation characters to strip
    punctuations = string.punctuation
    # Remove them using a regular-expression character class
    text_no_punct = re.sub('[' + re.escape(punctuations) + ']', '', text)
    return text_no_punct
You are God for me in learning data science
Please link the notebook in the description, and please complete the NLP playlist.
The series is amazing, sir 👏 Kindly provide the regex lecture link in the description.
Thanks for the great content! One small suggestion: could you also give us some time to write the code you are explaining? Otherwise it stays theoretical.
Your videos are full of knowledge. Thanks a lot for this 🙏 You deserve more subscribers. It could attract more viewers if you divided your videos into smaller parts; people generally don't want to engage with long lectures.
Sir, could you please share the notebook? It is not available at the given link.
Literally, All In One !
Hi Sir. Regarding the assignment, how can we merge the genre IDs and genre names with the movies DataFrame?
I got stuck there.
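One way to attach genre names to the movies DataFrame, sketched with hypothetical column names. Since TMDB returns `genre_ids` as a list per movie, a dict lookup is simpler than `DataFrame.merge`:

```python
import pandas as pd

# Hypothetical stand-ins for the two assignment DataFrames
movies = pd.DataFrame({
    "title": ["Inception", "Up"],
    "genre_ids": [[28, 878], [16, 12]],   # TMDB-style lists of genre IDs
})
genres = pd.DataFrame({
    "id": [28, 878, 16, 12],
    "name": ["Action", "Science Fiction", "Animation", "Adventure"],
})

# Build an id -> name lookup, then map each list of IDs to names
id_to_name = dict(zip(genres["id"], genres["name"]))
movies["genres"] = movies["genre_ids"].apply(
    lambda ids: [id_to_name.get(i, "Unknown") for i in ids]
)

print(movies[["title", "genres"]])
```

If each row held a single genre ID instead of a list, a plain `movies.merge(genres, left_on="genre_id", right_on="id")` would do the same job.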
Thanks!
One suggestion: sir, please make a Udemy course... a data science bootcamp...
Nice assignment, sir. Thank you.
Your lectures are really helpful... all the concepts are very clear.
Awesome lecture 🤗🤗🤗❤️❤️❤️❤️
Thank you. Can you also start a series on web development?
You're just an excellent teacher
Hey, are you working in NLP or another area of Python? I need your help. Can you help me?
It's helpful for me ❤️
You didn't link the regular expressions video in the description. Can you update it?
You are the best sir😊.
You explain really well :)
You are amazing, sir. Love from Pakistan.
Gold content. Thanks for the video.
very detailed explanation. Kudos to you.
Thanks a lot, Nitish... I don't have enough words to express my gratitude.
56:30 The one with 'e' is more probable...
I understood it, but it was confusing me.
And thank you, sir, for such a good video ❤
Sir, the notebook link is broken. Please upload the notebook discussed in the video.
Sir, when will you start the series on deep learning?
Congrats, sir, on the third video 🥳🥳
so far so good.....awesome x 100
Hello sir, can you reshare the code? The link you shared has no code... Thanks!
The way of teaching is cool, loved it.
One doubt at 12:00: remove_html_tags() only removes the tags, but in real situations, when we scrape data from a website, it contains tags like style, script, etc., whose contents aren't required in text mining or NLP.
Just wanted to know if there is a better approach or method that could solve this.
Thanks in advance to everyone who tries to solve this.
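One common alternative to the regex approach, assuming BeautifulSoup is available: drop the script and style elements entirely before extracting text, since a plain tag-stripping regex keeps their contents:

```python
from bs4 import BeautifulSoup

def html_to_text(html):
    soup = BeautifulSoup(html, "html.parser")
    # Remove script and style elements entirely, contents included
    for tag in soup(["script", "style"]):
        tag.decompose()
    # get_text collapses the remaining tags, keeping only visible text
    return soup.get_text(separator=" ", strip=True)

html = ("<html><style>p{color:red}</style>"
        "<p>Great <b>movie</b>!</p>"
        "<script>track()</script></html>")
print(html_to_text(html))  # -> Great movie !
```

The same idea extends to other invisible elements (e.g. `noscript`, HTML comments) by adding them to the removal list.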
One doubt: in a given dataset we can handle chat words by building a dictionary, but if new data arrives, there should be a way to identify the chat words first and then put them in the dictionary. Or can we only identify these words through tokenization?
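A minimal pure-Python sketch of one possible workflow: expand the chat words you already know, and flag unseen short all-caps tokens as candidates to review and add to the dictionary. The dictionary entries and the heuristic are illustrative assumptions, not the video's list:

```python
# Illustrative sample entries, not a complete chat-word dictionary
chat_words = {"GN": "good night", "ASAP": "as soon as possible", "IMO": "in my opinion"}

def expand_chat_words(text):
    expanded, candidates = [], []
    for token in text.split():
        key = token.upper()
        if key in chat_words:
            # Known chat word: replace with its expansion
            expanded.append(chat_words[key])
        else:
            # Heuristic: short all-caps tokens we haven't seen may be new chat words
            if token.isupper() and 2 <= len(token) <= 5:
                candidates.append(token)
            expanded.append(token)
    return " ".join(expanded), candidates

text, new_words = expand_chat_words("Reply ASAP BRB after lunch")
print(text)       # expanded sentence
print(new_words)  # tokens worth adding to the dictionary
```

The candidates would still need a human (or a frequency check against a vocabulary) to confirm before being added.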
I need the link to your regular expressions YouTube video, please.
You are the best
Where is the video on Regular Expressions?
That was an awesome tutorial. Can you please link to your regular expressions video?
While using the lowercase conversion function shown at 7:23, I get the warning below, even though the conversion succeeds. Can you let me know if there is another way to do the conversion, or can we ignore the warning?
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
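A common fix, sketched on a toy DataFrame with hypothetical column names: the warning usually appears when the frame you are writing to was sliced from another frame, so take an explicit `.copy()` before assigning, or assign through `.loc`:

```python
import pandas as pd

df = pd.DataFrame({"review": ["GREAT Movie", "Bad PLOT"], "label": [1, 0]})

# If the subset was sliced from another frame, take an explicit copy first;
# pandas then knows writes to `sub` are intentional and independent of `df`
sub = df[df["label"] == 1].copy()

# Assign through .loc (plain column assignment on the copy also works)
sub.loc[:, "review"] = sub["review"].str.lower()
print(sub)
```

Ignoring the warning often "works", but it means your write may or may not propagate back to the original frame, so it is safer to be explicit.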
Ignore
I'm having a problem doing the assignment, as I have no idea how to get data into a DataFrame using the API.
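A minimal sketch of turning paginated API responses into a DataFrame. No live request is made here; the hard-coded sample payload just mirrors the `{"results": [...]}` shape that TMDB-style endpoints return, and all field names are illustrative:

```python
import pandas as pd

# In the real assignment you would fetch each page with requests, e.g.:
#   resp = requests.get(url, params={"api_key": KEY, "page": page}).json()
# Here we use hard-coded sample payloads with the same shape.
sample_pages = [
    {"results": [{"id": 1, "title": "Movie A", "overview": "First plot"},
                 {"id": 2, "title": "Movie B", "overview": "Second plot"}]},
    {"results": [{"id": 3, "title": "Movie C", "overview": "Third plot"}]},
]

rows = []
for page in sample_pages:
    rows.extend(page["results"])      # each result dict becomes one row

df = pd.DataFrame(rows)
print(df.shape)  # (3, 3)
```

The pattern is the same for any JSON API: collect the per-item dicts into one list, then hand the list to `pd.DataFrame` in a single call.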
Sir, I have been following you for a long time, and I am glad that I found your channel and am learning so much from you. For that I am grateful, and I thank you from the bottom of my heart.
Until now I have been working with Google Colab, but as I move towards deep learning, I think it's time for me to buy a high-end laptop.
But I am at a loss as to which one I should pick; if I go for an RTX 3080, the price is way too much for me. I've had this confusion for the past few weeks. Can you please suggest a laptop for ML/AI/DL learning projects? My budget is $1400–1500.
I would be grateful.
Or you could make a video on this topic.
Actually, tokenization doesn't work on the whole dataset. Can you write code to tokenize only the reviews in your dataset?
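Tokenizers operate on individual strings, not on a whole DataFrame, so you apply them row by row to one column. A sketch on a hypothetical `review` column, using a simple regex tokenizer to stay dependency-free (`nltk.word_tokenize` can be swapped in if the punkt data is downloaded):

```python
import re
import pandas as pd

df = pd.DataFrame({"review": ["Great movie!", "Not worth it."]})

# A simple regex tokenizer: lowercase, keep alphabetic runs and apostrophes
def tokenize(text):
    return re.findall(r"[A-Za-z']+", text.lower())

# apply runs the tokenizer once per row, on the review column only
df["tokens"] = df["review"].apply(tokenize)
print(df["tokens"].tolist())
```

The same `.apply` pattern works for any per-string preprocessing step (stemming, stopword removal, and so on).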
Amazing video, but where can I download the notebooks from?
I would also request you to share the notebook URLs in the video description.
Hi sir, can you please re-add the data links here, as I am unable to load them?
Please make a video on how a fresher should explain a data science project in an interview.
where is the template notebook?
@campusX: can you please suggest how we can use text for regression (e.g., use comments to predict the number of subscribers)?
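One standard way to use text for regression, sketched on made-up toy data: vectorize the text with TF-IDF and fit a linear regressor on top:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy data: comment text paired with a numeric target (e.g. subscriber count)
comments = ["great channel", "bad audio", "great explanations", "bad pacing"]
subscribers = [1000.0, 200.0, 1100.0, 150.0]

# TF-IDF turns each comment into a sparse numeric vector;
# Ridge then fits a regularized linear model on those features
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(comments, subscribers)

pred = model.predict(["great video"])
print(pred)
```

On real data you would hold out a test set and look at metrics like MAE or R² rather than eyeballing single predictions.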
Can you please provide the solution for this assignment?
Sir thank you so much😊
Couldn't find the Notebook link!
Does anyone know how to apply a word/sentence tokenizer to DataFrame columns? If you know, please reply.
Can we get a PDF of the code that you have written in this video?
I compared both punctuation-removal methods, but they are similar in speed, and sometimes the second one is slower. Why is that?
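A small `timeit` sketch for comparing the two common approaches. `str.translate` is usually faster because it is a single pass in C, but on short strings, and when the regex pattern is compiled or cached, the gap shrinks, which can explain similar timings:

```python
import re
import string
import timeit

text = "Hello!!! How are you??? I'm fine... (really)" * 50

# Method 1: character-class regex, pattern compiled once up front
pattern = re.compile("[" + re.escape(string.punctuation) + "]")
def with_regex():
    return pattern.sub("", text)

# Method 2: str.translate with a precomputed deletion table
table = str.maketrans("", "", string.punctuation)
def with_translate():
    return text.translate(table)

# Both must produce identical output before comparing speed
assert with_regex() == with_translate()

print("regex    :", timeit.timeit(with_regex, number=2000))
print("translate:", timeit.timeit(with_translate, number=2000))
```

If the two come out close in your runs, try a much longer input string; the difference tends to grow with text length.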
great content
Thank you, sir.
Please link the notebook used in this video in the description.
Great
I am not able to find the notebook of the code.
Could anyone please help?
Sir, the TMDB website is blocked in India.
No saved version of the notebook is showing.
Thank you, bhaiya.
I got an OSError while using the spaCy library.
The code link is not found.
Sir, at timestamp 3:30 you said you would provide the notebook. Can you please provide that? Thank you.
In the assignment, does anyone have a solution for how to change the genre IDs to their names?
awesome
Thank you
Nice video👍
Can you please share the Colab file?
Where is the notebook?
Do you have videos on NLP with deep learning?
Yes. Check my playlists
Where is the notebook for this lecture? Could you please just upload the notebook?
Can anyone explain to me how to create the DataFrame for the assignment using this API? PLEASE! 🙏
Hello sir,
please make a video series on the R programming language.
The code used is not available at the link. If anyone has it, please share.
Nice video
Awesome
👍
Thanks sir
The notebook/code is not available!
Sir, you didn't explain why we use a stemmer... you didn't explain why we reduce words to their root form. Is it because we are reducing the dimensionality of our data? Is that correct?
We use a stemmer so that, at tokenization time, we don't count words with the same meaning more than once. If we don't do stemming, our algorithm will consider "walk" and "walking" to be different words, even though they mean the same thing, which is not good for our model. That's why we use stemming. Moreover, it is not dimensionality reduction; we are not reducing the number of columns here. We are cleaning our data, following the principle of "garbage in, garbage out".
@@IqraKhan-xh2cp Keeping multiple words with the same context adds nothing; it won't change anything for the algorithm, and it just increases our dimensions, which is also one reason. A stemmer is not concerned with producing a meaningful word; it only converts words to their root form, which may well be meaningless. Words that share a root are treated as one so that the more important dimensions stand out. 👍🏻
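The point of this thread, in a runnable sketch using NLTK's PorterStemmer: inflected forms collapse to one token, and the resulting stem need not be a dictionary word:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

# Inflected forms of the same word collapse to a single token
print([stemmer.stem(w) for w in ["walk", "walks", "walking", "walked"]])

# A stem need not be a meaningful dictionary word
print(stemmer.stem("studies"))  # -> studi
```

When readable output matters (e.g. for display), lemmatization is the usual alternative, since it maps to real dictionary forms at the cost of being slower.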
I can't find the code.
colab.research.google.com/drive/1sAjdLZStcavDt4ktHe3j_NUllO_yAZ-v?usp=sharing
Sir, I am making a project using the concepts you taught in NLP... In a week I will post a Heroku link in the comments; sir, please have a look and tell me how it is... Please, sir... And thank you 🙏 for teaching us so much.
Sure
In the assignment, how can I change the genre IDs to their names after creating both datasets? @@campusx-official
Sir, can you please share the link to the chat-word list used in the chat-word treatment?
Where is the notebook link?
The link above only shows a CSV file.
How do we use TextBlob on a large dataset?
Has anyone tried the assignment? If so, please reply; I have a few doubts.
@campusX I can't find the code. Can you please give the link?
Thanks, sir.
Can anyone send a link to the notebook? The given link does not work.