To anyone following this playlist: my recommendation is to please do the assignment. I was shocked at how little we learn by just watching. I did the assignment and, what can I say, I was stuck a lot of times, but I completed it in the end. Now I regularly practice text preprocessing by making my own datasets from Rapid APIs. It gives you so much flexibility to work on a dataset you created yourself.
Ma'am, can you explain, or refer me to some notes or videos, on using APIs and creating my own dataframe?
Hey Hari! The assignment links given above are not directing to the TMDB website, and if I search for TMDB directly on Google, it doesn't work either. Can you tell me how you did it?
Hello, have you saved that code? It's been removed and I need it urgently.
Would you please let me know some resources for practice?
@@surajnikam3327 It is already mentioned in the ML playlist created by sir himself.
Thanks!
Again, sir, you are a great person on YouTube. Your explanation in every domain and for every topic is great. I followed your ML playlist A-Z and now I've started watching NLP. I hope you will complete your ML series soon, and this one too, and keep making great series for us on new and emerging things. Thanks a lot, sir!
The session was so good.
The assignment was so, so amazing to do.
Thank you for your hard work, sir.
Your way of explanation shows your clarity of concepts and the effort you put into preparing this topic... keep it up.
Your lectures really helped me understand NLP text preprocessing. Thank you so much!
You are a rare gem; that's the simplest way I can put it in clear, short words ❤️❤️
Exactly, rarest !!
Very good explanation. You explain every single detail. It's very helpful for beginners, and the assignments are also very interesting.
I wonder why I didn't find your channel before, but I'm lucky to have it now.
Thanks a lot Nitish... I don't have enough words to express my gratitude.
This series is amazing!
Hi, could you please make the next video on the same IMDB dataset and show us how to analyze the linguistic features of the training dataset? I have recently gone through your previous NLP (Movie Review Sentiment Analysis) videos. However, I was quite interested in finding out how we can analyze the linguistic features and what different algorithms we can apply apart from Naive Bayes on the same IMDB dataset. PS: your videos are amazing!!! The way you teach the concepts has helped me understand the basics of NLP. Thank you so much!!
While using the lowercase conversion function shown at 7:23, I am getting the warning below, even though the conversion is successful. Can you let me know if there is another way to do the conversion, or can we ignore the warning?
"A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead"
Ignore
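For anyone hitting this warning: it usually appears when the DataFrame was created as a slice of another frame. A minimal sketch of the warning-free pattern (the column name review here is an assumption, not necessarily the one in the video):

```python
import pandas as pd

df = pd.DataFrame({'review': ['Great MOVIE', 'Not Good']})

# Assigning through .loc avoids the chained-assignment ambiguity the warning is about
df.loc[:, 'review'] = df['review'].str.lower()
print(df['review'].tolist())  # ['great movie', 'not good']
```

If your df came from slicing another frame (e.g. df = big_df[mask]), making an explicit copy with .copy() also silences the warning, since pandas then knows the write cannot affect the original.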
Please tag the notebook in the description, and also please complete the NLP playlist.
You are really a great teacher. Thank you so much for coming up with such informative videos. Thanks a lot!
Sir, you are a lifesaver. Thank youuuu!
The series is amazing, sir 👏 Kindly provide the regex lecture link in the description.
56:58 Can we use the spelling corrector together with stemming? We could get better efficiency with correct spellings and no mistakes.
You are God for me in learning data science
Thank you, you are just awesome. I'd been waiting for this video. You explain things better than other YouTubers. Keep it up...!!!
Your lectures are really helpful... all concepts are very clear.
Easy way to remove punctuation:

import re
import string

def remove_punctuation(text):
    # Remove all punctuation characters using a regex character class
    punctuations = string.punctuation
    text_no_punct = re.sub('[' + re.escape(punctuations) + ']', '', text)
    return text_no_punct
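An alternative sketch using str.translate, which is often faster than a regex for pure character deletion:

```python
import string

def remove_punctuation_fast(text):
    # str.translate with a precomputed deletion table runs at C speed
    table = str.maketrans('', '', string.punctuation)
    return text.translate(table)

print(remove_punctuation_fast("Hello, world!!!"))  # Hello world
```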
You just saved my life
Sir, could you please share the notebook? It is not available at the given link.
Awesome lecture 🤗🤗🤗❤️❤️❤️❤️
You are the best sir😊.
Thanks for the great content!! One small suggestion: could you also give us some time to write the code you are explaining? Otherwise it stays theoretical.
very detailed explanation. Kudos to you.
Thank you. Can you also start a series on web development?
You're just an excellent teacher
Hey, are you working in NLP or another area of Python? I need your help. Can you help me?
Gold content. Thanks for the video!
so far so good.....awesome x 100
Nice assignment, sir. Thank you!
You didn't link the regular expression video in the description. Can you update it?
Your videos are full of knowledge. Thanks a lot for this 🙏 you deserve more subscribers... it can attract more viewers if you divide your videos into smaller parts. People generally don't want to engage with long lectures.
You are Amazing Sir Love from Pakistan.
Hi Sir. Regarding the assignment, how can we merge the genre ID and genre name with the movies dataframe?
I got stuck there.
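Not the author's official solution, but one sketch of this kind of merge: build a dict from the genre endpoint's id/name pairs and map it over the movies frame. The genre rows and IDs below are illustrative, not the full TMDB list:

```python
import pandas as pd

# Illustrative genre lookup; in practice this comes from TMDB's genre list endpoint
genres = pd.DataFrame({'id': [28, 35], 'name': ['Action', 'Comedy']})
movies = pd.DataFrame({'title': ['Movie A', 'Movie B'],
                       'genre_ids': [[28], [28, 35]]})

id_to_name = dict(zip(genres['id'], genres['name']))

# Map each list of genre ids to the corresponding list of names
movies['genres'] = movies['genre_ids'].apply(
    lambda ids: [id_to_name.get(i, 'Unknown') for i in ids])
print(movies['genres'].tolist())  # [['Action'], ['Action', 'Comedy']]
```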
You explain things really well :)
Congrats, sir, on the third video 🥳🥳
One suggestion: sir, please make a Udemy course... a data science bootcamp...
Literally, All In One !
Its helpful for me ❤️
56:30 It is 'probable' with an 'e'...
I understood it, but it was confusing me.
And thank you, sir, for such a good video ❤
The way of teaching is cool, loved it.
One doubt at 12:00: remove_html_tags() only removes the tags themselves, but in real life, when we scrape data from a website, it contains tags like style, script, etc. whose contents aren't wanted in the text mining or NLP process.
Just wanted to know whether there is a better approach or method that could solve this.
Thanks in advance to everyone who tries to solve this.
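One stdlib-only approach: delete the script/style blocks (contents included) first, then strip the remaining tags. For messy real-world HTML a proper parser such as BeautifulSoup (extracting the script/style elements before taking get_text()) is more robust, but as a sketch:

```python
import re

def strip_html(text):
    # Drop <script> and <style> blocks entirely, contents included
    text = re.sub(r'<(script|style)\b[^>]*>.*?</\1>', '', text,
                  flags=re.DOTALL | re.IGNORECASE)
    # Then remove any remaining tags, keeping their inner text
    return re.sub(r'<[^>]+>', '', text)

html = '<p>Hello</p><script>var x=1;</script><style>p{color:red}</style><b>World</b>'
print(strip_html(html))  # HelloWorld
```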
That was an awesome tutorial. Can you please link to your regular expression video?
Sir, the notebook link is broken... please upload the notebook discussed in the video.
Sir, when will you start the series on deep learning?
Hello Sir, can you reshare the code? The link you shared has no code... Thanks!
One doubt: in this dataset we can handle chat words by building a dictionary, but if new data comes in, there should be a way to identify the chat words first and then put them in the dictionary. Or can we only identify these words through tokenization?
Thank you sir
Amazing video, but where can I download the notebooks from?
I would also request you to share the notebook URLs in the video description.
You are the best
Can we get a PDF of the code that you have written in this video?
where is the template notebook?
Sir, I have been following you for a long time and I'm glad that I found your channel; I'm learning so much from you, and for that I am grateful, thank you from the bottom of my heart.
Till now I have been working with Google Colab, but as I move towards deep learning, I think it's time for me to buy a high-end laptop.
But I am at a loss which one I should pick; if I go for an RTX 3080, the price is way too much for me... I've had this confusion for the past few weeks. Can you please suggest a laptop for ML/AI/DL learning projects? My budget is $1400-1500.
I will be grateful.
Or you could make a video on this topic.
Hi Sir, can you please re-add the data links here, as I'm unable to load them.
@campusX: can you please suggest how we can use text for regression (e.g. use comments to predict the number of subscribers)?
I checked both methods for removing punctuation, but they are similar in speed, and sometimes the second one is slower. Why is that?
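You can measure both yourself with timeit; the results vary with string length and Python version, so neither is guaranteed faster on tiny inputs:

```python
import re
import string
import timeit

text = "Hello, world!!! How's it going? " * 200

pattern = re.compile('[' + re.escape(string.punctuation) + ']')
table = str.maketrans('', '', string.punctuation)

# Time each approach over many repetitions to smooth out noise
t_regex = timeit.timeit(lambda: pattern.sub('', text), number=2000)
t_translate = timeit.timeit(lambda: text.translate(table), number=2000)
print(f"regex: {t_regex:.3f}s, translate: {t_translate:.3f}s")
```

Both must produce identical output; only the timing differs.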
Where is the notebook for this lecture? Could you please upload it?
The link above only shows a CSV file.
Great
I'm getting stuck on the assignment, as I have no idea how to get the data into a dataframe using the API.
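A sketch of the pattern, with the network call replaced by a hard-coded response in the TMDB-style shape (in practice the dict would come from requests.get(url).json() with your own API key; the fields shown are illustrative):

```python
import pandas as pd

# Simulated API response; a real one comes from requests.get(url).json()
response_json = {
    'page': 1,
    'results': [
        {'id': 1, 'title': 'Movie A', 'overview': 'Plot A', 'vote_average': 8.7},
        {'id': 2, 'title': 'Movie B', 'overview': 'Plot B', 'vote_average': 8.5},
    ],
}

# Each element of 'results' becomes one dataframe row; then keep the columns you need
df = pd.DataFrame(response_json['results'])[['id', 'title', 'overview']]
print(df.shape)  # (2, 3)
```

To build a bigger dataset, loop over pages, collect each page's frame in a list, and pd.concat them at the end.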
great content
Sir, you didn't explain why we use a stemmer... you didn't explain why we bring words down to their root form. Are we reducing the dimensionality of our data; is that correct?
We use a stemmer so that at tokenization time we don't count words with the same meaning more than once... if we don't do stemming, our algorithm will treat 'walk' and 'walking' as different words even though they mean the same, which is not good for our model... that's why we use stemming. Moreover, it is not dimensionality reduction; we are not reducing the number of columns here... we are cleaning our data, following the principle of "garbage in, garbage out".
@@IqraKhan-xh2cp There's no point keeping multiple words with the same meaning; the algorithm gains nothing from them, they just increase our dimensions, which is one more reason. Also, a stemmer isn't concerned with producing a meaningful word; it only converts to the root form, which may itself be meaningless. The idea is to treat words with the same root as one, so the more important dimensions stand out. 👍🏻
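A toy illustration of what stemming buys you: inflected forms collapse to a single root, so 'walk' and 'walking' are counted as the same token. The suffix rules here are deliberately naive; real stemmers like Porter's (nltk's PorterStemmer) use far more careful rules:

```python
def toy_stem(word):
    # Naive suffix stripping, just to show the idea
    for suffix in ('ing', 'ed', 's'):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

words = ['walk', 'walking', 'walked', 'walks']
print({toy_stem(w) for w in words})  # {'walk'}
```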
Please tag the notebook used in this video in the description.
Does anyone know how to apply a word/sentence tokenizer to columns? If you know, please reply.
Where is the video on Regular Expressions?
Sir, at timestamp 3:30 you said you would provide the notebook. Can you please provide it? Thank you.
Do you have videos on NLP with deep learning?
Yes. Check my playlists
I need your regular expression YouTube video link, please.
Please make a video on how to explain a data science project in an interview as a fresher.
Actually, tokenization doesn't work on the whole dataset directly. Can you write code to tokenize only the reviews in your dataset?
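One way to tokenize just the review column is to apply a tokenizer row-wise; a simple regex tokenizer is shown here, and nltk's word_tokenize can be dropped into apply() the same way. The column name review is an assumption:

```python
import re
import pandas as pd

df = pd.DataFrame({'review': ['Great movie!', "Didn't like it."]})

# Apply a tokenizer to each review; other columns are untouched
df['tokens'] = df['review'].apply(lambda s: re.findall(r"[A-Za-z']+", s))
print(df['tokens'].tolist())
```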
Can anyone send the link to the notebook? The given link does not work.
Sir thank you so much😊
The code used is not available at the link. If anyone has it, please share.
Sir, can you please share the link to the "chat words" list used in the chat-word treatment?
can you please share the colab file
In the assignment, does anyone have a solution for how to change the genre IDs to their names?
Sir, I am weak in programming, and after doing a lot of courses and watching lots of YouTube videos I am still not able to understand it properly. Even for the assignment you suggested, I don't know how to write the loop and feed in the whole dataset by rows and columns, or how to build a proper dataset for it. If possible, could you help by walking through this assignment?
colab.research.google.com/drive/1e3WwxKYZvl5eKusUxE_NTi21K7GR3YiC?usp=sharing
The code link is not working?
Nice video👍
Couldn't find the Notebook link!
@campusX I can't find the code. Can you please give the link?
Can anyone explain to me how to create the dataframe for the assignment using this API? PLEASE! 🙏
The notebook/code is not available!!
How can I convert the chat text data to a Python dictionary?
Did you find a solution for this?
text = '''AFAIK=As Far As I Know
AFK=Away From Keyboard
ASAP=As Soon As Possible
ATK=At The Keyboard
ATM=At The Moment
A3=Anytime, Anywhere, Anyplace
BAK=Back At Keyboard'''

dictionary = {}
# Split the text by newline and iterate over each line
for line in text.split('\n'):
    # Split the line at the equals sign to get key and value
    key, value = line.split('=')
    # Add the key-value pair to the dictionary
    dictionary[key] = value

print(dictionary)
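Once the dictionary exists, applying it is a token-by-token lookup, e.g.:

```python
def expand_chat_words(text, mapping):
    # Replace each chat-word token with its expansion, leaving other words alone
    return ' '.join(mapping.get(w, w) for w in text.split())

chat_words = {'AFAIK': 'As Far As I Know', 'ASAP': 'As Soon As Possible'}
print(expand_chat_words('reply ASAP please', chat_words))  # reply As Soon As Possible please
```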
where is the notebook ?
How do I use TextBlob on a large dataset?
Can you please provide the solution for this assignment?
I got an OSError while using the spaCy library.
Please, someone help me with converting that chat-words file into a dictionary.
Hello sir, your code is unavailable. Please make it available.
No saved version of the notebook is showing.
Sir, I can't find the code page on Kaggle. Can anyone help?
Sir, the TMDB website is blocked in India.
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory.
help please
!pip install spacy && python -m spacy download en_core_web_sm
import spacy
nlp = spacy.load('en_core_web_sm')
Try this, it worked for me
@@samanabdy9281 It worked. Tysm ❣
awesome
Hello Sir,
please make a video series for the R programming language...
Sir, in the API how can I change page number 1 to the other pages? I am getting confused, please tell me.
Yes
At the end of the API URL, you can see a query parameter named "page". Simply change the value of that parameter: "page=1", "page=2", "page=3", and so on.
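A sketch of building one URL per page with the standard library; the endpoint path and parameters here are illustrative placeholders, not the exact assignment URL:

```python
from urllib.parse import urlencode

# Hypothetical TMDB-style endpoint; api_key is a placeholder
base = 'https://api.themoviedb.org/3/movie/top_rated'
params = {'api_key': 'YOUR_KEY', 'language': 'en-US'}

# Build one URL per page by changing only the 'page' parameter
urls = [f"{base}?{urlencode({**params, 'page': p})}" for p in range(1, 4)]
for u in urls:
    print(u)
```

Each URL can then be fetched in a loop and its results appended to one dataset.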
How do I make this dataset?
I am not able to find the notebook for the code.
Could anyone please help?
Did you find the notebook, or should I help you?
Bhaiya, how did you convert the chat text data to a Python dictionary?
text = '''AFAIK=As Far As I Know
AFK=Away From Keyboard
ASAP=As Soon As Possible
ATK=At The Keyboard
ATM=At The Moment
A3=Anytime, Anywhere, Anyplace
BAK=Back At Keyboard'''

dictionary = {}
# Split the text by newline and iterate over each line
for line in text.split('\n'):
    # Split the line at the equals sign to get key and value
    key, value = line.split('=')
    # Add the key-value pair to the dictionary
    dictionary[key] = value

print(dictionary)