Text Preprocessing | NLP Course Lecture 3

  • Published on Jan 4, 2025

Comments •

  • @harinair3002
    @harinair3002 1 year ago +60

    To anyone following this playlist, my recommendation is: please do the assignment. I was shocked at how little we learn by just watching. I did the assignment and, what can I say, I was stuck a lot of times, but in the end I completed it, and now I regularly do text preprocessing on datasets I build myself from Rapid APIs. It gives you so much flexibility to work on a dataset you created.

    • @surajnikam3327
      @surajnikam3327 1 year ago

      Ma'am, can you explain or point me to some notes or videos on using APIs to create my own DataFrame?

    • @komalkumbhare4789
      @komalkumbhare4789 11 months ago +1

      Hey Hari! The assignment links given above are not directing to the TMDB website, and if I search for TMDB directly on Google, it doesn't work either. Can you tell me how you did it?

    • @sampath4150
      @sampath4150 5 months ago +1

      Hello, have you saved that code? It's been removed and I need it urgently.

    • @venualli3917
      @venualli3917 5 months ago

      Could you please point me to resources for practice?

    • @divyatagupta597
      @divyatagupta597 4 months ago

      @surajnikam3327 It is already covered in the ML playlist created by sir himself.

  • @gautampatadiya6096
    @gautampatadiya6096 11 months ago +3

    Thanks!

  • @GamerBoy-ii4jc
    @GamerBoy-ii4jc 3 years ago +11

    Again, sir, you are a great person on YouTube. Your explanation in every domain and for every topic is great. I followed your ML playlist A-Z and now I have started watching NLP. I hope you will complete your ML series soon, and this one too, and keep making great series for us on new and emerging topics. Thanks a lot, sir!

  • @usmanhaider5255
    @usmanhaider5255 5 months ago +4

    The session was SO good.
    The assignment was SO, SO amazing to do.
    Thanks for your hard work, sir.

  • @prashantlakde
    @prashantlakde 2 years ago +3

    Your way of explaining shows your conceptual clarity and the effort you put into preparing this topic... keep it up.

  • @shikhasoni9346
    @shikhasoni9346 3 years ago +7

    Your lectures really help me understand NLP text preprocessing. Thank you so much!

  • @sukantb1980
    @sukantb1980 3 years ago +22

    You are a rare gem, to put it in clear, short words ❤️❤️

    • @bhanu0925
      @bhanu0925 3 years ago +3

      Exactly, rarest !!

  • @sarithajaligama9548
    @sarithajaligama9548 9 months ago

    Very good explanation. You explain every single detail, which is very helpful for beginners, and the assignments are also very interesting.
    I wonder why I didn't find your channel earlier, but I'm lucky to have it now.

  • @siddharth4251
    @siddharth4251 1 year ago +1

    Thanks a lot, Nitish... I don't have enough words to express my gratitude.

  • @Riya-zb1iz
    @Riya-zb1iz 1 year ago +2

    This series is amazing!

  • @siddharthbhardwaj7664
    @siddharthbhardwaj7664 3 years ago +4

    Hi, could you please make the next video on the same IMDB dataset and show us how to analyze the linguistic features of the training data? I have recently gone through your previous NLP (Movie Review Sentiment Analysis) videos. However, I was quite interested in finding out how we can analyze the linguistic features and what other algorithms we can apply, apart from Naive Bayes, on the same IMDB dataset. PS: your videos are amazing!!! The way you teach the concepts has helped me understand the basics of NLP. Thank you so much!!

  • @anshuman_madhav
    @anshuman_madhav 3 years ago +3

    While using the lowercase conversion function shown at 7:23, I am getting the warning below, even though the conversion is successful. Can you let me know if there is another way to do the conversion, or can we ignore the warning?
    A value is trying to be set on a copy of a slice from a DataFrame.
    Try using .loc[row_indexer,col_indexer] = value instead
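
    A minimal sketch of one way to avoid this warning (assuming the lecture's DataFrame is loaded from 'IMDB Dataset.csv' with a 'review' column; both names are assumptions):

    import pandas as pd

    df = pd.read_csv('IMDB Dataset.csv')

    # The warning appears when you assign into a chained slice (a view of df).
    # Assigning on the full DataFrame, or on an explicit .copy(), avoids it.
    subset = df.head(100).copy()                      # explicit copy, not a view
    subset['review'] = subset['review'].str.lower()   # vectorised lowercase conversion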

  • @rafibasha4145
    @rafibasha4145 2 years ago +3

    Please link the notebook in the description; also, please complete the NLP playlist.

  • @sachi-4750
    @sachi-4750 2 years ago +1

    You are really a great teacher. Thank you so much for coming up with such informative videos.

  • @mohaiminrahat4974
    @mohaiminrahat4974 3 years ago +2

    Sir, you are a lifesaver. Thank youuuu!

  • @manishachaurasia3405
    @manishachaurasia3405 1 year ago +1

    The series is amazing, sir 👏 Kindly add the regex lecture link to the description.

  • @ayushsachdeva4635
    @ayushsachdeva4635 4 months ago +2

    56:58 Can we use the spelling corrector together with stemming? We could get better results with correct spellings and no mistakes.
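
    A rough sketch of the idea (an assumed approach, not something from the lecture): run TextBlob's spelling correction first, then stem the corrected tokens with NLTK's PorterStemmer. Note that .correct() is slow on large corpora.

    from textblob import TextBlob
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()

    def correct_then_stem(text):
        corrected = str(TextBlob(text).correct())            # fix obvious misspellings
        return ' '.join(stemmer.stem(w) for w in corrected.split())

    print(correct_then_stem('he is walkking and runing'))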

  • @pankajbeldar9799
    @pankajbeldar9799 2 years ago +1

    You are God for me in learning data science

  • @pralaymondal3324
    @pralaymondal3324 3 years ago +4

    Thank you, you are just awesome. I had been waiting for this video. You explain things better than other YouTubers. Keep it up...!!!

  • @miteshkumar7739
    @miteshkumar7739 3 years ago

    Your lectures are really helpful... all the concepts are very clear.

  • @satyamtiwari7680
    @satyamtiwari7680 1 year ago +1

    An easy way to remove punctuation:
    import string
    import re

    def remove_punctuation(text):
        # Define the set of punctuation characters
        punctuations = string.punctuation
        # Remove punctuation using a regular expression
        text_no_punct = re.sub('[' + re.escape(punctuations) + ']', '', text)
        return text_no_punct

  • @ShafiulShafi-zd4im
    @ShafiulShafi-zd4im 4 days ago

    You just saved my life

  • @saurabhdeshmane8714
    @saurabhdeshmane8714 2 years ago +5

    Sir, could you please share the notebook? It is not available at the given link.

  • @shipradhiman08
    @shipradhiman08 3 years ago +1

    Awesome lecture 🤗🤗🤗❤️❤️❤️❤️

  • @stunninghealer7442
    @stunninghealer7442 10 months ago

    You are the best sir😊.

  • @BTStechnicalchannel
    @BTStechnicalchannel 2 years ago +2

    Thanks for the great content!! One small suggestion: could you also give us some time to write the code you are explaining? Otherwise it stays theoretical.

  • @NishantKumar-dw5er
    @NishantKumar-dw5er 1 year ago

    very detailed explanation. Kudos to you.

  • @mridang2064
    @mridang2064 2 years ago +3

    Thank you (dhanyavaad). Can you also start a series on web development?
    You're just an excellent teacher.

    • @Codingon_lup
      @Codingon_lup 1 year ago

      hey

    • @Codingon_lup
      @Codingon_lup 1 year ago

      Are you working in NLP, or on something else in Python?
      I need your help.
      Can you help me?

  • @raj-nq8ke
    @raj-nq8ke 3 years ago

    Gold content. Thanks for the video.

  • @raj4624
    @raj4624 3 years ago

    so far so good.....awesome x 100

  • @rajeevranjan5007
    @rajeevranjan5007 3 years ago +1

    Nice assignment, sir. Thank you.

  • @NaryVip
    @NaryVip 3 years ago +2

    You didn't link the regular expressions video in the description; can you update it?

  • @abhishek_iith
    @abhishek_iith 1 year ago +3

    Your videos are full of knowledge. Thanks a lot for this 🙏 You deserve more subscribers... you could attract more viewers if you divided your videos into smaller parts. People generally don't want to engage with long lectures.

  • @abdulqadar9580
    @abdulqadar9580 1 year ago

    You are amazing, sir. Love from Pakistan.

  • @Akashphs7217
    @Akashphs7217 7 months ago +1

    Hi sir. Regarding the assignment, how can we merge the genre ID and genre name with the movies DataFrame?
    I got stuck there.

  • @anjalihansda437
    @anjalihansda437 5 months ago

    You explain things very well :)

  • @jandaabdulla9335
    @jandaabdulla9335 3 years ago

    Congrats, sir, on the third video 🥳🥳

  • @anupprasad695
    @anupprasad695 2 years ago +1

    One suggestion: sir, please create a Udemy course... a data science bootcamp...

  • @samt5682
    @samt5682 3 years ago

    Literally, All In One !

  • @MRBAM
    @MRBAM 2 years ago +1

    It's helpful for me ❤️

  • @faizahmed007
    @faizahmed007 1 year ago

    56:30 The version with 'e' is more probable...
    I understand it now, but it was confusing me.
    And thank you, sir, for such a good video ❤

  • @dilipkumarbk7657
    @dilipkumarbk7657 1 year ago

    The way of teaching is cool, loved it.
    One doubt: at 12:00, remove_html_tags() only removes the tags themselves, but in real scenarios, when we scrape data from a website, it contains tags like style, script, etc. whose contents aren't needed for text mining or NLP.
    Just wanted to know whether there is a better approach or method that could handle this (a sketch follows below).
    Thanks in advance to everyone who tries to answer.
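
    One possible approach (an assumption, not something shown in the video): use BeautifulSoup and drop the script/style elements before extracting the visible text.

    from bs4 import BeautifulSoup

    def strip_html(raw_html):
        soup = BeautifulSoup(raw_html, 'html.parser')
        for tag in soup(['script', 'style']):     # remove non-visible content entirely
            tag.decompose()
        return soup.get_text(separator=' ', strip=True)

    print(strip_html('<html><style>p{color:red}</style><p>Great movie!</p></html>'))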

  • @unknown-ho4wk
    @unknown-ho4wk 1 year ago

    That was an awesome tutorial. Can you please link your regular expressions video?

  • @manucmgowda
    @manucmgowda 2 years ago +1

    Sir, the notebook link is broken... please upload the notebook discussed in the video.

  • @piyushpathak7311
    @piyushpathak7311 3 years ago +1

    Sir, when will you start the series on deep learning?

  • @ajitkulkarni1702
    @ajitkulkarni1702 1 year ago +1

    Hello sir, can you reshare the code? The link you shared has no code... Thanks!

  • @Auruenjuhshsh1999
    @Auruenjuhshsh1999 5 months ago

    I had one doubt: in a known dataset we can handle chat words by building a dictionary, but if new data arrives there should be a way to identify the chat words first and then add them to the dictionary. Or can we identify these words just by tokenizing? (A rough sketch follows below.)
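
    A rough sketch of one way to flag candidate chat words in new data (assumptions: chat_words stands in for the dictionary built in the lecture, and NLTK's words corpus is used as a plain-English vocabulary):

    import nltk
    nltk.download('words')
    from nltk.corpus import words

    english_vocab = set(w.lower() for w in words.words())
    chat_words = {'AFAIK': 'As Far As I Know', 'FYI': 'For Your Information'}

    def expand_and_flag(text):
        expanded, candidates = [], []
        for token in text.split():                    # simple whitespace tokenization
            if token.upper() in chat_words:
                expanded.append(chat_words[token.upper()])   # replace known chat words
            else:
                expanded.append(token)
                if token.lower() not in english_vocab:
                    candidates.append(token)          # possible new chat word to review
        return ' '.join(expanded), candidates

    print(expand_and_flag('FYI the movie was gr8 AFAIK'))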

  • @charanpoojary4804
    @charanpoojary4804 2 months ago

    Thank you sir

  • @deepankarmullick3121
    @deepankarmullick3121 2 years ago

    Amazing video, but where can I download the notebooks from?
    I would also request that you share the notebook URLs in the video description.

  • @pankajnaik1574
    @pankajnaik1574 1 year ago

    You are the best

  • @rishabhvarshney2234
    @rishabhvarshney2234 3 years ago +1

    Can we get a PDF of the code that you have written in this video?

  • @kumarabhishek1064
    @kumarabhishek1064 2 years ago +1

    Where is the template notebook?

  • @cipher4811
    @cipher4811 3 years ago +1

    Sir, I have been following you for a long time and I'm glad I found your channel. I'm learning so much from you; for that I am grateful, and I thank you from the bottom of my heart.
    Till now I have been working with Google Colab, but as I am moving towards deep learning, I think it's time for me to buy a high-end laptop.
    But I am at a loss as to which one I should pick; if I go for an RTX 3080, the price is way too much for me. I've had this confusion for the past few weeks. Can you please suggest a laptop for ML/AI/DL learning projects? My budget is $1400-1500.
    I will be grateful.
    Or you could make a video on this topic.

  • @riiyyyaaaa
    @riiyyyaaaa 10 months ago

    Hi sir, can you please re-add the data links here? I'm unable to load them.

  • @anitabhandari3886
    @anitabhandari3886 9 months ago

    @campusX: can you please suggest how we can use text for regression (e.g., using comments to predict the number of subscribers)?

  • @jasonbourn29
    @jasonbourn29 1 year ago

    I checked both methods of removing punctuation, but they are similar in speed, and sometimes the second one is slower. Why is that? (A timing sketch follows below.)
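
    A quick way to time the approaches yourself (a sketch; absolute numbers depend on the machine and text length). str.translate is usually the fastest of the common options.

    import string, timeit

    text = 'Hello!!! This, right here, is a sample review... #NLP ' * 1000

    def loop_removal(t):
        return ''.join(ch for ch in t if ch not in string.punctuation)

    table = str.maketrans('', '', string.punctuation)
    def translate_removal(t):
        return t.translate(table)

    print(timeit.timeit(lambda: loop_removal(text), number=100))
    print(timeit.timeit(lambda: translate_removal(text), number=100))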

  • @bhanuprakash5060
    @bhanuprakash5060 1 year ago

    Where is the notebook for this lecture? Could you please just upload it?

  • @ahmedullahkhan9166
    @ahmedullahkhan9166 1 year ago

    Where is the notebook link?
    The link above only shows a CSV file.

  • @swet_gokugod9382
    @swet_gokugod9382 1 year ago

    Great

  • @siddharthkarale3100
    @siddharthkarale3100 9 months ago

    I'm having trouble with the assignment, as I have no idea how to get the data into a DataFrame using the API.

  • @kislaykrishna5599
    @kislaykrishna5599 3 years ago

    great content

  • @kalpesh_saindane108
    @kalpesh_saindane108 4 months ago

    Sir, you didn't explain why we use a stemmer... you didn't explain why words need to be reduced to their root form... Is it because we are reducing the dimensionality of our data? Is that correct?

    • @IqraKhan-xh2cp
      @IqraKhan-xh2cp 3 months ago +1

      We use a stemmer so that, at tokenization time, we don't count words with the same meaning more than once. If we don't apply stemming, our algorithm will treat walk and walking as different words, even though they are the same, which is not good for our model. That's why we use stemming. Moreover, it is not dimensionality reduction; we are not reducing the number of columns here. We are cleaning our data, following the principle of "GARBAGE IN, GARBAGE OUT".

    • @kalpesh_saindane108
      @kalpesh_saindane108 3 months ago

      @IqraKhan-xh2cp There is no point in keeping multiple words for the same concept; it won't change anything for the algorithm, and it just increases our dimensions, which is also a reason. Also, the stemmer isn't concerned with producing meaningful words; it only converts words to their root form, which may even be meaningless. Words with the same root are treated as one, so that the more important dimensions remain. 👍🏻
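
    A small illustration of the point being discussed (standard NLTK usage, not the lecture notebook itself): stemming maps inflected forms onto a single root token.

    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()
    for word in ['walk', 'walks', 'walking', 'walked']:
        print(word, '->', stemmer.stem(word))   # all four map to 'walk'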

  • @sachin2725
    @sachin2725 2 years ago

    Please link the notebook used in this video in the description.

  • @tanmayshinde7853
    @tanmayshinde7853 2 years ago +1

    Does anyone know how to apply a word/sentence tokenizer to DataFrame columns? If you know, please reply. (A sketch follows below.)
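
    One way to do it (a sketch; the column name 'review' is an assumption): apply NLTK's tokenizers row-wise with pandas .apply.

    import nltk
    nltk.download('punkt')          # newer NLTK versions may also need 'punkt_tab'
    from nltk.tokenize import word_tokenize, sent_tokenize
    import pandas as pd

    df = pd.DataFrame({'review': ['Great movie. Loved it!', 'Too long, quite boring.']})
    df['words'] = df['review'].apply(word_tokenize)
    df['sentences'] = df['review'].apply(sent_tokenize)
    print(df[['words', 'sentences']])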

  • @imamasafeer4536
    @imamasafeer4536 10 months ago

    Where is the video on Regular Expressions?

  • @bhushanbowlekar4539
    @bhushanbowlekar4539 1 year ago

    Sir, at timestamp 3:30 you said you would provide the notebook. Can you please provide it? Thank you.

  • @bibasrai752
    @bibasrai752 1 year ago +1

    Do you have videos on NLP with deep learning?

  • @romanahmed4754
    @romanahmed4754 4 months ago

    I need your regular expressions YouTube video; please share the link.

  • @rahulrajbhar7012
    @rahulrajbhar7012 3 years ago

    Please make a video on how a fresher should explain a data science project in an interview.

  • @potjason2132
    @potjason2132 10 months ago

    Actually, tokenization doesn't work on the dataset directly. Can you write code to tokenize only the reviews in your dataset?

  • @furry2fun
    @furry2fun 1 year ago

    Can anyone send the link to the notebook? The given link does not work.

  • @shaiksalavuddin5976
    @shaiksalavuddin5976 3 years ago

    Sir thank you so much😊

  • @abhishekvashistha2398
    @abhishekvashistha2398 6 months ago

    The code used is not available at the link. If anyone has it, please share.

  • @pradumankumar7607
    @pradumankumar7607 3 years ago

    Sir, can you please share the link to the chat-words list used in the chat-word treatment?

  • @bhushanbowlekar4539
    @bhushanbowlekar4539 1 year ago +1

    Can you please share the Colab file?

  • @ritakathrotiya
    @ritakathrotiya 9 months ago

    In the assignment, does anyone have a solution for converting the genre IDs to their names? (A sketch follows below.)
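
    A sketch of one approach (the TMDB genre-list endpoint is public; 'YOUR_API_KEY' and the example movies DataFrame are assumptions): fetch the id-to-name mapping once, then map each movie's genre_ids list.

    import requests
    import pandas as pd

    API_KEY = 'YOUR_API_KEY'
    genres = requests.get(
        'https://api.themoviedb.org/3/genre/movie/list',
        params={'api_key': API_KEY}
    ).json()['genres']
    id_to_name = {g['id']: g['name'] for g in genres}     # e.g. 28 -> 'Action'

    movies = pd.DataFrame({'title': ['Inception'], 'genre_ids': [[28, 878]]})
    movies['genres'] = movies['genre_ids'].apply(
        lambda ids: [id_to_name.get(i, 'Unknown') for i in ids]
    )
    print(movies[['title', 'genres']])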

  • @shlokkumar6257
    @shlokkumar6257 11 months ago

    Sir, I am weak in programming, and after doing a lot of courses and watching lots of YouTube videos I am still not able to understand it properly. Even for the assignment you've suggested, I don't know how to write the loop, feed in the whole dataset row by row and column by column, or build a proper dataset from it. If possible, could you help by walking through this assignment?

    • @Shobhitchoudhary321
      @Shobhitchoudhary321 6 months ago

      colab.research.google.com/drive/1e3WwxKYZvl5eKusUxE_NTi21K7GR3YiC?usp=sharing

  • @ambarkumar7805
    @ambarkumar7805 4 months ago

    The code link is not working.

  • @snrmedia8965
    @snrmedia8965 3 years ago

    Nice video👍

  • @SLADE-VA
    @SLADE-VA 11 months ago

    Couldn't find the Notebook link!

  • @ShivaniSharma-tk4bl
    @ShivaniSharma-tk4bl 1 year ago

    @campusX I can't find the code. Can you please give the link?

  • @surajnikam3327
    @surajnikam3327 1 year ago

    Can anyone explain to me how to create the DataFrame for the assignment using this API? PLEASE! 🙏 (A sketch follows below.)
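
    A minimal sketch of one way to build the DataFrame (the top_rated endpoint, the column selection, and 'YOUR_API_KEY' are assumptions; adapt them to whatever the assignment specifies):

    import requests
    import pandas as pd

    API_KEY = 'YOUR_API_KEY'
    rows = []
    for page in range(1, 6):                          # first 5 pages = 100 movies
        resp = requests.get(
            'https://api.themoviedb.org/3/movie/top_rated',
            params={'api_key': API_KEY, 'page': page}
        ).json()
        rows.extend(resp['results'])                  # each result is one movie dict

    df = pd.DataFrame(rows)[['id', 'title', 'overview', 'genre_ids', 'vote_average']]
    print(df.shape)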

  • @adityasoni1639
    @adityasoni1639 2 years ago

    The notebook/code is not available!

  • @tanveer9348
    @tanveer9348 2 years ago +1

    How can I convert the chat-words text data to a Python dictionary?

    • @rupakjha539
      @rupakjha539 1 year ago

      Did you find a solution for this?

    • @rupakjha539
      @rupakjha539 1 year ago

      text = '''AFAIK=As Far As I Know
      AFK=Away From Keyboard
      ASAP=As Soon As Possible
      ATK=At The Keyboard
      ATM=At The Moment
      A3=Anytime, Anywhere, Anyplace
      BAK=Back At Keyboard'''

      dictionary = {}
      # Split the text into lines and iterate over each one
      for line in text.split('\n'):
          # Split the line at the equals sign to get the key and value
          key, value = line.strip().split('=')
          # Add the key-value pair to the dictionary
          dictionary[key] = value

      print(dictionary)

  • @anshumanmahabhoi5771
    @anshumanmahabhoi5771 1 year ago

    where is the notebook ?

  • @shrutianand285
    @shrutianand285 2 years ago

    How do you use TextBlob on a large dataset? (A sketch follows below.)
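
    A sketch (the 'review' column name is an assumption): TextBlob's .correct() can be applied with pandas .apply, but it is slow, so on a large dataset run it on a sample, in batches, or cache repeated texts.

    import pandas as pd
    from textblob import TextBlob

    df = pd.DataFrame({'review': ['ths movi was awsome', 'grate acting and plot']})
    df['corrected'] = df['review'].apply(lambda t: str(TextBlob(t).correct()))
    print(df)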

  • @mdaliarmaghan8292
    @mdaliarmaghan8292 7 months ago

    Can you please provide the solution for this assignment?

  • @piyushpawar75
    @piyushpawar75 1 year ago

    I got an OSError when using the spaCy library.

  • @maheshbhatt1505
    @maheshbhatt1505 1 year ago

    Please, can someone help me with converting that chat-words file into a dictionary?

  • @dipeshsilwal8098
    @dipeshsilwal8098 2 years ago

    Hello sir, your code is unavailable; please make it available.

  • @anooshkaa
    @anooshkaa 10 months ago

    No saved version of the notebook is showing up.

  • @waqaralam7519
    @waqaralam7519 2 years ago

    Sir, I can't find the code page on Kaggle. Can anyone help?

  • @AshishSharma-tf3fy
    @AshishSharma-tf3fy 7 months ago

    Sir, the TMDB website is blocked in India.

  • @JavedKhan-nr2oo
    @JavedKhan-nr2oo 2 years ago

    OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory.

    • @JavedKhan-nr2oo
      @JavedKhan-nr2oo 2 years ago

      help please

    • @samanabdy9281
      @samanabdy9281 2 years ago

      Try this, it worked for me:
      !pip install spacy && python -m spacy download en
      import spacy
      nlp = spacy.load('en_core_web_sm')

    • @amishakhetani7611
      @amishakhetani7611 2 years ago

      @samanabdy9281 It worked. Thank you so much ❣

  • @tusarmundhra5560
    @tusarmundhra5560 1 year ago

    awesome

  • @miteshkumar7739
    @miteshkumar7739 3 years ago

    Hello sir,
    please make a video series on the R programming language!

  • @gauravverma4433
    @gauravverma4433 3 years ago

    Sir, in the API, how can I change from page number 1 to other pages? I am getting confused, please tell me.

    • @campusx-official
      @campusx-official 3 years ago

      Yes

    • @sayanroy281
      @sayanroy281 3 years ago

      At the end of the API URL you can see a query parameter named "page". Simply change the value of that parameter, e.g. "page=1", "page=2", "page=3", and so on.

  • @PRIYANSHUPRIYANSHU-v9n
    @PRIYANSHUPRIYANSHU-v9n 1 year ago

    How do I make this dataset?

  • @ashishsom3849
    @ashishsom3849 6 months ago

    I am not able to find the notebook with the code.
    Could anyone please help?

    • @positivevibes2714
      @positivevibes2714 a month ago

      Did you find the notebook, or should I help you?

  • @rupakjha539
    @rupakjha539 1 year ago

    Brother, how did you convert the chat-words text data to a Python dictionary?

    • @rupakjha539
      @rupakjha539 1 year ago

      text = '''AFAIK=As Far As I Know
      AFK=Away From Keyboard
      ASAP=As Soon As Possible
      ATK=At The Keyboard
      ATM=At The Moment
      A3=Anytime, Anywhere, Anyplace
      BAK=Back At Keyboard'''

      dictionary = {}
      # Split the text into lines and iterate over each one
      for line in text.split('\n'):
          # Split the line at the equals sign to get the key and value
          key, value = line.strip().split('=')
          # Add the key-value pair to the dictionary
          dictionary[key] = value

      print(dictionary)
      print(dictionary)