How to Analyze the SERPs with NLP

  • Published 27 Jan 2025

Comments • 54

  • @shunmax · 3 years ago +1

    You're making great videos. I just discovered your channel and have watched many of them.

    • @smamarketing · 3 years ago

      Glad you like them! Really appreciate it.

  • @sharathnair7323 · 3 years ago +1

    Unique content. Love it. Thanks.

  • @mariakorniets9539 · 1 year ago

    Awesome! Thanks a lot for this gem❤

  • @amalaugustine2183 · 3 years ago +1

    Great work, mate. I've wanted to look at the SERPs with NLP since 2019, and now you've shown us how. You also showed that NLP, and Google, isn't perfect; they don't fully know how these models really work once deployed.

  • @peymanhalimi7945 · 2 years ago +1

    Hello,
    Thank you for your very good content.
    I ran into an issue: when I run the code I get this error:
    Traceback (most recent call last)
    How can I fix it?

  • @alexismaresca7765 · 3 years ago

    Love your content

  • @famifami5771 · 3 years ago +1

    Please make one more detailed video on this topic

    • @smamarketing · 3 years ago +1

      I'll be doing more on this topic!

  • @SpiritTracker7 · 2 years ago

    Do you know of a notebook that does this exact same thing but with Google NLP? I have an API key. Thanks.

  • @shunmax · 3 years ago

    After extracting the terms, it would be useful to check them against the Google Knowledge Graph (to see which of the topics are entities). I would also use a stop-word or common-word list to clean the final table (top 25 terms).

    • @smamarketing · 3 years ago

      Great feedback!
      I agree, the stop words make it a little messy. I do like keeping in the prepositions. BERT takes those into account when understanding intent.

  • @vinayshastri7210 · 1 year ago

    Hello, the tool isn't working for me. Getting this - ERROR: Failed building wheel for tokenizers

    • @smamarketing · 1 year ago

      Please try now. Code updated and it should all work.

  • @Wanderbug · 3 years ago

    Quick question: how can you scrape results for only the USA or a particular country?

  • @burhanuddinmetro170 · 3 years ago

    It asks for access when I open the link in the description.

    • @smamarketing · 3 years ago

      Sorry about that. Here you go colab.research.google.com/drive/1PI6JBn06i3xNUdEuHZ9xKPG3oSRi1AUm?usp=sharing

  • @SpiritTracker7 · 2 years ago

    Do we need to add our own NLP API to this? I've been using this but lately I've been getting a lot of errors even when restarting runtime etc.

    • @smamarketing · 2 years ago +1

      You should be good. SpaCy is open.
      If you stop runtime, you'll need to rerun the cells.

    • @SpiritTracker7 · 2 years ago

      @@smamarketing I might be having a caching issue then; I'll try another browser. But those are real Google NLP entities and not some other custom-trained NLP classifications, right?

  • @stenliseo · 3 years ago

    Great video! Which languages are supported for entities, besides EN?

  • @SpiritTracker7 · 2 years ago

    Unfortunately, spaCy isn't extracting all entities from the text; there are hundreds of entities it's missing. It's only pulling known entities that have a Wikipedia page or a Knowledge Graph entry. However, "other" entity types play an important role in content, and how they are used increases the salience score of the focus entity. Is there a way to get spaCy to show ALL entity types, including "other"?

    • @SpiritTracker7 · 2 years ago

      Never mind, I just realized that spaCy's NLP is not the same as Google's NLP. So the question is: is spaCy's NLP similar to Google's, and if not, why use it for SEO?

    • @smamarketing · 2 years ago

      All of these models are slightly different. To use the Google API, you'll need to set up an account in the Google Cloud Console and pay for usage. It may not be perfect, but it's a good start.
      All of these NLP tools have their pros and cons, and spaCy does a decent job. You can train it to make it better.
      I also like TextRazor. It seems to do a good job, but it's also paid.

    • @SpiritTracker7 · 2 years ago

      @@smamarketing We're using Google's NLP because that's how BERT reads content, i.e. Google NLP for SEO content analysis. I already have a Google NLP API set up and ready to use. I was asking how hard it would be to convert the spaCy notebook you shared over to Google's NLP, using its API. I recently found a Colab notebook that uses Google's NLP API to grab text/content from URLs (that you input) and outputs entities, text classification, salience, etc., but it's broken, and I don't know Python well enough to figure out the problem. Cells 1 through 6 fire off just fine, but when it gets to the 7th cell it throws an error and won't proceed. If I could get this notebook working, it would be a pretty powerful tool.

    • @SpiritTracker7 · 2 years ago

      @@smamarketing Also, "paid" is literally pennies. You get 5,000 free API calls a month anyway, and you'd have to be making a pretty large number of calls to go over that; I am not. Even if I did go over 5,000 API calls a month, it would be nearly impossible for me to incur a bill over $5 a month.

    • @smamarketing · 2 years ago

      @@SpiritTracker7 Send me the link to the notebook and I'll see what I can do!
      As far as Google using the console NLP library for BERT, it's a little more complex than that.
      BERT is one type of NLP model, and the out-of-the-box NLP most people use in the console isn't BERT. BERT is open source, and you can use a number of the models for NLP research. See this post for more info: ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html
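[Editor's note] On the question in this thread of getting spaCy to surface entities it misses: spaCy's entity labels come from a statistical model, not from Wikipedia or the Knowledge Graph, and you can patch in entities yourself. A minimal sketch (not the notebook's exact code) using an EntityRuler on a blank pipeline, so no model download is needed:

```python
import spacy

# Blank English pipeline plus a rule-based EntityRuler; in practice you would
# add the ruler alongside a trained NER model to catch entities it misses.
nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    {"label": "ORG", "pattern": "Google"},
    {"label": "PRODUCT", "pattern": "BERT"},  # a label a default model may miss
])

doc = nlp("Google released BERT to improve search.")
ents = [(ent.text, ent.label_) for ent in doc.ents]
print(ents)
```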

  • @Groupe_Espace3 · 3 years ago

    Hi!
    How can we change the language of the query?

    • @smamarketing · 3 years ago

      You can set the language in SpaCy following this: spacy.io/api/language
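[Editor's note] A small sketch of what the reply points at: spaCy ships tokenization and stop words for dozens of languages, and `spacy.blank()` gives you a pipeline for a language code without downloading a trained model. Whether the notebook exposes this directly is an assumption.

```python
import spacy

# Blank French pipeline; swap "fr" for "de", "es", etc.
nlp = spacy.blank("fr")
doc = nlp("Comment analyser les résultats de recherche ?")
tokens = [t.text for t in doc]
print(tokens)
```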

  • @WillemNout1 · 2 years ago

    Is this still working? I tried using the template, but it's giving me errors. Is that me messing up, or has this not been updated in a while?

    • @WillemNout1 · 2 years ago +1

      My bad, I jumped the gun. Got it; I ran one script too early.

    • @smamarketing · 2 years ago +1

      @@WillemNout1 Let me know if you have any questions!

  • @mmadog1981 · 3 years ago

    Thank you - very interesting. Is there a way I can get UK Google results, rather than US please?

    • @smamarketing · 3 years ago

      Change the Google URL to the UK one.

    • @davidgalvin5791 · 2 years ago

      @@smamarketing would you mind explaining how to do this? Please
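[Editor's note] A hypothetical helper (not from the video's notebook) showing one common way to localize scraped results: besides swapping the domain to google.co.uk, Google's `gl` (result country) and `hl` (interface language) query parameters steer the SERP toward a locale.

```python
from urllib.parse import urlencode

def serp_url(query, domain="www.google.co.uk", gl="uk", hl="en", num=10):
    # gl = country of the results, hl = UI language, num = result count
    params = urlencode({"q": query, "gl": gl, "hl": hl, "num": num})
    return f"https://{domain}/search?{params}"

url = serp_url("best seo tools")
print(url)
```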

  • @islamahmed1041 · 3 years ago

    ModuleNotFoundError: No module named 'sklearn.feature_extraction.stop_words'

    • @smamarketing · 3 years ago

      Did you run every cell in order?

    • @smamarketing · 3 years ago

      I just updated it. Working on a few more tweaks and it should be fixed later today!

    • @smamarketing · 3 years ago

      @Ryan Working on fixing it.
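[Editor's note] The likely cause of the `ModuleNotFoundError` in this thread: newer scikit-learn versions removed the `sklearn.feature_extraction.stop_words` module, and the stop-word list now lives under `feature_extraction.text`. A minimal check:

```python
# Old (removed): from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

# ENGLISH_STOP_WORDS is a frozenset of common English words
print(len(ENGLISH_STOP_WORDS))
```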

  • @Rinzler.18 · 2 years ago

    THERE IS A MAJOR ISSUE: THE TOP 10 RESULTS DO NOT INCLUDE FEATURED SNIPPETS AND THE 2ND POSITION. FIX IT SO PEOPLE GET ACCURATE DATA. THANKS

    • @smamarketing · 2 years ago

      This will only pull organic results.
      This is a free colab that others can use to explore.

  • @smartpersonalfinance366 · 1 year ago

    As of June 2023, the
    ```
    !pip install "transformers == 3.3.0"
    ```
    fails with a "can't build wheels" error.

    • @smamarketing · 1 year ago

      Please try now. Code updated and it should all work.
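[Editor's note] An assumed fix, not necessarily the author's exact change: hard-pinning `transformers == 3.3.0` forces pip to build old `tokenizers` wheels from source, which fails on newer Python versions. Letting pip resolve a current release avoids the build step:

```shell
# Unpinned (or loosely pinned) install pulls prebuilt wheels
pip install -q "transformers>=4.0"
```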

  • @jonth3978 · 2 years ago

    Looks like Trafilatura is coming up with problems
    "NameError Traceback (most recent call last)
    in ()
    ----> 1 pd.set_option('display.max_colwidth', None) # make sure output is not truncated (cols width)
    2 pd.set_option("display.max_rows", 100) # make sure output is not truncated (rows)"

  • @redwan.affiliate · 2 years ago

    Hi mate, spaCy didn't work for me. It stops at the ### Scraping results with Trafilatura ### step.

    • @smamarketing · 2 years ago +1

      Just made an update. SpaCy made a few changes. Try now and let me know!

    • @redwan.affiliate · 2 years ago

      @@smamarketing It works now. Thank you.

  • @nenadlatinovic3830 · 2 years ago

    Hi.
    I ran all the cells, and when I get to the visualization part, I get this output:
    ---------------------------------------------------------------------------
    AssertionError Traceback (most recent call last)
    in ()
    5 width_in_pixels=900,
    6 minimum_term_frequency=3,
    ----> 7 term_significance = st.LogOddsRatioUninformativeDirichletPrior())
    8 open("SERP-Visualization_top3.html", 'wb').write(html.encode('utf-8'))
    9 display(HTML(html))
    2 frames
    /usr/local/lib/python3.7/dist-packages/scattertext/ScatterChart.py in to_dict(self, category, category_name, not_category_name, scores, transform, title_case_names, not_categories, neutral_categories, extra_categories, background_scorer, use_offsets, **kwargs)
    274
    275 all_categories = self.term_doc_matrix.get_categories()
    --> 276 assert category in all_categories
    277
    278 if not_categories is None:

  • @mortenruus126 · 3 years ago

    Thanks for a great and interesting video. Can you change the stop word list to another language in this line of code?
    import scattertext as st
    from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS

    • @smamarketing · 3 years ago

      I believe you can. Here is a list of supported languages compiled by advertools: advertools.readthedocs.io/en/master/advertools.stopwords.html
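[Editor's note] One concrete way to swap the English-only scikit-learn list for another language, using spaCy's bundled stop-word sets (an alternative to the advertools list linked above):

```python
# spaCy ships stop-word lists under spacy.lang.<code>.stop_words
from spacy.lang.de.stop_words import STOP_WORDS as GERMAN_STOP_WORDS
from spacy.lang.fr.stop_words import STOP_WORDS as FRENCH_STOP_WORDS

print(len(GERMAN_STOP_WORDS), len(FRENCH_STOP_WORDS))
```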