I am brand new to DAWs and software like this - these tutorials are excellent and very helpful for getting someone like me up and running. Appreciated.
Wow, the features this BERT approach provides really improve the explanation of topic models.
I found your channel today, and man, I must say thank you - very good content.
Thanks so much!! =)
Great video. I experimented with Top2Vec after that video, so looking forward to experimenting with BERTopic too.
Do you happen to have a tutorial that explains how to turn articles into a dataset for topic modeling? Thanks!
Two questions :) 1) Could I write a sentence and, after training, get the probability for its topic based on the training? 2) Could I use, for example, customer requests for training? In that case you would be using unstructured data. I hope you understand my questions :D
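A minimal sketch of both ideas, assuming BERTopic is installed; the customer_requests list below is made-up stand-in data, and transform is what scores a new sentence after training:

```python
from bertopic import BERTopic

# Made-up, unstructured customer requests standing in for real training data.
customer_requests = [
    "My order arrived damaged and I want a refund",
    "How do I reset my account password?",
    "The delivery is three days late",
    # ... many more documents; BERTopic needs a reasonably large corpus
]

# calculate_probabilities=True returns the full topic distribution per document.
topic_model = BERTopic(calculate_probabilities=True)
topics, probs = topic_model.fit_transform(customer_requests)

# After training, score a brand-new sentence against the learned topics.
new_topics, new_probs = topic_model.transform(["Where is my package?"])
print(new_topics, new_probs)
```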
This is incredible. I subscribed to your channel today while looking for topic modeling content, and I found very good material. However, I also wanted to find something on BERTopic, and a few minutes after subscribing, I received a notification from YouTube about your channel, and I said, it can't be true!
Thanks a lot!
Haha! That is so perfect! Hope this video helps!!
@@python-programming Definitely helped!
Just simply put in the code and it works! Thanks!
Great intro, but the default produces too many topics to be useful for human understanding. Is there a way to reduce the number of topics naturally? Also, can we measure the perplexity and coherence of these topics as with LDA? Thanks
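On the first question, a minimal sketch of the two documented ways to shrink the topic count (docs stands in for your corpus and the value 20 is arbitrary); perplexity and coherence are not built into BERTopic, though the extracted topics can be fed to external tooling such as gensim's coherence measures:

```python
from bertopic import BERTopic

# Option 1: let BERTopic merge similar topics automatically while fitting.
topic_model = BERTopic(nr_topics="auto")
topics, probs = topic_model.fit_transform(docs)

# Option 2: reduce an already-fitted model to a fixed number of topics.
# (The exact signature of reduce_topics has changed across BERTopic versions.)
topic_model.reduce_topics(docs, nr_topics=20)
```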
Best video on topic modeling I've seen so far. Can I get all documents related to a topic, instead of just the top 3?
Thanks! Indeed you can. BERTopic has changed a bit since I made this video, so I will have to check the docs, but I am certain you can.
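One version-independent way is to filter on the topic assignments that fit_transform returns; a minimal sketch, with docs standing in for the fitted corpus and topic id 5 chosen arbitrarily:

```python
from bertopic import BERTopic

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Every document assigned to topic 5, not just the top representatives.
docs_in_topic = [d for d, t in zip(docs, topics) if t == 5]
```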
Thank you for your good video.
Does BERTopic need any preprocessing, like the lemmatization or tokenization LDA needs?
Thanks so much for the high quality content you have published so far; your playlists are a gold mine for beginners and enthusiasts in the AI field.
Have you ever considered making a video to explain the principles of creating an efficient dataset for text summarization or other specific tasks?
Many thanks in advance for your consideration!
Awesome. That was so informative. And explained so clearly. Thank you so much.
Thanks so much! I am planning a new video on BERTopic soon to cover its new features.
Thanks, great tutorial.
A question: what's your experience with model quality versus sentence length? Short sentences don't really work (too little semantics), and long ones won't work either (too "much" semantics). Thoughts?
Thanks! And great question. If you are looking for an off-the-shelf solution, try Top2Vec, but I think you may run into similar issues. What language are your docs in? Also, how varied are they in size? A more custom solution may be necessary.
@@python-programming I have standard English websites, from product reviews to travel reports. Generally a page contains some 10 paragraphs. Content on a page is highly correlated (as you'd expect), so the page content is defined by a few paragraphs. The topic of a paragraph is mostly in a single sentence; the rest is "glue". This turns out to be a reasonable assumption (eyeballing).
BERTopic supports these observations, especially if you remove paragraphs where the probability of the most dominant topic is below some cutoff, say 0.6 (the reasoning being that, in the worst case, another topic is then present with at most 0.4); see the sketch below. From experience you're left with about 3% unallocated documents, and each allocated document has at most 3 topics.
This is all fine, assuming BERTopic gives good results for both long and short paragraphs with the same hyperparameters. If my assumption is incorrect, I have a problem :(
So, thoughts?
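For reference, a minimal sketch of the 0.6 cutoff described above, with paragraphs standing in for the paragraph list; calculate_probabilities=True makes BERTopic return the full per-document topic distribution:

```python
from bertopic import BERTopic

topic_model = BERTopic(calculate_probabilities=True)
topics, probs = topic_model.fit_transform(paragraphs)

# Keep only paragraphs whose dominant topic clears the cutoff.
keep_mask = probs.max(axis=1) >= 0.6
kept_paragraphs = [p for p, keep in zip(paragraphs, keep_mask) if keep]
```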
Please can you explain why you didn't use UMAP, HDBSCAN, and c-TF-IDF for this?
Thanks for the question! You absolutely can. I have a whole other tutorial that walks through each of those steps. I think BERTopic, LeetTopic (my library), and Top2Vec provide a simpler solution for those who may not be familiar with a custom UMAP/HDBSCAN workflow (sketched below). I try to make tutorials for users at all levels, and I think these other libraries address the needs of those newer to Python/ML.
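For anyone curious, a minimal sketch of that custom workflow (embed, reduce with UMAP, cluster with HDBSCAN); the model name and parameter values are illustrative rather than tuned:

```python
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

docs = ["first document ...", "second document ..."]  # your corpus

# 1. Embed documents into dense vectors.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)

# 2. Reduce dimensionality so density-based clustering behaves well.
reduced = UMAP(n_neighbors=15, n_components=5, metric="cosine").fit_transform(embeddings)

# 3. Cluster; a label of -1 marks an outlier.
labels = HDBSCAN(min_cluster_size=15, metric="euclidean").fit(reduced).labels_
```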
this was so informative, thank you.
I am so glad it was helpful!
How do I run it on a dataset with more than 12k rows? It is showing a "correct_alternative_cosine" error. Please help.
Greeeeeeeeat! Thanks. Another useful video.
Great presentation.
Thanks!!! Very helpful!
Is it possible to define the number of topics here?
Fantastic explanation
Hi! Where can I find the source file you used?
Nice tutorial, thank you! If I follow the video correctly, about 25% of your documents are marked as outliers. Is that normal? Can you maybe talk about this a bit in a further video?
Yeah, that is fairly normal with BERTopic. I plan to do another video that compares different topic modeling approaches, and that will be a key feature.
@@python-programming Great, I’m looking forward to that video! 😊
Thanks for the amazing content! Do you know if BERTopic could be used to train a model to identify similarity to custom, pre-defined topics?
Thanks! I would not use BERTopic for that, but rather spaCy for text classification. You could use BERTopic to gather data for easy labeling.
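A minimal sketch of that second idea, using BERTopic only to pre-sort a corpus so representative documents can be hand-labeled for a downstream spaCy classifier (docs stands in for your corpus, topic 0 is an arbitrary example):

```python
from bertopic import BERTopic

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Overview of the discovered clusters.
print(topic_model.get_topic_info())

# Representative documents for one cluster: a convenient
# starting point for hand labeling training data.
candidates = topic_model.get_representative_docs(0)
```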
@@python-programming Thanks a lot! I actually went on to search for it and found another one of your videos explaining EXACTLY what I wanted. For reference, it's this one: "The EASIEST! way to do Text Classification with spaCy and Classy Classification"
Thanks again!
@@luiztauffer8513 haha! Perfect! No problem!
Great info!
Thank you!!
What's the best topic model to use for modelling 3,000 documents, each having 3 pages of text?
BERTopic or Top2Vec will both work, but you'll need to reduce your corpus to shorter texts. You can use an introduction or conclusion as your text, or perform some summarization before you start modelling.
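A minimal sketch of the summarization route, using the Hugging Face transformers pipeline (the model choice and the crude character truncation are illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

long_doc = "..."  # one of the ~3-page documents
# Crude truncation keeps the input within the model's length limit.
summary = summarizer(long_doc[:4000], max_length=150, min_length=40)[0]["summary_text"]
```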
Wow
Does this work for Arabic documents?
As long as there is a BERT model for Arabic, yes. I know there is an NEH-funded project for this, but I am not sure if it is available yet. There is a lot of research in Arabic NLP, so I would be surprised if one does not already exist. I do not know Arabic, though, so I cannot validate the results.
@@python-programming Thank you for answering.
Is this tutorial ASMR?
No way to install this; errors keep popping up from everywhere, and when I resolve them, three others appear. Unusable.
So quiet
Comment to say thanks and support this absolutely awesome channel 🪩
Huge thanks and this is sooo clearly explained, good luck ⚡
Thank you so much for your support and this wonderful comment!