You do a fantastic job keeping this simple, focused and clear. Your presenting skills are impressive
Thank you soooooooo much! It works perfectly! Even with around 8k files, R finished the process in less than 20 minutes. I can finally understand it after watching your part 2 (the AI video). Thanks again!
It's awesome to know that R could do this!
Hey, thanks for this good instruction. I tried your code, but it only searches for my first word. How can I analyze more than one word at a time?
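A minimal sketch of how the single-word search could be extended to a vector of keywords (the variable names here are assumptions, not the video's exact code):

```r
# Minimal sketch (assumed names): loop over a vector of keywords
# instead of searching for a single word.
text <- "cell growth and cell division in the cell"
keywords <- c("cell", "growth", "division")
# gregexpr() finds every match; lengths() on regmatches() turns the
# matches into a count per keyword.
counts <- sapply(keywords, function(k)
  lengths(regmatches(text, gregexpr(k, text, fixed = TRUE))))
counts
# cell: 3, growth: 1, division: 1
```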
Thank you very much for this tutorial, really helpful. I am looking forward to Part 2. Thanks
You can find part 2 on the channel as "Text Mining with R - Part 2".
Such cool stuff.
I am in HR recruitment and am envisioning a use case: screening CVs for given keywords (from the job description).
Let me see how it turns out (high hopes).
Thank you for this.
This is really helpful; however, when I execute the code, it gives me numbers from 1-60 and not the actual word counts for the keywords. What should I do if I want the word counts for keywords from PDFs in a table?
My word count is coming up as a sum across all papers. Is there a way to modify the code so that I can see how many times a keyword appears in each of the papers individually?
Hello, did you get the answer for your query from somewhere? Actually, I am looking for the same thing.
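One way to get per-paper counts, sketched under the assumption that each paper's text is one element of a character vector `docs` (e.g. from `pdftools::pdf_text()` pasted together per file):

```r
# Sketch (toy texts are assumptions): one element of `docs` per paper.
docs <- c(paper1 = "deep learning and deep networks",
          paper2 = "statistical learning theory")
keyword <- "deep"
# Counting inside sapply() keeps one count per paper instead of
# one sum across all papers.
per_paper <- sapply(docs, function(txt)
  lengths(regmatches(txt, gregexpr(keyword, txt, fixed = TRUE))))
per_paper
# paper1: 2, paper2: 0
```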
What's an alternative for txt files rather than pdf ones?
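For plain `.txt` files, base R's `readLines()` can stand in for `pdftools::pdf_text()`. A sketch (the sample files below exist only so the code runs anywhere; in practice you would point `list.files()` at your own folder):

```r
# Create two small sample files in a scratch directory (assumption:
# replace `dir` with your own folder of .txt files).
dir <- file.path(tempdir(), "txt_demo")
dir.create(dir, showWarnings = FALSE)
writeLines("alpha beta alpha", file.path(dir, "a.txt"))
writeLines("beta", file.path(dir, "b.txt"))
files <- list.files(dir, pattern = "\\.txt$", full.names = TRUE)
# One string per file, mirroring one string per PDF.
texts <- sapply(files, function(f)
  paste(readLines(f, warn = FALSE), collapse = " "))
length(texts)
# 2
```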
Would you be able to make a video on how to create a similar model that extracts information from a large number of PDF files and exports it to an Excel spreadsheet?
Oh, I have done something similar in part 2: binary classification on journals. However, if you already have a rough idea of the contents of the papers, you can simply set up a matrix and use a clustering algorithm to separate them instead of a neural network.
th-cam.com/video/GihOdZUkH1Y/w-d-xo.html
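A sketch of the idea in that reply (the keyword counts below are made-up assumptions): a small keyword-count matrix per paper, clustered with base R's `kmeans()` instead of a neural network.

```r
# Toy document-term matrix: rows are papers, columns are keyword counts.
dtm <- rbind(paper1 = c(deep = 5, gene = 0),
             paper2 = c(deep = 4, gene = 1),
             paper3 = c(deep = 0, gene = 6))
set.seed(1)
# Two clusters separate the "deep" papers from the "gene" paper.
clusters <- kmeans(dtm, centers = 2)$cluster
clusters
```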
Hi, I am facing the following errors while loading many PDF files in R:
PDF error: Invalid shared object hint table offset
PDF error (5393291): insufficient arguments for Marked Content
PDF error (5393300): insufficient arguments for Marked Content
PDF error: Invalid shared object hint table offset
PDF error: Invalid Font Weight
Can you please help me?
Very good thanks
Is there a part two to this?
Ya, it's called "Text Mining with R - Part 2" on the channel.
Thank you!
The Github script is not available :(
Could you please upload it?
Hmm, that's funny, I have updated the link. If not, see if you can access it here (github.com/brandonyph/Text-Mining-With-R).