That's a tutorial to my liking! Love the depth!
Thanks Mark 🙌, implemented this today using render instead of replit for the micro service.
pumped to hear that! Was it an easy implementation on Render?
Haven't tried it myself.
Excellent material, thank you, Mark!
Thank you!! My pleasure 🦾
Hi Mark. Just started watching, and your intro used to have you commenting on your previous years in AI before your business. There are plenty out there with 2 years of experience now, but your point of difference is your education in this field. Keep this in your intro for newbies. It earned you my subscribe.
Tony, this is a fantastic callout and I really appreciate it -- I tend to think that constantly mentioning the 10-year background is a bit of a 'subtle flex', so I typically shy away from it.
The way you worded it, though, I think there's merit in mentioning it as context for 'why you'll find this more valuable than a video from someone who doesn't have the background', which I hadn't considered as much -- noted and TY for reminding me!
You make it look so easy! Great work, and I love the content. Thanks for your generosity!
I'm glad it comes off that way haha - and really appreciate your feedback 🦾
Thank you Mark. You will never know how helpful this was and what you've made possible for me to achieve with a Custom GPT. Fully deployed in 52 minutes by a non-programmer, and that included re-playing the relevant sections of your YouTube video a bunch of times :)👏
I’m up late night here recording my next video and saw your amazing comment and unbelievably generous donation to the channel - makes me so happy I can help!
Thank you SO much for your support and I hope I can keep delivering for you 🦾
@@Mark_Kashef I'm gonna recommend you to my friend who runs a niche software company here in Vancouver. Gotta support our fellow Canadians!
I appreciate you! Thanks 🍁
As always, SUPER content. Please create an update to this video when the Pinecone assistant can be modified with a custom system prompt....
Excellent demo and very close to my use cases. Thank you very much Mark!
Thank you so much for providing this insane amount of value. Great video!❤
My pleasure! So happy it was valuable 🦾
Loved the tutorial! This is super useful Mark - Thanks a lot!
Happy you found it useful, thanks so much for the feedback 🦾🦾
That video helped me a lot, but I don't know how to create the Pinecone Assistant demo like at 22:25.
did you check the gumroad link? I provided the whole schema and codebase there.
great tutorial, got what I needed, and a bit more, thanks boss
awesome to hear that it delivered, thanks for the comment!
Thanks for the video, great. One question is still on my mind regarding the file sizes this applies to. In my experience, the RAG from OpenAI is perfect for small documents like text, docx, or markdown files and only messes up with big PDFs like in your example. Can you confirm this?
You're spot on! For now, the OpenAI Assistants API hasn't received any tangible updates in a while -- it does alright with simpler and more compact documents, but struggles with larger documents with more nuance.
Great vid Mark! Would be great if there's an 'Assistants API' version of this vid coming soon :) Jam-packed value in this one
Thanks so much!
As soon as OpenAI remembers to care about their Assistants API again and updates their file search, I'll be right on it haha 🦾
@@Mark_Kashef ahaha legend!
Thanks Mark, great content. Keep it coming!
Peter! Thanks so much for this feedback as well as your generous support for the channel -- very much appreciate both. Cheers!
Great video. The use of Pinecone helps a lot to get accurate information from the reference document. Thank you. Question: Where is the text file referenced at 21:46, "Link in Description," to add to the MyGPT schema?
Thanks Mauricio! Glad it was helpful 🦾
The first bitly link in the description that says ‘mastery kit’ will take you to a Gumroad page where you can access the content by entering your email!
I really enjoy the videos you put out - thank you for sharing your knowledge! For this one, I noticed that the instructions could be a bit clearer, especially when you drop the OpenAI schema into the GPT. It gets a little tricky in the next steps with Replit. Just a suggestion to help make your awesome teaching even better. Thanks again!
Thanks for the feedback! I appreciate you sharing; noted for next time 🦾
This is too amazing! I love it. Been digging into RAG and thought NotebookLM is the next best thing but this is on another level!
Thanks for the feedback my friend!
NotebookLM is awesome; I'd say this is a cool way to supercharge any existing application or GPT, whereas NotebookLM is good for some quick and dirty RAG tasks 🦾
Thanks Mark, has Replit recently changed its plans? In that case, is there an option to go for something else? It seems that in order to deploy I have to upgrade my plan and pay.
As a cheaper alternative, Render should also do the trick!
Nice video, thanks. It seems Replit requires a paid account to deploy. Am I doing something wrong? Did this change recently? Any other alternatives?
thanks so much!
When I first started using Replit, you didn't need to pay for the standard deployment -- I just created a new account and was indeed prompted to upgrade.
You should be able to deploy this also on Render:
render.com/pricing
Hey Mark, great video! Super helpful. However, my database has files up to 100 MB. Any recommendations on how I can efficiently break these up and embed them into Pinecone? I have zero coding experience but I'd be willing to learn basic stuff if required. Thanks!
Glad it was helpful! Are these files PDFs?
I've noticed that converting PDFs to .txt files immediately cuts down the size significantly across the board.
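If it helps, here's a rough sketch of the kind of 'basic stuff' I mean: splitting one big extracted text file into overlapping chunks before embedding. The sizes are arbitrary placeholders, not tuned recommendations.

```python
# Illustrative sketch: split one large extracted text into overlapping
# chunks small enough to embed one at a time. chunk_size and overlap are
# placeholder values -- tune them for your own documents.
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than a full chunk so adjacent chunks overlap.
        start += chunk_size - overlap
    return chunks
```

Each chunk can then be embedded and upserted individually; the overlap helps queries whose answer lands on a chunk boundary.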
Yeah, they're PDFs, open-source textbooks actually. So if I converted them to txt, I imagine I'd have to go through manually and remove the page numbers and other formatting to make it easier for the vector DB to interpret?
Great video brother, thank you!
thanks so much!! appreciate your feedback
Wow, thanks, Mark. Great walkthrough. You've made it really easy to understand. I'm new to your channel and I've instantly subscribed. Could you explain what a microservice is and why you need it? And why Replit? Could you not create a webhook with a script in the likes of n8n, Make or Zapier which would do the same thing?
hey Ian! Thanks so much.
The primary use case for the micro-service is that if you have multiple automations, custom GPTs don't allow you to use more than one domain, and each platform (n8n, Zapier, Make) has its own.
Meaning if you have 3 separate automations, you most likely need to pick one or find a way to combine them all, which is a bit of a headache.
By using Replit as an intermediary, I can create as many connections as I wish, since they'll all sit behind a single custom domain.
I use Replit since, for $25 USD/mo, I can deploy quite a few apps, and they're easy to replicate once you get the hang of generating Flask apps via ChatGPT or Claude.
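For context, the micro-service itself can be tiny. Here's a minimal sketch of the kind of Flask app I mean; the `/query` endpoint name and the `query_pinecone` placeholder are illustrative, not the exact code from the video.

```python
# Minimal Flask micro-service sketch: one endpoint a custom GPT action can call.
from flask import Flask, request, jsonify

app = Flask(__name__)

def query_pinecone(question: str) -> str:
    # Placeholder: swap in a real call to the Pinecone Assistant here.
    return f"(answer for: {question})"

@app.route("/query", methods=["POST"])
def query():
    # The custom GPT action POSTs JSON like {"question": "..."}.
    data = request.get_json(force=True)
    question = data.get("question", "")
    if not question:
        return jsonify({"error": "missing 'question'"}), 400
    return jsonify({"answer": query_pinecone(question)})

# To serve it on Replit you'd run something like:
#   app.run(host="0.0.0.0", port=8080)
```

Each extra automation just becomes another route on the same app, which is how everything stays on one domain.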
This sounds amazing. BTW, I watch lots of YouTube and you're the best at properly explaining AI. I'm immediately thinking about how to organise my files. Is there a number of files at which it stops choosing the correct file reliably? I'm wondering whether to have an AI staff team folder containing all their files, like a human staff shared drive, rather than each custom GPT having separate files? At the moment they have knowledge files, some unique to the custom GPT, some which most of them share 🤔
first of all, thank you so much for the feedback! Really appreciate it :)
It has a limit on the front-end of 10 files (which can be very long files) -- if you want to load many files, I would ideally bundle them by theme.
Via the API, I think you can get up to 20 files; from my testing, it's really good at retrieval even with 10 large files.
I would personally have a different Google Drive per person so there isn't too much accidental cross-pollination.
That's exactly what I was searching for 🙌🏻❤
Glad I could drop this for you just in time!
@@Mark_Kashef PLEASE do provide the code files and the links in the description as soon as possible for free 🙏 thank you so much.
you can just enter $0 and not pay :)
@@Mark_Kashef I didn't get you, brother. Where do I need to enter $0, and where is the link for that?
@@pandipatipavan3804 this is the link to the resources that's already in my description: bit.ly/4gJr9CR
Does Pinecone have Canadian instances? Also, any recommendation on Canadian-hosted LLMs for business use cases?
It’s hosted on AWS and GCP environments, so I think it’s primarily US-based regions.
Mark!! Again, top tier content! Can you do a video on Vercel?
Thanks so much! Did you mean Vercel as a product as a whole or just v0?
@Mark_Kashef Vercel as a product as a whole. I was steered there as another option for deployment due to its integration with Supabase. I built a landing page with v0 and needed a DB to capture contact information from a download included on the landing page.
@@Reflowflow hmm okay! I'll see how I can weave that in
Great video!
thanks so much Ted!
I was looking for a working example of custom GPTs that connect to a user RAG server for a long time. Thank you for this recipe!
I want to host that vector db (+ key store) locally (no replit/pinecone servers needed), and your workflow here is still a good practical model to contemplate.
But for those without local servers, wondering if you can show something similar using the Supabase free tier as the OpenAPI endpoint in custom actions?
I'm glad the recipe tastes good aha -- interesting question, are you trying to run custom GPTs locally using the ChatGPT Desktop app?
I use OpenAI and Qdrant locally, so curious to hear your stack to see if I can help!
Thanks mate. Really enjoyed this one as it gave me ideas.
Is it possible to somehow bring LlamaParse into this mix, where it handles technically complex documents (with tables, charts, illustrations, formulas/maths etc.) as well as non-true PDFs requiring OCR first, before the extracted Markdown gets passed to Pinecone for embedding via Google Drive?
happy to hear you liked it!
In terms of your idea, what I would do is extend that Make scenario I showed at the very end to enable an OCR-friendly workflow.
Perhaps you watch for any new document in the Google Drive, then use the PDF4Me module with the PDF OCR trigger OR run it through a custom Python code module that runs your LlamaParse script, and then follow the existing workflow I outlined.
Totally possible, just will need some automation love to get it to work.
Hey Mark! Great video. Would you be willing to make a tutorial how to set up the relevance ai in-built vector database?
hey Florian! thanks so much -- frankly I stopped using Relevance AI a long time ago after running into many scalability issues with it.
Not a big fan
@@Mark_Kashef Got it! Thanks for that perspective. Could you share briefly what is likely to go wrong with relevance as you scale? I think a lot of people in the comments might want to know.
@@florianrolke for sure -- again my experience is 4-5 months old.
When executing a Relevance AI workflow, I often received timeout errors when the execution exceeded 1-1.5 minutes.
Also, the way they do RAG (retrieval augmented generation) 'was' fairly basic, so when it comes to more complex true PDFs, it struggled.
Not to mention that it was hard to pinpoint exactly where the information was coming from, and whether a wrong answer was because the LLM was wrong, versus the knowledge base, or both.
Again, this is a multiple month old personal review :) could have gotten way better since then.
@@Mark_Kashef Thank you for that detailed response. Super valuable! Last question if you don't mind: I am right now wrestling o1 to give me an OpenAPI schema to deploy with Agentive instead of Replit. That's technically possible, right, or am I missing something?
@@florianrolke agentive and replit are not interchangeable; agentive has specific endpoints you can deploy to, replit allows you to build those endpoints from scratch
Is this possible to do with just OpenAI's API?
OpenAI has the Assistants API, which has vector storage as well, but it's nowhere near the performance of Pinecone's assistant.
I managed to get it to work; now I'm trying to get the same output from the Pinecone assistant to be reflected in my custom GPT. How do I do that?
glad you got it working!
the best way to get the rawest response is to write a prompt in the custom gpt to deliver the response from 'functionname' 'as-is' without transforming, paraphrasing, or changing it in any way.
How about how to use make to access the pinecone vector database?
Did you mean the ‘old-fashioned way’?
One of my older videos is just about that with some custom code that still works today (my infinite memory video from a few months ago)
Hi Mark, love your tutorials. Question: Replit deployment costs $25/mo, right? No way to use this AI agent for free?
Hey Natalya! Thanks so much, means a lot 🦾
The hack for using Replit for free is to ‘Run’ the code in another window every time you want to use the Custom GPT so that the service is online; you can avoid deploying it this way
Isn't there a way to give a prompt (instructions) to the Pinecone assistant? So we can get a response that follows some instructions.
In a few weeks to a month, I believe this is coming! 🦾
this is now available btw!
Hi, I keep getting an error during deployment. How can I fix this?
The logs below will usually tell you what the underlying issue is: usually API keys are mis-entered, or extra characters have been added to the code that are breaking the deployment.
If not, I've had a few times where Replit itself was backed up, so I waited an hour or two and the deployment worked with no changes
Any way to do this within Poe or Claude?
From what I've seen, I don't think Claude Projects has access to custom actions yet -- Poe bots don't seem to accept API requests either, from an initial scan.
Hello, my friend...
I’d like to ask a question:
Why use Pinecone instead of OpenAI's own Vector Store?
Is the difference in results significant enough to justify using a separate tool just to have a vector store?
Hey!
OpenAI’s vector store hasn’t truly been updated in a very long time - it uses very basic RAG, and technically you have to play around with the chunking size for each document to determine the best parameter settings for retrieval.
I’ve found that pinecone assistant is far more accurate, and they’re actively working on constantly improving it versus OpenAI who has left this API in Beta for the past year with no new updates in sight.
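For reference, this is the kind of per-document chunking parameter I mean. A hedged example of the static chunking settings that Assistants API vector stores accept when you attach a file; double-check the current OpenAI docs for the exact shape, and treat the values as illustrative.

```python
# Hedged example (verify against current OpenAI docs): static chunking
# settings you can pass when attaching a file to an Assistants API
# vector store. Values are illustrative starting points, not recommendations.
chunking_strategy = {
    "type": "static",
    "static": {
        "max_chunk_size_tokens": 800,  # tune per document
        "chunk_overlap_tokens": 400,   # typically at most half the chunk size
    },
}
```

The annoying part is that the 'right' numbers vary per document, which is exactly the fiddling Pinecone Assistant spares you.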
@Mark_Kashef thank you VERY MUCH
I got this code working. Thank you so much! But when I ask a question in the custom GPT, it always asks for my permission to “talk” to the Replit app, and asks me to “Confirm” or “Deny”. Then it says below, “Only sites you trust”. Is this a setting within ChatGPT that I can change?
my pleasure!
Best bet to minimize that is to pick the Privacy Settings and select the 'Always Allow' option.
here's a screen clip:
paste.pics/S79QI
Is NotebookLM poor with its RAG methods as well? I found it to be pretty good.
NotebookLM is actually pretty good! This use case is super helpful to improve existing apps that need an extra bit of juice for extra performance!
Had to subscribe.
appreciate that! Glad I could deliver for you.
Another question: how could I use Pinecone's Vector Store as the "brain" for my OpenAI agents (note that I'm not referring to "Custom GPTs," but actual AGENTS, okay)?
Pinecone has indeed proven to be an excellent tool, and it might be very worthwhile to test it in my learning projects. However, I’d like to leverage its data processing capabilities within OpenAI agents.
Do you think this is possible? Could you make a video about it?
You absolutely can, we’ve done it for clients where we just take advantage of function calling with the assistants api - if you go back in my video list to my assistants api v2 video, I discussed how to implement function calling fairly in depth.
Should sort you out on experimenting with it!
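As a quick sketch of the shape of it: the tool name and the `query_pinecone` helper below are hypothetical placeholders, and you should verify the function-tool schema against the current Assistants API docs.

```python
# Hedged sketch: a function-calling tool definition that lets an OpenAI
# assistant route retrieval through Pinecone. Names are hypothetical.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": "Retrieve relevant passages from the Pinecone knowledge base.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The user's question."}
            },
            "required": ["query"],
        },
    },
}

def query_pinecone(query: str) -> str:
    # Placeholder: swap in a real Pinecone Assistant call.
    return f"(pinecone answer for: {query})"

def handle_tool_call(name: str, arguments: dict) -> str:
    # When the assistant emits a requires_action event naming this tool,
    # run the query and submit the returned string as the tool output.
    if name == "search_knowledge_base":
        return query_pinecone(arguments["query"])
    raise ValueError(f"unknown tool: {name}")
```

You register `search_tool` on the assistant, and your run loop calls `handle_tool_call` whenever the model asks for it.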
is there an opensource alternative ?
I believe Render offers a ‘hobby’ plan that should allow you to deploy this - again though haven’t tried it myself!
My ChatGPT is giving this error, "405 Method Not Allowed", when calling the action. Do you know what it could be?
I found the problem; it was because there was a '/' at the end of the address that I copied.
Was about to literally mention that, glad you spotted it!
The Pinecone Kit link in the description is broken. Thanks for the timely video.
I just tested it, seems to work fine for me!
Here’s the link again:
bit.ly/4gJr9CR
@@Mark_Kashef It works, thanks. Instead of PDF, can Pinecone accept relational CSVs? I hope to hear from you more about the economical alternative to GraphRAG someday soon. After a decade, you're the first resourceful YouTuber I have started to comment and interact with. Keep up the great work!!
@@philosafi thanks so much again for the compliment, super appreciated :)
The Pinecone Assistant API only accepts PDF or txt files currently -- I usually format CSVs into PDFs so that I can process them.
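As a rough illustration of that reformatting step, here's a hypothetical helper that flattens CSV rows into labeled lines you could save as a .txt (or render into a PDF) before uploading:

```python
# Illustrative sketch: flatten a CSV into labeled text lines so it can be
# uploaded in one of the formats Pinecone Assistant accepts. The "Record N"
# labeling is just one possible convention.
import csv
import io

def csv_to_text(csv_content: str) -> str:
    rows = csv.DictReader(io.StringIO(csv_content))
    lines = []
    for i, row in enumerate(rows, 1):
        fields = "; ".join(f"{k}: {v}" for k, v in row.items())
        lines.append(f"Record {i} -- {fields}")
    return "\n".join(lines)
```

Labeling each value with its column name keeps rows self-describing once they're chunked, though (as the follow-up comment notes) cross-table relationships still get lost.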
@@Mark_Kashef Formatting the CSVs into PDFs could lose the dataset's relational nature, I guess. I have spent months on my custom dataset and also have a pretty good panel-of-experts CustomGPT (thanks to you again), but being apprehensive, I've never uploaded my dataset to the OpenAI platform. The advantage of the latter is that I can access the chatbot on my other devices, so I'm now looking for a local CustomGPT powered by GraphRAG-level accuracy using my relational CSV dataset.
Value 💣ing again. So grateful. 👊 Thank you, Uncle Mark & Co 🙏🎯🫡🤙
Does it work with German?
it should work with it!
How is it different from using NotebookLM from Google?
NotebookLM is a full end-product; you can use it, but it's not something that you can natively 'call' via API or microservice from other products like custom GPTs; creating something like this allows you to nimbly call the service from anywhere and have some configurability on how it works without building your own RAG system.
NotebookLM is awesome for rapid Q&As on your documents within the context of that product.
Is it not possible to hit pinecone directly?
it's possible, this is just more accessible
what happens when you select gpt-4?
If you mean the selection in Pinecone Assistant, you can switch which LLM will be responding with the answer
Hey man, this is amazing! Thanks a lot; when I get rich, I'll pay you back for all your distilled knowledge and effort ☺
Hahaha my pleasure! No need for payment, this thank you is enough 🦾
wow...
As for Greece, sustainability is 45.78 on page 16...
that one shows up on multiple pages I remember :)
Just redid all your questions in ChatGPT; all of them were correct...
phew! glad we passed the test haha.
@@Mark_Kashef what I meant is that ChatGPT was correct, unlike in your video :)
Ah yes I understand now - it’s hit or miss
Could it be that Replit just made deploying a paid feature, or am I missing something?
Ever since this video dropped, it indeed became a paid feature unfortunately
@@Mark_Kashef well, that's unfortunate indeed
Try flowise instead of replit. No code, low code.
I tried to love it, but I couldn't
@@Mark_Kashef fair enough. I'm looking to mix some of what you did with Flowise, since Flowise can upload to Pinecone easily and update as well, then point ChatGPT to the Flowise endpoint for chat. By the way, Flowise has CHANGED a lot over the past 6 months.
@@AssassinUK I'm betting on Replit Agent getting way better in the next few months, so I'd rather centralize everything -- but good to know!
@@Mark_Kashef either way, I like your style, delivery, and content; you got a new subscriber, and if I manage to replicate what you did with Flowise, I'll let you know how it goes.
@@Mark_Kashef why Pinecone and not Qdrant or Supabase?
The Pinecone assistant looks like some customer service chatbot. And Pinecone hides its most important feature at the bottom. How does one even know it can replace an index?
It’s meant to enable an easier approach to RAG by handling the uploaded files on autopilot
Replit deployment is a paid feature now.
I was just told! Alternative is to click ‘Run’ on Replit every time you want to use it in your GPT if you want to avoid paying the $25/mo
@@Mark_Kashef doesn't matter, Mark. You give great value to the community; appreciate you. I am a student of Helena's AI accelerator, to which you are a regular guest speaker. :) thank you.
@@jordanting705 thank you so much for the kind words Jordan, much appreciated!!
This was a very informative and helpful video. Thanks for your help 🥹
pleasure to hear that Desire! My pleasure as usual 🦾