Great video. Waiting for the next!
Great overview. Thx! Happy to find you.
Thanks for sharing! You’re really good at explaining things for newbies!
Great explanation and exactly what I was looking for.
Excellent explanation, thank you! It would be great to have a video explaining how to integrate assistant v2 into botpress so that it can be used as a card in any part of the flow, in the same way it works in typebot.
I have a video on that (as well as a template)! --> th-cam.com/video/6zS0LToLOiA/w-d-xo.html
You just have to change the header from V1 to V2 in the API call card.
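For anyone wiring this up by hand, here is a minimal sketch of what that header change amounts to. It uses Python with requests and a placeholder call; in Botpress the same OpenAI-Beta header just goes into the API call card's headers field.

```python
import os
import requests

# Assumption: OPENAI_API_KEY is set in the environment.
headers = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",
    # The only change needed for Assistants v2: bump this header from v1 to v2.
    # "OpenAI-Beta": "assistants=v1"   # old
    "OpenAI-Beta": "assistants=v2",    # new
}

# Example call: create a thread, just as the API call card would.
resp = requests.post("https://api.openai.com/v1/threads", headers=headers, json={})
print(resp.json())
```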
Thank you for the interesting video!!!
Great! I think I should try this method with flowise and llama3.
Great idea, let me know if flowise supports something like reranking, cause I haven't had a chance to try!
@@show-me-the-data Sorry for the late reply. I have no idea. But I tried to use several models to value/rank content. The biggest troubles are finding a ranking system that works for every topic and/or type of content, and getting an output format that is then used consistently. Especially when using Ollama's local models.
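One way to chip away at the "consistent format" problem is to constrain the model to JSON. Below is a rough sketch against Ollama's local REST API; it assumes Ollama is running on the default port with llama3 pulled, and the 1-10 scale, JSON keys, and sample content are arbitrary choices for illustration, not a fixed standard.

```python
import json
import requests

# Ask a local Ollama model for a score in a fixed JSON shape.
prompt = (
    "Rate the following content for usefulness on a scale of 1-10. "
    'Reply ONLY with JSON like {"score": <int>, "reason": "<short reason>"}.\n\n'
    "Content: (paste the content to be ranked here)"
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "format": "json", "stream": False},
)

# With format="json", the generated text is valid JSON we can parse directly.
result = json.loads(resp.json()["response"])
print(result["score"], "-", result["reason"])
```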
Have they added the ability to measure prompt, completion, and total tokens in v2?
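For what it's worth, completed runs in the Assistants API carry a usage object, so a sketch like the one below should work against v2 as well; the thread and run IDs are placeholders.

```python
import os
import requests

# Placeholder IDs; substitute the thread and run you want to inspect.
THREAD_ID = "thread_XXXX"
RUN_ID = "run_XXXX"

headers = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "assistants=v2",
}

run = requests.get(
    f"https://api.openai.com/v1/threads/{THREAD_ID}/runs/{RUN_ID}",
    headers=headers,
).json()

# usage is populated once the run reaches a terminal state (e.g. "completed").
usage = run.get("usage") or {}
print("prompt:", usage.get("prompt_tokens"))
print("completion:", usage.get("completion_tokens"))
print("total:", usage.get("total_tokens"))
```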
Please update the Botpress template with the OpenAI API V2. Thanks
Done! I sent an email to everyone who downloaded it and updated the download flow from the other YouTube video: th-cam.com/video/6zS0LToLOiA/w-d-xo.html
@@show-me-the-data Thanks!
I would like to know the cost of such requests. In the Playground, when I ask a question and the answer is extracted from the document, it takes about 10,000 tokens.
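To put a rough number on that: the cost is just tokens times the per-token price of whichever model backs the assistant. The prices and the prompt/completion split below are placeholders to show the arithmetic, not current rates.

```python
# Rough cost estimate for a ~10,000-token request.
# The per-million-token prices and the 9k/1k split are ASSUMPTIONS;
# check the current pricing page for the model your assistant uses.
prompt_tokens = 9_000
completion_tokens = 1_000

price_per_m_input = 5.00    # USD per 1M input tokens (assumed)
price_per_m_output = 15.00  # USD per 1M output tokens (assumed)

cost = (prompt_tokens / 1_000_000) * price_per_m_input \
     + (completion_tokens / 1_000_000) * price_per_m_output
print(f"~${cost:.3f} per request")  # ~$0.060 with these placeholder prices
```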
How do I hook it up with Botpress?
I explain that in this video here: th-cam.com/video/6zS0LToLOiA/w-d-xo.html
@@show-me-the-data Since it was three months back, I thought it covered only v1. Does it work for the v2 API as well?