Streamlit Knowledge Graph: Unearth Insights from PDFs and Text Files

  • Published Oct 20, 2024

Comments • 26

  • @echohive
    @echohive  2 months ago +1

    Download the full project files for this project at my Patreon, along with 250+ other projects: www.patreon.com/posts/streamlit-graph-109131557
    Talk with me this Sunday the 4th, AMA for Architect+ level Patrons: www.patreon.com/posts/ama-meetings-for-109136215
    Learn to code fast with AI assistance with my 1000x MasterClass: www.patreon.com/posts/1000x-dev-103326330
    Discounts for my Patrons: www.patreon.com/posts/announcing-perks-108997398
    Search 200+ echohive videos and code download links: www.echohive.live/
    Auto Streamer: www.autostreamer.live/
    FastAPI course: www.patreon.com/posts/learn-fastapi-26-95041684
    Chat with us on Discord: discord.gg/PPxTP3Cs3G
    Follow on Twitter (X): twitter.com/hive_echo

    • @Kalaanoo
      @Kalaanoo 2 months ago +1

      Hey bro, Bozorg here, the startup guy from Berlin. Love this content, so fluent and compelling to follow! Keep it up ☺😍

    • @echohive
      @echohive  2 months ago

      @Kalaanoo thank you 🙏

  • @andydataguy
    @andydataguy 2 months ago +1

    I love the way you've made the Streamlit app so functional. It's great to see how to interact with these tools more easily.
    A note about generating triplets for the knowledge graph: you get the best results by chunking your document and then having the model generate relationships iteratively through each chunk (a rough sketch of this follows after this thread).

    • @echohive
      @echohive  2 months ago

      Thank you 🙏 and my next video is going to be exactly about that!
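
As a rough illustration of the chunk-then-extract workflow described in the comment above, the sketch below splits a document into word-based chunks, asks a model for (subject, relation, object) triplets per chunk, and merges everything into one graph. The OpenAI client, the gpt-4o-mini model name, the JSON prompt, and the networkx graph are assumptions made for illustration; the actual project may use different components.

```python
# A rough sketch of the chunk-then-extract workflow described in the comment
# above. Assumptions (not taken from the video): the OpenAI Python client,
# the gpt-4o-mini model name, a simple word-count chunker, and networkx for
# the merged graph.
import json

import networkx as nx
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def chunk_text(text: str, words_per_chunk: int = 800):
    """Split raw text into word-based chunks small enough for one prompt."""
    words = text.split()
    for i in range(0, len(words), words_per_chunk):
        yield " ".join(words[i:i + words_per_chunk])


def extract_triplets(chunk: str) -> list:
    """Ask the model for (subject, relation, object) triplets as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract knowledge-graph triplets from the user's text. "
                    'Return JSON: {"triplets": [["subject", "relation", "object"], ...]}'
                ),
            },
            {"role": "user", "content": chunk},
        ],
    )
    return json.loads(response.choices[0].message.content).get("triplets", [])


def build_graph(text: str) -> nx.DiGraph:
    """Iterate over the chunks and merge each chunk's triplets into one graph."""
    graph = nx.DiGraph()
    for chunk in chunk_text(text):
        for subject, relation, obj in extract_triplets(chunk):
            graph.add_edge(subject, obj, relation=relation)
    return graph
```

The merged networkx graph can then be handed to whatever graph-drawing component the Streamlit app uses for visualization.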

  • @data01010
    @data01010 2 months ago +1

    All gas, no brakes. All of these use cases are amazing; keep them coming.

    • @echohive
      @echohive  2 months ago

      Thank you 🙏 I hope to 🙂

  • @GiovanneAfonso
    @GiovanneAfonso 2 months ago +1

    Oh man, that is incredible, congratulations!

    • @echohive
      @echohive  2 months ago

      Thank you very much 🙏

  • @parthwagh3607
    @parthwagh3607 2 months ago +1

    I watched some of your videos; they are highly informative. Thank you so much for bringing such knowledge to YouTube. I have one request for your next video. Could you please create a fully automatic RAG pipeline that does the following and, most importantly, runs entirely locally, without any paid API, using LM Studio, oobabooga, or Ollama: 1) research current news using browser search tools, 2) provide topics from the current news, 3) select one of them, 4) find all the other news on the selected topic, 5) combine all relevant information, 6) create and draft a news article for our website, 7) upload it to the website. Or another project: creating an entire book using local LLM models, not paid APIs. For both of these projects we would not use any paid services, only open-source models. Thank you again.

    • @echohive
      @echohive  2 months ago

      Thank you very much 🙏 You are making an interesting suggestion. I will try to build something like that. In the meantime, take a look at this: th-cam.com/video/VZNReaVKC80/w-d-xo.html

  • @andydataguy
    @andydataguy 2 months ago +1

    Oooo, this is exactly what I've been looking for! I'm in the middle of a move. Excited to dive back in 🙌🏾

    • @echohive
      @echohive  2 months ago

      Hope you will find it useful! Wish you the best with your move ❤️

  • @micbab-vg2mu
    @micbab-vg2mu 2 months ago +1

    It looks great :) I will test it on medical publications :)

    • @echohive
      @echohive  2 months ago

      Thank you 🙏

  • @parth-club
    @parth-club 2 months ago +1

    legend

    • @echohive
      @echohive  2 months ago

      Thank you very much! 🙏❤️

  • @akash_chaudhary_
    @akash_chaudhary_ 2 months ago +1

    @echohive How do you handle documents that hold unstructured data as well as data in tabular format? With normal RAG it sometimes fails to get the data from a table when the table is long. Is there any video I can refer to? Thanks

    • @echohive
      @echohive  2 months ago

      You can try vision models of course, but take a look at LlamaParse, the Unstructured API, and Adobe PDF Extract, and most importantly LLMWhisperer (the best but most expensive).
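
As a concrete, hedged starting point for the table question above, the sketch below uses the open-source unstructured library (one of the tools named in the reply) to pull each table out as a single HTML block, so a long table is never split across RAG chunks. The hi_res strategy, the helper name, and the table/prose split are illustrative assumptions, not the video's code.

```python
# A hedged sketch of one open-source option from the list above: use
# unstructured's PDF partitioner to keep every detected table together as
# one HTML string, and collect the remaining narrative text separately.
from unstructured.partition.pdf import partition_pdf


def split_pdf_for_rag(path: str):
    """Return tables (as HTML strings) and narrative text as separate lists,
    so each table can be embedded or summarized as one unit."""
    elements = partition_pdf(
        filename=path,
        strategy="hi_res",           # layout model, better table detection
        infer_table_structure=True,  # keep rows and columns as HTML
    )
    tables, prose = [], []
    for element in elements:
        if element.category == "Table":
            tables.append(element.metadata.text_as_html)
        else:
            prose.append(element.text)
    return tables, prose
```

Keeping each table as one chunk (or summarizing it before embedding) is what prevents long tables from being cut mid-row during retrieval.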

  • @totallyautonomous
    @totallyautonomous 2 months ago +1

    Is your Patreon beginner friendly?

    • @echohive
      @echohive  2 months ago

      Yes, I have several courses on how to code faster (the 1000x MasterClass), on Streamlit, and others. I also have AMA meetings every week with Architect patrons. As a patron, you will be able to download the source code for all my projects as well.

  • @Dave-nv5rv
    @Dave-nv5rv 2 months ago +1

    Hi, new subscriber here, love the channel. I don't have time to watch all the things on Patreon etc., but I was wondering if you do consulting?
    I'm interested in perhaps just having a Teams meeting or similar for 30-45 minutes.
    I'm working on a poorly coded and structured VB application that has about 700k lines.
    I've been working with Cursor plus Visual Studio with ReSharper.
    My goal is to maximize my Cursor usage and optimize my Claude API token usage as best as possible. Also, if there is some tool to accurately diagram this app in detail, that would be nice too.
    I have all the paid subscriptions, so spending a bit is not a big deal; I'm just trying to get the most from AI to make this project easier and faster.
    Let me know if you're open to something like that and, if so, the best way to contact you. Thanks

    • @echohive
      @echohive  2 months ago +1

      Thank you for subscribing and for the nice words 🙏 Yes, I do have Patreon tiers for consulting. When you go through my Patreon memberships, just scroll right and you will see the “Virtuoso tier” and the “Prodigy tier”. Virtuoso includes 1 hour of 1-on-1 meetings per month and Prodigy includes 3 hours per month. Please let me know if you have any questions.

  • @vijayjadhav287
    @vijayjadhav287 2 months ago

    Hello there. We are working on a social impact initiative in the field of education. We would like to contact you to explore mutual synergies.

  • @MichaelWoodrum
    @MichaelWoodrum 2 months ago +1

    Seriously, I'm so busy at work, and it's not capable of the automation you keep pumping out. I'm fairly certain that you could increase your income and viewers by exploiting something. Obviously you and I think similarly in this way. I make awesome stuff. You make awesome stuff. We can't market it because marketers are similar to lawyers: boring until you have to remove them from existence to create a new existence. I think that is Plato, or another Greek philosopher. It's insane to me that the consistently less capable algorithms and the ridiculously terrible capabilities of Google's AI have somehow made us connect. Not that I'm upset; I'm legitimately surprised. Google has systematically stepped their game down by what I can only assume is hiring all the Apple employees who somehow looked good to Apple at first and then failed miserably at making Siri anything but something that could be built in a few hours by anyone with access to GPT.
    I want you to continue as far as possible. I wish I could devote the time you do myself; not for YouTube, but for my own selfish purposes. You are a king. You deserve more than what shows in the numbers. Legit, the most informative and the least filled with puffery of any "AI" channel. Anyone using extensive libraries, especially LangChain, is attracting people, and those people cannot possibly compete. Maybe they will figure out some investment from the investor bros. You are legit in my eyes.
    Keep pumping it out. I legitimately appreciate your tangential thinking. It tends to lead to the answers in some way.

    • @echohive
      @echohive  2 months ago

      Thank you very much! 🙏 I truly appreciate your supportive sentiment ❤️ One of my goals is to create these projects so that others who don't have the time can just pick them up and customize them for their own use case. As far as marketing, I learn and get better, but I just like to be as factual as possible with titles and such. Slow, consistent growth is fine too :)