Make Your Own GPT With h2oGPT & H2O LLM Studio

  • Published Jan 10, 2025

Comments • 21

  • @jacques42
    @jacques42 1 year ago +6

    00:03 🎙️ LLMs (Large Language Models) are AI systems with billions of parameters trained on vast amounts of data to generate human-like text.
    15:11 📊 LLMs come in different sizes, but larger models require more hardware and don't necessarily guarantee linear improvements in performance.
    21:55 🧠 The foundational training, data sources, and fine-tuning tasks significantly impact an LLM's capabilities and performance.
    26:08 🔒 Personalizing LLMs offers customization, data privacy, and security benefits, especially for proprietary or sensitive information.
    26:52 ⚙️ Hosting personalized LLMs locally or on your own instance ensures data confidentiality and control over model behavior.
    27:06 🧐 Prompt engineering can significantly influence an LLM's response. Crafting effective prompts and rephrasing questions can yield desired results.
    32:05 🧠 Memory-centric LLMs respond based on their training data, while retrieval-centric LLMs fetch data from live sources. Striking a balance between the two is a challenge.
    37:08 💡 H2O GPT allows you to ground an LLM by providing it with dynamic unstructured data, enabling it to respond based on your data and cite sources when necessary.
    38:03 🔄 Fine-tuning LLMs involves modifying a pre-trained model's behavior, such as its tone, style, or response format, to better suit specific use cases.
    45:20 🚀 H2O GPT and LLM Studio offer tools to interact with LLMs, fine-tune them, and ground their responses in specific datasets, making them more useful and context-aware.
    53:55 🖥️ H2O LLM Studio allows you to fine-tune language models easily for your specific tasks with state-of-the-art training techniques.
    55:20 🤖 Experiments in H2O LLM Studio let you fine-tune models using your own datasets, and you can track the training progress and chat with the model as it learns.
    57:00 🧠 Generative AI can improve various tasks, and it's crucial to identify which tasks can benefit from it and then work backward to determine whether fine-tuning, H2O GPT, or prompt engineering is the solution.
    58:49 💻 The GPU requirements depend on the model's size, and the number of instances needed for multiple users depends on query frequency and latency tolerance.
    01:00:50 🔄 Decide between in-context learning and fine-tuning based on whether you need to provide the model with new information (context) or adjust its response style and voice (fine-tuning).
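
    A minimal sketch of the in-context learning half of that last point (an illustration, not code from the webinar): new information is pasted into the prompt at inference time, whereas fine-tuning would instead update the model's weights. The model name and prompt below are placeholder assumptions.

        # In-context learning sketch: the answer is grounded in context supplied
        # in the prompt rather than learned through fine-tuning.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")  # placeholder small model

        context = "Acme's return policy: items can be returned within 30 days with a receipt."
        question = "How long do customers have to return an item?"
        prompt = (
            "Use only the context to answer.\n"
            f"Context: {context}\n"
            f"Question: {question}\n"
            "Answer:"
        )

        out = generator(prompt, max_new_tokens=32, do_sample=False)
        print(out[0]["generated_text"])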

  • @behzadkhosravanimajd
    @behzadkhosravanimajd 22 days ago

    Amazing webinar, and much appreciated that you shared it with us. 🙂💙💙💙

  • @consig1iere294
    @consig1iere294 1 year ago +7

    This was great! I would really appreciate it if the H2O team could expand more on h2oGPT and LLM Studio. An in-depth tutorial would be great for beginners.

  • @alx8439
    @alx8439 1 year ago

    Guys, you're gems. I've been following what you've been doing for the last half a year - thanks a lot for that.

  • @jaffarbh
    @jaffarbh 1 year ago

    That was brilliant. Thank you, H2O team. I have an Nvidia A2000 with only 6 GB of VRAM. I will try out the quantization method to load a small LLM. Fingers crossed it works. BTW, quantization converts 32-bit or 16-bit floating-point weights into 8-bit integers, reducing memory requirements significantly with little impact on accuracy. Recent GPUs have the hardware capabilities to accelerate this process.
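
    A minimal sketch of 8-bit quantized loading via the transformers/bitsandbytes route (an illustration, not h2oGPT's own loading code; the model id is just an example, and a 7B model at 8-bit is roughly 7 GB, so a smaller model or 4-bit may be needed to fit a 6 GB card):

        # 8-bit quantized loading sketch; assumes transformers, accelerate, and
        # bitsandbytes are installed and a CUDA GPU is available.
        from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

        model_id = "h2oai/h2ogpt-4096-llama2-7b-chat"  # example model id
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # load_in_4bit=True roughly halves memory again
            device_map="auto",
        )

        inputs = tokenizer("Explain quantization in one sentence.", return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))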

  • @geoff1337
    @geoff1337 1 year ago

    This looks very promising. It will be great when you can feed in PDFs and other docs.
    I'm also interested in one where you could feed in financial data - stock history, etc.

  • @A.M.8181
    @A.M.8181 1 year ago +1

    How can I handle a large volume of PDF documents, with some files reaching 1000 pages, and efficiently search for specific information within them? Should I use a vector database for this purpose, and how can I organize everything effectively? Have you had any experience with similar tasks, where companies have asked you to train an LLM on their databases?
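
    A minimal retrieval sketch for that kind of PDF search (an illustration of the general approach, not h2oGPT's internals; at scale the in-memory search below would be replaced by a real vector database, and the file name, chunk size, and embedding model are assumptions):

        # Extract PDF text, split into chunks, embed them, and rank chunks by
        # cosine similarity to a query. Requires pypdf, sentence-transformers, numpy.
        import numpy as np
        from pypdf import PdfReader
        from sentence_transformers import SentenceTransformer

        def pdf_to_chunks(path, chunk_chars=1000):
            text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
            return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

        chunks = pdf_to_chunks("big_manual.pdf")            # hypothetical 1000-page PDF
        embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model
        chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

        query_vec = embedder.encode(["warranty terms"], normalize_embeddings=True)[0]
        scores = chunk_vecs @ query_vec                     # cosine similarity (vectors are normalized)
        for i in np.argsort(scores)[::-1][:3]:              # top 3 matching chunks
            print(round(float(scores[i]), 3), chunks[i][:100])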

  • @ΚατερίναΣταμούλη-β5ψ
    @ΚατερίναΣταμούλη-β5ψ 7 months ago

    6:37 Where is #Grok?
    Why isn't it in here, since Grok has the largest database?

  • @wissamMetawee-tf9ui
    @wissamMetawee-tf9ui 1 year ago

    Hi Rob, great content as usual. I was hoping you had any videos on how to use Code Llama with llama-cpp-python locally on Windows.
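
    A minimal llama-cpp-python sketch in the meantime (an assumption-laden illustration: it presumes llama-cpp-python is installed and a GGUF build of Code Llama has already been downloaded; the file name and parameters are examples, not a specific release):

        # Completion-style call with llama-cpp-python on a local GGUF model.
        from llama_cpp import Llama

        llm = Llama(
            model_path="codellama-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
            n_ctx=4096,        # context window
            n_gpu_layers=0,    # CPU-only; raise if a GPU-enabled build is installed
        )

        out = llm("Write a Python function that reverses a string.",
                  max_tokens=128, temperature=0.2)
        print(out["choices"][0]["text"])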

  • @jacques42
    @jacques42 1 year ago +1

    Thank you for this. I have joined your Discord channels and find that there are a lot of installation questions about h2ogpt and getting it to use a model. I am one of those people and would welcome someone who can assist us there with technical answers and solution ideas. Thank you.

  • @roshangeorge796
    @roshangeorge796 1 year ago

    Sir, I need to delete the models h2ogpt has downloaded because they take up a lot of space, but I was not able to do it. Can you explain how?

  • @mansigrover359
    @mansigrover359 10 months ago

    Can we create a PPT from h2oGPT?

  • @mezxo
    @mezxo 1 year ago +1

    Is it possible to use this model with AMD cards?

    • @H2Oai
      @H2Oai 1 year ago +2

      At this time it is mainly NVIDIA cards with CUDA that are supported.

    • @BoominGame
      @BoominGame 1 year ago +1

      In theory you could, with ROCm, or with llama.cpp via GPT4All (which bypasses your Radeon entirely). In practice, CPU inference with llama.cpp is very slow, and ROCm never works for many reasons across the whole stack - mainly because Triton, Python, and the environments fall flat from one day to the next... unless you are ready to plumb the depths of your patience, don't try.

    • @odai.shalaan
      @odai.shalaan 1 year ago

      Can I have your account? I want to communicate with you. Because you are Arab, we understand each other

  • @alexanderroodt5052
    @alexanderroodt5052 1 year ago

    The most user-unfriendly LLM app I have encountered to date, and I have used almost every single one there is. Ambiguous AF when performing tasks. 1/10 experience.

    • @unkim7085
      @unkim7085 1 year ago

      Any better suggestions? I was using oobabooga.

  • @3nityC
    @3nityC 1 year ago

    How about text to speech?

  • @BoominGame
    @BoominGame 1 year ago

    H2O is great software and runs smoothly - even on the Xwin model it was very impressive. Unfortunately I was disappointed because it put my cache on C:/ instead of the drive I installed it on. If people change the install drive, it's because they know how much space they have - why install .cache on C:/ and junk my whole drive?
    So many options, and not one to change the download drive?
    UAT failed!
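
    If the cache in question is the default Hugging Face model cache that most transformers-based apps write to, it can usually be redirected with an environment variable before the app starts - a hedged suggestion, since whether this covers every cache h2oGPT writes is an assumption:

        # Redirect the Hugging Face cache to another drive by setting HF_HOME
        # before transformers/huggingface_hub are imported. The path is an example.
        import os
        os.environ["HF_HOME"] = r"D:\llm-cache"   # hypothetical folder on the install drive

        from transformers import AutoTokenizer    # downloads now land under D:\llm-cache
        tok = AutoTokenizer.from_pretrained("gpt2")
        print(type(tok).__name__)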