Great stuff. Maybe you could also do a follow-up using embedded vectors to get code search/creation from natural-language requirements, but also 'reengineering' those requirements from given code, maybe using code2seq tools? Thx!!
Can a subsequent SFT and RLHF with different, additional, or lesser content change the character of a GPT model, or improve or degrade it? Can you modify a GPT model?
I have built a platform to learn data science practically: a learner chooses a dataset, then follows a step-by-step guide with AI to build the model, covering the considerations while building it. It also covers how to derive a business question/problem and draw a path to making a decision on it. The software does everything from A to Z, but at the moment it's limited to small datasets (about 2 GB max) and no computer-vision models. If possible, I am looking for a place to sell this platform, or I'm interested in a collaboration deal.
Great video sir! Can you please make one end-to-end video on building an LLM by tuning a pre-trained model for our use case, without an OpenAI key, so students like me can build an LLM project and include it in our resume?
@krushnaik please cover it from scratch so everyone can understand.. we are new to LLMs
Can we just use a SerpAPI key and make a Chainlit application? As a student I cannot buy OpenAI's API key.
What an amazing library!!! Thank you sir for uploading this video.
Thanks a lot, please continue publishing AI content like this. Your videos are really interactive and help me understand new tech.
Thanks Krish for your GREAT videos.
Please consider building opensource ChatGPT like models and how to fine tune them.
Be amazing as usual.
❤
Is there a way to deploy Chainlit on the cloud, so others can use it?
Where is the code???
Can you please provide the code for the above-mentioned implementation?
Thanks for the video, and please continue with MLOps tools.
Hi Krish, is there any way to generate a GPT/LLM model using our own data, due to data privacy? Please clarify.
Thank you Krish, hope creating a UI with Chainlit will be easy 👍
Please upload a tutorial on creating algorithms for analysing data, like 5PL, 4PL, quadratic, log-log, and exponential.
Thanks SIR ❤🎉
Can you make a video on performing authentication using Chainlit?
AttributeError: partially initialized module 'chil' has no attribute 'langchain_factory' (most likely due to a circular import)
Hey, can anyone help me with chat history in the frontend? Like, how can we add it in a custom frontend?
Hello Sir, how will this be useful for HR analytics / people analytics?
couldn't we just use simple Tkinter to create a UI in Python?
Sir, please make a playlist on how to build an LLM from scratch to advanced.
Can we use this in an organization by providing some customized data sources, which it can search to provide us the information?
Can we run it using Docker or docker-compose? Locally it's running fine. If it doesn't run on Docker then it's no use :(
Can OpenAI function calling be used instead of LangChain agents?
Amazing!
what does the -w flag do?
Please tell us how to do EDA with ChatGPT.
more on this brother please : Chainlit
Can someone drop me the api key
Hi krish please upload tutorials if possible of using GPT4All and langchain which allows us to chat with our pdf files or csv without using OpenAI
Updated Chainlit code:
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
from chainlit import on_chat_start, on_message
from chainlit.message import Message
import chainlit as cl
from langchain_groq import ChatGroq

template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = ChatGroq(groq_api_key="")  # add your Groq API key here

# Instantiate the chain once; it is stored per user session below
llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)

@on_chat_start
def main():
    # Store the chain in the user session
    cl.user_session.set("llm_chain", llm_chain)

@on_message
async def handle_message(message: Message):
    try:
        # Extract the plain-text question from the Message object
        question = message.content
        # Retrieve the chain stored for this user session
        llm_chain = cl.user_session.get("llm_chain")
        # Call the chain asynchronously with the extracted question
        res = await llm_chain.acall(question)
        # Send the response back to the UI
        await Message(content=res["text"]).send()
    except Exception:
        # Handle the error gracefully
        await Message(content="An error occurred. Please try again later.").send()
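The prompt substitution in the handler above can be previewed without any LLM call. A minimal sketch using plain Python string formatting, mirroring what `PromptTemplate.format` does (no LangChain, Chainlit, or API key required; `render_prompt` is a hypothetical helper name):

```python
template = """Question: {question}
Answer: Let's think step by step."""

def render_prompt(question: str) -> str:
    # Substitute the user's question into the template,
    # the same mapping PromptTemplate.format performs
    return template.format(question=question)

print(render_prompt("What is the capital of France?"))
```

To launch the app itself, `chainlit run app.py -w` starts the server; the `-w` (watch) flag auto-reloads on file changes (assuming the script is saved as app.py).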
I'm new to this; this is great and has got me started. I just tried your code — why do I get a different response to the same question from the chatbot?