Hey, the video is really helpful. I have a doubt: can we use LangChain for inference? LangChain has output parsers that are helpful for getting a desired output.
@@vetrivels505 Yes, you can definitely use it. Unsloth just makes inference and training faster compared to other libraries; if speed isn't a concern, you can use any other module for inference.
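A minimal sketch of what that could look like (not from the video): the fine-tuned model is loaded as an ordinary Hugging Face text-generation pipeline, wrapped with LangChain's HuggingFacePipeline, and chained with a JsonOutputParser so the raw completion is parsed into structured output. The model path, prompt, and example text are placeholders.

```python
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import JsonOutputParser

# Load the fine-tuned (e.g. merged/saved) model as a standard HF pipeline.
# "path/to/your-finetuned-model" is a placeholder for your own checkpoint.
hf_pipe = pipeline(
    "text-generation",
    model="path/to/your-finetuned-model",
    max_new_tokens=256,
    return_full_text=False,  # only return the generated continuation, not the prompt
)
llm = HuggingFacePipeline(pipeline=hf_pipe)

# Ask the model for JSON and let the parser turn the raw text into a Python dict.
prompt = PromptTemplate.from_template(
    "Extract the product name and price from this text as JSON "
    'with keys "name" and "price":\n{text}'
)
chain = prompt | llm | JsonOutputParser()

result = chain.invoke({"text": "The UltraWidget 3000 costs $49.99."})
print(result)  # e.g. {"name": "UltraWidget 3000", "price": 49.99}
```

Whether the JSON parses reliably still depends on how well the fine-tuned model follows the formatting instruction; the parser only handles the extraction, not the generation quality.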
Great video! What is the max input length for Llama 3.1?
It's 128K tokens, if I remember correctly!