Great content, subscribed earlier today from your n8n video.
Thank you, I appreciate it a lot! :)
Crushing the content m8. Let's gooooooo.
LLMs are getting so much better. Not where I "need" em yet, but we are already incredibly spoiled to have this kind of power available to us :P
Thank you!! And I totally agree, it's crazy that I complain about the "weaker" Llama 3.2 models when even the 1b parameter version would have been unbelievable 4-5 years ago.
I've found putting basic instructions for the available tools into the system prompt helps. Like 'you've got a bunch of tools available, x for working with Asana, etc. If you call x, make sure you do y', and so on.
Cool video tho, I've not tried LangGraph before
@@arinco3817 Thank you and yes I appreciate you calling that out! That's actually one of the things I had in mind specifically when I said at the end you could probably make it work for "weaker" models if you really want. Just takes extra work but if you want to run locally it's worth it!
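For anyone who wants to try arinco's suggestion, here's a minimal sketch of the idea, assuming a local Llama 3.2 served through Ollama with the langchain-ollama package; the create_asana_task tool is hypothetical and just illustrates the shape of the prompt:

```python
# A minimal sketch of the "describe your tools in the system prompt" idea,
# assuming a local Llama 3.2 served by Ollama via the langchain-ollama
# package. The Asana tool here is hypothetical and just shows the shape.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def create_asana_task(name: str, due_on: str = "today") -> str:
    """Create a task in Asana with the given name and due date."""
    return f"Created task '{name}' due {due_on}"  # stub for illustration

SYSTEM_PROMPT = (
    "You have tools available. Use create_asana_task whenever the user asks "
    "to add, create, or schedule a task. Pass a short task name, and a due "
    "date in YYYY-MM-DD format if the user gives one. If no tool fits, just "
    "answer normally and never invent tool output."
)

llm = ChatOllama(model="llama3.2").bind_tools([create_asana_task])
response = llm.invoke([
    SystemMessage(content=SYSTEM_PROMPT),
    HumanMessage(content="Add a task to finish the report by Friday."),
])
print(response.tool_calls)  # weaker models follow explicit guidance more reliably
```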
Great video. You explain things well and I learned a lot. Subscribed!
Thank you, I appreciate it a lot!!
Fine-tuning will probably make it work.
You can use synthetic data from GPT-4o for the tuning.
@@navotdk Yeah, fair point! And this is something I'm actually going to be exploring in the near future! It's too bad it's necessary for Llama 3.2 when it isn't even for GPT-4o-mini, but local LLMs are often a requirement for a use case, so fine-tuning is an awesome option to make it work.
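For context, a hedged sketch of what that synthetic-data step could look like with the OpenAI Python client; the tool schema, example prompts, and output file name are illustrative assumptions, not a prescribed pipeline:

```python
# A sketch of generating synthetic function-calling data with GPT-4o to
# fine-tune a local model on. The tool schema, prompts, and output file
# name are illustrative assumptions, not a prescribed pipeline.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "create_asana_task",
        "description": "Create a task in Asana",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "due_on": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["name"],
        },
    },
}]

# In practice you'd generate hundreds of varied prompts, not two.
prompts = [
    "Remind me to email the vendor tomorrow.",
    "Schedule a task to review the Q3 budget by 2024-10-15.",
]

with open("synthetic_tool_calls.jsonl", "w") as f:
    for user_msg in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_msg}],
            tools=tools,
        )
        calls = resp.choices[0].message.tool_calls
        if not calls:  # GPT-4o answered in plain text; skip this example
            continue
        # One training record per prompt: user text -> target tool call.
        f.write(json.dumps({
            "prompt": user_msg,
            "function": calls[0].function.name,
            "arguments": json.loads(calls[0].function.arguments),
        }) + "\n")
```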
Found these new models subpar, but the best local LLMs I have seen for function calling.
Yeah that's the same experience for me! Huge bummer they aren't as good at function calling as GPT-4o-mini, but still a lot better than Llama 3.1.
Do you have any implementations with n8n and OpenRouter?
@@nanaboakyeoseitutu6896 Great question! I do not since n8n doesn't always have the models I want to test with. But I still really like the idea and will probably implement it in the near future!
@@ColeMedin You can use all the open source models in n8n with Ollama.
@TurkerTUNALI That's true! There are some platforms I like to use sometimes that aren't in n8n though, like Together or Fireworks.
So... Meta lies in the benchmarks and we're still hostage to OpenAI?
@@edengate1 Seems so, at least for some things! I don't think Meta lied, it's just that specifically for function calling, Llama 3.2 is weaker than GPT-4o-mini even though it compares well for a lot of other things. You can always work with the prompts or fine-tune to make function calling work for Llama 3.2! Just more work that might not be worth it if you can just use GPT-4o-mini right out of the gate.
Great tests, thanks!
Of course, thank you!
Nice comparison!
Have you been able to use the vision abilities with Llama 3.2?
I'm interested in learning how to do it. Tried it in LM Studio and in Open WebUI but it doesn't really recognize the image input. LLaVA does work with vision out of the gate using Open WebUI, but it's kinda terrible, and they haven't added the new Pixtral yet.
Anyway, thanks for posting these, they are very helpful. I'll give 3.2 with function calling a try in some of my chains and see how it does. I wonder if the 3B models are just not trained on function calling at all?
Thank you Oscar!
I have not tried the vision capabilities for Llama 3.2 yet. It's SUPER cool, don't get me wrong, but my use cases really don't benefit from it at this point. But I'd love to explore it more. Sorry to hear it doesn't seem to be working in LM Studio and Open WebUI for you.
Good luck trying Llama 3.2 in your chains! Yes, it really does seem the smaller models aren't trained on function calling at all. 11b seemed to be trying to spit out function calling syntax (its responses started with ""), but it never did it successfully even after trying for a while. 3b and 1b didn't even try.
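If you want to reproduce that check locally, here's a minimal sketch using langchain-ollama; the model tags are assumptions based on the standard Ollama names for the 1b and 3b variants, so adjust to whatever you have pulled:

```python
# A quick check of whether a local model emits structured tool calls at
# all, in the spirit of the test described above. The Ollama tags below
# assume you've pulled the 1b and 3b text models; adjust to what you have.
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"  # stub; we only care whether it gets called

for tag in ["llama3.2:1b", "llama3.2:3b"]:
    llm = ChatOllama(model=tag).bind_tools([get_weather])
    reply = llm.invoke("What's the weather in Paris?")
    # A model trained for function calling returns structured tool_calls;
    # one that isn't answers (or mangles the syntax) in plain content.
    print(tag, "->", reply.tool_calls or reply.content[:80])
```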
Any issues with your eyes?? (thumbnail)
@@michabbb Haha no issues with my eyes! Just a silly thumbnail photo. What seems off to you? 😂