Mate, you have no idea how much I like your vibe. I'm not the only crazy one out there! 🎉🎉🎉🎉
He’s destined to be a GREAT content creator.
I appreciate that! I know there must be crazy people like me out there! Really glad you enjoy the content.
Man, sometimes I feel like I see the future, but nobody around me really understands what is coming, what is possible. You and this community are the only ones I truly feel are on the same wavelength. The possibilities are opening up like a new continent to be explored, and you, my man, are the Neil Armstrong of this exploration!
I really appreciate that and I’m glad you’re down for the ride 😁
@technithusiast Let's gooo!!
I'm totally invested in your whole journey. My only concern is that I'm trying to figure out how to integrate the free chat bot that I have, because I can see myself running up a very large bill with ChatGPT doing all of this. My house is super automated, and if I integrate your solutions, which I'm trying to, it will be very costly. Thank you so much for all of this. You've inspired me to do some very interesting and cool things.
Come along for the ride! I'm exploring local LLMs and hope to create a video soon on what I discover. The space is still new and volatile, so videos on the topic tend to have a short shelf life.
@technithusiast I totally understand. But like I stated, I can't see myself spending that much money based on how much I would be using your solution. A lot of things in my house run off text-to-speech, and it would be a massive cost. I'd definitely be sleeping on the couch if I did that, if not in my truck.
I got you. Stay tuned!
Man, your content is good. You deliver your message in such a different way from the thousands of people out there that you are destined to explode your audience at any moment. Thank you!
I'm waiting for that moment. Other viewers share the same sentiment, but it's been a pretty slow grind so far. Hopefully my time will come soon 😁
First time the algorithm suggests you, and I love the intro, the glossary, the explanation, the soft tones. Dude, you're nailing it!
I’m glad you’re enjoying the content! If this is your first time seeing me, you should check out these two videos that viewers seemed to love:
I Installed Chat GPT In Home Assistant And The Results Were Amazing!
th-cam.com/video/lXOtpL8iMgk/w-d-xo.html
GPT TOOK OVER MY HOME - I learned why it's SCARY | Chapter 4
th-cam.com/video/4ZxUsLnDjTA/w-d-xo.html
Just discovered your channel recently... You are a genius! When I started HASS a few years ago, I wanted to minimize Node-RED's tasks, but you have found such great ways to combine the two.
Really glad you like the video! I have many more videos that give great automation examples
Damn! Great breakdown as always, but how did I not know about the Render Template node! I've been doing this in functions forever. This is easier and, more importantly, easier to maintain. I get that this wasn't the point here, but thanks for "showing" as well as "telling". I know what I'll be iterating on this weekend!
I'm glad you like the video. And I feel you about the Render Template node! When I found it, I revamped all my automations away from the function and get-entities nodes. Game changing.
The knowledge level, storytelling, content breakdown and enthusiasm are just top notch!! Great channel, Michael! Thanks for sharing. +1 sub
Hey, I'm really glad you like it. Doing my best to make this content captivating through storytelling and engaging examples.
Just commenting so that when you blow up I can say I was here at 12k👌👌
Awesome content though, I look forward to seeing more from you!!
Hahaha 🤣
Excellent job Michael!
Congratulations on yet another amazing video. Keep it coming.
Would you consider trying local LLMs one day? I would be really interested to see what you could do with it.
I'm glad you liked the video! And I'm looking into using local LLMs. A few folks have asked, and I'm deep-diving it behind the scenes.
+1 to local LLMs please 🙏
I would love to run a low-power, local LLM at home. It’s possible to do today with some GPUs but the energy cost where I live is worse than using OpenAI.
I love how you are pushing the envelope with creativity and sophisticated development! Can’t wait to have some time to start following in your footsteps. Please keep it up! The subscribers will come, these things take time and commitment to get going, and then they steamroll😊
I really appreciate the encouragement. I’ll keep fighting the good fight 😬
I like your story telling and can see your passion. Keep up the great work!
Thank you! Will do!!!
🤯 it's really awesome how you built that up. Congratulations, it's really exciting to see the possibilities.
I appreciate it! I'm trying to move quickly in this space because I'm certain tech companies will start monopolizing and gatekeeping it, taking the innovations we come up with for obscene profits.
@technithusiast The open-source AI community is thriving massively. Custom APIs and models are extremely easy to run now and very competitive. I believe the HACS (Extended_openai_integration) AI chat integration's source code is completely open source and actively maintained. It's true that big tech is gatekeeping certain advanced assets and utilities, but for our use case this won't be a problem. We should worry more about foolish, poorly written legislation ushering in such gatekeeping as lawmakers try to restrict access in misguided attempts to protect the public. But even that looks unlikely to stick, as open-source projects and knowledge keep expanding. The models available are insane. New services also pop up each month that let us run large open-source models on remote compute with pre-established APIs.
LLMs integrated into HA are a game changer. My wife can't write Home Assistant automations, but now she only has to ask the assistant to write the automation. My family hardly ever used the old assistants, but now they speak to them like family. It's nice to just ask whether any windows are open, without wiring anything up in Node-RED, and get intelligent, useful follow-up questions without a trigger.
Also, your personality is perfect for content creation. You even captivated my teenage daughter, who couldn't give a damn about home automation. Your content is informative and dope!
I appreciate it. Trying to work on my storytelling skills 😬. It's great having cool ideas, but if I can't get anyone interested in hearing them, then they're nothing more than fantasies in my head.
This was great content. Good luck to you, subscribed forever, LOL. This is great learning content!
Thanks for the sub! Really glad you like the content!
Great videos... I've been watching for a while. Our pain points are always very similar. Thanks! Also, I create things very similar to what you do... :)
Glad you're enjoying the content! If my pain points resonate with you, you may enjoy hanging out as a channel member! I have extra content, and it's a great way to support the channel: youtube.com/@technithusiast/join
I've been watching since your second video. I couldn't believe how polished you were then, and your style and content keep getting better! Thank you for what you've done and what you're building.
I appreciate the compliment and I hope you continue to enjoy!
This would be much more reliable if AI was only used where it was absolutely necessary. You could have a response template and hard-coded filters to check for datetime differences to make sure it only shows things from today.
Of course! And that's where I started when I first created the automation last year. But after I learned how to use LLMs in my automations, existing automations like this became more scalable and dynamic. I no longer need complex filter logic or hand-built string templates and loops. I simply give the system the data, use plain English, and it works.
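A minimal sketch of the approach described in that reply: instead of hard-coded datetime filters and string templates, hand the raw event data to the model with a plain-English instruction. The event shape, prompt wording, and function name here are illustrative assumptions, not the creator's exact setup.

```python
import json
from datetime import date

def build_briefing_messages(events):
    """Build chat messages asking an LLM to announce only today's events.

    The filtering ("only today") lives in the English instruction, not in
    code; the raw events are passed through as JSON untouched.
    """
    system = (
        "You are a home announcer. From the JSON events below, announce "
        f"only those that occur today ({date.today().isoformat()}), "
        "in one friendly sentence."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": json.dumps(events)},
    ]

# Hypothetical calendar data, as it might arrive from Home Assistant
events = [
    {"summary": "Dentist", "start": "2024-05-01T09:00:00"},
    {"summary": "Team sync", "start": "2024-05-02T10:00:00"},
]
messages = build_briefing_messages(events)
```

These messages would then go to whatever chat-completion node or API call the flow already uses; the trade-off is less brittle logic in exchange for a model call per briefing.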
I'm glad I'm not the only one who has to keep one headphone off an ear. The mic monitoring on these headphones isn't very strong, so it's hard to speak in meetings.
You want to reduce the top-p sampling a little to make the LLM more predictable.
When you're editing to correct yourself, you can just delete the audio and replace it with the new audio, and put it on screen.
Love it!!
Thank you!!
I'm very interested to hear about your experience with local A.I. 🙂 What kind of hardware are you running?
My Home Assistant server at the time was running on a VM on my NAS, but I have since moved it to a Raspberry Pi 4. My Node-RED server is in a Docker container on my NAS. To talk to GPT, I use a custom Node-RED plug-in that connects to OpenAI.
@technithusiast What about for local AI?
That's what I'm trying to get working. I tried installing it via Docker on my NAS, as it technically hit the minimum requirements (to the best of my knowledge), but every time I try a query it hangs and then fails with a non-descriptive error message. I also tried setting up a VM on the NAS, but I think that level of inception may be too weak to handle the LLM. At this point I might set up something simple on my Mac Studio just for testing, but I would like to find a long-term, scalable (and cost-effective) solution that I can share with my audience.
Brilliant
Glad you like it!
I have a question: at 27:30 you have a node called "Advanced Assist" with 4 different options. How do you define what is a question and what is a command?
Sorry for the late reply, and thank you for the interest! That is a sub-node and contains a ton of other nodes and logic. At some point I ask GPT to determine if the statement is a question or a command, and based on the answer the output will change.
@technithusiast Got it. I have something very similar: I use GPT-3.5 to identify whether it is a question or a command and call a specific intent, then I have to use GPT-4 (very expensive) to extract the data from the message and respond.
I previously had those as separate calls, but I managed to combine them into one call by using OpenAI's function tool. When I send data, the function has a parameter "isQuestion", plus the additional properties I need to do the work locally.
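A sketch of how that combined call could look with OpenAI function calling: the model fills in every field of one tool schema at once, so a single request both classifies the utterance and extracts the extra properties. Only "isQuestion" comes from the reply above; the tool name and the other fields are hypothetical.

```python
# Hypothetical tool schema combining classification and extraction in one
# call. Pass it as tools=[ASSIST_TOOL] in a chat-completions request; the
# model returns all arguments together instead of needing two calls.
ASSIST_TOOL = {
    "type": "function",
    "function": {
        "name": "handle_utterance",  # hypothetical name
        "description": "Classify a smart-home utterance and extract fields.",
        "parameters": {
            "type": "object",
            "properties": {
                "isQuestion": {
                    "type": "boolean",
                    "description": "True if the user asked a question.",
                },
                "intent": {  # hypothetical field
                    "type": "string",
                    "description": "e.g. 'lights', 'calendar', 'climate'",
                },
                "entities": {  # hypothetical field
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Devices or rooms mentioned.",
                },
            },
            "required": ["isQuestion"],
        },
    },
}
```

The downstream flow then branches on the returned "isQuestion" value and handles the extracted fields locally, which is what collapses the two paid model calls into one.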
Do you have a video showing the ‘backoffice’ of the node - Advanced Assist?
Hi, thanks for the interest. That node is rather complex, so I don't have a video for it; it tends to cause more questions than answers. I typically reserve content like that for channel members, as I'm better able to answer their questions and dive deeper to help them get it working.
@technithusiast I wasn't aware that you started a channel membership. So as a member I would get access to more details and background knowledge?
Yup. It's a small community, but I've posted a lot of automations there that don't make it to the public. One of the great advantages of our small size is that I can try out interesting ideas. For example, some members will post a copy of their automations, and I respond with a video of how I would reimplement them in Node-RED.
You can check out details about the channel membership here: www.youtube.com/@technithusiast/join
Hello,
can we use another LLM? (e.g. Mistral AI)
I'm currently working on creating a local-LLM node. Stay tuned!
In the 2.5 version I could not get get_events to actually work with the template. It just does not get the events, even though I can see that it gets the event in the full message.
Not entirely sure what the issue could be. I would first check to make sure your system is up to date, as well as your Node-RED integrations. At the time of making the video, some of the methods were being deprecated, so there is a chance that you could be using a deprecated function.
Respect.
🫡
would be good to see this run using a local AI :)
No worries! I'm working on a feature to allow the use of local AI 😁
@@technithusiast looking forward to that then