The magic begins! You have piqued my interest in self-hosting n8n (I may need to upgrade my MacBook) and building a few automations… Obsidian being my destination to accumulate all the content I discover on YT, podcasts, news articles, and various and sundry RSS feeds. Thank you so much! You are presenting and teaching at the exact level that I'm at right now.
Ditto ;)
Thanks for all your great vids Matt. Would be great to see a vid combining Ollama with OpenWebUI using n8n as "tools".
Good to hear the mentions of the history of workflow automation. Solid Respect.
You are the best teacher in the tech world.
Hey Matt,
I came across your YouTube channel a while back and have been watching since.
Being in the AI space myself from both a professional and personal perspective, I've found your videos valuable.
Good luck with the road to 1million! Keep putting out the content you have and I'm sure you will get there 💯
Dude these are awesome. I’m an experienced dev and your channel is helping me cut through the fat of gimmicky AI tools. Keep it up bro.
This is one of the best videos you’ve made. Same with the last SearXNG one you did. On a roll.
I mainly use Node-RED as my low-code and AI workflow platform. Yes, I know it is more rudimentary than n8n, but back when I used n8n for my various email-related workflows I wasn’t happy with the performance, as it often stalled doing things. My Node-RED instance sits in Home Assistant (so it can also easily access my smart home) and my Ollama installation on a discrete machine in my 19” rack. I’m doing all kinds of stuff like summarizing and filtering my emails, and having it tell me (via Home Assistant text-to-speech) that my various devices have finished working, or that my car finished charging and how long it took / how much electricity it consumed, all with different and funny texts. Most of my Ollama knowledge I got from this channel 😉
That’s super cool
Thank you! We are checking out n8n right now!
Thanks for the video. We want more n8n videos!
What a wonderful fella, love the video!
Thanks for posting another great video. I’m going the try n8n tomorrow.
Explained a lot to me, thank you! These webhooks and APIs are something I battle with.
I basically threw away two years of building something like n8n and have no regrets; it is so well thought out and has soooo many integrations!
I like the “random” tweet; will be doing that soon.
Great video, I’ve been using N8N too, so it is very insightful to see what you are doing with it.
And “Joyent Division”, great shirt Matt.
Thanks.
Matt, I can't believe you are covering n8n. I just found it after a week of searching for an alternative to Make. Man, I'm so glad I found your channel. Brilliant explanation, by the way.
Thanks
This is what made me drop everything and focus on AI. When I saw Matt Wolf embed a call to an LLM from within a workflow, my world changed.
For me, this showed a way of blending deterministic tasks with inference. It showed a glimpse at how AI could be utilized to make traditional approaches work better without falling victim to what AI was not good at - deterministic tasks.
I hope n8n can continue to find a way to blend the corporate and community model.
NOW: Is it useful to use Huginn to trigger or act as an agent for your workflow, e.g. to trigger your webhook?
Thanks, Matt. I will certainly try it out. Subscribed & Liked.
love your n8n video series!
Really enjoying the content and appreciate the time and effort you put in 😊
Very glad you got me on to n8n, I think it's the missing piece to a range of problems I've been searching for a while.
Very interested in custom node creation, I'll check out your catalogue and keep an eye on future videos
I would love to see more videos on using n8n and how you work with Ollama directly!
Awesome video Matt! Many thanks for sharing this.
This is very interesting.
Video very much appreciated. Taking a side detour to look at standing up a Windmill instance in the proxmox cluster to immediately start reducing some workloads, +1 for the extra tip!
thanks Matt, great video. more n8n please, very interesting ...
Phenomenal! I have GPT writing my n8n workflows, then I just paste them into n8n.
Love the presentation!
I use n8n all the time, it's my go-to tool for automation both internally and for my clients.
@@nocodecreative Can it be used inside enterprises as a self-hosted thing, or only SaaS?
Thanks very much Matt, I've been using n8n for the past 2 months and I have created many cool workflows! The one I'm most proud of sends random messages to my parents on WhatsApp, sharing updates about daily events!
This is great thanks. A lot of the trouble I've been having lately is just organizing things and this would really help.
Great Video! subscribed and liked!
Sooper cool videos, super cool person, sooper cool technology. Waiting for more videos on n8n.... ❤❤
Dear Matt - would love to learn more about why you use one instance of n8n locally (npm) and one in the cloud. I am at the beginning of my automation journey and appreciate your style as a teacher. Thanks for all the great content you have shared so far! Simon
I mentioned it in one of the videos. The cloud instance is always available, even when my laptop is in my backpack. But my laptop has a great GPU so it can run AI models locally.
More N8N videos pls
What a great explanation on this topic! Hopefully, there will be a tutorial on how to use this both locally and in the cloud?
For JSON output, the LLM and AI agent nodes have a force-output-format option and do a good job there, btw.
n8n is very interesting, coming from an automation background myself. Docker self-hosted looks the best. Thanks :)
Nice ! That's pretty cool. 👍
I am definitely interested in finding out why you have two instances of n8n and also learning how to do that on my Mac!
Hello. Thank you for the lesson.
Thanks Matt - just subscribed!
node-red is way more versatile than n8n imo, so I wouldn't dismiss it so quickly. I created a chat & image generation interface for Ollama/ComfyUI running in node-red that I can access from anywhere, which in turn can use/fire off anything I want. Based on the trigger words I use in the interface, it is routed to the AI I want. There are thousands of nodes/plugins available for any type of integration. I do agree that the OAuth2 integration out of the box is very nice, as it is a horrible mechanism to deal with; there are nodes for it in node-red too, but they are a bit more finicky. Creating your own API in node-red takes like 5 seconds, which means you can use it in other flows or tools.
Now I want to try n8n again!
I've got two instances of n8n running. One handles an email account, responds to emails, and there are some things I can ask it to do over email. It also summarizes news articles, etc. The other one is for the AI. It handles specific requests. For example, I wrote a Python script that gathers system info on my Ollama server: things like CPU and memory usage, GPU usage for all the GPUs, etc. So when I ask the AI how it is doing, or how it is feeling, it pulls system information and feeds it to the model to write the response. If I ask it to run a full internal diagnostic, it uses that info plus some other info it gathers to tell me about the status of the system it is running on and the subprocesses that handle other parts of the AI.
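A minimal sketch of what that kind of system-info script could look like, assuming psutil is installed and nvidia-smi is available; the commenter's real script isn't shown, so every name here is illustrative only:

```python
import json
import subprocess

import psutil


def gather_system_info() -> dict:
    """Collect CPU, memory, and GPU stats to feed into the model prompt."""
    info = {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
    }
    try:
        # One line per GPU: utilization %, used MiB, total MiB.
        out = subprocess.check_output(
            [
                "nvidia-smi",
                "--query-gpu=utilization.gpu,memory.used,memory.total",
                "--format=csv,noheader,nounits",
            ],
            text=True,
        )
        info["gpus"] = [line.split(", ") for line in out.strip().splitlines()]
    except (FileNotFoundError, subprocess.CalledProcessError):
        info["gpus"] = []  # no NVIDIA GPU or driver available
    return info


if __name__ == "__main__":
    # The JSON blob gets pasted into the prompt so the model can "describe" itself.
    print(json.dumps(gather_system_info(), indent=2))
```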
WOW, Matt, thanks for this special tip. Please show us and explain the AI capabilities of n8n 🙏
Yes please get into the why 🎉
Good. This is about the same kind of task that I wanted to automate through my Ollama machine. I was thinking about auto-scraping news, web pages, and YouTube videos and feeding that information to Fabric to evaluate, summarize, and notify me of the news/information that is worth reading, consuming, and digesting.
Yes, it's very awesome, I'll try it ❤
I’m very interested in learning more about n8n. A couple of months back I had it running locally on my PC, but when I reloaded Windows I didn’t reinstall it, mostly because I was still in the learning stages, and well, some new image generation models (Flux) came out and I was off on that adventure lol. The honeymoon is over and now I’m back to working with LLMs, and I actually just installed Perplexica using your video, thank you. I appreciate all your efforts. Jason aka SouthbayJay
Cool Tshirt Man
That was super interesting. I would love to hear more. I am currently using IFTTT. I was looking to make the move to Make but N8n looks like a better fit.
Awesome video as always, Matt!
You might be interested in checking out Activepieces.
I found it to be more reliable, although it has fewer native integrations. But you can code way better on it if you wish, and this one is fully open source, which I much prefer.
Anyways, n8n is awesome too!
Very interesting video, thank you! I'm currently playing around with Flowise, but this looks even more powerful and intuitive.
Would love to see a video about creating custom nodes in n8n like you suggested!
Matt great video thanks for the content.
Newbie here, I am going to give it a try. I've seen both your n8n video and your SearXNG video. Would be nice to see a video combining them both.
🙏 thank you
Great content, thanks!
Great work Matt! One more follower to help you reach your 1M goal!
Hi Matt, whenever you say "in a previous video I show how to do X", it might be a good idea to add a reference to that video somewhere.
So are you Italian? 😂 thank you so much for the overview!
Matt, make a workflow that tracks your water intake during the day, based on the signal of a sensor in the bottle as you described a few videos back.
Yahoo pipes was genius, I'd forgotten about it TBH
Matt, this video is exactly what I needed! 🚀 The combination of n8n and Ollama for AI automation is mind-blowing. You've inspired me to dive into setting up a self-hosted instance to automate my own workflows, especially integrating news aggregation and summarization for social media. The way you explain things makes it super accessible, even for someone just getting into automation. Keep these coming, I'd love to see more about building out complex workflows using community nodes! Thanks for making AI automation feel doable. 👍
Great content.
Would really appreciate if you'd share more details about your laptop (and the 64gb VRAM setup)!
It’s an M1 Max MacBook Pro. That model had a max of 64GB RAM, and it's unified, so the GPU uses it too.
@@technovangelist - neat 👍🏽
Appreciate the clarification.
I was commenting on comments and deleted one accidentally. Yes, I will do a video about why I installed via npm vs Docker on my local machine. I should have learned my lesson that it’s never good to work with YouTube comments from the hot tub.
Excellent content! I am very interested in any follow-up suggestions you may have, as I believe many viewers would benefit from seeing additional approaches to these processes, even if we are already implementing similar methods.
I just found out n8n has a self-hosted option; would love to see what you can do with it here. It could fit perfectly into my projects.
Super cool, there is also another similar tool: Kestra
Flowise, Langflow, Camel, dust tt, haystack playground - there are tons of them right now
Hi Matt,
I'm interested in the nodes that make it easier, instead of using LangChain and JSON!
I'm very interested in seeing some better n8n Ollama nodes and more.
Really, really wanna see how you can force JSON output with AI.
Also very curious about CrewAI and whether that can be integrated somehow instead of just raw LangChain.
What do you think about using Slack to trigger your automations? Curious, as I haven't delved in yet.
Sure, I did that before on the project we worked on before we started making Ollama, RBAC for Kubernetes. It worked great.
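For anyone curious, a minimal sketch of the Slack-to-n8n direction, assuming the requests library and a self-hosted n8n instance with a Webhook trigger node; the URL path and payload are placeholders (n8n shows the real webhook URL on the node itself):

```python
import requests

# A Slack slash-command or outgoing-webhook handler could forward its
# payload to the n8n Webhook node like this.
resp = requests.post(
    "http://localhost:5678/webhook/slack-trigger",  # placeholder path
    json={"text": "post the daily summary", "user": "matt"},
    timeout=10,
)
print(resp.status_code, resp.text)
```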
Would you be doing a video on how to install and use Llama Stack?
Have you tried the Information Extractor node? It has configuration to output JSON in a defined schema. It's an AI node; not sure if it can use Ollama too? Like passing the output of the first AI node to this node for JSON structuring.
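For reference, a minimal sketch of the same idea done directly against Ollama rather than through the n8n node, assuming the ollama Python client is installed; the model name, prompt, and schema are examples only:

```python
import json

import ollama

prompt = (
    "Extract the sender and topic from this email. Reply only with JSON "
    'shaped like {"sender": "...", "topic": "..."}.\n\n'
    "Email: Hi, it's Sam from accounting about the Q3 invoices."
)
resp = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": prompt}],
    format="json",  # constrains the model to emit valid JSON
)
data = json.loads(resp["message"]["content"])
print(data["sender"], "->", data["topic"])
```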
Is it possible to schedule an automated daily document upload that gets vectorized and added to the knowledge base?
llama3.2 w/ 32k ctx... you don't have to increase it via a Modelfile to create a custom 32k ctx model first?
You do for anything over 2k
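A minimal sketch of that Modelfile step, assuming the ollama CLI is installed and the server is running; the model name "llama3.2-32k" is just an example:

```python
import subprocess
import tempfile
from pathlib import Path

# Bake a 32k-context variant so tools that don't expose num_ctx still get
# the larger window.
modelfile = """\
FROM llama3.2
PARAMETER num_ctx 32768
"""

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "Modelfile"
    path.write_text(modelfile)
    # Equivalent to running: ollama create llama3.2-32k -f Modelfile
    subprocess.run(["ollama", "create", "llama3.2-32k", "-f", str(path)], check=True)
```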
Can we use something like this to log in to a website, copy the page source, and convert it into a spreadsheet, and then take the spreadsheet and upload it to a web app? I do all this manually now and this would be amazing if possible!
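The video doesn't cover this exact flow, but as a rough sketch of the manual steps described, assuming requests and beautifulsoup4, with every URL, credential, and selector as a placeholder:

```python
import csv

import requests
from bs4 import BeautifulSoup

session = requests.Session()
# Log in so later requests carry the session cookie.
session.post("https://example.com/login", data={"user": "me", "pass": "secret"})

page = session.get("https://example.com/report")
soup = BeautifulSoup(page.text, "html.parser")

# Dump the first HTML table into a CSV "spreadsheet".
with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in soup.select("table tr"):
        writer.writerow(cell.get_text(strip=True) for cell in row.find_all(["th", "td"]))

# Upload the spreadsheet to the web app.
with open("report.csv", "rb") as f:
    session.post("https://example.com/upload", files={"file": f})
```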
Can I use n8n with Open Interpreter? Could the Open Interpreter LLM start the workflows on n8n?
Hey, I want to make an automation for my father's day-to-day workflows. For example, let's say: search a particular website and grab a text or result from it, and according to that result go to Excel and update it, then grab some text from Excel and search the website again... is that possible to automate? Like with conditions: if Excel has this text, go to this website and search for that, and if this, then do this?
This is close to what I am looking for. I will admit to being a long-term coder, but reasonably new to AI. What I am looking for is a way to have one LLM running on one device call an LLM running on a different device, then get the results passed back. My thought is to have fine-tuned models for solving specific types of problems, with one model to rule them all. Creating a LangChain tool that SSHes into the other computer sounds like a solution, but not a fun one. Any ideas?
Again, this is very early in the project. I have a bit of a sample test running, but nothing close to solving the problems.
@@s.patrickmarino7289 👀
Motleycrew and the Gorilla LLM agent marketplace come to mind. Technically speaking, adding FastAPI to Python agents can get you a long way.
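To illustrate the FastAPI suggestion, a minimal sketch of exposing a specialized local model over HTTP so the coordinating model on another machine can call it instead of SSHing in; assumes fastapi, uvicorn, and the ollama client are installed, and every name and port here is a placeholder:

```python
import ollama
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class SolveRequest(BaseModel):
    prompt: str


@app.post("/solve")
def solve(req: SolveRequest) -> dict:
    # Route the request to whichever fine-tuned model this box hosts.
    resp = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": req.prompt}],
    )
    return {"answer": resp["message"]["content"]}

# Run with: uvicorn remote_solver:app --host 0.0.0.0 --port 8000
# The "router" model's tool then just POSTs to http://<other-box>:8000/solve.
```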
Please guide me. I want to run Ollama with a 70B q4 or larger model on a 2017 iMac (64GB RAM) + eGPU + AMD graphics card. Here is the choice: a faster graphics card with less memory, or a slower graphics card with more memory. For example, a Radeon VII 16GB (I'd prefer an MI50, but I'm not sure macOS can drive it) or a Radeon Pro Duo (32GB, Polaris)?
I think for that hardware you are going to have to install Linux instead of macOS.
Thank you for the help. I have installed Ubuntu 24 and am installing ROCm.
In Ubuntu 24 with the eGPU, the system often crashed, so I went back to macOS. I recompiled llama.cpp, which can run LLMs using the GPU (Metal). But only the machine's built-in GPU can be used. 😂🤕
@@technovangelist I found a strange phenomenon: in Windows 10, when Vulkan is installed, there are 2 Vulkan GPUs listed, belonging to the 2 GPUs. Choosing under GPT4All between CPU / Vulkan (RX 570, 4GB) / Vulkan (VII, 16GB), the token generation speed of one is slower than the other, which is completely opposite to what I expected.
11:15 what’s “readability”?
a library from Mozilla.
Are we purposely not mentioning ComfyUI? I'm not an expert but it _does_ look the same.
Would love an episode on Ollama-based web crawlers.
I didn’t mention it because it is completely unrelated. I’m not going to automate general stuff with that.
This sounds expensive to run. I'd be curious to know what all the APIs end up adding up to. I really want to do this stuff too, though.
You can host it all on your own computer
@@technovangelist Supabase isn't free, for example, though the X API isn't either... What am I missing here?
You have to have a lot of content for Supabase to cost anything. Most individuals will never hit that. But I am now using NocoDB, which I self-host. And the X API is free for a single app, which I use with n8n, and stuff gets posted from there.
@@technovangelist ah thanks for clearing that up. I'm gonna have to look at using nocodb, trying to keep as much local as possible
Great video! LangChain 😅 Lots of juggling involved; feels like calling APIs in your dreams, very creative and non-deterministic. Any help taming this beast would be appreciated.
Cool, but can you focus more on locally focused services and solutions? And if you do use paid 3rd-party services, can you specify how open source they are and the approximate costs associated with using them for one fully processed flow, so we know what we are potentially heading into? Also, feel free to throw shade at any provider that does not offer BYOK (bring your own (AI) key) or offers limited or no API access.
You should watch the video. I only focused on installing it locally and running it from there. I don't really know how much it costs since I only use it locally.
Every time I see an n8n video I can't stop thinking: why does no one talk about node-red anymore? Is it just a dead horse?
There are a bunch of fans of it in this thread, so it's definitely not dead. I just find it a lot less powerful at these higher-level tasks. If I'm dealing with individual sensors, that's when I use Node-RED... like in Home Assistant; since I have hundreds of Lutron devices and many dozen sensors, it's useful, though so slow on decent hardware.
Isn't n8n supposed to be called Neithan?
Never heard anyone say that
@@technovangelist Wow. At the height of the no-code-tools hype, when n8n was just the decent alternative to Zapier and node-red, I heard an influencer pronounce n8n as Neithan, so the pronunciation stuck with me. I saw it written as n8n in multiple articles since, but I was always reading it as Neithan. Today I checked, and of course you're right.
I think 5 years ago the name nodemation got used a bit, but it never really stuck
As a software developer I'm not a big fan of another web UI. However, I'm a burned-out ex-DevOps, so going back to dev is life or death.
Your laptop has 64gb vram?!?
Yup. MacBook Pro
That was mesmerizing, but I'm kind of lacking the funding to pay $$ to third parties, so I'm trying to build AI agentic workflows through LangGraph...
Good thing everything I showed is completely free
Seems Windmill hasn't been updated in 4 years; doesn't look like it's going anywhere.
By 4 years, do you mean 31 minutes? Not sure where you got that number from. It gets a lot of updates everyday.
@@technovangelist hmmm... You're right, I was looking at a different windmill UI project!
It felt like they sponsored the video; maybe I missed you mentioning it?
I didn’t mention it because they didn’t. This is a tool I have used for years that I really like. You don’t like me sharing tools I like to use? That’s kinda the point of the channel.
@@technovangelist No, don't take it so negatively, please. I didn't mean to imply that. We are constantly bombarded by sponsored video reviews; it just felt like another one. My humble advice: you could explicitly mention that they did not sponsor it.
Great you shared the tool. Knowing it is not sponsored, I will watch it again from a new perspective.
I love your content, especially the course! Thanks! 😊
Really want this, but locally hosted and without any 3rd-party monthly paid SaaS services. Push against the AI overlords. Support open source and locally run everything.
You should watch the video. That’s what it shows. There is no monthly service required. It’s locally hosted.
Nice shirt 😂
Your site is not that good on mobile.
My site is not that good on desktop either
Why not Node-RED? It's open source and much more powerful than n8n, and you can use custom plugins from npm. I use Node-RED with Ollama and Home Assistant. In fact, Node-RED talks to all the MQTT and REST stuff. I self-host it all, though.
I have to suffer through Node-RED too with Home Assistant. I wouldn't put them at the same level.
I don't use Home Assistant for NodeRed. NodeRed is hosted by itself. No suffering required!
But you just said you use it with home assistant
@@technovangelist Oh I mean over MQTT and API. NodeRed is running on its own. It communicates with Home Assistant with the Home Assistant API (node-red-contrib-home-assistant-websocket plugin) and Home Assistant's discovery over MQTT. I also run zwave2js and zigbee2mqtt with a Matter controller, all on docker instances too, and use MQTT to have them talk to Home Assistant. That way if Home Assistant goes down for whatever reason, everything else still works. I wish they'd pull the AI assistant out of Home Assistant, as I would like that to work if Home Assistant dies, so I just built my own with Ollama instead, and provide it all the details about my sensors and lights and stuff via RAG.
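A minimal sketch of that MQTT-discovery pattern, assuming paho-mqtt is installed; the broker address, topics, and entity names are placeholders. Home Assistant picks the sensor up from the retained config message, and the standalone service keeps publishing state even if Home Assistant goes down:

```python
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # drop the arg on paho-mqtt 1.x
client.connect("mqtt.local", 1883)
client.loop_start()

# One retained discovery message per entity, so Home Assistant auto-creates it.
client.publish(
    "homeassistant/sensor/office_temp/config",
    json.dumps({
        "name": "Office Temperature",
        "state_topic": "office/temperature",
        "unit_of_measurement": "°C",
    }),
    retain=True,
)

while True:
    client.publish("office/temperature", "21.5")  # replace with a real reading
    time.sleep(60)
```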
That might be the nerdiest shirt ive ever seen
I loved Joy Division when I was in college, and I do so much with Node that the Joyent connection was also there.
@technovangelist I got it right away, the meta goes deep
Your content is really great, but please, we are not children. You talk like you're speaking to a 3-year-old child.
Wow. I’d love to see your 3 year olds. I don’t even talk like this to my 5 going on 6 year old.