I'm not a front end developer but I did a bit of full stack back in the pure JS and jQuery days, and it blows my mind what modern front end devs have done to themselves.
They make problems for themselves, then solve them by adding more rubbish, then have more problems, and the cycle repeats. E.g. they make things massive, slow, and complex, and then add server-side rendering because the client struggles with the insane things they're asking it to do.
IMHO bundling and minification are fine, and TypeScript is fine, so a build step is fine.
The madness comes from npm dependency hell and overengineered frameworks. Megabytes of hello world with hundreds or thousands of dependencies. node_modules can be gigabytes in size, and people think this is fine.
What an ignorant take. To insist that the JS ecosystem was easier/better/more performant 10 years ago than it is now suggests that either you weren't working with JS 10 years ago or all you know about the JS of today is through memes.
@@imaliazhar it's definitely not an ignorant take... I have a client, I took over their website from another dev (because that dev was clueless), and it's a NextJS monstrosity. Even fully compressed into a tar.gz for deployment it's still 1.8GB, and almost all of that is node_modules nonsense! For a site that could be rebuilt entirely in a static site generator! It's absolute madness!
I have another client with an almost identical level of site functionality that I built with Hugo and... it's
"Write once, run forever" is way more wise than it first appears. Consider that the majority of the websites are deployed in exactly that manner. It's written once, and then it runs forever.
It's a small fraction of websites that consistently get new updates and features.
I’ve worked as a web dev for 3 years now and 90% of the websites have gotten upgrades during this time. It’s not that the devs want to upgrade the site, our clients do. Their websites are literally how they earn their money…
@@ciril2643 it logically follows that a web developer would be working on the sites getting the regular updates. What is your point?
@@c-spam9581 that websites "built once and run forever" are rare, unless you're talking about blogs or showcase sites
The issue is that people don't distinguish websites with dynamic interaction from full web apps. 95% of the time you don't need a desktop-like experience and complex components. In those cases, having a templating language for generating HTML on the frontend is absurd.
I never understood why JS people had super complex build systems. In my mind, the entire purpose of interpreted languages was to eliminate the mental overhead of cmake/make/etc.
To think that JS is inspired by Self, and the DX of those two things couldn't be further apart...
If you never played with Self (or at least Smalltalk), it's an interesting experience :) You live inside the program; all the devtools, libraries, everything is part of the same system. You grow that system into your program step by step, and you can edit your code in the debugger as you go... the absolute polar opposite of having to write, build, test.
But if you are a Vim user, the experience is very mouse centric, so that might make it a bit less pleasurable (and no, GNU Smalltalk is not the same, that's just like Ruby with less Perl in it).
JS was never meant to be good and useful. v8 didn't get the memo and made it fast/popular. People built tools to escape from the mess that JS is
What do you consider a complex build system?
A "types, then minify" pipeline is what's used in most cases.
If you want to understand why, let us know and I can try to give an explanation of why a build system is required. If you’re just being cheeky then ignore
@@TheSaintsVEVO JS is still a bad foundation to build on, since handling heavy logic was never its intended purpose
HTMX is so based, glad it’s getting more attention!
One crucial point that was missing is that with HOWL, not only does it free people up to use languages other than js, but it makes it ACCESSIBLE to people who are not using/familiar with js.
Some of those people might want to make contributions to the library, and learning/writing basic JavaScript is far simpler than figuring out some TypeScript build system - I'd simply never even try, and all the project would get from me is an issue rather than a PR.
Htmx seems more maintainable than JS; I'm planning to go deeper into it. My team has an application on webpack 3, and it's always emotionally painful when we try to upgrade - it seems easier to start a new frontend from zero!
You clearly didn't check out the "hyperscript" section of the docs.
@@noherczeg Not OP, but how does that relate? I am a systems programmer, please speak in layman's terms.
Adding my two cents: a current dev-experience issue we are having with TS at my company is that the project is so big that the tsserver language server struggles to keep up, to the point that it becomes unusable. We are talking 4s or more for autocomplete, when it doesn't crash.
I've been writing javascript professionally for 14 years. Never thought I'd be excited to stop using it. I have no doubt that in a few years' time HTMX will dominate the market, because it's an idea that's revolutionary in its simplicity and brilliance, kind of like what React was when it first came out. I'm experiencing that "how tf have I not come up with something this obvious myself" moment all over again.
Not having a build system is one of the main reasons that I've resisted moving from knockoutJS to another library ever since I was put on the project I'm currently working on at my current job. Not only would it require time and effort migrating, but it wouldn't actually improve developer experience unless we did something drastic.
Keep in mind this is a very backend-heavy solution (which I'm assuming you could guess based on the fact that we're still on knockoutJS lol), and we rarely change actual code. I'm guessing 1 line of JavaScript or CSS per week on average. Knockout JS - like htmx - allows us to just change the template without compiling the Typescript - it's really 👍
logged in to upvote and say: i miss knockout-js. I think its approach is my favourite tbh: observables and simple to plug and play. Shame it didn't have a good components solution.
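For anyone who never touched Knockout, "observables and plug and play" looks roughly like this - a minimal sketch (the view model and field names are invented), no build step, just a script tag for knockout.js and data-bind attributes in the markup:

// Markup would contain something like:
//   <p>Hello, <span data-bind="text: fullName"></span></p>
//   <input data-bind="value: firstName, valueUpdate: 'input'">
function ViewModel() {
  var self = this;
  self.firstName = ko.observable('Ada');
  self.lastName = ko.observable('Lovelace');
  // Recomputed automatically whenever an observable it reads changes.
  self.fullName = ko.computed(function () {
    return self.firstName() + ' ' + self.lastName();
  });
}
ko.applyBindings(new ViewModel());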
One line of frontend stuff per week... Living the dream, my man
To Prime's point at 9:45:
For one of my team's current projects, we tried using jsdoc instead of typescript, and it didn't go very well once we started using libraries that were built with typescript in mind. I would say, if you have control over the whole code base, it's perfectly sufficient; but if you rely on external dependencies, then you're reliant on their assumptions about how DX should work (which usually includes typescript).
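For context, the JSDoc route looks roughly like this (a hedged sketch - the typedef is invented); the TS language server checks it via @ts-check, and it holds up fine until a dependency only ships TS-first type constructs:

// @ts-check

/**
 * @typedef {Object} User
 * @property {number} id
 * @property {string} name
 */

/**
 * @param {User} user
 * @returns {string}
 */
function greet(user) {
  return 'Hello, ' + user.name; // hovering `user` shows the typedef; typos get flagged
}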
Now, our tool (vite react) already has a few build steps, so integrating typescript took me just half a day of work. (Mostly renaming files to tsx lol)
The cost of build steps and such is definitely higher if you need to manage or configure them yourself, or if you're on a proprietary system and would need to build it yourself. Blech.
For us, the DX improvements of type safety will make our team a lot more efficient due to the complexity of our API's types, which we can't change, so the benefit is substantial.
The fact that's a question people ask about a .js file is worrying
A lot of the discussions here happen just because people aren’t being explicit about their priors. Dependencies add costs that are mostly *in the future*.
If you are working at a startup, you’ll always trade speed in the present for cost in the future, because you expect future revenues to pick up the tab.
If you’re an OSS project, your incentives are *very* different.
HTMX is a step in the right direction for the future of web development.
@@DeveloperOfGames Can you explain your rationale?
Why? For me it only makes sense when the backend and frontend are coupled.
For two different teams, technologies, or complex UI, I don't see it yet
I agree that HTMX is a step in the right direction. I have spent most of my career building Angular and React applications. 99% of our apps' behaviors are easier to implement with HTMX and/or Alpine. We should only be using React for the 1% of problems it simplifies.
already working on some java / htmx ... feels soo good... love having the power of java in the backend
Remember that time they decided to run useEffect twice in debug mode?
AAAAAAAAAHHHHHHHHHH
In JetBrains IDEs there is a toolbar that lists all the functions in the file or all the members of the class etc. So it's pretty simple to maintain these kinds of long files
Neovim LSP has the same structure and it's builtin to telescope too. So I have it remapped to st since I come from Intellij (STructure) and can pull up the telescope search to jump by method, function, class
I might cancel Netflix and subscribe to this guy. The irony is I'm not even a developer.
A ton of large C++ programs are whole program compiled.
The "Build" for HTMX in CI/CD pipelines is to timestamp the HTMX file at a particular point in time on when you package the HTMX file for deployments into target environments - Dev, QA, Staging, Prod, etc - so it can be integrated into the full ecosystem of an enterprise's applications.
Holy shit, finally something for regular people to invest time in? Tech that will stay relevant for more than a year?
"Rememeber when they said they were going to run useEffect twice in debug mode, I'm still tender". No jokes this is the reason I abandoned my previous attempt to learn React. Spent 1 day trying to figure out what was going on. Spent another day reading terrible justifications for it. Then returned to the safety of vanilla. That was over a year ago. Don't think I'm every going to try again.
One of the most important qualities in life is being able to "take one step back" and evaluate a situation from a non-subjective perspective. About 50% of the time I realize that what I'm doing is not smart, and adapt early.
If you can't be critical about your own work, you will find yourself in hell over and over again. The road we went down with frontend toolchains proved to be stupid and very costly. Every time I "just do the thing" and create a .js file that does the job without transpiling etc., it feels right. It's productive and very "targeted".
A good exercise is to go beyond your own attempt at unbiased judgment and step into the shoes of a totally different person, one detached from concepts like your goals and history. If you can imagine what they see when they look at the project, it could be so different that you walk away questioning things you would never otherwise think to question.
So much love for HTMX!
I was reading this exact article on the toilet last night. whoooaaaaa. spooky.
Try to use esbuild as your bundler, it is amazing and it is faster than any other bundler, in my project it compiles 200 ts files in 70ms
3.5K lines is big, but not outrageous. It's about 100 A4 pages in length... a large document, but you wouldn't insist it be split into several shorter docs - and code is not as dense as English. Cognitively challenging, but if you are working on that project it's all you need to know.
Issues of structure and scope are not fixed by splitting code into separate files, any more than microservices make components loosely coupled. Splitting the code just necessitates structure and limited scopes, or else bad things happen - and they frequently do. In a large file, mutual/self-discipline is essential.
When learning, toy examples are used to demo "enterprise" project structures. When does "fizz-buzz" need to become "Enterprise Fizz-Buzz"? At a bigger quantity of code than most people believe.
The legitimate successor to jQuery 🎉🎉
Did we really need an article for this? It was always baffling to me why we need a build step for interpreted scripts… other than having an excuse to get a coffee
types, polyfills, minification, etc
@@vitorguidorizzzi7538 yep, I'm aware of what people use it for. A polyfill can be included as such in the source or as an additional script include, and I have a CDN that can do minification.
Types are a valid argument for people who prefer TS. I prefer vanilla JS
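For example, a guarded polyfill shipped as its own script include - a simplified sketch (a real Array.prototype.includes polyfill also handles NaN and a fromIndex argument):

if (!Array.prototype.includes) {
  Array.prototype.includes = function (value) {
    return this.indexOf(value) !== -1; // good enough for old browsers in this sketch
  };
}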
The web is a mess - ESM, CJS - we need a build step for targeting both
Vite seems to be an opinionated tool that is built around Rollup. I haven't worked extensively with it, but you do get access to some amount of rollup configuration directly which lets you set inputs and outputs.
No types, no build step (or optimizing the crap out of the app), JS from before 2015, IE 11. Sounds like the future to me.
Sounds like hell to me
@@danvilela what advantage does a build step provide?
@@heinzerbrew catch bugs during build time with types, minify the code, delete unused code, and target different browsers - and don't forget about ESM and CJS
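As a concrete sketch of what that buys you (assuming esbuild; the file names are invented), one small script covers minification, dead-code elimination, a browser target, and both module formats - the type checking part still comes from running tsc --noEmit separately, since esbuild only strips types:

// build.js
const esbuild = require('esbuild');

const shared = {
  entryPoints: ['src/index.ts'],
  bundle: true,
  minify: true,        // smaller payloads
  sourcemap: true,     // keep debugging sane
  target: ['es2017'],  // oldest environment you still care about
};

// Emit both module formats so ESM and CJS consumers are covered.
Promise.all([
  esbuild.build({ ...shared, format: 'esm', outfile: 'dist/index.mjs' }),
  esbuild.build({ ...shared, format: 'cjs', outfile: 'dist/index.cjs' }),
]).catch(() => process.exit(1));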
It should have been like that since inception. What is the reason behind having a dynamic, loosely-typed language? So that we don't have to spend time on config + compile. And guess what JS developers salivate over? Their own versions of Make: Grunt, Gulp, Webpack, Esbuild, Rollup, Rolldown... you name it.
I am determined to detox myself from this whole overly complex JS mess. HTMx is the way for me.
This just reinforces my view that JS is just a runtime with a dev console on top
Ok, now we need htmx native for mobile and we may have a chance to get rid of React
There is already an htmx native for mobile development. It is called Hyperview.
Not having dependencies would be so nice dude
7:10 and at that stage you check out the deployed version from Git, start it locally, and you have full debug capabilities. If you cannot do this, then I'm not sure how you can create hotfixes TBH.
Seems like HTMX should go great with Vue in global mode, where you can serve templates from the server and make them interactive on the fly, without any builds
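Something like this, presumably - a rough sketch assuming the global CDN build of Vue 3, using whatever HTML the server already rendered as the in-DOM template:

// Loaded via a plain <script src=".../vue.global.js"></script>, no build.
// Server-rendered markup acts as the template, e.g.:
//   <div id="app">
//     <button @click="count++">Clicked {{ count }} times</button>
//   </div>
const { createApp } = Vue;

createApp({
  data() {
    return { count: 0 };
  },
}).mount('#app');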
Can we take a moment to appreciate how many times Prime said html instead of htmx.
Yes, vim is an unnatural abomination. I agree.
Emacs Lisp is often written in single, huge files. To make this easier, page break characters "^L" are often used, combined with page header comments clearly separating the sections of the file. In Emacs you can then navigate between the pages and also restrict your buffer to display only a single page.
In the opposite direction, you have smalltalk, where you usually have one text window per method which hides how verbose smalltalk actually is
nice article, but how did the guy think that 1999 C code doesn't run today? It's literally one of the only languages where you can't say that lmao
In my first IT job our manager told us that when he first started in helpdesk, his manager would go around and take their mouse if he felt they were too reliant on it. They frequently held workshops on how to navigate around without a mouse, and at the time he told us this he still maintained he could work faster without one.
What's compelling is that it's probably going to work in 10-20 years from now. And the approach is not totally screwy.
HTMX is just so good
I wouldn't use it
@@BinaryReaderThen don't
20:00 ES6 does have anonymous functions: () => { /* stuff */}
I wish people never thought they could have state on stateless HTML...
Just don’t use it. Go wild
@@danvilela I always write stateless web apps
I probably could make my app in HTMX and get benefits from it. My big-ass backend is in C# ASP.NET Core and the frontend is SvelteKit. And I have to maintain 3 repositories and teach all my devs how to write TypeScript. But I probably will not switch any time soon, because I don't want to rewrite it.
so, the ES6 thing is a point against the whole backwards compatibility thing. Eventually htmx will stop supporting ie11 and they will rewrite some parts to use ES6 features. Not any different from typescript, but less frequent changes perhaps.
Great programmers are great programmers because they have a passion for learning. Once proficient in a technology, we’re always looking for the next thing to learn. I love functional programming, but I honestly think the reason it’s being considered superior to OO is because so many programmers that have only experienced OO are feeling like they’re relearning programming for the first time again. I myself am guilty of this. Elixir rekindled my love for programming, but I don’t think functional is inherently superior to OO. I think OO is still a better choice for most tasks, especially web based applications. Languages like Crystal show us that OO languages don’t have to be slow or prone to errors.
Nice article choice! Thanks prime.
I really liked this one. Excited for htmx
TS isn't the only good reason for having a build step - there's also JSX (I mean usage unrelated to React)
btw, with my lightweight frontend JS lib (with reactivity like in Solid) I don't use index.html as the entry point for Vite. It bundles my JSX components to JS and generates additional assets.json files with updated hashes for server-side templates (non-JS backend). I like how it works
speaking of ES6, I decided to go this route - write lib code and components using ES6, and if I ever need ES5 compatibility I will just compile a specific version for ES5. I even made the boundary API ES5-compatible, so any code that lives outside of the bundle will be able to use the API exposed by the components and the reactivity lib
Am I missing something? I thought a lambda function was an anonymous function? Is there a difference?
ES6 introduced arrow functions (what people usually call lambdas). Anonymous functions existed before that
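To untangle the terminology with a quick sketch:

// Anonymous function expressions existed long before ES6:
var addOld = function (a, b) { return a + b; };

// ES6 added arrow functions ("lambdas"), which are also anonymous
// but additionally capture `this` lexically:
const addNew = (a, b) => a + b;

setTimeout(function () { console.log('pre-ES6 style'); }, 0);
setTimeout(() => console.log('ES6 arrow style'), 0);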
As you got kinda 'triggered' by mut pronounced as mutt, I got 'triggered' by your pronunciation of vite. A small excerpt: Vite (French word for "quick", pronounced /vit/, like "veet")
😂😂
Sir, this is America
Gondor has no build step. Gondor needs no build step
+1 for htmx
25:00 You're missing a step:
1. Server checks if user can delete and sends value to client.
2. Client checks value to see if delete button should be hidden/disabled.
3. If user clicks delete button, server checks that user can delete.
HTMX removes step 2.
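A rough sketch of what that looks like with htmx (Express-style pseudocode; the db and auth helpers are invented) - the server decides whether the button exists at all, and re-checks on the actual delete:

app.get('/items/:id/row', async (req, res) => {
  const item = await db.getItem(req.params.id);
  const canDelete = req.user.canDelete(item); // step 1, server-side only
  res.send(`
    <tr id="item-${item.id}">
      <td>${item.name}</td>
      <td>${canDelete
        ? `<button hx-delete="/items/${item.id}"
                   hx-target="#item-${item.id}"
                   hx-swap="outerHTML">Delete</button>`
        : ''}</td>
    </tr>`);
});

app.delete('/items/:id', async (req, res) => {
  const item = await db.getItem(req.params.id);
  if (!req.user.canDelete(item)) return res.sendStatus(403); // step 3, always re-checked
  await db.deleteItem(item.id);
  res.send(''); // empty fragment: htmx swaps the row away
});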
Which is a non-issue if you use OpenAPI and generate your validators from it. By doing so you can, in certain cases, skip a backend call and do the work in the browser, reducing load on your single source of truth.
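For instance (a sketch assuming Ajv and a schema taken from the spec's components.schemas; the schema and the formData/showErrors/submitToServer helpers are invented): shape-level mistakes get caught in the browser, while the server still re-validates on submit.

import Ajv from 'ajv';

// Assume this came out of the OpenAPI document's components.schemas.
const userSchema = {
  type: 'object',
  required: ['email', 'age'],
  properties: {
    email: { type: 'string', minLength: 3 },
    age: { type: 'integer', minimum: 18 },
  },
};

const ajv = new Ajv();
const validate = ajv.compile(userSchema);

if (!validate(formData)) {
  showErrors(validate.errors); // no round trip needed for this class of error
} else {
  submitToServer(formData);    // the server stays the source of truth and re-validates
}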
@@noherczeg in what case can you skip a server validation call?
I hate it when the variable names are minified but resolved through source maps. When you put a debugger breakpoint and hover a variable, you see the value. You see it in the "locals" pane on the right panel. But you cannot type it in the console to see it there - it is undefined. Sometimes this works, sometimes it does not.
What's he talking about, that not being true for Java and C? Of course you can take code written in 1.4 syntax and use it nowadays without any issues. Opting for new language features?
Love this
Three or four thousand lines - yeah, that's long, but not unwieldy. Maybe if it were complete spaghetti code and 12,000 lines, like a PHP app I knew, then there would be a problem, but I doubt that is the case with HTMX
I agree with you about the state management problem. Nowadays there is usually a lot of state on the client side that is not necessary. But the "duplication of logic" cannot be avoided using HTMX.
The real opposition here is not frontend vs backend, but presentation vs business logic. Even rendering things in the backend, you have to prepare your presentation layer to show or hide the delete button according to the information available. Using HTMX you have a tighter client, but the same logic has to be placed in the backend instead. The only difference here is where the data is rendered.
You either seem to not get the main point - or you would try to "program htmx in React style"... The point is that the STATE is not in two different architectural places. In modern web dev the state is represented and kept in sync on both the client and the server side. Here the state only exists on the server side. Much simpler and much less code. "Duplication of code" is maybe not the best phrase, as the main issue is "duplication of the state handling code", which this way can easily be spared.
@@u9vata Of course. The "duplication of the state handling code" issue is something I agree with. Usually there is no need to implement the state transition both on the client and the server. You send the request to the server, it returns a payload that represents the new state, and then I re-render the affected elements using it. The idea is the same as what you get with HTMX, but I already worked that way using VueJS in 2018.
The difference here is that I don't buy the idea of using HTML payloads. The problem is not only the size, but that it is not the right level of abstraction that I expect from the server. If I receive a JSON payload that contains the same information, I still have the possibility to just render it using an HTML-like template and get the data in the same format as HTMX, but I also have the flexibility to do other things with it, like use it in special components. Here there is still no duplication of code, because the code responsible for rendering it is only in one place (and in my opinion, the right place).
@@danilomendes977
> Usually, there is no need to create the state transition both in client and server.
VS.
> then I re-render the affected elements using it.
I honestly take this re-render as part of the syncing of state. And I think it's much worse to send around random json that the client somehow builds html from - compared to literally sending the html. Again... this way the state is in two places: you can easily do things as crazy as you wish with that data on the client - and the server does not know. The responsibilities and system state are totally conceptually shared this way, and I honestly think it is an antipattern...
> a JSON payload that contains the same information, I still have the possibility to just render it using an HTML-like template and get the data in same format as HTMX, but I have the flexibility to do another things with that,
First off: you totally waste CPU with parsing that json on the client - especially if no templating is needed at all.
Secondly, I think this makes the whole system much harder to reason about and easier to spaghettify because of the "can do other things" part which is actually why I prefer the HTMX approach here.
I actually did this much before HTMX too. Actually did it not that long ago with pure JS and no frameworks at all - just pure JS and REST endpoints with json... I even thought about making a library for this kind of thing (different from htmx in approach, but again: not that different in a way), but now that I look back, most of what I did I can imagine doing with htmx. My approach was more client-heavy, just like yours... In mine I just had raw html (this was microservices) that they could literally copy-paste anywhere they wanted - be it a static page or some endpoint generating the html. Just copy-paste it where you want it to appear... I had naming and other conventions for CSS-ing it if you wanted it to look different, or to actually change the html a bit. What the JS nearby did was literally replace the existing placeholders with microservice data and communicate back and forth with the backend in json, and that is all it did. The good property of this system is that the non-dynamic, really static, simple html example design was used AS-IS for the copy-paste and looked exactly as when designed alone in a minimal html...
^^I can imagine some points to this that htmx cannot do - but honestly more often than not it would just make things even more clean...
> Here, still no duplication of code because the code responsible to render it is only in one place
I disagree: you have the message passing from backend and then the rendering. Just saying... the backend is not getting data in your json right straight from database as the json end result, but your backend literally renders its various data... into json... then you render that json into html...
Technically I would totally call that "rendering into json", similarly to how people say rendering into xyz... So this totally is a duplication at that point in 99% of the cases - and guess what? If you are in the 1% where you want something more complex, you can still use regular JS for that part... Honestly it will be so rare that most pages would not even need you to enable javascript anymore if browsers natively supported this functionality!
I literally hope native support for this happens in browsers. This is even more sandboxed than running JS - so people who otherwise turn off javascript could maybe still run this. Also small browsers, like command line browsers, might be able to support this more easily than supporting a full JS engine "properly". This thus enables a wide range of applications - like embedded browsing use cases - much more easily.
> The problem here is not only the size, but that it is not the right level of abstraction that I expect from the server
About size... I guess you know that this can be easily gzipped, so it's likely not even that big of a deal at all. Honestly it sounds like parsing the json would take longer than the whole transfer, even without any gzip.
I also think the server is totally the right abstraction level here. Imagine putting the json-rendering logic unified on the client: that would mean an extremely thick client communicating essentially with the database, via a backend that acts as a proxy with no logic. That is basically a no-no. The opposite - making it a thin client - is really doable and is what htmx does: then you can remove the "render to json" part and, as I said, in most cases render directly to html to be swapped in at some location on the viewer's machine. Also, this is more fitting to the overall web architecture, where HTML was always what gets sent around directly as state and as the result to view: full page? html. Streamed content that gets slowly pulled? html. Fragments to change out dynamically? html. It is very unified this way, and also loads simpler to debug just by looking at logs, for example in production!
json and html have very little difference in payload size, and unless you never exceed 1500 bytes per message, timing becomes a lot different
Second, it's easier for a server to produce html than json... it's... a weird thing, but it is what it is.
@@u9vata
JSON here is just an example. If I were looking for performance in data exchange between client and server, I would probably look at something else (a binary format?). But in my applications, the bottleneck is usually elsewhere. The main point here is the right level of abstraction.
With an HTML payload, I don't know how to deal with the information inside the generated HTML on the client side, and sometimes you need it.
3000 lines of code is nothing to me... developing highly integrated Magento 2 modules which include widgets, import/export functionality, multiple tables and factory collections, plus your own APIs, log files, helper files... it's a nightmare.
why would you even need one for HTMX? I mean, it is HTML with hidden JS
HTMX provides the wisdom of wholeness for healing our full-stack fractured souls.
The best part is no part, likewise, the best build system is no build system.
6:01 why would you upgrade your TypeScript version in your package.json if you know your version of the code was built with an older one? If you do, do you think your other deps would work the same? Do you think a 3-year-old NodeJS has the exact same APIs? What are we talking about here?
You will if you want to use a newer version of a library which is probably using a newer version of Typescript
Can I implement map tiles with just htmx, without JS?
As we know there are a lot of JS libraries like Leaflet to handle it
I like types a lot and am really against dynamic typing - BUT!!! BUTTT! I say htmx should only be javascript and nothing else!!! ---> HTMX is just too small for any TS / build to be worth the effort!!! Look at how nice "single header libraries" usually turn out precisely because they need to stay small!!! Lots of positives come from simplicity!
Why? Because htmx should be very, very minimalistic. If webassembly could do all the things htmx needs, I would honestly prefer even handwritten wasm. But this is cleanest for now. The best would be if browser vendors just started supporting htmx even with JS engines turned off, though.
So despite I do like type systems in this case I honestly much better would be on the side to keep htmx as-is and no build!!!
I try to keep js and npm out of build systems. They are so unstable
19:35 OMG did you just suggest they introduce a build system so they can support IE11? That completely defeats the choice of not having a build system. You either do or you don't... there is no in-between. I guess some people just get stuck in a particular mindset
Man really said "I do really like htmxagen" like he was getting married to it "I do"
Why do we need a build step for a 14kb library?
Just some average React developer trying to overcomplicate everything
I've touched everything except front end legitimately at this point. Swore it off and said I'd try it when Javascript died. Guess it's going to be a while ;(
Yes! Use the platform
TS can target ES3 for IE8. It is both forward compatible and backward compatible.
JS-only assumes that everyone is keeping their browser up-to-date. It's only possible in the world of Chromium market dominance. Making your codebase more dependent on that is only going to further cement Chromium dominance. But perhaps that war is already done and gone, and it is pointless to think about such things.
If you're doing minification in any environment, you're already using a build system and might as well go all-in. In most business settings, you're likely going to want some degree of obscurity in production.
So if you are happy to bend the knee to Google, work on non-business FOSS projects, and hate running build systems, then JS-only makes sense for you.
Feels wrong to try to extrapolate from HTMX to other projects; this makes sense because htmx is tiny. Other projects benefit from the optimizations bundlers provide
htmx is the way (if you’re not using Rails)
hitmax is dope
If Python can have types, so should JavaScript
JSDocs FTW over TypeScript any day of the week. (Assuming you have an IDE or some neovim thing - for the cool kids, of which I am not one - which interprets it and gives you the same type warnings, etc.)
"I swear this is not pre-read"
We know. 🙂
What type of site is HTMX suitable for? Obviously you can't make very interactive interfaces on it.
What do you mean by that? The whole point of the htmx is interaction
How the hell do you constantly select a paragraph and exclude the first and last character? And why?!? You just made me realize one of my pet peeves.
Thank you for the content though.
When you select text you always select from the second character to the second to last character. Why?
It ensures that if you are off by 1 character that you don’t highlight an entirely different word
3500 lines, in sensible order, doesn’t sound that bad. Much better than 70 files with 50 lines each.
It depends, well written code is easy to follow.
I only "build" code that compiles to a binary format.
Vite is pronounced "vit". French for "fast". Sorry… the first time you said it, I didn't understand what you were referring to.
The build system is ctrl + s
It baffles and amuses me to watch programmers just allow the web to be the web
come-MEN-sir-it
Guys, if you don't know the benefits of a type system and a build step, maybe learn a bit about technology, and post about it later. Or maybe ask the Java / Rust / etc... people. CRA is a joke, but we had Rollup for years, also now ESBuild , Vite, etc. Everyone keeps looking at the most basic stuff and applauds it like it's nothing we could do with 4 lines of standard JS WITHOUT a library. You are not forced to use React.
No build because it's not tryna be like the rest of the nonsense JS ecosystem
Chrome deprecates JS API from time to time
I think I just got enboomered
i read conmenstruate
I use mouse in vim
I think Prime doesn’t like Redux
web dev is atrocious anyway. htmx or otherwise
14:25 I learned that word from Internet Historian th-cam.com/video/Qh9KBwqGxTI/w-d-xo.html