@@simonhartley9158 MVC and its children are VERY battle-tested by now, and no, the UI shouldn't drive server model changes. The UI should display what the app needs to display and the server should be providing enough information to do that. You don't change the server in response to UI changes, you change BOTH in response to app design changes. Management paperwork is a separate, unrelated issue. Changing the color of a button should definitely not require a backend change. I feel every framework is driving towards being a new Visual Basic, which we abandoned for good reasons. JSON is pretty good for me, I can still do rich, dynamic UI without having to fight a renderer to produce the HTML I want the client to use.
@@Leonhart_93 your model doesn't seem to take advantage of the benefits of streaming, nor deal with update granularity vs. client side request waterfalls.
The reactive aspect is interesting. My main issues with it are:
- Focusing on HTML over the wire instead of a JSON API means that you can't reuse the API for other applications/services for integration.
- Mixing server and HTML forces everyone to be a full stack developer and creates unnecessarily tight coupling between front end and back end, which harms modularity.
It was fine for React to combine JS + HTML because both of those are front end concerns; they overlap significantly as they both deal with UX/UI. Frontend and backend are often different concerns and they don't overlap so much. Front end requirement changes often arise independently of back end requirement changes. I don't want to have to read and modify back end code when making front end UX changes; it adds unnecessary security risks.
So wait, server side includes were the way to go all along? I was about 11 or 12 when they were the standard way to include dynamic content in static pages so my memory and understanding may be off, but I swear that at least conceptually it's roughly the same idea
People love to rediscover shit in this industry and then pitch it as gospel. 5 years from now, view-independent APIs will be all the fashion again
Coming from Blazor, I am very hesitant to have a server call to update any state. This sucks if you have a slow internet connection, and makes it unusable if connections drop. Curious if their solution works better
They're closing the connection once the page has loaded, and then the client components can take over for interactivity etc. At least that's one way to do it. I'm also using Blazor, and while I love what Microsoft is doing I'm starting to see the pitfalls... it still has a long, long way to go. (Currently using Blazor Hybrid on Android and iOS; I love C#, especially the functional parts)
@@hauleth That doesn't make any sense, you can use Phoenix without keeping the connection alive; performance with that is equivalent to a normal request, you just get to draw faster
What do you do with your iOS/Android apps? This doesn't make sense when you have native apps and treat the website as just another one of your platforms.
The innerHTML reminds me of what I built in the pre-JS-framework time period, where we used jQuery to be compatible with all browsers. We also had to support IE 6.0, where it was easier to update the DOM with innerHTML than to do DOM operations, because those were too slow. The only downside is that you also lose the focus of an input field if the HTML containing that field is updated. We did try to reassign the cursor position after an update. It was much faster in loading the page and also in building the JavaScript.
The way you thoroughly discussed each topic and touched on, imho, "just write better software" made me sub and look out for these types of videos. I passed out when you were talking about SSR and how it didn't solve the problem, and its caveats. You get people thinking the right way. Software should be easy and good at solving day-to-day problems. Consider you a leader here
Been doing Elixir LiveView for 4 months now for my job. Sockets are memory intensive, so for heavy pages we need to ditch LiveView for cost reasons. The lack of types makes it easy to mess up: a map can have its keys as atoms or strings, and if you have the same object with different key types it causes a huge headache. You will eventually need JavaScript, and when you do it is very unpleasant to deal with. I do not think it makes sense to pick LiveView over React for the frontend
Sockets themselves are not memory intensive, but if LiveView is storing lots of state information with the socket connection then that could add up. I use React client-side with sockets, so it feels like I get the best of both worlds.
@bruceleeharrison9284 Getting away from cloud would be hard these days. But if I was gonna dream up a perfect scenario (imo), I'd say CS devs/engineers create a union that works to protect our rights and experience, but ALSO creates at-cost data centers for union members to utilize.
@@Frostbytedigital it wouldn't be a reversion to what came before. It would be a new, updated approach that revisits the concept with a modern approach. Probably a way to outright buy capacity in a datacenter such that you "own" the machines. Services would allow web reconfiguration of the setup, some being immediate (since they can be controlled in software) and some having a lead time to physically setup. (e.g. installing a direct network line) I can see this being so much cheaper than clouds that excess capacity will be bought to ensure little to no lead time for teams requesting hardware. Which will work fine right up until someone decides further excess capacity isn't needed and trims the budget. Then we'll be back to long lead times and cloud will become more appealing again. Yay, the CompSci pendulum...
The pendulum between client and server has been swinging since the late teletype early dumb-terminal days. We seem to have new names every few years for mostly the same concepts. At least it's a client-vs-server loop... and not recursive... Unless you look at the bare metal/image/package/VM/container progression... that definitely _feels_ recursive.
@@t3dotgg Actually yes, there are people (crazy people) who run a local LiveView instance on the device, which powers the frontend and communicates with the backends.
22:44 I don't know about GraphQL. But if you have multiple services you don't want them to call the auth server all the time, so you use JWT instead. With that you just ask the auth server once for its public key, and then you can check every request by checking the signature of the JWT. If it is valid you can read the data from the JWT and know which user it is. No need to do any additional call. Btw, with the websockets solution, either you don't use microservices but a monolithic application, in which case you don't have multiple services, just one monolith that does authentication and all the endpoints; or you have to split the websockets and basically have 3 separate connections. So you're kind of comparing different things / different architectures with each other.
But what if there is a change in the user's permissions? The JWT will be outdated and that may be dangerous, so you need to specify short times to live and re-validate (going all the way to the DB or centralized auth service). If you're going to do that, why not simply use a traditional server-side in-memory cache? In that case you can use cookies or whatever (holding only the user id) and check the cache for the permissions. If the cache is short-lived you are in the same situation as JWT, except for the specific instance where the client is connected, where you can edit the cache record directly (considering how load balancers work, in most cases it will be the same server anyway).
@@Robert-zc8hr Privileges rarely change. You can have, for example, 15-minute tokens and a refresh token. If privileges change, you create a new 15-minute token with the changed privileges after the old one expires. The alternative with the cache isn't good, as the auth service needs to be online all the time and each application server has to ask it. So if it is down, everything is down. If you have JWT and the auth server is down for 10 minutes, it's not that big of an issue: some users can't use the services, but others can. Also you can use stateless logic like CloudFront functions to check JWT tokens.
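Roughly what that local check looks like in Elixir, as a minimal sketch assuming the Joken library, with the auth server's public key fetched ahead of time (module and config names here are made up):

```elixir
defmodule MyApp.AuthToken do
  # Public key PEM, fetched once from the auth server and stored in config.
  @pem Application.compile_env(:my_app, :auth_public_key_pem)

  # Verifies the signature locally; no network call to the auth server.
  def verify(jwt) do
    signer = Joken.Signer.create("RS256", %{"pem" => @pem})

    case Joken.verify(jwt, signer) do
      # Claims carry the user id, roles, expiry, etc.
      {:ok, claims} -> {:ok, claims}
      # Bad signature, malformed token, ...
      {:error, reason} -> {:error, reason}
    end
  end
end
```

Expiry and claim checks would go on top of this (Joken also has verify_and_validate for that), but the point stands: the only thing the auth server had to provide was the key.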
@@kezzu5849 Well, is it any more overkill than having hundreds of thousands of JavaScript lines running to compile a 600 kB JS bundle to run in the user's browser to show a hello world?
"might seem great if you are near the servers it is hosted, but as soon as you go somewhere else your experience sucks" is a perfect description of a specifc problem and not a generic one. You might think that all apps are like email clients or file uploads or streaming, targeting all possible universe's users where distances could easily matter and introduce niche problems to those revenue generating specs, but most apps are fairly local. And most of their problems can be solved horizontaly. I would argue that most MVPs are not worth the effort of fast speeds either. So you are only left with those niche apps. not niche in terms of user volume but in type of app. Also keep in mind that distance is not the only factor for speed experience. So how many flies did those bazookas killed?
Streaming is not just an SSR tech-stack thing; I've been using sockets for years now. Generally speaking, the types of web pages I create are commercial dashboard / data-entry type systems, and the majority of comms are via sockets. Like you pointed out in the video, one advantage here is that auth is only required once. Another advantage is that data can also be sent in binary, and even before HTTP/2 you could create a protocol that multiplexed the requests. I still use REST endpoints, but generally this is for legacy comms or B2B logic. In the long run this takes way less data than sending HTML, mainly because the data can be cached aggressively and invalidated by the server triggering updates. SSR makes streaming easy, but please don't claim that it takes less bandwidth than client-side rendering, because that depends on how you do client-side rendering; REST is just one option, and because of its stateless model it's not the best for performance.
I bet both of you guys are right. In the first half of the video, it's just auto-AJAX and auto-reloading the project when the code changes. (At least for real-time development, PHP Laravel Livewire and Elixir are equivalent; only the final shipped features vary, because browsers are always changing and different web technologies update at different paces against new browser features.) To be fair, the web is a big ecosystem, with lots of stuff that frameworks and libraries already do for us hidden away, plus niche things that help here and there. Phoenix is just slow and steady and never uses shortcuts like most JavaScript developers do, so they think it is magic... especially those who worked around the Facebook era should know that trick; maybe they never used it, but they know
There is a fallback to long-polling. Curious to hear which other issues you are referring to, because LiveView has been running in production for years.
Noob, but serious question: 24:55 -> Amongst Elixir, Go, Rust, Zig and C#, which ones have similar "included" approaches, delivering comparable results, in terms of requests volume reduction? Or is it exclusive to Elixir's ecosystem? I'm just trying to understand if this is something new for Elixir specifically or actually for the whole scene.
This is new to the web. There's been some similar ish things in game dev for multiplayer games, but this is entirely new as a way to update HTML and manage a user's session over time People are gonna come in here and reply "but TurboLink did this before!!!1!". Those people are wrong. Nobody's done a real concept of "long running per-user sessions on the server" in Ruby.
@@t3dotgg Thank you so much for such a fast reply! I'll definitely take a further look to understand more about this new feature, it indeed seems to be a very relevant mark! How big of a project (number of online concurrent users) would you say is enough to justify choosing Elixir instead of Go (for example), if your main concern were to reduce financial costs while utilizing cloud services as the main backend? I know this is a broad question and heavily depends on the way the code is structured, but as an independent dev living outside the US, that's kinda my main nightmare and I rarely see people talking about these managerial aspects of development...
I'm an economist and a newbie in web development, and I definitely had skill issues with React in the past. But with Elixir and Phoenix I can start to ship web apps to my students.
This is awesome news! I used to work with Elixir/Phoenix and LiveView. But I had the choice to work on a questionable product with those technologies, or on a great product with React/Next.js, and went for the latter. But sometimes I miss the elegance of Elixir/Phoenix. I'm not a big fan of Tailwind - it's like a hammer that makes everything look like a nail. But Elixir/Phoenix IS the right nail for it, it's the perfect match and I wouldn't like to use anything other than Tailwind with it.
15:00 Same thing I was thinking about RSC when I first learned of it: for every update it sends the whole JSON representation of the page. How optimal is that?? Or maybe revalidateTag (sends the respective component's JSON representation) works differently from revalidatePath (sends all components' JSON representation), I guess
17:12 "This is like Qwik, but good and solving real problems" - I really wish you'd elaborate a bit more on this, because to me both libs seem to be solving the same problems, just doing it differently. Both libs identify the need to split into static and dynamic parts, both only send minimal code for interactivity, and both do fine-grained updates. What are the "real problems" that Phoenix is solving, and why is it "good"?
I believe that a real difference between Qwik and Phoenix is that Qwik has a better user-story when it comes to frontend/browser only components/islands. Phoenix dev-experience would favor the back and forth between server and client over WS (at the moment of writing).
@@bas080 That is (more or less) my understanding as well. Each solution has its pros and cons, but claiming one is simply "good" and "solving real problems" (implying that the other one does neither of those things) is just childish.
Also, to be fair: users being on the existing page and not getting the update till they refresh is not some huge problem. Even massive companies like Amazon accept that behavior and won't be changing frameworks to "fix" it.
I have used the .Net version of this which is called blazor server. However, the big problem is the high latency for anyone who does not live close to the server.
Thank you. We desperately need more variety in architecture, the json API monoculture needs to be challenged, HTMX and Liveview are two complementary approaches to do this
Why not use SOAP? Why not use XML? We moved away from them to JSON because it's simpler. Sending HTML snippets is a step back in the wrong direction. I'm not saying it's bad in every case, and I agree that we need more variety in architecture. A better solution than React and Phoenix LiveView is, for example, static HTML. If you don't need dynamic content, just put it there as static HTML. You can generate a blog with Hugo and need no computing, neither on the client nor on the server side. Of course that doesn't work for everything. Where you need a bit of dynamic content, you can provide it as a custom element, for example comments on a blog. But if you have, say, a live ticker on that page, Phoenix LiveView is maybe a good solution for that. Can you add a custom element and serve it with Phoenix LiveView? 🤔
LiveView, HTMX, Hotwire (basically Rails' version of LiveView; it is different but has similar abstractions), etc. should all make everyone consider alternatives to React/Vue for new products and/or major features. It doesn't prevent React or front end JS code for truly dynamic interfaces when needed, but it allows developers to server-render 98% of their UX on the backend with the tools that they know and love. Also, when you are talking about Rails and Turbo, that is tech from 5+ years ago; their upgraded Turbo (Turbo Drive/Hotwire) is much better and solves problems that the previous generation didn't. I like LiveView better, but you need to stop referring to old framework problems that haven't existed for a long time.
In 23:28 we were talking about Auth and how it would require 3 roundtrips to the server. Wouldn't it make sense to have the profile and permissions within a JWT or Secure Cookie? If there's an update in one of these you could update the JWT or Cookie. I know this depends a bunch on the architecture and the server, and whether you have it split into several services...
Secure cookies still require authentication if you use a normal bearer token. JWTs have an invalidation issue. If you want real-time JWT revocation, you again need to do authentication for every round trip.
@@chakritlikitkhajorn8730 Yes. But you'd have fewer roundtrips, as the permissions are part of the token (correct me if I'm wrong). Additionally, you can always have an in-memory store alongside it, right? That way you can blacklist a JWT and it'd be fast. If you have short-lived JWTs (expiring in an hour or so) you can minimize the attack window in case the in-memory store becomes unavailable
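That in-memory blacklist is cheap to sketch on the BEAM with ETS (illustrative names; a real version would also purge entries once the blocked tokens have expired anyway):

```elixir
defmodule MyApp.TokenDenylist do
  # Call once at application start (e.g. from the supervision tree).
  def init do
    :ets.new(:revoked_jtis, [:set, :named_table, :public, read_concurrency: true])
  end

  # On logout or permission change: remember the token's unique "jti" claim.
  def revoke(jti), do: :ets.insert(:revoked_jtis, {jti})

  # On each request, after the signature check: an O(1) in-memory lookup,
  # no database or auth-service round trip.
  def revoked?(jti), do: :ets.member(:revoked_jtis, jti)
end
```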
I rather wish for something that is focused on being local (client side) first. So like everything you do is local and synced with the server if available at some point in time, including the updating of the client itself.
10:54, Ok I can see why being able to overload funcs like this is good, but these are literally three identical funcs, why would I want to write something three times if all of them do the same thing?
Elixir does not have function overloading. These **are** the same function, just with different heads (as they're called in Elixir). It is literally compiled to a single function with a big switch inside of it. And no, the heads do not need to do the same thing in each clause.
@@hauleth thanks for correcting me, but after checking again the first two are identical. I do not see a reason to write something twice if it compiles into the same thing
@@k3rnel-p4n1c I think you're right, the code here could be condensed slightly. One thing to note is that `handle_info/2` is the generic callback for _any message_ being received by the process. This means that you have to be a little more specific in your pattern matching, to make sure it doesn't match on other messages that could come in. That might be the reason for the verbosity here.
@@k3rnel-p4n1c I think it's an example to show people that it's possible to write multi-clause functions. `handle_info/2` is an important callback as it allows you to receive messages from external processes. In practice you'd use a guard: `handle_info({ref, {status, %Browser.Timing{} = timing}}, socket) when status in [:loading, :complete]`.
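Spelled out, that guard version would look something like this (the `Browser.Timing` struct comes from the video's example; the body shown is an assumption about what the duplicated clauses do):

```elixir
# One head replaces the :loading and :complete clauses; the guard keeps
# it from swallowing unrelated messages sent to this process.
def handle_info({_ref, {status, %Browser.Timing{} = timing}}, socket)
    when status in [:loading, :complete] do
  {:noreply, assign(socket, status: status, timing: timing)}
end
```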
In a BEAM world, nope, not really. It's very linear and the tooling to scale and share data between multiple servers is baked in and core to Elixir and erlang. It was literally designed for this from the ground up.
Still not a huge fan of Elixir syntax, but other than that, this looks fantastic! I'm not entirely sure how well it would work with highly interactive sites that have animations etc.; there might be some interaction delay still, just because of network latency. Not talking about updating a progress bar, but things like tooltips, drawers, popups etc. (unless I'm understanding this wrong, but I wouldn't want a 100ms delay after clicking a button to show a popup). But if you don't need that stuff and just want a semi-interactive site, it's very promising. Kudos to the team for pushing this pattern!
They ran 60fps animations across the ocean in one of the keynotes and it worked without issue. Some folks even build browser games with LiveView. And usually you'd use css animations for most things anyway
Damn, this seems like THE upgrade to the Go/HTMX stack. I'm writing a side project in Go and HTMX, and yes, it is way easier and faster than the whole bloated frontend JS framework scene, but I gotta say it still gets a bit hard as the project grows, because HTMX is... just HTMX. It's like Windows running on any random PC: it works, but it's not a really nice experience; it just does what it has to. Elixir and Phoenix are like a Mac running macOS, they are crafted for each other, which makes especially the templating part a breeze.
I use Blazor Server at work, which I guess works similarly to LiveView. The problems we encounter at work are scaling and connection issues due to the constant need to be connected to the socket. For example, Chrome's memory-saving features disconnect the socket, and the state of said page is then gone forever, requiring a page reload. If the server is getting crowded, latency becomes a big issue and interactive elements on the page feel sluggish. I can't say I'm a big fan of needing to be connected to the server at all times. How does Phoenix tackle these problems?
I have no .NET experience, but the BEAM is exceptionally good at handling lots of connections. It favours equal distribution over raw speed. In terms of losing state, I think it's a general misconception with these technologies that you should be storing lots of ephemeral state. While the initial sell of LiveView was "no JS," it's moved FAR on from that tagline. It encourages doing many things on the client, like opening menus and whatnot, and there are helpers for that. If you have state that needs to survive a refresh then it needs to go in the database or local/sessionStorage. You have the same problem with client-side JS frameworks if you are just storing state in memory.
You don't need the round trip for every interaction, but if you do, the BEAM (VM) handles the connection through lightweight processes that work independently, handling millions of connections without increasing latency (unless you have another bottleneck)
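The classic demo of how cheap those processes are is runnable straight from iex; each process starts at roughly a couple of kilobytes, and the default process limit is 262,144, so going into the millions needs the VM's +P flag raised:

```elixir
# Spawn 100k processes, each parked in a receive.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

# All alive concurrently, preemptively scheduled by the BEAM.
IO.puts("alive: #{Enum.count(pids, &Process.alive?/1)}")

# Tidy up.
Enum.each(pids, &send(&1, :stop))
```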
You have to decode the JWT and verify the signature every time, genius. And that's if you're using JWT. There are other types of authentication, and tbh most services nowadays leave the auth part to an external service, so you will have to do 3 requests to those external services.
What app are you working on where you don't need the ability to ban people or change their roles? JWT doesn't magically solve those issues. Besides that, JWT still requires resources on every request, just cryptography instead of a DB lookup.
Just a heads up: when said as a noun, "attribute" is pronounced with stress at the start, like "AH-tri-bute". When it's a verb, it's pronounced as you did, "uh-TRIB-ute". Wiktionary has accurate transcriptions if you can read IPA. This changing of stress happens with lots of verb/noun pairs, like "the blue record" and "they record a podcast". Regardless, awesome video, love hearing you talk about Erlang/Elixir-adjacent things. And I'm 100% sure this mistake didn't hinder anyone's comprehension, it just felt a little off when I heard it.
I've known of LiveView since its inception, but I haven't used Elixir for the last few years and really thought it had been 1.0/production-ready for a long time already; really surprised to see that 1.0 was just released now. It shows the care the team takes.
All these frameworks tend to make things really slow, development is slow, compiling is slow, browsing the result is slow. Misunderstand me correctly, we need development and new tech but many of the current technologies seem like worse versions of what we already had. Use the potential of Javascript correctly, do more clientside, minimize backend communication, try to deliver as much of the data as possible up front and only dynamically load that which can not be loaded up front. When I see APIs loading lists of 10-15 of something simple using JSON I want to go on a rampage.
I started learning Phoenix LiveView a year ago; I haven't seen a framework as complete as this. It is a little difficult to learn, as there are not many tutorials available, but the books are solid.
It is unfortunately very true. There are a lot of resources out there, sure, but nothing beats working on a real project or seeing someone's real-world codebase. I remember watching Jose Valim use Livebooks on his Advent of Code Twitch stream, and god darn I learned so much from seeing that. I suppose a bit of a lucky thing is that the whole ecosystem, including Elixir itself, is written in Elixir, so we can fairly easily inspect their files and the way they manage projects. There are also a couple of amazing podcasts on Spotify worth listening to.
If the communication between browser and API server goes through HTTP/2 or HTTP/3, the many requests might get a boost by running as a kind of batch over the same socket
@@gerritweiermann79 The auth point is moot. If you are using JWTs you don't need to hit the Auth system every time. I hope no serious application checks back with their auth system on every single request.
I really love modern PHP and that's what brings me money (working for companies people would never imagine PHP is used by)... but that said, I have my own baby product and we already have a roadmap that will require IoT on factory machines, so we're thinking about moving everything to Elixir to use as few languages as possible.
It's great but interactions can feel laggy if you are far from the server and over-relying on it to do everything without JS. Having a websocket for each client also scares me for scalability
Refreshing a complete template using a templating engine is very quick... I don't see the practical advantage of complicating it by separating dynamic and static parts just to see parts of the page render automatically instead of refreshing the whole page
At 10:53 I paused on the 3 handle_info(...) methods and noticed they all have identical bodies... Where's the benefit of matching on three different parameter _values_ when all three method bodies are identical? To be charitable, maybe this code is an example of someone (mistakenly) overly broadly applying elixir's form of method overloading... forgetting there's more than a hammer in the toolbox. And of course I'm guilty of copypasta too. But this is a 350 LOC brag-metric example, so consolidating handle_info(...) seems like it would be worth improving the brag metric by 14 lines.
I didn't notice immediately that the signature pattern for the :error version of handle_info() has a different parameter structure(?) wrapping the BrowserInfo in curly braces... So maybe only 7 lines of copypasta could be removed from the handle_info(..) to be shared between :loading and :complete calls.. Hmm. Is it possible to define a pattern that matches _both_ :loading and :complete to avoid the copypasta? I honestly have zero clue. I don't know elixir or phoenix or any of this. But seeing nearly identical code scrolling by triggers my "something is copypasta'd" code-skimming neurons.
@@willcoder You can do that with guards but you end up with one long ugly def line to save a few lines of code. The separate function heads in the video are much easier to read so imo it's better than combining them. You could improve the code by moving the body (the duplicate lines) into a separate function that can be called by all the handle_infos.
@@zegg90 Moving the body to a separate function, and calling it from all 3 sounds like the cleanest. Thanks for the info about guards. All new to me. :)
@@willcoder The reason is that `handle_info/2` is the catch-all callback for _any message_ that comes to your process. So you have to be more specific in your pattern matching to make sure you don't match any other message by accident. Rewriting with `def handle_info(msg, socket)` is therefore not recommended.
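For the curious, the extraction suggested above would look roughly like this (the struct name comes from the video; the body and helper name are assumptions):

```elixir
# Heads stay specific so stray messages still fall through...
def handle_info({_ref, {:loading, %Browser.Timing{} = timing}}, socket),
  do: put_timing(socket, :loading, timing)

def handle_info({_ref, {:complete, %Browser.Timing{} = timing}}, socket),
  do: put_timing(socket, :complete, timing)

# ...while the shared body lives in one private helper.
defp put_timing(socket, status, timing) do
  {:noreply, assign(socket, status: status, timing: timing)}
end
```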
Except that sending rendered UIs to the client is almost always a larger payload than just some underlying JSON, which can sometimes be very small. If one of your stated reasons was "bad internet connection", then all the more reason to keep the data transfer to a minimum.
In practice, payload sizes are very reasonable. In most cases, smaller than the equivalent JSON payload, which sounds counter-intuitive, but you have to realize that LiveView only updates the parts of the page that have changed. So a lot of information about the underlying models is never sent over the wire.
@@DerekKraan I doubt that. The average HTML page UI is 20 kb+, measured from my page. There is no way any JSON transfer ever comes close to that. Usually it's like an object with 5 keys rendering a whole subsection of a page. Or no JSON at all, that's what static elements are.
@@Leonhart_93 Note, I am not talking about the initial page render. On that one, we are making a trade-off between megabytes of JS and just sending the HTML. (I think LV still wins here by the way.) On re-render, though, the payload is often very small. Phoenix LiveView, when it compiles your template, splits it into "dynamic" parts and "static" parts. The static parts never get sent over the wire after initial page render. Only the dynamic parts do. This trick keeps updates tiny for the most part.
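To make the static/dynamic split concrete, a toy example (the statics/dynamics shown are a simplification of LiveView's actual internal diff format):

```elixir
# Inside a LiveView module:
def render(assigns) do
  ~H"""
  <p>Downloads: <%= @count %></p>
  """
end

# At compile time the template splits into roughly:
#   statics:  ["<p>Downloads: ", "</p>"]  # sent once, cached by the client
#   dynamics: [@count]                    # re-sent only when it changes
#
# So when @count goes from 41 to 42, the update payload is essentially
# the string "42" plus a positional index -- not the whole paragraph.
```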
@@DerekKraan Yeah but I can add countless arguments to that. Everyone has a powerful CPU in their pockets these days. Why make my server do extra load when their phones barely even register some extra processing? The main bottleneck is always the internet connection, and I have no reason to believe that HTML will ever be less in size than JSON. That's like creating new problems just because we are bored with the current solutions (which also happen to be back to much older approaches of server side rendering which I ditched).
@@Leonhart_93 Not everyone has a powerful CPU in their pockets. The update payloads _are_ tiny. I explained how they do it, so if you "have no reason to believe", that's on you. This is not "creating new problems". LiveView eliminates entire layers from your application. If this is not a benefit then I don't know what is. I have been a happy user for years and will remain one.
I guess this is old-school or "old guard" mentality... but I HATE the idea of having the server update or even know anything about the front end. In most of the applications I develop, a web frontend is only one option (and most of the time, not the only one being utilized)... this has always been my gripe with React. It was developed by engineers who could not figure out how to use the MVC design pattern.
I don't agree with the premise of moving rendering to the server. I like to keep concerns separated so that different teams can work on them. In addition, I would not put all the load on the server, nor would I want to restrict the user in what UI they use. I keep things separate: an API backend, and letting the client render the UI however seems fitting for the user. This is also beneficial wrt scaling and state handling. So yeah, I know this is a hot take right now. Convince me otherwise.
This is still super exciting. I will have to check out a bunch of these technologies. I still think Haskell is the closest to the syntax needed for modern full stack development. Everything is based off the C imperative syntax, but most of what we do now is functional reactive programming, so it makes sense to have a syntax more optimized for that.
The auth diagram doesn’t make sense. If you have your token in a cookie your backend should just need that to get checked when you hit the endpoint. That doesn’t spam your IDP
28:00 I think a lot of back-end devs are starting to realize that MVC doesn't work and that components are the correct way to abstract. I think Django has them, and Laravel too
Reminds me a bit of AtoZed IntraWeb, at least the old versions I used a while ago. All the code you wrote was server-side. The dev tools acted like you were writing a desktop GUI application, but it would do all the magic of syncing the GUI from the server. It had its problems: it didn't scale at all and was resource-heavy, and the abstraction would break down at times. Maybe it's gotten better, I'm not sure. But I like the basic model.
Elixir is a nice language and LiveView is an ok (albeit buggy) alternative to the React model. But after giving it a fair shot, I just couldn't make the sacrifice of type safety. Maybe in a few years if they get their static type system implemented. The fact that their JS side is so half-assed and lacks TS types (the Elixir/Phoenix team is AGAINST typescript) is also a massive turnoff.
Legit comment, I had the same, but I implore you to give it a chance. After working on a real Elixir project for a while, I've realized that all the tools you need for typesafety are built in. Most of the time you simply don't need types because the language provides significantly stronger type-checks than TS can. Additionally, you do have types, in the form of typespecs. As for what you do in your own JS/TS, it isn't really Elixir's concern if you have a separate build system or a headless frontend.
The more the server does, the more it costs to scale the application. A server should do the bare minimum in my opinion. The cost of going from a 550ms initial load to a 50ms initial load is not worth it in my opinion
Theoretically true, but the BEAM processes are so lightweight that you can run millions of them on a single Raspberry Pi. OTP's horizontal scaling capabilities make it super easy to build a network of servers. Metal is cheaper than Functions when active runtime is above 67%. You also don't have additional costs for in-memory stores and queues. Finally, and most importantly, unless you are running Google, developer time costs you significantly more than servers, and Elixir is famous for the ease of development it allows.
@@MrManafon I had some experiences with elixir and might be my typescript brain giving me headaches tbh (some pleroma and akkoma stuff for testing my own fedi software)
The LiveView runtime, the BEAM, is a battle-tested beast. It was designed almost 40 years ago to handle concurrency and fault tolerance from the ground up. Way ahead of its time. Look up Erlang/OTP.
LiveView payloads are usually smaller than the equivalent JSON payloads would be. Also LiveView has no problems scaling. It will be a long time before you need more than 1 server to handle the load.
I'm not a full stack guy, but I like to know what is going on. I hadn't heard a walk through of Elixir before but when I saw Theo react at time code 6:18 I immediately thought, I bet this is written in Erlang. Yup, that's what it is. That hot swapping stuff is pretty cool.
It is crazy how Elixir/Erlang is not as popular as it should be.
It is dynamic-typing so it is probably about as popular as it should be. JavaScript, on the other hand, is way more popular than it should be.
JS is forced into popularity.
@@username7763 The dynamic typing isn't nearly as bad as people claim it is. Elixir's pattern matching and type-specific operators prevent those "whole classes of errors" people talk about. They aren't caught at compile time, but your general tests should catch them; that is, tests you would have anyway, NOT type-specific tests (I don't have any of those myself). The whole "let it crash" philosophy makes it more than fine for most systems. All that said, I'm not mad it's getting static typing, but I'm in no rush. I also don't work on HUGE systems, though.
i actually like elixir, sounds great for performance heavy apps, but...
as it "should be"? we got some celestial beings in the yt comments.
elixir has no mature type checking before runtime.
js has multiple issues (an amateurishly designed lang, an ecosystem that's reborn every 8 years) but it has (pseudo) static typing with typescript.
I change the type of a field in my zod types, and with a proper eslint config I get to know about edge cases before pushing.
people choose, man. something better comes along? people switch.
webdev via wasm+some lang has yet to be proven as a good setup to switch out of js.
I write a web app with node+some framework, only learn 1 language for client and server side.
guess what happens when you are 1 year into building a really cool elixir app. oh, you need to learn js to deal with all the edge cases: the frankenstein app begins.
the guy is writing html inside a string for fuck's sake. i don't care if it has good highlighting; remember how nice it was writing jsx with typing? so you get to know if you messed up the name or value of an attribute? gone
idk what else to say man.
@@username7763 bullshit
let me tell you as someone who found elixir a few months ago after severe JS burnout: elixir phoenix is the way and is a beautiful piece of tech. come to this side, it is in fact filled with greener grass
As someone who tried it out of interest a year or two ago, I couldn't make the mix command work even while following the tutorial step by step. Imagine not being able to get a working project after running npx create-next-app
He used to program in Elixir when he was at Twitch
now we will make phoenix for gleam and call it thunderbird
Is elixir blazor but not Microsoft?
We've never heard anything like this before
Doing Elixir now for 7 years and it is getting better and better every year. My biggest concern with it is that devs are rare as unicorns. However, once you've assembled the right team, the possibilities with Elixir are endless!
I once had a recruiter from samsung who contacted me for an elixir position there!
@@grimm_gen how did they find you?
That's not a big deal since it's quite easy to onboard pretty much any FP fanboy (Haskell, OCaml, even Scala)
Is work in Elixir quite rare? I feel like it's a small niche.
Devs being rare is maybe a pro, because you get less competition
Elixir/Phoenix has been tempting me for years, might have to finally take the Elixir pill.
silly, you drink an elixir, you don't take it in pill form
It's awesome, but sooo different .. I highly recommend Pragmatic Studio's Elixir & OTP or Codegnome's Elixir courses. I was blown away by some features like the omnipresent pattern matching, etc.
😂 very true!
every elixir is a love elixir when you love elixirs :)
i took it and i ended up wanting more and more only to realise there's 5 companies in the whole country using it
I think one of the overlooked advantages of the LiveView approach is how it simplifies things that used to take multiple solutions down to a single answer.
Want to send push notifications to a connected user when they get a message? Just send a message to the liveview process.
Want to have long-running asynchronous processes on the server that get killed when the user leaves? Just spawn a process off the liveview.
Want to track user presence? Have the liveview process handle it.
These are all things that used to require separate pub-sub systems and lots of code on the front end. Now, they are all handled with one solution.
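As a minimal sketch of the first case (all module/topic names made up; `current_user` assumed to be assigned during mount):

```elixir
defmodule MyAppWeb.InboxLive do
  use MyAppWeb, :live_view

  def mount(_params, _session, socket) do
    # Each connected user gets their own LiveView process; subscribe it
    # to that user's topic so broadcasts land in its mailbox.
    if connected?(socket) do
      Phoenix.PubSub.subscribe(MyApp.PubSub, "user:#{socket.assigns.current_user.id}")
    end

    {:ok, assign(socket, :messages, [])}
  end

  # A broadcast (or a bare send/2 from any process) arrives here, and the
  # resulting diff is pushed to the browser over the existing socket.
  def handle_info({:new_message, msg}, socket) do
    {:noreply, update(socket, :messages, &[msg | &1])}
  end
end
```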
So we're back to PHP?
laravel livewire
@@glebtsoy4139 It is so much more lol. In the first 3 seconds of the video, Theo shows it updating a whole network of servers, their build caches AND pages on clients that are currently connected. Elixir/BEAM/OTP are insanely well designed and battle tested.
@@glebtsoy4139 I'll get on that once php implements it
While I agree that PHP is amazingly well suited for the types of systems it is usually applied to, Elixir/BEAM is five steps ahead of everything in the industry. You shouldn't focus solely on the templating; instead take a wider look at the runtime model, the ability to spawn thousands of processes, create server networks, use other technologies, have sane throws and sane logging mechanisms, and switch between HTTP APIs, websockets, RPC, and async APIs without needing half of AWS to achieve it.
Blazor uses a similar approach with websockets
I moved from React to Phoenix and LiveView about a year ago. Never going back.
JS. JS is hell
Legit a legacy vanilla JS app almost made me quit dev because of how everything was strewn all over the place
could you tell a bit more about your take on this? I'm studying the Elixir ecosystem, and everything looks so good and feels so good to develop with that I'm afraid there's no turning back.
@@joaopedrogoncalves3783 Well, for one thing the core developer is from Brazil haha. Beyond that, there's much to love: simple concurrency and distribution, fault tolerance, easy scaling. Key feature: Phoenix LiveView - a single code base in a single language is far better than JS on the front end and another language and architecture on the back end.
good luck finding a job for it (unless you make your own job, ofc)
@@slamislife74 skill issue
I have moved from Python to Elixir... I won't come back to anything else! Ruby syntax, Erlang ecosystem, beauty!
19:18 are you referring to The old Turbolinks? Or the current iteration of Turbo under Hotwire? Payloads now operate by either providing a frame tag to replace an existing frame tag (without loading whole template/layout) or specifying a swap of any ID'd DOM element with a small payload via turbo_stream (still without rendering template/layout). I guess the latter supports their argument of requiring additional dev tuning.
Yeah I think Theo is a bit out of date on Rails' frontend solutions but it looks like LiveView blows Turbo out of the water.
LiveView's creator seems to have spent time with React and internalized its advantages whereas I don't think DHH ever gave it any real consideration.
Still I don't think Theo should shit on Turbo and praise HTMX when they're doing fundamentally the same thing.
Using Elixir for the last 6 years full time and very happy (moved from Ruby). Thanks to the mature core (OTP/BEAM) we have a lot of instruments for live debugging, which helps with day-to-day debugging and dev. But I want to make you aware about hot-code reloading (swapping): it's very hard, and almost nobody in the Elixir community uses it for release processes. It just increases release-process complexity to an unacceptable level. It only sounds great; in reality, state management and swapping are very hard. And when we are talking about web apps, where stateless (the HTTP nature) is the natural approach, it's much simpler to just go with regular blue/green releases.
However, during development, hot-code reloading (especially on test servers) is a game changer.
This sounds like a realistic Senior Dev take. “Use it a lot, but never in prod.” is a completely legit conclusion.
The BEAM VM solves a number of difficult problems very well, and those same problems exist in other problem domains (i.e. managing a large system of telephones is a similar problem to managing a large number of async activities, which is often part of the business logic in web applications). IMHO hot-code reloading isn't one of those problems we have to deal with in our web applications; or if it is, we have developed other tools like containers and rolling deployments that make the state of the system during a new deploy easier to reason about. Or the tolerance for rolling on new application processes isn't as stringent as it is for active phone calls. This is the basis on which I stay away from hot-code reload.
I'm so glad that I've been learning this tech stack for a few months beside my full-time job; looking forward to switching to this stack as my full-time job
Loved this video Theo - as someone who hasn't written Elixir or Erlang since 2014, when I built a custom Unity3D component serialisation system for realtime networking, I'm so happy to see Elixir coverage.
- rewrite your backend to Ruby on Rails
- rewrite your backend to nodejs
- rewrite your backend to go
- rewrite your backend to rust
- rewrite your backend to elixir
With so much time spent rewriting, it's a real wonder how anything gets done in this fkn industry.
The truth is nobody rewrites except for tech youtubers. The project I work on still uses PHP/backbone/jquery like it did 10 years ago.
A lot of the web still runs on PHP and will for years to come. There are many new projects that still start on PHP, not to mention Java or C#. All of this cool new stuff happens only on Twitter and YouTube.
@@lastrae8129 and noobs
Nobody is rewriting anything lol. This is just the infinite content mill going
Douglas Crockford was right!
The industry needs a whole new alternative to JavaScript itself, not just libraries!
Something to end all those fuckups
and constant changes of technology
and be adopted widely across all browser bases
I think this video could do with a follow-up. A lot of people are missing that the BEAM is the real special sauce and how well it complements LiveView. Almost like it should be flipped the other way around.
To me I like incredibly thin clients where the server sends HTML and it’s done (an archivable resource); OR, I like thick clients where the user has so much to hand that the app might even run offline or in difficult environments. Even though this is amazing, I have almost no need for something in the middle - an app that is symbiotically tangled with the server and at the whims of my connectivity.
You make a good point about offline; it's one of the reasons I'm not a fan of any form of SSR. If I have a section of an app I want to make offline, going from client-side is pretty easy: work out some sync logic / storage and you're done. If all my eggs are in the SSR camp, I'll have a much bigger job on my hands.
Maybe this is the reason much of the offline-first tech that I like in theory has never gotten traction. Like, either you need the server or you don't. Either you need the server, and you run on the user's cell phone that's always connected, or you run on their tablet/laptop and they only pull you out when there's Wi-Fi. (Or they use their phone as a mobile hotspot.)
The reliance on websockets is what makes this sketchy to me
Crazy Good! And such a well written post to be able to capture the complex nuance in short sentences. Thanks.
Create a HTML MPA. Rewrite to PHP. Rewrite to Django. Rewrite to Ruby. Rewrite to Next.js. Rewrite to Svelte. Rewrite to Go. Rewrite to Rust. Rewrite to Elixir. We'll never get anything done lol 😂
Because social media developers only care about coding; they really don't care about making a product or solution. Just the fun of making it.
We should have stopped at "create an MPA". This was never hard. All the "requirements" that came after were invented by developers with feelings. I'm half joking, but I'm half serious as well - the large majority of projects would be *just fine* as an MPA with a bit of backend in Whatever. This stuff is as complex as we choose to make it.
What an odd joke/complaint, are you annoyed by variety and innovation?
@@Meuhandle are you even a developer?
@@professormikeoxlong bruh
Used elixir to deploy a service maybe 4 years ago and it was a blast, but back then it seemed like the elixir ecosystem was a bit stagnant. Might have to consider this again, now that I am bootstrapping a new company
How's the new company going?
I remember back in the C++ Borland Builder days, you could change a property on the representation of a UI component and it just updated live. In MS Visual C++, however, you had to call a function and say whether you wanted to push UI state into your model, or your model into UI state. The latter won out... why was that?
Like MFC's UpdateData(FALSE)
I don't understand this problem of out-of-sync state. My front end only ever maintains its own state. We solved this decades ago with MVVM (model, view, view model). The frontend is the view, it works off a view model (its own state), and then the model comes from the server. This feels like a specific problem that's being applied to every situation, much like Redux. Making your server responsible for UI state means your client is now tightly bound to the backend, making backend changes riskier. Want a simple UI change? Make two changes and two deployments. The benefits of even having an API fall away, because every client now has to accept an HTML response. The moment you have a frontend that needs a different HTML structure returned, this architecture becomes a horrible mess. I expect to see "server side UI API responses were a mistake" videos in the future.
It sounds like MVVM can make changes to the server model expensive. Usually the UI is driving changes to the shape of the server model, and development can become hampered by the UI team having to wait 3 to 5 business days for prioritisation, implementation, testing and deployment of server changes.
Stop making sense. If we made web development solved and boring, how could we talk about the new mistak-- err... "technology" of the week?
Just a consequence of trying to force JS on both backend and frontend and then blurring the line between the two of them. The server sends the DB data as JSON or whatever, the frontend uses it to create a UI. If that data changes, the server sends the updated version.
This is not hard at all, it is just made hard as marketing for certain tools.
@@simonhartley9158 MVC and its children are VERY battle-tested by now, and no, the UI shouldn't drive server model changes. The UI should display what the app needs to display, and the server should be providing enough information to do that. You don't change the server in response to UI changes; you change BOTH in response to app design changes.
Management paperwork is a separate, unrelated issue. Changing the color of a button should definitely not require a backend change.
I feel every framework is driving towards being a new Visual Basic, which we abandoned for good reasons. JSON is pretty good for me, I can still do rich, dynamic UI without having to fight a renderer to produce the HTML I want the client to use.
@@Leonhart_93 your model doesn't seem to take advantage of the benefits of streaming, nor deal with update granularity vs. client side request waterfalls.
The reactive aspect is interesting.
My main issues with it are:
- Focus on HTML over the wire instead of a JSON API means that you can't reuse the API for other applications/services for integration.
- Mixing server code and HTML forces everyone to be a full stack developer and creates unnecessarily tight coupling between front end and back end, which harms modularity. It was one thing for React to combine JS + HTML, but that was fine because both of those are front end concerns; they overlap significantly as they both deal with UX/UI. Frontend and backend are often different concerns and they don't overlap so much. Front end requirement changes often arise independently of back end requirement changes. I don't want to have to read and modify back end code when making front end UX changes; it adds unnecessary security risks.
28:32 Still pretty new to that idea, what's the actual difference here if you're making all of your elements under custom functions or even classes?
Recently picked up Elixir after being in JS for a decade and have to say it feels revolutionary.
Did Vercel... just use "The Conjoined Triangles of Success" for their graphic about blue-green deployments?
HA! Good spot
So wait, server side includes were the way to go all along? I was about 11 or 12 when they were the standard way to include dynamic content in static pages so my memory and understanding may be off, but I swear that at least conceptually it's roughly the same idea
People love to rediscover shit in this industry and then pitch it as a gospel
5 years from now, view-independent APIs will be all the fashion again
You know how in the beginning he references Intercooler.js? Well, we were using Phoenix with Intercooler.js. LiveView is an evolution of this pattern.
just came to make the "so we're back to PHP?" comment without reading up on anything for even 5 minutes
I feel like that every day I go to work... I miss Elixir so bad
Coming from blazor I am very hesitant to have a server call to update any state. This sucks if you have a slow Internet connection, and makes it unusable if connections drop.
Curious if their solution works better
They're closing the connection once the page has loaded, and then the client components can take over for interactivity etc. at least that's one way to do it.
I'm also using blazor and while i love what microsoft is doing i'm starting to see the pitfalls... still has a long long way to go. (Currently using Blazor Hybrid on android and iOS, i love C# especially the functional parts)
It is not. It will suck with poor network exactly the same way as Blazor.
@@hauleth That doesn't make any sense, you can use Phoenix without keeping the connection alive, performance with that is equivalent to a normal request, you just get to draw faster
@@hauleth Same goes for blazor btw, and idk why you think blazor sucks lol
@@dahahaka I didn't say that Blazor or LiveView sucks. Just that both technologies will suck when there is a poor connection between server and client.
What do you do with your iOS Android apps? This doesn't make sense when you have native apps and treat the website as just another one of your platforms.
The innerHTML reminds me of what I built in the pre-JS-framework time period, when we used jQuery for compatibility with all browsers. We also had to support IE 6.0, where it was easier to update the DOM with innerHTML than to do DOM operations, because those were too slow. The only downside is that you lose the focus of an input field if the HTML containing it is updated. We did try to reassign the cursor position after an update.
It was much faster in loading the page and also in building the javascript.
How you thoroughly discussed each topic and touched on, imho, "just write better software" made me sub and look out for these types of videos. I passed out when you were talking about SSR and how it didn't solve the problem, and its caveats. You get people thinking the right way. Software should be easy and good at solving day-to-day problems. Consider you a leader here.
been doing elixir liveview for 4 months now for my job.
- sockets are memory intensive, so for heavy pages we need to ditch liveview for cost reasons
- lack of types makes it easy to mess up
- a map can have its keys as atoms or strings; if you have the same object with different key types it causes a huge headache
- you will eventually need javascript, and when you do it is very unpleasant to deal with
i do not think it makes sense to pick liveview over react for the frontend
Sockets themselves are not memory intensive, but if LiveView is storing lots of state with the socket connection, then that could add up. I use React client-side with sockets, so I feel I get the best of both worlds.
Tried mixing with Svelte?
I am sure that some proxies do not work with websockets, so does LiveView have a workaround for this, or will it just not work?
The Phoenix channels client, which is used to implement LiveView, falls back to longpolling if a websocket can't be established.
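For reference, the fallback is enabled per socket in the endpoint. A rough sketch of what the relevant lines look like in a Phoenix 1.7-era generated app (module and option names as generated there, so treat the details as approximate):

```elixir
# lib/my_app_web/endpoint.ex -- enable the longpoll transport alongside websocket
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options]],
  longpoll: [connect_info: [session: @session_options]]
```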
@@dougvought awesome that's what I wanted to hear
Making server side cool again? I feel like we've just gone through a loop
Welcome to CS. I'm waiting for the inevitable backlash to cloud providers, and everyone runs back to in-house data centers.
@bruceleeharrison9284 getting away from cloud would be hard these days. But if I was gonna dream up a perfect scenario (imo), I'd say CS devs/engineers create a union that works to protect our rights and experience. But ALSO creates at-cost data centers for union members to utilize.
@@Frostbytedigital it wouldn't be a reversion to what came before. It would be a new, updated approach that revisits the concept with modern tooling. Probably a way to outright buy capacity in a datacenter such that you "own" the machines. Services would allow web reconfiguration of the setup, some of it immediate (since it can be controlled in software) and some having a lead time for physical setup. (e.g. installing a direct network line)
I can see this being so much cheaper than clouds that excess capacity will be bought to ensure little to no lead time for teams requesting hardware. Which will work fine right up until someone decides further excess capacity isn't needed and trims the budget. Then we'll be back to long lead times and cloud will become more appealing again.
Yay, the CompSci pendulum...
The pendulum between client and server has been swinging since the late teletype early dumb-terminal days. We seem to have new names every few years for mostly the same concepts. At least it's a client-vs-server loop... and not recursive... Unless you look at the bare metal/image/package/VM/container progression... that definitely _feels_ recursive.
@@bruceleeharrison9284 that is sooooo true ! Ha! I am already catching the vibe
Can we do offline first PWA apps with live view? Seems limiting...
No lol
@@t3dotgg that's a huge load of functionality just boom, poof, gone..
@@t3dotgg Actually yes, there are people (crazy people) who run a local LiveView instance on the device, which powers the frontend and communicates with the backends.
😂
@@t3dotgg liveview-svelte-pwa: I am not sure if it counts as an "offline-first PWA app with LiveView"...
What is this drawing illustration app at 21:11?
It is Excalidraw
Been using it for 5+ years now. Getting better all the time.
Wow this is incredible I haven’t used phoenix in a long time and I want to go back to it🎉 Amazing
It’s no wonder so many people burn out in this industry.
The end result (demoed in the intro) reminds me of how cool it was to work with Meteor ~10 years ago.
22:44 I don't know about GraphQL. But if you have multiple services, you don't want them to call the auth server all the time. So you use JWT instead. With that, you just ask the auth server once for its public key, and then you can check every request by verifying the signature of the JWT. If it is valid, you can read the data from the JWT and know which user it is. No need for any additional call.
Btw, with the websockets solution: either you don't use microservices but a monolithic application, in which case you don't have multiple services, you just have one monolith that does authentication and all the endpoints; or you have to split the websockets and basically have 3 separate connections. So you're kind of comparing different things / different architectures with each other.
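To make the first point concrete, here is a minimal sketch of verifying a JWT locally instead of calling the auth server per request. It assumes HS256 with a shared secret, and it skips claim checks (exp, iss) and constant-time comparison; in a real app you'd reach for a maintained library such as Joken:

```elixir
defmodule JwtCheck do
  # Verify the HMAC-SHA256 signature of a compact JWT ("header.payload.sig").
  def valid?(token, secret) do
    with [header, payload, signature] <- String.split(token, ".") do
      expected =
        :crypto.mac(:hmac, :sha256, secret, header <> "." <> payload)
        |> Base.url_encode64(padding: false)

      expected == signature
    else
      _ -> false
    end
  end
end
```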
But what if there is a change in the user permissions? The JWT will be outdated, and that may be dangerous, so you need to specify short times-to-live and re-validate (going all the way to the DB or a centralized auth service). If you're going to do that, why not simply use a traditional server-side in-memory cache? In that case you can use cookies or whatever (that only hold the user id) and check the cache for the permissions. If the cache is short-lived you are in the same situation as JWT, except for the specific instance where the client is connected, where you can edit the cache record directly (considering how load balancers work, in most cases it will be the same server anyway).
@@Robert-zc8hr the privileges rarely change. You can have, for example, 15-minute tokens and a refresh token. If privileges change, you create a new 15-minute token with the changed privileges after the old one expires.
The alternative with the cache isn't good, as the auth service needs to be online all the time and each application server has to ask it. So if it is down, everything is down. If you have JWT and the auth server is down for 10 minutes, it's not that big of an issue. Some users can not use the services, but others can. Also, you can use stateless logic like CloudFront functions to check JWT tokens.
@@Robert-zc8hr well, that's why you don't keep permissions in the JWT.
@Duconi yeah, I also have no idea how they are going to edge-cache their stuff given they serve everything over websockets
Using web sockets to update the html is a bit overkill
Welcome to web dev in 2024.
@@kezzu5849 well, is it any more overkill than having hundreds of thousands of lines of JavaScript running to compile a 600 kB JS bundle to run in the user's browser to show a hello world?
?? me not understand
You update it in all web-apps by something .. by json .. by html .. by websockets etc.
@@jsonkody why not just update it with js using fetch?
Good luck when you have lots of people hitting your api endpoint at the same time instead of streaming the change via websocket.
"might seem great if you are near the servers it is hosted, but as soon as you go somewhere else your experience sucks" is a perfect description of a specifc problem and not a generic one.
You might think that all apps are like email clients or file uploads or streaming, targeting all possible universe's users where distances could easily matter and introduce niche problems to those revenue generating specs, but most apps are fairly local. And most of their problems can be solved horizontaly.
I would argue that most MVPs are not worth the effort of fast speeds either. So you are only left with those niche apps. not niche in terms of user volume but in type of app.
Also keep in mind that distance is not the only factor for speed experience.
So how many flies did those bazookas kill?
Well, this is definitely the push I needed to check out Elixir/Phoenix. This looks really impressive.
Streaming is not just an SSR tech stack thing; I've been using sockets for years now. Generally speaking, the types of web pages I create are commercial dashboard, data-entry type systems, and the majority of comms are via sockets. Like you pointed out in the video, one advantage here is that auth is only required once; other advantages are that data can also be sent in binary, and even before HTTP/2 you could create a protocol that multiplexed the requests. I still use REST endpoints, but generally this is for legacy comms or B2B logic. In the long run this takes way less data than sending HTML, mainly because the data can be cached aggressively and invalidated by the server triggering updates. SSR makes streaming easy, but please don't claim it takes less bandwidth than client-side rendering, because that depends on how you do client-side rendering; using REST is just one option, and because of its stateless model it's not the best for performance.
Those saying that PHP already did this don't get it...
PHP already did this
@@DKLHensen You don't get it.
I bet both of you guys are right. In the first half of the video, it just auto-AJAXes and auto-reloads the project when the code changes. (At least for real-time development, PHP's Laravel Livewire and Elixir are equivalent; only the final shipped feature varies, because browsers keep changing and different web technologies update at different paces against new browser features.)
To be fair, the web is a big ecosystem, and frameworks and libraries hide a lot of what they already do for us, with niche pieces that help here and there. Phoenix is just slow and steady and never takes shortcuts like most JavaScript developers do, so they think it is magic... especially those who worked around the Facebook era should know that trick; maybe they never used it, but they know it.
Laravel copied LiveView. And I don't blame them
Keeping a realtime websocket connection open comes with lots of issues
There is a fallback to long-polling. Curious to hear which other issues you are referring to, because LiveView has been running in production for years.
Not a lot of issues, no, just some issues like anything in tech.
What drawing tool is Theo using at 21:14?
Looks like Excalidraw
Noob, but serious question: 24:55 -> Amongst Elixir, Go, Rust, Zig and C#, which ones have similar "included" approaches, delivering comparable results in terms of request-volume reduction?
Or is it exclusive to Elixir's ecosystem? I'm just trying to understand if this is something new for Elixir specifically or actually for the whole scene.
This is new to the web. There have been some similar-ish things in game dev for multiplayer games, but this is entirely new as a way to update HTML and manage a user's session over time
People are gonna come in here and reply "but TurboLink did this before!!!1!". Those people are wrong. Nobody's done a real concept of "long running per-user sessions on the server" in Ruby.
@@t3dotgg Thank you so much for such a fast reply! I'll definitely take a further look to understand more about this new feature, it indeed seems to be a very relevant mark!
How big of a project (number of online concurrent users) would you say is enough to justify choosing Elixir instead of Go (for example), if your main concern were to reduce financial costs while utilizing cloud services as the main backend? I know this is a broad question and heavily depends on the way the code is structured, but as an independent dev living outside the US, that's kinda my main nightmare and I rarely see people talking about these managerial aspects of development...
I'm an economist and a newbie in web development, and I definitely had skill issues with React in the past. But with Elixir and Phoenix I can start to ship web apps to my students.
This is awesome news! I used to work with Elixir/Phoenix and LiveView. But I had the choice to work on a questionable product with those technologies, or on a great product with React/Next.js, and went for the latter. But sometimes I miss the elegance of Elixir/Phoenix. I'm not a big fan of Tailwind - it's like a hammer that makes everything look like a nail. But Elixir/Phoenix IS the right nail for it, it's the perfect match and I wouldn't like to use anything other than Tailwind with it.
Let me guess: a gambling site?
@@ironhammer4095 Not that shady, but close 😅
15:00 same thing I kept thinking about RSC when I first got to know of it: for every update, it sends the whole JSON representation of the page. How optimal is that??
Or maybe revalidateTag (sends the respective component's JSON representation) works differently than revalidatePath (sends all components' JSON representation), I guess
17:12 "This is like Qwik, but good and solving real problems" - I really wish you'd elaborate a bit more on this, because to me both libs seem to solving the same problems, just doing it differently. Both libs identify they need of splitting into static and dynamic parts, both only send minimal code for interactivity and both do fine-grained updates. What is the "real problems" that Phoenix is solving and why is it "good"?
I believe that a real difference between Qwik and Phoenix is that Qwik has a better user-story when it comes to frontend/browser only components/islands. Phoenix dev-experience would favor the back and forth between server and client over WS (at the moment of writing).
@@bas080 that is (more or less) my understanding as well. Each solution has its pros and cons, but claiming one is simply "good" and "solving real problems" (implying that the other one does neither of those things) is just childish.
Also, to be fair: users being on the existing page and not getting the update till they refresh is not some huge problem. Even massive companies like Amazon accept that behavior and won't be changing frameworks to "fix" it.
He has talked about it in a JS framework tier list. I think you can boil it down to Qwik's weird syntax that he can't digest.
brother, it can scale to 1m websocket connections, it's all REAL-TIME. Qwik does not even compare in the slightest
Grandpa PHP has that "Told you so" face rn
I'm Brazilian, and Elixir was developed by a Brazilian. Coincidence? I have been programming Elixir for years
I have used the .Net version of this which is called blazor server. However, the big problem is the high latency for anyone who does not live close to the server.
You mean high latency? Because low latency is good as it's measured in seconds (or milliseconds).
I meant high
Thank you. We desperately need more variety in architecture, the json API monoculture needs to be challenged, HTMX and Liveview are two complementary approaches to do this
Why not use SOAP? Why not use XML? We moved away from them to JSON because it's simpler. Sending HTML snippets is a step back in the wrong direction. I don't say it's bad in every case. And I agree that we need more variety in architecture. A better solution than React and Phoenix LiveView is, for example, static HTML. If you don't need dynamic content, just put it there as static HTML. You can generate a blog with Hugo and don't need computing, neither on the client nor on the server side. Of course that doesn't work for everything. Where you need a bit of dynamic content, you can provide it as a custom element, for example comments on a blog. But if you have, say, a live ticker on that page, Phoenix LiveView is maybe a good solution for that. Can you add a custom element and serve it with Phoenix LiveView? 🤔
LiveView, HTMX, Hotwire (basically rails version of LiveView, it is different but it has similar abstractions), etc… should all make everyone consider alternatives to react/vuejs for new products and/or major features. It doesn’t prevent react or front end JS code for truly dynamic interfaces when needed, but allows developers to server render 98% of their UX on the backend with the tools that they know and love.
Also, when you are talking about Rails and Turbo, that is tech from 5+ years ago; their upgraded Turbo (Turbo Drive/Hotwire) is much better and solves problems that the previous generation didn't. I like LiveView better, but you need to stop referring to old framework problems that haven't existed for a long time.
This is the best theo. The educator with a great tone and not sounding pompous like in some videos.
At 23:28 we were talking about auth and how it would require 3 roundtrips to the server.
Wouldn't it make sense to have the profile and permissions within a JWT or Secure Cookie? If there's an update in one of these you could update the JWT or Cookie.
I know this depends a bunch on the architecture and the server, and whether you have it split into several services...
A secure cookie still requires authentication if you use a normal bearer token.
JWTs have an invalidation issue. If you want real-time JWT revocation, you, again, need to do authentication for every round trip.
@@chakritlikitkhajorn8730
Yes. But you'd have fewer roundtrips, as the claims are part of the token (correct me if I'm wrong).
Additionally, you can always have an in-memory store with it, right? That way you can blacklist the JWT and it'd be fast.
If you have short-lived JWTs (expiring in an hour or so), you can minimize the attack surface for when the in-memory store becomes unavailable
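The short-lived-token-plus-blacklist idea above can be sketched with ETS. Note this is per-node, so in a cluster you'd replicate revocations (e.g. via pub/sub); all names here are made up:

```elixir
defmodule TokenDenylist do
  @table :revoked_jwts

  # Create the table once at startup (normally from a supervised process).
  def init, do: :ets.new(@table, [:set, :public, :named_table])

  # Remember a revoked token's ID (jti claim) along with its expiry.
  def revoke(jti, exp_unix), do: :ets.insert(@table, {jti, exp_unix})

  # A token only counts as "revoked" while it would otherwise still be valid.
  def revoked?(jti) do
    case :ets.lookup(@table, jti) do
      [{^jti, exp}] -> exp > System.system_time(:second)
      [] -> false
    end
  end
end
```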
I rather wish for something that is focused on being local (client side) first. So like everything you do is local and synced with the server if available at some point in time, including the updating of the client itself.
10:54, Ok I can see why being able to overload funcs like this is good, but these are literally three identical funcs, why would I want to write something three times if all of them do the same thing?
Elixir does not have function overloading. These **are** the same function, just with different heads (as they're called in Elixir). It is literally compiled to a single function with a big switch inside of it. And no, the clauses do not all need to do the same thing.
@@hauleth thanks for correcting me, but after checking again the first two are identical. I do not see a reason to write something twice if it compiles into the same thing
@@k3rnel-p4n1c I think you're right, the code here could be condensed slightly. One thing to note is that `handle_info/2` is the generic callback for _any message_ being received by the process. This means that you have to be a little more specific in your pattern matching, to make sure it doesn't match other messages that could come in.
That might be the reason for the verbosity here.
@@k3rnel-p4n1c I think it's an example to show people that it's possible to write multi-clause functions. `handle_info/2` is an important callback as it allows you to receive messages from external processes. In practice you'd use a guard: `handle_info({ref, {status, %Browser.Timing{} = timing}}, socket) when status in [:loading, :complete]`.
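Spelled out as a full clause, the guard version would look roughly like this; the body is illustrative, loosely based on the video's example rather than its actual source:

```elixir
def handle_info({ref, {status, %Browser.Timing{} = timing}}, socket)
    when status in [:loading, :complete] do
  # illustrative body: stop monitoring the task and update the assigns
  Process.demonitor(ref, [:flush])
  {:noreply, assign(socket, status: status, timing: timing)}
end
```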
what about scalability? isn't scaling websockets super hard?
In a BEAM world, nope, not really. It's very linear, and the tooling to scale and share data between multiple servers is baked in and core to Elixir and Erlang. It was literally designed for this from the ground up.
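As a taste of what "baked in" means, fan-out across nodes is one function call with Phoenix.PubSub; the topic name and message shape below are made up:

```elixir
# In the LiveView's mount: subscribe this process to a topic.
Phoenix.PubSub.subscribe(MyApp.PubSub, "scores")

# Anywhere in the cluster: every subscriber's handle_info/2 receives this,
# no matter which node it runs on.
Phoenix.PubSub.broadcast(MyApp.PubSub, "scores", {:score_updated, 42})
```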
@@EightNineOne thanks for the answer!
It said 25k max users
@@explorertoad8882 nooo, a fairly modest server can handle hundreds of thousands concurrently
Still not a huge fan of elixir syntax, but other than that, this looks fantastic!
I'm not entirely sure how well it would work with highly interactive sites that have animations etc, there might be some interaction delay still, just because network latency. Not talking about updating a progress bar, but things like tooltips, drawers, popups etc (unless I'm understanding this wrong. But I wouldn't want 100ms delay after clicking a button to show a popup).
But if you don't need that stuff and just want a semi-interactive site, it's very much promising. Kudos to the team for pushing this pattern!
Drawers and stuff are things that JS does well. You don't need LiveView for that
They ran 60fps animations across the ocean in one of the keynotes and it worked without issue. Some folks even build browser games with LiveView. And usually you'd use css animations for most things anyway
damn, this seems like THE upgrade to the Go/htmx stack. I'm writing a side project in Go and htmx, and yes, it is way easier and faster than the whole bloated frontend JS frameworks, but I gotta say it still gets a bit hard as the project grows, because htmx is... just htmx. It's like Windows running on any random PC: it works, but it's not a really nice experience; it just does what it has to. Elixir and Phoenix are like a Mac running macOS; they are crafted for each other, which makes especially the templating part a breeze.
Obv this breaks down for things which need frequent rerenders. Like a game for example
Yep! Server components do as well. IMO it's not a "break down" thing so much as not the solution space server-first tools operate in :)
Even react shits the bed for anything that is real-time. Canvas with WebGL/WebGPU is the only way to go.
I use blazor server at work which I guess works similarly to liveview. The problems we encounter at work is scaling and connection issues due to the constant need of being connected to the socket.
For example, Chrome's tab-saving features disconnect the socket, and the state of said page is gone forever because the page needs to be reloaded. If the server is getting crowded, latency becomes a big issue and interactive elements on the page feel sluggish.
I can’t say I’m a big fan of needing to be connected at all times to the server. How does Phoenix tackle these problems?
I have no .NET experience, but the BEAM is exceptionally good at handling lots of connections. It favours equal distribution over raw speed. In terms of losing state, I think it's a general misconception with these technologies that you should be storing lots of ephemeral state. While the initial sell of LiveView was "no JS," it's moved FAR on from that tagline. It encourages doing many things on the client, like opening menus and whatnot, and they have helpers for that. If you have state that needs to survive a refresh, then it needs to go in the database or local/sessionStorage. You have the same problem with client-side JS frameworks if you are just storing state in memory.
you don't need the round trip for every interaction, but if you do, the BEAM (VM) handles the connection through lightweight processes that work independently, handling millions of connections without increasing latency (unless you have another bottleneck)
What do you mean by “needing to auth 3 times”? We usually just use a JWT that is signed with a BE secret.
You have to decode the JWT and verify the signature every time, genius. And that's if you're using JWT. There are other types of authentication, and tbh most services nowadays leave the auth part to an external service, so you will have to do 3 requests to those external services.
@@upsxace wow, toxic trash talk. No thanks. Just study some more :)
@@AlanPCS saying "genius" is enough to hurt you? wow. Tell me where I'm wrong please, I'm open to it (unironically)
@@upsxace nah… I think you are Genius enough to find it by yourself :) have fun!
What app are you working on where you don't need the ability to ban people or change their roles?
JWT doesn't magically solve those issues. Besides that, JWT still requires resources on every request: just cryptography instead of a DB lookup.
Just a heads up: when used as a noun, "attribute" is pronounced with stress at the start, like "AH-tri-bute". When it's a verb, it's pronounced as you did, "uh-TRIB-ute". Wiktionary has accurate transcriptions if you can read IPA. This changing of stress happens with lots of verb/noun pairs, like "the blue record" and "they record a podcast".
Regardless, awesome video, love hearing you talk about Erlang/Elixir adjacent things. And I’m 100% sure this mistake didn’t hinder anyone’s comprehension, just felt a little off when I heard it.
You need help
I've known LiveView since its inception, but I haven't been using Elixir for the last few years, and I really thought it had already been 1.0/production-ready for a long time; really surprised to see that 1.0 was just released now. It shows the care the team takes.
All these frameworks tend to make things really slow: development is slow, compiling is slow, browsing the result is slow. Misunderstand me correctly, we need development and new tech, but many of the current technologies seem like worse versions of what we already had. Use the potential of JavaScript correctly, do more client-side, minimize backend communication, try to deliver as much of the data as possible up front and only dynamically load what cannot be loaded up front. When I see APIs loading lists of 10-15 of something simple using JSON, I want to go on a rampage.
The issue with LiveView is the lack of JS UI libraries like Material. Building UI components from raw CSS is a pain.
i started learning Phoenix LiveView a year ago; haven't seen a complete framework like this. It is a little difficult to learn, as there are not many tutorials available, but the books are solid.
It is unfortunately very true. There are a lot of resources out there, sure, but nothing beats working on a real project or seeing someone's real-world codebase. I remember watching José Valim use Livebooks on his Advent of Code Twitch stream, and god darn i learned so much from seeing that. I suppose a bit of a lucky thing is that the whole ecosystem, including Elixir itself, is written in Elixir, so we can fairly easily inspect their files and the way they manage projects. There are also a couple of amazing podcasts on Spotify worth listening to.
If the communication between the browser and the API server goes through HTTP/2 or HTTP/3, the many requests might get a boost from running as a kind of batch on the same socket
But not auth, which was his main concern
@@gerritweiermann79 The auth point is moot. If you are using JWTs you don't need to hit the Auth system every time.
I hope no serious application checks back with their auth system on every single request.
Can someone explain to me why people prefer to use k8s and react for all these?
I really love modern PHP, and that's what brings me money (I work for companies that people would never imagine use PHP)... but that said, I have my own baby product, and we already have a roadmap that will require IoT on factory machines, so we're thinking about moving everything to Elixir to have as few languages as possible.
It's great but interactions can feel laggy if you are far from the server and over-relying on it to do everything without JS. Having a websocket for each client also scares me for scalability
refreshing a complete template using a templating engine is very quick... I don't see the practical advantage of complicating it by separating dynamic and static parts just to see parts of the page render automatically instead of refreshing the whole page
At 10:53 I paused on the 3 handle_info(...) methods and noticed they all have identical bodies... Where's the benefit of matching on three different parameter _values_ when all three method bodies are identical? To be charitable, maybe this code is an example of someone (mistakenly) overly broadly applying elixir's form of method overloading... forgetting there's more than a hammer in the toolbox. And of course I'm guilty of copypasta too. But this is a 350 LOC brag-metric example, so consolidating handle_info(...) seems like it would be worth improving the brag metric by 14 lines.
Unless elixir _demands_ three versions of handle_info(...)? I don't know.
I didn't notice immediately that the signature pattern for the :error version of handle_info() has a different parameter structure(?) wrapping the BrowserInfo in curly braces... So maybe only 7 lines of copypasta could be removed from the handle_info(..) to be shared between :loading and :complete calls.. Hmm. Is it possible to define a pattern that matches _both_ :loading and :complete to avoid the copypasta? I honestly have zero clue. I don't know elixir or phoenix or any of this. But seeing nearly identical code scrolling by triggers my "something is copypasta'd" code-skimming neurons.
@@willcoder You can do that with guards but you end up with one long ugly def line to save a few lines of code. The separate function heads in the video are much easier to read so imo it's better than combining them. You could improve the code by moving the body (the duplicate lines) into a separate function that can be called by all the handle_infos.
@@zegg90 Moving the body to a separate function, and calling it from all 3 sounds like the cleanest. Thanks for the info about guards. All new to me. :)
@@willcoder The reason is that `handle_info/2` is the catch-all callback for _any message_ that comes to your process. So you have to be more specific in your pattern matching to make sure you don't match any other message by accident.
Rewriting with `def handle_info(msg, socket)` is therefore not recommended.
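For completeness, the "shared private function" refactor suggested above would look roughly like this (body details invented for illustration):

```elixir
def handle_info({ref, {:loading, timing}}, socket), do: record(ref, :loading, timing, socket)
def handle_info({ref, {:complete, timing}}, socket), do: record(ref, :complete, timing, socket)

# One place for the shared body, called by both clauses above.
defp record(ref, status, timing, socket) do
  Process.demonitor(ref, [:flush])
  {:noreply, assign(socket, status: status, timing: timing)}
end
```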
Except that sending rendered UIs to the client is almost always a larger payload than just some underlying JSON, which can sometimes be very small. If one of your stated reasons was "bad internet connection", then all the more reason to keep the data transfer to a minimum.
In practice, payload sizes are very reasonable. In most cases, smaller than the equivalent JSON payload, which sounds counter-intuitive, but you have to realize that LiveView only updates the parts of the page that have changed. So a lot of information about the underlying models is never sent over the wire.
@@DerekKraan I doubt that. The average HTML page UI is 20 kb+, measured from my page. There is no way any JSON transfer ever comes close to that.
Usually it's like an object with 5 keys rendering a whole subsection of a page.
Or no JSON at all, that's what static elements are.
@@Leonhart_93 Note, I am not talking about the initial page render. On that one, we are making a trade-off between megabytes of JS and just sending the HTML. (I think LV still wins here by the way.)
On re-render, though, the payload is often very small. Phoenix LiveView, when it compiles your template, splits it into "dynamic" parts and "static" parts. The static parts never get sent over the wire after initial page render. Only the dynamic parts do. This trick keeps updates tiny for the most part.
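A toy example of what that split looks like in practice; everything outside the interpolation is compiled as static and sent once, so a later change to @score ships only the new value:

```elixir
def render(assigns) do
  ~H"""
  <div class="scoreboard">
    <h1>Current score</h1>   <!-- static: sent on first render only -->
    <p><%= @score %></p>     <!-- dynamic: only this value is diffed -->
  </div>
  """
end
```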
@@DerekKraan Yeah but I can add countless arguments to that. Everyone has a powerful CPU in their pockets these days. Why make my server do extra load when their phones barely even register some extra processing?
The main bottleneck is always the internet connection, and I have no reason to believe that HTML will ever be less in size than JSON.
That's like creating new problems just because we are bored with the current solutions (which also happen to be back to much older approaches of server side rendering which I ditched).
@@Leonhart_93 Not everyone has a powerful CPU in their pockets.
The update payloads _are_ tiny. I explained how they do it, so if you "have no reason to believe", that's on you.
This is not "creating new problems". LiveView eliminates entire layers from your application. If this is not a benefit then I don't know what is.
I have been a happy user for years and will remain one.
I guess this is old school or "old guard" mentality... but I HATE the idea of having the server update or even know anything about the front end. In most of the applications I develop, a web frontend is only one option (and most of the time, not the only one being utilized)... this has always been my gripe with React. It was developed by engineers who could not figure out how to use the MVC design pattern.
I don't agree with the premise of moving rendering to the server. I like to keep concerns separated so that different teams can work on them. In addition, I would not put all the load on the server, nor would I want to restrict the user in what UI they use. I keep things separate: an API backend, and letting the client render the UI however it sees fit for the user. This is also beneficial wrt scaling and state handling.
So yeah, I know this is a hot take right now. Convince me otherwise.
No, thank you. Isn't that what Blazor Server is already doing?
Does Blazor have the superpower of the Erlang-OTP BEAM? I don't think so. LiveView is powered by the most powerful virtual machine on the planet.
Yes, and it comes with a bunch of issues.
Can we use this with a templating language that is not horrible?
Could be worse. It could use JSX 🤢
Is "basically just HTML" terrible?
Why does he keep looking to the right side of the camera? Is he looking at his captors?
This is still super exciting. I will have to check out a bunch of these technologies. I still think Haskell is the closest syntax to what's needed for modern full stack development. Everything is based off the C imperative syntax, but most of what we do now is functional reactive programming, so it makes sense to have a syntax more optimized for that.
I continue using React for React Native and for static sites; I only use Elixir for admin dashboards and GraphQL APIs
The auth diagram doesn’t make sense. If you have your token in a cookie your backend should just need that to get checked when you hit the endpoint. That doesn’t spam your IDP
Still need to check the signature (if using JWT or similar).
Yeah but that’s not a full IDP check like was shown
28:00 I think a lot of back-end devs are starting to realize that MVC doesn't work and that components are the correct way to abstract. I think Django has them, and Laravel does too
How is offline support?
What field is this?
That the web still isn't a pubsub thing by default in 2024 would make my blood boil back when I learned programming in 2011.
Try an early return from a function. Or even a simple fibonacci with a cache. It is not a straightforward jump.
Reminds me a bit of Atozed IntraWeb, at least the old versions I used a while ago. All the code you wrote was server-side. The dev tools acted like you were writing a desktop GUI application, but it would do all the magic of syncing the GUI from the server. It had its problems: it didn't scale at all and was resource heavy. The abstraction would break down at times. Maybe it's gotten better, I'm not sure. But I like the basic model.
Elixir is a nice language and LiveView is an ok (albeit buggy) alternative to the React model. But after giving it a fair shot, I just couldn't make the sacrifice of type safety. Maybe in a few years if they get their static type system implemented. The fact that their JS side is so half-assed and lacks TS types (the Elixir/Phoenix team is AGAINST typescript) is also a massive turnoff.
Gleam + Lustre might get there first
Legit comment, I had the same, but I implore you to give it a chance. After working on a real Elixir project for a while, I've realized that all the tools you need for type safety are built in. Most of the time you simply don't need types, because the language provides significantly stronger type-checks than TS can. Additionally, you do have types, in the form of typespecs. As for what you do in your own JS/TS, it isn't really Elixir's concern if you have a separate build system or a headless frontend.
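For anyone curious what "types in the form of typespecs" means: they're annotations checked by Dialyzer rather than the compiler. A made-up example:

```elixir
defmodule Accounts do
  @type user :: %{id: integer(), email: String.t()}

  @spec find_by_email(String.t()) :: {:ok, user()} | {:error, :not_found}
  def find_by_email(_email) do
    # lookup elided
    {:error, :not_found}
  end
end
```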
Phoenix was truly built by Full stack devs for full stack devs
nah, json is fine
Yeah I mean I understand some size concerns and the whole truthiness of JavaScript but I’m yet to see a better alternative and a more convenient one
the more the server does, the more it costs to scale the application. A server should do the bare minimum, in my opinion. The cost of going from a 550-millisecond initial load to a 50ms initial load is not worth it, in my opinion
Theoretically true, but the BEAM processes are so lightweight that you can run millions of them on a single Raspberry Pi. OTP's horizontal scaling capabilities make it super easy to build a network of servers. Metal is cheaper than Functions when active runtime is above 67%. You also don't have additional costs related to in-memory stores and queues. Finally, and most importantly, unless you are running Google, developer time costs you significantly more than servers, and Elixir is famous for the ease of development it allows.
@@MrManafon I had some experiences with elixir and might be my typescript brain giving me headaches tbh (some pleroma and akkoma stuff for testing my own fedi software)
The LiveView runtime, the BEAM, is a battle-tested beast. It was designed 40 years ago to handle concurrency and fault tolerance from the ground up. Way ahead of its time. Look up Erlang-OTP.
@@MrManafon You won't be running "millions" of LiveView processes on a RPi, but it will be a long time before you need to go past 1 server.
LiveView payloads are usually smaller than the equivalent JSON payloads would be. Also LiveView has no problems scaling. It will be a long time before you need more than 1 server to handle the load.
This is so good, reminds me a bit of Laravel's LiveWire (which I've never used). Cannot wait to try it out!!
I'm not a full stack guy, but I like to know what is going on. I hadn't heard a walk through of Elixir before but when I saw Theo react at time code 6:18 I immediately thought, I bet this is written in Erlang. Yup, that's what it is. That hot swapping stuff is pretty cool.