masfoobar 2 days ago

I enjoyed reading this. It mirrors our experience building our first htmx website, moving away from modern frontends, or just simple jQuery with ajax JSON data.

I remember working with a co-worker: we planned out the process (a step-by-step walkthrough of the application, like this post) and it made sense to me - but this journey was much harder for him.

Why is this? Simply because he is familiar with the MVC pattern of sending JSON data back and forth, getting the frontend to update and render it, etc. The idea of HTML flying over the wire, with htmx providing the behaviour (inside HTML tags), was just too much.

For me, I always preferred the old-school way of just letting the server side generate the HTML. All htmx does is add extra functionality.

I tried hard to explain that we are sending HTML back, and to break things down one at a time, but each new task left him scratching his head.

In the end, our website had somewhere around 20-50 lines of javascript! A much smaller footprint than the 400+ lines in our previous project (and that's being generous). Sure, our server-side code was larger, but it was all organised into View/PartialView files. To me, it made really good sense.

In the end, I don't think I won my co-worker over with htmx. As for another co-worker, who had the chance to build a new project with htmx, he decided on some client-side javascript tool instead. I don't think I ever got a legit answer as to why.

With all this above, I learned that some (perhaps most... perhaps all) developers struggle to adapt to htmx after many years of building websites a particular way, with popular tools and javascript libraries. Overall, htmx does not really change anything - you are still building a website. If anything, htmx just adds an additional layer to make the website really work.

Yoda's words now have new meaning: "No! No different. Only different in your mind. You must unlearn what you have learned."

For some it's just not happening. I guess Luke Skywalker really shows his willpower, adapting to htmx more easily than others. :-)

  • ksec 2 days ago

    I remember someone on HN put it really well: there is a whole generation of developers that thinks frontend = React. And more importantly, they far outnumber those of us who went through DHTML and Ajax.

    We are now the minority, and they are the norm.

    • prisenco 2 days ago

      Not just developers, but designers. Finding designers who understand hypermedia in that way is nearly impossible.

      A team that decides to shift towards this approach to development has to get buy-in from the designers as well. It's not just the devs who have to retrain.

      • masfoobar 2 days ago

        > Not just developers, but designers. Finding designers who understand hypermedia in that way is nearly impossible.

        This is true. 100%

        The difference for me, given my AppDev career history, is that I never worked with a team of dedicated frontend developers. We are primarily backend developers who are frontend devs as well (though we can admit it is secondary to our backend skills).

        Personally, the change for us was placing our html on the server side. So, to me, styling is not a problem and is easy to test. It should, with some training, be easy for a dedicated frontend developer to jump in as well... though we might have to shuffle things around with their tools to gel nicely with the backend team.

        If anything, I think this transition would keep the two departments closer together - communication is needed especially for htmx webapps.

        I think it can be difficult to win other backend developers over with htmx, as my original post suggests. Add a frontend layer on top as well... and it is unlikely htmx will be taken seriously when the majority want to stick with what they know.

        • prisenco a day ago

          It's best not to sell HTMX the way you would sell React, as a framework. Instead, I say "let the web be the web."

          Building things using standards and not fighting the paradigm with a new paradigm. What's great about HTMX is that it fits alongside the standards in a way that SPA frameworks generally don't.

          But focusing on the web itself and everything it's capable of without a framework means you can easily move off of HTMX if need be.

          • masfoobar 20 hours ago

            "let the web be the web."

            100%

      • darqis 2 days ago

        You're acting as if "hypermedia" was some kind of standard everyone should know. It is not.

    • masfoobar 2 days ago

      It is hard to know our future. What I am about to write could be wrong, but I will try...

      At a basic level, it is all html, css, javascript and a server-side language at the end of the day, whether we are talking about today or back in the early 00's.

      For a few years now, we have added nodejs, typescript, React, etc. on top of it. Personally, while I understand the purpose of such tools for complicated web development, I still believe good websites can be created without them. It keeps things simple, small in size, etc.

      Of course, a few years before that the push was angular or knockoutjs. Before that the push was jQuery, etc.

      As for the future: let's say in the next 15 years, while I still believe that html, css and javascript will remain, I do think react, like angular, will be replaced by something else.

      Honestly, I think it's just a matter of time before WASM, or an evolution of that technology, takes over. Personally, I have toyed with WASM builds in compiled languages and think it will win at web development on speed, performance, and lack of fluff. However, we are not there yet.

      For example, I had to build an internal web application for staff, with a number of drop-downs and text fields, etc. I experimented with implementing it as an immediate-mode UI (something like IMGUI) in Go. While the results were great, it reached a dead end - not because of WASM or the language, but because of the lack of UI features. I needed to include OpenStreetMap, which is not supported. I had to bite the bullet and accept writing it as a typical website.

      I went with htmx + leafletjs in the end. Again, it worked out well.

    • owebmaster 2 days ago

      In 10 years, the people who know React will be the minority compared to the ones who only know vibecoding.

      • xp84 a day ago

        True, but all the FE generated by that will be echoes of React, so my head will still hurt

  • jbreckmckye 2 days ago

    The change was mobile. Once you had multiple clients, with varying levels of thick state (e.g. offline first for Android), it started making sense to streamline around a data-driven API and rich client apps.

    That's honestly the main reason. It's so you can build all three channels the same(ish) way

bookofcooks 2 days ago

Hey, author here! Ask me anything!

I want to make the intent of this blog post extremely clear (which tragically got lost when I got deep into the writing).

I love HTMX, and I've built entire sites around it. But all over the internet, I've seen HTMX praised as this pristine perfect one-stop-solution that makes all problems simple & easy (or at least... easier than any framework could ever do).

This is a sentiment I have not found to be true in my work, and one that even the author of HTMX has spoken out against (although I can't find the link :(

It's not a bad solution (it's actually a very good solution), but in real production sites, you will find yourself scratching your head sometimes. For most applications, I believe it will make ALMOST everything simpler (and lighter) than traditional SPA frameworks.

But some "parts" of it are a little trickier; do read "When Should You Use Hypermedia?" [1].

In the next blog post (where we'll be implementing the "REAL" killer features), I hope to demonstrate that "yes, HTMX can do this, but it's not all sunshine & rainbows."

---

On a completely separate note, one may ask, then, "why use HTMX?" Personally, for me, it's not even about the features of HTMX. It's actually all about rendering HTML in the backend with something like Templ [2] (or any type-safe html templating language).

With Templ (or any type-safe templating language), I get to render UI from the server in a type-safe language (Golang) accessing properties that I KNOW exist in my data model. As in, the application literally won't compile & run if I reference a property in the UI that doesn't exist or is of the incorrect type.

You don't get that with a middle-man API communication layer maintained between frontend and backend.

All I needed on top of that was reactivity, and htmx was the answer. Hope you understand!

[1] https://htmx.org/essays/when-to-use-hypermedia/#if-your-ui-h...

[2] https://templ.guide/
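To illustrate the type-safety point: below is a minimal plain-Go sketch of the idea (this is not actual Templ syntax; User, UserCard, and the markup are hypothetical). Referencing a model field that doesn't exist is a compile error, not something you discover in the browser.

```go
package main

import (
	"fmt"
	"html"
)

// User is a hypothetical data model.
type User struct {
	Name  string
	Email string
}

// UserCard renders an HTML fragment directly from the typed model.
// A typo like u.Nmae, or a field later removed from User, fails at
// compile time rather than rendering a blank spot at runtime.
func UserCard(u User) string {
	return fmt.Sprintf(
		`<div class="card"><h2>%s</h2><p>%s</p></div>`,
		html.EscapeString(u.Name),
		html.EscapeString(u.Email),
	)
}

func main() {
	fmt.Println(UserCard(User{Name: "Ada", Email: "ada@example.com"}))
}
```

Templ generates typed render functions roughly along these lines, with context-aware escaping and component composition on top.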

  • yawaramin a day ago

    Looking at the solution you ended up with, I feel like it's actually fairly reasonable in terms of implementation complexity compared to the feature it is delivering. I have a hard time believing that a pure SPA approach would be simpler to implement. Certainly a SPA would deliver a much more bloated JS payload to the client, which really doesn't seem like a good tradeoff for basically filling out a form and uploading a couple of files.

    • bookofcooks 14 hours ago

      > I have a hard time believing that a pure SPA approach would be simpler to implement.

      IF you used a pure SPA approach with client-side validation for each step, and server-side validation only done at the last step, I believe it would be simpler.

      However, let's say you introduce anything slightly more complicated. Like say you do server-side validation with each step, now you have to somehow persist that "validated data". In that case, the implementation in the article is indeed simpler (or at least not as complicated as a traditional SPA).

  • exiguus a day ago

    That was really good reading. Lately, I've often heard the statement 'HTMX is frontend for backend developers.' What do you think of that quote?

    • bookofcooks a day ago

      Honestly, I think the frontend for backend developers has always been the simple Multi-Page Application. I know they're not the hottest new thing, but they've been around, they work, and they've had time to integrate deeply into languages & browsers (think PHP, for example).

      Maybe it's more accurate to say, "HTMX is frontend for backend developers who want a SPA."

throw310822 2 days ago

I read through this and I don't get it. Recreating an entire form on the backend and swapping it with the current one, and then missing the update of the label status?

Then solving this by recreating the entire stepper html at each step, with the added complexity that if it contains something you want to keep "it's a nightmare"?

Then having to create a temporary server-side session to store data that somehow the browser can't keep between two clicks?

Etc. It's writing web apps like it's 1999.

  • junon 2 days ago

    Also the title itself does a disservice to HTMX; wasn't it billed as sort of "modern frameworks are bloated and difficult, let's go back to Simple"?

    • fvdessen 2 days ago

      HTMX is indeed simple and lean, but unfortunately that doesn't mean solving common frontend problems with HTMX is simple. After a decade of frontend dev, and being tired of react/vue/etc, I tried HTMX, wanting something leaner, but IMHO it has big problems, and I am back to react/vue.

      The two biggest problems with HTMX: the first is that, being fully server-side controlled, you need to put the whole app state in the URL, and that quickly becomes a nightmare.

      The other is that the code of components is split in two: the part that is rendered the first time, and the endpoint that returns the updated result. You need a lot of discipline to prevent that from turning into a mess.

      The final nail in the coffin for me was that the thing I wanted to avoid by picking HTMX - making a rest api to separate the frontend and the backend - was actually a good thing to have. After a while I was missing the clean and unbreakable separation of the back and front. Making the rest api was very quickly done, and the frontend was quicker to write as a result. So HTMX ended up slower than react/vue. Nowadays react/vue provide server side rendering as well, so I'm not sure what Htmx has to bring.

      • withinboredom 2 days ago

        > you need to put the whole app state in the URL and that quickly becomes a nightmare.

        You should be doing this anyway ... it's so annoying when my wife sends me a link at work and it just goes to a generic page instead of the search results she wanted to share with me. She ends up mostly sending me screenshots these days because shared links don't work.

        • Cthulhu_ 2 days ago

          Depends on what the commenter means by app state. Anything bookmarkable - like search results - should be in the URL, but "state" should not (I consider things like partially filled in forms, shopping carts, etc to be state).

          • PaulHoule 2 days ago

            You have choices, especially all the choices that 1999-style web applications have.

            The shopping cart can be kept on the back end and referenced by an id stored in a cookie.

            You can keep partially filled out forms in hidden form variables and can send them back in either GET or POST.

            Not all requests require all the form data, for instance my RSS reader YOShInOn is HTMX based -- you can see two forms from it here:

            https://mastodon.social/@UP8/114887102728039235

            in the one at the upper left there is a main form where you can view one item and evaluate it, which involves POSTing a form with hidden input fields. Above that, I can change the judgements of the past five items by just flipping one of the <select>s, which needs to submit only the id and the selected judgement. I guess on clicking one of the buttons in the bottom section I could redraw the bottom section, insert a <select> row at the bottom of the list and delete the one at the top, but it just redraws the whole form. That's OK because I don't have 200k worth of open graph and other meta data in the <head>, endless <script> tags, or any CSS other than bootstrap and maybe 5k of my own, all of which caches properly.

          • foobarbecue 2 days ago

            Presumably you mean search query input, not search results, right?

            • LeFantome 2 days ago

              I think you are saying the same thing. They mean that their wife is trying to send them search results. You are pointing out that the link would contain the search query.

              • foobarbecue a day ago

                Not really same thing. If you share the search query, you'll be re-submitting the query at a later time, so the results are likely to be different when the receiver uses that URL. It would be possible (but pretty terrible) to embed the actual search results in the url.

                A more reasonable implementation of sharing search results would be to store results on the server and have a storedResults id key in the url.

          • naasking 2 days ago

            > Depends on what the commenter means by app state. Anything bookmarkable - like search results - should be in the URL, but "state" should not (I consider things like partially filled in forms, shopping carts, etc to be state).

            Yes, shopping cart state should be in the URL in the form of a server-side token under which the cart state is stored. Ditto for partially filled in forms, if that's something your app needs.

            All page state should be transitively reachable from the URL used to access that page, just like all state in a function in your favourite programming language should be transitively reachable from the parameters passed into that function, eg. no global variables. The arguments for each are basically the same.

        • fvdessen 2 days ago

          That is just the location of a page, and indeed that should be put in the URL. But a modern js app has much more state than that: partially filled fields, drafted documents, scroll positions of lists, multi-path navigation history, widget display toggles, etc. In react/vue you have 'stores' to hold and manipulate that info. In HTMX you need to choose between the URL, the session cookie and the DOM. And what you can usually just keep in the DOM with client-side vue/react can't be kept there with HTMX, since the backend needs to be aware of the state to correctly render the new widget.

          • withinboredom 2 days ago

            Are people really reimplementing browser history in JavaScript? Why are people implementing “navigation history”??!!

            All of this stuff needs to be stored on the server anyway… otherwise how will you get it back on the page when I switch computers or pull it up on my phone?

            • fvdessen a day ago

              Because app navigation is not linear like the url history. Think of a popup with tabs within a page. There's the navigation within the popup, and the navigation within the page. When you close the popup, you don't want 'back' to bring the popup back, you want it to go to the page before the popup. This is hard to replicate with just urls and server side rendered html.

              Also, you don't want that store server-side, because there can be multiple parallel tabs, and you don't get notified server-side when a tab is closed so you can properly clean up the associated resources.

              • withinboredom a day ago

                This is why we are cooked. Just because you can, doesn’t mean you should.

                ?tab[0]=/some/url&tab[1]=/some/resource&activeTab=0

                Bam, you have tabs in the url. I can duplicate the tab, share my view, or whatever. Assuming the other user/tab/window/profile has access to these resources, it’ll show exactly the same thing. I can even bookmark it!
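                A hypothetical Go sketch of decoding that scheme server-side (the function and type names are mine, not from the thread):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// tabState is a hypothetical decoding of ?tab[0]=...&tab[1]=...&activeTab=N.
type tabState struct {
	Tabs   []string
	Active int
}

// parseTabs reads tab[0], tab[1], ... and activeTab from a raw query string.
func parseTabs(rawQuery string) (tabState, error) {
	q, err := url.ParseQuery(rawQuery)
	if err != nil {
		return tabState{}, err
	}
	var s tabState
	// Collect tab[i] values until the first missing index.
	for i := 0; ; i++ {
		v := q.Get(fmt.Sprintf("tab[%d]", i))
		if v == "" {
			break
		}
		s.Tabs = append(s.Tabs, v)
	}
	if a := q.Get("activeTab"); a != "" {
		s.Active, err = strconv.Atoi(a)
	}
	return s, err
}

func main() {
	s, err := parseTabs("tab[0]=/some/url&tab[1]=/some/resource&activeTab=0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d tabs, active: %s\n", len(s.Tabs), s.Tabs[s.Active])
}
```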

                You can even add popups:

                ?popupModal=saveorleave

                This state probably won’t be applied on an entry, but what’s great is that pushing [back] in the browser is the same effect as cancel! If you click “leave” then you do a “replace state” instead of a “push state” navigation, so the user doesn’t go back to a modal…

                This was, at one point, decently standard logic. Then people who don’t know how browsers work started creating frameworks and reinventing things.

                I digress. I’m just so glad I left the front end 15 years ago, I’d lose my shit if I were dealing with this kind of stuff every day.

            • javcasas 2 days ago

              Scroll positions of lists, toggleable widget status, partial form fills.

              You say all of that needs to be stored in the server?

              That is how you make a big server crawl with just 100 users, regardless of the programming language of the backend.

            • joseda-hg 2 days ago

              Depending on your flavor of SPA framework, browser history might not work because there's no actual page change

              Some will manually push a History entry, but not all

        • threatofrain 2 days ago

          For easy apps, sure, like shopping carts and search results. What about all the other apps?

      • bookofcooks 2 days ago

        Yes, this was what I wanted to get across with the article (although I utterly failed to do so). I think I would stick with HTMX for other reasons (which I've made clear in my top-level comment on this thread), but I now see it as an occasional tool to use for something simple, not long-term.

        > The two biggest problems with HTMX is that being fully server side controlled you need to put the whole app state in the URL and that quickly becomes a nightmare.

        Or you can create a large session object (which stores all the state) on the server, and have a sessionId in the URL (although I'd prefer a cookie) to associate the user with that large session object.
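        A rough Go sketch of that pattern (all names are hypothetical; a real store would also need expiry and cleanup):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"sync"
)

// WizardSession is a hypothetical multi-step form state kept server-side.
type WizardSession struct {
	Step   int
	Fields map[string]string
}

// sessionStore maps opaque ids to state; safe for concurrent handlers.
var sessionStore sync.Map

// newSession stores fresh state and returns the id to embed in the URL
// (or a cookie), so the browser only ever carries the opaque token.
func newSession() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	id := hex.EncodeToString(b)
	sessionStore.Store(id, &WizardSession{Fields: map[string]string{}})
	return id
}

// getSession looks the state back up on the next request.
func getSession(id string) (*WizardSession, bool) {
	v, ok := sessionStore.Load(id)
	if !ok {
		return nil, false
	}
	return v.(*WizardSession), true
}

func main() {
	id := newSession()
	s, _ := getSession(id)
	s.Step = 2
	s.Fields["name"] = "Ada"
	again, _ := getSession(id)
	fmt.Println(again.Step, again.Fields["name"])
}
```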

      • chrisandchris 2 days ago

        > HTMX is indeed simple and lean, but unfortunately that doesn't mean solving common frontend problems with HTMX is simple either.

        Then it is not simple (as I understand "simple"); it's just the same thing we already have, reinvented.

        If it were simple and lean, it would solve the most common problems by itself (like how I don't need to care about the HTTP part in .Net - it's a one-liner and the framework solves it for me).

      • mpweiher 2 days ago

        > The other is that the code of components is split in two, the part that is rendered on the first time, and the endpoint that returns the updated result.

        Yeah, that also bothered me. To me it looks like the page (template) should fetch that partial from the same endpoint that delivers the partial over the wire to HTMX.

        • L3viathan 2 days ago

          You can do that if you want to. It'll just be slower.

          • mpweiher 2 days ago

            Will it?

            I haven't gotten around to it yet, but my plan is to use in-process REST with Objective-S, so that accessing the internal endpoint will be the cost of a function call.

            The HTTP wrapper for external access is generic.

      • nsonha a day ago

        > HTMX is indeed simple and lean, but unfortunately that doesn't mean solving common frontend problems with HTMX is simple

        I think it should be obvious that if a piece of software is easy and convenient for its author to write (simple and lean), then the complexity falls onto its users.

      • jgalt212 2 days ago

        Server side rendering requires JavaScript on the server. So then you very often, but not always, violate "the clean and unbreakable separation of the back and front."

        • colejohnson66 2 days ago

          Not necessarily JavaScript, but some kind of rendering or templating engine. As shown in the blog post, Go works.

          • jgalt212 2 days ago

            right, but the person I was responding to mentioned JavaScript frameworks.

            > Nowadays react/vue provide server side rendering as well so i'm not sure what Htmx has to bring.

  • imtringued 2 days ago

    What the author of this blog is doing is essentially server side react, but without the VDOM diffing. If there was DOM based diffing, then the only things that would change are a bunch of class attributes that affect which step becomes visible.

    To avoid the cost of updating the entire page, htmx only fetches a parent element and all of its children, but this runs into the problem that you must choose the common parent element for all the elements you want to update.

    So the author reaches the conclusion that htmx is not meant to be used for SPA style apps. It's meant to add a little bit of interactivity to otherwise static HTML.

    • bookofcooks 2 days ago

      > To avoid the cost of updating the entire page, htmx only fetches a parent element and all of its children, but this runs into the problem that you must choose the common parent element for all the elements you want to update.

      Not exactly, you can use Out-Of-Band updates, which means the server can arbitrarily choose to update specific elements outside the parent.
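      For reference, an out-of-band element is just extra top-level HTML in the response marked with hx-swap-oob; htmx swaps it by id wherever that id lives in the page, outside the hx-target parent. A hypothetical Go sketch (the ids and step count are made up, not from the article):

```go
package main

import "fmt"

// renderStep returns the content swapped into the hx-target, plus an
// out-of-band fragment that htmx applies to #step-label wherever it
// appears in the page, outside the targeted parent element.
func renderStep(step int) string {
	main := fmt.Sprintf(`<form id="step-form">...step %d fields...</form>`, step)
	oob := fmt.Sprintf(`<span id="step-label" hx-swap-oob="true">Step %d of 3</span>`, step)
	return main + "\n" + oob
}

func main() {
	fmt.Println(renderStep(2))
}
```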

      > So the author reaches the conclusion that htmx is not meant to be used for SPA style apps. It's meant to add a little bit of interactivity to otherwise static HTML.

      Can you clarify where I seem to have come to this conclusion? This is not what I intended to express.

      • nsld 20 hours ago

        I think what OP meant to say is that he came to that conclusion. I did not draw that conclusion from reading the blog post - rather that HTMX is a good tool, but it's not all sunshine and roses; some features take some work to get working.

mtlynch 2 days ago

  return pox.Templ(http.StatusOK, templates.AlertError("Name cannot be empty")), nil
Oof, an HTTP 200 OK response with a body that says the request actually was not OK.

I like htmx, but this is probably the weakest part of it.

htmx is supposed to let you write semantic HTML, but it's obviously not semantic HTML/HTTP to respond HTTP 200 to incorrect user input. But I think OP is doing this because if they had responded HTTP 400 - Bad Request, htmx would have thrown away the response body by default.[0]

[0] https://htmx.org/docs/#modifying_swapping_behavior_with_even...

  • yawaramin 2 days ago

    > this is probably the weakest part of it.

    This is not a 'part of' htmx. Htmx doesn't prescribe how you handle errors. You can easily respond with 400 or some other appropriate status code on error, and plug in to an htmx hook on the client side to handle it appropriately.

    Eg here's how I handle form validation errors with a 422 response status: https://dev.to/yawaramin/handling-form-errors-in-htmx-3ncg

  • lelanthran 2 days ago

    > Oof, an HTTP 200 OK response with a body that says the request actually was not OK.

    What's "oof" about it? The application layer should not inject error codes into the transport layer which is what HTTP is in this case.

    Do you also think that Apache/Nginx should be injecting codes into IP packets?

    If your application injects codes into the HTTP layer, how on earth does the client know whether the error originated at the application or at the reverse proxy/webserver?

    • mtlynch a day ago

      >The application layer should not inject error codes into the transport layer which is what HTTP is in this case.

      I don't follow. What's the boundary between transport layer and application layer in a Go web app?

      The Go web app is responsible for specifying both the HTTP status code and the response body.

      What's the purpose of different HTTP 4xx errors if they're not supposed to come from the application?

      >Do you also think that Apache/Nginx should be injecting codes into IP packets?

      No, because Apache/Nginx is not responsible for populating IP datagrams, whereas Go web apps are responsible for generating the entire HTTP response.

      >If your application injects codes into the HTTP layer, how on earth does the client know whether the error originated at the application or at the reverse proxy/webserver?

      It can't.

      What design do you have in mind where a client gets a HTTP 404 and can distinguish between the web server and the application server? Are you saying a "not found" at the application layer should return HTTP 200 and the client has to check the HTTP body for the real error code?

      • lelanthran a day ago

        > I don't follow. What's the boundary between transport layer and application layer in a Go web app?

        Just because it's the same app preparing both the transport and the application, you think that it's the same layer of comms?

        > Are you saying a "not found" at the application layer should return HTTP 200 and the client has to check the HTTP body for the real error code?

        The "not found" is not an application level error, even if, in your backend, you mixed them all up into the same function.

        • mtlynch a day ago

          >Just because it's the same app preparing both the transport and the application, you think that it's the same layer of comms?

          Yes, that's what I think.

          I don't see any other way of dividing it, which is why I asked what you think the boundary is otherwise.

          Again: Can you explain what you think the boundary is between the application layer and transport layer in a Go web app?

          >The "not found" is not an application level error, even if, in your backend, you mixed them all up into the same function.

          If the request is something like /user/detail/12345 and user 12345 doesn't exist, what should the response be?

          • lelanthran a day ago

            >> Just because it's the same app preparing both the transport and the application, you think that it's the same layer of comms?

            > Yes, that's what I think.

            If you don't know how communications composed of multiple layers work, I'm afraid I can't really help with that understanding in a comment section on a forum.

            I mean, for example, you can use git over ssh or git over https, but no one thinks that the git communications + https (or git comms + ssh) is a single layer.

            > Again: Can you explain what you think the boundary is between the application layer and transport layer in a Go web app?

            A single Go web app also does SMTP and TCP[1]. Do you also think that SMTP and TCP are on the same comms layer(s) as HTTP and REST?

            Many Go apps also support SSL. Does that mean it is okay for the HTTP webserver to put HTTP-specific content into the SSL layer?

            ---------------------

            [1] Maybe you have a Go webapp that never needs to send confirmation emails, but when I last did that in Go, the Go app needed to reach into the TCP stack (specifically to set socket options) in order to make SMTP work.

            • aatd86 18 hours ago

              I am interested in any answer to this

              > If the request is something like /user/detail/12345 and user 12345 doesn't exist, what should the response be?

              This is quite an interesting question. If we consider that webservers host multi-page applications, the answer would be that this is indeed an application-level concern and we should return a 404, since the resource is not found.

              SPAs without SSR may have muddled this, since the server is then serving a client-side application. We could expect a 200.

              It is not as straightforward as it seems perhaps...

            • mtlynch 16 hours ago

              >If you don't know how communications composed of multiple layers work, I'm afraid I can't really help with that understanding in a comment section on a forum.

              I'm not asking you to explain protocol stacks, and I feel like that's obvious.

              I've been open to your viewpoint, and I've asked you to clarify your position. Instead you just keep mocking me and feigning surprise that I hold a pretty mainstream view.

              At this point, I'm left to assume you're either trolling or you have a viewpoint that can't bear scrutiny, so I'll stop engaging with you.

    • imiric a day ago

      > The application layer should not inject error codes into the transport layer which is what HTTP is in this case.

      Huh? HTTP is an application layer protocol. It's perfectly acceptable for the application to return a non-200 status code when the request is invalid and can't be processed. There's a widely accepted status code for that exact scenario: 400 Bad Request. It informs the client that there was something wrong with their request, and in well-designed APIs, reading the response body would tell them the reason why. It would be wasteful for the client to always read the response and parse structured data to decide whether the request was successful (at the application level) or not. Status codes allow us to do that.

      That said, I've seen arguments for and against this practice, as sibling comments mention, and ultimately consistency and documentation are more important than semantics.

      The reason this line is blurry nowadays is because in the beginning web servers didn't contain complex logic. The web server was the application. Then came CGI scripts and application servers, and suddenly the application itself was making protocol-level decisions. The way this is typically structured in large applications is to have protocol-level abstractions that translate app-level errors into HTTP errors. But in small applications it's acceptable, though unsightly, to have HTTP logic mixed with business logic.

      > Do you also think that Apache/Nginx should be injecting codes into IP packets?

      Web servers do speak TCP/IP, so I'm not sure what your point is. Usually this is not something regular web apps need to be concerned with, but it's possible and sometimes desirable to introduce logic at the TCP or IP layer. There are proxy tools that work at both layer 4 and layer 7.

      > If your application injects codes into the HTTP layer, how on earth does the client know whether the error originated at the application or at the reverse proxy/webserver?

      By the status code, error message, and headers. An application would typically never return 502 Bad Gateway, a 301/302 redirect, or set headers like Cache-Control. By that same token, a reverse proxy/webserver would typically never override a 404 with a 200, or inject JSON error messages in the payload.

      The application ultimately decides the Content-Type of the response, which Content-Types it supports, and which headers it expects, so why shouldn't it also decide which status codes to return and which response headers to set? A gateway between it and the user can change or enhance this protocol, and specific gateways could be extracted to handle common things like authn/authz and load balancing, but the frontend gateway shouldn't override the message the application is sending (in typical circumstances). Both things can coexist with different responsibilities while speaking the same protocol. HTTP is flexible enough to support that.

      I'm curious, though: if you treat HTTP as the transport layer, what protocol does your application speak to the gateway? Is there some translation gateway that translates application-level semantics into HTTP ones?

      • lelanthran a day ago

        >> The application layer should not inject error codes into the transport layer which is what HTTP is in this case.

        > Huh? HTTP is an application layer protocol.

        I want to emphasise that "in this case" bit.

        HTTP is an application layer protocol when the application in question is a webserver and nothing else.

        In the case of REST, HTTP is simply a transport protocol. It is not necessary to use HTTP as the transport for RESTful applications. It's common, convention even, but not required.

        > if you treat HTTP as the transport layer, what protocol does your application speak to the gateway?

        WSGI, maybe? Sure, you can emit status codes there too, but it will be a different protocol you are talking over, not HTTP.

        I've seen gRPC gateways for HTTP REST endpoints too.

        > Is there some translation gateway that translates application-level semantics into HTTP ones?

        I don't think we should be translating application status codes into HTTP status codes. I mean, sure, I've done it myself plenty of times, but it is a mixing of layers and a mixing of concerns.

        The fact is, HTTP semantics are defined for (and in the context of) a webserver not an application server. That our application server is chatty with HTTP does not place it in the running context of a webserver.

        The semantics of HTTP status codes make absolutely no sense when emitted by an application.

        You might argue that one of them (or maybe two, if we're being generous), such as "400 Bad Request", should be emitted by the application if (for example) a parameter is missing, but even in that case it makes more sense for the application to send an error-code/error-message pair so that more information can be given (such as which parameter is missing/invalid, etc).

        If you're sending "400" status code for a missing parameter, how will the client know whether the HTTP request was malformed or whether the application input was mangled?

        • dogma1138 a day ago

          I suggest you read the actual RFC. HTTP status codes are intended to represent the state of your application. HTTP is part of the application layer; it is not a transport layer protocol.

        • imiric 19 hours ago

          I get where you're coming from, but I think you're placing too much emphasis on theoretical definitions rather than real world usage.

          > HTTP is an application layer protocol when the application in question is a webserver and nothing else.

          I haven't heard that definition before, and don't really agree with it.

          HTTP is the protocol web servers use to communicate with web clients. Whether the server is serving static files or dynamic content based on complex logic doesn't change this.

          > In the case of REST, HTTP is simply a transport protocol. It is not necessary to use HTTP as the transport for RESTful applications. It's common, convention even, but not required.

          That's true, but I don't see any practical benefit of this distinction. REST concepts map cleanly to HTTP semantics, and practically all REST deployments use HTTP.

          > WSGI, maybe?

          I guess so, but WSGI is an abstraction useful for interpreted languages and Python specifically. It was a solution to standardize the deployment of a growing number of web frameworks, and to address the lack of a production-ready HTTP server in Python itself. Other languages and ecosystems don't need this abstraction. It would be like trying to make Java servlets universal. Some approaches are a good fit for some ecosystems, but not for others.

          As I mentioned in my previous post, the way this is typically handled in, say, a Go web application, is by having an HTTP layer that acts as an intermediary between the protocol and the application. This way your business logic can remain free from HTTP-specific tasks like serialization, parsing, validation, etc. But if the application is only ever meant to be exposed via HTTP, then there's no harm in avoiding the abstraction, and having it speak HTTP directly. This might not be a good idea for testing and maintainability, but it's fine for small applications.

          > I've seen gRPC gateways for HTTP REST endpoints too.

          That's different. gRPC builds on top of HTTP, and uses a fundamentally different payload and request mechanism. It requires supported clients to even use it, which is why gateways are useful. But REST over HTTP is still plain HTTP. Clients don't need to be aware that they're talking to a REST endpoint, and REST serves as usage documentation more than anything else.

          > The semantics of HTTP status codes make absolutely no sense when emitted by an application.

          That depends on the application. If an HTTP endpoint wraps an application call to create a user, and the caller doesn't provide a user name, the application can return an error, which the HTTP endpoint can translate to a 400 status code, including the error message in the payload. OR the HTTP endpoint can do some validation upfront, and immediately return a 400.

          I agree with you that it wouldn't make sense for the application code to return HTTP status codes, but not because it's wrong semantically. I think it's wrong from a design standpoint (separation of concerns). HTTP semantics can't describe every application concept, but the ones that are there map pretty cleanly, especially when REST is used.

          > If you're sending "400" status code for a missing parameter, how will the client know whether the HTTP request was malformed or whether the application input was mangled?

          Again, by reading the response body. Just because HTTP status codes don't describe all application errors, doesn't mean that it's a good idea to abandon them entirely, and always return 200. If the client receives a 400 response, then they can immediately know that something went wrong with the request, and they should inspect the response body for details. Nothing stops the application from returning custom error codes internally that uniquely identify the actual reason for the failure, if the clients find this useful.
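
          A sketch of that kind of response in Go (the envelope shape here is invented for illustration, not a standard):

          ```go
          package main

          import (
          	"encoding/json"
          	"net/http"
          )

          // apiError is a hypothetical envelope: the 400 status signals
          // "client error" at the protocol level, while the body carries
          // the precise application-level reason.
          type apiError struct {
          	Code    string `json:"code"`    // machine-readable app error code
          	Message string `json:"message"` // human-readable detail
          }

          // body renders the envelope, split out so it is easy to test.
          func (e apiError) body() string {
          	b, _ := json.Marshal(e)
          	return string(b)
          }

          // writeError sends the envelope with the given status code.
          func writeError(w http.ResponseWriter, status int, e apiError) {
          	w.Header().Set("Content-Type", "application/json")
          	w.WriteHeader(status)
          	w.Write([]byte(e.body()))
          }
          ```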

          If the request was malformed, then a 400 response would make sense. If the application input was mangled, then the status code will depend on what happened. Was the mangled data part of the request? Then it's still a 400. Was the data mangled during endpoint or application processing? Then a 5xx response would be more suitable.

          There are no hard rules for this, and many, many APIs are poorly implemented. But this doesn't mean that applications shouldn't take advantage of the full breadth of HTTP concepts to implement user and computer-friendly interfaces.

  • SwiftyBug 2 days ago

    I think this debate is as old as time. I can see value in arguments both for and against this pattern. It's just a pattern. GraphQL does the same thing. As long as the consumers of that endpoint are aware of that behaviour I see no harm. In most cases, these endpoints serve only the frontend of the app. I would avoid doing something like that for a public API though.

  • masfoobar 2 days ago

    I have not used the templ library, so I cannot comment on this. However, as this is more an htmx question, I will focus on that --

    htmx supports returning other responses, and you can handle their behaviour if you choose to.

    At a basic level, validation/checks should be done on the server side. If there is a problem, you can return HTML. I don't see what the big deal is.

    On another note, you can call a JavaScript function before and after some occurrence, like a "swap".

    You can check the status and go a different route if you so wish.

    These are just a couple of options, and they don't add much complication to the overall htmx design. You still end up with much less JavaScript code this way.

  • naasking 2 days ago

    > htmx is supposed to let you write semantic HTML, but it's obviously not semantic HTML/HTTP to respond HTTP 200 to incorrect user input.

    It's perfectly semantic HTML; it just doesn't have to be semantic HTTP (as in, you don't have to push semantics to the HTTP level, keep it at the HTML level). Thinking that HTTP status codes should be semantically meaningful means you're still thinking that htmx endpoints are, or must be, API endpoints. That's a mistake.

    • tacitusarc 2 days ago

      You are, however, communicating over HTTP. If the protocol conventions cannot be used due to limitations of the encoding, it is reasonable to question the encoding design.

      Another comment asserted HTMX can handle this, it just needs to be configured to do so. If that is the case, then I don’t see an actual issue.

      • mtlynch 2 days ago

        >Another comment asserted HTMX can handle this, it just needs to be configured to do so. If that is the case, then I don’t see an actual issue.

        You can, but it feels like you're sort of fighting the htmx library at that point. I do it, as it's the least bad option, but I generally find footguns either way: violating HTTP conventions (e.g., returning HTTP 200 for a bad request) or going outside the mainstream of a library (telling htmx that HTTP 400 can have a meaningful response body).

  • calvinmorrison 2 days ago

    It's not an HTMX thing; I think others do that as well, or it's a pattern I've seen a few times before. 200 -> the server is responding OK, with a body that contains an error.

brokegrammer 2 days ago

I built my latest SaaS (https://clarohq.com) using HTMX, backed by Django. I really enjoy the process because HTMX allows reactivity using swaps and plain Javascript events instead of server side state management, useeffect, and API endpoints.

However, it's difficult to get things right. I spent way too much time on some basic features that I could have shipped quicker if I used React.

The issue with React though, is that you end up with a ton of dependencies, which makes your app harder to maintain in the long-term. For example, I have to use a third-party library called react-hook-form to build forms, when I can do the same thing using plain HTML and a few AlpineJS directives if I need dynamic fields.

I'm not sure if I'll ever build an app using HTMX again but we need more people to write about it so that we can nail down patterns for quickly building server rendered reactive UIs.

  • threatofrain 2 days ago

    You only use the popular React form libraries when you want utmost control over your forms. Like as you type into a registration form you check off boxes for password requirements, and you only show errors after they’ve touched it once.

    Otherwise vanilla forms are great in React. If you did this by hand in Vue or vanilla it would also be hell.

    Also, in terms of maintenance burden, the top libraries in this space are massively popular. Most here would likely be making a good maintenance-burden decision by offloading to well-reputed teams and processes rather than in-housing your own form and validation library.

    • brokegrammer 2 days ago

      Sure, I could build forms with React only, but I found controlled forms to perform worse when there are many fields. react-hook-form uses uncontrolled forms with refs, which has near-native performance, so if I had to build that by hand it would be more tedious. AI makes this process easier, but it's still extra code that needs to be maintained and tested, when you can do the same for free if you stick with standard web tools.

    • jollyllama 2 days ago

      > If you did this by hand in Vue or vanilla it would also be hell.

      It's really not. If you walk away from your project for 7 years, your vanilla JS will just load into the web browser and still behave the same. If you walk away from your React (or other NPM-based project) for the same amount of time, you won't be able to build all your dependencies from source without spending time updating everything. Going with something like HTMX or plain JS vastly reduces your maintenance overhead.

      • threatofrain a day ago

        When people talk about maintenance burden they aren't talking about your scenario. A codebase where you walk away for 7 years and then come back? That's something people can do now for any project in any language. When people talk about maintenance burden they're talking about what tomorrow to the next few years is going to feel like for people who actively maintain projects.

        So when you're actively maintaining something and you bring in a dependency, you're in some sense outsourcing some of that work, whether it's a colleague or an outside party maintaining that library. The specifics of who begins to matter. Is it the React team maintaining that part of the codebase? Is it a lonely author in Kyiv? Or is it you?

        So what is it like to be the colleague of someone who wrote their own Tanstack Forms and successfully or unsuccessfully integrated with Zod and the like? Or did they choose to write their own runtime type validator too? That's maintenance burden.

        • jollyllama a day ago

          Active is a relative term. The modern frontend monoculture is built for a high churn codebase. Sure, seven years is extreme, but even for two years, there will be more issues if the cold project that I'm trying to load was made with the modern, npm-based frontend monoculture versus if it was all custom code.

    • a_subsystem 2 days ago

      >>> check off boxes for password requirements, and you only show errors after they’ve touched it once

      I don’t get it. This is super easy w htmx.

  • zarzavat 2 days ago

    > For example, I have to use a third-party library called react-hook-form to build forms

    Why? I've written a lot of React and I've never used this library. In fact, I rarely use any React-focused dependencies except a router, as you say every dependency has a cost especially on the client.

    React works just fine without dependencies.

    • yladiz 2 days ago

      Although you don’t necessarily need the hook, writing a large form with reasonable performance in React can be tricky, so I can understand why you would go with a hook for that case.

  • bookofcooks 2 days ago

    Exactly my idea.

    This Github demo was pulled straight out of some work I was doing for the Admin of Prayershub (https://prayershub.com).

    Working on this specific feature (the soundtrack uploader) though, I regularly asked myself "what if I just used Svelte or SolidJS?"

    Note, Prayershub uses a regular mix of HTMX and SolidJS, so I can pop-in SolidJS whenever I find convenient.

    • brokegrammer 2 days ago

      I've been thinking about loading React or something similar for specific components but still not sure which method I'm going to use for that. How are you embedding SolidJS on specific pages? Or do you actually have a full build pipeline?

      • bookofcooks 2 days ago

        It actually depends on the page. On some pages, I don't use SolidJS's JSX at all, just its plain-JS state primitives.

        For other pages, I'll use full-blown SolidJS (with JSX and everything) for something like a popup. Example: https://pasteboard.co/hY35xM7VbATG.png

        Now, how I specifically embed SolidJS: it's pretty simple. I have entrypoint files for specific pages: assets/admin-edit-book.tsx, assets/admin-edit-song.tsx, assets/single-worship.tsx, assets/worship-edit.tsx

        Then I have a 30-line build script that invokes esbuild along with esbuild-plugin-solid (to compile JSX to plain-old html, no fancy virtual dom) to compile the scripts into javascript.

        I can share the build script if you'd like. It helps that SolidJS is so self-contained that it makes such a setup trivial.

        • brokegrammer 2 days ago

          Sure, the build script would be insightful. Removing the virtual dom sounds cool. I've been sleeping on SolidJS because I've always stuck with React after being disappointed by other frameworks. If it allows me to keep my server rendered pages, SolidJS might be what I've been looking for.

      • pbowyer 2 days ago

        I do this with Vue in Symfony PHP apps. Depending on the scope I either have a full build pipeline for the JS (preferred) or will include the files direct from a CDN and have in-HTML templates that are parsed on load.

        For passing data into it I've used Inertia.js [0] and also my own data-in-page setup that's parsed and loaded into the Vue app. The app then writes the changes back out, usually into a hidden form input. The form on the page is then submitted as usual to the server, along with any other data that I need.

        It's a great way for adding more complicated behaviour to an existing app.

        0. https://inertiajs.com/

  • bloomca 2 days ago

    What kind of forms do you need? I honestly feel people think they have to use all sorts of dependencies. There are some you definitely need, like the router or any state management (you can build both, but there is a decent amount of boilerplate, and efficient hooks can be tricky).

    But for forms I honestly would recommend starting with plain React and only abstracting the things you need in your project.

  • worble 2 days ago

    > I have to use a third-party library called react-hook-form to build forms

    Regular form elements work just fine in React, all you need to do is interrupt the onInput and onSubmit handler and deal with the form data yourself. I've tried a handful of these form libraries and frankly they make everything way more complicated and painful than it needs to be.

    • bestest 2 days ago

      That is fine as long as your forms are simple text inputs and buttons. Now plug in drag-and-drop, multiple file uploads, selects, checkboxes, radios, and more of these various inputs, and you're in a world of pain.

      I've recently, once again, given native inputs a chance in a new project. It lasted as long as I described in the first sentence. And I've been in the frontend world for 20 years. Trust me, you don't want complicated native forms.

      And react-hook-form is just what you need (albeit it also is boilerplate-ish, so I always end up wrapping it up in a simpler and smarter hook and component).

      edit: Same, in a sense, for HTMX. It's ok for simple things. But eventually you may end up trying to build a house with a fork. The fork in itself is not a bad tool, sure. But you also don't need a concrete mixer with your morning toast.

  • andrewstuart 2 days ago

    Don’t use a form library, use the machine - program the browser DOM forms API.

devnull3 2 days ago

This should be trivial with the HTMX alternative: datastar [1]

In datastar, "Out Of Band" updates are a first-class notion.

[1] https://data-star.dev

  • spiffytech 2 days ago

    Unrelated: datastar doesn't use a two-way connection for interaction <-> updates. It uses two unconnected one-way channels: a long-lived SSE connection for updates, and new HTTP requests for interactions.

    I didn't see guidance in the docs for routing one tab's interaction events to the backend process managing that tab's SSE. What's the recommended practice? A global, cross-server event bus? Sticky sessions with no multiprocessing, and an in-process event bus?

    If a user opened the same page in two tabs, how should a datastar backend know which tab's SSE to tie an interaction event to?

    • chuckadams 2 days ago

      It appears to be an SSE channel for each event stream returned, not one channel for all subsequent updates. So a PHP backend, for instance, could batch all the updates from one request: open a new SSE channel, send the updates over it, then close it. With an in-process server like Swoole, or most things not PHP, you could presumably reuse the channel across requests in whatever framework-specific way makes that happen. You would probably need sticky sessions in any scaled-out deployment.

      This is just what I can glean from the docs, I've never actually used datastar myself.

    • devnull3 2 days ago

      With DS (and HTMX) the backend is the source of truth. In the context of the blog post, the state will be made up of: step number, file content, file path, etc. This can be stored against the session ID in a database.

      So when opened in a different tab, the backend would do authentication and render the page depending on the stored state.

      In general, the backend must always compare the incoming state/request with the stored state, e.g. the current step is step 2, but the client forces it to step 4 by manipulating the URL.

      DS v1.0 now supports non-SSE (i.e. simple request/response) interaction as well [1]. This is done by setting the appropriate content-type header.

      [1] https://data-star.dev/reference/actions#response-handling

  • meander_water 2 days ago

    HTMX has out of band updates too [0], what's the differentiator?

    [0] https://htmx.org/attributes/hx-swap-oob/

    • devnull3 2 days ago

      In DS, with SSE you can paint different parts of the page with ease. In this case, it can update the <form> and <label> separately: instead of one update, the backend fires two. There is no separate marker or indicator for OOB.

      I think it is best seen in examples on DS website.

      • meander_water 2 days ago

        Neat, I'll give DS a go.

        HTMX also has the option of using SSE with an extension [0]. I've used this to update the notifications tray, for example. You could probably do it for OP's example too.

        [0] https://htmx.org/extensions/sse/

  • sgt 2 days ago

    Doesn't datastar require an async backend? I prefer Django without async.

    • devnull3 2 days ago

      Not really. DS v1.0 has HTMX like request/response option as well.

      You might need async if there are a lot of concurrent users, each holding a long-duration SSE connection. However, this is not DS specific.

alex-moon 2 days ago

These kinds of write-ups are so key to driving adoption of a new technology. I'm still not super interested in HTMX but this write-up has done a lot of the work already toward nudging me that way. Well done!

  • rapnie 2 days ago

    Yes, as I remember it, it was a 'back to the simplicity of the early web' idea. Rediscover the power of hypermedia. I don't know HTMX well, but am following Datastar [0], which was inspired by it; its selling points are Simplicity and Performance, and it takes things some steps further than HTMX. The approach does shift logic / complexity towards the backend though.

    [0] https://data-star.dev

    • jgalt212 2 days ago

      The complexity has to live somewhere.

      • mbvisti 2 days ago

        yes, in the backend

PaulHoule 2 days ago

I think you need to make peace with OOB if you want to enjoy working with HTMX. You need a framework that lets you render a partial inside the page when you send the whole HTML document, but also render a group of partials to update the several things that change with a request.

  • benhurmarcel 12 hours ago

    I’ve DIY’d that in Python/jinja with some helper functions. Do you know of any templating language that includes this?

karel-3d 2 days ago

Every time I attempt to use HTMX and backend-rendered templates because it's "simpler", in the end I always end up doing JSON APIs and something like Svelte. Because all the "simplicity" explodes in complexity 5 seconds later, and it's very user-hostile with the constant reloads.

This blog post affirms it.

  • evantbyrne 2 days ago

    I think it is right to call out HTMX as being a wrong approach for custom form elements. Try a progressive enhancement approach with something like Turbo, where you can still write JS as needed. This wizard could have been more elegantly implemented by removing the server requests for each step and just showing/hiding fieldsets on progression.

  • yawaramin 2 days ago

    User hostile? Constant reloads are exactly what htmx gets rid of–that's the whole point.

    If you mean developer hostile–sure, reloading the backend server can be annoying if it compiles slowly, like say Rust. I would say, pick the right tool for the job. If you're going to use htmx, pick a backend language that compiles and reloads very quickly.

tremon 14 hours ago

I don't really understand the example problem. Why is the first problem statement "each step is a <form> that calls an endpoint like /form/step1 and swaps out itself with the returned form", and not the (to me) more obvious "each step is a page in a tabset and on successful submission, the active tab is automatically advanced"?

gr4vityWall 2 days ago

I liked the article a lot, thanks to the author for writing it and sharing it.

At that point, personally I think it'd be easier to use Preact with a no-build workflow for those bits of the app that have a lot of contained logic themselves, and don't necessarily require a round-trip to the server.

I wouldn't use HTMX for that specific use case.

  • farmeroy a day ago

    I've been building a largish webapp with htmx and I've leaned into web components for these more complicated interactions. I've found htmx great for everything that _should_ involve a call to the backend, anything that needs to fetch data or perform some CRUD operations; then I can return the necessary markup with OOB swaps etc. and mostly forget about client-side state.

    But yeah it's great to see people sharing their approaches!

    • gr4vityWall 18 hours ago

      Sounds like an interesting use! Did you write about it anywhere? Or have a repo we could look at?

npilk 2 days ago

HTMX is amazing for simpler web apps. If you have a ton of complexity, need to manage a lot of state, etc., I can see how it would be frustrating trying to get everything to fit into HTMX's patterns. In fact, it might actually increase complexity. But, if you have something smaller and want to make it more interactive, React is way overkill and HTMX is a breath of fresh air.

I think a lot of the arguments over HTMX come down to this difference. The people that love it see how much better it is for their use case than something like React, while the critics are finding it can't replace bigger frameworks for more demanding projects.

(Here's an example interface made with HTMX. IMO React would have been overkill for this compared to how simple it was with HTMX. https://www.bulletyn.co )

Polarity 2 days ago

> I originally planned to make a simple non-functional uploader, then progressively bring it from "it works™" to "high-quality production-grade Uploader with S3 Presigned, Multipart, Accelerated Uploads, along with Auto-Saving and Non-Linear Navigation

Why is every developer trying to make things complicated?

devnull3 2 days ago

> Challenge 2: Passing data down each step

Why not use cookies?

  • spiffytech 2 days ago

    Cookies are more appropriate for whole-site or whole-session data. There's no natural segregation of "this cookie belongs to this instance of this form". You could figure that out, but the additional moving parts cut down on the appeal.

    • devnull3 2 days ago

      A cookie is what we make of it. For the browser, it's opaque data anyway.

      So, when /upload is requested, the backend in response sets a cookie with a random uploadId (+ TTL). At the backend, we tie sessionId and uploadId.

      With every step that is called, we verify the sessionId and uploadId along with the additional state that is stored.

      This means even if the form is opened on a different tab, it will work well.
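
      A minimal sketch of that scheme in Go (the cookie name and TTL are arbitrary, and the server-side (sessionId, uploadId) bookkeeping is left out):

      ```go
      package main

      import (
      	"crypto/rand"
      	"encoding/hex"
      	"net/http"
      	"time"
      )

      // newUploadID generates a random identifier for one wizard instance.
      func newUploadID() string {
      	b := make([]byte, 16)
      	rand.Read(b)
      	return hex.EncodeToString(b)
      }

      // startUpload sets the uploadId cookie with a TTL; the backend would
      // also record the (sessionId, uploadId) pair and verify both on every
      // subsequent step.
      func startUpload(w http.ResponseWriter, r *http.Request) {
      	http.SetCookie(w, &http.Cookie{
      		Name:     "uploadId",
      		Value:    newUploadID(),
      		MaxAge:   int((30 * time.Minute).Seconds()),
      		HttpOnly: true,
      		Path:     "/upload",
      	})
      	// ...render step 1...
      }
      ```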

      • thefreeman 2 days ago

        That's... basically what the guy did? He just put the sessionId in the form data instead of a cookie.

        • devnull3 2 days ago

          > He just put the sessionId in the form data instead of a cookie.

          This does not have the benefit of being usable across different tabs, or even after closing and re-opening the page. Besides (a minor point), shoving all the state in the cookie keeps the code simple, i.e. you don't have to use URL params.

kissgyorgy 2 days ago

This implementation is unnecessarily complicated. For the step update, you can use an Out Of Band update: https://htmx.org/attributes/hx-swap-oob/ which lets you send multiple HTML fragments that can be anywhere on the page, and htmx swaps them out. Good for notifications, step updates, breadcrumb updates, menu highlights, etc...
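
For illustration, a handler can simply concatenate the main fragment with the out-of-band ones (the markup below is a made-up skeleton, not the article's):

```go
package main

import (
	"fmt"
	"net/http"
)

// stepResponse returns the normal swap target plus an out-of-band
// fragment: htmx swaps the first element as usual and, because of
// hx-swap-oob="true", also replaces #stepper wherever it sits on the page.
func stepResponse(step int) string {
	form := fmt.Sprintf(`<form id="wizard">step %d fields</form>`, step)
	stepper := fmt.Sprintf(`<ol id="stepper" hx-swap-oob="true">step %d highlighted</ol>`, step)
	return form + "\n" + stepper
}

func stepHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, stepResponse(2))
}
```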

I usually solve the second problem by simply saving the state of the individual input fields; you only need a user session. Depending on your use case, you might need to be transactional, but you can still do this by saving everything as "partial" and closing the "transaction" (whatever it might mean in the given context) at the last step. Much, much simpler than sending form data over and over.

  • bookofcooks 2 days ago

    > For the step update, you can use Out Of Band update

    I did mention using OOB, but I preferred swapping the entire Stepper because the logic on the backend was just a little bit cleaner, and the Stepper didn't include anything else anyways.

    > I usually solve the second problem by simply saving the state of the individual input fields, you only need a user session.

    I believe this is exactly what I did in the article, no?

bilinguliar 2 days ago

It is a nice post. A point of improvement would be to name fields idiomatically. The author should run a few Go linters.

rtpg 2 days ago

that form state persistence looks soooo gnarly. Really have a hard time arguing that's better than a client side header

bargainbin 2 days ago

I’ve attempted to use HTMX a few times (as a React-hater, its hype lures me in) and every time I’ve come away feeling like I’ve wasted my time implementing a subpar solution.

From reading this, I’ve decided I will never attempt to use it again. All I could think was, just use Go’s HTML templating. What is HTMX adding, really?

  • hirvi74 a day ago

    I have had similar experiences too. To me, HTMX felt like a wrapper around vanilla JS's .fetch() or jQuery's AJAX calls except one just litters the HTML with specific custom HTML attributes.

    .fetch() has no issues returning server-side rendered HTML and has a lot more options and freedom than what HTMX provides.

    I do not think HTMX is a bad library by any means. I just can't see what it buys over vanilla JS.

  • bananapub 2 days ago

    (out of interest: did you ever write web apps before ~2010 or so? the answer to your question seems so obvious to me I'm not even sure where the confusion could come from, unless you've only ever written web apps that are "JS frontend, JS backend, magic communication between them, it's often not clear where some bit of code is running").

    HTMX here is making it so the page works without doing a full HTTP form submission + page load for each stage of the "wizard". instead, you write some HTMX stuff in the page that submits the form asynchronously, then the server sends back the HTML to replace part of the page to move to you to the next step in the "wizard", and then HTMX replaces the relevant DOM nodes to make that so.
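    Concretely (the URL, ids, and handler here are hypothetical, not the article's code), the same route can serve the full page on a normal navigation and just the next step's fragment when htmx submits asynchronously:

    ```go
    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    )

    // htmx sets the HX-Request header on its requests, so one handler
    // can decide between "whole page" and "just the next widget state".
    func wizardStep(w http.ResponseWriter, r *http.Request) {
    	fragment := `<form hx-post="/wizard/step/3" hx-target="#wizard" hx-swap="innerHTML">step 2 fields</form>`
    	if r.Header.Get("HX-Request") == "true" {
    		fmt.Fprint(w, fragment) // htmx swaps this into #wizard
    		return
    	}
    	// Normal navigation: wrap the same fragment in the full layout.
    	fmt.Fprintf(w, `<html><body><div id="wizard">%s</div></body></html>`, fragment)
    }

    func main() {
    	req := httptest.NewRequest("POST", "/wizard/step/2", nil)
    	req.Header.Set("HX-Request", "true")
    	rec := httptest.NewRecorder()
    	wizardStep(rec, req)
    	fmt.Println(rec.Body.String())
    }
    ```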

    Go's templating is completely unrelated to any of this happening on the front end - it's just generating some HTML, both the "whole page loaded by the browser normally" and the "here's the new widget state code", and so obviously:

    > just use Go’s HTML templating.

    is incorrect.

    • jmull 2 days ago

      HTMX is the same approach as old-school server-side templating, except you can replace parts of the page with HTML returned from the server, rather than always the whole page.

      This allows for some nicely responsive interactions, but introduces complexity.

      I'm not the previous poster, but it's a fair question whether the maybe faster responses justify the complexity. In many cases it probably would not.

      (Actually, I suspect it's rare; if you know how to make partial page responses fast for HTMX, you know how to make full page responses fast and don't necessarily need HTMX, up until your page just gets too large overall.)

      The general problem with HTMX is that, by default, the page state, as a function of the initial page plus the accumulated user's interactions with the page, live only in the user's browser tab. This can seem fine for a while, but opens up some fairly fat edge cases (this article covers some of them). There are ways to handle this, but it's additional complexity and work. Maybe someone has or will create an HTMX-friendly server-side templating framework to take the grunt work out of it, but you still have to wonder if one of the numerous existing full page templating mechanisms might not still be superior, overall.

      • hirvi74 a day ago

        > HTMX is the same approach as old-school server-side templating, except you can replace parts of the page with HTML returned from the server, rather than always the whole page.

        Hasn't jQuery been able to do this since the early 2000s? Even vanilla JS has this functionality for decades too.

    • the_sleaze_ 2 days ago

      I believe what he's saying is: either embrace the full power and maintenance of a JS client-side framework, or just use SSR'd templates. I agree with them that HTMX isn't attractive, because it doesn't solve any issues I find important. The largest of which is that I don't "hate" React, and I don't attach emotionally to any arguments about "frontend".

sgt 2 days ago

Great stuff and same applies to Django of course.

thevivekshukla 2 days ago

I've been building something with HTMX since last week. I have not done a whole lot of complex things with it, but I don't think it will pose any problem when the time comes.

I get the premise of HTMX and when and why to use it. It's not a solution to everything, however it is a blessing for backend developers who want to work on the frontend.

-> A bit of backstory

For my project Daestro[0], which is a bit complex (and big), I chose Rust as the backend and Svelte (with SvelteKit) as the frontend SPA. This was my first time working with both. After years of working on Django, I wanted to try a statically typed language; after some research and trial, I chose Rust. SvelteKit was an obvious choice because it made sense to me compared to other frameworks, and it was super easy to pick up.

After working with SvelteKit for a year, I realised I'd been spending a lot of time doing the same things:

1. create the API on the backend
2. create a Zod schema on the frontend for form validation
3. create +page.ts to initialize the form
4. in +page.svelte, create the actual form and validate it there with Zod before sending it to the server

Hopping between two code bases just for a simple form, and Daestro has a lot of forms. I was just exhausted by this. Then HTMX started to get a lot of traction. I had been watching it from a distance, but having worked with Django and its templates, I was dismissive of it and thought a separate frontend was the best approach.

-> Why I'm leaning towards HTMX now

- Askama (Rust crate) is a template engine that is compile-time checked
- Askama supports block fragments[1], meaning you can render a certain part (block) of a template, a plus for HTMX usage
- Askama's macros almost don't make me miss Svelte's components
- Rust has an amazing type system, and now you can just use it; no need to replicate it in TypeScript
- same codebase, no more hopping
- only one binary to deploy (currently Daestro has 3 separate deployments)

-> My rules for using HTMX

You must self-impose a set of rules on how you want to use HTMX, otherwise things can get out of hand and instead of solving a problem you'll create bigger ones. These are my rules:

- keep your site a multi-page application and sprinkle on some HTMX to make it SPA-like on a per-page basis
- make use of the HX-Target request header to send only the block fragment that HTMX asked for (very easy with Askama)
- do not create routes that render only a partial page; instead, every route must be able to render its complete page, and then use block fragments to render only what HX-Target asks for
- do not compromise on security[2]

[0]: https://daestro.com
[1]: https://askama.readthedocs.io/en/stable/template_syntax.html...
[2]: https://htmx.org/docs/#security

vFunct 2 days ago

HTMX needs an easy way to update multiple named targets at a time. That's my current biggest problem with it.

totaa 2 days ago

congrats on the first blog post, been using Go+Templ+HTMX when implementing my first startup

I think at least some of these issues can be avoided with a different UI/UX to avoid passing temporal/unsaved data between screens.

looking forward to the next instalment!

andrewstuart 2 days ago

All that seems rather ….. indirect. Every step of the way I kept urging him to use JavaScript.

  • lmz 2 days ago

    They could have done all that in one long form with JS for client-side progressive enhancement (show/hide different parts), and that would probably be much easier.

    • graeber_28927 2 days ago

      Yes, my thoughts exactly. I hate implementing N forms with user state sessions and navigation, when one big form on the client can hold the state for me, and visual trickery can achieve the same.

      Whenever I go debug unnecessary state machines, or have to refactor them (to compress the number of steps), I scratch my head half the time, trying to follow the train of thought that my predecessor felt so smart about.