Good article, but the reasoning is wrong. It isn't easy to make a simple interface in the same way that Pascal apologized for writing a long letter because he didn't have time to write a shorter one.
Implementing the UI for one exact use case is not much trouble, but figuring out what that use case is, is difficult. And defending that use case from the line of people who want "that + this little extra thing", or "I just need ...", is difficult too. It takes a single strong-willed defender, or some sort of onerous management structure, to prevent the interface from quickly devolving back into the million options or schisming into other projects.
Simply put, it is a desirable state, but an unstable one.
Overall, the development world does not intuitively understand the difficulty of creating good interfaces (for people that aren’t developers). In dev work, the complexity is obvious, and that makes it easy for outsiders to understand: they look at the code we’re writing and say “wow, you can read that?!” I think that can give developers a mistaken impression that other people’s work is far less complex than it is. With interface design, everybody knows what a button does and what a text field is for, and developers know more than most about the tools used to create interfaces, so the language seems simple. But the problems you need to solve with that language are complex, and while failure is obvious, success is much more nebulous and user-specific. So much of what good interfaces convey to users is implied rather than expressed, and that’s a tricky task.
> creating good interfaces (for people that aren’t developers.)
This is the part where people get excited about AI. I personally think they're dead wrong on the process, but strongly empathize with that end goal.
Giving people the power to make the interfaces they need is the most enduring solution to this issue. We've had attempts like HyperCard, Delphi, and Access forms. We still get Excel forms, Google Forms, etc.
Having tools to incrementally try stuff without having to ask the IT department is IMHO the best way forward, and we could treat those experiments as prototypes for more robust applications to build from there.
Now, if we could find a way to aggregate these ad hoc apps in an OSS way...
I have nightmare stories to tell of Access Forms from my time dealing with them in the 90's.
The usual situation is that the business department hires someone with a modicum of talent or interest in tech, who then uses Access to build an application that automates or helps with some aspect of the department's work. They then leave (in a couple of cases these people were just interns) and the IT department is then called in to fix everything when it inevitably goes wrong. We're faced with a bunch of beginner spaghetti code [0], utterly terrible schema, no documentation, no spec, no structure, and tasked with fixing it urgently. This monster is now business-critical because in the three months it's been running the rest of the department has forgotten how to do the process the old way, and that process is time-critical.
Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that. We have to fix what we can and get it working immediately. And, of course, these fixes cause havoc with the project planning of all our other projects because they're unpredictable, urgent, and high priority. This delays all the other projects and helps to give IT a reputation as taking too long and not delivering on our promised schedules.
So yeah, what appears to be the best solution from a non-IT perspective is a long, long way from the best solution from an IT perspective.
[0] and other messes; in one case the code refused to work unless a field in the application had the author's name in it, for no other reason than vanity, and they'd obfuscated the code that checked for that. Took me a couple of hours to work out wtf they'd done and pull it all out.
> Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that.
I assume those processes weren't applied when deciding to use this application in the first place. Why not? Was there a loophole because it was built by an intern?
Of course this is ultimately the IT department's own fault for not responding quickly enough to legitimate business requirements. They need to actively look for ways to help rather than processing tickets.
Yeah, this is always the response. But it's wildly impractical - there are only so many developer hours available. The budget is limited, so not everyone gets what they want immediately. This should be obvious.
Part of the problem is that the novices who create these applications don't consider all the edge cases and gnarly non-golden-path situations, but the experienced devs do. So the novice slaps together something that does 95% of the job with 5% of the effort, but when it goes wrong the department calls in IT to fix it, and that means doing the remaining 95% of the effort. The result is that IT is seen as slow and bureaucratic, when in fact they're just doing the fecking job properly.
In most organizations the problem is lack of urgency rather than lack of developer hours. The developers sit in isolated siloes rather than going out and directly engaging with business units. This is mostly a management problem but there are plenty of individual developers who wait to be told what to do rather than actively seeking out better solutions for business problems.
I think that's the lesser problem. The bigger problem is the attitude of IT is wrong from the start. When they start doing something, they want to Do It Right. They want to automate the business process. But that's the wrong goal! You can spend years doing that and go all the way to building a homegrown SAP, and it will still suck and people will still use their half-assed Excel sheets and Access hacks.
IT should not be focusing on the theoretical, platonic Business Process. It never exists in practice anyway. They should focus on streamlining actual workflow of actual people. I.e. the opposite advice to the usual. Instead of understanding what users want and doing it, just do what they tell you they want. The problem with standard advice is that the thing you seek to understand is emergent, no one has a good definition, and will change three times before you finish your design doc.
To help company get rid of YOLOed hacks in Excel and such made by interns, IT should YOLO better hacks. Rapid delivery and responsiveness, but much more robust and reliable because of actual developer expertise behind it.
> They should focus on streamlining actual workflow of actual people.
If you streamline a shitty process, you will have diarrhea...
Unfortunately, most processes suck and need improvement. It isn't actually IT's job to improve processes. But almost always, IT is the only department that is able to change those processes nowadays since they are usually tied to some combination of lore, traditions, spreadsheets and misused third-party software.
If you just streamline what is there, you are cementing those broken processes.
That's precisely the mistake I'm talking about. You think you're smarter than people on the ground, and know better how they should do their job.
It's because of that condescending, know-it-all attitude that people actively avoid getting IT involved in anything, and prefer half-assed Excel hacks. And they're absolutely right.
Work with them and not over them, and you may get an opportunity to improve the process in ways that are actually useful. Those improvements aren't apparent until you're knee-deep in mud yourself, working hand in hand with the people you're trying to help.
That depends on whether one measures productivity in LOCs or business impact. As always, it’s not black or white, but my experience is that higher proximity is a net benefit.
More often than not it’s the development team that skips engaging with users, putting in minimal effort to understand their real needs.
Most of these teams only want a straightforward spec so they can shut themselves off from distractions, just to emerge weeks or months later with something that completely misses the business case. And yet they will find ways to point fingers at the product owner, project manager, or client for the disaster.
I have met the occasional person like this, sure. But only ever in really large organisations where they can hide, and only a minority.
The huge majority of devs want to understand the business and develop high quality software for it.
In one business I worked for, the devs knew more about the actual working of the business than most of the non-IT staff. One of the devs I worked with was routinely pulled into high-level strategy meetings because of his encyclopaedic knowledge of the details of the business.
The mistake is in trying to understand the business case. There is nothing to understand! The business case is the aggregate of what people actually do. There is no proper procedure that's actually followed at the ground level. Workflows are emergent and in constant flux. In this environment, the role of a dev should not be to build internal products, but to deliver internal hacks and ad-hoc solutions, maintain them, modify them on the fly, and keep it all documented.
I.e. done right, it should be not just possible but completely natural for a random team lead in the mail room to call IT and ask, "hey, we need a yellow highlighter in the sheet for packages that Steve from ACME Shipping needs to pick up on the extra evening run, can you add it?", and the answer should be "sure!" and they should have the new feature within an hour.
Yes, YOLO development straight on prod is acceptable. It's what everyone else is doing all the time, in every aspect of the business. It's time for developers to stop insisting they're special and normal rules don't apply to them.
And yet, even “knowing about the working of the business” is different from actually understanding user needs at the UI level, which involves a lot more variables.
The single most valuable tool is user testing. However, it really takes quite a few rounds of actually creating a design and seeing how wrongly you judged the other person’s capabilities to grok how powerful user testing is in revealing your own biases.
And at its core it’s not hard at all. The most important lesson really is a bit of humility: actually shutting up and observing what real users do when you don't intervene.
> The usual situation is that the business department hires someone with a modicum of talent or interest in tech
This reminds me of the "just walk confidently into their office and ask for a job to get one!" advice. This sounded like bullshit to me until I got to stay with some parts of a previous company, where the hiring process really wasn't that far off.
That's also the kind of company where contracts and vendor choices are negotiated on golf courses, and the CEO's buddies might as well be running the company; it would be all the same.
It’s also about keeping things simple, hierarchical, and very predictable. These do not go hand in hand with the feature creep of collaborative FOSS projects, as others point out here.
Good point. A good interface usually demands a unified end-to-end vision, and that usually comes from one person who has sat down to mull it over and make a bunch of good executive decisions.
And then you need to implement that, which is never an easy task, and maintain the eternal vigilance to both adhere to the vision but also fit future changes into that vision (or vice versa).
All of that is already hard to do when you're trying to build something. Only harder in a highly collaborative voluntary project where it's difficult or maybe even impossible to take that sort of ownership.
In the 90s I did a tech writing gig documenting some custom software a company had built for them by one of the big consultancy agencies. It was a bit of a nightmare as the functionality was arranged in a way that reflected the underlying architecture of the program rather than the users’ workflows. Although I suppose if they’d written the software well, I wouldn’t have had as many billable hours writing documentation.
> reflected the underlying architecture of the program rather than the users’ workflows
Is this an inherently bad thing if the software architecture is closely aligned with the problem it solves?
Maybe it's the architecture that was bad. Of course there are implementation details the user shouldn't care about and it's only sane to hide those. I'm curious how/why a user workflow would not be obviously composed of architectural features to even a casual user. Is it that the user interface was too granular or something else?
I find that just naming things according to the behavior a layperson would expect can make all the difference. I say all this because it's equally confusing when the developer hides way too much. Those developers seem to lack experience outside their own domain and overcomplicate what could have just been named better.
Developers often don’t think it’s a bad thing because that’s how we think about software. Regular users think about applications as tools to solve a problem. Being confronted by implementation details is no problem for people with the base knowledge to understand why things are like that, but without that knowledge, it’s just a confusing mess.
If you ever spend time with the low-level SAP GUIs, then yes, you will find out why that's definitely a bad thing. Software should reflect users' processes. The code underneath is just an implementation detail and should never impact the design of the interfaces.
I think the more likely explanation is software development is a huge opportunity cost (historically).
To learn how to be a software dev takes so much time that you don't have time to learn the "arts".
The people who become programmers, are a different breed. They are very close to the autism spectrum, if not in it. Because that's what it takes to be a software dev (before LLM's).
Nowadays the tides might be changing.
The analogy is a hot chick. It's statistically very likely a hot chick does not know calculus.
Hot chicks aren't inherently dumb. Make-up skills and skin care are such a huge opportunity cost. There's no way someone can stay pretty AND learn calculus at the same time.
> It's statistically very likely a hot chick does not know calculus.
It would be honestly interesting if someone actually did a study regarding it.
I do agree with this statement, but it isn't as if everybody else lacks opportunity costs; people might have video games as hobbies, or just normal hobbies in general, which could be considered opportunity costs as well.
The more interesting question to me (maybe I'm reading between the lines here) is: does society shower so much attention on beauty that it leaves them less drawn to, say, calculus, which might feel a lot more boring by comparison?
Generally speaking, I was reading the other day that female involvement in STEM overall has declined in percentage terms, IIRC.
Another factor could be the weirdness of expectations. Just as you assume this, so do many people, so if a hot chick is actually into calculus and says so, people react with "oh wow, I didn't know that" or "really??", which conveys an expectation that they not be this way, that they stay conventional and not have such interests.
I have seen people get shocked in online communities if a girl is even learning programming or doing things like Hyprland (maybe myself included, as it was indeed rare).
Naturally I would love it if more girls were into this. It hurts when I talk to girls about my hobbies and they aren't that interested, or we have no common hobbies; they can appreciate it, but I don't feel like I can tell them everything. I'm not that deep a coder right now so much as a Linux tinkerer: building Linux ISOs from scratch, shell scripting, building some web services, etc. I like to tinker with software. The word used in Unix/FOSS communities for this is "hacking", which should be the perfect way to describe what I mean, except people think I'm talking about cybersecurity and want me to "hack something". Sorry about the rant, but I have stopped saying "hacking" just because of how strongly the general public associates it with cybersecurity; nowadays I just say that I love tinkering with software. Side note: is there a better word for what I mean than "hacking"?
Which is a rare thing in this space. Linux is rough around the edges, to say the least. You don't need me telling you; we are in a thread about how open source software sucks at UI design. We could use more people like you in this space.
The men aren't fussed with the "hacker" label. It sounds cool. It's like when people mistakenly think all Asians know Kung Fu or something. The Asian guy isn't complaining lol.
There's definitely stigma/sexism that deters women from this field. But I think opportunity cost is a factor that's gravely overlooked.
Society demands a lot from women, when it comes to appearance. The bar is set very high.
So high, you don't have the time to be a good programmer AND pretty. Unless you won the genetic lottery.
I follow women's basketball avidly. Some of the women are not pretty. They are just very good at basketball. It's refreshing to see women be valued, not just because of their beauty.
That might be the case for you, but something doesn’t need to be universally true for it to be true enough to matter. Find any thread about AI art around here and check out how many people have open contempt for artists’ skills. I remember the t-shirts I saw a few sys admins wearing in the nineties that said “stop bothering me or I’ll replace you with a short shell script.” In the decades I worked in tech, I never saw that attitude wane. I saw a thread here within the past year or two where one guy said he couldn’t take medical doctors and auto mechanics seriously because they lacked the superior troubleshooting skills of a software developer. Seriously. That’s obviously not everybody, but it’s deeefinitely a thing.
I believe it comes from low self-esteem initially. Then they find their way into computers, where they do indeed have higher skills than average, and maybe they did observe that some people's jobs could be automated by a shell script. So: lots of ungrounded ego suddenly, but in their new guru ego state they extrapolated from such isolated cases to everywhere.
I also remember the hostility of my university's informal IT chat groups. Newbs were insulted for not knowing basic stuff instead of being helped. A truly confident person does not feel the need to do that. (And it was amazing having a couple of those confident people writing very helpful responses in the middle of all the insulting garbage.)
There are many more than three "dimensions" if I may use the term loosely, in software or hardware engineering.
Cost, safety, interaction between subsystems (developed by different engineering disciplines), tolerances, supply chain, manufacturing, reliability, the laws of physics, possibly chemistry and environmental interactions, regulatory, investor forgiveness, etc.
Traditional engineering also doesn't have the option of throwing arbitrary levels of complexity at a problem, which means working within tight constraints.
I'm not an engineer myself, but a scientist working for a company that makes measurement equipment. It wouldn't be fair for me to say that any engineering discipline is more challenging, since I'm in none of them. I've observed engineering projects for roughly 3 decades.
One thing I still struggle with is writing interfaces for complex use cases in an intuitive and simple manner that minimizes required movements and context switching.
Are there any good resources for developing good UX for necessarily complex use cases?
I am writing scheduling software for an uncommon use case.
The best method I have found is to use the interface and fix the parts that annoy me. After decades of games and internet I think we all know what good interfaces feel like. Smooth and seamless to get a particular job done. If it doesn't feel good to use it is going to cause problems with users.
That said, I see the software they use on the sales side. People will learn complexity if they have to.
Honestly, it’s a really deep topic — for a while I majored in interface/interaction design in school— and getting good at it is like getting good at writing. It’s not like amateurs can’t write solid stories, but they probably don’t really understand the decisions they’re making and the factors involved, and success usually involves accidentally being influenced by the right combination of things at the right time.
The toughest hurdle to overcome as a developer is not thinking about the gui as a thin client for the application, because to the user, the gui is the application. Developers intuitively keep state in their head and know what to look for in a complex field of information, and often get frustrated when not everything is visible all at once. Regular users are quite different— think about what problems people use your software to solve, think about the process they’d use to solve them, and break it down into a few primary phases or steps, and then consider everything they’d want to know or be able to do in each of those steps. Then, figure out how you’re going to give focus to those things… this could be as drastic as each step having its own screen, or as subtle as putting the cursor in a different field.
Visually grouping things, by itself, is a whole thing. Important things to consider that are conceptually simple but difficult to really master are informational hierarchy and how to convey that through visual hierarchy, gestalt, implied lines, type hierarchy, thematic grouping (all buttons that initiate a certain type of action, for example, might have rounded corners.)
You want to communicate the state of whatever process, what’s required to move forward and how the user can make that happen, and avoid unintentionally communicating things that are unhelpful. For example, putting a bunch of buttons on the same vertical axis might look nice, but it could imply a relationship that doesn’t exist. That sort of thing.
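To make the "break it into phases, give each phase its own focus" idea concrete, here's a minimal sketch of a workflow modeled as explicit steps. The step names and labels are hypothetical, not from any real app; the point is that the UI derives what to show (and what to emphasize) from which step the user is in:

```typescript
// Hypothetical three-step flow; each step could map to its own screen,
// or merely to a different field getting focus.
type Step = "enterDetails" | "review" | "confirm";

const order: Step[] = ["enterDetails", "review", "confirm"];

interface StepView {
  step: Step;
  canGoBack: boolean;  // whether to offer a "back" affordance
  nextLabel: string;   // what the primary action button should say
}

// Derive everything the user needs at this step from the step itself,
// rather than keeping that state implicit in the developer's head.
function describe(step: Step): StepView {
  const i = order.indexOf(step);
  return {
    step,
    canGoBack: i > 0,
    nextLabel: i === order.length - 1 ? "Finish" : "Next",
  };
}

// Advance, clamping at the final step.
function next(step: Step): Step {
  const i = order.indexOf(step);
  return order[Math.min(i + 1, order.length - 1)];
}
```

Keeping the phase model explicit like this, instead of scattering it across widget visibility flags, is what lets the interface communicate "where am I, and what happens next" without the user having to hold state in their head.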
A book that helps get you into the designing mindset, even if it isn’t directly related to interface design, is Don Norman’s The Design of Everyday Things. People criticize it like it’s an academic tome; don’t take it so seriously. It shows a way of thinking critically about things from the user's perspective, and that’s the most important part of design.
> Overall, the development world does not intuitively understand the difficulty of creating good interfaces
Nor does the design world, for that matter. They think that making slightly darker gray text on a gray background, using a tiny font and leaving loads of empty space, is peak design. Meanwhile my father cannot use most websites because of this.
The dozens of people I know that design interfaces professionally can probably recite more of the WCAG by heart than some of the people that created them. You’re assuming that things you think “look designed” were made by designers rather than people playing with the CSS in a template they found trying to make things “look designed.” You’re almost certainly mistaken.
As I age, this x1000. Even simple slack app on my windows laptop - clicking in the overview scroll bar is NOT "move down a page". It seems to be "the longer you click, the further it moves" or something equally disgusting. Usually, I dock my laptop and use an external mouse with wheel, and it's easy to do what I want. With a touchpad? Forget it.. I'm clicking 20x to get it to move to the top - IF I can hit the 5-pixel-wide scrollbar. There's no easy way to increase scrollbar size anymore either..
It's like dark patterns are the ONLY pattern these days. Where did we go wrong?
I mean, maybe, but the question wasn't what the superior general pointing device is (trackball FTW if you ask me); it was how to scroll using a trackpad without tearing your hair out.
What pisses me off is that the “brutalist” style in the 1990s was arguably perfect. Having standardized persistent menus, meaningful compact toolbars was nice.
Then the world threw away the menus, adopted an idiotic “ribbon” that uses more screen real estate. Unsatisfied, we dumbed down desktop apps to look like mobile apps, even though input technology remains different.
Websites also decided to avoid blue underlined text for links and be as nonstandard as possible.
Frankly, developers did UI better before UI designers went off the deep end.
The brutalist style also meant that I didn't need a UI designer for my applications. With Delphi I was able to create great apps in a matter of days. And users loved them, because they were so snappy and well thought out.
Nowadays it seems I need a UI designer to accomplish just about anything. And the resulting apps might look better but are worse when you are actually trying to accomplish work using them.
I was ranting exactly the same just yesterday. Nowadays UI designers seem to have forgotten all about affordances. Back in the day you had drop shadows below buttons to indicate that they could be pressed, big chunky scrollbars with two lines on the handle to indicate "grippiness" etc.
A few days ago I had trouble charging an electric rental car. When plugging it in, it kept saying "charging scheduled" on the dash, but I couldn't find out how to disable that and make it charge right away. The manual seemed to indicate it could only be done with an app (ugh, disgusting). Went back to the rental company, they made it charge and showed me a video of the screen where to do that. I asked "but how on earth do you get to that screen?". Turned out you could fucking swipe the tablet display to get to a different screen! There was absolutely no indication that this was possible, and the screen even implied that it was modal because there were icons at the bottom which changed the display of the screen.
So you had: zero affordances, modal design on a specific tab, and the different modes showed different tabs at the top, further leading me to believe that this was all there was.
I've had long discussions at work with our designer, who thinks that people on desktop computers should perform swipe actions with the mouse rather than the UI reacting to mouse scroll events.
99% of the users are not using the mobile version.
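For what it's worth, this disagreement can often be dissolved in code rather than argued: a "swipeable" panel set can also respond to plain wheel events on desktop. A minimal sketch of the idea, where the threshold value and function names are my own assumptions rather than any library's API:

```typescript
// Accumulate wheel deltas and advance one panel per threshold crossed,
// so desktop users can scroll instead of performing a drag-swipe.
const THRESHOLD = 100; // assumed delta units per panel; tune per device

function applyWheel(
  index: number,       // current panel index
  accumulated: number, // leftover delta from previous events
  deltaY: number,      // e.deltaY from a wheel event
  panelCount: number
): { index: number; accumulated: number } {
  let acc = accumulated + deltaY;
  let i = index;
  // Move forward, clamping at the last panel.
  while (acc >= THRESHOLD && i < panelCount - 1) {
    acc -= THRESHOLD;
    i += 1;
  }
  // Move backward, clamping at the first panel.
  while (acc <= -THRESHOLD && i > 0) {
    acc += THRESHOLD;
    i -= 1;
  }
  return { index: i, accumulated: acc };
}

// In a browser this would be wired to a "wheel" listener, roughly:
//   el.addEventListener("wheel", (e) => {
//     state = applyWheel(state.index, state.accumulated, e.deltaY, panels.length);
//   });
```

The design point: keeping the navigation logic in a pure function like this makes it cheap to support both input styles, so mouse-wheel users and touch users each get the interaction they expect.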
The contributors to free software tend to be power users who want to ensure their own use case works. I don't think they're investing a lot of thought into the 80/20 use case for normal/majority users, or that they'd risk hurting their own workflow to make things easier for others.
True; that's why we have companies with paid products who devote a lot of their time, arguably the majority, to making the exact interfaces people want and understand :) It's a ton, a ton of difficult work, for which there is little to no incentive in the free software ecosystem.
Absolutely. I still prefer MacOS/Mac hardware in some ways but running a browser on Linux on a Thinkpad or whatever works pretty well for a lot of purposes.
My guess is that, as has always been, the pool of people willing to code for free on their own time because it's fun is just much larger than the people willing to make icons for software projects on their own time because they think it's fun.
Graphic designers and artists get ripped off all the time, frequently by nerds, who tend to do so in a manner that insults the value of the artist's work.
It's difficult to get those kinds of creatives to donate their time (trust me on this, I'm always trying).
I'm an ex-artist, and I'm a nerd. I can definitively say that creating good designs is at least as difficult as creating good software, but it seldom makes the kind of margin that you can make from software, so misappropriation hurts artists a lot more than programmers.
Most fields just don’t have the same everyone-wins culture of collaboration that software does. Artists don’t produce CC art at anywhere close to the same volume or influence as engineers produce software. This is probably due to some kind of compounding effect available in software that isn’t available in graphics.
Software people love writing software to a degree where they’ll just give it away. You just won’t find artists doing the same at the same scale. Or architects, or structural engineers. Maybe the closest are some boat designs but even those are accidental.
It might just be that we were lucky to have some Stallmans in this field early.
Isn’t there a lot more compensation available in software? Like as a developer, you can make a lot of money without having to even value money highly. I think in other fields you don’t generally get compensated well unless you are gunning/grinding for it specifically. “For the love of the art” people in visual arts are painters or something like that, probably. Whereas with software you can end up with people who don’t value money that much and have enough already, at least to take a break from paid work or to not devote as much effort to their paid work. I imagine a lot of open source people are in that position?
I think the collaborative nature of open source software development is unlike anything else. I can upload some software in the hope that others find it useful and can build on top of it, or send back improvements.
Not sure how that happens with a painting, even a digital one.
Fonts are an interesting case. The field of typography is kind of migrating from the "fuck you, pay me" ethic of the pure design space into a more software-like "everyone wins" state, with plenty of high-quality open-source fonts available, whereas previously we had to make do with bitmap-font droppings from proprietary operating systems, Bitstream Vera, and illegal-to-redistribute copies of Microsoft's web font pack.
I think this is because there are plenty of software nerds with an interest in typography who want to see more free fonts available.
There's actually a fair bit of highly influential CC-licensed artwork out there. Wikipedia made a whole free encyclopedia. The SCP Foundation wiki is its own subculture. There's loads of Free Culture photography on Wikimedia Commons (itself mirrored from Flickr). A good chunk of your YouTube feed is probably using Kevin MacLeod music, and probably fucking up the attribution strings, too. A lot of artists don't really understand copyright.
But more importantly, most of them don't really care beyond "oh copyright's the thing that lets me sue big company man[0]".
The real impediment to CC-licensed creative works is that creativity resists standardization. The reason why we have https://xkcd.com/2347/ is because software wants to be standardized; it's not really a creative work no matter what CONTU says. You can have an OS kernel project's development funded entirely off the back of people who need "this thing but a little different". You can't do the same for creativity, because the vast majority of creative works are one-and-done. You make it, you sell it, and it's done. Maybe you make sequels, or prequels, or spinoffs, but all of those are going to be entirely new stories maybe using some of the same characters or settings.
[0] Which itself is legally ignorant because the cost of maintaining a lawsuit against a legal behemoth is huge even if you're entirely in the right
I like this explanation, though there is one form of creative standardization: brand identity. And I suppose that's where graphics folk engage with software (Plasma, the GNOME design, etc.). Amusingly, I like contributing to Wikipedia and the Commons so I should have thought of that. You're absolutely right that I had a blind spot there in terms of what's the equivalent there of free software.
Another thing is that the vast amount of fan fiction out there has a hub-and-spoke model forming an S_n graph around the solitary 'original work' and there are community norms around not 'appropriating' characters and so on, but you're right that community works like the SCP Foundation definitely show that software-like property of remixing of open work.
Anyway, all to say I liked your comment very much but couldn't reply because you seem to have been accidentally hellbanned some short while ago. All of your comments are pretty good, so I reached out to the HN guys and they fixed it up (and confirmed it was a false positive). If you haven't seen people engage with what you're saying, it was a technical issue not a quality issue, so I hope you'll keep posting because this is stuff I like reading on HN. And if you have a blog with an RSS feed or something, it would be cool to see it on your profile.
This is a weird thread for me to read, as someone who a) works primarily with developer tooling (and not even GUI tooling, I write cryptography stuff usually!), b) is very active in a vibrant community of artists that care about nerd software projects.
I don't, as a rule, ever ask artists to contribute for free, but I still occasionally get gifted art from kind folks. (I'm more than happy to commission them for one-off work.)
Artists tragically undercharge for their labor, so I don't think the goal should be "coax them into contributing for $0" so much as "coax them into becoming an available and reliable talent pool for your community at an agreeable rate". If they're enthusiastic enough, some might do free work from time to time, but that shouldn't be the expectation.
Why should they work for pay on free software? Nobody expects to be paid to work on the software itself. Yet artists expect to be treated differently.
If it is your job, then go do it as a job. But we all have jobs. Free software is what we do in our free time. Artists don't seem to have this distinction. They expect to be paid to do a hobby.
Doing a pro graphic design treatment is a lot more than just "drawing a few pictures" and picking a color palette.
It usually involves developing a design language for the app, or sometimes, for the whole organization (if, like the one I do a lot of work for, it's really all about one app). That's a big deal.
Logo design is also a much more difficult task than people think. A good logo can be insanely valuable. The one we use for the app I've done a lot of work on was a quick "one-off" by a guy who ended up running design for a major software house. It was a princely gift.
You'd be surprised, then, to know that a lot of programmers think graphic design is easy (see the other comment, in this thread), and can often be quite dismissive of the vocation.
As a programmer, working with a good graphic designer can be very frustrating, as they can demand that I make changes that seem ridiculous to me but, after the product ships, make all the difference. I've never actually gotten used to it.
That's also why it's so difficult to get a "full monty" treatment, from a designer, donating their time.
"can be" makes it a very different statement. Either one "can be" a lot harder than the other, depending on the task. The statement above is about typical difficulty.
And even if they're wrong about which one is typically harder, they weren't saying it was easy, and weren't saying it was significantly easier than programming.
It's just more common for artists to do small commission work on the side of a real job. Thirty dollars for something is basically a donation or tip in my view, and the community can crowdfund for it the same way bug bounties work, I think?
I suspect some of this is due to the fact that the programmers consenting to do free work already have well-paying jobs, so they have the freedom and time to pursue coding as a hobby for fun as well. Graphic designers and UX designers are already having a hard time getting hired for their specific skills and getting paid well for it, so I imagine it's insulting to be asked to do it for free on top of that.
That said, I don't think it's as simple as that. Coding is a kind of puzzle-solving that's very self-reinforcing and addictive for a certain type of person. Coders can't help plugging away at a problem even if they're not at the computer. Drawing, on the other hand, requires a lot more drudgery to get good, for most people anyway, and likely isn't as addictive.
I believe it's more nuanced than that. Artists, like programmers, aren't uniformly trained or skilled. An enterprise CRUD developer asks different questions and proposes different answers compared to an embedded systems dev or a compiler engineer.
Visual art is millennia older and has found many more niches, so, besides there being a very clear history and sensibility for what is actually fundamental vs industry smoke and mirrors, for every artist you encounter, the likelihood that their goals and interests happen to coincide with "improve the experience of this software" is proportionately lower than in development roles. Calling it drudgery isn't accurate because artists do get the bug for solving repetitive drawing problems and sinking hours into rendering out little details, but the basic motive for it is also likely to be "draw my OCs kissing", with no context of collaboration with anyone else or building a particular career path. The intersection between personal motives and commerce filters a lot of people out of the art pool, and the particular motives of software filters them a second time. The artists with leftover free time may use it for personal indulgences.
Conversely, it's implicit that if you're employed as a developer, there is someone else you are talking to who depends on your code and its precise operation, and the job itself is collaborative, with many hands potentially touching the same code and every aspect of it discussed to death. You want to solve a certain issue that hasn't yet been tackled, so you write the first attempt. Then someone else comes along and tries to improve on it. Because of that, the shape of the work and how you approach it remains similar across many kinds of roles, even as the technical details shift. As a result, you end up with a healthy amount of free-time software made to a professional standard, simply because someone wanted a thing solved so they picked up a hammer.
I dispute that claim but it doesn't answer the question. When you have multiple people involved in the community of an open source project, what makes them decide where to contribute, and what makes them decide if they'll use marketable skills for free or not? I think it's an interesting thing to look into.
This seems like a self selection problem. It’s not about forcing people to work for free. It’s about finding designers willing to work for free (just like everyone else on the project).
Much larger but not non-existent, people post their work (including laborious stuff like icon suites and themes) on art forums and websites for no gain all the time.
Going back to the winxp days there was a fairly vibrant group of people making unofficial themes for it, although I think that was helped by the existence of tools (from Stardock?) specialized on that task and making it approachable if your skill set didn't align perfectly.
UI and UX are, for all intents and purposes, lost arts. No one is sitting on the other side of a two-way mirror any more, watching people use their app...
This is how we get UIs that work but suck to use. This is how we allow dark patterns to flourish. You can and will happily do things your users/customers hate if it makes a dent in the bottom line and you don't have to face their criticisms directly.
The dependency on telemetry instead of actually sitting down with a user and watching them use your software is part of the problem. No amount of screen captures, heatmaps or abandoned workflow metrics will show you the expression on a person's face.
They're not just nerds, they're power users. These are different things.
Pretty much everyone is a power user of SOME software. That might be Excel, that might be their payroll processor, that might be their employee data platform. Because you have to be if you work a normal desk job.
If Excel was simpler and had an intuitive UI, it would be worthless. Because simple UI works for the first 100 hours, maybe. Then it's actively an obstacle because you need to do eccentric shit as fast as possible and you can't.
Then, that's where the keyboard shortcuts and 100 buttons shoved on a page somewhere come in. That's where the lack of whitespace comes in. Those aren't downsides anymore.
The person who is going to bother adding stuff to a piece of software is almost certainly by definition a power-user.
This means they want to add features they couldn't get anywhere else, and already know how to use the existing UI. Onboarding new users is just not their problem or something they care about - They are interested in their own utility, because they aren't getting paid to care about someone else's.
I'm sceptical about fixing (in the sense of a lasting solution), but it might be a very powerful tool to communicate to devs what the UI should look like.
I have been beating this drum for many years. There are some big cultural rifts and workflow difficulties. Unless FOSS products are run by project managers rather than either developers or designers, it’s a tough nut. Last I looked, gimp has been really tackling this effort more aggressively than most.
I am not convinced bad UI is either a FOSS issue, or solved by having project managers. I know very non-tech people who struggle with Windows 11, for example. I do not like MS Office on the rare occasions I have used it on other people's machines. Not that impressed by the way most browser UIs are going either.
Microsoft has been lagging on interface design for a long time. If the project managers are focused on forcing users into monetizable paths against their will, then of course you’re going to get crap interfaces and crap software quality. If you have a project manager that’s focused on directing people to solve problems for users rather than people just bolting on whatever makes sense, then that’s a lot different. And no, bad UIs aren’t inherent to FOSS— look at Firefox, Blender, Signal… all FOSS projects that are managed by people focused on integrating the most important features in a way that makes sense for the ecosystem.
gimp has been my goto when I want to explain bad ui, developer designed ui, or just typical foss ui
I'm glad they're fixing it. It's also my image editor of choice.
Yeah I’ve been using it as a go-to example for the wrongest approach to UI design for years. I’m glad to see they’re working harder than most to fix some of the underlying problems.
To design a good user interface, you need a feedback loop that tells you how people actually use your software. That feedback loop should be as painless for the user as possible.
Having people to man a 1-800 number is one way to get that feedback loop. Professional user testing is another. Telemetry / analytics / user tracking, or even being able to pull out statistics from a database on your server, is yet another. Professional software usually has at least two of these, sometimes all four. Free software usually has none.
There are still FLOSS developers out there who think that an English-only channel on Libera.chat (because Discord is for the uneducated n00bs who don't know what's good for them) is a good way to communicate with their users.
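One low-friction version of that feedback loop is purely local, opt-in usage counting: tally which features actually get used, and only persist or share the tallies with consent. A minimal sketch (all class and feature names here are illustrative, not from any real project):

```python
import json
from collections import Counter
from pathlib import Path

class UsageStats:
    """Local, consent-gated tally of feature usage."""

    def __init__(self, path, opted_in=False):
        self.path = Path(path)
        self.opted_in = opted_in
        self.counts = Counter()

    def record(self, feature):
        # A strict no-op unless the user explicitly opted in.
        if self.opted_in:
            self.counts[feature] += 1

    def save(self):
        # Written locally; sending it anywhere is a separate, explicit step.
        self.path.write_text(json.dumps(self.counts))

stats = UsageStats("usage.json", opted_in=True)
stats.record("export_mp4")
stats.record("export_mp4")
stats.record("crop")
print(stats.counts["export_mp4"])  # 2
```

Even this crude a signal answers questions that a 1-800 number or a user test would otherwise have to: which of your hundred buttons are the three everyone actually presses.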
What developers want from software isn't what end users want from software. Take Linux for example. A lot of things on Linux can only be done in the terminal, but the people who are able to fix this problem don't actually need it to be fixed. This is why OSS works so well for dev tools.
It always amazes me how even just regular every day users will come to me with something like this:
Overly simplified example:
"Can you make this button do X?" where the existing button in so many ways is only distantly connected to X. And then they get stuck on the idea that THAT button has to be where the thing happens, and they stick with it even if you explain that the usual function of that button is Y.
I simplified it saying button, but this applies to processes and other things. I think users sometimes think picking a common thing, button or process that sort of does what they want is the right entry point to discuss changes and maybe they think that somehow saves time / developer effort. Where in reality, just a new button is in fact an easier and less risky place to start.
I didn't say that very well, but I wonder if that plays a part in the endless adding of complexity to UI where users grasp onto a given button, function, or process and "just" want to alter it a little ... and it never ends until it all breaks down.
I always tell clients (or users):
"If you bring your car to the mechanic because it's making a noise and tell them to replace the belt, they will replace the belt and you car will still make the noise. Ask them to fix the noise."
In other words, if you need expert help, trust the expert. Ask for what you need, not how to do it.
I would hope the mechanic would engage with the customer in more back and forth.
But sometimes power structures don't allow for it. I worked tech support in a number of companies. At some companies we were empowered to investigate and solve problems... sometimes that took work, and work from the customer. It had much better outcomes for the customer, but fixes were not quick. Customers / companies with good technical staff in management understood that dynamic.
Other companies were "just fix it", and the company, the customers, and management all treated tech support as annoying drones. They got a lot more "you got exactly what you asked for" treatment ... because management, and even customers, will take the self-defeating quick and easy path sometimes.
It is a common misconception that the "expert" always knows best. The expert can be a trainee, may be motivated to make more money for their organisation, or may have yet to encounter your problem.
On the other hand, if you have been using your car for a decade and feel it needs a new belt, then get a new belt. Worst case scenario: you lose some money but learn a bit more about an item you use every day.
I am a qualified mechanic. I no longer work in the field but I did for many years. Typically, when people 'trust their instincts as a user' they are fantastically wrong. Off by a mile. They have little to no idea how a car works besides youtube videos and forum posts which are full of inaccuracies or outright nonsense and they don't want to pay for diagnosis.
So when they would come in asking for a specific part to be replaced with no context I used to tell them that we wouldn't do that until we did a diagnosis. This is because if we did do as they asked and, like in most cases, it turned out that they were wrong they would then become indignant and ask why we didn't do diagnosis for free to tell them that they were wrong.
Diagnosis takes time and, therefore, costs money. If the user was capable of it then they would also be capable enough to carry out the repair. If they're capable of carrying out the diagnosis and the repair then they wouldn't be coming to me. This has proved to be true over many years for everyone from kids with their first car to accountants and even electrical engineers working on complex systems for large corporations as their occupation. That last one is particularly surprising considering that an engineer should know the bounds of their knowledge and understand how maintenance, diagnosis and repair work on a conceptual level.
Don't trust your instincts in areas where you have no understanding. Either learn and gain the understanding or accept that paying an expert is part of owning something that requires maintenance and repair.
If you don't trust the expert then why are you asking them to fix your stuff? It's a weird idea that you'd want an idiot to do what you say because you know better.
I think what you're driving at can be more generalized as users bringing solutions when it would be more productive for them to bring problems. This is something I focus on pretty seriously in IT. The tricky part is to get the message across without coming across as unhelpful, arrogant, or obstructive. It often helps to ask them to describe what they're trying to achieve, or what they need. But however you approach the discussion, it must come across as a sincere desire to help.
Yeah, I've had now a couple decades of experience dealing with this, and my typical strat is to "step back" from the requested change, find out what the bigger goal is, and usually I will immediately come up with a completely different solution to fulfill their goal(s). Usually involving things they hadn't even thought about, because they were so focused on that one little thing. When looking at the bigger picture, suddenly you realize the project contains many relevant pieces that must be adjusted to reach the intended goals.
In my experience, this is a communication issue, not a logical or technical or philosophical issue. Nor the result of a fixation caused by an idea out of the blue.
In my experience it may be solved by both parties spending the effort and time to first understand what is being asked... assuming they are both willing to stomach the costs. Sometimes it isn't worth it, and it's easier to pacify than respectfully and carefully dig.
Don't fall into the trap of responding to the user's request to do Y a certain way. They are asking you to implement Y, and they think they know how it should be implemented, but really they would be happy with Y no matter how you did it. https://xyproblem.info/
On the other hand, I've not uncommonly seen this idea misused: Alice asks for Y, Bob says that it's an XY problem and that Alice really wants to solve a more general problem X with solution Z, Alice says that Z doesn't work for her due to some detail of her problem, Bob browbeats Alice over "If you think Z won't work, then you're wrong, end of story", and everyone argues back and forth over Z instead of coming up with a working solution.
Sometimes the best solution is not the most widely-encouraged one.
Yeah I often will ask for a quick phone call and try to work from the top down, or the bottom up depending on the client. Getting to the thing we're solving often leads to a different problem description and later different button or concept altogether.
Sometimes it's just me firing up some SQL queries and discovering "Well this happened 3 times ... ever ..." and we do nothing ;)
It's my belief that much of this flavor of UI/UX degradation can be avoided by employing a simple but criminally underutilized idea in the software world (FOSS portion included), which is feature freezing.
That is, either determine the optimal set of features from the outset and design around that, or organically reach the optimum, and then freeze. After implementing the target feature set, nearly all engineering resources are dedicated to bug fixes and efficiency improvements. New features can be added only after passing through a rigorous gauntlet of reviews that determine whether the value of the feature is worth the inherent disruption and the impact on stability and resource consumption, and if so, its integration into the existing UI is approached holistically (as opposed to the usual careless bolt-on approach).
Naturally, there are some types of software where requirements are too fast-moving for this to be practical, but I would hazard a guess that it would work for the overwhelming majority of use cases which have been solved problems for a decade or more and the required level of flux is in reality extremely low.
Spot on. Defending simplicity takes a lot of energy and commitment. It is not sexy. It is a thankless job. But doing it well takes a lot of skill, skill that is often disparaged by many communities as "political nonsense"[1]. It is not a surprise that the free software world has this problem.
But it is not a uniquely free software world problem. It is there in the industry as well. But the marketplace serves as a reality check, and kills egregious cases.
[1] Granted, "Political non sense" is a dual-purpose skill. In our context, it can be used both for "defending simplicity", as well as "resisting meaningful progress". It's not easy to tell the difference.
The cycle repeats frequently in industry. New waves of startups address a problem with better UX, and maybe some other advantages like increased automation and speed from more modern architectures. But feature creep eventually makes the UX cumbersome, and the complexity makes it hard to migrate to new paradigms, or at least to do so without a ton of baggage, so they in turn are displaced by new startups.
I suspect in the short term users are going to start solving this more and more by asking ChatGPT how to make their video work on their phone, and it telling them step by step how to do it.
Longer term I wonder if complex apps with lots of features might integrate AI in such a way that users can ask it to generate a UI matching their needs. Some will only need a single button, some will need more.
Not only is it hard to figure out the use-case, but the correct use-case will change over time. If this were made in the iPod touch era, it would probably make 240p files for maximum compatibility. That's ... probably the wrong setting for today.
Good points, but to add to the sources of instability ... a first time user of a piece of software may be very appreciative of its simplicity and "intuitiveness". However, if it is a tool that they spend a lot of time with and is connected to a potentially complex workflow, it won't be long before even they are asking for "this little extra thing".
It is hard to overestimate the difference between creating tools for people who use the tools for hours every day and creating tools for people who use tools once a week or less.
Right. For most people, gimp is not only overkill but also overwhelming. It's hard to intuit how to perform even fairly simple tasks. But for someone who needs it it's worth learning.
The casual user just wants a tool to crop screenshots and maybe draw simple shapes/lines/arrows. But once they do that they start to think of more advanced things and the simple tool starts to be seen as limiting.
But the linked article addresses that. They're not advocating for removing the full-feature UI, they just advise having a simple version that does the one thing (or couple of things) most users want in a simple way. Users who want to do more can just use the full version.
Users don't want "to do more". They want to do "that one extra thing". Going from the "novice" version to the "full version" just to get that one extra thing is a real problem for a lot of people. But how do you address this as a software designer?
I don't know if this works well in general, but for example Kodi has "basic", "advanced" and several progressively more advanced steps in between for most of its menus. It hides lots of details that are irrelevant to the majority of users.
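Kodi's approach boils down to tagging each setting with a level and letting the user pick a cutoff. A minimal sketch of that idea (the setting names here are made up for illustration):

```python
# Ordered from most to least commonly needed; a setting is visible
# when its level is at or below the user's chosen level.
LEVELS = ["basic", "standard", "advanced", "expert"]

def visible_settings(settings, user_level):
    """Filter (name, level) pairs down to what the chosen level shows."""
    cutoff = LEVELS.index(user_level)
    return [name for name, level in settings if LEVELS.index(level) <= cutoff]

settings = [
    ("Subtitles", "basic"),
    ("Cache size", "advanced"),
    ("Debug log", "expert"),
]
print(visible_settings(settings, "advanced"))  # ['Subtitles', 'Cache size']
```

The nice property is that the full feature set still exists in one codebase; only the default presentation changes, so "novice" and "expert" never schism into separate products.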
I'm not a coder, so I'm not going to pretend that this solution is easy to implement (it might be, but I wouldn't assume so), but how about allowing you to expose the "expert" options just temporarily (to find the tool you need) and then allow adding that to your new "novice plus" custom menus? I.e., if you use a menu option from the expert menu X number of times, it just shows up even though your default is the novice view.
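For what it's worth, the mechanism itself is simple; the hard part is the design judgment around it. A hypothetical sketch of that "novice plus" idea, where an expert menu item gets promoted into the simple view once it has been used enough times (the threshold and item names are assumptions for illustration):

```python
from collections import Counter

PROMOTE_AFTER = 3  # assumed threshold before an expert item surfaces

class AdaptiveMenu:
    """Novice menu that absorbs frequently-used expert items."""

    def __init__(self, novice_items, expert_items):
        self.novice = list(novice_items)
        self.expert = list(expert_items)
        self.uses = Counter()

    def use(self, item):
        self.uses[item] += 1
        # Promote an expert item into the novice view once it proves useful.
        if item in self.expert and self.uses[item] >= PROMOTE_AFTER:
            self.expert.remove(item)
            self.novice.append(item)

    def visible_items(self, expert_mode=False):
        return self.novice + (self.expert if expert_mode else [])

menu = AdaptiveMenu(["Crop", "Resize"], ["Curves", "Channel Mixer"])
for _ in range(3):
    menu.use("Curves")
print(menu.visible_items())  # ['Crop', 'Resize', 'Curves']
```

The catch, of course, is that a menu which rearranges itself can defeat muscle memory, which is why this kind of adaptivity probably needs to be opt-in rather than automatic.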
Progressive disclosure? If you know your audience, you probably know what most people want, and then the usual next step up for that "one extra thing". You could start with the ultra-simple basic thing, then have an option to enable the "next step feature". If needed you could have progressive options up to the full version.
> The casual user just wants a tool to crop screenshots and maybe draw simple shapes/lines/arrows. But once they do that they start to think of more advanced things and the simple tool starts to be seen as limiting.
Silksong Daily News went from videos of a voiceover saying "There has been no news for today" over a static image background to (sometimes) being scripted stop-motion videos.
And why exactly should free software prioritise someone's first five minutes (or first 100 hours, even) over the rest of the thousands of hours they might spend with it?
I see people using DAWs, even "pro" ones made by companies presumably interested in their bottom lines. In all cases I have no idea how to use it.
Do I complain about intuitiveness etc? Of course not. I don't know how to do something. That's my problem. Not theirs.
> And why exactly should free software prioritise someone's first five minutes (or first 100 hours, even) over the rest of the thousands of hours they might spend with it?
Well, if people fail at that first five minutes, the subsequent thousand hours most often never happens.
> It takes a single strong-willed defender, or some sort of onerous management structure...
I'd say it's even more than you've stated. Not only for defending an existing project, but even for getting a project going in the first place a dictator* is needed.
I'm willing to be proven wrong, and I know this flies in the face of common scrum-team-everybody-owns approaches.
> to prevent the interface from quickly devolving back into the million options
Microsoft for a loooong time had that figured out pretty well:
- The stuff that people needed every day and liked to customize the most was directly reachable. Right-clicking the desktop offered a shortcut to the CPL for display settings and desktop icons.
- More detailed stuff? A CPL that could be reached from the System Settings
- Stuff that was low level but still needed to be exposed somewhat? msconfig.
- Stuff that you'd need to touch very rarely, but absolutely needed the option to customize it for entire fleets? Group Policy.
- Really REALLY exotic stuff? Registry only.
In the end it all was Registry under the hood, but there were so many options to access these registry keys depending what level of user you were. Nowadays? It's a fucking nightmare, the last truly decent Windows was 7, 10 is "barely acceptable" in my eyes and Windows 11 can go and die in a fire.
A lot of this type of stuff boils down to what you're used to.
My wife is not particularly tech savvy. She is a Linux user, however. When we started a new business, we needed certain applications that only run on Windows and since she would be at the brick and mortar location full time, I figured we could multi-purpose a new laptop for her and have her switch to Windows.
She hated it and begged for us to get a dedicated Windows laptop for that stuff so she could go back to Linux.
Some of you might suggest that she has me for tech support, which is true, but I can't actually remember the last time she asked me to troubleshoot something for her with her laptop. The occasions that do come to mind are usually hardware failure related.
Obviously the thing about generalizations is that they're never going to fit all individuals uniformly. My wife might be an edge case. But she feels at home using Linux, as it's what she's used to ... and strongly loathed using Windows when it was offered to her.
I feel that kind of way about Mac vs PC as well. I am a lifelong PC user, and also a "power user." I have extremely particular preferences when it comes to my UI and keyboard mappings and fonts and windowing features. When I was forced to use a Mac for work, I honestly considered looking for a different position because it was just that painful for me. Nothing wrong with Mac OS X, a lot of people love it. But I was 10% as productive on it when compared to what I'm used to... and I'm "old dog" enough that it was just too much change to be able to bear and work with.
One summer in middle school our family computer failed. We bought a new motherboard from Microcenter but it didn’t come with a Windows license, so I proposed we just try Ubuntu for a while.
My mom had no trouble adjusting to it. It was all just computer to her in some ways.
Same, my mom ran Linux for years in the Vista days cuz her PC was too slow for Windows. She was fine. She even preferred Libreoffice over the Office ribbon interface.
Sometime around 2012, Windows XP started having issues on my parents' PC, so I installed Xubuntu on it (my preferred distro at the time). I told them that "it works like Windows", showed them how to check email, browse the web, play solitaire, and shut down. Even the random HP printer + scanner they had worked great! I went back home two states away and expected a call from them to "put it back to what it was", but it never happened. (The closest was Mom wondering why solitaire (the gnome-games version) was different; I guided her on how to change the game type to Klondike.)
If "it [Xubuntu] works like Windows" offended you, I'd like to point out that normies don't care about how operating system kernels are designed. You're part of the problem this simplified Handbrake UI tries to solve. Normies care about things like a start menu, and that the X in the corner closes programs. The interface is paramount for non-technical users.
I currently work in the refurb division of an e-waste recycling company.[0] Most everyone else there installs Ubuntu on laptops (we don't have the license to sell things with Windows), and I started to initially, but an error always appeared on boot. Consider unpacking it and turning it on for the first time, and an error immediately appears: would you wonder if what you just bought is already broken? I eventually settled on Linux Mint with the OEM install option.
Oh good it's that time of the week when we HN users get together to tell lies about all of our tech illiterate family members who use Linux full time with zero problems and zero tech support.
Familiarity is massively undersold in the Linux desktop adoption discussion. Having desktop environments that are near 1:1 clones of the commercial platforms (preferably paired with a distribution that's designed to be bulletproof and practically never requires its user to fire up a terminal window) would go a long way toward making Linux viable for users sitting in the middle of the bell curve of technical capability.
It's one of those situations where "close enough" isn't. The fine details matter.
The main problem with this is that the commercial offerings are pretty much just bad.
Windows isn't the way it is because of some purposeful design or anything. No, it's decades of poor decisions after poor decisions. Nothing, and I do mean nothing, is intuitive on Windows. It's familiar! But it is not intuitive.
If you conform to what these commercial offerings do, you are actively making your software worse. On purpose. You're actively programming in baggage from 25 years ago... in your greenfield project.
I don't even think it remained very familiar, aside from a taskbar (which also changed in Win11) and desktop icons appearing when you install things via double-clicking (and double-click installing has optionally changed too, with the Microsoft Store; MSI installers are almost entirely gone these days, and totally different UIs pop up now). Even core things that people definitely use, like the uninstallation and settings UIs, have changed completely for the worse. Windows has also changed a lot of its core UI over the years: the taskbar, the clock, the start menu, etc. I guess one thing you could say is that it was a gradual change over many versions, but every time, people hate it. Really, what Linux should have done is what Windows did with WSL: offer a built-in compatibility layer so you can install Windows apps on Linux, perhaps prompting you to enter a Windows license, and then launch those apps in a VM, even per window/app.
Assuming that the point of comparison is Windows (since it's a rough XP/7 analogue), any behavior, pattern, or convention that differs from what a long-time Windows user would expect counts, including things that some might write off as insignificant. In particular, anything relating to the user's muscle memory (such as key shortcuts, menu item positions, etc.) needs to match.
The DE needs to be as close to a drop-in replacement as possible while remaining legally distinct. The less the user needs to relearn the better.
For example, practically every text box in practically every Linux system handles ctrl+backspace by deleting a word. This clashes with a Windows user's expectation that ctrl+backspace deletes a word in some system applications while inserting a small square character in others.
> Familiarity is massively undersold in the Linux desktop adoption discussion
Totally agree. My first distro was Elementary because it was sold to me as Mac-like. It’s…sort of that, but it was enough for me to stick with it and now I’ve tried 3 other distros! Elementary is still in place in my n150 server. Bazzite for my big gaming machine. Messed with Mint briefly, wasn’t for me but I appreciated what it was.
What you're used to is definitely a huge part of it. But I do think 10-15 years ago Linux was easier to break than Windows, because it didn't make any effort to hide away the bits that let you break it. This was mainly a matter of taste. People who know what they're doing don't want to use some sanitised sandbox.
Linux was like a racing car. Raw and refined. Every control was directly connected to powerful mechanical components, like a throttle, clutch and steering rack. It did exactly what you told it to do, but being good at it required learning to finesse the controls. If you failed, the lessons were delivered swiftly and harshly: you would lose traction, spin and crash.
Windows was more like a daily driver. Things were "easier", but at the cost of having less raw control and power, like a clutch with a huge dual-mass flywheel. It's not like you can't break a daily driver, any experienced computer guy has surely broken Windows more than once, but you can just do more within the confines of the sandbox. Linux required you to leave.
It's different now. Distros like Ubuntu do almost everything most people want without having to leave the sandbox. The beautiful part about Linux, though, is it's still all there if you want it and nice to use if you get there, because it's built and designed by people who actually do that stuff. Nowadays I tend to agree it is mostly just what you're used to and what you've already learnt.
> When I was forced to use a Mac for work, I honestly considered looking for a different position because it was just that painful for me.
I share this aversion. I have a MacBook work sent me, sitting next to me right now, that I never use. Luckily I'm able to access the VPN via Linux and all the apps I need have web interfaces (Office 365).
I grew up using Windows but have been using Linux and Mac almost exclusively for the past fifteen years; the only exposure I get to Windows is when I have to play tech support for my parents [1].
I hated OS X when I first used it. A lot, actually. I didn't consider leaving my job over it (I couldn't have afforded to at the time even if I had wanted to), but I did think about giving that employer an ultimatum: buy me a computer with Windows or let me install Linux on the Macbook (this was 2012, so it had an Intel chip). I got let go from that job before I really got the chance (which itself is a whole strange story), but regardless, I really hated macOS at the time.
It wasn't until a few years later and a couple jobs after that I ended up growing to really like macOS, when Mavericks released, and a few years later, I actually ended up getting a job at Apple and I refuse to allow anyone to run Windows in my house.
My point is, I think people can actually learn and appreciate new platforms if they're given a chance.
I agree, people can learn and appreciate if given the chance. But they've got more important things to do, so changing OS is just a distraction.
I know, techies love to love or hate the OS. Here there are endless threads waxing lyrical about Windows, MacOS or a dozen Linux installs. But 99% of users couldn't care less.
It's kinda like cars. Petrol heads will talk cars for ages. Engine specs. What brand of oil. Gearbox ratios. Whereas I'm like 99% of people - I get in my car to go somewhere. Pretty much the only "feature" a car needs is to make me not worry about getting there.
So for 97% of people the "best" OS is the one they don't notice. The one that's invisible because they want to run a program, and it just runs.
The problem with switching my mom to Linux is not the OS. It's all the programs she uses. And while they might (or might not) be "equivalent" they're not the same. And I'm not interested in re-teaching her every bit of software, and she's not interested in relearning every bit of software.
She's not on "a journey" of software discovery. She has arrived. Changing now is just a waste of time she could be gardening or whatever.
The reason it'll never be the year for Linux Desktop is the same reason it's always been - it's not there already.
I mostly agree with you, though one of the few good things about Electron taking over the desktop means that an increasing number of programs are getting direct ports to Linux. A guy can dream at least.
> And I'm not interested in re-teaching her every bit of software, and she's not interested in relearning every bit of software.
I don't see Windows as having much of an edge there. Lots of things seem to change on Windows just for change's sake. I get so tired of the churn on Windows versions and finding how to disable the new crummy features. If you want to avoid relearning all the time, something simple like XFCE is going to be way better.
And Linux won't arbitrarily irrevocably brick your computer because of an automatic update. In my opinion, having your computer bricked because of an automatic update is a very large change to adapt to.
I feel the need to constantly reiterate this; if someone who works on Windows Update reads this, please consider a different career, because you are categorically terrible at your job. There are plenty of jobs out there that don't involve software engineering.
> And Linux won't arbitrarily irrevocably brick your computer because of an automatic update.
To the average user, it absolutely will. Unless they happen to run on particularly well-supported hardware, the days of console tinkering aren't gone, even on major distros.
What's fixable to the average Linux user and what's fixable to the average person (whose job is not to run Linux) are two very, very different things.
People want features, and they're willing to learn complicated UIs to get them. Software with hyper-simplified options has a very limited audience. Take his example: we have somebody who has somehow obtained a "weird" video file, yet whose understanding of video amounts to wanting it to be "normal" so they can play it. For such a person, there are two paths: become familiar enough with video formats that you understand exactly what you want, and correspondingly can manipulate a tool like Handbrake to get it, or stick to your walled-garden-padded-room reality where somebody else gives you a video file that works. Software that appeals to the weird purgatory in the middle necessarily has a very limited audience. In practice, this small audience is served by websites. Someone searches "convert x to y" and a website comes up that does the conversion. Knowing some specialized software that does that task (and only that one narrow task) puts you so far into the domain of the specialist that you can manage to figure out a specialist tool.
When we moved to Canada from the UK in 2010 there was no real way to access BBC content in a timely manner. My dad learned how to use a VPN and Handbrake to rip BBC iPlayer content and encode it for use on an Apple TV.
You had to do this if you wanted to access the content. The market did not provide any alternative.
Nowadays BBC have a BritBox subscription service. As someone in this middle space, my dad promptly bought a subscription and probably has never fired up Handbrake since.
> we have somebody who has somehow obtained a "weird" video file
Why are you arriving at the conclusion that this requires complex software, rather than just a simple UI that says "Drop video file here" and "Fix It" below? E.g., instead of your conclusion "stick to your walled-garden-padded-room reality where somebody else gives you a video file that works", another possibility is the simple UI I described? That seemed to me the point of the post.
The issue is that downloading software, for most people, implies an investment in the task the software does that is unlikely to be paid off if it only does a single simple task. If I'm going out of my way to download something, then I'm probably willing to learn a few knobs that give me more control. Hence why I suggested that such a person would rather use a website.
This is really just my read for why this sort of software isn't more common. Go ahead and make it, and if it ends up being popular I'll look the fool.
> an investment in the task the software does that is unlikely to be paid off if it only does a single simple task
I don't think that's true at all. The tool linked here is exactly the kind of utility that does one single task and that people are happy to download. Most people use software to solve a problem, not to play around with it and figure out if they have a use for it.
> Free audio editing software that requires hours of learning to be useful for simple tasks.
To be fair, the Audacity UX designer made a massive video about the next UX redesign and how he tried to get rid of "modes" and the "Audacity says no" problem:
So this problem should get better in the future. Good UX (doesn't necessarily have to have a flashy UI, but just a good UX) in free software is often lacking or an afterthought.
You're making an application for yourself, and somewhere down the pipeline you decide it could benefit others, so you make it open source.
People growl at you: "It's ugly UX but nice features", when it was originally designed for your own tastes. Later, people growl at you for "not having X feature, but nice UX".
Your own personal design isn't one-size-fits-all, and designing mocks takes effort. Mental strain and stress; pleasing folks is hard. You now continue developing and redesign the foundations.
A theming engine, you think. This becomes top priority, since integrating one becomes a PITA when trying to couple it with future features later.
That itself becomes a black hole of hows and schematics. So now you're forever doomed to creating something you never wanted, for people who will probably never use it. This causes your project to fail, but at least you have multiple revisions of the theming engine. Or you strike it lucky and gain a volunteer.
The problem with the new Audacity isn't the new version, it's that it replaces the old version. If the new version came out but it was called "DARing" and Audacity continued to be the thing we have now, people might question the name but no other eyes would be batted.
Pre-emptive anti-snark: yes, the old version will still exist... if you can dig up the right github commit and still make it compile in 2030.
Well, Tantacrul did answer that objection: it just shows you a popup dialog on first start: "which theme do you want" (colorful or colorless, light / dark) and "which experience do you want" (classic / new). So if you pick the "colorless, light, classic" option, it's going to look pretty much like the current Audacity, except that they moved from wxWidgets to Qt.
The "modal disruption" argument is misguided: he cites as the challenge a very poor implementation in an MS app where the modes were barely visible!!! That's not proof that modes are bad, just a statement that invisible information makes it hard for users to adapt. Brushes (another mode he cites as great) are great precisely because their state is immediately visible in your focus area: your primary pointer changes.
Now he got rid of the modes by adding handles and border actions - so 1) wasted some space that could be used for information 2) required more precision from the users because now to do the action you must target a tiny handle/border area 3) same, but for other actions as now you have to avoid those extra areas to do other tasks.
While this might be fine for casual users as it's more visible, the proper way out is, of course,... MODES and better ones! Let the default be some more casual mode with your handles, but then let users who want more ergonomics use a keybind to allow moving the audio segment by pressing anywhere in that segment, not just in the tiny handle at the top. And then you could also add all those handles to visually indicate that now segments are movable or turn your pointer into a holding hand etc.
Same thing in the example - instead of creating a whole new separate app with a button you could have a "1-button magicbrake" mode in handbrake
Having actually used Audacity, the modes were horrid and not at all intuitive to use, and everything demonstrated in the video looked like a vast improvement (aside from the logo). I am failing to see how adding handles wastes space that could be used for any extra information, especially when the tradeoff is an incredible degree of customisation for my UI. In terms of precision, they're working on accessibility issues, but I'm not sure how this change is any different from any other UI.
> I am failing to see how adding handles wastes space that could be used for any extra information
What is there to see? You add a bar that takes space. That space can be taken up by something useful. Just like you have apps that hide app title bar and app menus so you can have more space for your precious content. This is especially useful for high-info-density apps like these audio/video/photo authoring ones.
Note how tiny those handles are in the video, why do you think that is?
> tradeoff is an incredible degree of customisation
You don't have that tradeoff, neither of the 2 solutions are anywhere close to "incredible customization", so you can pick either without it.
> In terms of precision, they're working on accessibility issues
Working towards what magic solution?
> but I'm not sure how this change is any different from any other UI.
why does it have to be special? Just a bog standard degradation common to any UI (re)design, nothing special about it.
> the modes were horrid
Of course they were. Just like they were horrid in that MS Paint app the dev worked on before. But you can make any UI primitive horrid, even buttons, that's no reason to remove them, but to improve them!
1. Free software is developed for the developer's own needs and developers are going to be power users.
2. The cost to expose options is low so from the developer's perspective it's low effort to add high value (perceiving the options as valuable).
3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user and does like the options. Installing it for family and friends doesn't work.
It takes a lot of time and energy to refine and maintain a minimalistic interface. You are intentionally narrowing the audience. If you are an open source developer with limited time you probably aren't going to invest in that.
That’s one of the great things about the approach demonstrated in the post. The developers of Handbrake don’t need to invest any time or energy in a minimalist interface. They can continue to maintain their feature-rich software exactly as it is. Meanwhile, there is also a simple, easy front end available for people who need or want it.
> 4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user and does like the options. Installing it for family and friends doesn't work.
I have seen many comments by lay people about Sonobus [0] being superb at what it does and impressive for being 100% free. That's a niche case that, if it were implemented in Ardour, would run into the same problem the OP describes.
However, I can't see where the problem of FOSS UX scaring normal people is. Someone getting a .h264 and a .wav file out of a video recording isn't normal, after all. There are plenty of converters on the web; I don't know if they run ffmpeg on their servers, but I wouldn't be surprised. The problem lies in the whole digital infrastructure running on FOSS without returning anything back. Power-user software shouldn't simplify stuff. Tech literacy can hopefully be a thing, and quickly learning how to import and export a file in a complex piece of software feels better than installing 5 different limited programs over the years as your demands grow.
> 1. Free software is developed for the developer's own needs and developers are going to be power users
* Free software which gains popularity is developed for the needs of many people - the users who make requests and complaints and the developers.
* Developers who write for a larger audience naturally think of more users' needs. It's true that they typically cater more to making features available than to simplicity of the UI and ease of UX.
> 2. The cost etc.
Agreed!
> 3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
The developer typically knows what the popular use cases would be. Like with the handbrake example. They also pretty much know how newbie users like simplified workflows and hand-holding - but it's often a lot of hassle to create the simplified-with-semi-hidden-advanced-mode interface.
> 4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user
Are people who install, say, the Chrome browser on their PC to be considered power users? They downloaded and installed it themselves, after all... No, I believe you're creating a false dichotomy. Some users will never install anything; some users might install common software they've heard about from friends; and some might actively look for software to install, even though they don't know much about it or about how to operate the apps and OS facilities they already have. ...And all of these are mostly non-power-users.
> It’s a bit like obscuring the less-used functions on a TV remote with tape. The functions still exist if you need them, but you’re not required to contend with them just to turn the TV on.
For telling software devs to embrace traditional design wisdom, using TV remotes is an interesting example - cause aside from the commonly used functionality people actually care about (channels, volume, on/off, maybe subtitles/audio language) the rest should just be hidden under a menu and the fact that this isn't the case demonstrates bad design.
It's probably some legacy garbage, along the lines of everyone having an idea for what a TV remote is "supposed" to look like and therefore the manufacturers putting on buttons in plain view that will never get used and that you'd sometimes need the manual to even understand.
At the same time, it might also be possible that the FOSS software that's made for power users or even just people with needs that are slightly more complex than the baseline is never going to be suited for a casual user - for example, dumbing down Handbrake and hiding functionality power users actually do use under a bunch of menus would be really annoying for them and would slow them down.
You can try to add "simple" and "advanced" views of your UI, but that's the real disconnect here - different users. Building simplified versions with sane defaults seems nice for when there is a userbase that needs it.
Remotes are fine. Except the modern ones that have a touchpad and, like, 8 buttons, four of which are ads.
People can handle many buttons just fine. Even one year old kids don't have a problem with this, which becomes apparent if you ever watch a small child play. The only people who have a problem here are UX designers high on paternalistic approach to users.
If handbrake scares them, don’t you dare to demonstrate how to use ffmpeg. I remember when I used handbrake for the first time and thought “wow, it’s much more convenient than struggling with ffmpeg”.
I think GUI tools lend themselves more to being able to discover functionality intuitively without needing to look anything up or read a manual, and especially so if you’re coming back to a task you haven’t done in a while. With CLI I constantly have to google or ask an LLM about commands I’ve done many times, whereas with a gui if I do it once I can more easily find my way the next time. Anyway both have their place
> I think GUI tools lend themselves more to being able to discover functionality intuitively without needing to look anything up or read a manual
Well, there are different issues.
Reading a manual is the best you can do, theoretically. But Linux CLI tools have terrible manuals.
I read over the ssh man page multiple times looking for functionality that was available. But the man page failed to make that clear. I had to learn about it from random tutorials instead.
I've been reading lvm documentation recently and it shows some bizarre patterns. Stuff like "for more on this see [related man page]", where [related man page] doesn't have any "more on this". Or, here's what happens if you try to get CLI help:
1. You say `pvs --help`, and get a summary of what flags you can provide to the tool. The big one is -o, documented as `[ -o|--options String ]`. The String defines the information you want. All you have to do is provide the right "options" and you're good. What are they? Well, the --help output ends with this: "Use --longhelp to show all options and advanced commands."
2. Invoke --longhelp and you get nothing about options or advanced commands, although you do get some documentation about the syntax of referring to volumes.
3. Check the man page, and the options aren't there either. Buried inside the documentation for -o is the following sentence: "Use -o help to view the list of all available fields."
4. Back to the command line. `pvs -o help` actually will provide the relevant documentation.
Reading a manual would be fine... if it actually contained the information it was supposed to, arranged in some kind of logically-organized structure. Instead, information on any given topic is spread out across several different types of documentation, with broken cross-references and suggestions that you should try doing the wrong thing.
I'm picking on man pages here, but actually Microsoft's official documentation for their various .NET stuff has the same problem at least as badly.
We're going full-circle, because LLMs are amazing for producing just the right incantation of arcane command-line tools. I was struggling to decrypt a file the other day and it whipped me up exactly the right openssl command to get it done.
From which I was able to then say, "Can I have the equivalent source code?", and it did that too, from which I was able to spot my mistake in my original attempt (the KDF was using MD5, not SHA).
I'm willing to bet that LLMs are also just as good at coming up with the right ffmpeg or imagemagick commands with just a vague notion of what is wanted.
> I'm willing to bet that LLMs are also just as good at coming up with the right ffmpeg or imagemagick commands with just a vague notion of what is wanted.
They are. I've only used ffmpeg via LLM, and it's easy to get the LLM to make the right incantation as part of a multi-step workflow.
My own lack of understanding of video formats is still a problem, but getting ffmpeg to do the right thing only takes a vague notion.
This is one of the things LLMs shine at.
For double-checking the command explanations, I ask for commands to grep the relevant sections from the manual instead of relying on the LLM output blindly.
Come on. "type ffmpeg, then hyphen i then the input filename then the output filename". I would've understood this when I was 8. Because I was super smart? No, because I was making a genuine effort.
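Spelled out as a sketch (input.mov and output.mp4 are placeholder names; the real call is left commented since ffmpeg may not be installed):

```shell
# the whole "hard mode" incantation:
#   ffmpeg -i input.mov output.mp4
# -i marks the input file; the output container is inferred from the extension.
cmd="ffmpeg -i input.mov output.mp4"
printf '%s\n' "$cmd"
```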
The portion you've overlooked is there is an entire population of users out there who have never seen, nor used, a command line, and telling them to "just type this out" ignores all the background command line knowledge necessary to successfully "just type this out":
1) They have to know how to get to a command line somewhere/how (most of this group of users would be stymied right here and get no further along);
2) They now have to change the current directory of their CLI that they did get open to the location in their filesystem where the video is actually stored (for the tiny sliver who get past #1 above, this will stymie most of them, as they have no idea exactly where on disk their "Downloads" [or other meta-directory item] is actually located);
3) For the very few who actually get to this step, unless they already have ffmpeg installed on their PATH, they will get a command not found error after typing the command, ending their progress unless they now go and install ffmpeg;
4) For the very very few who would make it here, almost all of them will now have to accurately type out every character in "a-really_big_filename with spaces .mov", as they will not know anything about filename completion to let the shell do this for them. And if the filename does have spaces, and many will, they now need to somehow know 4a) that they have to escape the spaces and 4b) how to go about escaping the spaces, or they will instead get some ffmpeg error (hopefully just 'file not found', but with the extra parameters that unescaped spaces will create, it might just be a variant of "unknown option switch" error instead).
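A shell sketch of points 3 and 4, with the actual ffmpeg invocations left commented out since the tool may not be installed:

```shell
# point 3: is ffmpeg even on the PATH? ('command -v' is the POSIX way to check)
if command -v ffmpeg >/dev/null 2>&1; then found=yes; else found=no; fi
echo "ffmpeg on PATH: $found"

# point 4: spaces in filenames need quoting; double quotes are the easy fix
f="a-really_big_filename with spaces .mov"
# quoted, the shell passes one argument:   ffmpeg -i "$f" output.mp4
# unquoted, it word-splits into several:   ffmpeg -i $f output.mp4
set -- $f          # simulate the unquoted case via word splitting
n=$#
echo "unquoted, the filename becomes $n separate arguments"
```

The unquoted form is exactly what produces the "unknown option" or "file not found" errors described above, because ffmpeg receives each word as its own argument.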
They are using text inputs where you press enter to send stuff daily. Most of the hurdle is just overcoming the preconception that a black input window means hard mode.
They can right-click in the folder view of their OS file viewer. On Windows they can also just type the command into the path bar.
When you tell them the command, you could also just install it. Also you could just tell them to type the name of the app 'ffmpeg' into the OS store and press install. They do this on their phone all the time.
Well, you're cheating a bit here. You're basically assuming the user has never seen a text prompt before. Which is a good assumption.
But, if we assume the user has never seen a graphical application before, then likely all GUI tools will be useless too. What is clicking? What does this X do? What's a desktop again? I don't understand, why do I need 1 million pixels to change an mp3 to an avi? What does that window looking thing in the corner do? Oh no, I pressed the little rectangle at the top right and now it's gone, it disappeared. No not the one with the X, I think it was the other one.
Pretty much all computer use secretly relies on hundreds if not thousands of completely arbitrary decisions and bits of functionality you just have to know. Of all of that, CLI tools rely on the fewest assumptions by their nature: they're low fidelity, forced to be simple.
The difference is a lot of "computer education" (as opposed to computing education most in this forum have) has happened with GUIs. "Simple" CLI tools doesn't mean they're understandable or even user-friendly.
Heck, even computing education (and the profession even!) has been propped up by GUIs. After my first year in CS, there were like only three to five of us in a section of forty to fifty who could compile Java from the command line, who would dare edit PATH variables. I'm pretty sure that number didn't improve by much when we graduated. A lot of professionals wouldn't touch a CLI either. I'm not saying they are bad programmers but fact of the matter is there are competent professional programmers who pretty much just expect a working machine handed to them by IT and then expect DevOps to fix Jenkins when it's borked out.
Remember: HN isn't all programmers. There are more out there.
> But, if we assume the user has never seen a graphical application before, then likely all GUI tools will be useless too.
We don't even need to assume, we just need to look at history. GUIs came with a huge amount of educational campaigning behind them, be it corporate (e.g., ads/training programs that teach users how to use their products) or even government campaigns (e.g., computer literacy classes, computer curriculum integrated at school). That's of course followed by man-years upon man-years of usability studies and the bigger vendors keeping consistent GUI metaphors across their products.
Before all of this, users did ask the questions that you enumerated and certain demographics still do to this day.
> Of all of that, CLI tools rely on some of the least amount of assumptions by their nature - they're low fidelity, forced to be simple.
"Everything should be made simple, but not simpler." Has it occurred to you that maybe CLI tools assume too little?
To add on to this, there's no standardized way of indicating what needs to be typed out and what needs to be replaced. `foo --bar <replace me>` might be a good example command in a README, but I had to help someone the other day when they ran `foo --bar <test.txt>`, not realizing they should have replaced the < and > as well as just the text.
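A sketch of that confusion (`foo` is the hypothetical tool from the README example above):

```shell
# README convention: angle brackets mark a placeholder, not literal text.
#   documented:   foo --bar <input-file>
#   intended:     foo --bar report.txt
# Typed literally, the shell parses < and > as redirections, so
#   foo --bar <test.txt>
# tries to read stdin from test.txt and then chokes on the dangling '>'.
placeholder='<input-file>'
actual='report.txt'
echo "replace $placeholder (brackets included) with $actual"
```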
This describes me somewhat. I use FEA software and only recently started using it to execute jobs in CLI. I still trip over changing directories. Fortunately notepad++ has an option to open CLI with the filepath of the currently open file. I also didn't know right-click is paste in CLI. Don't use ctrl+c accidentally. But ctrl+v does work in powershell (sometimes?). "Error, command not found" is puzzling to me. Where does the software need to live relative to the directory I am using? This is all still very foreign to me, and working in CLI feels like flipping light switches in a dark room.
To answer your last question, on your operating system there is something called “PATH”. It is a user- or systemwide variable that dictates where to look for programs. It basically is a list of directories, often separated by “:”
Further reading: https://www.java.com/en/download/help/path.html (this may have Java references but still applies)
The GP here appears to be on Windows, given their reference to PowerShell. And on Windows, the path separator is ";", not ":".
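A rough sketch for inspecting PATH from a POSIX-ish shell; the uname-based separator detection is a heuristic for MSYS/Cygwin-style environments on Windows:

```shell
# PATH is a list of directories searched, in order, for commands.
# The separator differs: ':' on Unix, ';' on Windows.
case "$(uname -s 2>/dev/null)" in
  MINGW*|MSYS*|CYGWIN*) sep=';' ;;   # Windows-ish shell environments
  *)                    sep=':' ;;
esac
n=$(printf '%s\n' "$PATH" | tr "$sep" '\n' | grep -c .)
echo "PATH contains $n entries (separator '$sep')"
```

This is also why step 3 above fails for most beginners: the "command not found" error just means none of those directories contains the program.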
One of the things I've noticed is that people trying to help the true beginners vastly overestimate their skill level, and when you get a couple of people all trying to help, each of them is making a completely different set of suggestions which doesn't end up helpful at all. Recently, I was helping somebody who was struggling with trying to compile and link against a C++ library on Windows, and the second person to suggest something went full-bore down the "just install and use a Linux VM cause I don't have time to help you do anything on Windows."
The reality is that we've been infantilizing users for far too long. The belief that people can't handle fundamental concepts is misguided and primarily serves to benefit abusive tech companies.
Two decades ago, users understood what "C:\Documents and Settings\username\My Documents" meant and navigated those paths easily. Yet, we decided they were too "stupid" to deal with files and file paths, hiding them away. This conveniently locked users into proprietary platforms. Your point #2 reflects a lie we've collectively accepted as reality. Sadly, too many people now can’t even imagine that a straightforward way to exchange data among different software once existed, but that's a situation we're deliberately perpetuating.
This needs to change. Users deserve the opportunity to learn and engage with their tools rather than being treated as incapable. It’s time we started empowering users for a change.
This is an interesting position because that's only simple if you already know it. From the perspective of discoverability, it's literally the worst possible UI, because a string of that length has, say, 30^30 possible combinations, among which only one will produce the desired effect, and a bash prompt gives you no indication of how to arrive at that string.
I actually think ffmpeg’s UI is simpler than Handbrake for those at all acquainted with the command line (i.e., for those who understand the concept of text-is-everything-everything-is-text). Handbrake shows you everything you can possibly fiddle with whether or not you plan on fiddling with it. Meanwhile ffmpeg hides everything, period, and you ask for specific features by typing them out. It's not great for discovery but once you get the hang of it, it is incredibly precise. One could imagine taking someone for whom Handbrake was too much and showing them “look, you just type `ffmpeg -i`, the input file, and the output file, and it does what you want”. I imagine for many people this would be a perfectly lovely interface.
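For what it's worth, that minimal invocation can be sketched like so (file names are hypothetical; ffmpeg infers the output container and default codecs from the extension):

```shell
# Hypothetical input file; ffmpeg picks default codecs for ".mp4".
input="talk.mkv"
output="${input%.*}.mp4"            # talk.mp4
echo ffmpeg -i "$input" "$output"   # remove 'echo' to actually run it
```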
FFmpeg's command line is practically a programming language.
Someone who only wants to convert from one format to another, and isn't accustomed to CLIs, is far better served by "drag the file here -> type an output filename and extension in the text box".
The problem (and the reason both FFmpeg and Handbrake exist) is that tons of people "only" want to do two or three specific tasks, all in the same general wheelhouse, but with terrible overlap.
Yes. It's been a few years since I regularly used Handbrake, but I remember thinking of it as very simple, especially with its presets-based workflow. I was used to stuff like various CLI tools, mkvmerge and its GUI, and avidemux at that time.
It struck me as a weird example in the OP because I don't really think of Handbrake as a power user tool.
Handbrake's UI is in the uncanny valley for me -- too complicated for use by laymen, and way too limiting for use by people who know what they're doing...
A "normal person" is just someone whose time and mental energy are focused on something other than the niche task your app is aiming to solve. With enough time and focus, anyone can figure out any interface. But for many, something which requires a smaller investment to achieve the results they need is preferable.
Also, even the most arcane and convoluted interfaces become usable with repetition. Normal people learn the most bureaucratic business workflows and fly through them if that is their job. Then if you dare to "improve" any aspect of it you will hear them complain that you "broke" their system.
Was he able to use it correctly, though, to digitize video with exactly the correct settings so that no notable loss of quality was introduced? How long did it take him to randomly test settings?
Actually I think this is a killer use case for local LLMs. We could finally get back to asking the computer to do something without having to learn how to string 14 different commands together to do it.
The last thing we want in a user-friendly interface is nondeterminism. A procedure that works today must work tomorrow if it looks like you can repeat it. LLMs can't be the answer to this. And if you go to the lengths of making the LLM deterministic, with tests and all, you might as well code the thing once and for all and not ship the local LLM to the end user at all.
Sorry, I see how my post lacked sufficient clarity.
The idea behind a cheap UI is not constant change, but that you have a shared engine and "app" per activity.
The particular workflow/UI doesn't ever need to change; it's more of an app/brand per activity for non-power users.
This is similar to how some apps historically (Lotus Notes springs to mind, very roughly) were a single app but had an email interface/icon to click, or contacts, or calendar: all one underlying app, but different UI entry points.
Using ffmpeg to convert one file to another remains probably my main use of general LLM web searches. This isn't to say it does a good job with that, but it's still ahead of me.
Just like with regexes I've yet to get a wrong result by doing "give me an ffmpeg command that does this and this" with an LLM, with Handbrake I'm never quite sure what I'm doing, too many checkboxes and dropdowns with default values already filled in.
> The problem is that everyone wants a different 20% of the functionality.
I'm not disagreeing with your basic take, but I think this part is a little more subtle.
I'd argue that 80% of users (by raw user count) do want roughly the same 20% of functionality, most of the time.
The problem in FOSS is that the average user in the FOSS ecosystem is not remotely close to the profile of that 80%. The average FOSS user is part of the 1% of power users. They actively want something different and don't even understand the mindset of the other 80% of users.
When someone comes along to a FOSS project and honestly tries to rebuild it for the 80% of users, they often end up getting a lot of hate from the established FOSS community because they just have totally different needs. It's like they don't even speak the same language.
There's a good report/study about the complexity of Microsoft Word floating around somewhere.
It was something like:
- almost everybody only uses about 20% of the features of Word
- everybody's 20% is different, but
- ~80% of the 20% is common to most users.
- on the other hand, the remaining 20% of the 20% is widely distributed and covers basically all of the product.
So if you made a version of Word with 16% of its feature set you would almost make everybody happy. But really, nobody would be happy. There's no small feature set that makes most people happy.
Kind of like how the author likely knows about that report and wanted to make a blog post around it without mentioning or citing the report itself. It seems like it, but 80/20 patterns can be found in lots of places, just like 60/40 can.
Yeah but MS Word is also designed with the guidance of an army of accountants and corporate shareholders. Your study plays into that, but there's a much bigger picture when you talk about analyzing how any product came to be that has MS as a prefix.
Resources or the care, tbh. FOSS is a big umbrella and a lot of it simply isn't meant for "customers". Some FOSS apps clearly are trying to build a user base, in which case yeah the points this post makes are worth thinking about.
But many other projects, perhaps the majority, that is not their goal. By devs for devs, and I don't think there is anything wrong with that.
Pleasing customers is incredibly difficult and a never-ending treadmill. If it's not the goal then it's not a failure.
For a lot of use cases there is a strong 80% of functionality. E.g., for Handbrake, 80% of the time I am reducing the size of video screen grabs from my computer or phone. I don't need any resolution change, etc.
There are other times I want cropping or something similar, but it's really only 10-30% of the time. If people want to have a more custom workflow they can use an advanced UI
> tends to require a tight feedback loop between testers, designers, implementers, and users
Some FOSS projects attempt something like this, but it can become a self-reinforcing feedback loop: When you're only testing on current users, you're selecting for people who already use the software. People who already use the software were not scared away by the interface. So the current users tend to prefer the current interface.
Big software companies have the resources to gather (and pay) people for user studies to see what works and what does not for people who haven't seen the software before, or at least don't have any allegiances. If you only ever get feedback from people who have been using the software for a decade, they're going to tell you the UI must not change because they know exactly how to use it by now.
You don't need two different versions of the software, one that is easy and one that is powerful. You can have one version that is both easy and powerful. Key concepts here are (1) progressive disclosure and (2) constraints.
Progressive disclosure can be intensely annoying to actual power users.
Definitionally, it means you're hiding (non-disclosing) features behind at least 1 secondary screen. Usually, it means hiding features behind several layers of disclosures.
Making a very simple product more powerful via progressive disclosure can be a good way to give more power to non-power users.
Making a powerful product "simpler" via progressive disclosure can annoy the hell out of power users who already use the product.
Just add an option for "advanced mode" that if clicked toggles to "basic mode".
Power users are going to be looking for advanced features and only have to click it once. People who can barely read and are scared by anything advanced will get the interface they can use best the first time they open the app.
Making the progressive version is very difficult. Where the easy version and the powerful version can each please their own audience, the progressive version can often disappoint both, despite taking much more effort.
In my personal experience, you're lucky if free software has the budget (time or money) to get to easy. There's very little free software that makes it to progressive.
Relevant Steve Jobs quote: "Simple can be harder than complex: you have to work hard to get your thinking clean to make it simple."
So yes, it is hard to make the simple version. You have to have a very good understanding of what the user wants out of your product. Until you have this clarity, every feature seems important. Once you have this clarity you understand what the important features are. You make those features more prominent by giving them prime real estate, then tuck away the less important features in a less visible place. Simple things should be simple. Complex things only need to be possible.
It can get very complicated when you've built an audience where you have 10 segments that think their 10% of the use case is very important and you can only focus on a couple of segments at a time!
For me, it's the fact that I'm running code written by some random people. The code could be malicious. I don't know unless I audit it myself and I have no time for that. Remember the XZ Utils backdoor thing from a few months ago? Well how many backdoors are there in other FOSS stuff?
> The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
Free software can be audited for backdoors. Closed software cannot; its backdoors will stay there indefinitely.
The better example for this design principle is the big green button on copy machines. The copier has many functions, but 99% of users don't bother with 99% of them.
Oh man, I have literally done that to my parents’ remote controls. Multiple remotes, actually, because they still watch VHS tapes. But I have to admit it never occurred to me to do that to their software.
Logic Pro has a “masking tape” mode. If you don’t turn on “Complete Features” [0], you get a simplified version of the app that’s an easier stepping stone from GarageBand. Then check the box and bam, full access to 30 years’ accumulation of professional features in menus all over the place.
This has been a major UX problem for me when building my app [0] (an AI chat client for power users).
On the one hand, I want the UI to be simple and minimal enough so even non savvy users can use it.
But on the other hand, I do need to support more advanced features, with more configuration panels.
I learned that the solution in this case is “progressive disclosure”. By default, the app shows just enough UI elements to get the 90% case done. The advanced use cases take more effort: usually enabling them in Settings, an Inspector pane, etc. Power users can easily tinker around and tweak them, while non-savvy users can stick with the default, usual UX flow.
Though even with this technique, choosing what to show by default is still not easy. I learned that I need to be clear about my Ideal Customer Profile (ICP) and optimize for that profile only.
Abstraction needs to happen on a different layer. Because your power users are already dealing with complicated stuff and you don't want to make their lives even harder.
I know about 10 people in real life who use Handbrake, and all 10 of them use it to rip Blu-ray discs and store media files on their NAS. It will piss them off if you hide all the codec settings and replace the main screen with a giant "convert to Facebook compatible video" button.
This also suggests that in this case, it's more that the developers of Handbrake know their audience, rather than a real design failure. Maybe they'd prefer to keep the user base deliberately small?
As a UX guy, I'd like to note that the normal people aren't so great at knowing what they want, either.
I dread "Can you add a button..." Or worse, "Can you add a check box..." Not only does that make it worse for other users, it also makes it worse for you, even if you don't realize it yet.
What you need is to take their use case and imagine other ways to get there. Often that means completely turning their idea on its head. It can even help if you're not in the trenches with them, and can look at the bigger picture rather than the thing that is interfering with their current work flow.
Sometimes we really do just want a checkbox toggle though :D
Eg an app to prevent MacOS going to sleep. I want a checkbox to also stop an external display sleeping. I don't need my entire usage of the app and my computer-feature desires analysed and interpreted in a way that would make a psychoanalyst look like a hack.
But yes, in a professional setting people do use "Can we add a button" to attempt to subvert your skillset and experience, to take control of the implementation, and to bypass solid analysis and development practices.
There are literally thousands of wrappers for ffmpeg (other examples: ImageMagick, Ghostscript) that do exactly that - e.g., all commercial and dozens of open source video converters. So there is no lack of simple software for people who know little about the problem they're trying to solve (e.g., playing a downloaded MKV their shitty preinstalled video player doesn't accept); the problem is rather one of knowing that open source software exists and how to find it. Googling or asking an LLM mostly presents you with software that costs money and is inferior to anything open source (and some malware).
Does it? I often ask ChatGPT such things and specifying I want free software options is enough (it often mentions which options are and aren’t free on its own).
The problem with why so many OSS/free software apps look bad can be demonstrated by the (still ongoing) backlash to Gnome 3+. It just gets exhausting defending every decision to death.
Sometimes projects need the spine to say "no, we're not doing that."
GNOME 3+ developers put themselves in the inevitable (and unenviable) position of defending every decision to death because they limited the user's ability to make many, many decisions that were possible in previous versions.
There's nothing wrong with an opinionated desktop environment or even an opinionated Linux distribution. But, prior to GNOME 3, the project was highly configurable. Now it is not.
When people start up new highly opinionated projects (e.g. crunchbang, Omarchy), the feedback is generally more positive because those who try it and stick with it are the ones who like the project's opinions. The people who don't like those opinions just stop using it. There isn't a large, established base of longstanding users who are invested in workflows, features, and options.
Ideally you'd want to add selectable options for users in a way that's sustainable long-term and not just panic-adding things all over the place because of user demands. That's how you get the Handbrake situation that OP article is complaining about.
Gnome 3 was a big update, and adding options, which does happen, is not free. There were changes between Gnome 2 and 3, and adding some options "back" from Gnome 2 really amounts to asking for that feature to be rewritten from scratch (not all the time, but a lot of the time).
That the Gnome team has different priorities from other DEs, one of them being "keep the design consistent and sustainable," is completely valid and preferred by many users like myself.
I think it wouldn't hurt if Handbrake had a simple UI like this by default, with an Expert button to get into the full UI. I like how VLC also has basic and expert modes. It's a nice idea IMO.
Focus only on the parts that are really needed, necessary, or most important; hide the currently unnecessary or secondary; or simply remove the truly unnecessary and secondary outright.
The title of this article isn't supported. It should be "Complicated software scares normal people". You can have simple and intuitive free software and complicated and unintuitive pay software.
Meanwhile, every time Gnome makes UI adjustments along these lines, there's an outcry that it's dumbed down, copying Apple, removing features, etc.
Well, Gnome tells people that they should just know keyboard shortcuts for everything - which is literally something only power users do. Their entire design ethos is in weird opposition to itself: it aims to be so simple and minimal that, in order to do basic things, you have to memorize keyboard shortcuts, because there is no visual interface for them.
Where do they tell people to use keyboard shortcuts? I've been using Gnome 3 since it came out and I haven't encountered situations where I could do things with the keyboard that I couldn't do easily with the mouse.
Yeah, and that's because the article's advice is bad.
It works exactly for TV remote controls. Or, rather, it worked before everybody had an HDMI player or smart TVs. It doesn't work for TV remotes now either.
Handbrake is a bit like TV remotes at the turn of the century. It's an exception even among free software, and absolutely no mainstream DE is like that.
They are actually, literally, removing features. That's not an opinion, that's what is actually happening, repeatedly.
Now, maybe you say good riddance. Fine. However, it is indisputable that now the desktop is slightly less capable. The software can do less stuff than before.
There is a massive amount of compromise in a UI. Adding features adds complexity. If you need a feature, you have to accept the complexity that goes with it, and generally you are happy to; if you don't need that complexity, you don't want it. The average person uses 5% of the features of their word processor - but there is very little overlap between any two random users, and each wants the other 95% they don't use hidden (or perhaps 90%, as there is another 5% they will need or think they will need). Gnome seems to be focusing on the 1% of features that are common to everyone, which means you can't get your 5%.
Well, the outcry is completely justified. Suppose a video conversion app really did have just a drop-target area and a "do it" button. It would be ridiculously bad. That kind of crutch is OK to install for illiterate users who don't know anything and won't learn anything - but:
1. Some day, those users think "Hey, I'm not happy with some setting, what do I do?" and they can do nothing.
2. The users who need more functionality can't get it - and feel like they have to constantly wrestle with the app, and that it constantly confounds and trips them up, even when they express their clear intents. It's not like the GNOME apps have a "simple mode" and an "advanced mode".
3. The GNOME apps don't even really go along those lines. You see, even non-savvy users enjoy consistency and clarity; and the GNOME people have made their icons inscrutable; take over the window manager's decorations to "do their own thing"; hide items behind a hamburger menu, as though you're on a mobile phone with no screen space; etc. So, even for the non-savvy users, the UX is kind of bad. And just think of things like the GTK file picker. Man, that's a little Pandora's box right there, for the newbie and power user alike.
> Well, the outcry is completely justified. Suppose a video conversion app really did have just a drop-target area and a "do it" button. It would be ridiculously bad. That kind of crutch is OK to install for illiterate users who don't know anything and won't learn anything
One could say the same about people who don't bother to learn ffmpeg CLI.
It's an entire desktop environment; it's not as simple as choosing between two different apps. Although people who make this complaint should probably just use KDE — maybe they've used Gnome for a long time and don't want to change.
I do kind of think the solution to this issue lies at the OS level. It should provide a high degree of UI and workflow standardization (via first party apps, libraries and guidelines). Obviously it's an incredibly high bar to meet for volunteer efforts, but the user experience starts at the OS level. Instead of even installing a program like "Handbrake" or "Magicbrake" the OS should have a program called "Video Converter" which does what it says on the tin. There should also be a small on-device model which can parse commands like: "Convert a video so it can play on facebook" and deep link into the Video Converter app with the proper settings. Application-level branding should also basically not exist, it's too much noise. The user should have complete control over theming and typography. There has to be a standard interaction paradigm like the classic menubar but updated for modern needs. We need a sane discoverable default shell language with commands that map to GUI functionality within apps, and the user should never be troubled with the eccentricities of 1970s teletype machines.
I'd argue most software scares normal people. They only learn because of a strong intrinsic motivation (connecting with other people/access to entertainment) or work requirements which come with mandatory trainings and IT support
I like the design pattern of a "basic mode" and an "advanced mode".
The "advanced mode" rarely actually covers all the needs of an advanced user (because software is never quite everything to everyone), but it's at least better at handling both types of users.
Not all free software has this problem... Mozilla and Thunderbird I've had my parents on for years. It's not a ton to learn, and they work fine.
Taking the case of Photoshop vs. Gimp - I don't think the problem is complexity, lol. It's having to relearn everything once you're used to photoshop. (Conversely, I've never shelled out for Adobe products, and now don't want to have to relearn how to edit images in photoshop or illustrator)
Let's do another one. Windows Media Player (or more modern - "Movies & TV"). Users want to click on a video file and have it play with no fuss. VLC and MPC work fine for that! If you can manage to hold onto the file associations. That's why Microsoft tries so hard to grab and maintain the file associations.
I could go on... I think the thesis of this article is right for some pieces of software, but not all. It's worth considering - "all models are wrong, but some are useful".
> Taking the case of Photoshop vs. Gimp - I don't think the problem is complexity, lol. It's having to relearn everything once you're used to photoshop. (Conversely, I've never shelled out for Adobe products, and now don't want to have to relearn how to edit images in photoshop or illustrator)
I don't think this comparison is really accurate, Adobe's suite is designed for professionals that are working in the program for hours daily (e.g., ~1000 hours annually for a creative professional). There are probably some power users of The GIMP that hit similar numbers, but Creative Cloud has ~35-40 million subscribers, these are entirely different programs for entirely different classes of users.
I think there is something deeper here: people have become scared of the unknown, therefore we need to hide things for them. But people don't have to be scared. In fact even for people who are using Handbrake comfortably, a lot of things Handbrake presents in its UI are probably unknown to them and can safely be ignored. The screenshot in the article shows that Handbrake analyzed the source video and reported it as 30 FPS, SDR, 8-bit 4:2:0, 1-1-1. I think less than a tenth of a percent of Handbrake users understand all of that. 30 FPS is reasonably understandable but 4:2:0 requires the user to understand chroma subsampling, a considerably more niche topic. And I have no idea what 1-1-1 is and I simply ignore it. My point is, when faced with unknown information and controls, why do people feel scared in the first place? Why can't they simply ignore the unknown and make sense of what they can understand? Is it because they worry that the part of the software they don't understand will damage their computer or delete all their files? Is it just the lack of computer literacy?
I do not readily empathize with people who are scared of software, because my generation grew up tinkering with software. I'd like to understand why people would become scared of software in the first place.
The world is a complicated place, and there is a veritable mountain of things a person could learn about nearly any subject. But sometimes I don't need or want to learn all those things - I just want to get one very specific task done. What I really appreciate is when an expert who has spent the time required to understand the nuances and tradeoffs can say "just do this."
When it comes to technology 'simple' just means that someone else made a bunch of decisions for me. If I want or need to make those decisions myself then I need more knobs.
In my comment above I specifically did not expect the user to learn and understand everything, just to have the ability to ignore it. Handbrake has good defaults, and the user would be successful if the only things they did were open the file and press the green button.
And scared is the word used by the original author in the title. I want to understand that emotion. I don't need someone to tell me we can't learn everything.
How do you gain the confidence that what you choose to ignore is safe to ignore?
Computer damage is one potential consequence on the extreme end. On the conservative end, the software might just not work the way you want and you waste your time. It’s a mental model you have to develop. Even as a technical power user though, I want to reduce the risk of wasting my time, or even confront the possibility that I might waste my time, if I don’t have to.
How do you know the software in the article will do what you want?
For handbrake you can pick a preset and see what happens. Or don't even do that: when you open it it'll make you pick a video file, then you can just jam the green start button and see if it gives you what you need. Very little time spent.
Right, you don't know if either program is the right thing just by looking at it. The reason you're uncertain isn't all those options handbrake shows. You have that uncertainty no matter what. You need the same confidence with or without options. So that problem, while real, isn't an argument against showing options.
And as far as time goes, it only takes a few seconds in either scenario. You hit go, you see the progress bar is moving, you check your file a few minutes later.
If the UI is forcing me to look at these options before pressing Go, it is a signal that someone thought they were important to consider before I pressed Go. This is the Gricean maxims of quantity and relation at work.
The decision to ignore this signal is a learned behavior that you and I have, is all I'm saying.
The average person doesn't even read error messages. They know how to ignore things and hit the button that goes forward just fine. If they choose not to try the program, that's different. They don't lack the skill. (A child might lack this skill but a child is curious enough to push on so it works out anyway.)
I don’t really understand what you’re arguing anymore. Is the average person afraid of the unknown or are they capable of ignoring things?
You seem comfortable with the idea that a child not having this learned skill. I don’t know why you don’t extend that empathy towards the inexperienced in general.
My interpretation was that you're implying a big fraction of adults don't have this skill, that a typical non-technical person likely doesn't have it. I'm saying nearly every adult does have it. So I have empathy for those that truly lack it, the 1% of adults, but that empathy doesn't extend to the rest that aren't suffering that issue.
It's complexity. Assuming binary flags, the number of different ways the tool might operate is O(2^n). If the tool isn't doing what you want, that's a gigantic search space for fixing it. Hiding options and picking sane defaults makes n smaller and exponentially reduces the search space.
People aren't afraid of doing 2^n stuff; it's just that we have a gut sense that it's going to take more time than it's worth. I'm down to try 10-100 things, but if it's going to be 100 million option combinations I have to tinker with, that's just not worth it.
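The gut sense is easy to make concrete (the flag counts are illustrative):

```shell
# n independent binary flags give 2^n possible configurations.
n=20
total=$((1 << n))                 # ways a 20-flag tool can behave
hidden=10                         # suppose sane defaults hide 10 flags
reduced=$((1 << (n - hidden)))    # remaining configurations
echo "$total -> $reduced"
```

Hiding half the flags doesn't halve the search space, it cuts it by a factor of a thousand — from about a million configurations down to 1024.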
That gets less true the more utility your software is expected to have.
When it comes to software intended for the general public it doesn't take bravery to decide that every user should only ever be allowed to do things exactly how you'd want them done. I might be more likely to attribute that to arrogance. Really, for something like converting audio/video I'd just see the inflexible software with few if any options as too limited for my needs (current, future, or both) and go looking for something more powerful.
It's better to not invest my time on software that is overly restrictive when more useful options are available, even if I don't need all of those options right now because it'll save me the trouble of having to hunt down something better later since I've already got what I need on my systems.
Like Alan Kay said about software: Simple things should be simple, complex things should be possible.
The thing is, this takes a lot of resources to get right. FOSS developers simply don't have the wherewithal - money, inclination or taste - to do this. So, by default, there are no simple things. Everything's complex, everything needs training. And this is okay, because the main users of FOSS software are others of a similar bent to the developers themselves.
The advice looks sensible, but not sure if it does more good than harm. I recall simplified user interfaces standing in the way, hiding (or simply not providing) useful knobs or information/logs. They are annoying both when using them directly as a "power user", and when less tech-savvy users approach you (as they still do with those annoyingly simplified interfaces), asking for help. Then you try to use that simplified interface, it does not work, and there is no practical way to debug or try workarounds, so you end up with an interface that even a power user cannot use. I think generally it is more useful to focus on properly working software, on documentation and informative logs, sufficient flexibility, and maybe then on UI convenience, but still not making advanced controls and verbose information completely inaccessible (as it seems to be in the provided examples).
Same problem though. Half of UX is knowing which features to include, and the other half is knowing where to put them.
Intuitive UX for the average non-nerd user is task-based. You start with the most common known goals, like sending someone money, or changing the contrast of a photo, and you put a nice big button or slider somewhere on the screen that either makes the goal happen directly or walks you through it step by step.
Professional tools are workbench-based. You get a huge list of tools scattered around the UI in various groups. Beginners don't know what most of the tools do, so they have to work out what the tools are for before they can start using them. Then, and only then, can they start using the tools in a goal-based way. Professionals already know the tradecraft, so they have the simpler - but still hard - "Which menu item does what I need?" problem.
Developer culture tends to be script-based. It's literally just lists of instructions made of cryptic combinations of words, letters, and weird punctuation characters. Beginners have to learn the words, the concepts behind them, and the associated underlying computer fundamentals at multiple levels - just to get started. And if you start with a goal - let's say you want a bot that posts on social media for you - the amount of learning if you're coming to it cold is beyond overwhelming.
FOSS has never understood this. Yes, in theory you can write your own almost anything and tinker with the source code. But the learning curve for most people is impossibly steep.
AI has some chance of bridging the gap. It's not reliable yet, but it's very obvious now that it has a chance to become a universal UI, creating custom code and control panels for specific personal goals, generating workbench UIs and explaining what the tools do if you need a more professional approach, and explaining core concepts and code structures if you want to work at that level.
And the very freedom they got with free software let them change it to suit their needs, which would have been impossible with proprietary or otherwise restricted software.
The open source UIs initially seem alien, complicated or obscure compared to similar closed-source Windows ones. The reason is that OSS projects are built by developers primarily FOR developers and not for regular users. The design principle of "don't surprise me" and other aesthetic and ergonomic ones are not met. Examples are Gimp and other content editors like Handbrake, Firefox vs Chrome (on mobile only), even IDEs.
BUT with time and a variable amount of effort a regular user can get accustomed to the new philosophy and be successful. Either by persistent use, by using different OSS apps in series, or by touching the command line. Happy user of Firefox, LibreOffice, Avidemux, Virt-manager (sic)
Is Firefox v chrome even relevant these days? I struggle to even think of shortcuts that aren't identical among the two browsers. Let alone UX and features.
I think the other big reason is availability of talent. FOSS is made by devs who usually are already well off and have time to contribute. You won't find as many artists or graphic designers with the same privilege, so if there are no designers on a project you get the bare basics.
This is useful for everyone not just non-techy types.
I can't help but compare this to sites like Shadertoy that let you develop with a simple coding interface on one half of the screen and the output on the other (as opposed to the regular complexity of setting up and using a dev environment):
Code goes here > {}
Press this button > []
Output here > ()
Which I think we need more of if we want to get kids into coding.
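The code/button/output loop above is small enough to sketch. Here's a toy Python version, purely illustrative: take the snippet from the "code" pane, run it, and capture whatever it prints for the "output" pane. (Real sites sandbox this heavily; `exec()` on untrusted input is unsafe.)

```python
import io
import contextlib

# Toy version of the "code goes here -> press run -> output here" loop:
# run a snippet and capture whatever it prints. Illustration only --
# exec() on untrusted input is unsafe; real tools sandbox this heavily.
def run_snippet(code: str) -> str:
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # fresh, empty globals for each run
    return buf.getvalue()

result = run_snippet("print(2 + 2)")  # returns "4\n"
```

Everything between typing the code and seeing the result is hidden, which is exactly the appeal for beginners.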
"I am new to GitHub and I have lots to say
I DONT GIVE A FUCK ABOUT THE FUCKING CODE! i just want to download this stupid fucking application and use it.
WHY IS THERE CODE??? MAKE A FUCKING .EXE FILE AND GIVE IT TO ME. these dumbfucks think that everyone is a developer and understands code. well i am not and i don't understand it. I only know to download and install applications. SO WHY THE FUCK IS THERE CODE? make an EXE file and give it to me. STUPID FUCKING SMELLY NERDS"
I know of one company that explicitly didn't make downloads available, to dissuade this kind of hard-to-support user from consuming their time without materially contributing anything.
I’ve been ripping old DVDs recently. I just want something that feels simple from Handbrake: a video file I can play on my Apple TV that has subtitles that work (not burned in!) with video and audio quality indistinguishable from playing the DVD (don’t scale the video size or mess with the frame rate!), at as small a file size as is practical. I’m prepared for the process to be slow.
I’ve been messing with settings and reading forum posts (probably from similarly qualified neophytes) for a day now and think I’ve got something that works - though I have a nagging suspicion the file size isn’t as small as it could be and the quality isn’t as good as it could be. And despite saving it as a preset, I for some reason have to manually stop the subtitles from being burned in for every new rip.
Surely what I want is what almost everyone wants‽ Is there a simple way to get it? (I think this is a rhetorical question but would love it not to be…)
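For what it's worth, the requirements above map fairly directly onto an ffmpeg invocation. This is a hedged sketch only (filenames, CRF value, and preset are placeholder assumptions, not a tested recipe), built as an argument list rather than a shell string:

```python
# Hypothetical sketch of an ffmpeg command matching the wish list above:
# re-encode video with CRF (quality-based size control), copy audio, and
# keep subtitles as separate soft tracks, never burned in. MKV is chosen
# because it can carry DVD bitmap subtitles as-is; playing MKV on an
# Apple TV would need an app that supports it. All values are placeholders.
def rip_args(src: str, dst: str, crf: int = 20) -> list:
    return [
        "ffmpeg", "-i", src,
        "-map", "0",                          # keep every stream from the source
        "-c:v", "libx264", "-crf", str(crf),  # quality target; higher = smaller file
        "-preset", "slow",                    # slower encode, smaller at same quality
        "-c:a", "copy",                       # don't re-encode the audio
        "-c:s", "copy",                       # subtitles stay as tracks, not burned in
        dst,
    ]

cmd = rip_args("title01.mkv", "title01_small.mkv")
```

Nothing here touches resolution or frame rate, so both pass through unchanged; tweaking `crf` up or down trades file size against quality.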
Completely agree, that's why I love old mac software. Things were easy enough to understand for the average user, but power user still get lots of features.
GNOME's libadwaita solves this beautifully. It's simple, nice looking, yet powerful. You could absolutely use it to make an ffmpeg front-end that's both fully featured and friendly to less technical users. But if your app can't, then another good option is to have a "simple mode" and "advanced mode".
And IMO, HandBrake is more complicated than CLI ffmpeg. It's really chaotic.
Although I wish Linux were easier to use -- and there are distros that aim for this -- I do agree that FOSS is mostly by nerds for nerds. But that doesn't prevent other people from making changes, which is exactly what the author did.
So I'd like to welcome the author to make more apps based on FOSS.
> Although I wish Linux were easier to use [ ... ]
We're getting there. I run Linux Mint with an XFCE desktop -- an intentionally minimal setup. The system performs automatic updates and the desktop layout/experience resembles older Windows desktops before Microsoft began "improving" things. No ads, no AI.
I'm by no means an end user, but in Linux I see incremental progress toward meeting the needs of that audience. And just in time too, now that Microsoft is more aggressively enshittifying Windows.
What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
> What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
That can be a problem with Linux but in my experience searching for Windows help is usually not good either.
> What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
Claude Code actually does this really well. I've used it to set up GNOME on my phone and fix all my problems without having to learn anything.
Yeah, I agree that the difference in usability between Linux and Windows is getting much smaller, now that MSFT is trashing Windows.
I do have a Linux box, and I only have complaints about small things. Dual screens work, VSCode works, Firefox works too. Not much to complain about for a personal dev box. The ability to just `apt install` a bunch of stuff and then start compiling is pretty nice.
But again, I'm pragmatic, so if I'm doing something Windows related, I'd definitely use my Windows box.
I wanted to write an article or short blog post about how Windows 10 menus and JavaScript-heavy sites increasingly tuck away important tools/buttons in little folds. This was many months ago.
I want to write it and title it "What the tuck?", because tuck refers exactly to the kind of hidden menus behind those so-called sleek and simple UIs for the 80% of users.
The problem is that it stupefies computing literacy, especially mobile web versions.
Perhaps not every casual web browser needs to sit at a desk to learn website navigation. Then again, they may never learn anything productive on their own.
Completely agree with the author. Would love most power tools to start off in "simple mode" so I could recommend them to friends/family, and have a toggle for advanced mode which shows everything to power users.
I think you can see this already with websites: there are dozens of sites like "convert video to MP4", "compress this or that". And I think they are just building a UI on top of open source tools.
The article complains there's too many old school Windows-type power user GUIs in the free software space. Most of which were not actually FOSS, but Freeware, or sometimes Shareware!
My criticism of Free Software is exactly the reverse. There isn't enough of that kind of stuff on Linux!
Though to be sure, the Mac category (It Has One Button) is even more underserved there, and I agree that there should be more! Heck, most of the stuff I've made for myself has one button. Do one thing and do it well! :)
I agree there isn't enough. Some programs are OK (especially command-line programs), some aren't as good as the actual good quality ones.
> Do one thing and do it well!
This does not necessarily mean that it would have only one button (or any buttons). Depending on what is being done, there may be many options for how it is done, although there might also be default settings. A command-line program might do it better, in that you only need to specify the file name; but if there are options (what options and how many there are depends on the program), then you can also specify those if the default settings are not suitable.
Over the years I've gotten really tired of this obsession with "normal people" and not just because I'm one of the so called power users. This is really part of a growing effort to hide the computer away as an implementation detail.
That's what "UX" is all about. "Scripting the users", minimizing and channeling their interactions within the system. Providing one button that does exactly what they want. No need to "scare" them with magical computer technology. No need for them to have access to any of it.
It's something that should be resisted, not encouraged. Otherwise you get generations of technologically illiterate people who don't know what a directory is. Most importantly, this is how corporations justify locking us out of our own devices.
> We are giving up our last rights and freedoms for “experiences,” for the questionable comfort of “natural interaction.” But there is no natural interaction, and there are no invisible computers, there only hidden ones.
> Every victory of experience design: a new product “telling the story,” or an interface meeting the “exact needs of the customer, without fuss or bother” widens the gap in between a person and a personal computer.
> The morning after “experience design:” interface-less, disposable hardware, personal hard disc shredders, primitive customization via mechanical means, rewiring, reassembling, making holes in hard disks, in order to delete, to logout, to “view offline.”
Most people don't need a computer (full feature power, full power of choice) to solve their task, as can be seen with smartphones, which are designed as appliances more or less.
I don't want most of consumer electronics to act like a computer, it is a deficiency for me.
I chose "dumb" Linux-based eBook reader instead of Android-based, because I want it to read books, full stop.
This quickly falls apart when you need to do stuff and be productive. Reading as a pastime is a different thing.
The problem is nobody makes this distinction for some reason. In my mind there's two types of software - the kind for doing things, and the kind for mostly consuming. As the wise Britney Spears once said, "there's only two types of people in the world: those that entertain, and the ones that observe"
It makes no sense for your CAD program you're building a company out from to be dumbed down.
Oh, this e-reader has lots of productivity features. You can highlight words (which are later stored in a separate folder), make bookmarks, easily translate words, use screen reader, etc.
I use it mostly for work and academic papers, not for amusement.
Most of the regular simple pdf viewers on the PC don't have this kind of productivity functionality in mind. They might have some, but in general they are not designed to work with read-only text.
Some people just like to eat food, they don't want to learn how to cook it. You or I may think that's a tragedy, but I don't think e.g a dentist has an obligation to become fluent in the things that I'm competent in.
I'm no dentist, I go to dentists. I let them work, and try not to be too annoying. I learn the minimum that I need to know to follow the directions that they deliberately make very simple for me.
This will result in generations of generally dentistry ignorant people, but I am not troubled by this.
As technologically competent people, one of our desires should be to help people maintain the ignorance level that they prefer, and at every level steer them to a good outcome. Let them manage their own time. If they want privacy and control, let's make sure they can have it, rather than lecturing them about it. My grandmother is in her 90s and she doesn't want people reading her emails, listening to her calls or tracking her face. She is not prepared to deal with more than a couple of buttons, and they should be large and hopefully have pictures on them that explain what they do. It's my job to square that circle.
Please assume I'm smarter than I actually am -- I will figure it out no problems. I like complex interfaces that allow me to do my task faster, especially if I'm using it often (e.g. for work).
As one of the main developers of Krita said, just being free isn't good enough, the software needs to be great.
I am in favour of simplified apps like this; maybe it can be a simple toggle switch in the top right corner between simple and advanced. Similar to that stupid new version of Outlook, where I have to constantly switch back to the old version.
I struggle to link the title with the article. Aren't Handbrake and Magicbrake both free? There are plenty of free tools which are very simple to use.
In this particular case I'd just tell people to download and use VLC Player. But I get the point.
Software should find its own niche. We have ImageMagick and ffmpeg to deal with nearly all image/video functionality, but we still have a lot of one-click-to-finish tools.
I guess instead of a separate application, maybe some of these programs would benefit from having a 'dumb' mode where only the basic/most used functionality is available. E.g. when I run Gimp, I most often just use it to rescale the image, or cut a piece and insert it into a new image, and every time I have to look for the right options in the menu.
maybe there just isn't a solution? people don't ask for a hammer that magically assembles every piece of furniture. sometimes the user of the tool needs skills to use it. UI/UX only takes you so far.
A good product manager could make a big difference to many open source projects. Someone who has real knowledge of the problem space, who can define a clear vision of what problem is being solved for which user community and who can be judicious in weighing feature requests and developing roadmaps.
I'd love applications that would let me choose how advanced I want the UI to be. Kinda like Windows Calculator: a toggle between basic, advanced, and some common use cases.
For example, I'd love Gimp to have a Basic mode that would hide everything except the basic tools like Crop and Brush. Like basic mspaint.
the issue is real, but i'm not sure this solves it; in this case you end up with an overly specific solution that you can't really recommend to most people (and won't become widely known)
using the remote analogy, the taped versions are useful for (many!) specific people, but shipping the remote in that configuration makes no sense
i think normal people don't want to install an app for every specific task either
maybe a solution can look like a simple interface (with good defaults!!) but with an 'advanced mode' that gives you more options... though i can't say i've seen a good example of this, so it might be fundamentally flawed as well
Are we at the point yet where we can advise people to ask ChatGPT how to install something called "FFmpeg" and have it tell them what to copy-paste into an app called "Terminal"?
Most people can't comprehend that. "If it's available publicly online and has a readme, it DEFINITELY was created for me and for all other users, right?"
This is so common, to the point that it's FOSS misconception #1 for me. They can't get that a developer may write the software to solve only their specific problem, with no interest in support, feature contributions, or other improvements and use cases.
Yes, but those 80% all use a different subset of the 20% of features. So if you want to make them all happy, you need to implement 100% of the features.
I see the pattern so often. There is a "needlessly complicated" product. Someone thinks we can make it simpler, we rewrite it/refactor the UI. Super clean and everything. But user X really needs that one feature! Oh and maybe lets implement Y. A few years down the line you are back to having a "needlessly complicated" product.
If you think it could easily be done better, you don't understand the problem domain well enough yet. Real simplicity looks easy but is hard to achieve.
I feel like the author wants everything to be Apple-simplified. That all users should be dumbed down to on, off, go and stop. Ask ChatGPT for anything else.
I disagree for so many obvious reasons it's pointless to enumerate them.
We as a society need to get MORE capable, more critical, and improve our cognitive abilities; not the opposite.
I’m not sure I’d describe Apple products as simplified any more, take a look at the settings in iOS for example, it has grown in complexity with each release.
Yes, that's because Apple found out that the domain space is actually complex. Device configuration is complicated because devices are used in thousands of different permutations of environments and people.
The simplicity Apple had was always a mistake, an artifact of their hubris.
His notion of "normal people" are people who use MacOS:
> Normal people often struggle with converting video. ... the format will be weird. (Weird, broadly defined, is anything that won’t play in QuickTime or upload to Facebook.)"
Except normal people don't use MacOS and don't even know what QuickTime is. Including the US, the MacOS user share is apparently ~11%. Take the US out, and that drops to something like... 6%, I guess? And Macs are expensive - prohibitively expensive for people in most countries. So, normal people use Windows I'm afraid.
In fact, if you disregard the US, Linux user share seems to be around half of the Mac share, making the perspective of "normal people use Macs not FOSS" even sillier.
-----
PS - Yes, I know the quality of such statistics is low, if you can find better-quality user share analysis please post the link.
My Pinebook Pro with i3wm is really simple to use. You power it on, all it does is it asks for one of the LUKS passwords. If you miss, it will ask again. Then it's on.
You can't do anything wrong with it. There's no UI to fiddle with WiFi. It's all pre-configured to work automatically in the local WLAN (only; outside, all that's needed is to borrow someone's phone to look for the list of wifi nets in the area and type the name of the selected network into /etc/wpa_supplicant/wpa_supplicant.conf). But there's rarely any need to go out anyway, so this is almost never an issue.
There are no buttons to click, ANYWHERE. Windows don't have confusing colorful buttons in the header. You open the web browser by pressing Alt + [. It shows up immediately after about 5 seconds of loading time. So the user action <-> feedback loop is rather quick. You close it with Alt + Backspace (like deleting the last character when writing text, simple, everyone's first instinct when you want to revert last action)
The other shortcut that closes the UI picture is Alt + ]. That one opens the terminal window. You can type to the computer there, to tell it what you want. Which is usually poweroff, reboot, reboot -f (as in reboot faster). It's very simple and relatable. You don't click on your grandma to turn it off, after all. You tell it to turn off. Same here.
All in all, Alt + [ opens your day. Alt + ] gives you a way to end it. Closing the lid sometimes even suspends the notebook, so it discharges slightly more slowly in between.
It's glorious. My gf uses it this way and has no issues with it (anymore). I just don't understand why she doesn't want to migrate to Linux on her own notebook. Sad.
If only there was an easy way to fund all the Open Source programs you like and use, so the projects that struggle with it can put more focus into design.
Seems like a win-win, take my money solution, for some reason the market (and I guess that means investors) are not pursuing this as a consumer service?
Some TV remotes or air conditioner remotes now have a "boomer flap" which when engaged, hides 90% of all the buttons. The scanner software I use has something similar, novice mode and expert mode.
Dunno why people assume that FOSS developers are just dummies lacking insight but otherwise champing at the bit to provide the same refinement and same customer service experience as the "open source" projects that are really just loss leaders of some commercial entity.
In addition to this issue, I've also had good conversations with a business owner about why he chose a Windows architecture for his company. Paying money to the company created a situation where the company had a "skin-in-the-game" reason to offer support (especially back when he founded the company, because Microsoft was smaller at the time). He likes being able to trust that the people who build the architecture he relies on for his livelihood won't just get bored and wander off and will be responsive to specific concerns about the product, and he never had the perception that he could rely on that with free software.
While I agree that people generally feel better by getting something with little effort, I think that there is a longer-term disservice here.
Once upon a time, it used to be understood that repeated use of a tool would gradually make you better at it - while starting with the basics, you would gradually explore, try more features and gradually become a power user. Many applications would have a "tip of the day" mechanism that encouraged users to learn more each time. But then this "Don't Make me Think" book and mentality[0] started catching on, and we stopped expecting people to learn about the stuff that they're using daily.
We have a high percentage of "digital natives" kids, now reaching adulthood without knowing what a file is [1] or how to type on a keyboard [2]. Attention spans are falling rapidly, and even the median time in front of a particular screen before switching tasks is apparently down from 2.5 minutes in 2004 to 40 seconds in 2023 [3] (I shudder to think what it is now). We as a civilization have been gradually offloading all of our technical competency and agency onto software.
This is of course leading directly to agentic AI, where we (myself included) convince ourselves that the AI is allowing us to work at a higher level, deciding the 'what', while the computer takes care of the 'how' for us, but of course there's no clear delineation between the 'what' and 'how', there's just a ladder of abstraction, and as we offload more and more into software, the only 'what' we'll have left is "keep me fed and entertained".
We are rapidly rolling towards the world of Wall-E, and at this pace, we might live to see the day of AIs asking themselves "can humans think?".
Why are people bothered so much by the money-transfer winking, but not by these companies aiding and abetting a brutal and murderous regime engaged in a decades-long military occupation at first, and later in a genocide campaign?
When I used to be active on reddit I was following r/graphicdesign (me being a graphic designer) and one day someone asked a question about Inkscape.
Not 5 minutes after that, someone else in the comments went on a weird rant about how allegedly Inkscape and all FOSS were "communist" and "sucked" and capitalist proprietary stuff was "superior".
>> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
One of the truest things I've read on HN. I've also tried to visit this concept with a small free image app I made (https://gerry7.itch.io/cool-banana). Did it for myself really, but thought others might find it useful too. Fed up with too many options.
The disaster that is "modern UX" is serving no one.
Infantilizing computer users needs to stop.
Computer users hate it - everything changes all the time for the worse, everything gets hidden by more and more layers until it just goes away entirely and you're left with just having to suck it up.
"Normal people" don't even have computers anymore, some don't even have laptops, they have tablets and phones, and they don't use computer programs, they use "apps".
What we effectively get is:
- For current computer users: A downward spiral of everything sucking more with each new update.
- For potential new computer users: A decreasing incentive to use computers "Computers don't really seem to offer anything I can't do on my phone, and if I need a bigger screen I'll use my tablet with a BT keyboard"
- For the so-called "normal people" the article references (I believe the article is really both patronizing and infantilizing the average person), these are effectively people who don't want to use computers, they don't want to know how stuff works, what stuff is, or what stuff can become; they have a problem they cannot put into words and they want to not have the problem, because the moving images of the cat should be on the place with the red thing. - They use their phones, their tablets, and their apps; their meager and unmotivated desire to do something beyond what their little black mirror allows them is so weak that any obstacle, any, even the "just make it work" button, is going to be more effort than they're willing (not capable of, but willing) to spend.
Thing is, regardless of particular domain, doing something in any domain requires some set of understanding and knowledge of the stuff you're going to be working with. "No, I just want to edit video, I don't want to know what a codec is" well, the medium is a part of the fucking message! NOTHING you do where you work with anything at all allows you to work with your subject without any understanding at all of what makes up that subject.
You want to tell stories, but you don't want to learn how to speak, you want to write books, but you don't want to learn how to type, write or spell ? Yes, you can -dictate- it, which is, in effect, getting someone competent to do the thing for you.. You want to be a painter, but you don't care about canvas, brushes, techniques, or the differences between oil, acrylic and aquarelle, or colors or composition, just want to make picture look good? You go hire a fucking painter, you don't go whining about how painting is inherently harder than it ought to be and how it's elitist that they don't just sell a brush that makes a nice painting. (Well, it _IS_ elitist, most people would be perfectly satisfied with just ONE brush, and it should be as wide as the canvas, and it should be pre-soaked in BLUE color, come on, don't be so hard on those poor people, they just want to create something, they shouldn't have to deal with all your elitist artist crap!) yeah, buy a fucking poster!
I'm getting so sick and tired of this constant attack on the stuff _I_ use every day, the stuff _I_ live and breathe, and see it degenerated to satisfy people who don't care, and never will.. I'm pissed, because, _I_ like computers, I like computing, and I like to get to know how the stuff works, _ONCE_ and gain a deep knowledge of it, so it fits like an old glove, and I can find my way around, and then they go fuck it over, time and time again, because someone who does not want to, and never will want to, use computers, thinks it's too hard..
Yeah, I really enjoy _LISTENING_ to music, I couldn't produce a melody if my life depended on it (believe me, I've tried, and it's _NOT_ for lack of amazingly good software), it's because I suck at it, and I'm clearly not willing to invest what it takes to achieve that particular goal.. because, I like to listen to music, I am a consumer of it, not a producer, and that's not because guitars are too hard to play, it's because I'm incompetent at playing them, and my desire to play them is vastly less than my desire to listen to them.
Who are most software written for?
- People who hate computers and software.
What's common about most software?
- It kind of sucks more and more.
There's a reason some of the very best software on the planet is development tools, compilers, text editors, debuggers.. It's because that software is made by people who actually like using computers, and software, _FOR_ people who actually like using computers and software...
Imagine if we made cars for people who HATE to drive, made instruments for people who don't want to learn how to play.. Wrote books for people who don't want to read, and movies for people who hate watching movies. Any reason to think it's a reasonable idea to do that? Any reason to think that's how we get nice cars, beautiful instruments, interesting books and great movies ?
Fuck it. Just go pair your toaster with your "app" whatever seems particularity important.
Couldn't agree with this more. I'm even an advocate for simulating walled gardens with Free Software. Let people who need to feel swaddled in a product or a brand feel swaddled.
It also opens up opportunities for money-making, and employment in Free Software for people who do not program. The kind of hand-holding that some people prefer or need in UX is not easy to design, and the kind of marketing that leads people to the product is really the beginning of that process.
Nobody normal cares that it's a thin layer over the top of a bunch of copyleft that they wouldn't understand anyway (plenty of commercial software is a thin layer over permissively licensed stuff.) Most people I know barely know what files and directories are, and the idea of trying to learn fills them with an anxiety akin to math-phobia. Some (most?) people get a lot of anxiety about being called stupid, and they avoid the things that caused it to happen.
They do want privacy and ownership of their own devices as much as everyone else, however; they just don't know how much they're giving up when they do a particular software thing, or (like all of us) know that it is seriously difficult if not impossible to avoid the danger.
Give people mock EULAs to click through, but ones that enumerate the software's obligations to them, not their obligations to the software. Help them remain as ignorant as they want about how everything works, other than emphasizing the assurances that the GPL gives them.
> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
For those of you thinking "which 20%?" following that article from the other day: this is where good product sense comes in, knowing which 80% of people you want to use it first. You could either tack on more stuff from there to appeal to the remaining 20% of people, or you could launch another app/product/brand that appeals to another 80%. (e.g. shampoo for men, pens for women /s)
I like this idea -- a simple interface/frontend for an otherwise complicated topic, for the less skilled among us. It has intriguing possibilities beyond technology ...
Q: Why does God allow so much suffering?
A: What? There is no God. We invented him.
Q: Doesn't this mean life has no purpose?
A: Create your own purpose. Eliminate the middleman.
Q: But doesn't atheism allow evil people free rein?
A: No, it's religion that does that. A religious evil person can always claim God either granted him permission or forgave him after the fact. And he won't be contradicted by God, since ... but we already covered that.
Hmm. If it works for HandBrake, it might work for life.
Good article, but the reasoning is wrong. It isn't easy to make a simple interface in the same way that Pascal apologized for writing a long letter because he didn't have time to write a shorter one.
Implementing the UI for one exact use case is not much trouble, but figuring out what that use case is, is difficult. And defending that use case from the line of people who want "that + this little extra thing," or "I just need ...", is difficult. It takes a single strong-willed defender, or some sort of onerous management structure, to prevent the interface from quickly devolving back into the million options or schisming into other projects.
Simply put, it is a desirable state, but an unstable one.
Overall, the development world does not intuitively understand the difficulty of creating good interfaces (for people who aren't developers). In dev work, the complexity is obvious, and that makes it easy for outsiders to understand: they look at the code we're writing and say "wow, you can read that?!" I think that can give developers a mistaken impression that other people's work is far less complex than it is. With interface design, everybody knows what a button does and what a text field is for, and developers know more than most about the tools used to create interfaces, so the language seems simple. But the problems you need to solve with that language are complex, and while failure is obvious, success is much more nebulous and user-specific. So much of what good interfaces convey to users is implied rather than expressed, and that's a tricky task.
> creating good interfaces (for people who aren't developers)
This is the part where people get excited about AI. I personally think they're dead wrong on the process, but strongly empathize with that end goal.
Giving people the power to make the interfaces they need is the most enduring solution to this issue. We had attempts like HyperCard, Delphi, or Access forms. We still get Excel forms, Google Forms, etc.
Having tools to incrementally try stuff without having to ask the IT department is IMHO the best way forward, and we could look at those as prototypes for more robust applications to create from there.
Now, if we could find a way to aggregate these ad hoc apps in an OSS way...
Delphi and Access are pretty much still around, even if they are seldom the subject of an HN front-page post.
I have nightmare stories to tell of Access Forms from my time dealing with them in the '90s.
The usual situation is that the business department hires someone with a modicum of talent or interest in tech, who then uses Access to build an application that automates or helps with some aspect of the department's work. They then leave (in a couple of cases these people were just interns) and the IT department is then called in to fix everything when it inevitably goes wrong. We're faced with a bunch of beginner spaghetti code [0], utterly terrible schema, no documentation, no spec, no structure, and tasked with fixing it urgently. This monster is now business-critical because in the three months it's been running the rest of the department has forgotten how to do the process the old way, and that process is time-critical.
Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that. We have to fix what we can and get it working immediately. And, of course, these fixes cause havoc with the project planning of all our other projects because they're unpredictable, urgent, and high priority. This delays all the other projects and helps to give IT a reputation as taking too long and not delivering on our promised schedules.
So yeah, what appears to be the best solution from a non-IT perspective is a long, long way from the best solution from an IT perspective.
[0] and other messes; in one case the code refused to work unless a field in the application had the author's name in it, for no other reason than vanity, and they'd obfuscated the code that checked for that. Took me a couple of hours to work out wtf they'd done and pull it all out.
> Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that.
I assume those processes weren't applied when deciding to use this application. Why not? Was there a loophole because it was done by an intern?
Of course this is ultimately the IT department's own fault for not responding quickly enough to legitimate business requirements. They need to actively look for ways to help rather than processing tickets.
Yeah, this is always the response. But it's wildly impractical - there are only so many developer hours available. The budget is limited, so not everyone gets what they want immediately. This should be obvious.
Part of the problem is that the novices that create these applications don't consider all the edge cases and gnarly non-golden-path situations, but the experienced devs do. So the novice slaps together something that does 95% of the job with 5% of the effort, but when it goes wrong the department calls in IT to fix it, and that means doing the remaining 95% of the effort. The result is that IT is seen as slow and bureaucratic, when in fact they're just doing the fecking job properly.
In most organizations the problem is lack of urgency rather than lack of developer hours. The developers sit in isolated siloes rather than going out and directly engaging with business units. This is mostly a management problem but there are plenty of individual developers who wait to be told what to do rather than actively seeking out better solutions for business problems.
This usually comes back to maker time vs manager time.
If you want a developer to write good code quickly, put them in an isolated silo and don't disturb them.
If you want a developer to engage with the business units more, be prepared for their productivity to drop sharply.
As with all things in tech, it's a trade-off.
I think that's the lesser problem. The bigger problem is the attitude of IT is wrong from the start. When they start doing something, they want to Do It Right. They want to automate the business process. But that's the wrong goal! You can spend years doing that and go all the way to building a homegrown SAP, and it will still suck and people will still use their half-assed Excel sheets and Access hacks.
IT should not be focusing on the theoretical, platonic Business Process. It never exists in practice anyway. They should focus on streamlining actual workflow of actual people, i.e. the opposite of the usual advice: instead of understanding what users want and doing it, just do what they tell you they want. The problem with the standard advice is that the thing you seek to understand is emergent, no one has a good definition of it, and it will change three times before you finish your design doc.
To help the company get rid of YOLOed hacks in Excel and such made by interns, IT should YOLO better hacks: rapid delivery and responsiveness, but much more robust and reliable because of the actual developer expertise behind it.
> They should focus on streamlining actual workflow of actual people.
If you streamline a shitty process, you will have diarrhea...
Unfortunately, most processes suck and need improvement. It isn't actually IT's job to improve processes. But almost always, IT is the only department that is able to change those processes nowadays since they are usually tied to some combination of lore, traditions, spreadsheets and misused third-party software.
If you just streamline what is there, you are cementing those broken processes.
That's precisely the mistake I'm talking about. You think you're smarter than people on the ground, and know better how they should do their job.
It's because of that condescending, know-it-all attitude that people actively avoid getting IT involved in anything, and prefer half-assed Excel hacks. And they're absolutely right.
Work with them and not over them, and you may get an opportunity to improve the process in ways that are actually useful. Those improvements aren't apparent until you're knee-deep in mud yourself, working hand in hand with the people you're trying to help.
That depends on whether one measures productivity in LOCs or business impact. As always, it's not black or white, but my experience is that higher proximity is a net benefit.
The downside is that it quickly turns into idea people coming directly to the dev team, pushing BS ideas and requiring work to be done on them ASAP.
You need structure if you have an org of 100+ employees. If it is smaller than that, I don't believe you get a dev department.
more often than not it’s the development team that skips engaging with users, putting in minimal effort to understand their real needs.
most of these teams only want a straightforward spec, shutting themselves off from distractions, just to emerge weeks or months later with something that completely misses the business case. and yet, they will find ways to point fingers at the product owner, project manager, or client for the disaster.
I have met the occasional person like this, sure. But only ever in really large organisations where they can hide, and only a minority.
The huge majority of devs want to understand the business and develop high quality software for it.
In one business I worked for, the devs knew more about the actual working of the business than most of the non-IT staff. One of the devs I worked with was routinely pulled into high-level strategy meetings because of his encyclopaedic knowledge of the details of the business.
The mistake is in trying to understand the business case. There is nothing to understand! The business case is the aggregate of what people actually do. There is no proper procedure that's actually followed at the ground level. Workflows are emergent and in constant flux. In this environment, the role of a dev should not be to build internal products, but to deliver internal hacks and ad-hoc solutions, maintain them, modify them on the fly, and keep it all documented.
I.e. done right, it should be not just possible but completely natural for a random team lead in the mail room to call IT and ask, "hey, we need a yellow highlighter in the sheet for packages that Steve from ACME Shipping needs to pick up on the extra evening run, can you add it?", and the answer should be "sure!", and they should have the new feature within an hour.
Yes, YOLO development straight on prod is acceptable. It's what everyone else is doing all the time, in every aspect of the business. It's time for developers to stop insisting they're special and normal rules don't apply to them.
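The "yellow highlighter" request above can be sketched as exactly the kind of one-hour hack being described. This is a hypothetical illustration, not anything from the thread: the `Package` fields and the business rule are invented for the example.

```python
# Hypothetical sketch of the mail-room request: flag packages that the
# extra evening run should pick up. All names and fields are invented.
from dataclasses import dataclass

@dataclass
class Package:
    id: str
    carrier: str
    cutoff_missed: bool

def needs_evening_pickup(pkg: Package) -> bool:
    # The business rule exactly as dictated by the team lead, not one
    # inferred from a design doc: ACME Shipping packages that missed
    # the regular cutoff.
    return pkg.carrier == "ACME Shipping" and pkg.cutoff_missed

packages = [
    Package("P1", "ACME Shipping", True),
    Package("P2", "ACME Shipping", False),
    Package("P3", "Other Carrier", True),
]
# In the real sheet these rows would get the yellow fill; here we just
# collect the ids that match.
highlighted = [p.id for p in packages if needs_evening_pickup(p)]
print(highlighted)  # ['P1']
```

The point is that the rule lives in one obvious, documented place, so next week's "oh, and also packages over 20 kg" change is another five-minute edit rather than a project.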
And yet, even "knowing about the working of the business" is different from actually understanding user needs at the UI level, which involves a lot more variables.
The single most valuable tool is user testing. However, it really takes quite a few rounds of actually creating a design and seeing how wrongly you judged the other person's capabilities to grok how powerful user testing is at revealing your own biases.
And at its core it's not hard at all. The most important lesson really is a bit of humility: actually shutting up and observing what real users do when you don't intervene.
Shameless plug, my intro to user testing: https://savolai.net/ux/the-why-and-the-how-usability-testing...
> The usual situation is that the business department hires someone with a modicum of talent or interest in tech
This reminds me of the "just walk confidently into their office and ask for a job to get one!" advice. It sounded like bullshit to me until I stayed on at some parts of a previous company, where the hiring process really wasn't that far off.
That's also the kind of company where contracts and vendor choices are negotiated on golf courses, and the CEO's buddies might as well be running the company; it would be the same.
I feel for you.
It’s also about keeping things simple, hierarchical, and very predictable. These do not go hand in hand with the feature creep of collaborative FOSS projects, as others point out here.
Good point. A good interface usually demands a unified end-to-end vision, and that usually comes from one person who has sat down to mull it over and make a bunch of good executive decisions.
And then you need to implement that, which is never an easy task, and maintain the eternal vigilance to both adhere to the vision but also fit future changes into that vision (or vice versa).
All of that is already hard to do when you're trying to build something. Only harder in a highly collaborative voluntary project where it's difficult or maybe even impossible to take that sort of ownership.
In the 90s I did a tech writing gig documenting some custom software a company had built for them by one of the big consultancy agencies. It was a bit of a nightmare as the functionality was arranged in a way that reflected the underlying architecture of the program rather than the users’ workflows. Although I suppose if they’d written the software well, I wouldn’t have had as many billable hours writing documentation.
> reflected the underlying architecture of the program rather than the users’ workflows
Is this an inherently bad thing if the software architecture is closely aligned with the problem it solves?
Maybe it's the architecture that was bad. Of course there are implementation details the user shouldn't care about and it's only sane to hide those. I'm curious how/why a user workflow would not be obviously composed of architectural features to even a casual user. Is it that the user interface was too granular or something else?
I find that just naming things according to the behavior a layperson would expect can make all the difference. I say all this because it's equally confusing when the developer hides way too much. Those developers seem to lack experience outside their own domain and overcomplicate what could have just been named better.
Developers often don’t think it’s a bad thing because that’s how we think about software. Regular users think about applications as tools to solve a problem. Being confronted by implementation details is no problem for people with the base knowledge to understand why things are like that, but without that knowledge, it’s just a confusing mess.
If you ever spend time with the low-level SAP GUIs, then yes, you will find out why that's definitely a bad thing. Software should reflect users' processes. The code underneath is just an implementation detail and should never impact the design of the interfaces.
IMO they just don't care enough. They want people to use it, but it is not the end of the world if it stays niche.
I think the more likely explanation is software development is a huge opportunity cost (historically).
To learn how to be a software dev, takes so much time, you don't have time to learn the "arts".
The people who become programmers are a different breed. They are very close to the autism spectrum, if not on it, because that's what it takes to be a software dev (before LLMs).
Nowadays the tides might be changing.
The analogy is a hot chick. It's statistically very likely a hot chick does not know calculus.
Hot chicks aren't inherently dumb. Make-up skills/skin care is such a huge opportunity cost. There's no way someone can stay pretty AND learn calculus at the same time.
> It's statistically very likely a hot chick does not know calculus.
It would be honestly interesting if someone actually did a study regarding it.
I do agree with this statement, but it isn't as if everybody else doesn't have other opportunity costs; people might have video games or just normal hobbies in general, which could also be considered opportunity costs.
The question that sounds more interesting to me (and I may be reading between the lines here) is whether society showers so much attention on beauty that it makes them less drawn to, say, calculus, which might feel a lot more boring by comparison.
Generally speaking, I was reading the other day that female involvement in STEM overall has declined in percentage terms, iirc.
Another factor could be the weirdness of expectations. Just as you assume this, many people assume it about hot chicks, so if one is actually into calculus and says so, people respond with "oh wow, I didn't know that" or "really??", which creates a sense of weirdness and an expectation that they not be that way, that they stay conventional and not have such interests.
I have seen people get shocked in online communities if a girl is even learning programming or doing things like Hyprland (maybe myself included, as it was indeed rare).
Naturally I would love it if more girls were into this, because talking about my hobbies with someone who isn't that interested, or not having common hobbies at all, hurts; they can appreciate it, but I can't tell them everything. I am not that deep a coder right now so much as a Linux tinkerer: building Linux ISOs from scratch, shell scripting, building some web services, etc. I like to tinker with software. The word used in Unix/FOSS communities for this is "hacking," which should be the perfect way to describe what I mean, except people think I am talking about cybersecurity and want me to "hack something." Sorry about the rant, but I have stopped saying "hacking" just because of how tied it is to cybersecurity in the public mind; I just say that I love tinkering with software nowadays. Side note: is there a better word for what I am describing than "hacking"?
It sounds like you are a Linux UI designer.
Which is a rare thing in this space. Linux is rough around the edges, to say the least. You don't need me telling you. We are in a thread about how open sources software suck at UI design. We could use more people like you in this space.
The men aren't fussed with the "hacker" label. It sounds cool. It's like when people mistakenly think all Asians know Kung Fu or something. The Asian guy isn't complaining lol.
There's definitely stigma/sexism that deter women away from this field. But I think opportunity cost is a factor, gravely overlooked.
Society demands a lot from women, when it comes to appearance. The bar is set very high.
So high, you don't have the time to be a good programmer AND pretty. Unless you won the genetic lottery.
I follow women's basketball avidly. Some of the women are not pretty. They are just very good at basketball. It's refreshing to see women be valued, not just because of their beauty.
> I think that can give developers a mistaken impression that other people's work is far less complex than it is.
Not at all. Talented human artists still impress me as doing the same level of deep "wizardry" that programmers are stereotyped with.
That might be the case for you, but something doesn’t need to be universally true for it to be true enough to matter. Find any thread about AI art around here and check out how many people have open contempt for artists’ skills. I remember the t-shirts I saw a few sys admins wearing in the nineties that said “stop bothering me or I’ll replace you with a short shell script.” In the decades I worked in tech, I never saw that attitude wane. I saw a thread here within the past year or two where one guy said he couldn’t take medical doctors and auto mechanics seriously because they lacked the superior troubleshooting skills of a software developer. Seriously. That’s obviously not everybody, but it’s deeefinitely a thing.
I believe it comes from low self esteem initially. Then finding their way into computers, where they then indeed have higher skills than average and maybe indeed observed that the job of some people could be automated by a shell script. So ... lots of ungrounded ego suddenly, but in their new guru ego state, they extrapolated from such isolated cases to everywhere.
I also remember the hostility of my university's informal IT chat groups. Newbs were insulted for not knowing basic stuff instead of being helped. A truly confident person does not feel the need to do that. (And it was amazing having a couple of those confident people writing very helpful responses in the middle of all the insulting garbage.)
Trust me, there are enough people here that believe that.
Other engineering disciplines are simpler because you can only have complexity in three dimensions, while in software complexity can be everywhere.
Crazy to believe that
There are many more than three "dimensions" if I may use the term loosely, in software or hardware engineering.
Cost, safety, interaction between subsystems (developed by different engineering disciplines), tolerances, supply chain, manufacturing, reliability, the laws of physics, possibly chemistry and environmental interactions, regulatory, investor forgiveness, etc.
Traditional engineering also doesn't have the option of throwing arbitrary levels of complexity at a problem, which means working within tight constraints.
I'm not an engineer myself, but a scientist working for a company that makes measurement equipment. It wouldn't be fair for me to say that any engineering discipline is more challenging, since I'm in none of them. I've observed engineering projects for roughly 3 decades.
One thing I still struggle with is writing interfaces for complex use cases in an intuitive and simple manner that minimizes required movements and context switching.
Are there any good resources for developing good UX for necessarily complex use cases?
I am writing scheduling software for an uncommon use case.
The best method I have found is to use the interface and fix the parts that annoy me. After decades of games and internet I think we all know what good interfaces feel like. Smooth and seamless to get a particular job done. If it doesn't feel good to use it is going to cause problems with users.
That said, I see the software they use on the sales side. People will learn complexity if they have to.
Honestly, it’s a really deep topic — for a while I majored in interface/interaction design in school— and getting good at it is like getting good at writing. It’s not like amateurs can’t write solid stories, but they probably don’t really understand the decisions they’re making and the factors involved, and success usually involves accidentally being influenced by the right combination of things at the right time.
The toughest hurdle to overcome as a developer is not thinking about the gui as a thin client for the application, because to the user, the gui is the application. Developers intuitively keep state in their head and know what to look for in a complex field of information, and often get frustrated when not everything is visible all at once. Regular users are quite different— think about what problems people use your software to solve, think about the process they’d use to solve them, and break it down into a few primary phases or steps, and then consider everything they’d want to know or be able to do in each of those steps. Then, figure out how you’re going to give focus to those things… this could be as drastic as each step having its own screen, or as subtle as putting the cursor in a different field.
Visually grouping things, by itself, is a whole thing. Important things to consider that are conceptually simple but difficult to really master are informational hierarchy and how to convey that through visual hierarchy, gestalt, implied lines, type hierarchy, thematic grouping (all buttons that initiate a certain type of action, for example, might have rounded corners.)
You want to communicate the state of whatever process, what’s required to move forward and how the user can make that happen, and avoid unintentionally communicating things that are unhelpful. For example, putting a bunch of buttons on the same vertical axis might look nice, but it could imply a relationship that doesn’t exist. That sort of thing.
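The "break it down into phases and give focus to each" advice above can be sketched as a tiny wizard-style state machine. This is a hypothetical illustration, assuming a step-per-screen flow; the step names and field names are invented, not from the comment.

```python
# Minimal sketch of "each step having its own screen": model the user's
# process as explicit phases, each exposing only what that phase needs.
# Step and field names are invented for illustration.
STEPS = ["choose_source", "pick_preset", "review", "run"]

class Wizard:
    def __init__(self) -> None:
        self.index = 0

    @property
    def current(self) -> str:
        return STEPS[self.index]

    def visible_fields(self) -> list[str]:
        # Show only what the current phase requires; everything else is
        # hidden outright rather than greyed out and left to distract.
        return {
            "choose_source": ["input_file"],
            "pick_preset": ["preset"],
            "review": ["summary"],
            "run": ["progress"],
        }[self.current]

    def advance(self) -> None:
        self.index = min(self.index + 1, len(STEPS) - 1)

w = Wizard()
w.advance()
print(w.current, w.visible_fields())
```

The same structure works whether each phase gets a whole screen or just moves the cursor; the point is that the phases, and what belongs to each, are explicit decisions rather than whatever the data model happened to expose.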
A book that helps get you into the designing mindset, even if it isn't directly related to interface design, is Don Norman's The Design of Everyday Things. People criticize it like it's an academic tome; don't take it so seriously. It shows a way of critically thinking about things from the user's perspective, and that's the most important part of design.
> Overall, the development world does not intuitively understand the difficulty of creating good interfaces
Nor does the design world, for that matter. They think that making slightly darker gray text on a gray background, using a tiny font and leaving loads of empty space, is peak design. Meanwhile my father cannot use most websites because of this.
The dozens of people I know that design interfaces professionally can probably recite more of the WCAG by heart than some of the people that created them. You’re assuming that things you think “look designed” were made by designers rather than people playing with the CSS in a template they found trying to make things “look designed.” You’re almost certainly mistaken.
> can probably recite more of the WCAG by heart than some of the people that created them
That's part of the problem, they'll defend their poorly visible choice by lawyering "but this meets the minimal recommended guideline of 2.7.9"
As I age, this x1000. Even simple slack app on my windows laptop - clicking in the overview scroll bar is NOT "move down a page". It seems to be "the longer you click, the further it moves" or something equally disgusting. Usually, I dock my laptop and use an external mouse with wheel, and it's easy to do what I want. With a touchpad? Forget it.. I'm clicking 20x to get it to move to the top - IF I can hit the 5-pixel-wide scrollbar. There's no easy way to increase scrollbar size anymore either..
It's like dark patterns are the ONLY pattern these days.. WTF did we go wrong?
Indeed.
Win95 was peak UI design.
I don’t understand modern trends.
I created localslackirc to keep using IRC and not have to deal with slack :D
With a touchpad? Use two fingers to scroll (also works horizontally). Who's managing to hit a tiny scrollbar that disappears with a touchpad‽
They just aren't as good at detecting real physical contact as a nice physical mouse is at responding to movement and pressure.
I mean, maybe but the question wasn't what is the superior general pointing device (trackball ftw if you ask me) though, but how to scroll using a trackpad without tearing your hair out.
What pisses me off is that the “brutalist” style in the 1990s was arguably perfect. Having standardized persistent menus, meaningful compact toolbars was nice.
Then the world threw away the menus, adopted an idiotic “ribbon” that uses more screen real estate. Unsatisfied, we dumbed down desktop apps to look like mobile apps, even though input technology remains different.
Websites also decided to avoid blue underlined text for links and be as nonstandard as possible.
Frankly, developers did UI better before UI designers went off the deep end.
The brutalist style also meant that I didn't need a UI designer for my applications. With Delphi I was able to create great apps in a matter of days. And users loved them, because they were so snappy and well thought out. Nowadays it seems I need a UI designer to accomplish just about anything. And the resulting apps might look better but are worse when you are actually trying to accomplish work using them.
I was ranting exactly the same just yesterday. Nowadays UI designers seem to have forgotten all about affordances. Back in the day you had drop shadows below buttons to indicate that they could be pressed, big chunky scrollbars with two lines on the handle to indicate "grippiness" etc.
A few days ago I had trouble charging an electric rental car. When plugging it in, it kept saying "charging scheduled" on the dash, but I couldn't find out how to disable that and make it charge right away. The manual seemed to indicate it could only be done with an app (ugh, disgusting). Went back to the rental company, they made it charge and showed me a video of the screen where to do that. I asked "but how on earth do you get to that screen?". Turned out you could fucking swipe the tablet display to get to a different screen! There was absolutely no indication that this was possible, and the screen even implied that it was modal because there were icons at the bottom which changed the display of the screen.
So you had: zero affordances, modal design on a specific tab, and the different modes showed different tabs at the top, further leading me to believe that this was all there was.
I've had long discussions at work with our designer, who thinks that people on desktop computers should perform swipe actions with the mouse rather than the UI reacting to mouse scroll events.
99% of the users are not using the mobile version.
The contributors of free software tend to be power users who want to ensure their use case works. I don't think they're investing a lot of thought into the 80/20 use case for normal/majority users, or would risk hurting their own workflow to make it easier for others.
True; that's why we have companies with paid products who devote a lot of their time (arguably the majority) to making the exact interfaces people want and understand :) It's a ton, a ton of difficult work, for which there is little to no incentive in the free software ecosystem.
And this is precisely why desktop Linux has not knocked off Windows or MacOS.
I'd argue that's more because the average person has no interest in installing a new OS, or even any idea what an OS is.
Most people just keep the default. When the default is Linux (say, the Steam Deck), most people just keep Linux.
And that's fine. Those users who want something that's not like desktop Linux have plenty of options.
And increasingly it doesn't matter because they just live in a browser anyway.
Which also makes it easier than ever for more users to run Linux as a desktop OS :)
Absolutely. I still prefer MacOS/Mac hardware in some ways but running a browser on Linux on a Thinkpad or whatever works pretty well for a lot of purposes.
Omarchy tries resolving this https://github.com/basecamp/omarchy
Dear reader, please make sure you look up whose project this is and why it's spammed everywhere.
> contributors of free software tend to be power users
or, simply put, nerds
it takes a different background, approach, and skillset to design UX and interfaces
if anything, FOSS should figure out how to attract skilled artists so the majority of designs and logos don't look so blatantly amateurish.
My guess is that, as has always been, the pool of people willing to code for free on their own time because it's fun is just much larger than the people willing to make icons for software projects on their own time because they think it's fun.
Graphic designers and artists get ripped off all the time, frequently by nerds, who tend to do so in a manner that insults the value of the artist's work.
It's difficult to get those kinds of creatives to donate their time (trust me on this, I'm always trying).
I'm an ex-artist, and I'm a nerd. I can definitively say that creating good designs is at least as difficult as creating good software, but it seldom makes the kind of margin you can make from software, so misappropriation hurts artists a lot more than programmers.
Most fields just don't have the same everyone-wins culture of collaboration that software does. Artists don't produce CC art at anywhere close to the volume or influence with which engineers produce software. This is probably due to some kind of compounding effect available in software that isn't available in graphics.
Software people love writing software to a degree where they’ll just give it away. You just won’t find artists doing the same at the same scale. Or architects, or structural engineers. Maybe the closest are some boat designs but even those are accidental.
It might just be that we were lucky to have some Stallmans in this field early.
Isn’t there a lot more compensation available in software? Like as a developer, you can make a lot of money without having to even value money highly. I think in other fields you don’t generally get compensated well unless you are gunning/grinding for it specifically. “For the love of the art” people in visual arts are painters or something like that, probably. Whereas with software you can end up with people who don’t value money that much and have enough already, at least to take a break from paid work or to not devote as much effort to their paid work. I imagine a lot of open source people are in that position?
I think most OSS projects are started by unemployed people as hobbies. Or ego projects to get jobs.
Well, early '90s Torvalds wasn't the wealthy fellow he is now and he was busy churning things out and then relicensed Linux under GPL.
I think the collaborative nature of open source software dev is unlike anything else. I can upload some software in hopes that others find it useful and can build on top of it, or send back improvements.
Not sure how that happens with a painting, even a digital one.
Fonts are an interesting case. The field of typography is kind of migrating from the "fuck you, pay me" ethic of the pure design space into a more software-like "everyone wins" state, with plenty of high-quality open-source fonts available, whereas previously we had to make do with bitmap-font droppings from proprietary operating systems, Bitstream Vera, and illegal-to-redistribute copies of Microsoft's web font pack.
I think this is because there are plenty of software nerds with an interest in typography who want to see more free fonts available.
There's actually a fair bit of highly influential CC-licensed artwork out there. Wikipedia made a whole free encyclopedia. The SCP Foundation wiki is its own subculture. There's loads of Free Culture photography on Wikimedia Commons (itself mirrored from Flickr). A good chunk of your YouTube feed is probably using Kevin MacLeod music - and probably fucking up the attribution strings, too. A lot of artists don't really understand copyright.
But more importantly, most of them don't really care beyond "oh copyright's the thing that lets me sue big company man[0]".
The real impediment to CC-licensed creative works is that creativity resists standardization. The reason why we have https://xkcd.com/2347/ is because software wants to be standardized; it's not really a creative work no matter what CONTU says. You can have an OS kernel project's development funded entirely off the back of people who need "this thing but a little different". You can't do the same for creativity, because the vast majority of creative works are one-and-done. You make it, you sell it, and it's done. Maybe you make sequels, or prequels, or spinoffs, but all of those are going to be entirely new stories maybe using some of the same characters or settings.
[0] Which itself is legally ignorant because the cost of maintaining a lawsuit against a legal behemoth is huge even if you're entirely in the right
I like this explanation, though there is one form of creative standardization: brand identity. And I suppose that's where graphics folk engage with software (Plasma, the GNOME design, etc.). Amusingly, I like contributing to Wikipedia and the Commons so I should have thought of that. You're absolutely right that I had a blind spot there in terms of what's the equivalent there of free software.
Another thing is that the vast amount of fan fiction out there has a hub-and-spoke model forming an S_n graph around the solitary 'original work' and there are community norms around not 'appropriating' characters and so on, but you're right that community works like the SCP Foundation definitely show that software-like property of remixing of open work.
Anyway, all to say I liked your comment very much but couldn't reply because you seem to have been accidentally hellbanned some short while ago. All of your comments are pretty good, so I reached out to the HN guys and they fixed it up (and confirmed it was a false positive). If you haven't seen people engage with what you're saying, it was a technical issue not a quality issue, so I hope you'll keep posting because this is stuff I like reading on HN. And if you have a blog with an RSS feed or something, it would be cool to see it on your profile.
This is a weird thread for me to read, as someone who a) works primarily with developer tooling (and not even GUI tooling, I write cryptography stuff usually!), b) is very active in a vibrant community of artists that care about nerd software projects.
I don't, as a rule, ever ask artists to contribute for free, but I still occasionally get gifted art from kind folks. (I'm more than happy to commission them for one-off work.)
Artists tragically undercharge for their labor, so I don't think the goal should be "coax them into contributing for $0" so much as "coax them into becoming an available and reliable talent pool for your community at an agreeable rate". If they're enthusiastic enough, some might do free work from time to time, but that shouldn't be the expectation.
It’s a long story, in my case.
There’s a very good reason for me to be asking for gratis work. I regularly do tens of thousands of dollars’ worth of work for free.
Why should they work for pay on free software? Nobody expects to be paid to work on the software itself. Yet artists expect to be treated differently.
If it is your job, then go do it as a job. But we all have jobs. Free software is what we do in our free time. Artists don't seem to have this distinction. They expect to be paid to do a hobby.
Doing a pro graphic design treatment is a lot more than just "drawing a few pictures" and picking a color palette.
It usually involves developing a design language for the app, or sometimes, for the whole organization (if, like the one I do a lot of work for, it's really all about one app). That's a big deal.
Logo design is also a much more difficult task than people think. A good logo can be insanely valuable. The one we use for the app I've done a lot of work on was a quick "one-off" by a guy who ended up running design for a major software house. It was a princely gift.
> Doing a pro graphic design treatment is a lot more than just "drawing a few pictures" and picking a color palette.
Are you quoting someone? Yeah it's a real job, and so is programming. I don't think anyone in this conversation is being dismissive about either job.
You'd be surprised, then, to know that a lot of programmers think graphic design is easy (see the other comment, in this thread), and can often be quite dismissive of the vocation.
As a programmer, working with a good graphic designer can be very frustrating, as they can demand that I make changes that seem ridiculous, to me, but, after the product ships, makes all the difference. I've never actually gotten used to it.
That's also why it's so difficult to get a "full monty" treatment, from a designer, donating their time.
> see the other comment
Which other comment?
If you mean the one saying it's not harder than programming, that's not calling it easy.
It can be a lot harder. Programming, these days, isn't always that hard.
Very different skillset. There was a comment about how ghastly a lot of software-developed graphical assets can be.
Tasteful creativity does not grow on trees.
"can be" makes it a very different statement. Either one "can be" a lot harder than the other, depending on the task. The statement above is about typical difficulty.
And even if they're wrong about which one is typically harder, they weren't saying it was easy, and weren't saying it was significantly easier than programming.
Programming is a big deal too.
It’s not like graphic design is harder than programming.
I’d rather have crappy graphics than pay designers instead of programmers for free oss.
It's just more common for artists to do small commission work on the side of a real job. 30 dollars for something is basically a donation or tip in my view, and the community can crowd fund for it the same way bug bounties work I think?
> Yet artists expect to be treated differently.
Because it's a different job!
Your post is like asking, "Why is breathing free but food costs money?"
Either you're implying that people should code for free, or your analogy is so vague as to be useless.
Yeah it's a different job but they're both jobs. Why should one be free and one not be free?
Because programmers consent to programming for free. That fact does not, in any way, obligate anyone else to.
The question/skepticism is why the programmers are consenting to this but not the artists.
I suspect some of this is due to the fact that the programmers consenting to do free work already have well-paying jobs, so they have the freedom and time to pursue coding as a hobby for fun as well. Graphic designers and UX designers are already having a hard time getting hired for their specific skills and getting paid well for it, so I imagine it's insulting to be asked to do it for free on top of that.
That said, I don't think it's as simple as that. Coding is a kind of puzzle-solving that's very self-reinforcing and addictive for a certain type of person. Coders can't help plugging away at a problem even if they're not at the computer. Drawing, on the other hand, requires a lot more drudgery to get good, for most people anyway, and likely isn't as addictive.
I believe it's more nuanced than that. Artists, like programmers, aren't uniformly trained or skilled. An enterprise CRUD developer asks different questions and proposes different answers compared to an embedded systems dev or a compiler engineer.
Visual art is millennia older and has found many more niches, so, besides there being a very clear history and sensibility for what is actually fundamental vs industry smoke and mirrors, for every artist you encounter, the likelihood that their goals and interests happen to coincide with "improve the experience of this software" is proportionately lower than in development roles. Calling it drudgery isn't accurate because artists do get the bug for solving repetitive drawing problems and sinking hours into rendering out little details, but the basic motive for it is also likely to be "draw my OCs kissing", with no context of collaboration with anyone else or building a particular career path. The intersection between personal motives and commerce filters a lot of people out of the art pool, and the particular motives of software filters them a second time. The artists with leftover free time may use it for personal indulgences.
Conversely, it's implicit that if you're employed as a developer, that there is someone else that you are talking to who depends on your code and its precise operation, and the job itself is collaborative, with many hands potentially touching the same code and every aspect of it discussed to death. You want to solve a certain issue that hasn't yet been tackled, so you write the first attempt. Then someone else comes along and tries to improve on it. And because of that, the shape of the work and how you approach it remains similar across many kinds of roles, even as the technical details shift. As a result, you end up with a healthy amount of free-time software that is made to a professional standard simply because someone wanted a thing solved so they picked up a hammer.
Why aren't programmers drawing furry porn?
It's really not deep.
I dispute that claim but it doesn't answer the question. When you have multiple people involved in the community of an open source project, what makes them decide where to contribute, and what makes them decide if they'll use marketable skills for free or not? I think it's an interesting thing to look into.
Wouldn’t designers consent to designing for free?
This seems like a self selection problem. It’s not about forcing people to work for free. It’s about finding designers willing to work for free (just like everyone else on the project).
You know that (some) people get paid to work on free software, right?
Much larger, but not non-existent; people post their work (including laborious stuff like icon suites and themes) on art forums and websites for no gain all the time.
Going back to the winxp days there was a fairly vibrant group of people making unofficial themes for it, although I think that was helped by the existence of tools (from Stardock?) specialized on that task and making it approachable if your skill set didn't align perfectly.
UI != icons.
UI and UX are for all intents and purposes lost arts. No one is sitting on the other side of a two-way mirror any more and watching people use their app...
This is how we get UIs that work but suck to use. This is how we allow dark patterns to flourish. You can and will happily do things your users/customers hate if it makes a dent in the bottom line and you don't have to face their criticisms directly.
> UI and UX are for all intents and purposes lost arts. No one is sitting on the other side of a two-way mirror any more and watching people use their app...
Which is also why UI/UX on open source projects are generally going to suck.
There's certainly no money to pay for that kind of experiment.
And if you include telemetry, people lose their goddamn minds, assuming the open source author isn't morally against it to begin with.
The result is you're just getting the author's intuitive guesswork about UI/UX design, by someone who is likely more of a coder than a design person.
The dependency on telemetry instead of actually sitting down with a user and watching them use your software is part of the problem. No amount of screen captures, heatmaps or abandoned workflow metrics will show you the expression on a person's face.
Unless you get super invasive, telemetry will tell you how often a feature is used but I don't think it'll help much with bad and confusing layouts.
They're not just nerds, they're power users. These are different things.
Pretty much everyone is a power user of SOME software. That might be Excel, that might be their payroll processor, that might be their employee data platform. Because you have to be if you work a normal desk job.
If Excel was simpler and had an intuitive UI, it would be worthless. Because simple UI works for the first 100 hours, maybe. Then it's actively an obstacle because you need to do eccentric shit as fast as possible and you can't.
Then, that's where the keyboard shortcuts and 100 buttons shoved on a page somewhere come in. That's where the lack of whitespace comes in. Those aren't downsides anymore.
"If Excel was simpler and had an intuitive UI, it would be worthless."
Excel is a simple intuitive UI.
I use 10% of Excel. I don't even know the 90% of what it's capable of.
It hides away its complexity.
For people that need the complex stuff, they can access it via menus/formulas.
For the rest of us, we don't even know it's there.
Whereas, Handbrake shoves all the complexity in your face. It's overwhelming for first time users.
The person who is going to bother adding stuff to a piece of software is almost certainly by definition a power-user.
This means they want to add features they couldn't get anywhere else, and already know how to use the existing UI. Onboarding new users is just not their problem or something they care about - They are interested in their own utility, because they aren't getting paid to care about someone else's.
It's not a "nerd" thing.
I'm optimistic that the rise of vibe coding will allow the people who understand the user's wants and needs to fix the world's FOSS UIs.
I'm sceptical about fixing (in the sense of a lasting solution), but it might be a very powerful tool to communicate to devs what the UI should look like.
UX and interface designers are also nerds.
i think the bigger issue is that the power users' use cases are different from the non-power users'. not a skillset problem, but an incentive one
I have been beating this drum for many years. There are some big cultural rifts and workflow difficulties. Unless FOSS products are run by project managers rather than either developers or designers, it’s a tough nut. Last I looked, gimp has been really tackling this effort more aggressively than most.
I am not convinced bad UI is either a FOSS issue, or solved by having project managers. I know very non-tech people who struggle with Windows 11, for example. I do not like MS Office on the rare occasions I have used it on other people's machines. Not that impressed by the way most browser UIs are going either.
Microsoft has been lagging on interface design for a long time. If the project managers are focused on forcing users into monetizable paths against their will, then of course you’re going to get crap interfaces and crap software quality. If you have a project manager that’s focused on directing people to solve problems for users rather than people just bolting on whatever makes sense, then that’s a lot different. And no, bad UIs aren’t inherent to FOSS— look at Firefox, Blender, Signal… all FOSS projects that are managed by people focused on integrating the most important features in a way that makes sense for the ecosystem.
gimp has been my go-to when I want to explain bad UI, developer-designed UI, or just typical FOSS UI. I'm glad they're fixing it. It's also my image editor of choice.
Yeah I’ve been using it as a go-to example for the wrongest approach to UI design for years. I’m glad to see they’re working harder than most to fix some of the underlying problems.
To design a good user interface, you need a feedback loop that tells you how people actually use your software. That feedback loop should be as painless for the user as possible.
Having people to man a 1-800 number is one way to get that feedback loop. Professional user testing is another. Telemetry / analytics / user tracking, or even being able to pull out statistics from a database on your server, is yet another. Professional software usually has at least two of these, sometimes all four. Free software usually has none.
There are still FLOSS developers out there who think that an English-only channel on Libera.chat (because Discord is for the uneducated n00bs who don't know what's good for them) is a good way to communicate with their users.
What developers want from software isn't what end users want from software. Take Linux for example. A lot of things on Linux can only be done in the terminal, but the people who are able to fix this problem don't actually need it to be fixed. This is why OSS works so well for dev tools.
It always amazes me how even just regular every day users will come to me with something like this:
Overly simplified example:
"Can you make this button do X?" where the existing button in so many ways is only distantly connected to X. And then they get stuck on the idea that THAT button has to be where the thing happens, and they stick with it even if you explain that the usual function of that button is Y.
I simplified it saying button, but this applies to processes and other things. I think users sometimes think picking a common thing, button or process that sort of does what they want is the right entry point to discuss changes and maybe they think that somehow saves time / developer effort. Where in reality, just a new button is in fact an easier and less risky place to start.
I didn't say that very well, but I wonder if that plays a part in the endless adding of complexity to UI where users grasp onto a given button, function, or process and "just" want to alter it a little ... and it never ends until it all breaks down.
I always tell clients (or users): "If you bring your car to the mechanic because it's making a noise and tell them to replace the belt, they will replace the belt and you car will still make the noise. Ask them to fix the noise."
In other words, if you need expert help, trust the expert. Ask for what you need, not how to do it.
If you tell the mechanic "my car is making a noise, fix the belt please" and then they just fix the belt, that's on the mechanic as well.
I would hope the mechanic would engage with the customer in more back and forth.
But sometimes power structures don't allow for it. I worked tech support in a number of companies. At some companies we were empowered to investigate and solve problems... sometimes that took work, and work from the customer. It had much better outcomes for the customer, but fixes were not quick. Customers / companies with good technical staff in management understood that dynamic.
Other companies were "just fix it" shops, where tech support were just annoying drones, and the company, customers, and management treated tech support as annoying drones. They got a lot more "you got exactly what you asked for" treatment ... because management and even customers will take the self-defeating quick and easy path sometimes.
It's a hypothetical to communicate an entirely different point. The mechanic isn't real or important.
It is a common misconception that the "expert" knows best. The expert can be a trainee, may be motivated to make more money for their organisation, or may have yet to encounter your problem.
On the other hand, if you have been using your car for a decade and feel it needs a new belt - then get a new belt. Worst case scenario: you lose some money but learn a bit more about an item you use every day.
Experts don't have your instincts as a user.
I am a qualified mechanic. I no longer work in the field but I did for many years. Typically, when people 'trust their instincts as a user' they are fantastically wrong. Off by a mile. They have little to no idea how a car works besides youtube videos and forum posts which are full of inaccuracies or outright nonsense and they don't want to pay for diagnosis.
So when they would come in asking for a specific part to be replaced with no context I used to tell them that we wouldn't do that until we did a diagnosis. This is because if we did do as they asked and, like in most cases, it turned out that they were wrong they would then become indignant and ask why we didn't do diagnosis for free to tell them that they were wrong.
Diagnosis takes time and, therefore, costs money. If the user was capable of it then they would also be capable enough to carry out the repair. If they're capable of carrying out the diagnosis and the repair then they wouldn't be coming to me. This has proved to be true over many years for everyone from kids with their first car to accountants and even electrical engineers working on complex systems for large corporations as their occupation. That last one is particularly surprising considering that an engineer should know the bounds of their knowledge and understand how maintenance, diagnosis and repair work on a conceptual level.
Don't trust your instincts in areas where you have no understanding. Either learn and gain the understanding or accept that paying an expert is part of owning something that requires maintenance and repair.
If you don't trust the expert then why are you asking them to fix your stuff? It's a weird idea that you'd want an idiot to do what you say because you know better.
In this case, it's at least partly because the expert has access to a lift...
I think what you're driving at can be more generalized as users bringing solutions when it would be more productive for them to bring problems. This is something I focus on pretty seriously in IT. The tricky part is to get the message across without coming across as unhelpful, arrogant, or obstructive. It often helps to ask them to describe what they're trying to achieve, or what they need. But however you approach the discussion, it must come across as a sincere desire to help.
You are describing a form of the XY problem. https://en.wikipedia.org/wiki/XY_problem
I think you are likely correct, thank you.
Yeah, I've had now a couple decades of experience dealing with this, and my typical strat is to "step back" from the requested change, find out what the bigger goal is, and usually I will immediately come up with a completely different solution to fulfill their goal(s). Usually involving things they hadn't even thought about, because they were so focused on that one little thing. When looking at the bigger picture, suddenly you realize the project contains many relevant pieces that must be adjusted to reach the intended goals.
In my experience, this is a communication issue, not a logical or technical or philosophical issue. Nor the result of a fixation caused by an idea out of the blue.
In my experience it may be solved by both parties spending the effort and time to first understand what is being asked... assuming they are both willing to stomach the costs. Sometimes it isn't worth it, and it's easier to pacify than respectfully and carefully dig.
Don't fall into the trap of responding to the user's request to do Y a certain way. They are asking you to implement Y, and they think they know how it should be implemented, but really they would be happy with Y no matter how you did it. https://xyproblem.info/
On the other hand, I've not uncommonly seen this idea misused: Alice asks for Y, Bob says that it's an XY problem and that Alice really wants to solve a more general problem X with solution Z, Alice says that Z doesn't work for her due to some detail of her problem, Bob browbeats Alice over "If you think Z won't work, then you're wrong, end of story", and everyone argues back and forth over Z instead of coming up with a working solution.
Sometimes the best solution is not the most widely-encouraged one.
Bob saying "you should use Z, end of story" is just as hardheaded and unhelpful as Bob saying "X doesn't do that, end of story".
Yeah I often will ask for a quick phone call and try to work from the top down, or the bottom up depending on the client. Getting to the thing we're solving often leads to a different problem description and later different button or concept altogether.
Sometimes it's just me firing up some SQL queries and discovering "Well this happened 3 times ... ever ..." and we do nothing ;)
I think the XY problem is likely very common. But developers are tending to use the term in a very dismissive, superior way now.
It's my belief that much of this flavor of UI/UX degradation can be avoided by employing a simple but criminally underutilized idea in the software world (FOSS portion included), which is feature freezing.
That is, either determine what the optimal set of features is from the outset, design around that, and freeze - or organically reach the optimum and then freeze. After implementing the target feature set, nearly all engineering resources are dedicated to bug fixes and efficiency improvements. New features can be added only after passing through a rigorous gauntlet of reviews that determines whether the value of the feature's addition is worth the inherent disruption and impact to stability and resource consumption - and if so, its integration into the existing UI is approached holistically (as opposed to the usual careless bolt-on approach).
Naturally, there are some types of software where requirements are too fast-moving for this to be practical, but I would hazard a guess that it would work for the overwhelming majority of use cases which have been solved problems for a decade or more and the required level of flux is in reality extremely low.
Spot on. Defending simplicity takes a lot of energy and commitment. It is not sexy. It is a thankless job. But doing it well takes a lot of skill, skill that is often disparaged by many communities as "political nonsense"[1]. It is not a surprise that the free software world has this problem.
But it is not a uniquely free software world problem. It is there in the industry as well. But the marketplace serves as a reality check, and kills egregious cases.
[1] Granted, "political nonsense" is a dual-purpose skill. In our context, it can be used both for "defending simplicity" and for "resisting meaningful progress". It's not easy to tell the difference.
The cycle repeats frequently in industry. New waves of startups address a problem with better UX, and maybe some other details like increased automation and speed using more modern architectures. But feature creep eventually makes the UX cumbersome, and the complexity makes it hard to migrate to new paradigms - or at least to do so without a ton of baggage - so they in turn are displaced by new startups.
If the last part was true, Autodesk and Adobe would have had to go under a decade ago.
I suspect in the short term users are going to start solving this more and more by asking ChatGPT how to make their video work on their phone, and it telling them step by step how to do it.
Longer term I wonder if complex apps with lots of features might integrate AI in such a way that users can ask it to generate a UI matching their needs. Some will only need a single button, some will need more.
Not only is it hard to figure out the use-case, but the correct use-case will change over time. If this were made in the iPod touch era, it would probably make 240p files for maximum compatibility. That's ... probably the wrong setting for today.
Good points, but to add to the sources of instability ... a first time user of a piece of software may be very appreciative of its simplicity and "intuitiveness". However, if it is a tool that they spend a lot of time with and is connected to a potentially complex workflow, it won't be long before even they are asking for "this little extra thing".
It is hard to overestimate the difference between creating tools for people who use the tools for hours every day and creating tools for people who use tools once a week or less.
Right. For most people, gimp is not only overkill but also overwhelming. It's hard to intuit how to perform even fairly simple tasks. But for someone who needs it it's worth learning.
The casual user just wants a tool to crop screenshots and maybe draw simple shapes/lines/arrows. But once they do that they start to think of more advanced things and the simple tool starts to be seen as limiting.
But the linked article addresses that. They're not advocating for removing the full-feature UI, they just advise having a simple version that does the one thing (or couple of things) most users want in a simple way. Users who want to do more can just use the full version.
Users don't want "to do more". They want to do "that one extra thing". Going from the "novice" version to the "full version" just to get that one extra thing is a real problem for a lot of people. But how do you address this as a software designer?
I don't know if this works well in general, but for example Kodi has "basic", "advanced" and several progressively more advanced steps in between for most of its menus. It hides lots of details that are irrelevant to the majority of users.
I'm not a coder, so I'm not going to pretend that this solution is easy to implement (it might be, but I wouldn't assume so), but how about allowing you to expose the "expert" options just temporarily (to find the tool you need) and then allow adding that to your new "novice plus" custom menus? I.e., if you use a menu option from the expert menu X number of times, it just shows up even though your default is the novice view.
Progressive disclosure? If you know your audience, you probably know what most people want, and then the usual next step up for that "one extra thing". You could start with the ultra-simple basic thing, then have an option to enable the "next step feature". If needed you could have progressive options up to the full version.
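The auto-promotion idea from upthread (use an expert option X times and it joins your novice menu) could be sketched like this; the threshold and menu names are assumptions, not any real app's API:

```python
# Hypothetical "novice plus" menu: expert items that get used often enough
# are promoted into the user's default (novice) view.
from collections import Counter

PROMOTION_THRESHOLD = 3  # assumed cutoff; a real app would tune or expose this

class AdaptiveMenu:
    def __init__(self, novice_items, expert_items):
        self.novice_items = list(novice_items)
        self.expert_items = list(expert_items)
        self.usage = Counter()

    def use(self, item: str) -> None:
        """Record a use; promote an expert item once it crosses the threshold."""
        self.usage[item] += 1
        if item in self.expert_items and self.usage[item] >= PROMOTION_THRESHOLD:
            self.expert_items.remove(item)
            self.novice_items.append(item)

    def default_view(self) -> list[str]:
        return self.novice_items
```

The nice property is that the default stays minimal for everyone, while each user's "one extra thing" migrates into view on its own.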
> The casual user just wants a tool to crop screenshots and maybe draw simple shapes/lines/arrows. But once they do that they start to think of more advanced things and the simple tool starts to be seen as limiting.
Silksong Daily News went from videos of a voiceover saying "There has been no news for today" over a static image background to (sometimes) being scripted stop-motion videos.
And why exactly should free software prioritise someone's first five minutes (or first 100 hours, even) over the rest of the thousands of hours they might spend with it?
I see people using DAWs, even "pro" ones made by companies presumably interested in their bottom lines. In all cases I have no idea how to use it.
Do I complain about intuitiveness etc? Of course not. I don't know how to do something. That's my problem. Not theirs.
> And why exactly should free software prioritise someone's first five minutes (or first 100 hours, even) over the rest of the thousands of hours they might spend with it?
Well, if people fail at that first five minutes, the subsequent thousand hours most often never happens.
The inverse is also true. If you prioritize the first five minutes, your software is worthless in any industry that matters.
And that's why designers are using Photoshop and not Microsoft Paint.
See, I feel this is where programmers just don't "get" good UI design.
Photoshop is good UI design. A normie can use Photoshop the same way they use MS Paint.
It just loads slower.
A normie doesn't need all the bells and whistles. They can just use Photoshop like a glorified MS Paint.
You can't do that with GIMP. It's actually really fucking annoying if you try to use GIMP to do an MS Paint job.
> It takes a single strong-willed defender, or some sort of onerous management structure...
I'd say it's even more than you've stated. Not only for defending an existing project, but even for getting a project going in the first place a dictator* is needed.
I'm willing to be proven wrong, and I know this flies in the face of common scrum-team-everybody-owns approaches.
* benevolent or otherwise
It does suggest a possibly better solution, though: give the user a list of simple, common use-case options, or access to the full interface.
I do feel quite strongly that this should be implemented in the app though.
There must be examples of this approach already being used?
This is why i developed GatorCAM for CNC.
FreeCAD is too complicated. There are too many ways to accomplish the same task (never mind that only certain ways actually work).
So everything is simple, and there is only one way to create G-code. No hidden menus. No hidden state.
> to prevent the interface from quickly devolving back into the million options
Microsoft for a loooong time had that figured out pretty well:
- The stuff that people needed every day and liked to customize the most was directly reachable: a right-click on the desktop offered a shortcut to the control panel applet (CPL) for display and desktop icons.
- More detailed stuff? A CPL that could be reached from the System Settings
- Stuff that was low level but still needed to be exposed somewhat? msconfig.
- Stuff that you'd need to touch very rarely, but absolutely needed the option to customize it for entire fleets? Group Policy.
- Really REALLY exotic stuff? Registry only.
In the end it was all the Registry under the hood, but there were so many ways to access these registry keys depending on what level of user you were. Nowadays? It's a fucking nightmare; the last truly decent Windows was 7, 10 is "barely acceptable" in my eyes, and Windows 11 can go and die in a fire.
Then we have to wait until 'normal' software becomes more scary. Various vendors are doing everything in their power to make it so.
A lot of this type of stuff boils down to what you're used to.
My wife is not particularly tech savvy. She is a Linux user, however. When we started a new business, we needed certain applications that only run on Windows and since she would be at the brick and mortar location full time, I figured we could multi-purpose a new laptop for her and have her switch to Windows.
She hated it and begged for us to get a dedicated Windows laptop for that stuff so she could go back to Linux.
Some of you might suggest that she has me for tech support, which is true, but I can't actually remember the last time she asked me to troubleshoot something for her with her laptop. The occasions that do come to mind are usually hardware failure related.
Obviously, the thing about generalizations is that they're never going to fit all individuals uniformly. My wife might be an edge case. But she feels at home using Linux, as it's what she's used to ... and strongly loathed using Windows when it was offered to her.
I feel that kind of way about Mac vs PC as well. I am a lifelong PC user, and also a "power user." I have extremely particular preferences when it comes to my UI and keyboard mappings and fonts and windowing features. When I was forced to use a Mac for work, I honestly considered looking for a different position because it was just that painful for me. Nothing wrong with Mac OS X, a lot of people love it. But I was 10% as productive on it when compared to what I'm used to... and I'm "old dog" enough that it was just too much change to be able to bear and work with.
One summer in middle school our family computer failed. We bought a new motherboard from Microcenter but it didn’t come with a Windows license, so I proposed we just try Ubuntu for a while.
My mom had no trouble adjusting to it. It was all just computer to her in some ways.
Same, my mom ran Linux for years in the Vista days cuz her PC was too slow for Windows. She was fine. She even preferred Libreoffice over the Office ribbon interface.
Sometime around 2012, Windows XP started having issues on my parents' PC, so I installed Xubuntu on it (my preferred distro at the time). I told them that "it works like Windows", showed them how to check email, browse the web, play solitaire, and shut down. Even the random HP printer + scanner they had worked great! I went back home 2 states away, and expected a call from them to "put it back to what it was", but it never happened. (The closest was Mom wondering why solitaire (the gnome-games version) was different, so I guided her on how to change the game type to klondike.)
If "it [Xubuntu] works like Windows" offended you, I'd like to point out that normies don't care about how operating system kernels are designed. You're part of the problem this simplified Handbrake UI tries to solve. Normies care about things like a start menu, and that the X in the corner closes programs. The interface is paramount for non-technical users.
I currently work in the refurb division of an e-waste recycling company.[0] Most everyone else there installs Ubuntu on laptops (we don't have the license to sell things with Windows), and I started to initially, but an error always appeared on boot. Consider unpacking it and turning it on for the first time, and an error immediately appears: would you wonder if what you just bought is already broken? I eventually settled on Linux Mint with the OEM install option.
[0] https://www.ebay.com/str/evolutionecycling
Mint is definitely what I recommend to people who hate windows now but are nervous about swapping to Linux. Bazzite if they’re gamers.
Oh good it's that time of the week when we HN users get together to tell lies about all of our tech illiterate family members who use Linux full time with zero problems and zero tech support.
Familiarity is massively undersold in the Linux desktop adoption discussion. Having desktop environments that are near 1:1 clones of the commercial platforms (preferably paired with a distribution that's designed to be bulletproof and practically never requires its user to fire up a terminal window) would go so far for making Linux viable for users sitting in the middle of the bell curve of technical capability.
It's one of those situations where "close enough" isn't. The fine details matter.
The main problem with this is that the commercial offerings are pretty much just bad.
Windows isn't the way it is because of some purposeful design or anything. No, it's decades of poor decisions after poor decisions. Nothing, and I do mean nothing, is intuitive on Windows. It's familiar! But it is not intuitive.
If you conform to what these commercial offerings do, you are actively making your software worse. On purpose. You're actively programming in baggage from 25 years ago... in your greenfield project.
I don't even think Windows has remained very familiar, aside from the taskbar (which also changed in Win11) and the fact that there are desktop icons and you install things via double-clicking (and even double-click installing has partly changed with the Microsoft Store; MSI installers are almost entirely gone these days, and totally different UIs pop up now). Even core things that people definitely use, like the uninstallation and settings UIs, have changed completely for the worse. Windows has also changed a lot of its core UI over the years: the taskbar, the clock, the start menu, etc. I guess one thing you could say is that it was a gradual change over many versions, but every time, people hated it. Really, what Linux should have done is what Windows did with WSL: offer a built-in compatibility layer so that you can install Windows apps on Linux, perhaps prompting you to enter a Windows license and then launching those apps in a VM, even per window/app.
Lol, have you not noticed how every version of windows moves everything and the users are no longer able to do anything?
What do you see as wrong or missing "fine details" in, say, Cinnamon?
Assuming that the point of comparison is Windows (since it’s a rough XP/7 analogue), any difference in behaviors, patterns, or conventions that might differ from what a long time Windows user would expect, including things that some might write off as insignificant. In particular, anything relating to the user’s muscle memory (such as key shortcuts, menu item positions, etc) needs to match.
The DE needs to be as close to a drop-in replacement as possible while remaining legally distinct. The less the user needs to relearn the better.
For example, practically every text box in practically every Linux system handles ctrl+backspace by deleting a word. This clashes with a Windows user's expectation that ctrl+backspace deletes a word in some system applications while inserting a small square character in others.
> Familiarity is massively undersold in the Linux desktop adoption discussion
Totally agree. My first distro was Elementary because it was sold to me as Mac-like. It’s…sort of that, but it was enough for me to stick with it and now I’ve tried 3 other distros! Elementary is still in place in my n150 server. Bazzite for my big gaming machine. Messed with Mint briefly, wasn’t for me but I appreciated what it was.
Familiarity is so important.
What you're used to is definitely a huge part of it. But I do think 10-15 years ago Linux was easier to break than Windows, because it didn't make any effort to hide away the bits that let you break it. This was mainly a matter of taste. People who know what they're doing don't want to use some sanitised sandbox.
Linux was like a racing car: raw and direct. Every control was directly connected to powerful mechanical components, like the throttle, clutch and steering rack. It did exactly what you told it to do, but being good at it required learning to finesse the controls. If you failed, the lessons were delivered swiftly and harshly: you would lose traction, spin and crash.
Windows was more like a daily driver. Things were "easier", but at the cost of having less raw control and power, like a clutch with a huge dual-mass flywheel. It's not like you can't break a daily driver, any experienced computer guy has surely broken Windows more than once, but you can just do more within the confines of the sandbox. Linux required you to leave it.
It's different now. Distros like Ubuntu do almost everything most people want without having to leave the sandbox. The beautiful part about Linux, though, is that it's still all there if you want it, and nice to use if you get there, because it's built and designed by people who actually do that stuff. Nowadays I tend to agree it is mostly just what you're used to and what you've already learnt.
> When I was forced to use a Mac for work, I honestly considered looking for a different position because it was just that painful for me.
I share this aversion. I have a Mac book work sent me, sitting next to me right now, that I never use. Luckily I’m able to access the vpn via Linux and all the apps I need have web interfaces (office 365).
won't you get in trouble for using a personal device for accessing work resources?
I grew up using Windows but have been using Linux and Mac almost exclusively for the past fifteen years; the only exposure I get to Windows is when I have to play tech support for my parents [1].
I hated OS X when I first used it. A lot, actually. I didn't consider leaving my job over it (I couldn't have afforded to at the time even if I had wanted to), but I did think about issuing an ultimatum to that employer: buy me a computer with Windows or let me install Linux on the MacBook (this was 2012, so it had an Intel chip). I got let go from that job before I really got the chance (which itself is a whole strange story), but regardless, I really hated macOS at the time.
It wasn't until a few years later and a couple jobs after that I ended up growing to really like macOS, when Mavericks released, and a few years later, I actually ended up getting a job at Apple and I refuse to allow anyone to run Windows in my house.
My point is, I think people can actually learn and appreciate new platforms if they're given a chance.
[1] https://news.ycombinator.com/item?id=45708530
I agree, people can learn and appreciate if given the chance. But they have more important things to do, so changing OS is just a distraction.
I know, techies love to love or hate the OS. Here there are endless threads waxing lyrical about Windows, macOS or a dozen Linux installs. But 99% of users couldn't care less.
It's kinda like cars. Petrol heads will talk cars for ages. Engine specs. What brand of oil. Gearbox ratios. Whereas I'm like 99% of people - I get in my car to go somewhere. Pretty much the only "feature" a car needs is to make me not worry about getting there.
So for 97% of people the "best" OS is the one they don't notice. The one that's invisible because they want to run a program, and it just runs.
The problem with switching my mom to Linux is not the OS. It's all the programs she uses. And while they might (or might not) be "equivalent" they're not the same. And I'm not interested in re-teaching her every bit of software, and she's not interested in relearning every bit of software.
She's not on "a journey" of software discovery. She has arrived. Changing now is just a waste of time she could be gardening or whatever.
The reason it'll never be the year for Linux Desktop is the same reason it's always been - it's not there already.
I mostly agree with you, though one of the few good things about Electron taking over the desktop means that an increasing number of programs are getting direct ports to Linux. A guy can dream at least.
> And I'm not interested in re-teaching her every bit of software, and she's not interested in relearning every bit of software.
I don't see Windows as having much of an edge there. Lots of things seem to change on Windows just for change's sake. I get so tired of the churn on Windows versions and finding how to disable the new crummy features. If you want to avoid relearning all the time, something simple like XFCE is going to be way better.
And Linux won't arbitrarily irrevocably brick your computer because of an automatic update. In my opinion, having your computer bricked because of an automatic update is a very large change to adapt to.
I feel the need to constantly reiterate this; if someone who works on Windows Update reads this, please consider a different career, because you are categorically terrible at your job. There are plenty of jobs out there that don't involve software engineering.
> And Linux won't arbitrarily irrevocably brick your computer because of an automatic update.
To the average user, it absolutely will. Unless they happen to run on particularly well-supported hardware, the days of console tinkering aren't gone, even on major distros.
What's fixable to the average Linux user and what's fixable to the average person (whose job is not to run Linux) are two very, very different things.
People want features, and they're willing to learn complicated UIs to get them. Software with hyper-simplified options has a very limited audience. Take his example: we have somebody who has somehow obtained a "weird" video file, yet whose understanding of video amounts to wanting it to be "normal" so they can play it. For such a person, there are two paths: become familiar enough with video formats that you understand exactly what you want, and correspondingly can manipulate a tool like Handbrake to get it, or stick to your walled-garden-padded-room reality where somebody else gives you a video file that works. Software that appeals to the weird purgatory in the middle necessarily has a very limited audience. In practice, this small audience is served by websites. Someone searches "convert x to y" and a website comes up that does the conversion. Knowing some specialized software that does that task (and only that one narrow task) puts you so far into the domain of the specialist that you can manage to figure out a specialist tool.
The walled gardens got a lot more appealing.
When we moved to Canada from the UK in 2010 there was no real way to access BBC content in a timely manner. My dad learned how to use a VPN and Handbrake to rip BBC iPlayer content and encode it for use on an Apple TV.
You had to do this if you wanted to access the content. The market did not provide any alternative.
Nowadays BBC have a BritBox subscription service. As someone in this middle space, my dad promptly bought a subscription and probably has never fired up Handbrake since.
For this example:
> we have somebody who has somehow obtained a "weird" video file
Why are you arriving at the conclusion that this requires complex software, rather than just a simple UI that says "Drop video file here" and "Fix It" below? E.g., instead of your conclusion "stick to your walled-garden-padded-room reality where somebody else gives you a video file that works", another possibility is the simple UI I described? That seemed to me the point of the post.
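A sketch of what the "Fix It" button might do under the hood: pick one set of widely compatible defaults and hand everything else to ffmpeg. The codec and container choices here are assumptions about what "normal" means (roughly H.264 + AAC in MP4), not Handbrake's actual defaults:

```python
# Hypothetical "Fix It" backend: convert any input into one boring, widely
# playable format. The flag choices are illustrative, not any app's real code.
from pathlib import Path

def fix_it_command(input_file: str) -> list[str]:
    """Build an ffmpeg command that re-encodes `input_file` to H.264/AAC MP4."""
    out = str(Path(input_file).with_suffix("")) + "_fixed.mp4"
    return [
        "ffmpeg", "-i", input_file,
        "-c:v", "libx264", "-crf", "23",  # default-quality H.264 video
        "-c:a", "aac",                    # AAC audio
        "-movflags", "+faststart",        # playback can start before the file fully loads
        out,
    ]
```

The whole UI then collapses to a drop target and one button, because every decision a "normal" user would be asked to make has already been made for them.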
The issue is that downloading software, for most people, implies an investment in the task the software does that is unlikely to pay off if it only does a single simple task. If I'm going out of my way to download something, then I'm probably willing to learn a few knobs that give me more control. Hence my suggestion that such a person would rather use a website.
This is really just my read for why this sort of software isn't more common. Go ahead and make it, and if it ends up being popular I'll look the fool.
> an investment in the task the software does that is unlikely to be paid off if it only does a single simple task
I don't think that's true at all. The tool linked here is exactly the kind of utility that does one single task and that people are happy to download. Most people use software to solve a problem, not to play around with it and figure out if they have a use for it.
Conversely, people want familiar UIs that they’re familiar with, and are willing to forgo features to use them.
> Free audio editing software that requires hours of learning to be useful for simple tasks.
To be fair, the Audacity UX designer made a massive video about the next UX redesign and how he tried to get rid of "modes" and the "Audacity says no" problem:
https://www.youtube.com/watch?v=QYM3TWf_G38
So this problem should get better in the future. Good UX (doesn't necessarily have to have a flashy UI, but just a good UX) in free software is often lacking or an afterthought.
UX is the biggest debt.
You're making an application for yourself, and somewhere down the pipeline you decide it could benefit others, so you make it open-source.
People growl at you, "It's ugly UX but nice features," when it was originally designed for your own tastes. Or the reverse: people growl at you for "not having X feature, but nice UX".
Your own personal design isn't one-size-fits-all, and designing mocks takes effort. Mental strain and stress; pleasing folks is hard. You now continue developing and redesign the foundations.
A theming engine, you think. This becomes top priority, as integrating one later, coupled with future features, becomes a PITA.
That itself becomes a black hole of hows and schematics. So now you're forever doomed to creating something you never desired, for people who will probably never use it. This causes your project to fail, but at least you have multiple revisions of the theming engine. Or you strike it lucky and gain a volunteer.
The problem with the new Audacity isn't the new version, it's that it replaces the old version. If the new version came out but it was called "DARing" and Audacity continued to be the thing we have now, people might question the name but no other eyes would be batted.
Pre-emptive anti-snark: yes, the old version will still exist... if you can dig up the right github commit and still make it compile in 2030.
Well, Tantacrul did answer that objection: it just shows you a popup dialog on first start: "which theme do you want" (colorful or colorless, light / dark) and "which experience do you want" (classic / new). So if you pick the "colorless, light, classic" option, it's going to look pretty much like the current Audacity, except that they moved from wxWidgets to Qt.
The "modal disruption" argument is misguided: he cites as the challenge a very poor implementation in an MS app where the modes were barely visible! That's not proof that modes are bad, just a statement that invisible information makes it hard for users to adapt. Brushes (another mode he cites as great) are great precisely because their state is immediately visible in your focus area: your primary pointer changes.
Now he got rid of the modes by adding handles and border actions, which 1) wastes some space that could be used for information, 2) requires more precision from users, because now to perform the action you must target a tiny handle/border area, and 3) the same, but for other actions, since now you have to avoid those extra areas to do other tasks.
While this might be fine for casual users as it's more visible, the proper way out is, of course,... MODES and better ones! Let the default be some more casual mode with your handles, but then let users who want more ergonomics use a keybind to allow moving the audio segment by pressing anywhere in that segment, not just in the tiny handle at the top. And then you could also add all those handles to visually indicate that now segments are movable or turn your pointer into a holding hand etc.
Same thing in the example - instead of creating a whole new separate app with a button you could have a "1-button magicbrake" mode in handbrake
Having actually used Audacity, the modes were horrid and not at all intuitive to use, and everything demonstrated in the video looked like a vast improvement (aside from the logo). I am failing to see how adding handles wastes space that could be used for any extra information, especially when the tradeoff is an incredible degree of customisation for my UI. In terms of precision, they're working on accessibility issues, but I'm not sure how this change is any different from any other UI.
> I am failing to see how adding handles wastes space that could be used for any extra information
What is there to see? You add a bar that takes space. That space can be taken up by something useful. Just like you have apps that hide app title bar and app menus so you can have more space for your precious content. This is especially useful for high-info-density apps like these audio/video/photo authoring ones. Note how tiny those handles are in the video, why do you think that is?
> tradeoff is an incredible degree of customisation
You don't have that tradeoff, neither of the 2 solutions are anywhere close to "incredible customization", so you can pick either without it.
> In terms of precision, they're working on accessibility issues
Working towards what magic solution?
> but I'm not sure how this change is any special than any other UI.
why does it have to be special? Just a bog standard degradation common to any UI (re)design, nothing special about it.
> the modes were horrid
Of course they were. Just like they were horrid in that MS Paint app the dev worked on before. But you can make any UI primitive horrid, even buttons, that's no reason to remove them, but to improve them!
> 80% of the people only need 20% of the features.
I also heard that, once you try to apply this concept, you see that everyone needs a different 20%. Any thoughts on this?
Some reasons for this:
1. Free software is developed for the developer's own needs and developers are going to be power users.
2. The cost to expose options is low so from the developer's perspective it's low effort to add high value (perceiving the options as valuable).
3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user and does like the options. Installing it for family and friends doesn't work.
Probably many other factors!
It takes a lot of time and energy to refine and maintain a minimalistic interface. You are intentionally narrowing the audience. If you are an open source developer with limited time you probably aren't going to invest in that.
That’s one of the great things about the approach demonstrated in the post. The developers of Handbrake don’t need to invest any time or energy in a minimalist interface. They can continue to maintain their feature-rich software exactly as it is. Meanwhile, there is also a simple, easy front end available for people who need or want it.
> 4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user and does like the options. Installing it for family and friends doesn't work.
I have seen many comments, by lay people, about Sonobus [0] being superb at what it does and impressive for being 100% free. That's a niche case that, if it were implemented in Ardour, would fit the same problem OP describes.
[0] https://sonobus.net/
However, I can't see where this problem of FOSS UX scaring normal people lies. Someone getting an .h264 and a .wav file out of a video recording isn't normal, after all. There are plenty of converters on the web; I don't know if they run ffmpeg on their servers, but I wouldn't be surprised. The real problem is that the whole digital infrastructure runs on FOSS without giving anything back. Power-user software shouldn't simplify stuff. Tech literacy can hopefully be a thing: quickly learning how to import and export a file in a complex piece of software beats installing 5 different limited programs over the years as your demands grow.
> 1. Free software is developed for the developer's own needs and developers are going to be power users
* Free software which gains popularity is developed for the needs of many people - the users who make requests and complaints and the developers.
* Developers who write for a larger audience naturally think of more users' needs. It's true that they typically cater more to making features available than to simplicity of the UI and ease of UX.
> 2. The cost etc.
Agreed!
> 3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
The developer typically knows what the popular use cases would be. Like with the handbrake example. They also pretty much know how newbie users like simplified workflows and hand-holding - but it's often a lot of hassle to create the simplified-with-semi-hidden-advanced-mode interface.
> 4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user
Are people who install, say, the Chrome browser on their PC to be considered power users? They downloaded and installed it themselves, after all... No, I believe you're creating a false dichotomy. Some users will never install anything; some might install common software they've heard about from friends; and some might actively look for software to install, even though they don't know much about it or about how to operate the apps and OS facilities they already have. All of these are mostly non-power-users.
> It’s a bit like obscuring the less-used functions on a TV remote with tape. The functions still exist if you need them, but you’re not required to contend with them just to turn the TV on.
For telling software devs to embrace traditional design wisdom, TV remotes are an interesting example, because aside from the commonly used functionality people actually care about (channels, volume, on/off, maybe subtitles/audio language), the rest should just be hidden under a menu, and the fact that this isn't the case demonstrates bad design.
It's probably some legacy garbage, along the lines of everyone having an idea for what a TV remote is "supposed" to look like and therefore the manufacturers putting on buttons in plain view that will never get used and that you'd sometimes need the manual to even understand.
At the same time, it might also be possible that the FOSS software that's made for power users or even just people with needs that are slightly more complex than the baseline is never going to be suited for a casual user - for example, dumbing down Handbrake and hiding functionality power users actually do use under a bunch of menus would be really annoying for them and would slow them down.
You can try to add "simple" and "advanced" views of your UI, but that's the real disconnect here - different users. Building simplified versions with sane defaults seems nice for when there is a userbase that needs it.
I'd say it tells us that "good design" is stupid.
Remotes are fine. Except the modern ones that have a touchpad and, like, 8 buttons, four of which are ads.
People can handle many buttons just fine. Even one year old kids don't have a problem with this, which becomes apparent if you ever watch a small child play. The only people who have a problem here are UX designers high on paternalistic approach to users.
If Handbrake scares them, don't you dare demonstrate how to use ffmpeg. I remember when I used Handbrake for the first time and thought, "wow, it's much more convenient than struggling with ffmpeg".
At least with ffmpeg, for 99% of use cases you can just google "how do I do X with ffmpeg" and get a copypasta command line.
Whereas with complicated GUI tools, you have to watch a video to learn how to do it.
I think GUI tools lend themselves more to being able to discover functionality intuitively without needing to look anything up or read a manual, and especially so if you’re coming back to a task you haven’t done in a while. With CLI I constantly have to google or ask an LLM about commands I’ve done many times, whereas with a gui if I do it once I can more easily find my way the next time. Anyway both have their place
> I think GUI tools lend themselves more to being able to discover functionality intuitively without needing to look anything up or read a manual
Well, there are different issues.
Reading a manual is the best you can do, theoretically. But Linux CLI tools have terrible manuals.
I read over the ssh man page multiple times looking for functionality that was available. But the man page failed to make that clear. I had to learn about it from random tutorials instead.
I've been reading lvm documentation recently and it shows some bizarre patterns. Stuff like "for more on this see [related man page]", where [related man page] doesn't have any "more on this". Or, here's what happens if you try to get CLI help:
1. You say `pvs --help`, and get a summary of what flags you can provide to the tool. The big one is -o, documented as `[ -o|--options String ]`. The String defines the information you want. All you have to do is provide the right "options" and you're good. What are they? Well, the --help output ends with this: "Use --longhelp to show all options and advanced commands."
2. Invoke --longhelp and you get nothing about options or advanced commands, although you do get some documentation about the syntax of referring to volumes.
3. Check the man page, and the options aren't there either. Buried inside the documentation for -o is the following sentence: "Use -o help to view the list of all available fields."
4. Back to the command line. `pvs -o help` actually will provide the relevant documentation.
Reading a manual would be fine... if it actually contained the information it was supposed to, arranged in some kind of logically-organized structure. Instead, information on any given topic is spread out across several different types of documentation, with broken cross-references and suggestions that you should try doing the wrong thing.
I'm picking on man pages here, but actually Microsoft's official documentation for their various .NET stuff has the same problem at least as badly.
It's so frustrating that most man pages explicitly go out of their way to avoid having examples or answering "how do I X" questions.
We're going full-circle, because LLMs are amazing for producing just the right incantation of arcane command-line tools. I was struggling to decrypt a file the other day and it whipped me up exactly the right openssl command to get it done.
From which I was able to then say, "Can I have the equivalent source code" and it did that too, from which I was able to spot my mistake in my original attempt. ( The KDF was using md5 not sha ).
I'm willing to bet that LLMs are also just as good at coming up with the right ffmpeg or imagemagick commands with just a vague notion of what is wanted.
Like, can we vignette the video and then add a green alien to the top corner? Sure we can (NB: I've not actually verified the result here) : https://claude.ai/share/5a63c01d-1ba9-458d-bb9d-b722367aea13
> I'm willing to bet that LLMs are also just as good at coming up with the right ffmpeg or imagemagick commands with just a vague notion of what is wanted.
They are. I've only used ffmpeg via LLM, and it's easy to get the LLM to produce the right incantation as part of a multi-step workflow.
My own lack of understanding of video formats is still a problem, but getting ffmpeg to do the right thing only takes a vague notion.
This is one of the areas where LLMs shine. To double-check the command explanations, I ask for commands that grep the relevant sections from the manual instead of relying on the LLM's output blindly.
Excellent point. Soon computer use AI agents will bridge this gap.
If you only care about converting media without tweaking anything, ffmpeg offers the simplest UI ever.
Proposing a CLI command as a candidate for "simplest UI ever" is a great gag.
Come on. "type ffmpeg, then hyphen i then the input filename then the output filename". I would've understood this when I was 8. Because I was super smart? No, because I was making a genuine effort.
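For what it's worth, that whole "simple" interface can be sketched in a couple of lines. This is a guarded, self-contained demo: the filenames are made up for illustration, a one-second test clip is synthesized first so nothing external is needed, and the example degrades gracefully if ffmpeg isn't installed.

```shell
# The whole basic interface: ffmpeg, -i, input filename, output filename.
# The output format is inferred from the output file's extension.
if command -v ffmpeg >/dev/null 2>&1; then
  # synthesize a one-second test clip so the example is self-contained
  ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 input.avi
  # the conversion itself
  ffmpeg -loglevel error -y -i input.avi output.mpg
  ls -l output.mpg
else
  echo "ffmpeg is not installed here"
fi
```

The point stands either way: the hard part isn't the typing, it's knowing the three-token shape of the command in the first place.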
The portion you've overlooked is that there is an entire population of users out there who have never seen, nor used, a command line, and telling them to "just type this out" ignores all the background command-line knowledge necessary to successfully "just type this out":
1) They have to know how to get to a command line somewhere/how (most of this group of users would be stymied right here and get no further along);
2) They now have to change the current directory of their CLI that they did get open to the location in their filesystem where the video is actually stored (for the tiny sliver who get past #1 above, this will stymie most of them, as they have no idea exactly where on disk their "Downloads" [or other meta-directory item] is actually located);
3) For the very few who actually get to this step, unless they already have ffmpeg installed on their PATH, they will get a command not found error after typing the command, ending their progress unless they now go and install ffmpeg;
4) For the very very few who would make it here, almost all of them will now have to accurately type out every character in "a-really_big_filename with spaces .mov", as they will not know anything about filename completion to let the shell do this for them. And if the filename does have spaces, and many will, they now need to somehow know 4a) that they have to escape the spaces and 4b) how to go about escaping the spaces, or they will instead get some ffmpeg error (hopefully just 'file not found', but with the extra parameters that unescaped spaces will create, it might just be a variant of "unknown option switch" error instead).
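To make 4a/4b concrete, here is a small runnable sketch of why unescaped spaces bite (the filename is invented for the demo): the shell splits the command line on whitespace before the program ever sees it, so quoting or backslash-escaping is what keeps the name together as one argument.

```shell
# Create a demo file whose name contains spaces.
mkdir -p space_demo
touch "space_demo/a really big filename with spaces .mov"

# Either quoting or backslash-escaping passes the name as ONE argument:
ls "space_demo/a really big filename with spaces .mov"
ls space_demo/a\ really\ big\ filename\ with\ spaces\ .mov

# Unquoted, the program would instead receive seven separate arguments,
# producing "file not found" or an unknown-option error.
```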
They use text inputs where you press enter to send stuff daily. Most of the hurdle is just overcoming the preconception that a black input window means hard mode.
They can right-click in the folder view of their OS file viewer. On Windows they can also just type the command into the path bar.
When you tell them the command, you could also just install it. Also you could just tell them to type the name of the app 'ffmpeg' into the OS store and press install. They do this on their phone all the time.
Well, you're cheating a bit here. You're basically assuming the user has never seen a text prompt before. Which is a good assumption.
But, if we assume the user has never seen a graphical application before, then likely all GUI tools will be useless too. What is clicking? What does this X do? What's a desktop again? I don't understand, why do I need 1 million pixels to change an mp3 to an avi? What does that window looking thing in the corner do? Oh no, I pressed the little rectangle at the top right and now it's gone, it disappeared. No not the one with the X, I think it was the other one.
Pretty much all computer use secretly relies on hundreds if not thousands of completely arbitrary decisions and bits of functionality you just have to know. Of all of that, CLI tools rely on some of the fewest assumptions by their nature: they're low fidelity, forced to be simple.
The difference is that a lot of "computer education" (as opposed to the computing education most people on this forum have) has happened with GUIs. CLI tools being "simple" doesn't mean they're understandable or even user-friendly.
Heck, even computing education (and the profession even!) has been propped up by GUIs. After my first year in CS, there were like only three to five of us in a section of forty to fifty who could compile Java from the command line, who would dare edit PATH variables. I'm pretty sure that number didn't improve by much when we graduated. A lot of professionals wouldn't touch a CLI either. I'm not saying they are bad programmers but fact of the matter is there are competent professional programmers who pretty much just expect a working machine handed to them by IT and then expect DevOps to fix Jenkins when it's borked out.
Remember: HN isn't all programmers. There are more out there.
> But, if we assume the user has never seen a graphical application before, then likely all GUI tools will be useless too.
We don't even need to assume; we just need to look at history. GUIs came with a huge amount of educational campaigning behind them, be it corporate (ads and training programs that teach users how to use a vendor's products) or governmental (computer literacy classes, computer curricula integrated into schools). That's followed, of course, by man-years upon man-years of usability studies, and by the bigger vendors keeping consistent GUI metaphors across their products.
Before all of this, users did ask the questions that you enumerated and certain demographics still do to this day.
> Of all of that, CLI tools rely on some of the least amount of assumptions by their nature - they're low fidelity, forced to be simple.
"Everything should be made as simple as possible, but not simpler." Has it occurred to you that maybe CLI tools assume too little?
How are we so blind to these beginner hurdles?
Few people are able to see through the eyes of a beginner, when they are a master.
The 4th one is a pain to teach. Every other file and directory has spaces... so I encourage liberal use of the TAB key for beginners.
To add on to this, there's no standardized way of indicating what needs to be typed out verbatim and what needs to be replaced. `foo --bar <replace me>` might be a good example command in a README, but I had to help someone the other day when they ran `foo --bar <test.txt>`, not realizing they should have replaced the < and > along with the text between them.
This describes me somewhat. I use FEA software and only recently started using it to execute jobs in CLI. I still trip over changing directories. Fortunately notepad++ has an option to open CLI with the filepath of the currently open file. I also didn't know right-click is paste in CLI. Don't use ctrl+c accidentally. But ctrl+v does work in powershell (sometimes?). "Error, command not found" is puzzling to me. Where does the software need to live relative to the directory I am using? This is all still very foreign to me, and working in CLI feels like flipping light switches in a dark room.
To answer your last question, on your operating system there is something called “PATH”. It is a user- or systemwide variable that dictates where to look for programs. It basically is a list of directories, often separated by “:” Further reading: https://www.java.com/en/download/help/path.html (this may have Java references but still applies)
The GP here appears to be on Windows, given their reference to PowerShell. And on Windows, the path separator is ";", not ":".
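On a POSIX shell, the mechanics can be shown in two commands; this is a small sketch (output will differ per machine, so none is shown) of what "command not found" actually means:

```shell
# PATH is a separator-delimited list of directories that the shell
# searches, in order, for the command name you type. (":" on POSIX
# systems; Windows uses ";".)
echo "$PATH"

# "command not found" simply means the program isn't in any of those
# directories. `command -v` shows where a command was resolved:
command -v ls
```

So installing a tool "on the PATH" just means putting its executable in one of those directories, or adding its directory to the list.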
One of the things I've noticed is that people trying to help the true beginners vastly overestimate their skill level, and when you get a couple of people all trying to help, each of them is making a completely different set of suggestions which doesn't end up helpful at all. Recently, I was helping somebody who was struggling with trying to compile and link against a C++ library on Windows, and the second person to suggest something went full-bore down the "just install and use a Linux VM cause I don't have time to help you do anything on Windows."
The reality is that we've been infantilizing users for far too long. The belief that people can't handle fundamental concepts is misguided and primarily serves to benefit abusive tech companies.
Two decades ago, users understood what "C:\Documents and Settings\username\My Documents" meant and navigated those paths easily. Yet, we decided they were too "stupid" to deal with files and file paths, hiding them away. This conveniently locked users into proprietary platforms. Your point #2 reflects a lie we've collectively accepted as reality. Sadly, too many people now can’t even imagine that a straightforward way to exchange data among different software once existed, but that's a situation we're deliberately perpetuating.
This needs to change. Users deserve the opportunity to learn and engage with their tools rather than being treated as incapable. It’s time we started empowering users for a change.
To grab a random part of an ffmpeg command in my history: "-q:a 0 -map a"
Sorry, that's pretty damn indecipherable.
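For context, here is a hedged decoding of that fragment, as comments only; the rest of the command isn't shown in the thread, so this covers just the visible flags, and the exact meaning of `-q:a` depends on which encoder the full command used.

```shell
# Decoding "-q:a 0 -map a":
#   -q:a 0   variable-bitrate audio quality; for encoders like
#            libmp3lame the scale runs 0-9, with 0 = best quality
#            (the scale is codec-dependent)
#   -map a   select only the audio streams from the input, dropping
#            video/subtitles (more commonly written "-map 0:a",
#            i.e. the audio streams of input #0)
# Together they are a typical "extract the audio at top quality"
# incantation.
```

Which, if anything, supports the point: none of that is discoverable from the flags themselves.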
Yes, your example that completely ignores the premise is pretty damn indecipherable.
"If you only care about converting media without tweaking anything"
It is easy but annoying. I nearly always find it easier to write a short script and run that rather than type terminal commands directly.
first u gotta tell them how to install ffmpeg... scary stuff
This is an interesting position because that's only simple if you already know it. From the perspective of discoverability, it's literally the worst possible UI, because a string of that length has, say, 30^30 possible combinations, among which only one will produce the desired effect, and a bash prompt gives you no indication of how to arrive at that string.
I actually think ffmpeg’s UI is simpler than Handbrake for those at all acquainted with the command line (i.e., for those who understand the concept of text-is-everything-everything-is-text). Handbrake shows you everything you can possibly fiddle with whether or not you plan on fiddling with it. Meanwhile ffmpeg hides everything, period, and you ask for specific features by typing them out. It's not great for discovery but once you get the hang of it, it is incredibly precise. One could imagine taking someone for whom Handbrake was too much and showing them “look, you just type `ffmpeg -i`, the input file, and the output file, and it does what you want”. I imagine for many people this would be a perfectly lovely interface.
FFMpeg's command line is practically a programming language.
Someone who only wants to convert from one format to another, and isn't accustomed to CLIs, is far better served by "drag the file here -> type an output filename and extension in the text box".
The problem (and the reason both FFMpeg and Handbrake exist) is that tons of people "only" want to do two or three specific tasks, all in the same general wheelhouse, but with terrible overlap.
Yes. It's been a few years since I regularly used Handbrake, but I remember thinking of it as very simple, especially with its presets-based workflow. I was used to stuff like various CLI tools, mkvmerge and its GUI, and avidemux at that time.
It struck me as a weird example in the OP because I don't really think of Handbrake as a power user tool.
Handbrake's UI is in the uncanny valley for me -- too complicated for use by laymen, and way too limiting for use by people who know what they're doing...
My dad, a total layman, was able to use handbrake as a step in digitizing old family video tapes.
I think in the context of this thread, we shouldn't overgeneralize or underestimate "normal people".
A "normal person" is just someone whose time and mental energy are focused on something other than the niche task your app aims to solve. With enough time and focus, anyone can figure out any interface. But for many, something that requires a smaller investment to achieve the results they need is preferable.
Also, even the most arcane and convoluted interfaces become usable with repetition. Normal people learn the most bureaucratic business workflows and fly through them if that is their job. Then if you dare to "improve" any aspect of it you will hear them complain that you "broke" their system.
Was he able to use it correctly, though, with exactly the right settings so that no notable loss of quality was introduced? How long did he spend randomly testing settings?
ffmpeg with disposable or LLM-backed drag-and-drop interfaces.
For certain types of tooling, UIs should be cheap, disposable, and task/workflow specific.
Actually I think this is a killer use case for local LLMs. We could finally get back to asking the computer to do something without having to learn how to string 14 different commands together to do it.
I’ve been computer touching since the mid eighties.
Exactly what golden era of computing are you harking back to, and what are you doing that requires 14 different commands?
The last thing we want for a user-friendly interface is nondeterminism. Some procedure that works today must work tomorrow if it looks like you can repeat it. LLMs can't be the answer to this. And if you go to the lengths of making the llm deterministic, with tests and all, you might as well code the thing once and for all and not ship the local llm to the end user at all.
Sorry, I see how my post lacked sufficient clarity.
The idea behind a cheap UI is not constant change, but that you have a shared engine and "app" per activity.
The particular workflow/ui doesn't need to ever change, it's more of a app/brand per activity for non-power users.
This is similar to how some apps historically (very roughly lotus notes springs to mind) are a single app but have an email interface/icon to click, or contacts, or calendar, all one underlying app but different ui entry points.
Using ffmpeg to convert one file to another remains probably my main use of general LLM web searches. This isn't to say it does a good job with that, but it's still ahead of me.
Just like with regexes, I've yet to get a wrong result from asking an LLM "give me an ffmpeg command that does this and this". With Handbrake I'm never quite sure what I'm doing: too many checkboxes and dropdowns with default values already filled in.
imo LLMs make all of these UIs unnecessary, i'm happy to use ffmpeg now
The problem is that everyone wants a different 20% of the functionality.
Actual good UI/UX design isn't trivial and it tends to require a tight feedback loop between testers, designers, implementers, and users.
A lot of FOSS simply doesn't have the resources to do that.
> The problem is that everyone wants a different 20% of the functionality.
I'm not disagreeing with your basic take, but I think this part is a little more subtle.
I'd argue that 80% of users (by raw user count) do want roughly the same 20% of functionality, most of the time.
The problem in FOSS is that average user in the FOSS ecosystem is not remotely close to the profile of that 80%. The average FOSS user is part of the 1% of power users. They actively want something different and don't even understand the mindset of the other 80% of users.
When someone comes along to a FOSS project and honestly tries to rebuild it for the 80% of users, they often end up getting a lot of hate from the established FOSS community because they just have totally different needs. It's like they don't even speak the same language.
There's a good report/study about the complexity of Microsoft Word floating around somewhere.
It was something like:
- almost everybody only uses about 20% of the features of Word
- everybody's 20% is different, but
- ~80% of the 20% is common to most users.
- on the other hand, the remaining 20% of the 20% is widely distributed and covers basically all of the product.
So if you made a version of Word with 16% of its feature set you would almost make everybody happy. But really, nobody would be happy. There's no small feature set that makes most people happy.
Kind of like how the author likely knows about the report and wanted to make a blog post about it without actually citing it. It seems like it, but 80/20 can be found in lots of places, just like 60/40 can.
Yeah but MS Word is also designed with the guidance of an army of accountants and corporate shareholders. Your study plays into that, but there's a much bigger picture when you talk about analyzing how any product came to be that has MS as a prefix.
Resources or the care, tbh. FOSS is a big umbrella and a lot of it simply isn't meant for "customers". Some FOSS apps clearly are trying to build a user base, in which case yeah the points this post makes are worth thinking about.
But many other projects, perhaps the majority, that is not their goal. By devs for devs, and I don't think there is anything wrong with that.
Pleasing customers is incredibly difficult and a never-ending treadmill. If it's not the goal then it's not a failure.
For a lot of use cases there is a strong 80% of functionality. E.g. with Handbrake, 80% of the time I am reducing the size of video screen grabs from my computer or phone. No resolution change needed, etc.
There are other times I want cropping or something similar, but it's really only 10-30% of the time. If people want to have a more custom workflow they can use an advanced UI
> tends to require a tight feedback loop between testers, designers, implementers, and users
Some FOSS projects attempt something like this, but it can become a self-reinforcing feedback loop: When you're only testing on current users, you're selecting for people who already use the software. People who already use the software were not scared away by the interface. So the current users tend to prefer the current interface.
Big software companies have the resources to gather (and pay) people for user studies to see what works and what does not for people who haven't seen the software before, or at least don't have any allegiances. If you only ever get feedback from people who have been using the software for a decade, they're going to tell you the UI must not change because they know exactly how to use it by now.
FOSS is ~99% developers, ask anyone in UI/UX to contribute to free projects and they'll look at you like you have two heads.
You don't need two different versions of the software, one that is easy and one that is powerful. You can have one version that is both easy and powerful. Key concepts here are (1) progressive disclosure and (2) constraints.
See Don Norman's Design of Everyday things.
https://www.nngroup.com/articles/progressive-disclosure/
https://www.nngroup.com/videos/positive-constraints-in-ux-wo...
Progressive disclosure can be intensely annoying to actual power users.
Definitionally, it means you're hiding (non-disclosing) features behind at least 1 secondary screen. Usually, it means hiding features behind several layers of disclosures.
Making a very simple product more powerful via progressive disclosure can be a good way to give more power to non-power users.
Making a powerful product "simpler" via progressive disclosure can annoy the hell out of power users who already use the product.
Just add an option for "advanced mode" that if clicked toggles to "basic mode". Power users are going to be looking for advanced features and only have to click it once. People who can barely read and are scared by anything advanced will get the interface they can use best the first time they open the app
That would be even better. It would take longer than an evening, though.
It's easy to make the powerful version
It's a little harder to make an easy version
Making the progressive version is very difficult. Where you can please one audience with the powerful and easy versions, you can often disappoint both with the progressive version despite it taking much more effort.
In my personal experience, you're lucky if free software has the budget (time or money) to get to easy. There's very little free software that makes it to progressive.
Relevant Steve Jobs quote: "Simple can be harder than complex: you have to work hard to get your thinking clean to make it simple."
So yes, it is hard to make the simple version. You have to have a very good understanding of what the user wants out of your product. Until you have this clarity, every feature seems important. Once you have this clarity you understand what the important features are. You make those features more prominent by giving them prime real estate, then tuck away the less important features in a less visible place. Simple things should be simple. Complex things only need to be possible.
It can get very complicated when you've built an audience where you have 10 segments that think their 10% of the use case is very important and you can only focus on a couple of segments at a time!
this is the way
For me, it's the fact that I'm running code written by some random people. The code could be malicious. I don't know unless I audit it myself and I have no time for that. Remember the XZ Utils backdoor thing from a few months ago? Well how many backdoors are there in other FOSS stuff?
How is that specific to FOSS?
It's one of the main features, just incoherently rambled and backwards.
https://en.wikipedia.org/wiki/The_Free_Software_Definition#T...
> The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
Free software can be audited for backdoors. Closed can not. Their backdoors will stay there indefinitely.
The better example for this design principle is the big green button on copy machines. The copier has many functions, but 99% of users don't bother with 99% of them.
For a little history on this design, see https://athinkingperson.com/2010/06/02/where-the-big-green-c...
I think some software -- FLOSS or otherwise -- tries to do this by hiding functionalities behind an "Advanced Mode" toggle.
Which kind of fulfills the best of both worlds: Welcoming for beginners, but full-powered for advanced users.
More software should be designed this way.
Oh man, I have literally done that to my parents’ remote controls. Actually more controls, because they still watch VHS tapes. But I have to admit it never occurred to me to do that to their software.
Logic Pro has a “masking tape” mode. If you don’t turn on “Complete Features” [0], you get a simplified version of the app that’s an easier stepping stone from GarageBand. Then check the box and bam, full access to 30 years’ accumulation of professional features in menus all over the place.
[0] https://support.apple.com/guide/logicpro/advanced-settings-l...
This has been a major UX problem for me when building my app [0] (an AI chat client for power user).
On the one hand, I want the UI to be simple and minimal enough so even non savvy users can use it.
But on the other hand, I do need to support more advanced features, with more configuration panels.
I learned that the solution in this case is “progressive disclosure”. By default, the app shows just enough UI elements to get the 90% cases done. The advanced use cases take more effort: usually enabling them in Settings, an Inspector pane, etc. Power users can easily tinker around and tweak them, while non-savvy users can stick with the default, usual UX flow.
Though even with this technique, choosing what to show by default is still not easy. I learned that I need to be clear about my Ideal Customer Profile (ICP) and optimize for that profile only.
[0]: https://boltai.com
Abstraction needs to happen on a different layer. Because your power users are already dealing with complicated stuff and you don't want to make their lives even harder.
I know about 10 people in real life who use Handbrake, and all 10 of them use it to rip Blu-ray discs and store the media files on their NAS. It will piss them off if you hide all the codec settings and replace the main screen with a giant "convert to Facebook-compatible video" button.
Instead, do it like how iina[1] packages mpv[2].
1. https://github.com/iina/iina
2. https://github.com/mpv-player/mpv
This also suggests that, in this case, it's more that the developers of Handbrake know their audience than that there is a real design failure. Maybe they'd prefer to keep the user base deliberately small?
As a UX guy, I'd like to note that the normal people aren't so great at knowing what they want, either.
I dread "Can you add a button..." Or worse, "Can you add a check box..." Not only does that make it worse for other users, it also makes it worse for you, even if you don't realize it yet.
What you need is to take their use case and imagine other ways to get there. Often that means completely turning their idea on its head. It can even help if you're not in the trenches with them, and can look at the bigger picture rather than the thing that is interfering with their current work flow.
Sometimes we really do just want a checkbox toggle though :D
E.g. an app to prevent macOS from going to sleep. I want a checkbox to also stop an external display sleeping. I don't need my entire usage of the app and my computer-feature desires analysed and interpreted in a way that would make a psychoanalyst look like a hack.
But yes in a professional setting people do use "Can we add a button" to attempt to subvert your skillset, your experience, to take control of the implementation, and to bypass solid analysis and development practices
> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
You should know the common retort - but it's different 20%! So you can't create a one-button UI that covers 80%
But the challenge is real, though mostly "unsolvable" as there is too much friction in making good easily customizable UIs
There are literally thousands of wrappers for ffmpeg (other examples: ImageMagick, Ghostscript) that do exactly that, e.g. all the commercial and dozens of open source video converters. So there is no lack of simple software for people who know little about the problem they're trying to solve (e.g. playing a downloaded mkv that their shitty preinstalled video player won't accept); the problem is rather one of knowing that open source software exists and how to find it. Googling or asking an LLM mostly presents you with software that costs money and is inferior to the open source options (and some malware).
Does it? I often ask ChatGPT such things and specifying I want free software options is enough (it often mentions which options are and aren’t free on its own).
OP should check out Gnome Circle:
https://circle.gnome.org
Why so many OSS/free-software apps look bad can be demonstrated by the (still ongoing) backlash to Gnome 3+. It just gets exhausting defending every decision to death.
Sometimes projects need the spine to say "no, we're not doing that."
GNOME 3+ developers put themselves in the inevitable (and unenviable) position of defending every decision to death because they limited the user's ability to make many, many decisions that were possible in previous versions.
There's nothing wrong with an opinionated desktop environment or even an opinionated Linux distribution. But, prior to GNOME 3, the project was highly configurable. Now it is not.
When people start up new highly opinionated projects (e.g. crunchbang, Omarchy), the feedback is generally more positive because those who try it and stick with it are the ones who like the project's opinions. The people who don't like those opinions just stop using it. There isn't a large, established base of longstanding users who are invested in workflows, features, and options.
Ideally you'd want to add selectable options for users in a way that's sustainable long-term and not just panic-adding things all over the place because of user demands. That's how you get the Handbrake situation that OP article is complaining about.
Gnome 3 was a big update, and adding options, which does happen, is not free. There were changes between Gnome 2 and 3, and adding some options "back" from Gnome 2 often really means asking for that feature to be rewritten from scratch (not all the time, but a lot of the time).
That the Gnome team has different priorities from other DEs, one of them being "keep the design consistent and sustainable," is completely valid and preferred by many users like myself.
I think it wouldn't hurt if Handbrake had a simple UI like this by default, with an "Expert" button to get to the full UI. I like how VLC also has basic and expert modes. It's a nice idea IMO.
One of my product design principles:
Concise
Focus only on the parts that are really needed or most important; hide the currently unnecessary or secondary; simply remove the truly unnecessary.
Maybe it can help you
The title of this article isn't supported. It should be "Complicated software scares normal people". You can have simple and intuitive free software and complicated and unintuitive pay software.
Meanwhile, every time Gnome makes UI adjustments along these lines, there's an outcry that it's dumbed down, copying Apple, removing features, etc.
Well, Gnome tells people that they should just know keyboard shortcuts for everything, which is literally something only power users know to do. Their entire design ethos is a weird opposition to itself: it aims to be so simple and minimal that, in order to do basic things, you have to memorize keyboard shortcuts, because there is no visual interface for doing them.
Where do they tell people to use keyboard shortcuts? I've been using Gnome 3 since it came out and I haven't encountered situations where I could do things with the keyboard that I couldn't do easily with the mouse.
Yeah, and that's because the article's advice is bad.
It works exactly for TV remote controls. Or, rather, it worked before everybody had an HDMI player or smart TVs. It doesn't work for TV remotes now either.
Handbrake is a bit like TV remotes at the turn of the century. That's an exception even among free software, and absolutely no mainstream DE is like that.
Well that's because it's all those things.
They are actually, literally, removing features. That's not an opinion, that's what is actually happening, repeatedly.
Now, maybe you say good riddance. Fine. However, it is indisputable that now the desktop is slightly less capable. The software can do less stuff than before.
There is a massive amount of compromise in a UI. Adding features adds complexity. If you need a feature, you have to accept the complexity that goes with it, and generally you are happy to. But if you don't need that complexity, you don't want it. The average person uses 5% of the features of their word processor, but there is very little overlap between any two random users, and each wants the other 95% they don't use hidden (or perhaps 90%, since there is another 5% they will need or think they will need). Gnome seems to be focusing on the 1% of features that are common to everyone, which means you can't get your 5%.
Note that I've always been a KDE user...
Well, the outcry is completely justified. Suppose a video conversion app really did have just a drop-target area and a "do it" button. It would be ridiculously bad. That kind of crutch is ok to install for illiterate users who don't know anything and won't learn anything - but:
1. Some day, those users think "Hey, I'm not happy with some setting, what do I do?" and they can do nothing.
2. The users who need more functionality can't get it - and feel like they have to constantly wrestle with the app, and that it constantly confounds and trips them up, even when they express their clear intents. It's not like the GNOME apps have a "simple mode" and an "advanced mode"
3. The GNOME apps don't even really go along those lines. You see, even non-savvy users enjoy consistency and clarity; and the GNOME people have made their icons inscrutable; take over the window manager's decorations to "do their own thing"; hide items behind a hamburger menu, as though you're on a mobile phone with no screen space; etc. So, even for non-savvy users, the UX is kind of bad. And just think of things like the GTK file picker. Man, that's a little Pandora's box right there, for the newbie and power user alike.
> Well, the outcry is completely justified. Suppose a video conversion app really did have just have a drop-target area and a "do it" button. It would be ridiculously bad. That kind of crutch is ok to install for illiterate users who don't know anything and won't learn anything
One could say the same about people who don't bother to learn ffmpeg CLI.
It's an entire desktop environment; it's not as simple as choosing between two different apps. Although people who make this complaint should probably just use KDE, maybe they've used Gnome for a long time and don't want to change.
> maybe they've used Gnome for a long time and don't want to change.
By using GNOME and staying with it as it changed, they suffered more changes than they would have by switching to KDE at any point.
Yes, they've been slowly boiled alive and that is why they are so salty and resentful about it.
I do kind of think the solution to this issue lies at the OS level. It should provide a high degree of UI and workflow standardization (via first-party apps, libraries, and guidelines). Obviously it's an incredibly high bar to meet for volunteer efforts, but the user experience starts at the OS level.

Instead of even installing a program like "Handbrake" or "Magicbrake", the OS should have a program called "Video Converter" which does what it says on the tin. There should also be a small on-device model which can parse commands like "convert a video so it can play on Facebook" and deep-link into the Video Converter app with the proper settings.

Application-level branding should also basically not exist; it's too much noise. The user should have complete control over theming and typography. There has to be a standard interaction paradigm like the classic menubar, but updated for modern needs. We need a sane, discoverable default shell language with commands that map to GUI functionality within apps, and the user should never be troubled with the eccentricities of 1970s teletype machines.
*Software with UI designed for people who aren't the median user scares the median user
Therefore: If you want lots of users, design for the median user; if you don't, this doesn't apply to you
I'd argue most software scares normal people. They only learn because of a strong intrinsic motivation (connecting with other people/access to entertainment) or work requirements which come with mandatory trainings and IT support
> I challenge you to make more of it.
Huge amounts of dumbed-down software that won't do interesting things is made. There's no need to present this challenge.
> a person who needs or wants that stuff can use Handbrake.
That's the part that is often ignored: providing the version with the features.
It is Halloween. Perhaps T-shirts can be printed with "free software" and a danger sign? Oracle and Microsoft can fund this startup!
I like the design pattern of a "basic mode" and an "advanced mode".
The "advanced mode" rarely actually covers all the needs of an advanced user (because software is never quite everything to everyone), but it's at least better at handling both types of users.
Not all free software has this problem... Mozilla and Thunderbird I've had my parents on for years. It's not a ton to learn, and they work fine.
Taking the case of Photoshop vs. Gimp - I don't think the problem is complexity, lol. It's having to relearn everything once you're used to photoshop. (Conversely, I've never shelled out for Adobe products, and now don't want to have to relearn how to edit images in photoshop or illustrator)
Let's do another one. Windows Media Player (or more modern - "Movies & TV"). Users want to click on a video file and have it play with no fuss. VLC and MPC work fine for that! If you can manage to hold onto the file associations. That's why Microsoft tries so hard to grab and maintain the file associations.
I could go on... I think the thesis of this article is right for some pieces of software, but not all. It's worth considering - "all models are wrong, but some are useful".
> Taking the case of Photoshop vs. Gimp - I don't think the problem is complexity, lol. It's having to relearn everything once you're used to photoshop. (Conversely, I've never shelled out for Adobe products, and now don't want to have to relearn how to edit images in photoshop or illustrator)
I don't think this comparison is really accurate, Adobe's suite is designed for professionals that are working in the program for hours daily (e.g., ~1000 hours annually for a creative professional). There are probably some power users of The GIMP that hit similar numbers, but Creative Cloud has ~35-40 million subscribers, these are entirely different programs for entirely different classes of users.
I think there is something deeper here: people have become scared of the unknown, therefore we need to hide things for them. But people don't have to be scared. In fact even for people who are using Handbrake comfortably, a lot of things Handbrake presents in its UI are probably unknown to them and can safely be ignored. The screenshot in the article shows that Handbrake analyzed the source video and reported it as 30 FPS, SDR, 8-bit 4:2:0, 1-1-1. I think less than a tenth of a percent of Handbrake users understand all of that. 30 FPS is reasonably understandable but 4:2:0 requires the user to understand chroma subsampling, a considerably more niche topic. And I have no idea what 1-1-1 is and I simply ignore it. My point is, when faced with unknown information and controls, why do people feel scared in the first place? Why can't they simply ignore the unknown and make sense of what they can understand? Is it because they worry that the part of the software they don't understand will damage their computer or delete all their files? Is it just the lack of computer literacy?
I do not readily empathize with people who are scared of software, because my generation grew up tinkering with software. I'd like to understand why people would become scared of software in the first place.
Not scared, time limited.
The world is a complicated place, and there is a veritable mountain of things a person could learn about nearly any subject. But sometimes I don't need or want to learn all those things - I just want to get one very specific task done. What I really appreciate is when an expert who has spent the time required to understand the nuances and tradeoffs can say "just do this."
When it comes to technology 'simple' just means that someone else made a bunch of decisions for me. If I want or need to make those decisions myself then I need more knobs.
In my comment above I specifically did not expect the user to learn and understand everything, just to have the ability to ignore it. Handbrake has good defaults, and the user would be successful if the only things they did were: open the file and then press the green button.
And scared is the word used by the original author in the title. I want to understand that emotion. I don't need someone to tell me we can't learn everything.
How do you gain the confidence that what you choose to ignore is safe to ignore?
Computer damage is one potential consequence on the extreme end. On the conservative end, the software might just not work the way you want and you waste your time. It’s a mental model you have to develop. Even as a technical power user though, I want to reduce the risk of wasting my time, or even confront the possibility that I might waste my time, if I don’t have to.
How do you know the software in the article will do what you want?
For handbrake you can pick a preset and see what happens. Or don't even do that: when you open it it'll make you pick a video file, then you can just jam the green start button and see if it gives you what you need. Very little time spent.
i mean i don’t know that the green button does what i want either so what’s your point?
Right, you don't know if either program is the right thing just by looking at it. The reason you're uncertain isn't all those options handbrake shows. You have that uncertainty no matter what. You need the same confidence with or without options. So that problem, while real, isn't an argument against showing options.
And as far as time goes, it only takes a few seconds in either scenario. You hit go, you see the progress bar is moving, you check your file a few minutes later.
if the UI is forcing me to look at these options before pressing Go, it is a signal that someone thought these were important to consider before i pressed Go. this is the gricean maxims of quantity and relation.
the decision to ignore this signal is a learned behavior that you and i have, is all i’m saying
The average person doesn't even read error messages. They know how to ignore things and hit the button that goes forward just fine. If they choose not to try the program, that's different. They don't lack the skill. (A child might lack this skill but a child is curious enough to push on so it works out anyway.)
I don’t really understand what you’re arguing anymore. Is the average person afraid of the unknown or are they capable of ignoring things?
You seem comfortable with the idea that a child not having this learned skill. I don’t know why you don’t extend that empathy towards the inexperienced in general.
My interpretation was that you're implying a big fraction of adults don't have this skill, that a typical non-technical person likely doesn't have it. I'm saying nearly every adult does have it. So I have empathy for those that truly lack it, the 1% of adults, but that empathy doesn't extend to the rest that aren't suffering that issue.
why would people choose to suffer a skill issue for a skill they have? that makes no sense to me and imo you're vastly underestimating this percent.
Quitting is easier.
you’re bringing in an unwarranted value judgement on quitting here. easier why? maybe because i have more important things to do?
It only takes a few seconds.
it's complexity. assuming binary flags, the number of different ways the tool might operate is O(2^n). if the tool isn't doing what you want, that's a gigantic search space for fixing it. hiding options and putting in sane defaults makes n smaller and exponentially reduces the search space.
people aren't afraid of doing 2^n stuff, it's just that we have a gut sense that it's gonna take more time than it's worth. i'm down to try 10-100 things, but if it's gonna be 100 million option combinations i have to tinker with, that's just not worth it.
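To make the combinatorics concrete, here is a toy Python sketch (my own illustration, nothing from the thread) that enumerates every on/off assignment of n binary options:

```python
from itertools import product

def all_configs(flags):
    """Enumerate every on/off assignment for a set of binary flags."""
    return [dict(zip(flags, values))
            for values in product([False, True], repeat=len(flags))]

# Ten binary options already mean 2**10 = 1024 configurations to try;
# hiding five of them behind sane defaults cuts that to 32.
print(len(all_configs([f"opt{i}" for i in range(10)])))  # 1024
```

Each option hidden behind a sane default halves the space a confused user might have to search, which is why defaults matter exponentially more than any single option does.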
Someone once told me “every setting you expose to your users is a decision you were too scared to make.”
That gets less true the more utility your software is expected to have.
When it comes to software intended for the general public it doesn't take bravery to decide that every user should only ever be allowed to do things exactly how you'd want them done. I might be more likely to attribute that to arrogance. Really, for something like converting audio/video I'd just see the inflexible software with few if any options as too limited for my needs (current, future, or both) and go looking for something more powerful.
It's better to not invest my time on software that is overly restrictive when more useful options are available, even if I don't need all of those options right now because it'll save me the trouble of having to hunt down something better later since I've already got what I need on my systems.
User: I'd like to be able to change my password
Dev: I'm too brave to let you do that
Like Alan Kay said about software: Simple things should be simple, complex things should be possible.
The thing is, this takes a lot of resources to get right. FOSS developers simply don't have the wherewithal - money, inclination, or taste - to do this. So, by default, there are no simple things. Everything's complex, everything needs training. And this is okay because the main users of FOSS software are others of a similar bent as the developers themselves.
For complex things there's CLI. For even more complex things there are programming languages.
The advice looks sensible, but not sure if it does more good than harm. I recall simplified user interfaces standing in the way, hiding (or simply not providing) useful knobs or information/logs. They are annoying both when using them directly as a "power user", and when less tech-savvy users approach you (as they still do with those annoyingly simplified interfaces), asking for help. Then you try to use that simplified interface, it does not work, and there is no practical way to debug or try workarounds, so you end up with an interface that even a power user cannot use. I think generally it is more useful to focus on properly working software, on documentation and informative logs, sufficient flexibility, and maybe then on UI convenience, but still not making advanced controls and verbose information completely inaccessible (as it seems to be in the provided examples).
I agree that having better documentation and properly working software is a much better idea, and that simplified user interfaces have many problems.
It’s a bit like obscuring the less-used functions on a TV remote with tape.
It’s like creating a new TV remote with fewer options.
Makes a good point, but the headline bothers me. It isn't the free that is the problem, it is the complexity.
Yep, the Adobe tools and basically all professionally-used CAD software are incredibly intimidating to 'normal people', and they ain't free.
Same problem though. Half of UX is knowing which features to include, and the other half is knowing where to put them.
Intuitive UX for the average non-nerd user is task-based. You start with the most common known goals, like sending someone money, or changing the contrast of a photo, and you put a nice big button or slider somewhere on the screen that either makes the goal happen directly or walks you through it step by step.
Professional tools are workbench-based. You get a huge list of tools scattered around the UI in various groups. Beginners don't know what most of the tools do, so they have to work out what the tools are for before they can start using them. Then, and only then, can they start using the tools in a goal-based way. Professionals already know the tradecraft, so they have the simpler - but still hard - "Which menu item does what I need?" problem.
Developer culture tends to be script-based. It's literally just lists of instructions made of cryptic combinations of words, letters, and weird punctuation characters. Beginners have to learn the words, the concepts behind them, and the associated underlying computer fundamentals at multiple levels - just to get started. And if you start with a goal - let's say you want a bot that posts on social media for you - the amount of learning if you're coming to it cold is beyond overwhelming.
FOSS has never understood this. Yes, in theory you can write your own almost anything and tinker with the source code. But the learning curve for most people is impossibly steep.
AI has some chance of bridging the gap. It's not reliable yet, but it's very obvious now that it has a chance to become a universal UI, creating custom code and control panels for specific personal goals, generating workbench UIs and explaining what the tools do if you need a more professional approach, and explaining core concepts and code structures if you want to work at that level.
And the very freedom they got with free software let them change it to suit their fit, which would have been impossible with proprietary or otherwise restricted software.
Open source UIs initially seem alien, complicated, or obscure compared to similar closed-source Windows ones. The reason is that OSS projects are built by developers primarily FOR developers, not for regular users. The design principle of "don't surprise me" and other aesthetic and ergonomic ones are not met. Examples are Gimp and other content editors like Handbrake, Firefox vs Chrome (on mobile only), even IDEs.
BUT with time and a variable amount of effort, a regular user can get accustomed to the new philosophy and be successful - whether by persistent use, by using different OSS apps in series, or by touching the command line. Happy user of Firefox, LibreOffice, Avidemux, Virt-manager (sic).
Is Firefox v chrome even relevant these days? I struggle to even think of shortcuts that aren't identical among the two browsers. Let alone UX and features.
I think the other big reason is availability of talent. FOSS is made by devs who usually are already well off and have time to contribute. You won't find as many artists or graphic designers with the same privilege. So if there are no designers on a project, you get the bare basics.
I like when there are presets for UI. You have basic and advanced options visible when you want to.
This is useful for everyone, not just non-techy types. I can't help but compare this to sites like Shadertoy that let you develop with a simple coding interface on one half of the screen and the output on the other (as opposed to the regular complexity of setting up and using a dev environment). Code goes here > {}, press this button > [], output here > (). I think we need more of that if we want to get kids into coding.
"I am new to GitHub and I have lots to say I DONT GIVE A FUCK ABOUT THE FUCKING CODE! i just want to download this stupid fucking application and use it.
WHY IS THERE CODE??? MAKE A FUCKING .EXE FILE AND GIVE IT TO ME. these dumbfucks think that everyone is a developer and understands code. well i am not and i don't understand it. I only know to download and install applications. SO WHY THE FUCK IS THERE CODE? make an EXE file and give it to me. STUPID FUCKING SMELLY NERDS"
Wow, it's actually real.
https://old.reddit.com/r/github/comments/1at9br4/i_am_new_to...
https://github.com/sherlock-project/sherlock/issues/2011
But that's another issue: developers make software for themselves vs "digital public goods for everyone".
UI/UX (which the article is about) is part of the broader approach.
I know of one company that explicitly didn't make downloads available to dissuade this kind of hard-to-support user from using their time without materially contributing anything
Free software is an anarchist mindset -- wellbeing for all, take what you need, contribute back where you can.
It's scary for folks who are used to transactional relationships to encounter these different mindsets.
Handbrake scares me and I’m a big nerd!
I’ve been ripping old DVDs recently. I just want something that feels simple from Handbrake: a video file I can play on my Apple TV that has subtitles that work (not burned in!) with video and audio quality indistinguishable from playing the DVD (don’t scale the video size or mess with the frame rate!), at as small a file size as is practical. I’m prepared for the process to be slow.
I’ve been messing with settings and reading forum posts (probably from similarly qualified neophytes) for a day now and think I’ve got something that works - though I have a nagging suspicion the file size isn’t as small as it could be and the quality isn’t as good as it could be. And despite saving it as a preset, I for some reason have to manually stop the subtitles from being burned in for every new rip.
Surely what I want is what almost everyone wants‽ Is there a simple way to get it? (I think this is a rhetorical question but would love it not to be…)
Brilliant. Love it! Say your family member or other person you support needs some free software functionality, but not the whole UI.
Grab an LLM and make a nice single button app they can use.
LLMs writing code, plus free software, a match made in heaven.
It’s also a new take on Unix “do one thing well” philosophy.
Completely agree; that's why I love old Mac software. Things were easy enough to understand for the average user, but power users still got lots of features.
These kinds of UIs are extremely hard to make.
My interpretation: Author picks a complicated piece of software, complains it's complicated.
Maybe handbrake was never meant to be used by people who need the one button solution? That one button solution exists all over the place.
It has nothing to do with free vs not-free
GNOME's libadwaita solves this beautifully. It's simple, nice looking, yet powerful. You could absolutely use it to make an ffmpeg front-end that's both fully featured and friendly to less technical users. But if your app can't, then another good option is to have a "simple mode" and "advanced mode".
And IMO, Handbrake is more complicated than CLI ffmpeg. It's really chaotic.
Constrict fits into that niche nicely.
https://github.com/Wartybix/Constrict
Although I wish Linux were easier to use -- and there are distros that aim for this, I do agree that FOSS is mostly by nerds for nerds, but it doesn't prevent other people making changes -- which is exactly what the author did.
So I'd like to welcome the author to make more apps based on FOSS.
> Although I wish Linux were easier to use [ ... ]
We're getting there. I run Linux Mint with an XFCE desktop -- an intentionally minimal setup. The system performs automatic updates and the desktop layout/experience resembles older Windows desktops before Microsoft began "improving" things. No ads, no AI.
I'm by no means an end user, but in Linux I see incremental progress toward meeting the needs of that audience. And just in time too, now that Microsoft is more aggressively enshittifying Windows.
What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
> What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
That can be a problem with Linux but in my experience searching for Windows help is usually not good either.
> What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
claude-code actually does this really well, having used it to set up gnome on my phone, and fix all my problems without having to learn anything
Yeah I agree that the difference of usability between Linux and Windows is getting much smaller, now that MSFT is trashing Windows.
I do have a Linux box, and I only have complaints about small things. Dual screens work, VSCode works, Firefox works too. Not much to complain about for a personal dev box. The ability to just `apt install` a bunch of stuff and then start compiling is pretty nice.
But again, I'm pragmatic, so if I'm doing something Windows related, I'd definitely use my Windows box.
True in many ways.
I wanted to write an article or short blog post about how Windows 10, menus and javascript, increasingly tuck away important tools/buttons in little folds. This was many months ago.
I want to write it and title it "What the tuck?" Tuck refers exactly to the kind of hidden menus that make those so-called sleek and simple UIs for the 80% of users.
The problem is that it stupefies computing literacy, especially mobile web versions.
Perhaps not every casual web browser needs to sit at a desk to learn website navigation. Then again, they may never learn anything productive on their own.
He mentions the 80/20 rule. But I wonder if what he's describing is more like 95/5. Meaning, non-techie users are massively underserved.
Completely agree with the author. Would love most power tools to start off in "simple mode" so I could recommend them to friends/family, and have a toggle for advanced mode which shows everything to power users.
I think you can see this already with websites; there are dozens of sites like "convert video to MP4" or "compress this or that". And I think they are just building a UI on top of open source tools.
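Those one-task sites are essentially thin presets over tools like ffmpeg. A minimal sketch of the idea in Python (the preset values here are my own illustrative choices, not anything endorsed by the article):

```python
def convert_to_mp4_cmd(src, dst="output.mp4"):
    """Build an ffmpeg command line with one opinionated preset.

    A one-button converter's "UI" reduces to choosing these defaults
    for the user; only the input file varies.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", "23",  # broadly compatible H.264 video
        "-c:a", "aac",                    # broadly compatible AAC audio
        dst,
    ]

print(" ".join(convert_to_mp4_cmd("holiday.avi")))
```

On a machine with ffmpeg installed you would hand the list to `subprocess.run`; every knob Handbrake exposes corresponds to one more flag a function like this would otherwise have to decide for you.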
The article complains there's too many old school Windows-type power user GUIs in the free software space. Most of which were not actually FOSS, but Freeware, or sometimes Shareware!
My criticism of Free Software is exactly the reverse. There isn't enough of that kind of stuff on Linux!
Though to be sure, the Mac category (It Has One Button) is even more underserved there, and I agree that there should be more! Heck, most of the stuff I've made for myself has one button. Do one thing and do it well! :)
I agree there isn't enough. Some programs are OK (especially command-line programs), some aren't as good as the actual good quality ones.
> Do one thing and do it well!
This does not necessarily mean that it would have only one button (or any buttons). Depending on what is being done, there may be many options for being done, although there might also be default settings. A command-line program might do it better that you only need to specify the file name, but if there are options (what options and how many options there will be depends what program it is) then you can also specify those options if the default settings are not suitable.
Over the years I've gotten really tired of this obsession with "normal people" and not just because I'm one of the so called power users. This is really part of a growing effort to hide the computer away as an implementation detail.
https://contemporary-home-computing.org/RUE/
That's what "UX" is all about. "Scripting the users", minimizing and channeling their interactions within the system. Providing one button that does exactly what they want. No need to "scare" them with magical computer technology. No need for them to have access to any of it.
It's something that should be resisted, not encouraged. Otherwise you get generations of technologically illiterate people who don't know what a directory is. Most importantly, this is how corporations justify locking us out of our own devices.
> We are giving up our last rights and freedoms for “experiences,” for the questionable comfort of “natural interaction.” But there is no natural interaction, and there are no invisible computers, there only hidden ones.
> Every victory of experience design: a new product “telling the story,” or an interface meeting the “exact needs of the customer, without fuss or bother” widens the gap in between a person and a personal computer.
> The morning after “experience design:” interface-less, disposable hardware, personal hard disc shredders, primitive customization via mechanical means, rewiring, reassembling, making holes into hard disks, in order to delete, to logout, to “view offline.”
Most people don't need a computer (full feature power, full power of choice) to solve their task, as can be seen with smartphones, which are designed more or less as appliances.
I don't want most of consumer electronics to act like a computer, it is a deficiency for me. I chose "dumb" Linux-based eBook reader instead of Android-based, because I want it to read books, full stop.
This quickly falls apart when you need to do stuff and be productive. Reading as a pastime is a different thing.
The problem is nobody makes this distinction for some reason. In my mind there's two types of software - the kind for doing things, and the kind for mostly consuming. As the wise Britney Spears once said, "there's only two types of people in the world: those that entertain, and the ones that observe"
It makes no sense for the CAD program you're building a company around to be dumbed down.
Oh, this e-reader has lots of productivity features. You can highlight words (which are later stored in a separate folder), make bookmarks, easily translate words, use screen reader, etc.
I use it mostly for work and academic papers, not for amusement.
Most of the regular simple pdf viewers on the PC don't have this kind of productivity functionality in mind. They might have some, but in general they are not designed to work with read-only text.
Some people just like to eat food, they don't want to learn how to cook it. You or I may think that's a tragedy, but I don't think e.g a dentist has an obligation to become fluent in the things that I'm competent in.
I'm no dentist, I go to dentists. I let them work, and try not to be too annoying. I learn the minimum that I need to know to follow the directions that they deliberately make very simple for me.
This will result in generations of generally dentistry ignorant people, but I am not troubled by this.
As technologically competent people, one of our desires should be to help people maintain the ignorance level that they prefer, and at every level steer them to a good outcome. Let them manage their own time. If they want privacy and control, let's make sure they can have it, rather than lecturing them about it. My grandmother is in her 90s and she doesn't want people reading her emails, listening to her calls or tracking her face. She is not prepared to deal with more than a couple of buttons, and they should be large and hopefully have pictures on them that explain what they do. It's my job to square that circle.
Please don't.
Please assume I'm smarter than I actually am -- I will figure it out no problems. I like complex interfaces that allow me to do my task faster, especially if I'm using it often (e.g. for work).
As one of the main developers of Krita said, just being free isn't good enough, the software needs to be great.
I am in favour of simplified apps like this; maybe it could be a simple toggle switch in the top right corner between simple and advanced. Similar to that stupid new version of Outlook, which I constantly have to switch back from to the old version.
I struggle to link the title with the article. Aren't Handbrake and Magicbrake both free? There are plenty of free tools which are very simple to use.
In this particular case I'd just tell people to download and use VLC Player. But I get the point.
Okay, TFA uses Handbrake as an example, but there are probably hundreds of other attempts at a simpler ffmpeg front end.
Handbrake is only popular _because_ it is so powerful, not in spite of it.
Software should find its own niche. We have ImageMagick and ffmpeg to deal with nearly all image/video functionality, but we still have a lot of one-click-to-finish tools.
> claude --dangerously-skip-permissions -p "convert happy.blarf to a small mp4 file that will work on my ipad and send it to my email"
I guess instead of a separate application, maybe some of these programs would benefit from having a 'dumb' mode where only the basic/most-used functionality is available. For example, when I run GIMP, I most often just use it to rescale the image, cut out a piece and insert it into a new image, and every time I have to look for the right options in the menus.
Would be nice for an inverse article -- which is often harder to achieve -- case in point: I wish iCloud had a power user interface.
Oh, it has one, it's just not available to you.
FOSS's issue isn't that they trust users too much, it's that they aren't taking different types of users into account.
Corporate-built software that's locked down or limited like iCloud is 100% about not trusting the users.
80% of people only use 20% of the functionality. But it’s a different 20%.
Yeah, MS took that lesson to heart with Office, and now it's a disaster to use for everyone, not just power-users.
maybe there just isn't a solution? people don't ask for a hammer that magically assembles every piece of furniture. sometimes the user of the tool needs skills to use it. UI/UX only takes you so far.
A good product manager could make a big difference to many open source projects. Someone who has real knowledge of the problem space, who can define a clear vision of what problem is being solved for which user community and who can be judicious in weighing feature requests and developing roadmaps.
> I’m the person my friends and family come to for computer-related help. (Maybe you, gentle reader, can relate.)
I proactively stopped that decades ago.
"Oh, you use Windows? Sorry, I haven't used it in over a decade so I can't help. If you have any Linux questions, let me know!"
I go one step deeper, a BSD or Haiku. No support calls ever …
> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy.
True but with a caveat: Those people rarely need the same 20% of your features.
I'd love applications that would let me choose how advanced I want the UI to be. Kinda like Windows Calculator. A toggle between basic, advanced, and some common use cases.
For example, I'd love Gimp to have a Basic mode that would hide everything except the basic tools like Crop and Brush. Like basic mspaint.
In almost all cases I don’t want to mess with the defaults, because I know diddly about video formats.
the issue is real, but i'm not sure this solves it; in this case you end up with an overly specific solution that you can't really recommend to most people (and won't become widely known)
using the remote analogy, the taped versions are useful for (many!) specific people, but shipping the remote in that configuration makes no sense
i think normal people don't want to install an app for every specific task either
maybe a solution can look like a simple interface (with good defaults!!) but with an 'advanced mode' that gives you more options... though i can't say i've seen a good example of this, so it might be fundamentally flawed as well
Are we at the point yet where we can advise people to ask ChatGPT how to install something called "FFmpeg" and have it tell them what to copy-paste into an app called "Terminal"?
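For the curious, here is a minimal sketch of the sort of command such a session usually converges on. The input filename is invented, and the specific flag choices are common defaults rather than the only reasonable ones; the sketch builds the command as a string so it can be reviewed before anything is actually run:

```python
import shlex

def suggest_ffmpeg_cmd(src: str, dst: str = "output.mp4") -> str:
    """Build (but do not run) a plays-everywhere conversion command."""
    # H.264 video + AAC audio in an MP4 container is the usual safe
    # target for QuickTime, iPads, and social-media uploads.
    args = [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", "23",  # widely supported video codec
        "-c:a", "aac", "-b:a", "128k",    # widely supported audio codec
        "-movflags", "+faststart",        # playback can start mid-download
        dst,
    ]
    return shlex.join(args)

print(suggest_ffmpeg_cmd("input.avi"))
```

Pasting the printed line into Terminal (with ffmpeg installed, e.g. via Homebrew) is the step the chatbot would walk the user through.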
I'd be scared too if I saw a check box for iPod support. I mean, when was this software last updated, the 80's?
I don't think free software has to aim to be for everyone. It's OK to build software for yourself and people like you.
Most people can't comprehend that. "If it's available publicly online and has a readme, it DEFINITELY was created for me and for all other users, right?"
This is so common that it's FOSS misconception #1 for me. They can't grasp that a developer may build software to solve only their own specific problem, and isn't interested in support, feature contributions, or other improvements and use cases.
I have tried to use GPG several times but the UX got in the way so much. I feel it did a disservice to privacy. It gatekeeps it behind an arcane UX.
Banks. Won't touch any free software, unless backed by some real humans signing huge contracts for support.
Hyperbole. I work at a bank; we use plenty of free software.
My number one principle of UI design is this:
The things the user does most frequently need to be the easiest things to do.
You expose the stuff the user needs to do quickly without a lot of fuss, and you can bury the edge cases in menus.
Sadly a lot of software has this inverted.
> 80% of the people only need 20% of the features
Yes, but those 80% all use a different subset of the 20% of features. So if you want to make them all happy, you need to implement 100% of the features.
I see this pattern so often. There is a "needlessly complicated" product. Someone thinks we can make it simpler, so we rewrite it/refactor the UI. Super clean and everything. But user X really needs that one feature! Oh, and maybe let's implement Y. A few years down the line you are back to having a "needlessly complicated" product.
If you think it could easily be done better, you don't understand the problem domain well enough yet. Real simplicity looks easy but is hard to achieve.
i think what the author is characterizing as "free software" is probably better described as "software with bad UX"
The Venn diagram is a circle!
Yes, everyone knows free software is way too complex and icky, and paid for software is super simple, minimal and user friendly.
That's why I use 3D Max, Photoshop, and Excel. For the simplicity.
i don't have a TV at home and hence very rarely "have to" use a remote (or 2 or 3 at once, as it happens), but it's a nightmare every time
Free software scares people until they have to pay for Windows.
I feel like the author wants everything to be Apple-simplified. That all users should dumb down to on/off, go and stop. Ask ChatGPT for anything else. I disagree for so many obvious reasons that it's pointless to enumerate them. We as a society need to get MORE capable, more critical, and improve our cognitive abilities; not the opposite.
I’m not sure I’d describe Apple products as simplified any more, take a look at the settings in iOS for example, it has grown in complexity with each release.
Yes, that's because Apple found out that the domain space is actually complex. Device configuration is complicated because devices are used in thousands of different permutations of environments and people.
The simplicity Apple had was always a mistake, an artifact of their hubris.
Love the example with the remote! People do need that!
His notion of "normal people" are people who use MacOS:
> Normal people often struggle with converting video. ... the format will be weird. (Weird, broadly defined, is anything that won’t play in QuickTime or upload to Facebook.)"
Except normal people don't use MacOS and don't even know what QuickTime is. Including the US, the MacOS user share is apparently ~11%. Take the US out, and that drops to something like... 6%, I guess? And Macs are expensive - prohibitively expensive for people in most countries. So, normal people use Windows I'm afraid.
https://gs.statcounter.com/os-market-share/desktop/worldwide
In fact, if you disregard the US, Linux user share seems to be around half of the Mac share, making the perspective of "normal people use Macs not FOSS" even sillier.
-----
PS - Yes, I know the quality of such statistics is low, if you can find better-quality user share analysis please post the link.
My Pinebook Pro with i3wm is really simple to use. You power it on, all it does is it asks for one of the LUKS passwords. If you miss, it will ask again. Then it's on.
You can't do anything wrong with it. There's no UI to fiddle with WiFi. It's all pre-configured to work automatically in the local WLAN (only; outside, all that's needed is to borrow someone's phone to look up the list of wifi networks in the area and type the name of the selected network into /etc/wpa_supplicant/wpa_supplicant.conf). But there's rarely any need to go out anyway, so this is almost never an issue.
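(For reference, adding a network that way is just appending a short block to the file; the SSID and passphrase below are placeholders, not a real setup:

```
# /etc/wpa_supplicant/wpa_supplicant.conf -- one block per known network
network={
    ssid="CafeWifi"
    psk="not-a-real-passphrase"
}
```

wpa_supplicant picks it up on the next reconnect.)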
There are no buttons to click, ANYWHERE. Windows don't have confusing colorful buttons in the header. You open the web browser by pressing Alt + [. It shows up immediately after about 5 seconds of loading time. So the user action <-> feedback loop is rather quick. You close it with Alt + Backspace (like deleting the last character when writing text, simple, everyone's first instinct when you want to revert last action)
The other shortcut that closes the UI picture is Alt + ]. That one opens the terminal window. You can type to the computer there, to tell it what you want. Which is usually poweroff, reboot, reboot -f (as in reboot faster). It's very simple and relatable. You don't click on your grandma to turn it off, after all. You tell it to turn off. Same here.
All in all, Alt + [ opens your day. Alt + ] gives you a way to end it. Closing the lid sometimes even suspends the notebook, so it discharges slightly more slowly in between.
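(In i3 terms, the bindings described above amount to a few lines of config. Alt is Mod1; the exact browser and terminal commands are assumptions about this particular setup:

```
# ~/.config/i3/config -- sketch of the bindings described above
bindsym Mod1+bracketleft  exec firefox               # Alt+[ opens the browser
bindsym Mod1+bracketright exec i3-sensible-terminal  # Alt+] opens a terminal
bindsym Mod1+BackSpace    kill                       # Alt+Backspace closes the window
```

That's the whole UI.)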
It's glorious. My gf uses it this way and has no issues with it (anymore). I just don't understand why she doesn't want to migrate to Linux on her own notebook. Sad.
i enjoyed your post, those remotes are too funny!!
Reminds me of aws...
If only there was an easy way to fund all the Open Source programs you like and use, so the projects who struggle with it, can put more focus into design.
Seems like a win-win, take-my-money solution; for some reason the market (and I guess that means investors) is not pursuing this as a consumer service?
Some TV remotes or air conditioner remotes now have a "boomer flap" which when engaged, hides 90% of all the buttons. The scanner software I use has something similar, novice mode and expert mode.
Ah yes, the infamous “klabing” feature. You open the manual and read “Press Kabling to kabling the whatchanathjng”.
We been knowing that.
Dunno why people assume that FOSS developers are just dummies lacking insight but otherwise champing at the bit to provide the same refinement and same customer service experience as the "open source" projects that are really just loss leaders of some commercial entity.
ffmpeg wrappers be like
This seems just…not true?
You can always cherry pick apps to fit a narrative.
FOSS apps with simple interfaces: Signal, Firefox, VLC, Gnome [1], Organic Maps, etc, the list goes on and on.
[1] it’s not a simple app but I think there’s a good argument to be made that it’s simpler/cleaner than commercial competitors.
Love it.
This is a good write-up.
In addition to this issue, I've also had good conversations with a business owner about why he chose a Windows architecture for his company. Paying money to the company created a situation where the company had a "skin-in-the-game" reason to offer support (especially back when he founded the company, because Microsoft was smaller at the time). He likes being able to trust that the people who build the architecture he relies on for his livelihood won't just get bored and wander off and will be responsive to specific concerns about the product, and he never had the perception that he could rely on that with free software.
> People benefit from stuff like this
While I agree that people generally feel better by getting something with little effort, I think that there is a longer-term disservice here.
Once upon a time, it used to be understood that repeated use of a tool would gradually make you better at it - while starting with the basics, you would gradually explore, try more features and gradually become a power user. Many applications would have a "tip of the day" mechanism that encouraged users to learn more each time. But then this "Don't Make me Think" book and mentality[0] started catching on, and we stopped expecting people to learn about the stuff that they're using daily.
We have a high percentage of "digital native" kids now reaching adulthood without knowing what a file is [1] or how to type on a keyboard [2]. Attention spans are falling rapidly, and even the median time in front of a particular screen before switching tasks is apparently down from 2.5 minutes in 2004 to 40 seconds in 2023 [3] (I shudder to think what it is now). We as a civilization have been gradually offloading all of our technical competency and agency onto software.
This is of course leading directly to agentic AI, where we (myself included) convince ourselves that the AI is allowing us to work at a higher level, deciding the 'what', while the computer takes care of the 'how' for us, but of course there's no clear delineation between the 'what' and 'how', there's just a ladder of abstraction, and as we offload more and more into software, the only 'what' we'll have left is "keep me fed and entertained".
We are rapidly rolling towards the world of Wall-E, and at this pace, we might live to see the day of AIs asking themselves "can humans think?".
[0] https://en.wikipedia.org/wiki/Don%27t_Make_Me_Think
[1] https://futurism.com/the-byte/gen-z-kids-file-systems , https://news.ycombinator.com/item?id=30253526
[2] https://www.wsj.com/lifestyle/gen-z-typing-computers-keyboar... , https://news.ycombinator.com/item?id=41402434
[3] https://www.apa.org/news/podcasts/speaking-of-psychology/att...
Why are people bothered by the money-transfer-winking so much, but not by these companies aiding and abetting a brutal and murderous regime - one engaged in a decades-long military occupation at first, and later in a genocide campaign?
I wanted to scoff at this, but the remote example is pretty on-point.
The majority of users probably want the same small subset of features from a program and the rest are just confusing noise.
When I used to be active on reddit I was following r/graphicdesign (me being a graphic designer) and one day someone asked a question about Inkscape.
Not 5 minutes after that, someone else in the comments went on a weird rant about how allegedly Inkscape and all FOSS was "communist" and "sucked" and capitalist proprietary stuff was "superior".
You get weird people on social media. Best ignored.
In this particular case someone thinks more competition is communist...
I mean, that's why we have software as a paid job, right?
Photoshop is a clustershit of UI mess and professionals use it. Then, home users, following the popularity, also use it.
Maybe we should just say free software is amazing and not a tool for home users, in order to get home users to use it.
>> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
One of the truest things I've read on HN. I've also tried to apply this concept with a small free image app I made (https://gerry7.itch.io/cool-banana). Did it for myself really, but thought others might find it useful too. Fed up with too many options.
I think we need to stop this madness.
The disaster that is "modern UX" is serving no one. Infantilizing computer users needs to stop.
Computer users hate it - everything changes all the time for the worse, everything gets hidden by more and more layers until it just goes away entirely and you're left with just having to suck it up.
"Normal people" don't even have computers anymore, some don't even have laptops, they have tablets and phones, and they don't use computer programs, they use "apps".
What we effectively get is:
- For current computer users: A downward spiral of everything sucking more with each new update.
- For potential new computer users: A decreasing incentive to use computers "Computers don't really seem to offer anything I can't do on my phone, and if I need a bigger screen I'll use my tablet with a BT keyboard"
- For the so-called "normal people" the article references (I believe the article is really both patronizing and infantilizing the average person), they're effectively people who don't want to use computers. They don't want to know how stuff works, what stuff is, or what stuff can become; they have a problem they cannot put into words and they want to not have the problem, because the moving images of the cat should be on the place with the red thing. They use their phones, their tablets, and their apps; their meager and unmotivated desire to do something beyond what their little black mirror allows them is so weak that any obstacle, any, even the "just make it work" button, is going to be more effort than they're willing (not capable of, but willing) to spend.
Thing is, regardless of particular domain, doing something in any domain requires some set of understanding and knowledge of the stuff you're going to be working with. "No, I just want to edit video, I don't want to know what a codec is" well, the medium is a part of the fucking message! NOTHING you do where you work with anything at all allows you to work with your subject without any understanding at all of what makes up that subject. You want to tell stories, but you don't want to learn how to speak, you want to write books, but you don't want to learn how to type, write or spell ? Yes, you can -dictate- it, which is, in effect, getting someone competent to do the thing for you.. You want to be a painter, but you don't care about canvas, brushes, techniques, or the differences between oil, acrylic and aquarelle, or colors or composition, just want to make picture look good? You go hire a fucking painter, you don't go whining about how painting is inherently harder than it ought to be and how it's elitist that they don't just sell a brush that makes a nice painting. (Well, it _IS_ elitist, most people would be perfectly satisfied with just ONE brush, and it should be as wide as the canvas, and it should be pre-soaked in BLUE color, come on, don't be so hard on those poor people, they just want to create something, they shouldn't have to deal with all your elitist artist crap!) yeah, buy a fucking poster!
I'm getting so sick and tired of this constant attack on the stuff _I_ use every day, the stuff _I_ live and breathe, and see it degenerated to satisfy people who don't care, and never will.. I'm pissed, because, _I_ like computers, I like computing, and I like to get to know how the stuff works, _ONCE_ and gain a deep knowledge of it, so it fits like an old glove, and I can find my way around, and then they go fuck it over, time and time again, because someone who does not want to, and never will want to, use computers, thinks it's too hard..
Yeah, I really enjoy _LISTENING_ to music, I couldn't produce a melody if my life depended on it (believe me, I've tried, and it's _NOT_ for lack of amazingly good software), it's because I suck at it, and I'm clearly not willing to invest what it takes to achieve that particular goal.. because, I like to listen to music, I am a consumer of it, not a producer, and that's not because guitars are too hard to play, it's because I'm incompetent at playing them, and my desire to play them is vastly less than my desire to listen to them.
Who are most software written for? - People who hate computers and software.
What's common about most software? - It kind of sucks more and more.
There's a reason some of the very best software on the planet is development tools, compilers, text editors, debuggers.. It's because that software is made by people who actually like using computers, and software, _FOR_ people who actually like using computers and software...
Imagine if we made cars for people who HATE to drive, made instruments for people who don't want to learn how to play.. Wrote books for people who don't want to read, and movies for people who hate watching movies. Any reason to think it's a reasonable idea to do that? Any reason to think that's how we get nice cars, beautiful instruments, interesting books and great movies ?
Fuck it. Just go pair your toaster with your "app" whatever seems particularity important.
Couldn't agree with this more. I'm even an advocate for simulating walled gardens with Free Software. Let people who need to feel swaddled in a product or a brand feel swaddled.
It also opens up opportunities for money-making, and employment in Free Software for people who do not program. The kind of hand-holding that some people prefer or need in UX is not easy to design, and the kind of marketing that leads people to the product is really the beginning of that process.
Nobody normal cares that it's a thin layer over the top of a bunch of copyleft that they wouldn't understand anyway (plenty of commercial software is a thin layer over permissively licensed stuff.) Most people I know barely know what files and directories are, and the idea of trying to learn fills them with an anxiety akin to math-phobia. Some (most?) people get a lot of anxiety about being called stupid, and they avoid the things that caused it to happen.
They do want privacy and the ownership of their own devices as much as everyone else however, they just don't know how much they're giving up when they do a particular software thing, or (like all of us) know that it is seriously difficult if not impossible to avoid the danger.
Give people mock EULAs to click through, but ones that enumerate the software's obligations to them, not their obligations to the software. Help them remain as ignorant as they want about how everything works, other than emphasizing the assurances that the GPL gives them.
> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
For those of you thinking "which 20%?" after that article from the other day — this is where good product sense comes in: knowing which 80% of people you want to use it first. You could either tack on more stuff from there to appeal to the remaining 20% of people, or you could launch another app/product/brand that appeals to another 80% of people. (e.g. shampoo for men, pens for women /s)
[dead]
I like this idea -- a simple interface/frontend for an otherwise complicated topic, for the less skilled among us. It has intriguing possibilities beyond technology ...
Q: Why does God allow so much suffering?
A: What? There is no God. We invented him.
Q: Doesn't this mean life has no purpose?
A: Create your own purpose. Eliminate the middleman.
Q: But doesn't atheism allow evil people free rein?
A: No, it's religion that does that. A religious evil person can always claim God either granted him permission or forgave him after the fact. And he won't be contradicted by God, since ... but we already covered that.
Hmm. If it works for HandBrake, it might work for life.