jauntywundrkind an hour ago

Having smaller scaling units is a convenience, yes. And yes, the mainframe world has tended to require massive upgrades to scale.

Yet, in many ways, the world we are in today is actually intensely terminal-based. When you are on Facebook or Google, ask yourself where the compute is happening. Ask yourself what you think the split is, what the ratio of client : server compute is. My money is that Facebook and Google spend far more compute than I do when I use their services. And that's with these companies spending probably billions of dollars carefully optimizing their server-side architectures to relentlessly reduce cost.

The hyperscalers use lots of small but powerful, dense servers. It could be argued that these servers more resemble a kind of shared personal computer model, since there are many dozens of them in a rack rather than one big system. But still, each of these systems hosts many dozens of requests at any given point in time and is being used as a shared system with many terminals connected to it.

Last, I do think we have mis-optimized like hell. We have almost no data points about how incredibly bad a job we are doing, almost no idea what a thin-client / terminal-based world could look like. Much of the reason for personal computers is that individuals had power over these systems, that it didn't take departmental IT approval for each use case: you have your computer and do with it as you will. That organizational complexity pushed us away from the terminal model long, long, long ago.

But today, there are enormously popular container options. Incredibly featureful container-based development tooling abounds, with web and GUI options on top. Toolbox is a great example: https://github.com/containers/toolbox I guess it could be argued that this brings a personal computer model to a shared terminal system, hybridizing the two?
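
For anyone who hasn't tried it, the day-to-day flow is roughly the following sketch (the container name "devbox" is just an illustrative placeholder):

    # create a rootless development container that shares your home directory
    toolbox create devbox

    # drop into an interactive shell inside it
    toolbox enter devbox

    # or run a single command without entering
    toolbox run --container devbox make

    # list toolbox containers and images on this machine
    toolbox list

You keep your home directory, your terminal, and your editor, but the messy per-project dependencies live in a disposable container instead of on the host.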

But I long to see better, richer shared systems, systems that strive to be interconnected and multi-user, that we terminal into. There is so much power on tap now, so many amazing many-core chips that we could be sharing very effectively and offloading work to, while sparing our batteries and drastically reducing our work:idle compute ratios.

ggm 12 hours ago

I think when commodity NAS arrived and home users came to expect reliable storage at home, combined with ubiquitous cloud storage, centrally managed systems became less interesting, to me at least. Once I realised I didn't run CPU-intensive code, the benefit was a managed, reliable filestore and access to other people's data.

What I'm left with is old-timer regret for time passing. The ozone-rich, oily smell of a well managed machine room. A display of lights showing the intensity of work on a CPU. Tapes stuttering. Well... romance is fine. We've got work to do, old timer.