tmtvl 5 hours ago

Rewriting GPL software under the MIT license is a terrible thing to do. The GPL is meant to protect and preserve what should be basic human rights. So-called "permissive" licenses are meant to provide big tech with free labour.

  • grg0 4 hours ago

    Yup, and one doesn't need to look further than FreeBSD to see what the end result is.

    I wonder, to what extent is the Linux Rust effort "generously subsidized" by corporations?

    > He is thinking about ""what we are going to leave to the next generation"". Developers starting out don't want to use COBOL, Fortran, or C, he said. They want to work with fancy stuff like Rust, Swift, Go, or Kotlin.

    Oh, think about the children! The wolf in sheep's clothing.

    • bendhoefs 4 hours ago

      What does Rust have to do with it? You can write MIT or GPL licensed code in any language.

      • bayindirh 3 hours ago

        It's not the language selection but the license selection that bothers many.

    • woodruffw 4 hours ago

      > I wonder, to what extent is the Linux Rust effort "generously subsidized" by corporations?

      I would hazard that extent is strictly less than Linux itself.

    • jmclnx 4 hours ago

      I did not see these posts before I posted something similar. I fully believe this is a direction corporations are pushing.

      I almost wonder when spyware and restrictive DRM will be added.

      • surajrmal 4 hours ago

        This is FUD. The folks funding this generally believe in the benefits. Not everything is 4D chess. There are plenty, if not more, examples of rewrites where licenses do not change.

        • bayindirh 3 hours ago

          If that's not 4D chess, and if the license is not that important, why not relicense uutils to GPLv3 then?

          • surajrmal 2 hours ago

            I don't mind GPL, but if I were to start a new project it would be MIT licensed. There is no hidden agenda in my decision to do that and it's certainly not some grand scheme by my employer to make me do that. I simply find gpl less free because it has obligations attached to it. I don't think I'm alone in feeling this way. In fact I think this may very well be the way the majority of folks feel at this point.

  • alwayslikethis 2 hours ago

    I would have said this in the past, but at this point I don't think it matters that much anymore.

    1. Code copyright has devalued a lot in general, as you can code significantly faster with LLMs and use them to launder GPL'd code into whatever you want.

    2. Big tech seems to be getting away with most other forms of abuses these days, GPL wouldn't really stop them from doing anything important.

    To the extent that code is being devalued, I would say that these LLMs are a benefit to open source overall, as devaluing code also reduces the opportunity cost of open-sourcing software. They also somewhat level the playing field between single-contributor open source projects and companies with teams of developers, because an individual is almost always limited by the rate at which he/she can write or architect code, whereas teams have significant non-code overheads (meetings, reviews, bureaucracy).

  • loufe 4 hours ago

    Thanks for sharing this thought; it hadn't occurred to me that this could be an issue. Would the choice of AGPL or MPL for a licence have satisfied your concerns?

    • bayindirh 4 hours ago

      As a person who shares OP's concerns: (A)GPL is acceptable, in v2 or v3 form. Anything permissive is not, because it allows a closed fork, which everybody wants to make in order to rob the free software ecosystem and undo what has been done over the years.

      Because "monies".

      • switchbak 4 hours ago

        Personally I haven’t seen this motive across any of the organizations I’ve worked at. They usually seem more interested in minimizing maintenance costs, which means they try to upstream changes where possible or practical.

        What would motivate a company to fork and keep private changes to a core GNU utility like chmod?

        • bayindirh 4 hours ago

          Any hardware company which would want to block 3rd party firmware from loading or executing on their systems. They can add a small handshake code to every binary on that system to authenticate via TPM or the processor's embedded secure element on start.

          They don't need to make extensive changes. Pull the latest, patch, compile, burn to FW. TaDa!

          IOW, TiVoization 2.0. GPL2 makes it very hard already, but GPL3 makes it impossible.

          With permissive licenses, it's very possible.

          ref: https://en.wikipedia.org/wiki/Tivoization
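          As a sketch of the mechanism (everything here is hypothetical; a plain hash allowlist stands in for the TPM/secure-element handshake): the firmware ships a baked-in list of approved binary hashes and refuses to run anything that doesn't match.

          ```shell
          # Hypothetical firmware-side gate: only binaries whose SHA-256 matches
          # the allowlist "burned in" at build time are allowed to execute.
          printf 'echo hello\n' > approved.sh
          sha256sum approved.sh > allowlist.sha256

          run_if_approved() {
              # -c --status: exit non-zero (quietly) if any listed hash mismatches
              if sha256sum -c --status allowlist.sha256; then
                  sh "$1"
              else
                  echo "refused: unapproved binary"
              fi
          }

          run_if_approved approved.sh            # matches the allowlist: runs
          printf 'echo patched\n' > approved.sh  # a third-party modification...
          run_if_approved approved.sh            # ...is now rejected
          ```

          GPLv3's "Installation Information" clause obliges a vendor of consumer devices to hand over whatever is needed to make a user-modified binary pass such a gate; a permissive license imposes no such obligation.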

    • gr4vityWall 4 hours ago

      While I don't share OP's concern, I believe AGPL is much more preferable, yes.

    • yjftsjthsd-h 4 hours ago

      Having both available means that someone can use it under the MIT license to produce a proprietary version.

  • carra 4 hours ago

    It would depend on what the end goal of the rewritten project is. If they pursue widespread adoption GPL licenses can definitely hinder it in many cases.

    • nine_k 4 hours ago

      AGPL? I can understand. Pure GPL 2 or even GPL 3? Never heard of that.

      • mubou 4 hours ago

        You've never heard that GPL can hinder adoption? Most codebases simply cannot use GPL code because if they do, they'd be forced to relicense under GPL (obviously). Not everyone wants to do that, even setting companies aside.

        Even people who nominally agree with the concept of Free Software might not want to be forced to use the GPL. The freedom to choose how to license one's work is also an important freedom, after all. The GPL can be confusing, so you can't fault anyone for not wanting to use it even if they agree with the spirit of the license.

        (For the record, I use GPL on some of my projects. I don't hate it, but I also like to use MIT on some projects, too.)

        • nine_k 4 hours ago

          Libraries are normally released under the LGPL, which allows their use inside your differently licensed code.

          A ton of commercial services and products somehow use Linux, the poster child of GPLv2, while also running tons of their proprietary code. Python's license is GPL-compatible, and Python is all over the place in the computing world.

          If you just want to take some code someone else wrote, for free, and alter and meld it into your commercial product, well, yes, GPL does not allow that. I don't think it's a huge impediment for legitimate use.

          I'd say that all open-source approaches have their own use cases. Certain things are easier to release under a BSD/MIT license, some make sense to release under the GPL, and some have to resort to the AGPL, to the detriment of commercial adoption. A dual restrictive open-source + paid commercial license can be the best option in many cases.

  • rerdavies 4 hours ago

    Perhaps, but MIT licensed code is Free like Air and Sunshine.

    • bayindirh 4 hours ago

      ...and can be closed anytime, if the author wants to.

      Which will happen with these tools, anyway.

      • mrighele 4 hours ago

        What is released under MIT (or BSD) will stay under that license forever, so it cannot "be closed anytime". The owner can change the license, but that will affect only future developments.

        • bayindirh 4 hours ago

          Permissive licenses sometimes allow sublicensing, which permits changing the license, and they do not require source code to be made available when changes are made.

          IOW, this is de-facto closed source distribution.

          What happens when a company decides to add their own secret sauce and release only that version? Or when tons of slightly incompatible, closed-source variants pop up? Or when upstream decides not to release future versions' source code?

          We have seen it all, and we'll see all of them again.

  • silon42 4 hours ago

    +1. Also, it is much less interesting if this is not sent upstream (even if upstream is not interested at this point).

  • surajrmal 4 hours ago

    You say that like big tech doesn't use these tools when they are GPL. The primary advantage here is that big tech employees can safely read and contribute to these tools without fear of upsetting some internal lawyers or needing explicit permission. No one is arbitrarily forking, extending, and keeping that new source hidden despite placing it in some commercial product. Working upstream is far less costly for long-term maintenance, and everyone knows that. Sure, there are exceptions, but they are far from the common case.

    • blueflow 4 hours ago

      > The primary advantage here is big tech employees can safely [...] contribute to these tools without fear of upsetting some internal lawyers

      Doesn't GPL do that better by mandating that changes must be accessible to the public? Like, if an employee worked out some patch, it would be a violation of the license to not make it accessible to the public.

      • surajrmal 4 hours ago

        No, because the fear of possibly not complying is enough to stop folks from even getting near it. I would argue that's worse for the ecosystem.

brian-armstrong 5 hours ago

> " There are between 200 and 300 dependencies in the uutils project. He said that he understood there is always a supply-chain-attack risk, "but that's a risk we are willing to take". There is more and more tooling around to help mitigate the risk, he said.

left-pad II, coming soon to a Linux distro near you

  • charlotte-fyi 5 hours ago

    A left pad incident isn't possible on crates.io. Yanking a package from the registry doesn't remove the code if you have an existing lockfile.
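    For illustration (crate name, version, and checksum are made up), a lockfile entry pins both the exact version and a content hash that cargo verifies before using a download:

    ```toml
    [[package]]
    name = "leftpad"   # hypothetical crate
    version = "1.0.3"
    source = "registry+https://github.com/rust-lang/crates.io-index"
    checksum = "9f2a41d4c4a0fe9c9f4f3f3f0b9f41d4c4a0fe9c9f4f3f3f0b9f41d4c4a0fe9c"
    ```

    If the registry copy changes or disappears, a build with this lockfile either keeps using the cached bytes or fails loudly; it can't silently pick up different code.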

    • brian-armstrong 5 hours ago

      left-pad is symbolic of dependency and supply chain issues generally. If all you took away from that incident is that there's risk only from someone unpublishing the module then you probably need to go back and think about it some more.

      • woodruffw 5 hours ago

        I think it'd be more productive to say that instead, since it's strictly more correct than comparing it to left-pad.

        (An interesting thing to consider: the worst "supply-chain" type attack in recent memory is probably xz, which has a much more traditional maintenance, development, and distribution model than the median Rust package does. I don't think Rust's ecosystem is even remotely immune to the risk of malicious packages, but I imagine the kinds of dependencies that exist in the current coreutils are much more appealing to a high-sophistication attacker because of their relative lack of publicity/transparency.)

      • charlotte-fyi 4 hours ago

        Why would I take anything away beyond the specific scope of the vulnerability to supply chain issues that NPM had? Cargo offers a variety of tools for auditing and managing dependencies that specifically mitigate supply chain issues. If your only suggestion is to not use dependencies at all, that's an extreme opinion.

      • iuyhtgbd 4 hours ago

        Don't chide people for failing to read your mind. If you wanted people to take that away, you should've said that. Using a specific example as a metonym for a larger phenomenon is a poor choice in terms of clarity. Of course people responded to the specific example.

  • johnny22 5 hours ago

    pretty sure you'd have to call it something else. I don't think the crates setup allows what exactly happened with left-pad to happen. It's much more likely to involve malicious code.

  • 6SixTy 4 hours ago

    crates.io does not allow a dependency of another crate to be arbitrarily deleted as per their usage policy section 'package ownership' paragraph 4.

  • yjftsjthsd-h 4 hours ago

    > There is more and more tooling around to help mitigate the risk, he said.

    Could anyone expand on this? I could imagine tools (better static analysis, maybe?) being able to help, but I'd really want to see details. Both to see if it really helps, and because if there is tooling to help then I want to know so I can adopt it!

    • krater23 3 hours ago

      Static analysis is embedded in Rust, but you can't mitigate intentional malicious behavior in software dependencies. That would be like a virus scanner for dependencies, and the past has shown that this doesn't work.

  • krater23 3 hours ago

    But Rust is so secure, what could ever happen? ;)

  • silon42 4 hours ago

    ... when rewriting Linux system software, I'd only use Rust dependencies from a distro (probably something LTS, like Debian stable, or such).

    • surajrmal 4 hours ago

      Given how simple it is to use cargo, and that avoiding the per-distro dependency problem is one of the reasons people like Rust, what are the chances people actually bother doing that? Especially given that most dependencies are statically linked.

  • ajross 4 hours ago

    I was a little horrified to see that quote, and really hope there's some context that makes it less of a disaster. That is not an appropriate answer or attitude. Also... what "tooling" is he talking about?!

    The simple truth is that in "modern" package systems optimized around Reuse-Uber-Alles principles, the ability of J. Random Attacker to "get code into" a downstream app is much, much higher than it was in the days of coarse-grained projects. We need to start dealing with that as a problem to be solved and not excused away.

  • preisschild 4 hours ago

    Just using dependencies isn't bad IMO. Your code might be of much higher quality when you use libraries that are used by many other packages instead of coding your own stuff that is only used by your package and thus less reviewed / less improved upon.

    • grandempire 4 hours ago

      How many people do you think are reviewing stuff? Does more reviews make code better?

    • ajross 4 hours ago

      That is much less true than you think. It's undeniably true for "complicated" stuff. If you as an app developer roll your own DEFLATE implementation vs. using zlib, you're being an idiot, etc... But the lines around those utilities have long since been drawn already, and traditional open source projects have already organized themselves around this. We don't need crates.io to put libz.so into a separate package, it's already there.

      Instead, what's left over is a bunch of random junk that saves developers 20-30 minutes of typing and Stack Overflow research. "Here's a small package to automate the creation of a zip file with this format and add a manifest file to it", stuff like that.

      And more, downstreams tend not to use the whole package anyway. So you end up importing a "small" 2000-line crate just to use 7% of it. The "code quality" calculus tends to invert very rapidly when you have that kind of ratio.

      • preisschild 2 hours ago

        > And more, downstreams tend not to use the whole package anyway. So you end up importing a "small" 2000-line crate just to use 7% of it.

        Does that really matter? The compiler only includes the stuff you actually use anyways.

        • ajross 2 hours ago

          > The compiler only includes the stuff you actually use anyways.

          Goodness, no. The compiler can elide unreferenced symbols, that's not at all the same thing as "stuff you actually use". Just build a static glibc binary someday around "int main(void) { return 0; }" for a reference as to just how much stuff can get sucked in even if you think you aren't using it.
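          That experiment takes about a minute on a typical Linux box with gcc and glibc's static archives installed:

          ```shell
          # An "empty" program still drags in a lot of glibc when linked statically.
          printf 'int main(void) { return 0; }\n' > empty.c
          cc -o empty_dyn empty.c
          cc -static -o empty_static empty.c   # needs libc.a installed
          wc -c empty_dyn empty_static         # the static binary is vastly larger
          nm empty_static | wc -l              # symbols you never asked for
          ```

          On a glibc system the static binary typically weighs in at several hundred kilobytes, all for a program that does nothing.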

          In fact, "unexpectedly included feature" was part of the xz-utils attack last year! The backdoor leveraged the fact that distro builds of the openssh daemon linked against libsystemd for service-readiness notification, which in turn links against liblzma (for reading compressed journal data), despite xz not being required for anything in the ssh protocol. Boom.

          And in that case, the two dependencies (systemd and xz-utils) were inarguably in the "complicated" category that apps can't be expected to reimplement. Think how much more complicated this gets if every bit of junk logic becomes a "dependency".

          People need to be thinking about this as a problem!

          • preisschild an hour ago

            Thanks. Definitely have to read more into this.

deivid 5 hours ago

It's a fun pastime. I'm rewriting mdadm in Rust: https://github.com/DavidVentura/mdadm-rs

Mostly, I am tired of tools requiring root access, or a block device, to function, even in read only mode.

If you have a file on disk (eg: a VM's disk) mdadm will refuse to show metadata, requiring root to do so.

  • yjftsjthsd-h 5 hours ago

    > If you have a file on disk (eg: a VM's disk) mdadm will refuse to show metadata, requiring root to do so.

    If that's an artificial limitation, surely it should be easy to fix?

    • blueflow 5 hours ago

      Did the previous developers put it there for fun? Probably not.

      • nine_k 4 hours ago

        The previous developers might have written it at the time when VM disk RAID images were not a consideration. (Apparently it was Linux 3.0, 2011.)

        • yjftsjthsd-h 4 hours ago

          But even then, the device nodes should be protected by file ownership+permissions; why does the tool do its own check?

jmclnx 4 hours ago

Seems this project is MIT-licensed. That is fine, but I cannot help thinking this is a way to free corporations from following the GPL.

I wonder, if Linux is rewritten in Rust, will it too remove the GPL as a factor?

Again, due to the license choice, I tend to believe this can be seen as a way to move Linux toward a Microsoft-style Windows system.

  • jeroenhd 4 hours ago

    There's no specific reason for Rust code not to be GPL licensed. For whatever reason, many Rust devs choose not to use copyleft licences in their projects, but that doesn't preclude GPL projects from using Rust.

    I would've preferred projects like these to be GPL licensed too, but as the author writes in the comments (https://lwn.net/Articles/1009647/): what's the real-world impact of this specific project being MIT?

    Commercial UNIX systems are pretty much dead, and I doubt the ones that still want to ship customised versions of these tools will respect the GPLv3 licence. I believe copyleft is essential for things like kernels, but for userspace tooling where plenty of alternatives exist, I don't think it's as important.

    • jillesvangurp 2 hours ago

      There's no good reason for people to use the GPL either. People project all sorts of idealistic stuff onto this license, but the reality is that healthy projects with a diverse contributor base are not at any real risk of their code being hijacked.

      There are plenty of permissively licensed projects that have been around for decades. Copyleft licenses don't provide much additional protection. They impose a requirement on people that modify the software to provide those modifications under the same license. That's about it. Imposing that requirement is important to some people but not really that essential for the long term health of open source projects.

      If you want to create a fork of OpenBSD kernel and call it DrEvilBSD and release it under the 100% proprietary DrEvil License 1.0, you can do that of course. The license allows you to do that. You are required to preserve the license and copyright notice, of course. For copyleft proponents, this is a wrong that needs to be corrected and they'll use big words like theft and stealing. For permissive software people this is a feature, not a bug. Do whatever you want with the software. These are both valid points of view to hold.

      In practice, long lived OSS projects get more protection from the fact that they have a large amount of copyright holders (everybody that ever contributed to the code) which makes any form of re-licensing impractical. The Linux kernel will never re-license. Nor will the OpenBSD kernel. It would take the permission of many thousands/tens of thousands of developers; or their surviving relatives (quite a few are no longer alive). Not going to happen.

      Anyway, the MIT license is a perfectly good license. It's widely used, well understood, easy to read, etc. It has been around for decades, and countless OSS projects use it. It's a perfectly fine choice if your goal is to let others use your software in whatever way works for them, whether that's bundling it into some proprietary firmware or distributing it in some OSS Linux distribution. Giving users of the software that freedom is a fine choice.

      And kind of important for operating systems. Unless your goal is to keep it out of the hands of companies that sell products to customers based on these operating systems. Which might be why there aren't many AGPL or GPLv3 licensed operating systems.

      For a project like this coreutils rewrite, the MIT license makes sense. It maximizes usability across diverse systems (Linux, BSD, proprietary UNIX, Windows, etc.) without imposing restrictions. That's likely an intentional choice to ensure broad adoption, and there's no good reason to restrict it. Probably the developers want to see this go wherever it can go.

xixixao 5 hours ago

I've been using the rewritten coreutils as a reference in implementing human-utils[0].

The amount of complexity, even with pretty high-level Rust std, is still super high. So rewriting them in Rust is no small feat.

For the file-system management ones: I appreciate the value of everyone knowing these tools, but they do have some terrible defaults, and I wish there was an alternative between using a GUI/TUI file manager and carefully not stabbing myself in the foot. That's why I started building human-utils (alas it's very much unfinished).

https://github.com/xixixao/human-utils

  • linsomniac 4 hours ago

    I like some of the directions you're heading with that. One thing I've thought is it might be useful to have tools that create filesystem objects (like "new" or "mov foo bar/") be able to take permissions. "mov --umask 027 --owner alice:bob foo bar/" and "new --mode a=rx foo/" for example.

jll29 4 hours ago

I wonder what lessons were learned that could benefit others who want to port command line tools from C to Rust, e.g. particular idioms or re-usable functions (error handling, logging, defaults/dot-file management, command line option parsing).

There was a book called "Dr. Dobb's C-tools", which had the commented source code of a C compiler, assembler, linker, and standard library; it greatly helped me go beyond K&R's book toward understanding the idioms of C programming.

malkia 5 hours ago

Is Rust (llvm?) supported on all platforms Linux targets?

  • mustache_kimono 5 hours ago

    > Is Rust (llvm?) supported on all platforms Linux targets?

    AFAIK, no. Linux chooses to support platforms from which we haven't seen new releases in decades, like DEC Alpha. Although in recent years, Linux has dumped support for many older platforms including IA-64.

    But my guess is GNU coreutils also doesn't support all Linux targets. I mean this in two ways -- 1) AFAIK coreutils does not expressly support each and every Linux platform, and 2) whether something builds is not the measure of whether there is platform support.

    That is -- Linux may support some weirdo MIPS variant and 68k (which has Tier 3 Rust support), but what are the chances that GNU coreutils support, or even busybox support, for these platforms is top tier? You may be guaranteed that your weirdo arch has a C compiler, but what's the likelihood that all the GNU tests passed on that arch? That each and every utility even runs on it?

    • estebank 4 hours ago

      I recall packages on Debian considered available for some obscure platforms that would segfault immediately when executed. Platform support goes beyond "does a compiler happen to produce a binary".

    • yjftsjthsd-h 4 hours ago

      I can't find an official list of supported targets, but

      https://github.com/coreutils/coreutils/blob/master/README-in...

      contains notes on compiling for IRIX, HPUX, AIX, and OSF/1. So no, I would bet that it very much does run anywhere Linux runs, and in a lot of places Linux doesn't.

      • mustache_kimono 4 hours ago

        > contains notes on compiling for IRIX, HPUX, AIX, and OSF/1.

        Again, how well tested do you imagine the OSF/1 target is? Do you imagine each merge pops off a CI test for that platform? My guess is -- it's been a long time since anyone connected with the GNU coreutils project built for that platform.

        See the reply from estebank to me, below:

        > Platform support goes beyond "does a compiler happen to produce a binary".

        • yjftsjthsd-h 4 hours ago

          Fair. I suppose it comes down to what "supported" means. AFAICT there's no official CI testing for any platform; the closest thing seems to be a mailing list where people will sometimes test new releases. It would be interesting to see what would happen if someone found and reported a bug for something obscure.

  • masklinn 5 hours ago

    No.

    What relevance does that have, though?

    • rerdavies 4 hours ago

      The relevance would be that Rust coreutils cannot be merged into Linux mainline. Obviously.

      • masklinn 4 hours ago

        Coreutils are not part of the linux project in the first place, and uutils does not aim to be merged into GNU coreutils.

        • mustache_kimono 4 hours ago

          >> The relevance would be that Rust coreutils cannot be merged into Linux mainline. Obviously.

          > Coreutils are not part of the linux project in the first place, and uutils does not aim to be merged into GNU coreutils.

          I think he's having fun with you, bro.

  • preisschild 4 hours ago

    Not yet; AFAIK that's why the Rust for Linux devs want a Rust frontend for GCC (gccrs) to be created.

    • estebank 4 hours ago

      gccrs (and rustc_codegen_gcc, a project with the same objective but implementing only the backend) is a project independent of Rust for Linux (but they talk to each other).

      • preisschild 2 hours ago

        Ah, OK, I thought they were its main contributors and users.

greenheadedduck 5 hours ago

I wonder how Linux devs feel about the rewrite in Rust. I mean, surely loads of them have decades of experience in C, and Rust seems like such a different beast. Can any C developers provide insight into how this transition is going?

  • blueflow 4 hours ago

    I don't see a transition happening at all. There are some Rust projects, but they are more like an addition to, not a replacement of, the current ecosystem.

  • krater23 3 hours ago

    There is no real transition. Many developers just ignore Rust, as they ignored D, E, Go, Ruby on Rails (the PHP developers), and other fashionable programming languages. It's an overhyped trend; in some years Rust will find its place beside all the other languages, but it will never be a widespread replacement for C/C++.

    We tried it in a commercial project because some hype riders on the team wanted to. The truth is, a thing that would have been developed in C++ in 1.5 months wasn't done in 6 months, due to things like: all our developers being newbies in the language, no one being able to do meaningful reviews, tons of dependencies (often one dependency in different versions), no good way to integrate cargo into our existing build flow, and a lack of fun during programming.

    Unless you're starting a completely new project with bloody newbies, Rust is not a good choice.

WhereIsTheTruth 4 hours ago

This article is funny

> "I'm going to state the obvious, that Rust is very good for security, for parallelism, for performance".

> The idea to replace GNU coreutils with Rust versions was not about security, though, because the GNU versions were already quite secure. "They did an amazing job. They almost don't have any security issues in their code base." And it's not about the licensing, he said. "I'm not interested in that debate."

> One of the reasons that Ledru liked Rust for this project, he said, is that it's very portable. He is "almost certain" that code he writes in Rust is going to work well on everything from Android to Windows.

> Ledru cited laziness as another reason for using Rust. "So if there is a crate or library doing that work, I'm going to use it. I'm not going to implement it [myself]." There are between 200 and 300 dependencies in the uutils project. He said that he understood there is always a supply-chain-attack risk, "but that's a risk we are willing to take". There is more and more tooling around to help mitigate the risk, he said.

People who keep promoting this fraud are fraudsters too.

  • krater23 3 hours ago

    We should just ignore the evangelists who merely reimplement something existing in Rust. Maybe this language will die, or maybe it will find its way into a corner where someone is doing something useful with it.

shmerl 5 hours ago

Ripgrep should be included in all distros by default.

  • janice1999 5 hours ago

    fd is also great and I install it everywhere.

    https://github.com/sharkdp/fd

  • grandempire 4 hours ago

    It's a great tool, but it's not a POSIX-compliant grep.

    • burntsushi 2 hours ago

      So? There are (likely) tons of tools that come installed by default in your distro that aren't POSIX compliant. Or aren't even mentioned by POSIX at all.

      For example, on Archlinux, `base` (the minimal set of packages to install) includes `systemd`. `systemd` isn't POSIX.

      Now, you could say having both grep and ripgrep installed by default would be somewhat wasteful. As the author of ripgrep, I agree with that. ripgrep is fine being something you opt into so long as coreutils is already included by default.

      I'm just tired of people hiding behind POSIX compliance. One wonders how many of your invocations of grep are not POSIX compliant. (A strict POSIX grep is laughably minimal, to the point that not even minimal implementations of coreutils, like busybox, stick to the strict set prescribed by POSIX.) I'm not even aware of any grep implementation that doesn't implement something beyond what POSIX requires.

      Of course, grep still has a POSIX compliant base. And so long as your scripts only rely on that POSIX compliant base (no -a or -r or -o flags, for example), you can reap the portability benefits of POSIX.
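      For instance, a script that stays on the POSIX base uses only options the standard actually specifies (-c, -n, -v, -E, and friends) and avoids extensions like -r, -o, and -a:

      ```shell
      printf 'foo\nbar\nfoobar\n' > sample.txt
      grep -c 'foo'  sample.txt   # POSIX: count matching lines -> 2
      grep -n 'bar'  sample.txt   # POSIX: prefix matches with line numbers
      grep -v 'foo'  sample.txt   # POSIX: invert the match -> bar
      grep -E 'f.o+' sample.txt   # POSIX: EREs are in the standard too
      ```

      Any grep implementation that covers the standard, GNU or otherwise, will run this identically.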

      • grandempire an hour ago

        > I'm just tired of people hiding behind POSIX compliance.

        grep exists in base primarily to support shell scripting, as well as for the benefits of standardization that I mentioned. That's not hiding; it's a basic expectation if you want scripts to run.

        Why should ripgrep be on base and what kind of overhead are we adding to install?

        > how many of your invocations of grep are not POSIX compliant.

        It would be a different argument if ripgrep were compliant, with extensions, like GNU grep; but being a completely separate tool with a similar name, it doesn't fill the same role.

        I like it and install it when I need it.

        • burntsushi 39 minutes ago

          I think my comment already addressed every single piece of what you said here. And you're shifting your original claim! You are no longer hiding behind POSIX compliance here. Instead, you're making a more nuanced argument based on more than just POSIX compliance, and one that I find very reasonable! But that's not what you originally said.

    • shmerl 4 hours ago

      I think it's more about having a stable interface. If it's not stable - you can't use it as a base tool long term. But if it's stable, what difference does it make if it's POSIX compliant or not?

      • grandempire 4 hours ago

        Because you aren't the first one here. Countless lines of software exist which assume POSIX. People learned how to use computers and know the grep commands. There is documentation for POSIX in the places where it needs to be. It handles important edge cases which were needed by some groups, and that was formalized into a standards requirement.

        So changing to the new thing invalidates all that existing value.

        That’s not an argument to never do anything new, but it is an argument for why your UNIX-like OS should ship the standard, boring thing instead of the shiny new thing in the base install.

        • estebank 4 hours ago

          Having ripgrep installed by default doesn't preclude grep also being installed.

          • grandempire an hour ago

            Indeed. So why should it come by default?

        • shmerl 3 hours ago

          I'd say POSIX is sometimes overrated.

          Example: https://github.com/mpv-player/mpv/commit/1e70e82baa91

          Anyway, I don't see any of that as a reason not to have ripgrep installed everywhere by default, nor as an argument against using it.

          • grandempire an hour ago

            It doesn’t matter whether it’s overrated or underrated; what matters is whether you want to be compatible with existing code and documentation.

            I wish C didn’t have strtok, but it’s too late.

            • shmerl 16 minutes ago

              I don't specifically want to be compatible with it, no. I mean, if ripgrep works for my needs, I'll use it and won't worry about whether it's POSIX compatible or not.

timewizard 4 hours ago

> "I'm going to state the obvious, that Rust is very good for security, for parallelism, for performance".

That's not obvious.

> He is "almost certain" that code he writes in Rust is going to work well on everything from Android to Windows.

I'd think the problem with security in code is cocky developers who believe that some part of the environment is magical and can save them from themselves.

> Ledru cited laziness as another reason for using Rust. "So if there is a crate or library doing that work, I'm going to use it. I'm not going to implement it [myself]."

Precisely. Where does this "certainty" come from then?

> He is thinking about "what we are going to leave to the next generation".

At this rate, a complete and total mess: two slightly incompatible libraries, neither of which has any significant features differentiating it from the other, save in the imagination of the developers themselves.

saurik 4 hours ago

A big reason the GNU utilities were game changing is not because of their existence, or their functionality, but because of their license... a license which, in no small part, is what not merely motivated but then allowed for their continued existence and functionality: a tit-for-tat, sharing is caring, we're all in this together, fighting for the users approach to software development, one which ensures that no one is going to embrace and extend your software for use in their platform to lock people out of participation (whether directly or indirectly) in control over the hardware they own.

It just really really sucks that people are thereby allocating a ton of effort into reimplementing these tools--putting good effort behind a project that even has a good reason to exist (memory safety), even if (as I'll poke at later in this comment) that apparently is explicitly not the reason they are working on this (which shocked me)--with the goal of being "bug for bug compatible" with the upstream copies from the GNU Project while carefully ignoring the #1 most important integration test (as this affects how the software fits into the whole): "is this software 'free' as in freedom?".

Of course, they claim that this is some kind of unproductive waste of time "debate", as if the license is the least important part of the software and doesn't matter, and I think some people want to take this narrative. Regardless, whether or not we agree with this--a position that feels a lot like "politics don't matter and are a waste of time, so stop voicing your concerns"--that's not what's going on here: if you look a bit deeper, this project actually cares deeply about its license, and is going out of its way to choose the license it is using, ignore complaints, and avoid ending up GPL.

https://www.youtube.com/watch?v=5qTyyMyU2hQ

In an interview with FOSS Weekly, Sylvestre Ledru (the main developer, who curiously has a background working on Debian and Firefox, before ending up getting seduced by the clang/LLVM ecosystem), firmly states "it is not about security", focusing only on an interest in learning himself how the full stack of tools function and preparing for a future where new developers don't actually know enough C to contribute; this might seem to fit into the earlier narrative that the license doesn't really matter, which he later restates himself "I don't care that much, as long as it is OSI compliant".

This topic comes up multiple times later in the interview, and Sylvestre sticks to his framing that he doesn't care about the license, that this debate is a waste of time, and that he tries to avoid discussing it as it is "more philosophical than technical". Of course, this isn't preventing him from discussing it ;P... this is clearly a big issue that people have with this project, it is one that comes up in most discussions of the project, and--if it really didn't matter, and it really weren't a big deal--you would thereby expect that he'd just change it, to avoid having to discuss it again...

...only, in this interview--in no small part from the interviewer slowly leaking part of their pre-interview discussion to cause the topic to keep coming back up--we learn just how much this developer does seem to care about the license, as, to keep it all as MIT, he's having to avoid looking at the original implementation, in an attempt to avoid accidentally letting his code get infected by GPL, to support some users of the project who actively choose to use this reimplementation to avoid GPL compliance (the example we are given--by the interviewer outing it, not him--is "car manufacturers").

As someone who works in security but finds it demoralizing how often security is used as an excuse for what ends up being an effort to lock users out of a platform due to what is merely some supposedly-accidental property of the effort--including one time I was in a hearing with the US Copyright Office, sitting next to a rep from General Motors who was there to argue that we shouldn't be allowed to jailbreak a "portable all-purpose mobile computing device" because that might include a car (lol)--I found this back/forth in the comments forum on the website for this interview worth reading:

https://hackaday.com/2024/07/17/floss-weekly-episode-792-rus...

<AgainAgain> the goal of “rewrite it in x” is to move everything to permissive licenses, then lock future changes away. Just like everything else, “security” is used as pretext.

<Jonathan Bennett> We chatted a bit about exactly that. They make no claim that this effort is for security, and freely admitted that some of their users are doing so precisely because it’s MIT and not GPL. So… Yes, but actually no.

<Thovte> That sounds like yes, but actually, yes. No?

1238127 5 hours ago

[flagged]

  • dylan604 5 hours ago

    If the original thing is improved by being written in Rust, as everyone proclaims, then this would be a good thing. However, after years upon years of updates fixing odd behavior, with the whys and hows of those updates since forgotten, I have doubts that the same issues will not be reintroduced in the Rust version.

    • janice1999 5 hours ago

      > with the whys and hows of those updates since forgotten, I have doubts that the same issues will not be reintroduced in the Rust version.

      Ideally that's what test suites are for, although I'm sure some deviations/gaps will be caught by users. For uutils they are preserving all the edge cases, replicating the original behaviour.

      There are benefits to rewrites (though far less often than claimed, in my experience), like doas replacing sudo in the BSDs and culling previously unused and insecure behaviour.
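      As a sketch of the idea (a toy example, not the actual uutils test harness): record the reference implementation's output once as "golden" data, then diff each rewrite or release against it.

```shell
#!/bin/sh
# Toy golden-output test. The expected output is pinned inline here;
# real suites store golden files and run them against the rewritten binary.
expected='a
b
c'
actual=$(printf 'b\na\nc\n' | sort)   # swap in the rewritten `sort` to test it
if [ "$expected" = "$actual" ]; then
    echo PASS
else
    echo "FAIL: behaviour diverged from the golden output"
fi
```

Any divergence, including in edge cases nobody remembers the reason for, shows up as a diff rather than a user bug report.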

      • estebank 5 hours ago

        And in the face of gaps in the test suite, there's no assurance that consistent output will be preserved across releases of the same application. This is somewhat mitigated by the slow release schedule that these kinds of projects usually have, coupled with a feeling of "being done" meaning that releases shouldn't have much greenfield feature development work. But still, fixing one bug could cause a regression elsewhere.

  • panstromek 5 hours ago

    You could find a lot of them with your favorite search engine. But you chose to post an inflammatory comment instead.