faizshah 20 minutes ago

The copy-paste programmer will always be worse than the programmer who builds a mental model of the system.

LLMs are just a faster and more wrong version of the copy-paste Stack Overflow workflow; the difference is that now you don't even need to ask the right question to find the answer.

You have to teach students and new engineers never to commit a piece of code they don't understand. If you stop at "I don't know why this works," you will never get out of the famous multi-hour debugging loop you fall into with LLMs, or its cousin, the multi-day build-debugging loop that everyone has been through.

The real thing LLMs do that is bad for learning is let you find your answer without asking the right question. This is fine if you already know the subject, but if you don't, you aren't getting that reinforcement in your short-term memory, and you will find that things learned through LLMs are not retained as long as things you worked out yourself.

boredemployee 32 minutes ago

Well, I must admit, LLMs made me lose the joy of learning programming and made me realize that what I actually like is solving problems.

There was a time I really liked going through books and documentation, learning and running the code, etc., but those days are gone for me. I prefer to enjoy my free time and go to the gym now.

  • SoftTalker 26 minutes ago

    I'm the same, and I think it's a product of getting older and being more and more acutely aware of the passage of time, of not wanting to spend it on bullshit things. Nothing to do with LLMs. I still like solving problems in code, but I no longer get any joy from learning yet another new language or framework for doing the same things we've been doing for the past 30 years, just with a different accent.

  • mewpmewp2 20 minutes ago

    It's kind of the opposite for me. I do a lot more side projects now because I enjoy building, and I enjoy using LLMs as a multiplier, so I build more in the same amount of time. Integrating an LLM into your workflow is itself problem solving, and an exciting, novel kind of problem solving at that. It gets my imagination running, and it's great to be able to exchange ideas back and forth; an LLM can give me more varied points of view than I could have come up with alone.

Rocka24 14 minutes ago

I strongly disagree. I was able to learn so much about web development by using AI; it streamlines the entire knowledge gathering and dissemination process. By asking for general overviews and then poking into the specifics of why things work the way they do, it's possible to get extremely functional, practical knowledge of almost any area of programming. For the driven and ambitious hacker, LLMs are practically invaluable for self-learning. I think you have a case of the classic self-inflicted malady of laziness.

lofaszvanitt 30 minutes ago

When a person uses LLMs for work and the result is abysmal, that person must go. Simple as that. LLMs will make people dumber in the long term, because the machine thinks instead of them and they will readily accept whatever result it gives, as long as it works. This will have horrifying consequences in a generation or two, just as social media killed people's attention spans.

But of course we don't need to regulate this space. Just let it go. All in, wild west, baby.

orwin 21 minutes ago

For people who, like me, mostly do backend/network/system development and who disagree about how helpful LLMs are (basically a waste of time for anything other than rubber-ducking, writing test cases, or autocomplete): LLMs can write a working front-end page or component in 10s. Not an especially well-designed one, but "good enough." I find they especially shine at writing the HTML/CSS parts. They cannot write an FSM on their own, so when I write a page I still write the states, actions, and the reducer myself, but then I can generate the rest, and it's really good.
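
To illustrate the split, here's a minimal sketch of the hand-written half in TypeScript (written with React's useReducer in mind; the loading example and every name in it are made up):

    // Hand-written: the states, the actions, and the reducer that is the FSM.
    type State =
      | { kind: "idle" }
      | { kind: "loading" }
      | { kind: "loaded"; items: string[] }
      | { kind: "failed"; message: string };

    type Action =
      | { type: "fetch" }
      | { type: "resolve"; items: string[] }
      | { type: "reject"; message: string };

    function reducer(state: State, action: Action): State {
      switch (action.type) {
        case "fetch": // only start loading from a resting state
          return state.kind === "loading" ? state : { kind: "loading" };
        case "resolve": // a resolve is only meaningful mid-load
          return state.kind === "loading"
            ? { kind: "loaded", items: action.items }
            : state;
        case "reject":
          return state.kind === "loading"
            ? { kind: "failed", message: action.message }
            : state;
      }
    }

    // Generated: the component around it, i.e. the markup and CSS of the view
    // that calls useReducer(reducer, { kind: "idle" }) and dispatches actions.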

  • dopp0 8 minutes ago

    Which LLM are you using for those frontend use cases? ChatGPT? And do you ask in the prompt for a specific framework such as Tailwind?

steve_adams_86 33 minutes ago

I've come to the same conclusion about my own learning, even after 15 years of doing this.

When I want a quick hint about something I understand the gist of but don't know the specifics, I really like AI. It shortens the trip to Google, more or less.

When I want a cursory explanation of some low-level concept I want to understand better, I find it helpful to get pushed in various directions by the AI. Again, this is mostly replacing Google, though it's slightly better.

AI is a great rubber duck at times, too. I like being able to bounce ideas around and see code samples in a sort of evolving discussion. Yet AI starts to show its weaknesses here, even as context windows and model quality have evidently ballooned. This is where the real value would be for me, but progress seems slowest.

When I get an AI to straight-up generate code for me, I can't help but be afraid of it. If I knew less, I think I'd mostly be excited that working code was materializing out of the ether, but my experience so far has been that this code is not what it appears to be.

The author’s description of ‘dissonant’ code is very apt. This code never quite fits its purpose or context. It’s always slightly off the mark. Some of it is totally wrong or comes with crazy bugs, missed edge cases, etc.

Sure, you can fix this, but it feels too much like using the wrong tool for the job and then correcting it after the fact. Worse still, in the context of learning you're constantly getting false-positive signals that X or Y works (the code ran!), when in reality it's terrible practice, or not working for the right reasons, or not doing what you think it does.

The silver lining of LLMs and education (for me) is that they demonstrated something about how I learn and what I need to do to learn better. Ironically, acting on that does not rely on LLMs at all; almost the opposite.

tetha a minute ago

I very much agree with this.

If I have a structured code base and I understand the patterns and the errors to look out for, something like Copilot is useful for banging out code faster. Maybe the frameworks suck, or the language could be better and require less code, but eh. A million dollars would be nice to have too.

But I do notice that colleagues use it to get stuff done without understanding the concepts. And in my own projects, where I'm trying to learn things, Copilot just generates code all over the place that I don't understand, which limits my ability to actually work with that engine or code base. Yes, struggling through it takes longer, but it ends in a deeper understanding.

In such situations I turn off the code generator and, at most, use the LLM as a rubber duck. For example: I'm looking at different ways to implement something in a framework, and A, B, and C all seem reasonable; maybe B looks like a dead end and C seems like overkill. This is where an LLM can offer decent additional input, on top of asking knowledgeable people in the field or other good devs.

dennisy 34 minutes ago

I feel this idea extends past just learning; I worry that using LLMs to write code is making us all lazy and unfocused thinkers.

I personally have banned myself from using any in-editor assistance where you copy the code directly over. I do still use ChatGPT, but without copy-pasting any code, more along the lines of how I would use search.

  • steve_adams_86 30 minutes ago

    I do this as well. I have inline suggestions enabled with Supermaven (I like the tiny, short, fast suggestions it creates), but otherwise I'm really using LLMs to validate ideas, not to generate code.

    I find Supermaven helps keep me on track because its suggestions are often in line with where I was going, rather than branching off into huge snippets of vaguely related boilerplate. That's extremely distracting.

    • dennisy 22 minutes ago

      Yes! That's the other point: it's distracting, while you're thinking through a hard problem, to have code popping up that you inevitably end up reading, even when you already know what you planned to write.

      I just had a glimpse at Supermaven and I'm not sure why it would be better; the site suggests it's just a faster Copilot.

csallen a minute ago

Machine code is an impediment to learning binary.

xnx 38 minutes ago

"Modern" web development is so convoluted I'm happy to have a tool to help me sort through the BS and make something useful. In the near future (once the thrash of fad frameworks and almost-databases has passed) there may be a sane tech stack worth knowing.

  • lolinder 25 minutes ago

    This exact comment (with subtle phrasing variations) shows up in every article that includes "web" in the title, but I feel like I'm living in an alternate universe from those who write comments like these. Either that or the comments got stuck in the tubes for a decade and are just now making it out.

    My experience is that React is pretty much standard these days. People create new frameworks still because they're not fully satisfied with the standard, but the frontend churn is basically over for anyone who cares for it to be. The tooling is mature, IDE integration is solid, and the coding patterns are established.

    For databases, Postgres. Just Postgres.

    If you want to live in the churn you always can, and I enjoy following the new frameworks to see what they're doing differently. But if you're writing this in 2024, and not stuck in 2014, you can also just... not.

    • zelphirkalt 5 minutes ago

      React and the frameworks based on it get used mostly for plain websites, where none of that machinery is needed in the first place; that is part of what is wrong with frontend development.

  • grey-area 28 minutes ago

    You don't have to use 'modern frameworks' (a.k.a. an ahistorical mish-mash of JavaScript frameworks) to do web development at all. I'm really puzzled as to why people refer to this as modern web development.

    If you're looking for a sane tech stack, there are plenty of languages to use which are not JavaScript, and plenty of earlier frameworks to look at.

    Very little JavaScript is needed for a useful and fluid front-end experience, and the back end can be whatever you like.

    • zelphirkalt 3 minutes ago

      Well, I wish more developers had your insight and could make it heard at their jobs. Then the web would be in a better state than it is today.

  • mplewis 16 minutes ago

    It’s only been thirty years, but keep waiting. I’m sure that solution is just around the corner for you.

ellyagg 7 minutes ago

Or is learning web development an impediment to learning AI?

BinaryMachine 38 minutes ago

Thank you for this post.

I sometimes use LLMs to understand a step-by-step mathematical process (this can be hard to search for on Google). I believe getting a broad idea by asking someone is the quickest way to understand any sort of business logic related to a project.

I enjoyed your examples, and maybe there should be a dedicated site just for examples of web-related code where an LLM generated the logic. The web changes constantly, and I wonder how these LLMs will keep up with the specs, specific browsers, frameworks, etc.

Krei-se 32 minutes ago

I like AI for helping me fix bugs and look up errors, but I usually architect everything on my own, and I'm glad I can use it for everything I would otherwise have put off onto some coworker who does the lookups and works on a view or something with no connection back to the base system architecture.

So he's not wrong: you still have to ask the right questions. But with later models that think about what they do, this could become a non-issue sooner than some of those now breathing a sigh of relief expect.

We are bound to a maximum of around eight working units in our brains; a machine is not. Once AI builds a structure graph like Wikidata next to the attention vectors, we are so done!

elicksaur 32 minutes ago

If it's true at the beginner level, then it's true at every level, since we're always learning something.

cush 35 minutes ago

I find it particularly ironic when someone who goes to a top university with $70k/yr tuition attempts to gatekeep how learning should work. LLMs are just another tool. They're accessible to virtually everyone and an absolute game-changer for learning.

Folks in academic settings particularly will sneer at those who don't build everything from first principles. Go back 20 years and the same article would read "IDEs are an impediment to learning web development."

  • wsintra2022 32 minutes ago

    Hmm, not so sure. If you don't know or understand some web development fundamentals, having a friend who just writes the code for you, and who sometimes makes up wrong code and presents it as the right code, can definitely be a hindrance to learning rather than a help.

menzoic 24 minutes ago

Learning how to use ̶C̶a̶l̶c̶u̶l̶a̶t̶o̶r̶s̶ LLMs is probably the skill we should be focusing on.

meiraleal 16 minutes ago

Code School employee says: AI is an impediment to learning web development

seydor 24 minutes ago

I don't think the thing called 'modern web development' is defensible anyway

wslh 26 minutes ago

As Python is an impediment to learning assembler?

camillomiller 21 minutes ago

> For context, almost all of our developers are learning web development (TypeScript, React, etc) from scratch, and have little prior experience with programming.

To be fair, having non-programmers learn web development like that is even more problematic than using LLMs. What about teaching actual web development, HTML + CSS + JS, so they have the fundamentals to control LLMs in the future?

blackeyeblitzar 38 minutes ago

Almost every student I know now cheats on assignments using ChatGPT. It’s sad.

  • synack 28 minutes ago

    If all the students are using ChatGPT to do the assignments and the TA is using ChatGPT to grade them, maybe it's not cheating; maybe that's just how things are now.

    It's like using a calculator for your math homework. You still need to understand the concepts, but the details can be offloaded to a machine. I think the difference is that the calculator is always correct, whereas ChatGPT... not so much.

    • Rocka24 7 minutes ago

      We are now in a world where the common layman can get their hands on a GPT (one predicted to soon be equivalent in intelligence to a PhD), instead of only the person scrolling Hugging Face and churning out custom-built models.

      I think it'll be pretty interesting to see how this changes regular blue-collar or secretarial work. Will the next wave of startups be just fresh grads looking for B2B ideas that eliminate the common person?

    • grey-area 26 minutes ago

      Yes, and that's why it's nothing like using a calculator. If the LLM had a concept of right or wrong, or knew when it was wrong, that would be entirely different.

      As it is, you're getting a smeared average of every bit of similar code it was exposed to: likely wrong, inefficient, and certainly not a good tool for learning at present. Hopefully they'll improve somehow.