When interactive compilers and debuggers started to become more available, my father complained that junior devs would just make random changes to the code until it worked, rather than taking the time to understand it, as compared to when you had to wait for your punchcards to go through an overnight batch process.
It seems that lowering friction will always lower understanding, because industrious people will always try to get the most done, and inexperienced people will conflate getting the most done now with getting the most done in general.
To be fair, I would guess that 90% of the programmers I know would never have learned to program if they had been forced to do it on ancient mainframes with punched card readers. Personal computers made programming something that any schmuck could learn. That sure includes me.
But I think all that does is point to the fact that the average programmer's knowledge about computers and programming becomes poorer and poorer as time goes by. We are continuously pessimising for knowledge and skills.
I'm sympathetic to what you are saying, but you should include the increase in the number of things a programmer needs to know and to consider today. I've been programming for 57 years. There is a huge amount to know and the tasks are huge compared to what I knew and did back in 1968.
Over that time I grew as a programmer as the work became more difficult. I couldn't keep up with the technology. In spite of my attempts to keep up, it seemed that every few years I had to further limit the scope of my work in order to cope. Judging by my experience, today's competent programmers will fall further and further behind, what they know becoming more obsolete, while they restrict their scope so they can learn the new work on the job. Young programmers won't need to know much of what today's competent programmers know. At the same time, the increasing complexity of their assignments will require them to go deeper into new matters, and they in turn will become overwhelmed. And so on it will go.
On the other hand, perhaps I don't know what I'm talking about. : )
Where did all of this trust come from?
When I first started hacking I had the expectation that every chunk of code I came across was broken in some way.
All of the software I relied upon was broken in some visible way. My Windows 95 installation would have multiple kernel panics per day. My Usenet reader would fail catastrophically when encountering non-ASCII text. My CD-ROM copies of games would freeze until I kicked the side of the computer, which consistently worked.
I still see bugs everywhere nowadays, but they're more hidden and, honestly, more frustrating since they're so opaque.
> Where did all of this trust come from?
A concerted PR operation from OAI and Microsoft pushing the belief that LLMs can 'reason' and thus be trusted with things beyond formulaic high school and college papers.
When we decoupled results from capital. It doesn't matter how buggy your software is if people are forced to use it anyway, especially if you haven't turned a profit in ten years but you still get VC money anyway.
Remember when "running a business" meant "making a good product and making some money in the process"? Yeah, me neither.
Why are we blaming VC for bad products? There have always been very profitable companies consistently churning out bad products. Sometimes it feels like quality and profit are inversely correlated.
If you try to do this in a work context, you'll be told you are wasting time. Even if you aren't fired, you will not be considered for promotion: the way to do that is to have "a lot of impact". This means shipping a lot of half-baked stuff. The other piece of the puzzle you need is having good "work ethic". This is best demonstrated via late-night debugging heroics where you patch up the crud you shipped earlier while getting "impact" points. For whatever reason, people who run companies believe that their customers want "lots of crud quickly" instead of quality products.
How true that is depends entirely on what sort of company you're working for. It may be common with SV-style companies (and it shows), but it's not nearly as common in the rest of the software world.
And newspapers will have people walking into traffic, and cars will lead to the extinction of horses, and if horse manure keeps piling up at this rate, London will be buried in a decade.
Writing code is easy compared to supporting, debugging and enhancing code. AI is much better at "greenfield" coding, where you start from scratch, either on an entire app or a new feature. For anything non-trivial, it is terrible at debugging. At best it is a super rubber duck that is nice to talk to, and that might bury a few words inside screenfuls of text that help a human realize what might be going wrong. We're still in the honeymoon phase, where AI has only written new stuff and hasn't been around long enough to have to support codebases of tens or hundreds of thousands of lines.
Personally, I fail to see how this worry can be 'new'.
A 'new' transportation worry: many car drivers don't know where to turn or even where they are heading without GPS.
I mean, LLMs are new. And if you can't see the difference between an entire profession using broken, hallucinatory tooling to write buggy code, and drivers using more convenient maps, then I'm not sure how to help.
When GPS coordinates are handled by an LLM, the fear won't be as novel.
Turns out, knowing stuff is important when you try to do stuff that you claim to be an expert in, instead of outsourcing it to a crappy incorrect tutorial generator. Competitive edge for computer programmers going forward: knowing how computers work.
A lot of companies were already outsourcing to companies with humans who don't know how anything works.
Just ask the LLM to walk you through each line of code and to create and explain the dependency graphs, and in a relatively short period of time they’ll know exactly how their code works. Claude Code is quite useful for this - I use it on GitHub repositories I’m curious about.
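If you want to script that kind of walkthrough rather than do it interactively, here is a minimal sketch using the Anthropic Python SDK; the model name and the src/parser.py path are placeholders I made up, and any recent model works the same way:

    # Minimal sketch: ask an LLM to explain a source file line by line.
    # Assumes the Anthropic Python SDK with ANTHROPIC_API_KEY set in the
    # environment; model name and file path below are illustrative only.
    import anthropic

    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

    with open("src/parser.py") as f:  # hypothetical file you want explained
        source = f.read()

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whatever you have
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": (
                "Walk me through this file line by line, then describe its "
                "dependency graph: what it imports, what depends on it, and why.\n\n"
                + source
            ),
        }],
    )

    print(message.content[0].text)

For a whole repository you'd want to feed files in chunks and keep a running summary, but even as a one-shot explainer this goes a long way on a codebase you don't know.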
I think the problem is "but why?" What incentive would they have when they just need a brief answer or solution, i.e. just make code do this, get the boss their answer on that, etc.?
It's going to be fun when LLM agents do all the communication for us.
Did young coders ever know how their code worked?
Back in my day we copied and pasted random Stack Overflow answers until something worked, like real junior devs.
You had it easy! I'm old enough to remember copy-pasting things from random VBulletin forums and the comment section of PHP documentation. And sometimes from old mailing lists that showed up in search results :).
That's my take on the whole AI coding industry. It's on par with Stack Overflow responses.
This is true of all levels of abstraction. The next question is: does this level of abstraction cost more than it's worth?
Meh. Many coders don't know how transistors work, and they can still be productive.
If many "young coders" don't know how their code works but can solve more problems faster, is it really a problem?
Sure it's a problem if that code ever needs to be fixed or maintained. Or if it irreversibly alters data in a way that the "coder" didn't understand or intend. If it's a prototype or some kind of one-off with limited side effects, I guess there's not much risk.
Won't matter once the agents code-review the agent-generated code.
If that happens, it'll presumably matter somewhat when it comes to continued employment of said coders.
IMO the primary purpose of a code review is to check that what's written is understandable to at least one other developer besides the author. Having a machine be the primary reviewer kind of misses the point.