Replied to "After two years of vibecoding, I'm back to writing by hand" by Mo (Mo Bitar)
Agents write units of change that look good in isolation. They are consistent with themselves and with your prompt. But they show no respect for the whole.

I’ve been using an LLM for support while I set up a new Linux environment. It gives me a sounding board and, I hope, some idea of good/best practice.

It also gives me commands, most of which I am already familiar with, so I am happy to copy/paste… BUT, even within a single conversation, I can see it deviating from its own suggestions, or making subtle tweaks to commands it already advised me to run, and not always explaining why.

It’s clearly not building a holistic “mental” picture of what we’re trying to achieve. It keeps assuring me it is, but it is evidently not actually able to do that. Maybe this is demonstrated by the way it often regenerates ALL the advice it has previously given when asked for clarification on an apparent caveat. This is when the unexplained tweaks can creep in.

Knowing a little of how an LLM works, I think I can understand exactly why that is.

With all that in mind, I can completely understand how it can’t reliably develop an actual code base!

It is pretty great at tedious shit like combining and transforming two simple JSON datasets, though!
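For context, this is the sort of mechanical join I mean, sketched in Python with made-up data (the field names and datasets here are purely illustrative):

```python
import json

# Hypothetical example data: two small JSON datasets sharing an "id" field.
users_json = '[{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]'
scores_json = '[{"id": 1, "score": 95}, {"id": 2, "score": 87}]'

def combine(users_raw, scores_raw):
    """Join two JSON arrays on their shared "id" field."""
    users = json.loads(users_raw)
    # Index scores by id for O(1) lookup during the join.
    scores = {row["id"]: row["score"] for row in json.loads(scores_raw)}
    return [{**u, "score": scores.get(u["id"])} for u in users]

print(json.dumps(combine(users_json, scores_json)))
```

Tedious to write by hand, trivial to verify by eye: exactly the niche where the LLM shines.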
