I’ve been using an LLM for support while I set up a new Linux environment. It gives me a sounding board and, I hope, some idea of good/best practice.
It also gives me commands, most of which I am already familiar with, so I am happy to copy/paste… BUT, even within a single conversation, I can see it deviating from its own suggestions, or making subtle tweaks to commands it already advised me to run, and not always explaining why.
It’s clearly not building a holistic “mental” picture of what we’re trying to achieve. It keeps assuring me it is, but it is evidently not actually able to do that. Maybe this is demonstrated by the way it often regenerates ALL the advice it has previously given when asked for clarification on an apparent caveat. That’s when the unexplained shifts can creep in.
Knowing a little about how an LLM works, I think I can understand exactly why that is.
With all that in mind, I can completely understand how it can’t reliably develop an actual code base!
It is pretty great at tedious shit like combining and transforming two simple JSON datasets, though!
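For what it’s worth, the kind of tedious JSON task I mean can be sketched like this — the dataset names and fields here are hypothetical, just to illustrate joining two simple datasets on a shared key:

```python
import json

# Two hypothetical datasets sharing an "id" key.
users_json = '[{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]'
scores_json = '[{"id": 2, "score": 87}, {"id": 1, "score": 91}]'

users = json.loads(users_json)
scores = json.loads(scores_json)

# Index the scores by id, then merge each score into its user record.
score_by_id = {s["id"]: s["score"] for s in scores}
combined = [
    {"name": u["name"], "score": score_by_id.get(u["id"])}
    for u in users
]

print(json.dumps(combined, indent=2))
```

Boring to write by hand, trivial to verify once it’s written — exactly the sweet spot.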