Hacker News

I hear everyone say "the LLM lets me focus on the broader context and architecture", but in my experience the architecture is made of the small decisions in the individual components. If I'm writing a complex system, part of getting the primitives and interfaces right is experiencing the friction of using them. If code is "free", I can write a bad system without noticing, because I never experience using it; the LLM abstracts away the rough edges.

I'm working with a team that was an early adopter of LLMs, and their architecture is full of unknown-unknowns that they would have thought through if they had actually written the code themselves. There are impedance mismatches everywhere, but they can just produce more code to wrap the old code. It makes the system brittle and hard to maintain.
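To make the "wrap instead of fix" pattern concrete, here is a minimal sketch. All names (`LegacyUserStore`, `LegacyStoreAdapter`) are invented for illustration; the point is that each cheaply generated wrapper adds a conversion boundary rather than fixing the underlying interface.

```python
class LegacyUserStore:
    """Hypothetical original component: numeric ids in, tuples out."""

    def fetch(self, user_id: int) -> tuple:
        return (user_id, "alice", "alice@example.com")


class LegacyStoreAdapter:
    """Instead of fixing the interface, a new layer papers over the
    impedance mismatch: callers use string ids and expect dicts."""

    def __init__(self, store: LegacyUserStore):
        self._store = store

    def get_user(self, user_id: str) -> dict:
        # Conversion boundary that now has to be maintained forever.
        uid, name, email = self._store.fetch(int(user_id))
        return {"id": str(uid), "name": name, "email": email}


adapter = LegacyStoreAdapter(LegacyUserStore())
print(adapter.get_user("42"))
```

Each layer like this is trivial to generate, which is exactly why the mud accumulates: the cost of adding a wrapper is lower than the cost of revisiting the interface underneath it.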

It's not a new problem; I've worked at places where people made these mistakes before. But as time goes on it seems like _most_ systems will accumulate multiple layers of slop, because it's increasingly cheap to just add more mud to the ball of mud.



This matches my experience building my first real project with Claude. The architectural decisions were entirely up to me: I researched which data sources, schema, and enrichment logic were suitable and chose among them. But having no programming knowledge, I had no way of verifying whether those decisions were actually good until Claude Opus had implemented them.

The feedback loop is different when you don't write the code yourself. You describe a system to the AI, the result appears a few lines of code later, and only then do you find out whether your own mental model was actually sound. In my first attempts, it definitely wasn't. This friction, however, proved to be useful; it just wasn't the friction I had expected at the beginning.



