I think libraries (used as dependencies by other software) and "top-level" software/applications (used by users directly, whether that user is a developer or not) -- have different pressures and concerns.
Libraries should be much more conservative. I don't think "release early and often" was said about libraries.
And yes, it's tricky that it's not always a clear line between these two categories. A unix command line utility is sort of top-level software, but also likely to be used in a scripted/automated fashion.
So it's not always cut and dried, but the more you are aware of software depending on your software, the more careful you should be with releases.
A great example of an end-user application that is like a library is Microsoft Excel. Sure, the UX can change, but the formula engine has to be extremely conservative, staying bug-compatible with basically every version, or all hell breaks loose.
I agree. As a developer using libraries and components, I really hate "release early and often".
As a user, I also hate "release early and often".
In both roles, releasing early and often effectively means I'm always using beta software, with all the headaches using such software brings.
However, as part of an internal development process -- that is, before customers see the release, "fail fast" is a totally legitimate and decent approach.
I think it depends a lot on where the library is in its lifecycle.
A new library should iterate rapidly and be very clear about that. Big warnings, obvious version numbering, and good communications. You look for an audience of actual users that can keep up with that. That's how you figure out what the right library is, both in terms of the big abstractions and the small details.
But then once you hit a 1.0 release, things change. You're shifting from an audience of fellow explorers to an audience who wants stability. You can still do exploratory work, but it has to be additive in the 1.x series, and eventually you need to start a 2.x series as you learn more about what needs to change. So the iteration still happens, it just happens away from the people who just want the basic thing you got right via the initial burst of iteration.
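The 1.x-versus-2.x discipline above can be sketched in code. This is a hypothetical library evolution (the function names and behavior are invented for illustration): an additive change keeps old call sites working within the 1.x series, while a change that alters the return contract has to wait for a 2.0.

```python
# v1.0.0: the original API.
def parse(text):
    return text.split(",")

# v1.1.0: additive change -- a new optional parameter with a default,
# so every existing call site keeps working. Safe within the 1.x series.
def parse(text, sep=","):
    return text.split(sep)

# v2.0.0: breaking change -- the return type changes from a list to a
# generator, which silently breaks callers that index or len() the
# result. This kind of change belongs in a new major series.
def parse_v2(text, sep=","):
    return (field.strip() for field in text.split(sep))
```

The point is that "iteration" doesn't stop at 1.0; it just gets channeled into changes that are invisible to existing callers, and the invisible-to-callers test is what separates a minor release from a major one.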
I notice the same pattern on an infrastructural level.
If I work on a test system, or a system only a small number of people depend on... a year or two ago this would've hurt my pride, but why spend 4 hours planning a change when you can just muddle through problems in 2? On the assumption that user impact will be low or none: move quickly, break things, fix things, document the problems and fixes. Everything's good.
Yet I also have systems a dozen teams or more rely upon. For those systems, we have to move much more slowly and deliberately. We cannot touch some of our databases with larger changes unless we have a way in, a way out, a prediction of how long each of those will take, an announcement to the customers, and so on, and so on. We'd love to update those faster, but it's rough.
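The "way in, way out" discipline can be sketched as paired forward and rollback steps, so a risky schema change is reversible at any point. A minimal sketch using SQLite (the table and column names are hypothetical; the real systems in question would differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'open')")

# The way in: expand first (add the new column), then backfill.
# Old readers and writers keep working while this runs.
conn.execute("ALTER TABLE orders ADD COLUMN status_v2 TEXT")
conn.execute("UPDATE orders SET status_v2 = status")

# The way out: a rollback prepared *before* the change ships.
# If the deploy goes wrong, dropping the new column restores
# the original schema without data loss.
conn.execute("ALTER TABLE orders DROP COLUMN status_v2")
```

Each step here is cheap, but on a system a dozen teams depend on, the value is that every forward step has a rehearsed reverse step and a time estimate attached before anyone touches production.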