Something this article largely ignores is that often you want to build the support for the presumptive feature now because the act of implementing it affects architectural decisions. It came close to this by saying
> it does not apply to effort to make the software easier to modify
But there's a difference between making software easier to modify (where the same effort could be expended later to retrofit the software) versus making architectural decisions. Oftentimes, trying to implement something teaches you that your architecture isn't sufficient and you need to change it. If you can learn that while you're building the original architecture, then you've avoided a potentially huge amount of work later trying to retrofit it.
To use the example from the article, what if supporting piracy risks reveals that the fundamental design used for representing pricing decisions isn't sufficient to model the piracy risks, and the whole thing needs to be reimplemented? If you can learn this while implementing the pricing in the first place, then you haven't lost any time reimplementing. But if you defer the piracy risks by 4 months, and then discover that your pricing model needs to be re-done, now you need to throw away your previous work and start over, and now your 2-month feature is going to take 3 or 4 months instead.
As I get older, both as a person and as a dev, I have come to the same realization as you: that a lot of the time I end up building things that I don't need "right now", but which I have to at least plan at the architectural level, lest it become a nightmare later on.
And then it hit me: THIS is one of the areas of software development where I don't think reading more articles, techniques, "mantras", etc. will help. Only more practice will help. It ends up boiling down to how much experience you have as a developer making those particular decisions.
I.e. It's a craft, not a science.
So as you grow as a developer you suddenly find that you start to have a certain "intuition" as to why you should actually plan for something that you don't need, vs actually ignoring something (a real "yagni"), but you just can't explain it in terms of generalizations. You end up saying, "well, it just feels like I should do this, because I have been burned in the past when I ignored this intuition".
And I don't think I'm alone in this train of thought.
This sounds like the journey from "realization of the cost of hacking" to its "cure", the grand solution.
I had the same epiphany in my 14th year of software development when I realized that programmers Must Be Given Solid Platforms of Well Architected Object Oriented Code or they will Make A Mess.
However, now in my 34th year, my view is more "Plans are useless, but planning is indispensable".
I have "grand" architectures, but my architectures and their implementations are designed with the assumption that they will be wrong in a week's time.
And my advice to anyone reading this is YAGNI. More practice helps. But practice the right thing.
However, I still am left with the feeling that you have reached this conclusion by means of more than three decades of practicing both the right thing and the wrong thing as well (we all make mistakes, etc).
So basically, how did you realize that "Plans are useless, but planning is indispensable" if not precisely by trying to plan, and then observing its uselessness?
I could almost bet that you now assume that the architecture will be wrong in a week's time and yet you always find a way to actually make it resilient to this and make it work; otherwise I would have to wonder how you ever accomplished anything if indeed 100% of your architectures were wrong in a week's time.
I.e., this is your experience in your craft. Flowing like water, Bruce Lee would say.
Now, the point I did not get, though, is: what exactly is the difference between it being a craft and being war?
If it's a craft I kind of assume, like you, that I'll never be able to design the perfect architecture, but only the best architecture that I could have produced at the time. And just strive to get better every time.
If it's war, then it basically means that at any time anything can explode, but then I would be filled with despair all the time, I guess. Maybe not. Is this what you refer to? If it's war, you have to design assuming it will break very shortly and then work around that? Or "make do with what I have", as they say?
Again, very interesting points and I appreciate the discussion since this has been haunting me almost my entire career.
Edit: By the way, I hope you don't mind if I quote you. Those are some nifty points right there.
I have done my fair share of over-engineering. =) It's not over-engineering if you need it, but it turns out, most of the time, you ain't gonna.
A team that just hacks and does YAGNI by accident, or a team that does YAGNI but doesn't plan, often has a choice of how to build the feature right now. With a little bit of planning, we can see past the first few meters and choose the first step to be in the direction of the bigger plan. Without planning, the first step might be in the wrong direction. The hard part for my teams is to plan, and then still only implement the first step. They're getting better =)
It's war because if you're doing something worthwhile, there's someone else trying to get your customers. I'm not suggesting your stuff should break. The opposite. Combine YAGNI with TDD and push working, tested features to production ASAP.
With a little bit of planning, we can see past the first few meters and choose the first step to be in the direction of the bigger plan.
I had a suspicion that despite the black-and-white way you have expressed your opinion elsewhere, particularly in your exchange with eridius, there probably was actually some common ground here. I don't believe it's possible to avoid the failure modes eridius describes without thinking ahead to some extent; this passage suggests you actually agree.
I can't speak to your experience with your team, but I can say that I've spent a lot of time fixing or working around other people's poorly-considered design decisions. In many cases I understand there wasn't time to do it better; the product had to be shipped. But in some cases I think a little reflection would have shown a better way which wouldn't have taken any longer to implement.
I feel exactly the same way. It turns out everything practical I do with software depends more on the "spider sense" part of it rather than some well-defined knowledge (which is easy to look up when needed, so maybe that has something to do with it). The spider sense is not something you can teach or memorize, and it takes a while to develop. Programming is exactly like a craft in that way.
But there is plenty of better understood theory and experiment out there which if anything is the science part. From the theory and experiment I've dabbled with it's very much a different thing.
And it has a lot to do with the existence of radically different schools of programming. There's a joke, the engineer is showing a laser pointer to the scientist. The scientist asks "why didn't you get an ultraviolet laser when those output far higher energies?" and the engineer says "it's for shining a dot on the wall."
I think what I was trying to say it's not really that developers are divorced from science, but that the day-to-day tasks that we do end up being far from whatever paper or thesis taught us how to do. The example that comes to mind is REST. There is a lot more in that PhD thesis than what is actually used and implemented outside of the lab.
That's what I meant by craft. In the end I don't see anyone implementing "all the science" regarding REST, only those parts that are needed, and even then... as best as some engineers understood it (myself included, of course). That's also the reason why we end up calling "RESTful" a lot of stuff that Roy Fielding clearly states is not.
At least it's good knowing that I'm not alone and other people feel the same. So in the end I know that I can do my best... but not more.
There's an interesting thing that happens when you teach software development: many times you're forced to use heuristics in order to get something of value in the student's head. The 90 or 95% solution that somebody understands and uses is much better than the 99% solution that nobody pays attention to. KISS.
I think everybody needs to build 2,3,4,5 or more systems that you over-design in order to feel the pain and learn that YAGNI makes a lot of sense. Yep, for any given project there might be a couple of YAGNI items that make sense, but good luck trying to get a team of seven to agree on them. Most of the time instead of agreeing, everybody gets their 2 or 3 added (or they add them on their own without asking anybody) and you're back to astronaut architecture land.
A similar thing happens in C++. I firmly believe that a coder needs a bunch of projects where they shoot themselves in the ass with C++ before they finally realize that every little bit of added genericity and redirection is a ticking time bomb of maintenance work later on. Eventually you realize that brutally maximizing simplicity is as important as actually solving the problem. Probably more so.
Interestingly enough, I see more of this in high-level OO languages than I do low-level or FP solutions. I think that's because "manage the complexity" is a key part of those worlds, where "click to make a class" is a key part of the other. (Just speculating)
I believe what you're describing is the ability to predict the future. Nobody can do it perfectly, but experience can increase your odds of being correct.
1. My team will not have to throw away our previous work. It works for storm pricing. It is deployed. Customers are using it and we are making money.
Your company has yet to deploy a storm pricing model because it got caught in the complexity of the piracy model.
2. When it comes time to write the piracy pricing system, my team has already gained a huge amount of relevant experience from writing the storm pricing system. In my experience a team that already has developed a simpler system will be better able to develop the more complex system.
3. If piracy pricing is a significant risk then it should be prioritized higher. If it is late because you thought it would take two months and it took you four, then you failed to assess risk. A really good way to assess risk is to implement something similar (may I suggest storm pricing).
However, in my experience, even if we know we want to ship storm and then piracy immediately after, I will still ask my team to develop storm first to completion. And I will stand any one of my teams up against a team that is going to "do it all in one go", and we'll get them both done before the other team.
4. Basically when you make an argument that is "Suppose you have this situation...", then it is equivalent to saying "This thing happened in the past, and if we had known then what we know now...."
They are equivalent because no matter how you frame your "thought experiment", it is guaranteed to never happen in real life. The whole point of YAGNI is that you cannot know the future, and no one ever has. The only way to know for sure that "this is how it's going to turn out" is to actually be in the future.
So, yes, if you had a time machine, you could argue that big upfront design would work.
Of course, if you were to start your argument with "Suppose I had a time machine and I could go back in time and tell my team that the piracy model is a perfect superset of the storm model, but is twice as complex because of x, y and z, but there are no other unknowns, and customers love it, and the Navy doesn't kill the pirates, and so therefore we should do it all in one go" then it would be much more obvious that your argument was flawed.
You're still making some pretty strange assumptions here. An imperfect prediction of the future does not render the prediction meaningless, and making architectural decisions to support expectations of the future does not necessarily mean a significant increase in complexity, it often merely means some upfront "critical thinking" time. In fact, this upfront time may very well speed up the rest of the implementation, even if the predicted future requirements never come to pass, because it produces a more well-thought-out and well-understood architecture, that may be cleaner and more powerful without making the actual implementation any harder.
But again, there is a large continuum of possible approaches here, and insisting on reducing it all down to either "do everything" or "YAGNI" is really kind of bewildering.
"May very well speed up the rest of the implementation".
Experience tells us that the rare cases where it does are vastly outweighed by the cases where it does not.
"But again, there is a large continuum of possible approaches here, and insisting on reducing it all down to either "do everything" or "YAGNI" is really kind of bewildering."
This is the "I'm being more reasonable" logical fallacy.
There is a large continuum of possible approaches to the standard prisoner's dilemma, but it turns out that there is an optimal solution. In my experience, YAGNI is an optimal solution to software engineering. Sure, there are plenty of times where it doesn't work out. But in any given situation, without a time machine, YAGNI is the sanest strategy.
If this is bewildering to you then you don't have enough data.
In my experience, YAGNI is an optimal solution to software engineering.
Of course it isn't.
For example, it is suboptimal any time you have two features, the total cost of implementing both together is less than the total cost of implementing one then the other separately, and it turns out that you do need both in the end. You might not have known it up-front, but your outcome was still worse in the end.
Moreover, the greater the overhead of implementing the two features separately, the lower the probability of eventually needing both that is required for doing both together to be the strategy with the better expected return.
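That break-even reasoning can be sketched numerically. Every cost and probability below is made up purely for illustration; the point is only the shape of the trade-off:

```python
# Hypothetical costs for one feature pair, purely for illustration.
COST_JOINT = 10.0       # build A and B together up front
COST_A_ALONE = 4.0      # build only A now
COST_B_RETROFIT = 9.0   # retrofit B later, including rework overhead

def expected_cost_yagni(p_need_b, retrofit_cost=COST_B_RETROFIT):
    """Expected cost of building A alone and paying the retrofit
    cost only in the cases where B turns out to be needed."""
    return COST_A_ALONE + p_need_b * retrofit_cost

def breakeven_probability(retrofit_cost=COST_B_RETROFIT):
    """P(need B) above which building both up front has the lower
    expected cost: solve COST_JOINT = COST_A_ALONE + p * retrofit_cost."""
    return (COST_JOINT - COST_A_ALONE) / retrofit_cost

# With these numbers the threshold is 6/9, about 0.67. Doubling the
# retrofit overhead halves it, which is the point being made above:
# the worse the retrofit, the less certain you need to be about B
# before the joint build pays off in expectation.
```

Of course, the whole dispute in this thread is about whether anyone can estimate `p_need_b` well enough for this arithmetic to be actionable.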
If this is bewildering to you then you don't have enough data.
Or just different data that leads to a different conclusion. Some of the YAGNI sceptics here have also been doing this stuff a long time, but apparently with quite different experiences to you.
> Or just different data that leads to a different conclusion. Some of the YAGNI sceptics here have also been doing this stuff a long time, but apparently with quite different experiences to you.
Purely a single data point, but I've been doing this on and off for 30 years, mostly enterprise, and agree that YAGNI is axiomatic to _efficient_ software delivery (most isn't built efficiently of course). And the bigger the project, the more pronounced this becomes.
This leads to questions about what architecture must look like, since it needs to be extremely malleable to support YAGNI. RESTful and microservices based architectures seem to be optimal approaches based on this assumption.
FWIW I've seen traditional enterprise solutions architecture approaches fail extremely hard against this worldview, to the point that what looks like best practice in one world (e.g. stateful, transactional, end-to-end design) looks like worst in the other. (I don't think this is resolvable, and notably it causes huge friction for technical folk from 'the new world' going to work on giant programs populated by veterans who have spent long careers in a BDUF paradigm.)
This leads to questions about what architecture must look like, since it needs to be extremely malleable to support YAGNI.
Precisely. I think this is where some of us might be talking at cross-purposes. Given that implementing robust architecture typically requires work, even if the expectation is that it will be efficient in the long term, I don't see how one can reasonably argue for a flexible architecture to support future developments without considering what sorts of development are most likely to be necessary. Whether you choose to assume general programming principles or something more domain specific is just a matter of degree -- even a simple modular design is more work than spaghetti code in a trivial case, but I imagine most of us would agree that trying to keep things modular is highly likely to pay off for any project of non-trivial scale.
What about the next 37 cases, wherein you didn't need the second feature?
Yagni says that more often than not, you won't need it. Every single case may not prove out according to that dictum, but that doesn't mean that the overall approach is suboptimal.
YAGNI refers to _average_ ROI (as well as time to market, reduced complexity/bugs, 2/3 of features never being needed).
If you want to argue that ROI might not hold true in your particular case, you not only need a time machine, but even worse if you view your work in the context of an ongoing program, you have a halting problem to contend with.
Now as a business owner commissioning software, a) potentially shaving a few dimes is much more expensive to me than not having knowable sums going in and out over the next quarter and b) I may take a financial view that is alarmingly short term to many engineers :)
>It says, "You aren't going to need it". There's no hedging about more-often-than-not
Taking it that literally is silly though, right? I mean, no one is stating that in every single case you couldn't possibly need an anticipated feature. How could anyone argue that? Such a literal interpretation actually reduces the entire discussion to indefensible nonsense.
YAGNI is an approach, not a statement of fact. Even in the article that is the subject of this thread, the author states the following in the opening paragraph:
>It's a statement that some capability we presume our software needs in the future should not be built now because "you aren't gonna need it".
Well, I'd certainly say so, but I seem to encounter plenty of people who would disagree. Even in this HN discussion, we seem to have some people who are arguing for always assuming YAGNI despite also arguing that YAGNI is an average/on-balance/overall kind of deal and they subjectively assume that the odds will always favour doing no unnecessary work straight away. I also see some people who appear to support the YAGNI principle yet make an exception for refactoring, with an acknowledged open question so far about when refactoring is justified and why it should deserve special treatment in this respect.
Note the word "some".
Unfortunately, the statement you quoted can be parsed at least two ways, with very different meanings, so I'm not sure that really furthers the debate here. Certainly there are some in this HN discussion who do seem to be explicitly arguing that it's very much an all-or-nothing proposition.
My argument is pure logic, based on a direct comparison of the total cost of doing it both ways. No fudge factors are required.
The conclusion fails and so YAGNI always gives the best possible result only if there is no real world situation where the initial assumptions in my argument hold.
Clearly there are actually numerous situations where those assumptions do hold, so YAGNI cannot be trusted to give the best final outcome in all cases.
There are no real world situations where the initial assumptions in your argument hold because you do not own a time machine. A time machine is necessary for you to distinguish between the situation you describe vs. any one of the countless ways that your situation could differ from your expectation as time unfolds. Such as the pirates getting wiped out by the Navy.
Suppose instead of a pair of features, you have twenty pairs of features. For most of those pairs:
a) The second item isn't really needed.
b) The second item is vastly more complicated than the first, and delays revenue
c) The se
If you'd read to the end of the article, you would have seen that the author takes a more balanced view:
> Having said all this, there are times when applying yagni does cause a problem, and you are faced with an expensive change when an earlier change would have been much cheaper. The tricky thing here is that these cases are hard to spot in advance, and much easier to remember than the cases where yagni saved effort. My sense is that yagni-failures are relatively rare and their costs are easily outweighed by when yagni succeeds.
My point is that the initial assumptions never hold because we can never say "We know this is how it is going to turn out". We can say "This is how we hope it is going to turn out."
YAGNI embraces this lack of knowledge explicitly, as you quote from the article, and my quote from five posts up.
We're not saying that YAGNI is always better in any specific sample, we're saying YAGNI is an optimal strategy across the entire range of samples.
I don't understand this obsession with the mysterious "time machine".
You don't know for sure whether a feature will ever be implemented or not. Neither side has a definitive answer, or there would be nothing to debate.
So the best you can do is estimate the costs of doing the necessary work in each scenario (do it now but don't need it later, do it now and do need it later, do it later when known to need it), the likelihood that it will in fact be required at some point, and therefore the expected benefit of doing it now vs. later.
It's a straightforward risk assessment and cost/benefit analysis, and you make the best choice you can given the information available at any given time.
>So the best you can do is estimate the costs of doing the necessary work in each scenario (do it now but don't need it later, do it now and do need it later, do it later when known to need it), the likelihood that it will in fact be required at some point, and therefore the expected benefit of doing it now vs. later.
So you do these two estimations and sometimes you are wrong. When you are wrong it costs you. It costs you opportunity cost, cost of carry, cost of delay, cost of building, cost of repair.
With YAGNI, we build feature A then possibly feature B. Let's assume that YAGNI and TDD offer no benefit, and the cost of doing A then B is twice the cost of doing A and B together, but sometimes we only do A, or we do A, and then later do B, and in between A makes two months of revenue.
The question is, on average, that is to say, over a large number of features, which method costs less?
If your estimates and predictions are accurate, then clearly your solution is the lowest cost over the whole project.
If, however, your estimates and predictions are poor, then in fact you cause waste and increase costs. There is a point at which YAGNI offers the lower cost over the whole project.
It is my experience, and, it appears, that of Martin Fowler, that in fact we are very bad at making estimations, and very poor at predicting the future.
So this is obviously counterintuitive and not something you want to believe: on average, your "best choice given the information available at the time" will cost more money than just implementing feature A now and worrying about B later.
You have to do a risk assessment and cost/benefit analysis of your risk assessment and cost/benefit analysis. It turns out that your risk assessment is highly risky and your cost/benefit analysis is too costly for the supposed benefits. It's cheaper to YAGNI.
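The "on average, over a large number of features" claim can be sketched as a toy simulation. All the costs and the probability of ever needing B below are invented numbers, chosen only to show how the averaging works:

```python
import random

# Toy Monte Carlo comparison over many independent feature pairs.
random.seed(42)

COST_A = 4.0         # build feature A alone
COST_BOTH = 10.0     # build A and B together up front
COST_B_LATER = 9.0   # retrofit B after A has shipped
P_NEED_B = 1 / 3     # assumed chance that B is ever needed at all

def average_costs(n_pairs=100_000):
    """Average per-pair cost of always-YAGNI vs. always-build-both."""
    yagni_total = upfront_total = 0.0
    for _ in range(n_pairs):
        need_b = random.random() < P_NEED_B
        upfront_total += COST_BOTH  # pays for B whether needed or not
        yagni_total += COST_A + (COST_B_LATER if need_b else 0.0)
    return yagni_total / n_pairs, upfront_total / n_pairs

avg_yagni, avg_upfront = average_costs()
# With P_NEED_B well below the break-even these costs imply, the YAGNI
# average (about 7.0 here) comes in under the upfront average (10.0),
# even though YAGNI loses badly on the individual pairs where B was
# needed after all.
```

Which strategy wins is entirely a function of the numbers you plug in, which is exactly what the two sides of this thread disagree about.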
> So you do these two estimations and sometimes you are wrong. When you are wrong it costs you. It costs you opportunity cost, cost of carry, cost of delay, cost of building, cost of repair.
But unless you are very lucky, always disregarding any expectations about the future that turn out to be true also has a cost. YAGNI seems to assume that such costs are negligible, but in reality both false positives and false negatives can cost you.
As I noted in another comment, this is the danger of generalising from a single person's experience or a small set of anecdotes in such a large and diverse field. I have also been around the industry for a while, but personally I'm still waiting to run into these projects that incur crippling costs because they are so bad at predicting future needs that they write loads of code that is never used, just as I'm still waiting to see a project where refactoring fails catastrophically if you don't have 100% test coverage, or whatever other absolute metric someone wants to argue for this week.
Sure, sometimes you make the wrong call, because as everyone keeps saying, for most projects you can't predict the future with 100% accuracy in this business. But my personal experience has been that much of the time, either you have a reasonable idea of generally where things are going or you know you don't have enough confidence in future directions yet to be worth building specifically. And when you do have a reasonable idea, there are plenty of projects where failing to take advantage of that knowledge really will hurt you because you can't just conveniently refactor it out later -- many fields with specific resource or performance constraints will fall into this category for example.
> This is the "I'm being more reasonable" logical fallacy.
No it's not. It's me outright rejecting the claim that "YAGNI is an optimal solution to software engineering". You can certainly say YAGNI in specific situations, but as a general approach of never building anything that's not immediately needed this very moment, I strongly disagree.
At this point, I'm starting to suspect this comes down to the difference between "programming" and "software engineering" (which is to say, the difference between solving specific problems vs making core architectural decisions).
I agree, it is "programming" vs "software engineering".
You are looking at the problem as a programmer, and making reasonable assumptions about how to solve the problem using objects and graphs and interfaces and models. If we had a better model, our project would go faster and be better.
I am looking at it as an engineer, and seeing what actually happens in real life when real humans build real code. If we had working code for simple problem A, then when we get to more complex problem B, our developers will have more experience, better morale, and better support from management who have happy customers.
Your response is actually very surprising. I'm looking at the problem as a software engineer, who has to make architectural decisions about the applications I'm building. You're looking at it like a programmer. YAGNI is very much not an "engineering" response, it's a "programming" response. Engineering is all about careful, methodical design and architecture and planning for the future maintenance of the product. YAGNI is pretty much the antithesis of this.
I'm curious what kind of software you work on. Most of the software I do is applications / frameworks development. Planning ahead is very important when developing the architecture of an application, and is doubly-so when doing any kind of API design (both when developing frameworks and when developing reusable components inside of applications). You certainly don't need to implement everything up-front, but you do have to understand the future requirements and design appropriately. Failure to do so leads to blowing out time / budget estimates later when you realize your code isn't maintainable anymore and all the "quick" implementations people did to avoid having to make architectural decisions are now so fragile and dependent upon assumptions the implementor probably didn't even realize they were making that you can't change anything without having large ripple effects.
I am the lead architect of a public facing data platform for a fortune 500 company. I influence over 60 engineers through my team of architects, and indirectly influence countless other teams deploying to our platform. My background is start-ups.
We do not suffer the failure modes you describe in your second paragraph.
I take it "data platform" means server-side work? I was guessing that was your line of work. In my experience, server-side programming and native client software (applications, frameworks, or systems) are pretty radically different in a lot of non-obvious ways. I think this might be one of them.
After reading this thread (and others from lowbloodsugar) I would say that I disagree with you here and what I do is similar to what you do. Specifically, I think that architectural choices are only important in a planning phase when they affect how people will collaborate. Architectural choices such as data or system design within that collaborative framework are irrelevant to anything you don't have to build right away.
> Experience tells us that the rare cases where it does are vastly outweighed by the cases where they do not.
I do not agree with this.
My experience does not match this. As I tell my clients regularly, "Yes, I spent one day implementing that feature. But I spent 4 days talking to your folks trying to figure out what you actually needed, and 3 more days getting ready for what they are about to ask for."
Startup folks are fond of thinking that they have to move too fast for thinking or they are going bankrupt. That's rarely true. And, if it is, you're probably hosed anyway.
" The only way to know for sure that "this is how its going to turn out" is to actually be in the future."
But then what kind of "time resolution" do you use for "in the future"?
I mean, of course if you talk about weeks ahead, then yes, why should I plan for something that I may or may not need in 4 weeks?
What is the cutoff then? Next week? Tomorrow? In an hour? 10 minutes?
After 34 years of experience, you have fine-tuned this. But there IS a line, right? Maybe not fixed, but there should be one, otherwise you won't do anything unless it's needed like right this nanosecond.
Reductio ad absurdum of course, but I'm curious: how do you know where to draw the line?
We do TDD. The cut-off line is "this test". Anything more than fixing "this test" is waste. We fix "this test" and write another test. Fix it. Maybe refactor. Repeat.
So not next month, next week. More like "next 30 seconds".
Even if we are scheduled to do the next feature right after this feature, we still wouldn't start writing a unified solution. We'd write a test for A, then fix it, then write another test for A, and then fix that, and keep going until A was done and now we're writing a test for B.
That said, if we really don't know what we're doing then we spike. Then we throw it away and rewrite it with TDD. Sounds mental, right?
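Concretely, that "next 30 seconds" loop might look like this for the thread's storm-pricing example. The function name, the pricing rule, and the tests are all invented for illustration; the point is that nothing exists in the code that a test didn't demand:

```python
def storm_price(base_price, storm_risk):
    """Price adjusted for storm risk only: no piracy hooks and no
    generic 'risk factor' abstraction, because no test demands them yet."""
    if not 0 <= storm_risk <= 1:
        raise ValueError("storm_risk must be between 0 and 1")
    return base_price * (1 + storm_risk)

# The tests that drove the implementation, in the order they were
# written; each was red for about thirty seconds before the code above
# made it pass.
assert storm_price(100, 0) == 100        # test 1: no risk, no markup
assert storm_price(100, 0.5) == 150.0    # test 2: risk raises the price
try:
    storm_price(100, 2)                  # test 3: invalid input rejected
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

When piracy pricing finally arrives, its first test is what forces any generalisation of `storm_price`, and not a moment sooner.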
This still seems to suggest some up-front design though. How do you know that the piracy pricing is going to be a significant risk (as pointed out in 3) unless you think through some of its design?
Why wouldn't the engineers then go into developing the Storm feature with this knowledge, so that they avoid making any decisions that might cause significant rework when it comes time to implement piracy?
I can't see how they could assess the risk of the piracy feature without having thought it through and then choose not to take any of that thought into account when implementing the storm pricing feature.
So the thing is, yes, this hypothetical thing will affect your architecture. That's true. But what's also true is that a) this other thing over here you're not aware of will ALSO affect your architecture; and b) the thing you're planning ahead for might never happen.
Which is to say the really obvious: The future is unknowable.
Some people react to this by saying "okay, let's make everything as general as possible then, so we can be prepared for an unknowable future" and architect up a forest of abstractions.
Some people (the YAGNI crowd) react to this by saying "okay, let's make everything as simple as possible, so it's easy to change in the future when we know things" and rigorously eschew unnecessary abstraction.
In my experience (which includes creating, maintaining, and managing a single large application for a decade), the YAGNI crowd has the right of it. Complexity is a cost, and you should only incur it if you absolutely, no-question need it.
As I stated in another comment[1], this is a false dichotomy. There is a large continuum of various ways you can make decisions in between the two extremes of "YAGNI!" and "be as general as possible". The correct solution almost always lies somewhere in the middle.
YAGNI is appropriate if you think there's a very good chance that you won't need to do anything in the future, or if the future is so ill-defined that you can't make any sort of reasonable prediction of what you might need. If you're writing something that will be deployed once and never updated again, well, YAGNI is fine. But in most cases you're writing something that will need to be maintained and have more features added over time, and in that case, you have a variety of architectural decisions to make that influence what sort of changes can be easily made in the future. Determining which decisions to make in which ways is a hard problem, and it's something you basically just have to develop an intuition for by working on a lot of projects over years.
>There is a large continuum of various ways you can make decisions in between the two extremes of "YAGNI!" and "be as general as possible".
But, isn't the point of Yagni that there is not such a large continuum? Isn't the argument that attempting to anticipate the future to any degree is generally a waste of time?
Once you go beyond solving the problem at hand, you've jumped the shark. My experience has shown me time and again that future planning seldom pays off, and even then in small ways. I come out far better keeping things as simple as possible, as future changes also tend to be simpler.
>But in most cases you're writing something that will need to be maintained and have more features added over time, and in that case, you have a variety of architectural decisions to make...
I don't know that it requires making "architectural decisions" as much as following generally good design principles and keeping your couplings reasonably loose.
>But, isn't the point of Yagni that there is not such a large continuum? Isn't the argument that attempting to anticipate the future to any degree is generally a waste of time?
That is the argument, but that doesn't make it true.
>I don't know that it requires making "architectural decisions" as much as following generally good design principles and keeping your couplings reasonably loose.
Aren't these going to give much the same result in practice? But if the YAGNI advocates are correct and we can't usefully predict the future at all, how can you judge things like where to put your module boundaries and limit coupling?
>That is the argument, but that doesn't make it true.
My comment was addressing its parent's assertion that Yagni fits on a "continuum". It's a "kind of pregnant" notion in my view. Either you're following Yagni or you're not.
>Aren't these going to give much the same result in practice?
It's probably more the same in theory. But, in practice, it's the difference between, say, simply minding separation of concerns versus attempting to develop a full "framework" based on attempts to predict the future, then implementing the solution to the current problem in that framework.
>how can you judge things like where to put your module boundaries and limit coupling
Concepts like DRY, loose coupling, encapsulation of business logic, MVC, and other design principles stand independent of any particular application.
For instance, I can't recall an application I've developed wherein it wasn't clear where to separate responsibilities (i.e. impose boundaries) for the current problem. These separations are generally applicable to future iterations.
That much I agree with. What I would dispute is the implication, typical of many pro-YAGNI posters in this thread and elsewhere, that the alternative to YAGNI is somehow diving in and developing everything up-front without reference to the relative risks of requirements changing vs. incurring additional costs by delaying. That is a false dichotomy.
>For instance, I can't recall an application I've developed wherein it wasn't clear where to separate responsibilities (i.e. impose boundaries) for the current problem. These separations are generally applicable to future iterations.
I suspect this is where our experience differs. To me, one reasonable guideline for modular design and separation of concerns is that each module should roughly correspond to a unit of change, in the sense that a change in requirements would ideally affect a single module without interfering elsewhere in the system. However, if your basic premise is that you can't tell anything in advance about your future requirements, then you might model the current known situation in any of several equally valid ways, even though some of those models will be much more future-proof than others.
Consider the old chestnut of modelling bank accounts. If you only have to model a balance on a single account, you can have some data structure that stores the balance and some functions to increase or decrease it. As soon as you need to model transfers between accounts, it turns out that the above is a very unhelpful data model, and your emphasis on single accounts was a poor choice. Even the most basic assumptions about likely future applications would have led to a more useful path, but if you follow YAGNI you have to start with a single-account model and then follow an onerous migration procedure precisely when you need to work with multiple accounts for the first time.
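The single-account trap described above can be made concrete with a small sketch (hypothetical Python; the class and function names are invented). The balance-mutator model is perfectly adequate right up until the first transfer requirement, which needs an atomic operation across two accounts -- something a transaction-ledger model would have accommodated naturally:

```python
# Single-account model: fine until the first transfer requirement.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# A transfer spans TWO accounts -- the first requirement the
# single-balance API above never anticipated.
def transfer(src, dst, amount):
    src.withdraw(amount)   # if this raises, dst is untouched
    dst.deposit(amount)

checking, savings = Account(100), Account(0)
transfer(checking, savings, 30)
```

Bolting `transfer` on from the outside works here, but there is no shared transaction record, no audit trail, and no way to roll back a partial failure -- exactly the kind of migration pain the comment above is pointing at.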
>implication...the alternative to YAGNI is somehow diving in and developing everything up-front without reference to...
I don't think that's the dichotomy that's being presented, nor do I think it would matter if it was. That is, one doesn't have to go to that extreme to incur the downside of a "non-YAGNI" approach. It's very easy to have contemplation of future features negatively impact a project.
>one reasonable guideline for modular design and separation of concerns is that each module should roughly correspond to a unit of change, in the sense that a change in requirements would ideally affect a single module without interfering elsewhere in the system
Wow. I think that's exceedingly difficult to pull off, and trying to design in such a way itself seems tremendously burdensome to the project out of the gate. It also seems that it would tightly couple the code with business requirements in such a way that change actually guarantees maximum impact to the code. Because rules don't change in a neat, stovepiped way: when changes cross-cut, overlap, etc., all of your modularization goes right out the window.
So, interestingly, given that approach, it probably would make it more important to anticipate future changes, because your code will be less insulated from those changes!
>modelling bank accounts
Thanks for bringing this down from the abstract.
But, this is where generally good design can help. If you have your debit and credit functionality neatly encapsulated, plus a good overall model for chaining/demarcating transactions within your app, etc., then you don't need to rip apart your entire model to support transfers. In fact, I'd say you have a good head start.
>It's very easy to have contemplation of future features negatively impact a project.
But it's also very easy to have failure to contemplate future features negatively impact a project. This doesn't get us anywhere.
>I think that's exceedingly difficult to pull off and trying to design in such a way itself seems tremendously burdensome to the project out of the gate.
But again, you have to design some way, unless you're proposing literally a totally organic design where even things like basic modular design are considered completely unnecessary unless justified by changes required right now. As soon as you are designing some specific way, you are necessarily making choices, and I would argue for making the best choices you can given the information you have available at the time. That may mean you don't have enough confidence in some particular requirement to justify working on it yet, or it may not.
>If you have your debit and credit functionality neatly encapsulated, plus a good overall model for chaining/demarcating transactions within your app, etc., then you don't need to rip apart your entire model to support transfers.
But where did that good overall model you mentioned come from if you weren't anticipating potential future needs to some extent?
>But it's also very easy to have failure to contemplate future features negatively impact a project.
Perhaps, but in the former case you guarantee an impact to the project. And, empirically speaking, the odds are that impact will be negative. Trying to design for some unknown future is more likely to get you off the rails than designing well to known requirements.
>unless you're proposing literally a totally organic design where even things like basic modular design are considered completely unnecessary
Well, "modular" is such an amorphous word. Building a domain model and other functionality around current requirements will yield some degree of modularization. I'm suggesting that such modularization should tie back into the actors and objects dictated by current requirements, and is more at the programmatic level. It's horizontal (relative to requirements). This, as opposed to a vertical approach that attempts to stovepipe individual use cases into modules. The latter scenario can lead to more pain when requirements change.
I sincerely believe that may be why you find it so important to anticipate future changes--because you've totally pegged your design to a modularization scheme that demands your requirements stay within neat boundaries. So, it's really important that you define those boundaries well from the outset, or you may face some serious re-work.
>But where did that good overall model you mentioned come from if you weren't anticipating potential future needs to some extent?
But that's really my point: good design practices in themselves do anticipate the future to a significant extent. That is, a system that is well-modeled with good separation of concerns is more flexible, less coupled and thus more extensible. But one need not anticipate any specific future requirements to achieve this. Just design well based on known requirements.
I wonder whether we're still slightly talking at cross-purposes here. You seem to have the idea that I am somehow advocating always trying to anticipate or emphasize future requirements at the expense of what I'm doing right now, or that I'm arguing for some sort of fixed architecture or design where you need to magically know everything up front. This is certainly not what I'm trying to say. On the contrary, I adapt designs and refactor code all the time, just as many others here surely do.
But I still feel that there is something rather selective in the arguments being made for any near-absolute rule about not taking expected future developments into account when designing. Whenever we talk about modular design or domain models or use cases, and whichever words we happen to use, we are always implicitly talking about making decisions of one kind or another, as evidenced by the fact that even those who are supporting YAGNI in this HN discussion are advocating things like refactoring to keep new work in good condition.
Of course we can and should revisit those decisions later if we have better information, and of course sometimes we will change things as a result. The only point I'm trying to make here is that I'd rather start from the position most likely to be useful, not the minimal position. To me it is all about probabilities. Perhaps unlike some here, I don't find that predicting what I'm going to need my code to do next week is some inhumanly challenging task with a 105% failure rate and project-ending costs. On the contrary, on a real project I find that the vast majority of the time, things will in fact turn out exactly the way we're all expecting on those kinds of timescales, and probably pretty close a month out. Looking six months or two years ahead, we probably have at best a tentative plan, and just like the YAGNI fans we don't want to invest significant resources catering to hypothetical futures with a high probability of changing before we actually get there.
In this context, I find it is often premature pessimisation and willful ignorance that create the work that will almost always turn out to be unnecessary, not the other way around. The idea that I should discard knowledge of what is almost certainly coming next week, and create more work for myself on Monday just in case something totally unexpected happens by the end of this week, is bizarre to me. Maybe we've just worked on very different kinds of projects, or had very different standards for the management/leadership making decisions on those projects.
Well, when you start bringing the timeline in as close as the following week, then you're talking about something very different (or at least very different from what I've been considering). Because even with iterative/agile development, a week out can likely be considered in-scope (or close enough).
So, narrowing the timeline so dramatically fundamentally changes this entire discussion. YAGNI's statement implies that there's a reasonable degree of uncertainty with regard to the features you're considering. However, there is generally much less uncertainty about what you'll be building in the following week. The requirements are essentially known for all intents and purposes. So, at that point, it's more like, "I Know We Need This", because you're essentially driving it from the current requirements.
So, I think the real determinant is whether you're looking at current requirements, vs. trying to anticipate future requirements. If you're doing the latter then, I think we just disagree.
And, if your experience is that extrapolating requirements far off into the future has been helpful on average, then we have definitely worked on different kinds of projects.
>Well, when you start bringing the timeline in as close the following week, then you're talking about something very different (or at least vs. what I've been considering).
Perhaps this is where the crossed wires happened, then.
To me, this is all a matter of degrees. I know exactly what I need to build immediately -- what code I'm writing this morning, what tests I'm currently trying to make pass, or however you want to look at it. I also have a very clear idea of what I need to build later this week. I have some idea of what I need to build by the end of the month. I have a tentative idea of what I'll need to build in three months. On most projects, I probably have very little confidence in what I'll be building a year from now.
When I'm designing and coding, the amount of weight I give to potential future needs is based on that sliding scale of confidence. If I'm writing two cases for something right now and expect to need the third and final possibility next week, I'll probably just write them all immediately, so that whole area is wrapped up and I don't have to shift back to look at this part of the code again in the immediate future. For something that I will probably need in a few weeks, it's less likely that I'll implement it fully right now, but I might well leave myself a convenient place to put it if and when the time comes if that doesn't require much work. For me, the latter is sometimes a bit like deliberately leaving the final step of a routine task unfinished at the end of a working day so I can get going the next morning with a quick and easy win -- it's as much about the positive mindset as any expectation that doing something immediately vs. very soon will make any practical difference to how well it gets done.
Obviously as I look further ahead, confidence in specific needs tends to drop quite sharply on most projects. For something tentative that is being discussed as a possible future requirement for later this year but with no clear spec yet, it's unlikely that I would write any specific code for it at all at this stage. However, I might still take into account likely future access patterns for data in my system if I'm choosing between data structures that are equally suitable for the immediate requirements. I might take into account the all-but-certainty that we will need many variations of some new feature when planning the architecture for that part of the system and design in a degree of extra flexibility, even though we have no clear idea of exactly which variations they will be yet and I'm not actually writing any concrete implementations beyond the first one at this stage.
>And, if your experience is that extrapolating requirements far off into the future has been helpful on average, then we have definitely worked on different kinds of projects.
That's not really how I look at it. I'm not so much extrapolating (a.k.a. guessing) specific requirements far ahead. I'm just allowing for the possibility that I may have some useful knowledge about future needs without necessarily having all the details yet. If I do, I will take advantage of that to guide my decisions today to the extent that confidence justifies doing so. The amount of actual change in designs or code that results will vary with both the level of confidence I currently have in whatever potential requirements we are talking about and in the assessment of how much effort is required to allow for them now vs. how much effort will potentially be saved later if the expectation is accurate.
I don't think YAGNI ignores this so much as explicitly rejects it. I've participated in the type of rewrites you mention here. Yes, they suck. Yes, they happen far too often. But I've also observed that they seem to happen regardless of how much you plan ahead to future requirements: even if you are exhaustively brainstorming possible future directions and are absolutely sure you're going to want something, you're very often wrong. And the inevitable rewrite that happens is a lot more painful when you have to carryover requirements that you don't actually need.
YAGNI fundamentally is a statement about costs and benefits. And it's a statement about personal experiences of costs and benefits, with a counterintuitive conclusion. I've found, however, that the teams I've worked on that just accept that they'll have to rewrite or throw away 90% of what they write end up performing at a much higher level (in terms of their impact on the broader industry) than the teams who figure "But if we could just get that 90% down to 50%, we'll be 5x more effective than other teams!"
Another way to look at this is in terms of external vs. internal drivers of success. YAGNI makes sense when the primary drivers of success are external and you need to quickly react to changing market conditions or customer requirements. It doesn't when the primary drivers of success are internal and you need to quickly act to get from known point A to known point B as efficiently as possible. Some engineers are lucky (unlucky?) enough to work on the latter, but typically it only happens when you either have a monopoly or you're deep in the bowels of a corporation and only need to report up to an executive who never changes her mind.
To use the example from the article - if the biggest risk or change in the external environment you'll face is your software, go ahead and build the feature into it. But who knows? You may be able to close a round of venture funding in 2 months and then hire the Gondor navy to eliminate piracy. Or Gondor may enter into a trade agreement with Rohan and redoing all your contracts takes primacy. Or Aragorn may arrive with the Army of the Dead and suddenly piracy is not a problem anymore, but a lucrative business in life insurance may pick up.
I think there's a difference between going ahead and implementing piracy risk immediately, vs determining all the requirements of piracy risk and using them when designing your fundamental architecture, with hooks left in place for extensibility that simply aren't actually implemented yet. Maybe the Gondor navy will destroy the pirates, but then you have to worry about Corruption risk because you now may need to bribe the navy or risk having your cargo impounded (so you have to balance the risk of impounding vs the cost of bribing). Sure it's not the same thing as piracy risk, but it's similar, and because you designed your architecture from the get-go to enable piracy risk and other such extensions, you can now implement navy bribes pretty easily.
Meanwhile, if you'd said YAGNI to piracy risk and just implemented support for storm risk, you may find that you can't easily implement navy bribes without re-doing much of the work you already did for storm risk.
As saganus said[1], this is more of a craft than a science. You need to plan ahead with your architectural decisions, and they need to be made using the actual requirements you expect to encounter (as opposed to theoretical requirements, which aren't really much use to anyone), but that doesn't mean you need to actually implement everything immediately. Just enough to be satisfied that your architecture will suffice.
And this is why a lot of software written by the agile teams in the companies I've worked for looks like a messy accumulation of ad hoc solutions, with too little shared code and too much duplication.
Even if you are not implementing a feature right now, you still need to have as much information as possible about what might be needed in the future and how it will fit in in your solution.
No. You need to have information about what you're implementing right now, and the willingness to write it well.
What you generally see is that if you have a complex code base with lots of plan-for-the-future abstractions in it, refactoring it in any non-trivial way is really hard[1], so people don't, and you end up with hacks.
Whereas if the system is as simple as possible, you can make bigger changes more easily, and the code can stay cleaner.
That's no guarantee that it will -- code quality still requires discipline and sound engineering -- but it's a lot more likely with YAGNI than without.
[1] "Hey, we need to make our pricing system handle multiple currencies."
"Oh geez, that's going to wreak havoc with our risk plugin system, that change will take at least three weeks."
"Oh yeah, and we'll definitely need that risk plugin system when we get to the piracy risk feature later this year... what if we just kind of hack currency systems up by [doing something awful]?"
"Sure, we can do that in a week. I don't like it, but it's the only choice given our deadline right now."
The software I'm working on now is a messy accumulation of ad-hoc solutions, but not because of YAGNI - it's because the real world of selling a system to multiple customers who get to demand things like "but we want our adverts to be blue on the second Tuesday of every third month" tends to make software a mess.
But on the gripping hand, deferring an implementation often means you understand the problem better by the time you get to it, so that your actual implementation is better than the naïve version you would have written earlier.
Fair point. Making these decisions correctly is hard. I'm pretty sure this is one of the things that you slowly learn how to do with experience, that no amount of being smart or research can make up for.
I think you had it right first time. Much like you can define an interface without a concrete implementation, or a unit test without working code.
One rather hypothetical approach to the problem is separation of concerns - make pricing composable - starting with storm risks, and supporting other unknown, future risks. The obvious follow-on I see too often is to use some type of dependency injection, which is almost always (my experience - YMMV) a bad move - pricing risk components probably don't change often enough to warrant the configuration nightmare that ensues. Just recompile, and redeploy. Or use reflection to load available pricing modules, assuming the perf hit is acceptable.
You may not need to know the intricacies of pirate risk when delivering storm risk, but you do know that you'll be dealing with another risk after delivering storm risk. So plan for it.
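One way to read "make pricing composable" without any dependency-injection machinery is risks as plain functions in a registry. This is a hypothetical sketch (the names and rates are invented, not from the article or the thread):

```python
# Risk components as plain functions; adding piracy risk later is one
# append to the registry -- recompile and redeploy, no DI configuration.

def storm_risk(cargo_value):
    return cargo_value * 5 // 100   # illustrative 5% surcharge

RISK_COMPONENTS = [storm_risk]      # future risks get appended here

def price(cargo_value):
    return cargo_value + sum(risk(cargo_value) for risk in RISK_COMPONENTS)
```

When piracy risk eventually lands, `RISK_COMPONENTS.append(piracy_risk)` is essentially the whole integration step -- which is the "plan for another risk without knowing its intricacies" idea above.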
Because what'll happen is, you'll find out that the type of risk you actually encounter ends up being tied to (say) the time of year in a way that you were not expecting, and now you've got this elaborate risk-plugin architecture sitting there, and there's no way to get the time of year down to the risk calculator, so you need to elaborately hoist out and redesign this entire gigantic apparatus, instead of just adding a parameter to a function call.
And meanwhile, as you were daydreaming your future hypothetical risks and trying to have an idea of what stuff they might need to know, you imagined that it might need to know what currency the thing is priced in to calculate currency-fluctuation risks, and so you're passing currencies all over the place, to be prepared for general extensible etc., but it turns out you never need them in your risk calculators anyway, so it's just this pile of unnecessary nonsense of setting currencies everywhere and you have to maintain all that.
And if you read those paragraphs and think "hmm, you'd probably want to make it extensible in terms of which fields are passed into the risk calculator to prevent that kind of problem" then realize that you are now solving a hypothetical problem that was caused by the "solution" to your first hypothetical problem, and you're three levels removed from delivering any actual value to anyone.
Forget about it. Don't plan for it. Plan for what you need today, because you will not be good at anticipating what you need tomorrow.
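The "just add a parameter to a function call" alternative being argued for can be sketched too (hypothetical Python, invented names and rates): when the time-of-year dependency shows up, a plain function absorbs it as a signature change rather than a redesign of a plugin apparatus.

```python
# Before: storm risk as a plain function, nothing speculative.
def storm_risk(cargo_value):
    return cargo_value * 5 // 100   # illustrative 5% surcharge

# The surprise requirement arrives: risk depends on the time of year.
# With a plain function the fix is one added parameter, not hoisting
# a new field through an elaborate risk-plugin architecture.
def storm_risk_seasonal(cargo_value, month):
    rate = 10 if month in (6, 7, 8, 9) else 5   # storm-season surcharge
    return cargo_value * rate // 100
```
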
You seem to be dividing the world into two possibilities:
1. Don't plan ahead. Only implement what you need right at this moment.
2. Plan for everything that's even remotely imaginable.
#2 is quite ridiculous. But the alternative to that is not #1. There is a huge amount of middle ground here, for assessing likely future needs and designing your architecture appropriately, as well as for identifying what future requirements affect architecture and which ones can be safely ignored as something that can be easily implemented on top of the current architecture.
The problem with #1 is that it either leads to re-implementing large portions of your app (possibly many times, if you persist with this) as you discover your architecture doesn't work; or, more likely, to adding hack on top of hack to implement new functionality without having to rearchitect, which creates massive technical debt and results in an unmaintainable product.
But, what are you defining as "architecture" here?
Having a design that requires you to completely re-implement your app whenever changes are required (even significant ones), seems more a problem of not following proven design principles than one of poor "architecture".
>If you can learn this while implementing the pricing in the first place, then you haven't lost any time reimplementing.
But, isn't this the same kind of thinking that necessitated a precept like Yagni?
I wonder if the very notion that we are "architecting" vs. simply building software to solve a clear and present problem is at the heart of the tendency to overengineer.
Every "architectural decision" I've ever seen has been wrong. It's always resulted in a worse product than simply writing the code and allowing the architecture to happen.
(This doesn't mean writing code with no layering, just deferring those decisions to the point where you actually have the code that makes use of them)
I have found the opposite. Building things you don't need yet is just increasing the mass of code that needs to change when the real requirements hit. It pays off to build software as simple as possible, and modify / generalize later, as opposed to designing an up-front "architecture" that is bound to become obsolete and a hindrance in 6 months.
If you know that you are going to need piracy risk support, then of course it makes sense to prepare the architecture for it, even if you only have to deliver the feature four months down the line. But YAGNI does not apply if you know a feature is needed. If you don't know but are just guessing about possible future development there are a million things in various directions you could also prepare for, and all preparations have a cost.