Basically someone stands around with a stopwatch and monitors a process a few times and then gets a feel for the average time the process takes.
You can do the same thing in software, with the caveat that you must be doing pretty much the same task each time.
E.g., if the last 5 times you built the basic framework for a CRUD program in Ruby it took 10 hours, then it's likely the 6th time will also take roughly 10 hours. But do the same thing in C and the estimate goes flying out the window.
I wonder if this works a bit better where things stay the same, e.g. "I need to make a minor tweak to a feature in repository X, written in languages Y and Z, with release process Q." But it'd take considerable time to build a dataset of timings, and since the software itself evolves over time (developer tools, build process, language, etc.), any data recorded is almost immediately irrelevant. Fun!
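The "average of past timings" idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling, and the timing numbers are made up; the point is that the spread across past runs tells you how much to trust the average.

```python
from statistics import mean, stdev

def estimate_duration(past_hours):
    """Estimate the next run's duration as the mean of past timings
    of the *same* task; the standard deviation hints at how much the
    estimate can be trusted. Sketch only -- data below is hypothetical."""
    avg = mean(past_hours)
    spread = stdev(past_hours) if len(past_hours) > 1 else 0.0
    return avg, spread

# Five past builds of the same Ruby CRUD skeleton, in hours (hypothetical)
timings = [9.5, 10.0, 10.5, 9.8, 10.2]
avg, spread = estimate_duration(timings)
print(f"estimate: {avg:.1f}h (spread {spread:.1f}h)")
```

Swap in timings from a different task (say, the C rewrite) and the spread blows up, which is exactly the "same task" caveat above.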