|Scrum smells, pt. 6: Unknowns and estimates||https://mobileit.cz/Blog/Pages/scrum-smells-6.aspx||Scrum smells, pt. 6: Unknowns and estimates||<p>Today, I'd like to share some of the ideas and estimation approaches that helped us in past
projects. The tricky part in long and short-term planning is how to predict the unknowns that
will influence us in the future. As I wrote earlier, there are several things that usually come
up and may not be visible in the product backlog when you are planning something.</p><h2>The unknowns</h2><p>In projects related to mobile app development, we usually encounter the following unplanned
activities:</p><ul><li>Defect fixing</li><li>Backlog refinement activities</li><li>Collaboration on UI/UX design</li><li>Refactoring</li><li>New user stories</li></ul><p>Defect fixing is quite obvious and we have spoken about it already. You can't usually foresee
what bugs will appear.</p><p>Backlog refinement activities include understanding the backlog items, analyzing the underlying
technical and usability aspects, and making the backlog items meet the definition of ready. </p><p>The UI/UX design process is not just a simple decision about colors and shapes. The controls used
and the screen layouts and flows usually have a large impact on how the application needs to be
built, and we witness over and over again that a seemingly small aspect of the design idea can
have a vast impact on the complexity of the actual implementation. So, to keep the
cost/benefit ratio reasonable, we have learned that the developers need to
collaborate closely with the designers to prevent any unpleasant surprises. You can
read more about this topic in <a href="/Blog/Pages/design-system-1.aspx">this article</a>.</p><p>Refactoring existing code and infrastructure setup is a must if we want to develop a product that
will be sustainable for longer than a few weeks. It also has the potential to make the
dev team more effective.</p><p>New user stories are interesting. You invest a lot of time into the backlog refinement and it
just looks perfect, everything is thought through and sorted. Fast forward two months into the
future and you discover (with new knowledge from the past two months) that you need to simplify
some stories while others have become obsolete, but more importantly, you realize that you need
to introduce completely new features that are vital for the app's meaningfulness. You couldn’t see
this before you had the actual chance to play around with the features from the past couple of
months and gather feedback from users, analyze the usage stats, or see the economic
results.</p><h2>Estimates</h2><p>Having most of the stuff in the backlog estimated for its complexity (size) is vital for any
planning. But as we have all probably learned the hard way, estimates are almost always anything
but precise. We therefore found no value in trying to produce exact estimate values
(like 13.5 man-days of work); instead, we use relative estimation with
pseudo-Fibonacci numbers: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100.</p><p>It is important to understand that these are dimensionless numbers. They are not hours, man-days,
or anything similar. It is an abstract number used solely to set a benchmark and compare other
items against each other.</p><p>So what does that mean? At the beginning of the project we pick an item in the backlog that seems
to be of a common size and appears neither small nor big, typically somewhere in the 5-8 range. That
will be our benchmark and all other stories are then compared to it. How much more difficult (or
easy) is this or that item compared to our benchmark?</p><p>Over time, we usually found out that the initial benchmarks and estimates were completely off.
But that is OK, it's a learning process. It is important to review the estimates after the
actual development and learn from them. Was that user story really an 8? Were these two items as
similar as we initially thought? If not, how would we estimate them now and why? That also means
that from time to time it's necessary to revisit all the already estimated items in the product
backlog. </p><p>It usually is not necessary to go into deep details with stuff that is several sprints ahead. As
the team gains experience with the product domain, the developers' gut feelings become more
relevant and precise. That means useful estimates can be done quite swiftly after the team
grasps the particular feature's idea. Sure, some stuff in the backlog will be somewhat
underestimated, some overestimated. But with long-term planning and predictions it usually
suffices because, statistically, the average gets quite reliable.</p><p>The outcome of all this is a backlog where every item is labelled with its size. It becomes clear
which items are meaningfully defined, with the development team having an idea about the technical
solution (meaning that the size estimate is reasonable), and which items are completely vague or for which
the team members lack key business or technical information. Those are usually the items with
estimate labels of “40”, “100”, or even “??”.</p><p>If such inestimable stories are buried in the lower parts of the backlog and the product owner
does not even plan to bring them to the market for a long time from now, that's fine. But do any
of these items have a high value for the product, and do we want to bring them to the market soon?
If that's the case, it sends a clear message to the product owner: back to the drawing board,
let's completely re-think and simplify such user stories and expect that some team capacity may
be needed for technical research. </p><p>So after all this hassle, the upper parts of the backlog will have numbers that you can do math
with.</p><h2>Quantifying unexpected work</h2><p>The last piece of the puzzle requiring predictions and plans is to quantify how much of the
unexpected stuff usually happens. Now, this might seem like a catch-22 situation - how can we
predict the amount of something that is, by definition, unpredictable? At the beginning of the
development, this is indeed impossible to solve. But as always, agile development is empirically
oriented - over time we can find ways to get an idea about what is ahead based on past
experience. As always, I am not preaching any universal truth. I am just sharing an experience
that my colleagues and I have gathered over time and find useful. So how do we do it?
</p><p>It's vital to visualize any team's work in the product and sprint backlog as transparently as
possible. So it's also good to include in the backlog all the items that are not user stories but that the team
knowingly needs to put some effort into (like known regressions, research,
refactoring, etc.). If it's possible to estimate the size upfront, let's
do it. If not, either cap the maximum capacity to be invested or revisit and size the item
after it's been done. This is necessary in order to gather statistics.
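To make gathering those statistics concrete, here is a minimal sketch in Python. The item structure (a size label and a planned/unplanned flag) is a hypothetical example, not a prescribed tool:

```python
# Minimal sketch: measure how much of each sprint's delivered size
# was unplanned work (bug fixes, research, refactoring discovered on the go).
# The item fields "size" and "planned" are illustrative assumptions.

def unplanned_share(sprint_items):
    """Return the fraction of the delivered size that was not planned upfront."""
    total = sum(item["size"] for item in sprint_items)
    unplanned = sum(item["size"] for item in sprint_items if not item["planned"])
    return unplanned / total if total else 0.0

sprint = [
    {"size": 8, "planned": True},
    {"size": 5, "planned": True},
    {"size": 3, "planned": False},  # regression fix, sized after the fact
    {"size": 2, "planned": False},  # capped research item
]
print(unplanned_share(sprint))  # 5 of 18 points were unexpected (~0.28)
```

Tracked over several sprints, such numbers give the empirical baseline for how much capacity to reserve for the unexpected.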
</p><p>Just to be clear - let's not mistake such unexpected work for scope creep. I assume that we
don't suffer from excessive scope creep; the unexpected work is genuinely valuable
and necessary work that was just not discovered upfront.</p><p>So now we have a reasonably transparent backlog, containing the originally planned stories and
also the on-the-go incoming items. We have most of it labelled with sizes. In the next part of
this series, we'll build some statistics on top of all this and draw conclusions.
|Scrum smells, pt. 3: Panic-driven bug management||https://mobileit.cz/Blog/Pages/scrum-smells-3.aspx||Scrum smells, pt. 3: Panic-driven bug management||<p>Bugs create a special atmosphere. They often cause a lot of unrest or outright panic. But does it
have to be that way?</p><p>Nearly every developer out there has come across the following scenario: The development team is
working on the sprint backlog when suddenly the users report an incident. The marketing manager
comes in and puts pressure on the development team or their product owner to urgently fix the
bug. The team feels guilty so some of the developers stop working on whatever they've been doing
and focus on fixing the bug. They eventually succeed, and now the testers shift their focus as
well to verify the fix as soon as possible, so the developers can release a hotfix. The hotfix
is deployed, sprint passes by, and the originally planned sprint backlog is only half-done.
Everyone is stressed out.</p><p>A similar situation is often created by a product owner: He finds a defect in functionality
created two sprints ago and demands an immediate repair.</p><p>Is this all really necessary? Sure, some issues have a great impact on the product or service,
and then this approach might be justifiable, but rather often this kind of urgent defect
whacking is a process that is more emotional than rational. So how to treat bugs
systematically?</p><h2>What are bugs and bug fixes?</h2><p>A defect, incident, or simply a “bug” is effectively any deviation of the existing product from
its backlog. Any behavior that is different from the one agreed upon between the dev team and a
product owner can be called a bug. Bugs aren’t only defects in the conventional meaning (e.g.,
crashes or computational errors); a technically correct behavior in conflict with a boundary set
by a user story can also be considered a defect.</p><p>Some bugs are related to the product increment being implemented in the current sprint. Other
bugs are found retrospectively: They are related to the user stories developed in past sprints.
These fall into two categories:</p><ol><li>Regressions: When a subsequent development broke a formerly functional part of the code.
</li><li>Overlooked bugs: They were always there, but no one had noticed.</li></ol><p>Conversely, a bug fix is something that adds value to the current product by lowering the
above-mentioned deviation. It requires a certain amount of effort and it raises the value of the
present product. At the end of the day, a bug is just another unit of work, and we can evaluate
its cost/benefit ratio. It is the same as any other backlog item.</p><h2>A bit of psychology</h2><p>Scrum teams and stakeholders tend to approach both defect categories differently. They also treat
them differently than the “regular” backlog items.</p><p>In my experience, there are two important psychological factors influencing the irrational
treatment of defects.</p><p>First of all, there's often a feeling of guilt when a developer is confronted with a bug. The
natural response of most people is to try to fix the error as soon as possible so that they feel
they are doing a good job. Developers naturally want to get rid of such debts.</p><p>Another factor is how people perceive gains and losses. People are evolutionarily averse to
losses because the ability to obtain and preserve resources has always been key to survival.
There have been studies concluding that on average, people perceive a loss four times as
intensely compared to a gain of the same objective value: If you lose 5 dollars, it is four
times as painful compared to the gratification of finding 5 dollars lying on the ground. You
need to find 20 dollars to have a comparable intensity of feeling as when you lose the mentioned
5. The bug/defect/incident is perceived as a loss for the team's product, especially if it's a
regression. A small bug can therefore be perceived as much more important than a newly delivered
valuable feature.</p><p>Don't get me wrong—I am not saying that bugs are not worth fixing or that they don't require any
attention. That is obviously not true. One of the key principles of scrum is to deliver a
functional, <em>potentially releasable</em> product increment in every sprint. That means that
development quality is fundamental and teams should always aim at developing a debt-free
product. Nonetheless, bugs will always have to be dealt with.</p><h2>Bugs caused by newly added code</h2><p>When working on a sprint backlog, the team needs to set up a system to validate the increment
they’ve just developed. The goal is to make sure that at the end of the sprint, a feature is
free of debt, and can be potentially released. Our experience shows that during a sprint backlog
development, the team should focus on removing any bugs related to the newly developed features
as quickly as possible in order to keep the feedback/verification loop as short as possible.
This approach maximizes the probability that a newly developed user story is done by the end of
the sprint and that it is potentially releasable.</p><p>Sometimes there are just too many bugs and it becomes clear that not everything planned in the
sprint backlog can be realistically achieved. The daily scrum is the opportunity to point this
out. The development team and the product owner together can then concentrate their efforts on a
smaller number of in-progress user stories (and related bugs). It is always better to get one
user story done by the end of the sprint than to have ten stories halfway finished. Of course,
all bugs should be recorded transparently in the backlog.</p><p>Remember, a user story is an explanation of the user's need that the product tackles, together
with a general boundary within which the developed solution must lie. A common pitfall is that
the product owner decides on the exact way of developing a feature (e.g., defines the exact UI or
technical workflow) and insists on it, even though it is just her personal preference. This
approach not only reduces the development team's options to come up with the most effective
solution but also inevitably increases the probability of a deviation, thus increasing the
number of bugs as well.</p><h2>Regressions and bugs related to past development</h2><p>I think it's important to treat bugs (or rather their fixes) introduced before the current sprint
as regular backlog items and prioritize them accordingly. Whenever an incident or regression is
discovered, it must go into the backlog and decisions need to be made: What will be the benefit
of that particular bug fix compared to other backlog items we can work on? Has the bug been
introduced just now or have the users already lived with it for some time and we just did not
know it? Do we know the root cause and are we able to estimate the cost needed to fix it? If
not, how much effort is worth putting into that particular bug fix, so that the cost/benefit
ratio is still on par with other items on the top of the backlog?</p><p>By following this approach, other backlog items will often be prioritized over the bug fix, which
is perfectly fine. Or the impact of the bug might be so negligible that it's not worth keeping
it in the backlog at all. One of the main scrum principles is to always invest the team's
capacity in stuff that has the best return on invested time/costs. When the complexity of a fix
is unknown, we have good experience with putting a limit on the invested capacity. For instance,
we said that at the present moment, this particular bug fix is worth investing 5 story points
for us. If the developers managed to fix the issue, great. If not, it was abandoned and
re-prioritized with this new knowledge. By doing this, we mitigated the situations when
developers dwell on a single bug for weeks without being able to fix it.</p><p>I think keeping a separate bug-log greatly hinders transparency, and it’s a sign that a product owner
gives up on making decisions that really matter and refuses to admit the reality.</p><h2>Final words</h2><p>I believe all backlog items should be approached equally. A bug fix brings value in a similar way
as a new functionality does. By keeping bug fixes and new features in one common backlog and
constantly questioning their cost/benefit ratio, we can keep the team going forward, and ensure
that critical bugs don't fall through.</p>||#scrum;#agile;#project-management;#release-management|
|Scrum smells, pt. 4: Dreadful planning||https://mobileit.cz/Blog/Pages/scrum-smells-4.aspx||Scrum smells, pt. 4: Dreadful planning||<p>In a few of our past projects, I encountered a situation that might sound familiar to you:
Developers are getting towards the end of a sprint. The product owner seems to have sorted the
product backlog a bit for the sprint planning meeting - he changed the backlog order somewhat
and pulled some items towards the top because he currently believes they should be added to the
product rather soon. He added some new things as well because the stakeholders demand them. In
the meantime, the team works on the development of the sprint backlog. The sprint ends, the team
does the end-of-sprint ceremonies, and off to planning we go.</p><p>At the planning meeting, the team sits down to what seems to be a groomed backlog. They go
through the top backlog items with the product owner, who explains what he has prioritized. The
team members try to grasp the idea and technical implication of the backlog items and try their
best to plan them for development. But they find out that one particular story is very complex
and can't be fitted within a sprint, so they negotiate with the product owner about how to
meaningfully break it down into several smaller pieces. Another item has a technical dependency
on something that has not been done yet. The third item has a functional dependency - meaning it
won't work meaningfully unless a different story gets developed. The fourth item requires a
technology that the developers haven’t had enough experience with. Therefore, they are unable to
even remotely tell how complex it is. And so on it goes - the team members dig through the
“prepared” backlog, try to wrap their heads around it, and finally find out that every other
story can't be worked on for some reason.</p><p>One possible outcome is that such items are skipped, and only the items that the team feels
comfortable with are planned into the sprint backlog. Another outcome is that they will want to
please the product owner and “try” to do the stuff somehow. In any case, the planning meeting
will take hours and will be a very painful experience.</p><p>In both cases, the reason is poor planning. If there ever was a planned approach by the product
owner towards the backlog prior to the planning meeting, it was naive, and now it either gets
changed vastly, or it gets worked on with many unknowns - making the outcome of the sprint a
gamble.</p><h2>What went wrong?</h2><p>One might think all the planning occurs exclusively at the planning meeting. Why else would it be
called a planning meeting? Well, that is only half true. The purpose of the planning meeting is
for the team to agree on a realistic sprint goal, discuss with the product owner what can or
cannot be achieved within the upcoming sprint, and create a plan of attack. Team members pull
the items from the top of the backlog into the sprint backlog in the way that best serves that
goal. It is a ceremony that actually starts the sprint, so the team sets off
developing the stuff right away.</p><p>In order to create a realistic sprint plan that delivers a potentially releasable product
increment with a reasonable amount of certainty, there has to be enough knowledge and/or
experience with what you are planning. The opposite approach is called gambling.</p><h2>Definition of ready</h2><p>It is clear that the backlog items need to fulfill some criteria before the planning meeting
occurs. These criteria are commonly referred to as a “definition of ready” (DoR). Basically, it
is a set of requirements set by the development team, which each backlog item needs to meet if
the product owner expects it to be developed in upcoming sprints. In other words, the goal of
DoR is to make sure a backlog item is immediately actionable, the developers can start
developing it, and it can realistically be finished within a sprint.</p><p>We had a good experience with creating a DoR with our teams. However, we also found that this looks
much easier at first glance than it is in practice. But I believe it is definitely worth the
effort, as it will make predictions and overall workflow so much smoother.</p><p>DoR is a simple set of rules which must be met before anyone from the scrum team can say “we put
this one into the sprint backlog”. They may be dependent on the particular product or project,
and they can be both technical and business-sided in nature, but I believe there are several
universal aspects to them as well. Here are some of our typical criteria for determining if
a backlog item satisfies the DoR:</p><ul><li>The item has no technical or business dependencies.</li><li>Everyone from the team understands the item's meaning and purpose completely.</li><li>We have some idea about its complexity.</li><li>It has a very good cost/benefit ratio.</li><li>It is doable within one sprint.</li></ul><p>There are usually more factors (such as a well-written story definition, etc.), but I picked the
ones that made us sweat the most to get them right.</p><h2>Putting backlog refinement into practice</h2><p>This is a continuous, never-ending activity whose sole goal, in my opinion, is getting
the DoR fulfilled. As usual, the goal is simple to explain, but in practice not easy to achieve.
Immature teams usually see refinement activities as a waste of time and a distraction from the
“real work”. Nonetheless, our experience has proven many times that if we don't invest
sufficient time into the refinement upfront, it will cost us dearly in time (not so much) later
in the development.</p><p>So, during a sprint, preparing the ground for future sprints is a must. The development team must
take this into account when planning the sprint backlog. Refinement activities will usually
occupy a non-negligible portion of the team's capacity.</p><p>The product owner and the team should aim at having at least a sprint or two worth of stuff in
the backlog, which meets the DoR. That means there needs to be a continuous discussion about the
top of the backlog. The rest of the scrum team should challenge the product owner to make sure
nothing gets left there just “because”. Why is it there? What is its purpose and value in the
long term?</p><p>Once everyone sees the value, it is necessary to evaluate the cost/benefit ratio. The devs need
to think about how roughly complex it will be to develop such a user story. In order to do that,
they will need to work out a general approach for the actual technical implementation and
identify its prerequisites. If they are able to figure out what the size roughly is, even
better.</p><p>However, from time to time, the devs won't be able to estimate the complexity, because the nature
of the problem will be new to them. In such cases, our devs usually assigned someone to do
research on the topic to roughly map the uncharted area. The knowledge gained was then used to
size the item (and also later on, in the actual development). This research work is also tracked
as a backlog item with its intended complexity, to roughly cap the amount of effort worth
investing into it.</p><p>Now with the approximate complexity established, the team can determine whether the item is not
too large for a sprint. If it is, then back to the drawing board. How can we reduce or split it
into more items? In our experience, in most cases, a user story could be further simplified and
made more atomic to solve the root of the user's problem. Maybe in a less comfortable way for
him, but it is still a valuable solution - remember the Pareto principle. The product owner
needs the support of the devs to know how “small” a story needs to be, but he must be willing to
reduce it, and not resist the splitting process. All of the pieces of the “broken down” stories
are then treated as separate items with their own value and cost. But remember, there always
needs to be a user value, so do vertical slicing only!</p><p>Then follows the question: “Can't we do something with a better ratio between value and cost
instead?” In a similar fashion, the team then checks the rest of the DoR. How are we going to
test it? Do we need to figure something out in advance? Is there anything about the UI that we
need to think about before we get to planning?</p><p>Have we taken all dependencies into account? <strong>Are we able to start developing it and get
it done right away?</strong></p><h2>Let the planning begin!</h2><p>Once all the questions are answered, and both the devs and the product owner feel comfortable and
familiar with the top of the backlog, the team can consider itself ready for the planning
meeting.</p><p>It is not necessary (and in our case was also not common) for all devs to participate in the
refinement process during a sprint. They usually agreed on who is going to be helping with the
refinement to give the product owner enough support, but also to keep enough devs working on the
sprint backlog. At the planning meeting, the devs just reassure themselves that they have
understood all the top stories in the same way, recap the approach to the development,
distribute the workload and outline a time plan for the sprint.</p><p>The sprint retrospective is also a good time to review the DoR from time to time, in case the
team encounters problematic patterns in the refinement process itself.</p><p>Proper and timely backlog refinement will prevent most last-minute backlog changes from
happening. In the long run, it will save money and nerves. It is also one of the major
contributors to the team's morale by making backlog stuff easier to plan and achieve.</p>||#scrum;#agile;#project-management;#release-management|
|Relative Estimates||https://mobileit.cz/Blog/Pages/relative-estimates.aspx||Relative Estimates||<p>
In my past articles related to <a href="/Blog/Pages/scrum-smells-6.aspx">project</a> and <a href="/Blog/Pages/scrum-smells-4.aspx">sprint planning</a>, we touched on the concept of relative estimates. Those articles were
focused more on the planning aspect and the usage of the estimates and less on the actual process of estimation. So let's talk about estimation techniques
my colleagues and I found useful.
</p><p>As I already touched on <a href="/Blog/Pages/scrum-smells-5.aspx">before</a>, there is a huge misunderstanding about what makes a feature
development estimate exact. People intuitively think that an exact estimate is a precise number with no tolerance. Something like 23.5 man-days of work. Not
a tad more or less.
</p><p>How much can we trust that number? I think we all feel not much, unless we know more about how the estimate was created. What precise information did the
estimator base his estimate on? What assumptions did he make about future progress? What risks did he consider? What experience does he have with similar tasks?</p><p>We use this knowledge to make our own assessment of how likely it is that the job's duration will vary from the estimate. What we do is make our own
estimation of a probable range, where we feel the real task's duration is going to be.
</p><p>It is quite a paradoxical situation, isn't it? We force someone to come up with precise numbers so that we can build our own probability model around it.
Wouldn't it be much more useful for the estimate to consider this probability in the first place?
</p><p>That also means that (in my world) a task estimate is never an exact number, but rather a qualified prediction of the range of probability in which a
job’s duration is going to land. The more experience with similar tasks the estimator has, the narrower the range is going to be. A routine task that one
has already done hundreds of times can be estimated with a very narrow range.
</p><p>But even with a narrow range, there are always variables. You might be distracted by someone calling you. You mistype something and have to spend time
figuring it out. Even though those variables are quite small and will not likely alter the job's duration by an order of magnitude, it still makes an
absolutely precise estimate impossible.
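As a small illustration of estimates-as-ranges (a sketch in Python; the field names and the naive range-summing rule are illustrative assumptions, not a prescribed method):

```python
# Sketch: an estimate as a probable range rather than a single number.
# A routine task gets a narrow range; an unfamiliar one a wide range.
from dataclasses import dataclass

@dataclass
class RangeEstimate:
    low: float   # optimistic duration, e.g. in days
    high: float  # pessimistic duration

    def width(self):
        """How uncertain the estimator is: narrower means more experience."""
        return self.high - self.low

routine = RangeEstimate(2.0, 3.0)       # done hundreds of times before
unfamiliar = RangeEstimate(3.0, 10.0)   # new technology, many unknowns

# A naive plan total is itself a range, never a point.
plan_low = routine.low + unfamiliar.low
plan_high = routine.high + unfamiliar.high
print(f"plan: {plan_low}-{plan_high} days")  # plan: 5.0-13.0 days
```

The point is not the arithmetic but the shape of the answer: the honest output of estimation is a range whose width reflects the estimator's experience.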
</p><h2>Linear and non-linear estimates</h2><p>
On top of all that, people are generally very bad at estimating linear numbers due to a variety of cognitive biases. I mentioned some of them here [link:
Wishful plans - Planning fallacies]. So, (not just) from our experience, we have found that it is generally better to make relative estimates.
</p><p>What is it? Basically, you are comparing future tasks against ones you already have experience with. You are trying to figure out whether a given task
(or user story or job or anything else for that matter) is going to be more, less, or similarly challenging compared to a set benchmark. The more the
complexity increases, the more unknowns, and risks there generally are. That is the reason why relative estimates use non-linear scales.
</p><p>One of the well-known scales is the pseudo-Fibonacci numerical series, which usually goes 0, 1, 2, 3, 5, 8, 13, 20, 40, 100. An alternative would be
T-shirt sizes (e.g. XS, S, M, L, XL, XXL). The point is that the further you move up the scale, the bigger the increase in difference from the size below.
That takes out a lot of the painful (and mostly wildly inaccurate) decision-making from the process. You're not arguing about whether an item should be sized 21
or 22. You just choose a value from the list.
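The non-linearity is easy to see by looking at the gaps between neighbouring scale values; the snapping helper below is an illustrative extra I've added for this sketch, not part of any formal method:

```python
# The pseudo-Fibonacci scale from the text, and how the gap between
# neighbouring sizes grows as you move up -- the reason the scale
# discourages false precision at the large end.
SCALE = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

gaps = [b - a for a, b in zip(SCALE, SCALE[1:])]
print(gaps)  # [1, 1, 1, 2, 3, 5, 7, 20, 60] -- the steps keep widening

def snap(raw):
    """Map a raw relative-size guess onto the nearest scale value."""
    return min(SCALE, key=lambda s: abs(s - raw))

print(snap(22))  # no arguing about 21 vs 22: it becomes a 20
```

Any number you could argue about simply collapses onto the nearest allowed value, which is exactly what kills the 21-versus-22 debate.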
</p><p>We had a good experience with playing planning poker: a process in which the development team discusses aspects of a backlog item and then
each developer makes up his mind as to how “big” that item is on the given scale (e.g. the pseudo-Fibonacci numbers). When everyone is finished, all
developers present their estimates simultaneously to minimize any mutual influence.
</p><p>A common practice is that everyone has a deck of cards with size values. When ready, a developer will put his card of choice on the table, face down.
Once everyone has chosen his card, all of the cards are presented.
</p><p>Now each developer comments on his choice. Why did he or she choose that value? We found it helpful that everyone answers at least the following questions:
</p><ul><li>What are similarly complex backlog items that the team has already done in the past?</li><li>What makes the complexity similar to such items?</li><li>What makes the estimated item more complex than already done items, which were labeled with a complexity smaller by one size degree?</li><li>What makes the estimated item less complex than already done items, which were labeled with a complexity higher by one size degree?</li></ul><p>
A few typical situations can arise.
</p><h3>1) Similar estimates</h3><p>
For a matured team and well-prepared backlog items, this is a swift process, where all the individual estimates are fairly similar, not varying much. The
team can then discuss and decide together as to what value it will agree on.
</p><h3>2) An outlying individual estimate</h3><p>
Another situation is that most individual estimates are similar, but one or two are completely different. This might have several causes.
Either the outlying individual has a good idea that no one else has figured out, or he misunderstands the backlog item itself. Or he has not realized all the
technical implications of developing that particular item. Or he sees a potential problem that the others overlook.
</p><p>In such situations, we usually took the following approach. People with lower estimates explain the work they expect to be done. Then the developers with
higher estimates state the additional work they think needs to be done in comparison to the colleagues with lower estimates. By doing this, the difference
in their assumptions can be identified and now it is up to the team to decide if that difference is actually necessary work.
</p><p>After the discussion is finished, the round of planning poker is repeated. Usually, the results are now closer to the first case.
</p><h3>3) All estimates vary greatly</h3><p>
It can also happen that there is no obviously prevailing complexity value. All the estimates are scattered across the scale. This usually happens when
there is a misunderstanding about a backlog item's actual purpose and business approach. In essence, one developer imagines a simple user
function while another sees that a sophisticated mechanism is required.
This is often a symptom of a poorly groomed backlog that lacks mutual understanding among the devs. In this case, it is usually necessary to review the
actual backlog item's description and goal and discuss it with the product owner from scratch. The estimation process also needs to be repeated.
Alternatively, this can also happen to new teams with little technical or business experience of their product in the early stages of development.
</p><h2>It's a learning process</h2><p>
Each product is unique, each project is unique, each development environment is different. That means the development team creates their perception of
complexity references anew when they start a project. It is also a constant process of re-calibration. A few backlog items that used to serve as a benchmark
reference size at the beginning of a project usually need to be exchanged for something else later on. The perception of scale shifts over time.
The team evolves and gains experience. That means the team members need to revisit past backlog items and ask themselves whether they would estimate such an item differently with the experience they have now. It is also useful, at the end of a sprint, to review items that turned out to be far easier or far more difficult than the team initially expected.
What caused the difference? Is there any pattern we can observe and be cautious about in the future? For instance, our experience from many projects shows that work involving integrations with external systems usually turns out to be far more difficult than the team anticipates. So whenever the devs see such a backlog item, the team knows it needs to think really carefully about what could go wrong.
</p><h2>Don't forget the purpose</h2><p>
In individual cases, the team will sometimes slightly overestimate and sometimes slightly underestimate. And sometimes estimates are going to be completely off. But by self-calibrating using retrospective practices and the averaging effect over many backlog items, the numbers can usually be relied on in the long run.
Always bear in mind that the objective of estimating backlog items is to produce a reasonably accurate prediction of the future with a reasonable amount of
effort invested. This needs to be done as honestly as possible given the current circumstances. We won't know the future better unless we actually do the
work we're estimating.
|Scrum smells, pt. 5: Planning fallacies||https://mobileit.cz/Blog/Pages/scrum-smells-5.aspx||Scrum smells, pt. 5: Planning fallacies||<p>As the scrum godfathers said, scrum is a lightweight framework used to deal with complex problems in a changing environment. Whether you
use it for continuous product development or in a project-oriented mode, stakeholders always demand timelines, cost predictions,
roadmaps, and other prophecies of this sort. It is perfectly understandable and justifiable - in the end, the project or product
development is there to bring value to them. And financial profit is certainly one of these values.</p><p>Many of us know how painful the inevitable questions about delivery forecasts can be. When will this feature be released? How long will
it take you to develop this bunch of items? Will this be ready by Christmas? We would, of course, like to answer them in the most honest
way: "I don't have a clue". But that rarely helps, because even though it is perfectly true, it is not very useful and does not help the
management very much. For them, approving project development based on such information would be like writing a blank check.</p><p>I've seen several ways in which people approach such situations. Some just give blind promises and hope for the best, while feeling a bit
nervous in the back of their minds. Others go into all the nitty-gritty details of all the required backlog items, trying to analyze
them perfectly and then give a very definitive and exact answer, while feeling quite optimistic and confident that they have taken
everything into account. Some people also add a bottom line "...if things go as planned".</p><h2>If things go as planned</h2><p>Well, our experience shows that all these approaches usually generate more problems than benefits because the impact of that innocent
caveat "...if things go as planned" proves to be massive and makes the original plan fall far from reality. The problem actually stems from the
very definition of the words project and process. A process is a set of actions, which are taken to achieve an expected result, and this
set is meant to be repeated on demand. On the other hand, a project is a temporary undertaking that aims to deliver a unique outcome or
product. While the process is meant to be triggered as a routine and its variables are well known and defined, a project is always
unique.</p><p>So, a project is something that people do for the first time, to achieve something new. And when we do something for the first time,
there are two kinds of unknowns involved: the known unknowns (knowledge we consciously know we are lacking) and the unknown unknowns
(stuff we don't know and we don't even realize it). Based on the nature and environment of the project and our experience in this field,
we can identify some of the unknowns and risks to a certain degree. But I don't believe there can be a project where all the potential pitfalls are identified upfront - only once you actually implement the project do you know for sure. To identify every risk and analyze future problems and their real impact, we would have to try things out in real life; only then could we be certain about the outcomes, confirming or refuting our initial expectations.</p><p>I am trying to express that uncertainty is part of every project. That means that when planning a project, we need to take that into
account. So when setting up a project and trying to get a grasp of the costs, timeline, and scope, we must understand we're always
dealing with estimates and planning errors. So instead of trying to pretend it doesn't exist and requiring (or providing) a seemingly
"exact and final" project number, I think a more constructive discussion would be about the actual scale of the error. </p><h2>Cognitive biases</h2><p>While the above is generally logically acceptable to rational and experienced people, why do we tend to ignore or underestimate the risks
at the beginning? I believe it's got something to do with how our minds work.</p><p>There is a phenomenon called the <strong>planning fallacy</strong>, first described by psychologists in the 1970s. In essence, they found
that people tend
to (vastly) underestimate the time, costs, and risks of actions while (vastly) overestimating the benefits. The researchers measured how likely various subjects were to finish various tasks within the timeframes the subjects themselves had estimated. Interestingly, over half of the subjects often needed more time to finish the task than even their worst-case estimate.</p><p>The actual thinking processes are even more interesting. Even with past experience of solving a similar problem and a good recollection
of it, people tend to think they will be able to solve it quicker this time. And people genuinely acknowledge that their past predictions (which went wrong in the end) were too optimistic, yet believe that this time their estimate is realistic.</p><p>There is also something called the <strong>optimism bias</strong>, which makes people believe that they are less likely to
experience problems (compared to others). So even though we can have a broad range of experience with something, we tend to think things
will evolve in an optimistic way. We tend to put less weight on the problems we may have already encountered in similar situations,
believing this was "back then" and now we are of course more clever, and we won't run into any problems this time. People tend to think
stuff is going to go well just because they wish for it.</p><p>Another interesting factor is our tendency to take credit for whatever went well in the past, overestimating our influence, while
naturally shifting the reasons for negative events to the outside world - effectively blaming others for what went wrong or blaming bad
luck. This might not be expressed out loud, but it influences our views regardless. This stems from a phenomenon called <strong>egocentric
bias</strong>.</p><h2>Combining psychology with projects</h2><p>So it becomes quite obvious that if we combine the lack of relevant experience (a project is always a unique undertaking up to a certain
degree, remember?) with the natural tendency to wish for the best, we get a pretty explosive mixture.</p><p>We need to understand that not just the project team itself, but also the stakeholders fall victim to the above-mentioned factors. They
also wish for a project to go as they planned and managers rarely like sorting out any problems that stem from a project in trouble that
doesn't evolve as expected.</p><p>Yes, I have met managers who naturally expect considerable risks and don't take positive outcomes for granted. Managers who understand
the uncertainties and will constructively attempt to help a project which slowly deviates from the initial expectations. When we have a
manager who addresses risks and issues factually and rationally, it is bliss.</p><p>But what if that's not the case? Many managers try to transfer the responsibility for possible problems to the project teams or project
managers, while insisting that the project manager must ensure the project "goes as estimated". Usually, their way of supporting a project is
by stressing how important it is to deliver stuff in time and that the team must ensure it no matter what. And that all the features
need to be included, of course.</p><p>Now when you combine the fuse in the form of pressure from stakeholders with this explosive mix, that's when the fireworks start.</p><p>So how to increase the chance of creating a sane plan, keep the stakeholders realistically informed, while maintaining a reasonably
peaceful atmosphere in the development team? I think we can help it by gathering certain statistics and knowing we are constantly under
the effect of cognitive biases. We'll look at this in the next part of this series.</p>||#scrum;#agile;#project-management;#release-management|
|Scrum smells, pt. 7: Wishful plans||https://mobileit.cz/Blog/Pages/scrum-smells-7.aspx||Scrum smells, pt. 7: Wishful plans||<p>
In the preceding parts of the planning series, we were just preparing the ground. So today, let's put all of that to practical use and make some qualified predictions.
You're planning an initial release of a product and you know what features need to be included so that it gets the necessary acceptance of users. Or your
stakeholders are asking you how long it will take to get to a certain feature. Or you have a certain budget for a project and you're trying to figure out
how much of the backlog is the team capable of delivering for that amount of money.
There is a useful metric commonly used in the agile world called development velocity (or team velocity). It expresses the amount of work that a particular team can complete within one sprint on a certain product in a certain environment.
In essence, it's just a simple sum of all the work that the team is able to do during a sprint. It is important to count only the work that actually got to
the state where it meets the definition of done within that particular sprint. So when a team does work worth 50 story points within a sprint, that's the
team's velocity in that given sprint.
Nonetheless, we must expect that there are variables influencing the “final” number. Estimates are not precise, the team might have its members sick or on
vacation and so on. That means that the sprint velocity will vary between the sprints. So as always, the longer we observe and gather data, the more
reliable numbers we can get. Longer-term statistical predictions are usually more precise than short-term ones.
So over time, we can calculate averages. I found it useful to calculate rolling averages over several past sprints because the velocity usually evolves. It
smooths out local dips or highs caused for instance by the parallel vacation of several team members. Numbers from the beginning of a project will probably
not relate very much to values after two years of the team maturing. The team gets more efficient, makes better estimates, and also the benchmark for
estimates usually changes somewhat over the course of time.
That means that we will get an average velocity that represents the typical amount of work that a given team is able to do within one sprint. For instance,
a team that finished 40, 65, 55, 60, 45, and 50 story points in subsequent sprints will have an average velocity of slightly over 50 story points per sprint
over that time period.
Note: If you're a true geek, you can calculate standard deviation and plot a chart out of it. That will give you a probability model.
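The averages above are easy to compute. Here is a minimal Python sketch using the example velocities, including the rolling average and the standard deviation mentioned in the note (the 3-sprint window is an arbitrary choice for illustration):

```python
from statistics import mean, stdev

# Sprint velocities from the example above (story points per sprint)
velocities = [40, 65, 55, 60, 45, 50]

avg = mean(velocities)       # 52.5 -> "slightly over 50 story points per sprint"
spread = stdev(velocities)   # sample standard deviation, ~9.4 points

# A rolling average over the last few sprints smooths out local dips and highs
window = 3                   # window size is an arbitrary choice
rolling = [mean(velocities[i - window + 1:i + 1])
           for i in range(window - 1, len(velocities))]

print(avg, round(spread, 1))
print([round(v, 1) for v in rolling])
```

The spread gives a feel for how much the team's throughput fluctuates, which is exactly what a probability model would be built on.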
</p><h2>Unexpected work's ratio</h2><p>
Now the last factor we need to know in order to be able to create meaningful longer-term plans is the bias between the known and unknown work.
I'll use an example to explain the logic that follows. So let's say we have 10 user stories at the top of our product backlog, worth 200 story points. The
development team works on them and after 4 sprints it gets them done. But when retrospectively examining the work that was actually done within those past 4
sprint backlogs, we see that there was a lot of other (unpredicted) stuff done apart from those original 10 stories. If we've been consistent enough and
have most of the stuff labeled with sizes, we can now see their total size. Let's say 15 unexpected items got done in a total size of 75 story points.
That means we now have an additional metric. We can compare the amount of unexpected work to the work expected in the product backlog. In this particular
example, our ratio for the past 4 sprints is 75:200, which means that for every expected story point of work, there came almost 0.4 additional story points
that we had not known about 4 sprints ago.
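The ratio itself is trivial arithmetic; a quick sketch with the example's numbers:

```python
# Numbers from the example above
known_points = 200        # 10 stories sized upfront in the product backlog
unexpected_points = 75    # 15 unplanned items that also got done over 4 sprints

ratio = unexpected_points / known_points
print(ratio)  # 0.375 -> "almost 0.4" extra points per known point
```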
Again, this evolves over time and you also get more precise numbers as time passes and the team matures. On one of our projects, we came to a long-term
statistic of 0.75 extra story points of unpredictable stuff for every 1 known story point, just to give you some perspective.
Having a measurable metric like this also helps when talking to the stakeholders. No one likes to hear that you keep a large buffer just in case; that's
hard to grasp and managers usually will try to get rid of that in any planning. So a metric derived from experience is much easier to explain and defend.
So back to the reason why we actually started with all these statistics in the first place. In order to provide some qualified predictions, we need to do
some final math.
With considerable consistency, we got to a state where we know the (rough) sizes of items in our backlog and therefore we know the amount of known work. Now
we also know the typical portion of unexpected stuff as a ratio to the known work. And we know the velocity of the team.
We will now add the percentage of unpredicted work to the known work and we get the actual amount of work that we can expect. Dividing by the team's
velocity, we can get to the amount of time the team will need to develop all of it.
Let's demonstrate that with an example:
There's a long list of items in the product backlog and you're interested in knowing how long it will take to develop the top 30 of them. There shouldn't be
any stories labeled with the “no idea” sizes like “100” or “??”. That would skew the calculation considerably, so we need to make sure such items don't exist
there. So in our example, we know the 30 stories are worth 360 story points.
We've observed that our ratio of unpredictable to known stuff is 0.4:1. So 360 * 0.4 = 144. That means that even though we now see stuff for 360 points in our list, it is probable that by the time we finish the last one, we will actually have done another (of course <i>roughly</i>) 144 points of work
that we don't know about yet. So in total, we will have <i>roughly</i> 500 points of work to do.
Knowing our velocity (let's stick with 50 points per sprint), let's divide 500 / 50 = 10. So we can conclude that to finish the thirtieth item in our list,
it will take us <i>roughly</i> 10 sprints. It might be 8 or it might be 12, depending on the deviations in our velocity and the team's maturity.
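Putting the three numbers together, the whole prediction is a couple of lines of arithmetic. A sketch with the example's figures (the ratio and velocity are long-term averages, so the result is a rough estimate, not a promise):

```python
known_work = 360        # story points of the top 30 backlog items
unexpected_ratio = 0.4  # long-term ratio of unplanned to planned work
velocity = 50           # average story points finished per sprint

expected_total = known_work * (1 + unexpected_ratio)  # ~504 points in total
sprints_needed = expected_total / velocity            # ~10 sprints

print(round(expected_total), round(sprints_needed))
```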
</p><h2>Additional decisions we can take</h2><p>
Two common types of questions that we can now answer:
It's the first of January and we have 2-week long sprints with the team from the previous example. Are we able to deliver all of the 30 items by March?
Definitely not. Are we able to deliver them by December? Absolutely. It seems that they will be dealt with sometime around May or June.
We know our budget will last for (e.g.) 4.5 months from now. Will we be able to deliver those 30 items? If things go optimistically well, it might be
the case. But we should evaluate the risk and decide accordingly.
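Both questions reduce to simple date arithmetic. A sketch, assuming a hypothetical year since the example only says "the first of January":

```python
from datetime import date, timedelta

start = date(2024, 1, 1)            # hypothetical year for illustration
sprint_length = timedelta(weeks=2)  # 2-week sprints, as in the example
sprints_needed = 10                 # rough estimate from the example above

finish = start + sprints_needed * sprint_length
print(finish)  # 2024-05-20
```

Ten two-week sprints from January 1st land around May 20th, matching the "sometime around May or June" conclusion, and also showing why a 4.5-month budget (roughly mid-May) is a borderline case.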
How can we act upon this? We can now systematically influence the variables in order to increase our chances of fulfilling the plan. A few options out of many:
</p><ul><li>We can try to raise the team's velocity by adding a developer if that's deemed a good idea.</li><li>We can try to simplify some stories in the backlog to make the amount of known work smaller.</li><li>Or we can push the plan's end date.</li></ul><p>
A warning: some choose to keep everything else constant and try to increase velocity by “motivating” (read: forcing) the team to plan more story points per sprint. I don't need to explain that this is a dead end that, statistically speaking, most likely leads to having
something “fall over” from the sprint backlog. It burdens the team with the unnecessary overhead of having to deal with the consequences of overcommitment
during the sprint and work that won't get done any faster anyway. We can rather review the development tools and processes to see if there is any chance for
velocity improvement, but that should be a permanent and continuous activity for any team regardless of plans.
Planning projects is never an exact process. But there are certain statistics and metrics that can give us guidelines and help us see how realistic various
plans are. We can then distinguish between surefire plans, totally unrealistic plans, or reasonable ones. It can tell us when we should be especially
cautious and take action to increase our chances.
But any predictions will only be as precise as we are transparent and honest with ourselves when getting the statistics. Trying to obscure anything in order
to pretend there are no unforeseen factors or problems will only make the process more unpredictable in the long run.
So hopefully this article will inspire you on how to tackle the future in a more comfortable way.
|Scrum smells, pt. 1: Irregular releases—overview||https://mobileit.cz/Blog/Pages/scrum-smells-1.aspx||Scrum smells, pt. 1: Irregular releases—overview||<p>There's been a lot written about what agile development and scrum should do for a team. After a team has been maturing for some time, it's easy to become blind and
insensitive to phenomena that hinder its effectiveness and restrict its potential. Let's
call them “scrum smells”.</p><p>Some teams are just starting with scrum adoption while others are moderately matured or even
experienced in it. Each level brings with it its own smells and scents. This series will focus
on both basic and advanced challenges that scrum teams commonly encounter. Today, we’ll talk
about the problem of irregular releases.</p><h2>Inability to regularly provide releasable versions</h2><p>One of the basic scrum principles is being able to provide a potentially releasable product
increment as a result of each sprint's effort. I personally believe this is one of the most
valuable and underrated benefits of the whole scrum world. Scrum says that at the end of each
sprint, the development team should produce a piece of software which the product owner can then immediately release and put to productive use, if he so chooses. That means that the software needs
to be working, regression-free and without any work-in-progress stuff. Everyone on the team
needs to have the feeling that there is no debt—be it technical or functional.</p><p>In real life, however, the production builds are provided quite randomly. Every scrum team has gone through this. At the end of the sprint the team would like to provide a final version, but there
are incomplete features or regressions severe enough that this version cannot be put out for
productive use. So in the subsequent sprint the team attempts to fix the bugs, but continues
developing new features in parallel, which introduces a new source of potential problems and
uncertainty. Days and sprints go by and there is always something to be fixed in the current
version.</p><p>This vicious cycle pressures the product owner to finally release
everything that's been implemented so far. There's the ever-present “almost there” syndrome, the elusive moment of a problem-free release. The product owner gets nervous and it's tempting to
try to sneak one more “important” thing into the sprint, because god knows when the next release will take place. A long time-to-market becomes a reality. So-called hardening or
stabilisation sprints occur, where teams just try to fix the product into a usable state.</p><p>Aside from the inevitable demotivation and pressure that arise, this also causes problems with planning
and transparency. You never know where you truly are until you have no work left on the already
developed backlog items.</p><h2>Preparing the ground</h2><p>So how to increase the chance of regular end-of-the-sprint potentially releasable versions
actually happening? This is partially about a shift in mindset. Being able to provide a working,
debt-free, done software increment must be a top priority for the team during the sprint and all
activities need to focus on this one goal.</p><p>It all begins with the backlog refinement. Backlog items must be broken down to pieces as small
as possible in order to give the team a high degree of maneuverability during planning. Oftentimes
creating really atomic user stories is necessary—that means stripping the user story to the very
core of the user's needs and solving it with the most straightforward approach. All the rest
just becomes another backlog item, prioritised independently. Keeping the items too big or just
being too optimistic about keeping some “easy to do” nice-to-have things attached to the user story's core is just naivety, frequently combined with a degree of convenience.</p><p>Then, at sprint planning, the team creates a strategy to manage the risk of discovering a severe
problem shortly before the end of sprint with too little time to actually solve it. It helps to
start the sprint with the riskiest features first and strive to start testing even separate
parts of them as early as possible. This way there's a greater understanding of how much work
there is really left to be done. Low risk items (like simple text changes or UI adjustments) can
be left for later into the sprint.</p><p>The development team must not plan too much work, hoping that this time “we will manage”. The
team must, based on past experience, realistically anticipate extra unexpected work even on
“safe” backlog items.</p><p>And of course there is the well-known Definition of Done. Each team member must understand what
it takes for an item to be considered done and everyone must understand it in the same way. What
is on the DoD checklist? Well, that depends on the particular team, product and technologies
being used. But if a team agrees that the DoD of each item consists of written code, unit or automated tests, documentation, code review, an end-to-end test, and anything else needed, then nobody can claim an item is done until all of this work has been completed. This helps to create a common standard
for quality and for complexity estimates. Strictly adhering to it reduces the risk of debt
accumulation. Missing or unused DoD creates a fertile ground for debts and makes planning almost
impossible.</p><h2>Day-to-day activities and decisions</h2><p>Frequent end-to-end testing during a sprint is absolutely vital. It is a dangerous illusion to create a
single version one day before the sprint's end, test it and expect that all is going to be fine.
That's not planning, that's gambling.</p><p>To enable this, new builds need to be created as often as possible, even several times a day.
CI/CD pipelines are a must. TDD helps a lot. Automated regression tests are a must. Basically
automating as much of the manual hassle as possible removes the reasons why teams usually avoid
making builds regularly. This investment into automation is usually well worth it in the long
run.</p><p>Adding feature switches (or flags) helps. If it's evident that the team is not going to be able to finish a certain backlog item (i.e. fulfill the DoD), it is “switched off” and
it doesn’t interfere with the rest of the software.</p><p>The team must also understand that one done and delivered backlog item is worth far more than ten
items in progress. The daily scrum is an ideal time for the team to evaluate the sprint
backlog's progress and mutually collaborate on pushing in-progress items closer to a “done” state
as quickly as possible. The team needs to learn to constantly re-evaluate where everyone's present effort lies and decide whether there is something more valuable to concentrate on.
All sprint backlog items are the whole team's job, not individual assignments. It is all about
constant re-evaluation as to where to invest the day's efforts in order to maximise the chance of
achieving debt-free software at the sprint's end.</p><p>When a sprint backlog item gets risky and it seems there's not enough time left in the sprint,
the team needs to decide whether it wants to invest more energy in it (to increase the chance of
getting it done, e.g. putting more developers onto it) or to stop working on it altogether and
focus on another item that has a real chance of getting done. Decisions to drop a
sprint backlog item must be consulted with the product owner.</p><p>For more about strategies to achieve regular releases, please check out the follow-up “Scrum Smells pt. 2” post.</p>||#scrum;#agile;#project-management;#release-management|
|Scrum smells, pt. 2: Irregular releases—strategies||https://mobileit.cz/Blog/Pages/scrum-smells-2.aspx||Scrum smells, pt. 2: Irregular releases—strategies||<p><a href="/Blog/Pages/scrum-smells-1.aspx">Last time</a> we talked about irregular releases and
why they happen. So without
further ado, let's pick up where we left off.</p><h2>Sprint strategy continued</h2><p>It's sometimes tempting to introduce code freezes of some sort, meaning a date after which it
isn't allowed to introduce any new code to the sprint's version (only defect fixes are allowed).
While in general this idea is not bad, setting a fixed date reduces a team's maneuverability and
often hinders effectiveness. Fixed code freezes are a waterfall-ish attempt to create a
feeling of control. Needless to say, this is often not an idea of the development team itself,
but rather some company process. Code freezes rob the team of one of the biggest values of
agile—adaptability and room to find creative ways to become more effective.</p><p>Instead, the risk of each sprint backlog item must be evaluated individually (based on its size, complexity, technical sophistication etc.). The team must then decide on the latest reasonable time frame for starting final testing of that item. For a big item, that time frame might be around the middle of the sprint, whereas for a tiny change half a day before the end of the sprint will suffice. After that moment it is risky to include the item in the sprint's product increment. In other words, the team must individually evaluate how much additional work is likely to appear between finishing a story's implementation and completing final testing—and plan for this accordingly. Attempting to set a one-size-fits-all code freeze
date will usually lead to the date being too early for items of small complexity and way too
late for more complex items.</p><h2>The beauty of frequent releases</h2><p>At the end of the sprint, the product owner decides if he wants to actually release the sprint's
version or not. It may not be necessary for many reasons, but he always needs to have the option
to do so. It is generally advantageous to release as often as possible. Releasing frequent small
batches of features induces far less risk of something major going wrong and makes a potential
roll-back not that painful. And it's great for planning too, because there is usually no
“invisible” debt that would otherwise need to be done before a major release.</p><p>Typical symptoms of accumulating technical or business debt are so-called hardening sprints,
release sprints, bug fix sprints or whatever you wish to call them. Their goal is to remove any
debt before actually being able to produce a version that is potentially releasable. This is a
strong smell, as it indicates the inability to produce working versions regularly. It cures
symptoms, but the underlying habitual problem is still there.
It basically means that until the hardening sprint happens, the team has never truly had a
working, potentially releasable version of the product. If it had, the bugs (or debt of any
sort) would already be removed and there would exist no need for an extra time-out to fix stuff.
</p><p>In my opinion this often happens due to an unconstructive pressure put on the development team to
deliver new stuff quicker, created by some external factor. I've seen product owners and even
scrum masters push teams to plan more stuff for a sprint without any underlying justification
stemming from the team's past velocity. It creates a false sense of progress, but with a great
amount of uncertainty. It leads to overcommitments and naive planning, ignoring the fact that
there is “invisible” work that always emerges: defects found, teeny tiny tweaks, refactoring
here and there, small changes, analyses, backlog refinement activity. Ignoring this leads to the
hamster-in-a-wheel effect.</p><p>There is no reason to have so called hardening sprint(s) before release if you knowingly don't
push any debt ahead of you. How can you do any sort of release planning if you don't know how
many problems will appear when you actually attempt to create a releasable version after
many months of development? It is always more valuable (and, needless to say, stress-free) to
have a working, debt-free version after each sprint regularly (with seemingly less new
features), but knowing there are no skeletons hiding in the closet. Finding out problems as
early as possible to know how far you've come and how far you need to go, that's what agile is
about. That's the transparency everyone talks about, right?</p><h2>What about bugs?</h2><p>The Scrum team should understand that Done means that the team is reasonably convinced there is
no more work to be done on the new delivery. No to-dos. No bugs. No technical debt. No wording
that needs to be tweaked, no graphics that are just provisional. But realistically, software is
never bug-free. It can happen that a defect is discovered right before a sprint release. It's
tempting to block the whole release because of it.</p><p>There's an article coming up on managing defects in agile product development, but in short it is
always a good idea to ask yourself if the discovered bug is a regression compared to the last
productively used version. If not (meaning the bug already exists in the current live version),
there is rarely a reason not to release an update even though it does not fix the defect.
And this discovered defect is then prioritised in the backlog in the same way as any other
item.</p><h2>It's a long way</h2><p>I'm not saying it's easy to achieve regular releases. My experience shows that it's not, but that does not mean the scrum team should give up on it.</p><p>Adopting these habits and values gives the product owner a great amount of predictability for
future planning because he knows that he will get something to release on a regular basis. It
also relieves the pressure of something getting postponed for a later date than planned, because
everyone knows that in a short time it's going to be released anyway. Things just suddenly get
much more tidy.</p>||#scrum;#agile;#project-management;#release-management|