A Definition of Done for Architectural Decisions

Evidence, criteria, agreement, documentation, realization/review


It is good to know when the most responsible moment for an architectural decision about a pattern or technology has come. But when can a decision be considered done? This post suggests five criteria to help you decide whether you are ready to move on.


The proposal is inspired by the five SMART criteria used in project and people management (and also in the elicitation of non-functional quality requirements). Let’s look at the five criteria one at a time.

Evidence

You have gained reason to believe that the chosen design (that is, the selected pattern, technology, product, or open-source asset, including its configuration) will work: a) it helps satisfy specific, measurable quality requirements, b) it does not break previous ADs by accident, and c) it is actionable: implementable and deployable in the short term, and manageable and maintainable in the long run (if these are important qualities).

You can gain this evidence in several ways:

  • Implement a proof-of-concept or architectural spike yourself (see the sketch after this list). Coding architects love this option and will probably argue that it is the only one; however, it might not scale or fit every decision-making context.
  • Put such evaluation activity on the backlog, have a qualified team member work on it and analyze the results.
  • Ask somebody you trust to vouch for this design option.
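
To make the first option a bit more concrete, here is a minimal, self-contained sketch of such a spike. It assumes a hypothetical, measurable quality requirement (p95 response time of at most 200 ms) and stubs the call into the candidate technology with `callCandidate()`; both the class name and the budget are made up for illustration:

```java
// Hypothetical architectural spike: gather evidence on whether the candidate
// design meets a measurable quality requirement (here: p95 latency of 200 ms).
import java.util.Arrays;

public class LatencySpike {

    // Stand-in for a call into the candidate technology under evaluation;
    // a real spike would invoke the actual API or prototype here.
    static void callCandidate() throws InterruptedException {
        Thread.sleep(50);
    }

    public static void main(String[] args) throws InterruptedException {
        int runs = 100;
        long[] millis = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            callCandidate();
            millis[i] = (System.nanoTime() - start) / 1_000_000;
        }
        Arrays.sort(millis);
        long p95 = millis[(int) Math.ceil(0.95 * runs) - 1];
        System.out.println("p95 latency: " + p95 + " ms");
        System.out.println("meets 200 ms requirement: " + (p95 <= 200));
    }
}
```

Even a throwaway result like this gives the E criterion something concrete to point to in the decision record.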

Criteria and Alternatives

At least two alternatives have been identified and investigated, and compared against stakeholder concerns and other decision drivers (including their short-term and long-term impact). One is chosen, and the others are rejected (or kept as fallbacks).

To get there, you might want to apply a recognized, systematic evaluation technique, but also be pragmatic. It is not cost-effective to thoroughly establish and evaluate 20+ criteria for 5+ alternatives per AD (you might have to make hundreds of ADs while sprinting!).
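
One pragmatic, (semi-)systematic technique that stays on the cheap end is a small weighted scoring matrix. The sketch below is illustrative only; the criteria, weights, scores, and option names are made up and would have to come from your own decision drivers:

```java
// Minimal weighted-scoring sketch for comparing a few alternatives
// against a handful of decision drivers (all names and numbers are made up).
import java.util.LinkedHashMap;
import java.util.Map;

public class DecisionMatrix {
    public static void main(String[] args) {
        // Decision drivers and their relative weights (summing to 1.0).
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("meets latency requirement", 0.5);
        weights.put("operational effort", 0.3);
        weights.put("team familiarity", 0.2);

        // Scores from 1 (poor) to 5 (excellent), one per criterion above.
        Map<String, int[]> alternatives = new LinkedHashMap<>();
        alternatives.put("Option A: managed message broker", new int[]{4, 5, 3});
        alternatives.put("Option B: self-hosted broker", new int[]{5, 2, 4});

        for (Map.Entry<String, int[]> alt : alternatives.entrySet()) {
            double total = 0;
            int i = 0;
            for (double weight : weights.values()) {
                total += weight * alt.getValue()[i++];
            }
            System.out.printf("%s -> weighted score %.1f%n", alt.getKey(), total);
        }
    }
}
```

Three to five criteria and two or three alternatives per AD are usually enough to make the comparison traceable without slowing the team down.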

You also might want to predefine the criteria across projects (portfolio or company level) to make architectures (and portfolio products) comparable.

Agreement a.k.a. Consensus

At least one mentor or peer and the team have challenged the AD and agree with its outcome and rationale. The amount of “decision socialization” that is adequate depends on project context and decision-making culture. Sometimes the absence of objections in a design workshop or review meeting is enough; sometimes explicit approvals or formal sign-offs by the entire team or by external stakeholders (for instance, a design authority) are required.

  • Agile teams may differ from those applying more traditional, plan-driven methods: often all team members participate in decision making, and decentralization and autonomy are emphasized.
  • The wider a decision reaches, the more buy-in is required. All relevant stakeholders should be involved early; otherwise they might fight the decision because one of their key concerns was not considered (or because they felt left out). “Early” means early enough from the recipients’ point of view: last-minute requests for comments and approval are usually not appreciated. People are usually more willing to comment if they know that something is coming their way (and that their feedback matters); they might even be willing to block time for commenting.

Documentation

The decision has been captured and shared, preferably in a lean and light template such as a Y-statement. Other Architecture Decision Record (ADR) notations are fine too, as long as they are used consistently and continuously.
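
For reference, a Y-statement captures a decision in a single sentence roughly of the form: “In the context of <use case/user story>, facing <concern>, we decided for <option> and neglected <other options>, to achieve <quality>, accepting <downside>.”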

  • The justification should provide a convincing rationale referencing the requirements and the information gathered to meet E, C, and A; see the previous post for examples of good and bad justifications.
  • The decision record must be made available to all affected parties (for instance, announced and “published” in a collaboration or document sharing tool).

Realization and Review Plan

To be effective, a decision must be executed upon once it has been made; this work has been scheduled.

It has also been identified when to evaluate whether the AD has been implemented as intended and whether the resulting design indeed works as desired.

You have also looked ahead and planned when to talk about the AD in a review meeting or retrospective. You may want to answer questions such as:

  • Are we (still) content with the AD outcome?
  • Are there new alternatives (options)?
  • When will we revisit and possibly revise this decision (expiration)?

Checklist (Quick Test)

If the above criteria discussion was too verbose for your taste, how about:

  1. Are we confident enough that this design will work (E)?
  2. Have we decided between at least two options, and compared them (semi-)systematically (C)?
  3. Have we discussed the decision among ourselves and with peers just enough and come to a common view (A)?
  4. Have we captured the decision outcome and shared the decision record (D)?
  5. Do we know when to realize, review and possibly revise this decision (R)?

If you can answer “yes” five times in this quick test, you are done with an AD. If any “yes” is missing, you might have to invest a bit more time, at least to justify why that criterion does not apply to this particular AD.

Example

Let’s have a look at how we met the `D` criterion when we decided on a template engine to generate MDSL in the Context Mapper project:

Figure: the D (Documentation) part of ecADR, illustrated by a Y-statement for a sample decision

The full example elaborating on all five criteria can be found in the long version of this post on my personal website.

Concluding Thoughts

The take-away messages from this post are:

  • While it is important to know when the most (vs. last) responsible moment for an architectural decision has come, it is equally important to know when it has passed and you are DONE-done with an AD.
  • A checklist or quick test can help the team to agree that it is actually time to move on.
  • I proposed five criteria, E, C, A, D, R: presence of Evidence, Criteria, Agreement, Documentation, and a Realization and Review plan.
  • A criteria-based checklist can remove ambiguities and cut unnecessary, inefficient discussions short by clarifying the difference between done and DONE-done.

Some ADs take longer than others (to make and agree upon). The strategic buy-or-build-or-rent decision for a company-wide customer relationship management system will require significant E, C, and A work, while the tactical decision to wrap calls to a messaging system or cloud provider to promote portability (hopefully) reaches the DONE-done state much sooner (but also might have to be revised more often).
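
For the latter kind of tactical decision, the realized design can be as small as a thin wrapper interface plus one adapter per provider. The sketch below is purely illustrative; the names are hypothetical and not tied to any particular messaging or cloud SDK:

```java
// Hypothetical portability wrapper: application code depends only on this
// interface, so swapping the messaging provider means adding another adapter.
public interface MessagePublisher {
    void publish(String topic, String payload);
}

// One adapter per provider; only this class would know the vendor-specific API.
class ConsoleMessagePublisher implements MessagePublisher {
    @Override
    public void publish(String topic, String payload) {
        // A real adapter would delegate to the vendor SDK here.
        System.out.printf("[%s] %s%n", topic, payload);
    }
}
```

Revising such an AD later then touches the adapters, not the application code that publishes messages.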

You are never done with decision making as a whole: one AD typically triggers others immediately, and ADs that have been made age and therefore might require your attention later. There will always be a backlog of less important or less urgent ADs, new ones, and ones to be revisited due to technology evolution and feedback from customers, operators, or other stakeholders.

I hope you find the five criteria and the checklist useful. Your feedback is appreciated — do the above five criteria work for you? Did I miss a criterion (checklist item)? Contact me!

Olaf (always appreciating a story clap)

Next story about decisions: “From Architectural Decisions to Design Decisions”

© Olaf Zimmermann, 2020. All rights reserved.
