Model Reviews: Best Practice or Process Smell?

A model review, also called a model walkthrough or a model inspection, is a validation technique in which your modeling efforts are examined critically by a group of your peers. The basic idea is that a group of qualified people, often both technical staff and project stakeholders, get together in a room to evaluate a model or document. The purpose of this evaluation is to determine whether the models not only fulfill the demands of the user community but are also of sufficient quality to be easy to develop, maintain, and enhance. When model reviews are performed properly, they can have a big payoff because they often identify defects early in the project, reducing the cost of fixing them. In fact, in the book Practical Software Metrics for Project Management and Process Improvement, Robert Grady reports that, for project teams taking a serial (non-agile) approach, 50 to 75 percent of all design errors can be found through technical reviews.

This article discusses:

  1. Types of Model Reviews
  2. Steps of a Review
  3. Why You Want to Avoid Reviews
  4. When Should You Hold a Review
  5. Holding Effective Reviews (If You Can't Avoid Them)

1. Types of Model Reviews

There are different "flavors" of model review. A requirements review is a type of model review in which a group of users and/or recognized experts review your requirements artifacts. The purpose of a requirements review is to ensure your requirements accurately reflect the needs and priorities of your user community and to ensure your understanding is sufficient to develop software from. Similarly, an architecture review focuses on reviewing architectural models and a design review focuses on reviewing design models. As you would expect, the reviewers for these are often technical staff.


2. Steps of a Review

Regardless of the type of model review, the basic steps are the same. The steps of a formal review (informal reviews will be discussed later) are:

  1. The team prepares for review. The artifacts to be reviewed are gathered, organized appropriately, and packaged so they can be presented to the reviewers.

  2. The team indicates it is ready for review. The project team must inform the review facilitator (often a member of your quality assurance department, if you have one) or another project manager when it is ready to have its work reviewed as well as what the team intends to have reviewed.

  3. The review facilitator performs a cursory review. The first thing the review facilitator must do is determine if the project team has produced work that is ready to be reviewed. The facilitator will probably discuss the team's work with the team leader and do a quick rundown of what the team has produced. The main goal is to ensure the work to be reviewed is good enough to warrant getting a review team together.

  4. The review facilitator plans and organizes the review. The review facilitator must schedule a review room and any equipment needed for the review, invite the proper people, and distribute ahead of time any materials needed for the review. This includes an agenda for the review, as well as the artifacts to be reviewed. The review package may also contain supporting artifacts - artifacts the reviewers may need handy to understand the artifacts they are reviewing. Supporting artifacts are not meant to be reviewed; they are only used as supplementary resources. The review package often includes the standards and guidelines your team is following, so the reviewers can understand your team's development environment.

  5. The reviewers review the package prior to the review. This enables the reviewers to become familiar with the material and prepared for the review. Reviewers should note any defects, issues, or questions before the review takes place. During the review, they should be raising previously noted issues, not reading the material for the first time.

  6. The review takes place. Reviews can take anywhere from several hours to several days, depending on the size of the material being reviewed. The best reviews are less than two hours long, so as not to overwhelm the people involved. The entire development team should attend, or at least the people responsible for what is being reviewed, to answer questions and to explain/clarify their work. There are typically between three and five reviewers, as well as the review facilitator, all of whom are responsible for the review. All material must be reviewed because it is too easy to look at something quickly and assume it is correct. The job of the review facilitator is to ensure everything is looked at and everything is questioned. The review scribe should note each defect or issue raised by the reviewers. Note that most reviews focus on the high-priority items identified by the reviewers; low-priority defects are simply written down and handed to the authors, who then address these less critical defects without taking up review time. At the end of the review, the artifacts are judged, the typical outcome being one of: passed, passed with exceptions, or failed. For reviews where several artifacts were looked at - perhaps you reviewed your use-case model and user interface prototype simultaneously - the outcome may be broken down by artifact, so the model may pass but the prototype fail.

  7. The review results are acted on. A document is produced during the review describing both the strengths and weaknesses of the work being reviewed. This document should describe each weakness, explain why it is a weakness, and indicate what needs to be addressed to fix it. This document is then given to the project team, so it can act on it, and to the review facilitator to be used in follow-up reviews. The work is inspected again in follow-up reviews to verify the weaknesses were addressed.

As you can see, formal reviews can be a time-consuming process. Informal reviews take a more streamlined approach, typically distributing the artifacts to several reviewers and asking them for their comments. The comments are then gathered and acted on by the team, who then release a new version of the artifact.
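The judgment step at the end of a formal review can be sketched in code. This is a minimal illustrative model, not part of any real tool: the class names, the `fail_threshold` parameter, and the rule that the count of high-priority defects determines the outcome are all my own assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    PASSED = "passed"
    PASSED_WITH_EXCEPTIONS = "passed with exceptions"
    FAILED = "failed"

@dataclass
class Defect:
    description: str
    high_priority: bool = False  # low-priority defects go straight to the authors

@dataclass
class ReviewRecord:
    artifact: str
    defects: list = field(default_factory=list)

    def outcome(self, fail_threshold: int = 3) -> Outcome:
        # Judge the artifact: many high-priority defects fail it,
        # a few pass it with exceptions, none pass it outright.
        # (The threshold of 3 is an arbitrary illustrative choice.)
        high = sum(1 for d in self.defects if d.high_priority)
        if high >= fail_threshold:
            return Outcome.FAILED
        if high > 0:
            return Outcome.PASSED_WITH_EXCEPTIONS
        return Outcome.PASSED
```

Because the outcome is recorded per artifact, a review that covers both a use-case model and a user interface prototype naturally produces a separate `ReviewRecord`, and thus a separate judgment, for each.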

Minimally, reviews lengthen your project schedule while you wait on the reviewers. In the worst case they increase your cost of change if you decide to go at risk and not wait for the feedback - because by the time the reviewers finally detect the defects you have already done work based on them, and that work will also need to be fixed.


3. Why You Want to Avoid Reviews

The fundamental reason why you should question the practice of holding reviews or inspections is that their feedback cycle is much longer than that of agile techniques for detecting potential defects. As a result, the average cost of fixing a defect is much higher, as you can see in Figure 1. Furthermore, all a review says is that, in the opinion of the reviewers, the artifact being reviewed is correct - you don't actually know that it's correct. That's important to understand: reviews reflect opinion, not fact.

Figure 1. Comparing the effectiveness of defect detection strategies.


Just as source code can have "bad smells" (Fowler 1999) that indicate a problem may exist that you need to address, the desire to hold a model review may similarly be considered a process smell indicating that you need to rethink your process. Here are some potential problems that model reviews may be hiding:

  1. Serial development. Model reviews often make sense in traditional environments when you are handing off a model from one group to another, often when the requirements model is provided to the design team or the design model is provided to the programming team. Hand-offs are a leading indicator that you're following an overly serial approach or that your team has too many specialists on it (instead, you want people to be generalizing specialists).

  2. Poor communication/collaboration within the team. When people do not work with one another effectively, when they work on their own or when they do not share their work with others, then there is potential for them to unknowingly inject defects into their work. Agile modelers follow the practices "model with others" and "collective ownership", in effect holding "mini reviews" as they work.

  3. You're not producing working software. Teams that are unable to produce working software quickly become desperate to show that they're getting something done, and when all you've accomplished is a bunch of paperwork then reviews of that paperwork start to sound like a good idea.

  4. You don't have the right people involved with the project. Model reviews make sense when people outside of your team exist that could provide valuable insights to your team. Wouldn't it be better to have those people involved in your modeling efforts in the first place?

  5. Bureaucrats need to justify their existence. Reviews are easy opportunities for people not directly involved with software development to justify their existence - they can spend days or weeks preparing for a review, they can attend the review, and they can then spend more time writing reports after the review. If these people actually have value to add, they should be part of the project team; if they don't, they should get out of the way of the people who are actually doing the work.


It seems clear to me that whenever a model review seems like a good idea, you should step back and ask yourself whether you really have a process problem that can be better resolved another way. You should also question whether the model/document is even required, because it's fairly likely that the TAGRI (they ain't gonna read it) principle applies.


4. When Should You Hold a Review?

There are several situations where it makes sense to hold reviews:

  1. Regulatory requirements. When your project is subject to regulations, such as the Food and Drug Administration (FDA)'s 21 CFR Part 11, you may by law be required to hold some reviews. My advice is to read the relevant regulations carefully, determine if you actually are subject to them, and if so, how much additional work you actually need to do to conform to them. If you let the bureaucrats interpret the regulations, you will likely end up with an overly bureaucratic process.
  2. A work product wasn't created collaboratively. I'm eager to review artifacts where only one or two people have actively worked on them AND which can't be proven with code (e.g. user manuals, operations documents, …). These situations do occur: perhaps only one person on your team has technical writing skills, so they've taken over the majority of your documentation efforts. Or perhaps your team is distributed/dispersed and you simply can't overcome this environmental challenge. Yes, they should still work with others to accomplish this, but there often aren't as many eyes on these artifacts and therefore you're at risk. Furthermore, the cost of producing and deploying documentation may be much higher than that of software, so the motivation is higher to get it right the first time.
  3. To prove to stakeholders that things are going well. I don't mind holding requirements reviews early in the development of a major release of a system to help the overall audience for my system gain confidence in the development team. Although we may have several project stakeholders working directly with the team, we could have hundreds or even thousands that don't know what's going on. A review can be a good way to show everyone that we're doing great work, and that it in fact is possible to produce working software on a regular basis (assuming that you're also reviewing your work to date as well as your requirements artifacts). Initial reviews such as this can also go a long way toward showing the traditionalists that reviews aren't as effective as they think when there isn't much in the way of solid feedback produced. Furthermore, initial reviews such as this provide your team with an opportunity to assess whether the project stakeholders who are actively participating with your team truly do represent the overall community - if not, you need to change your team.
  4. To ensure your overall architectural strategy is viable. At the beginning of a project an architecture review, at least an informal one, can be invaluable. When your system potentially needs to interface to other systems you want to make sure that what you're proposing is possible. Of course, it would be more effective to simply work with the owners of that system to begin with as you're formulating your architecture. An interesting side effect of architectural reviews is political in nature - it sends out a loud and clear message to the rest of your IT organization that your team has a handle on the technical aspects of what it is that you're trying to accomplish.

  5. You honestly need outside guidance. Another viable situation is when your team is new to agile development and you want to make sure that you're "doing it right". In this case you would want to involve reviewers who are experienced at agile development (you may need to bring in consultants if your organization is completely new to agility) to get their feedback. However, you'd still be better off involving these experts directly in your project to begin with.


5. Holding Effective Reviews (If You Can't Avoid Them)

If you are going to hold a review, the following pointers should help you to make it effective:

  1. Hold a review as a last resort. The reality is that model reviews aren't very effective for agile software development. Teams that are co-located with an on-site customer have much less need of a review than teams that are not co-located.
  2. Get the right people in the review. You want people, and only those people, who know what they're looking at and can provide valuable feedback. Better yet, include them in your development efforts and avoid the review in the first place.
  3. Review working software, not models. My belief is that model and documentation reviews are popular with project stakeholders because the traditional, near-serial development approach currently favored within many organizations provides little else for them to look at during most of a project. However, because the iterative and incremental approach of agile development techniques tightens the development cycle, you will find that user acceptance testing can replace many model review efforts. My experience is that, given the choice of validating a model or validating working software, most people will choose to work with the software.
  4. Stay focused. This is related to maximizing value: you want to keep reviews short and sweet. The purpose of the review should be clear to everyone; for example, if it's a requirements review, don't start discussing database design issues. At the same time, recognize that it is okay for an informal or impromptu model review to "devolve" into a modeling/working session, as long as that effort remains focused on the issue at hand.
  5. Understand that quality comes from more than just reviews. Reviews are one of many ways to achieve quality, but when used alone, they result in little or no quality improvement over the long run. In application development, quality comes from developers who understand how to build software properly, developers who have learned from experience and/or have gained these skills from training and education. Reviews help you to identify quality deficits, but they will not help you build quality into your application from the outset. Reviews should be only a small portion of your overall testing and quality strategy.
  6. Set expectations ahead of time. The expectations of the reviewers must be realistic if the review is to run smoothly. Issues that reviewers should be aware of are:
  • The more detail a document has, the easier it is to find fault.
  • With an evolutionary approach your models aren't complete until the software is ready to ship.
  • Agile developers are likely to be traveling light and therefore their documentation may not be "complete" either.
  • The more clearly defined a position on an issue, the easier it is to find fault.
  • Finding many faults may often imply a good, not a bad, job has been performed.
  • The goal is to find gaps in the work, so they can be addressed appropriately.
  7. Understand you cannot review everything. If you do not have time to inspect everything, and you rarely do, then you should prioritize your artifacts on a risk basis and review the ones that present the highest risk to your project if they contain serious defects. The implication is you need to distinguish between the critical portions of your requirements model that must be formally reviewed and the portions that can be informally reviewed, most likely by use-case scenario testing or an informal walkthrough.
  8. Focus on communication. Reviews are vehicles for knowledge transfer; they are opportunities for people to share and discuss ideas. However, working closely with your coworkers and project stakeholders is even more effective for this. This philosophy motivates agile developers to avoid formal reviews, due to their restrictions on how people are allowed to interact, in favor of other model validation techniques.
  9. Put observers to work. People will often ask to observe a review, either to become trained on the review process or to get updated on the project. These are both good reasons, but do they require the person to simply sit there and do nothing? I don't think so. If these people understand what is being reviewed and have something of value to add, then let them participate. Observers don't need to be dead weight.
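The risk-based triage described above can be sketched as a few lines of code. This is a minimal sketch under my own assumptions: the artifact names, the numeric risk scores, and the cut-off of 7 are all illustrative, not a prescribed scoring scheme.

```python
# Risk-based triage: formally review only the highest-risk artifacts,
# and handle the rest with informal walkthroughs.
artifacts = [
    ("use-case model", 9),      # scores are illustrative, e.g. 1 (low) to 10 (high)
    ("UI prototype", 4),
    ("deployment diagram", 7),
    ("glossary", 2),
]

FORMAL_REVIEW_THRESHOLD = 7  # arbitrary cut-off for this example

# Rank artifacts by risk, highest first
ranked = sorted(artifacts, key=lambda a: a[1], reverse=True)

formal = [name for name, risk in ranked if risk >= FORMAL_REVIEW_THRESHOLD]
informal = [name for name, risk in ranked if risk < FORMAL_REVIEW_THRESHOLD]
```

The point of the sketch is simply that the decision of what to review formally should fall out of an explicit risk ranking, rather than reviewing everything or whatever happens to be finished.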

This article has been expanded upon from The Object Primer 3rd Edition: Agile Modeling Driven Development with UML 2.