Tuesday, September 18, 2018

Using OKRs to define Project Success

One of the main reasons why projects fail is that project success is never defined. A project can only be successful if the success criteria are defined, ideally upfront. Unfortunately, I have seen many projects that skipped this part completely.

When starting a project, it's essential to work actively with the organization that owns the project to define success across three levels:

1) Project delivery
2) Product or service
3) Business

Project delivery success

Project delivery success is about defining the criteria by which the process of delivering the project is successful. Essentially this addresses the classic triangle "scope, time, budget". It is limited to the duration of the project and success can be measured as soon as the project is officially completed (with intermediary measures being taken of course as part of project control processes). 

Product or service success

Product or service success is about defining the criteria by which the product or service delivered is deemed successful (e.g. system is used by all users in scope, uptime is 99.99%, customer satisfaction has increased by 25%, operational costs have decreased by 15%, etc.). These criteria need to be measured once the product/service is implemented and over a defined period of time. This means it cannot be measured immediately at the end of the project itself.

Business success

Business success is about defining the criteria by which the product or service delivered brings value to the overall organization, and how it contributes financially and/or strategically to the business. For example: financial value contribution (increased turnover, profit, etc.), competitive advantage (market share won, technology advantage), etc.

Overall project success

These levels combined will determine your overall project success. You can be successful on one level but not others. This sounds all good in theory, but in practice, it is not so easy to define the criteria for levels 2 and 3. This is one of the main reasons why so many organizations only look at level 1: scope, time and budget. They are easy to define and measure.

Personally, I think level 1 matters very little if levels 2 and 3 are not met. So in spite of the “project management triangle”, the fact is that delivering a project on time, in scope and in budget is not enough. The project must be delivered successfully – meaning that the objective(s) that motivated the project in the first place have to be reached.

This is where so-called OKRs come into the game.

What are OKRs?

OKR is an abbreviation for Objectives and Key Results. The concept was invented at the Intel Corporation and is widely used among the biggest technology companies in the world, including Google and Zynga.

OKRs are originally meant to set strategy and goals over a specified amount of time for an organization and teams. At the end of a work period, your OKRs provide a reference to evaluate how well you did in executing your objectives.

You can use the same concept for defining project success. Spending a concerted effort in identifying your project strategy and goals, and laying it out in a digestible way with OKRs can truly help your project team and stakeholders see how they are contributing to the big picture and align with other teams.


Objectives

Any project has one or more objectives. The goal of setting an objective is to write out what you hope to accomplish so that, at a later time, you can easily tell whether you have reached, or have a clear path to reaching, that objective. Choosing the right objectives is one of the hardest things to do and requires a great deal of thinking and courage to do well.

Key Results

Assuming your Objectives are well thought through, Key Results are the secret sauce to using OKRs. Key Results are numerical expressions of success or progress towards an Objective. All Key Results have to be quantitative and measurable. As Marissa Mayer, a former Vice President at Google, said:

“If it does not have a number, it is not a Key Result.”                                   

The important element here is measuring success. It’s not good enough to make broad statements about improvement (that are subjectively evaluated). We need to know how well we are succeeding. Qualitative goals tend to under-represent our capabilities because the solution tends to be the lowest common denominator.

For example, if I create a goal to “launch new training for the sales team” I might do that for one sales member. If I alternatively make a Key Result of “train 50 sales team members” and only train 10, I’ve still 10x-ed my original goal.

Don’t turn OKRs into a project task list

Do you measure effort or results? Are your OKRs focused on your objective or on the means to get there? As we mentioned before, when used correctly, OKRs define success criteria for a project. OKRs should determine whether a project achieved success. But to do that, OKRs cannot be based on activities for three main reasons:

1) We want a results-focused culture, and not one focused on tasks.

2) If you did all your tasks and nothing improved, that is not a success. Success is improving something: customers are more satisfied, sales are higher, costs have been reduced.

3) Your project is just a series of hypotheses. An idea is just a non-validated hypothesis. In the same way, we don’t know if our project will improve our results or add value to the organization. The project is just a hypothesis so you cannot attach your OKRs to a non-validated bet. See No validation? No Project! for more on this topic.

Nobody works on projects as a hobby. Behind every project is a desire to improve one or more metrics. So, instead of tracking the delivery of a project, we should measure the indicators that motivated it in the first place.

Use Value-based Key Results

There are two basic types of Key Results:

1) Activity-based Key Results: These measure the completion of tasks and activities, or the delivery of project milestones or deliverables. Some examples of Activity-based Key Results are:

- Release a beta version of the product.
- Launch a new feature.
- Create a new training program.
- Develop a new lead generation campaign.
- Write a solution design document.

Activity-based Key Results usually start with verbs such as launch, create, develop, deliver, build, make, implement, define, release, test, prepare and plan.

2) Value-based Key Results: These measure the delivery of value to the organization or its customers. Value-based Key Results measure the outcomes of successful activities. Some examples of Value-based Key Results are:

- Increase Net Promoter Score from X to Y.
- Increase Repurchase Rate from X to Y.
- Maintain Customer Acquisition cost under Y.
- Reduce revenue churn (cancellation) from X% to Y%.
- Improve average weekly visits per active user from X to Y.
- Increase non-paid (organic) traffic from X to Y.
- Improve engagement (users that complete a full profile) from X to Y.

The typical structure of a Value-based Key Result is:

Increase/Reduce Metric M from X to Y                                                     

Where X is the baseline (where we begin) and Y is the target (what we want to achieve).

Using the “from X to Y” model is better than writing a change in percentages because it conveys more information. Compare the two options below:

1) Increase the number of new users by 20%.
2) Increase the number of new users from 4000 to 4800.

Option 1 can be confusing since it’s hard to tell how ambitious the target is. Are we talking about increasing the number of new users from 500 to 600 or 4000 to 4800?
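The “from X to Y” structure also makes progress trivially computable. As a hypothetical sketch (the dataclass and field names are my own, not part of any OKR standard):

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    """A value-based Key Result of the form 'move metric M from X to Y'."""
    metric: str
    baseline: float  # X: where we begin
    target: float    # Y: what we want to achieve

    def progress(self, current: float) -> float:
        """Fraction of the way from baseline to target (can exceed 1.0)."""
        return (current - self.baseline) / (self.target - self.baseline)

# "Increase the number of new users from 4000 to 4800"
kr = KeyResult("new users", baseline=4000, target=4800)
print(round(kr.progress(4400), 2))  # halfway there -> 0.5
```

The same formula works for “reduce” Key Results, because baseline and target simply swap direction.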


When project teams start with Value-based OKRs, it is common for them to get stuck listing activities as Key Results. To convert those activities into value, think about what would be the consequences of being successful with this task. What would be the desired outcomes?

If we are successful with [activity], we will

- Key Result #1
- Key Result #2
- Key Result #3

Example 1

If we are successful with the new campaign, we will

- Increase Net Promoter Score from 29 to 31.
- Reduce churn from 3.2% to 2.7%.

Example 2

If we successfully migrate the platform, we will

- Reduce infrastructure costs from X to Y.
- Maintain availability during the migration at 99.99%.
- Maintain revenue of $X.

OKRs as a communication tool

As you might have guessed by now, effective OKRs are widely shared and meant to be understood by project teams, related teams, and stakeholders. In that regard, they can serve as a communication tool for directing teams to solve complex challenges with constraints.

As a communication tool, OKRs bring two key things to an organization:

1) Easily digestible direction such that every project member/stakeholder in the organization understands how they contribute to the mission; aka focus.

2) Expectations amongst teams and their individual members; aka accountability.

Defining measurable results becomes easier as you learn what you should be measuring and what ultimately matters for your project and business. In my work with large project teams, I find that the quality of OKRs has a good correlation with the understanding of the project. Just going through the exercise of either defining OKRs, or reworking current project plans into OKRs is a highly effective evaluation tool.


Thursday, August 16, 2018

Risk Management Is Project Management for Adults

The title of this article is a quote from Tim Lister, and is a universal principle for the success of any project in the presence of uncertainty. All software development projects are subject to risk and uncertainty because they are unique, constrained, based on assumptions, performed by people and subject to external influences. Risks can affect the outcome of projects either positively or negatively.

“If There’s No Risk On Your Next Project, Don’t Do It” 

Greater risk brings greater reward, especially in software development. A company that runs away from risk will soon find itself lagging behind its more adventurous competitors. But by ignoring the threat of negative outcomes, project managers and executives can drive their organizations into the ground.

Positive Risk

Risk includes both opportunities and threats. Where negative risk implies something unwanted that has the potential to irreparably damage a project, positive risks are opportunities that can affect the project in beneficial ways. You should manage and account for known negative risks to minimize their impact, while positive risks should be managed to take full advantage of them.

There are many examples of positive risks in projects: you could deliver the project early; you could discover that the problem is easier to solve than expected; you could re-use your solution for other problems; you could acquire more customers than you accounted for; a delay in shipping might open up a window for better marketing opportunities; etc. Just be aware that positive risk can quickly turn into negative risk, and vice versa.

Risk or Uncertainty? 

In project management, or more specifically in risk management, many professionals commonly use risk interchangeably with uncertainty. This is incorrect.

“Uncertainty is risk that is immeasurable.” – Frank Knight

Risk has an unknown outcome, but we know what the underlying distribution looks like. Every game in the casino has a known distribution of winning and losing, so you can play and manage the risk, for example by following basic strategy in Blackjack. But while the hand of your new and unknown poker neighbor is a risk, how he plays that hand is an uncertainty. Events of the past are no guarantee for the future.

You cannot manage uncertainty, but you can manage risk. What you can do is reduce the amount of uncertainty, for example by doing a proof of concept or a business case validation for your project.

Risks and Issues

Consider the following circular definition of risk: A risk is an issue that has yet to occur, and an issue is a risk that has already materialized.

Before it happens, a risk is just an abstraction. It’s something that may affect your project, but it also may not. There is a possibility that ignoring it will not come back to bite your ass. Risk management is the process of thinking out corrective actions before an issue occurs. The opposite of risk management is crisis management, trying to figure out what to do about the issue after it happens.

“The opposite of risk management is crisis management” - Tim Lister


While risks may be encountered in an almost infinite variety of forms and intensities, it is most useful to consider two varieties:

- Incremental risks. These include risks that are not significant in themselves but that can accumulate to constitute a major risk. For example, a cost overrun in one subcontract may not in itself constitute a risk to the project budget, but if a number of subcontracts overrun due to random causes or a common cause affecting them all, then there may be a serious risk to the project budget. While individually such risks may not be serious, the problem lies in the combination of a number of them and in the lack of recognition that the cumulative effect is a significant project risk.

- Extreme risks. These include risks that are individually major threats to the success of the project, or even the company as a whole. Their likelihood is typically very low but their impact very large. Examples of such risks are dependence on critical technologies that might or might not prove to work, scale-up of bench-level technologies to full-scale operations, or dependence on single suppliers and employees. And of course aliens, always account for a space attack by aliens...
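The danger of incremental risks, individually harmless overruns combining into a serious one, can be illustrated with a quick simulation (all budgets and overrun ranges below are made up for illustration):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# 10 subcontracts, each budgeted at 100k; each may overrun by up to 10%,
# which is harmless on its own (at most 10k per subcontract).
def total_overrun() -> float:
    return sum(random.uniform(0.0, 0.10) * 100_000 for _ in range(10))

runs = [total_overrun() for _ in range(10_000)]

# Individually no overrun exceeds 10k, but combined they average around 50k,
# and a meaningful fraction of scenarios exceed 70k on a 1M total budget.
print(f"average combined overrun: {sum(runs) / len(runs):,.0f}")
print(f"share of scenarios above 70k: {sum(r > 70_000 for r in runs) / len(runs):.1%}")
```

The cumulative effect, not any single subcontract, is what turns ten small risks into one significant project risk.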


Imagine the moment when something that used to be a risk suddenly becomes an issue. It used to be an abstraction, a mere possibility, and now it is not abstract at all. It has happened. This is the point at which the risk is said to materialize. It is the moment of risk transition.

Transition is a key concept for a project manager—it is the triggering event for whatever is planned to deal with the risk. Well, almost. The actual transition may be invisible to you (for example, your biggest client goes out of business). What you do see is a transition indicator (the client not paying your invoices for a while). For every risk you need to manage, there is some kind of transition indicator. Of course some indicators are more useful than others.

Response Strategies

Depending on whether a risk is positive (an opportunity) or negative (a threat), you have different response strategies available to you: avoid, mitigate, transfer, or accept for threats; exploit, enhance, share, or accept for opportunities.

The reason you care about the above-mentioned transition is that when the indicator fires, you intend to take some action. This is defined in your contingency plan. But much work can, and should, be done before the transition occurs. Buying life insurance after your death is difficult...

Risk Management

So what is risk management? I always explain it as the combined outcome of the five activities below.

1) Risk discovery: your initial risk brainstorm and subsequent triage, plus whatever mechanism you put in place to keep the process going
2) Exposure analysis: quantification of each risk in terms of its probability of materializing and its potential impact
3) Contingency planning: what you expect to do if and when the risk materializes
4) Response planning: steps that must be taken before transition in order to make the planned contingency actions possible and effective when required
5) Ongoing transition monitoring: tracking of managed risks, looking for materialization

The first of these is an overall activity, while all the others are done on a per-risk basis.
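Exposure analysis, for instance, is often reduced to probability times impact. A minimal sketch of such a ranking (the risk register entries and numbers are illustrative assumptions, not a prescribed method):

```python
# Hypothetical risk register: (name, probability of materializing, impact in $)
risks = [
    ("Key developer leaves", 0.3, 200_000),
    ("Supplier delivers late", 0.5, 50_000),
    ("Data migration fails", 0.1, 500_000),
]

# Exposure = probability x impact; rank risks to focus contingency planning
by_exposure = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, impact in by_exposure:
    print(f"{name}: expected exposure ${p * impact:,.0f}")
```

Note how the ranking differs from sorting by raw impact alone: the low-probability migration failure ends up below the moderately likely loss of a key developer.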

Risk management is something that most of us practice all the time—everywhere except the office. In our personal lives, we face up to such risks as sickness and early death. We mitigate by buying life and health insurance and by making arrangements for who will look out for the kids if something bad happens. We don’t pretend to be immortal or that our earning capacity can never be harmed by bad luck. Each time we take on a new responsibility—say, a mortgage—we go over all the awful things that we hope won’t happen and force ourselves to think, what if they do?

You should do the same for your software development project.


Wednesday, August 08, 2018

Why do technology projects fail so often and so spectacularly? My personal top 10 reasons

Why do IT projects fail?

It was to be a great digital leap for Germany’s biggest discount grocer. Instead, after seven years and €500 million, Lidl’s new inventory management system with SAP is dead on arrival. Now everybody is asking why.

Big technology projects fail at an astonishing rate. Whether major technology implementations, postmerger integrations, or new growth strategies, these efforts consume tremendous resources over months or even years. Yet, as study after study has shown, they frequently deliver disappointing returns—by some estimates, in fact, well over half the time. These reports show that 25 percent of technology projects fail outright; that 20 to 25 percent don’t show any return on investment; and that as many as 50 percent need massive reworking by the time they’re finished.

And the toll these failed projects take is not just financial. These defeats demoralize employees who have labored diligently to complete their share of the work. Reputations are lost, and legal issues arise.

But the question is why. Why do so many technology projects fail—and fail so spectacularly? From my experience, it’s usually not technology problems that derail technology projects. I am of the opinion that most technology project failures can be attributed to poor management, while only a small percentage are due to technological problems. Reports seem to support my theory.

Below you will find the reasons for project failure that I’ve encountered most in my work as a project recovery consultant.

1. Poorly defined (or undefined) done.

Project failure starts when we can’t tell what “done” looks like in any meaningful way. Without some agreement on our vision of “done,” we’ll never recognize it when it arrives, except when we’ve run out of time or money or both. Without a clear and concise description of done, the only measures of progress are the passage of time, consumption of resources, and production of technical features. These measures of progress fail to describe what business capabilities our project needs to produce or what mission we are trying to accomplish. Capabilities drive requirements. Therefore, without first identifying the needed capabilities, we cannot deliver a successful project, and it will end up a statistic, like all the other failed projects.

2. Poorly defined (or undefined) success.

Besides not having defined what done is, one of the more common problems I see with IT projects is an ill-defined goal or definition of success.

A project can only be successful if the success criteria are defined, ideally upfront. Unfortunately, I have seen many projects that skipped this part completely. When starting on a project, it's essential to work actively with the organization that owns the project to define success across three levels:

1) Project delivery
2) Product or service
3) Business

A company will say they want to improve customer service, for example, but no one ever bothers to say what that looks like. Shorter call times? Fewer calls? Higher customer satisfaction? How will you know when you’ve succeeded? If you don’t know, you’re doomed to fail.

Related: When is my project a success?

3. Lack of leadership and accountability.

Too often, technology projects are deemed “IT” projects and relegated to the IT department, regardless of what the project actually is. But for any project to work, it needs strong leadership from the top down. If a project doesn’t have buy-in and support from C-level executives as well as specific department leaders, it’s difficult to get employees on board and hard to know who is in charge when leadership questions arise.

The moment projects are dubbed “IT projects” and left to the IT department, a lack of accountability can also develop. Executives may wrongly believe that they can’t understand what’s happening, and leave it to the tech guys to figure out. This is a mistake. If your tech team can’t adequately explain what’s happening on the project or why it’s needed, that’s a huge red flag. And if the executives aren’t driving the project and holding the team accountable, it can easily spiral out of control.

Related: How to establish an effective Steering Committee (and not a Project Governance Board)

4. No plan or timeline.

How can we get to “done” (see above) on time and on budget and achieve acceptable outcomes? We need a plan to get to where we are going, to reach done. This plan can be simple or complex. The fidelity of the plan depends on our tolerance for risk. The complexity of the plan has to match the complexity of the project.

Without a clear timeline and plan with milestones, any project (but technology projects in particular) can wander off the original path and meander through many detours and dead ends. A clear plan and someone to keep track of it is vital for keeping these projects moving forward.

Also, the famous watermelon reporting (green from the outside and bright red from the inside) is far more likely to happen when there is no clear plan and measure of progress.

“Plans are useless, but planning is indispensable.” – Dwight D. Eisenhower

5. Insufficient communication.

As mentioned above, someone on the tech team needs to be able to explain the project details regularly to the “non-tech” executives and other stakeholders. It’s vital for someone on the team to have strong visualization and storytelling skills in order to communicate clearly and regularly what’s happening with the project.

Related: Outsourcing technical competence? and 10 Principles of Stakeholder Engagement

6. Lack of user and performance testing, or failure to address feedback.

The thing about technology projects is that ultimately, they’re made for people, not machines. A lack of real-world user testing before launch is a common problem. The software engineers, solution architects and business analysts think they know what users want, but users may have an entirely different set of needs and problems. Once user testing is conducted, the project has to prioritize addressing the feedback, or the end user won’t be happy—and ultimately won’t use the technology created for them.

On a similar note, it is essential to test under expected load very early. Even the best system won’t be used, or will be very ineffective, if it is just too slow.

Related: Start your project with a Walking Skeleton and It's never too early to think about performance

7. Solving the wrong problem.

I’ve seen this time and time again with IT projects: companies think they’re creating something to address the problem, but it turns out they’re addressing the wrong problem. In our customer service example, if the company decides that shorter call times is the metric for improved customer service, employees become incentivized to get off the phone as quickly as possible, which may or may not actually improve customer service. Yes, call time decreases, but customers may be even less satisfied than before.

“We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.” – Russell L. Ackoff

Related: Understanding your problem is half the solution (actually the most important half)

8. Trying to adapt standard software to business processes instead of the other way around.

Adjusting standard software to quirky business processes is a recipe for disaster. Going back to the Lidl SAP project, apparently one of the biggest problems was a “but this is how we always do it” mentality at Lidl. Changing the software necessitated reassessing almost every process at the company, insiders say. But Lidl’s management was not prepared to do that.

Unlike many of their competitors, Lidl bases its inventory management system on purchase prices. The standard SAP for Retail software uses retail prices. Lidl didn’t want to change, so the software had to be adapted. Many more accommodations had to be made, and the more changes there were to the code, the more complex and more susceptible to failure the Lidl software became.

Performance fell, and costs rose. Altering existing software is like changing a prefab house—you can put the kitchen cupboards in a different place, but when you start moving the walls, there’s no stability.

“Something I learned in ERP for dummies: Don't customize software to your process unless the process is your competitive advantage.”

9. Continuing to pursue bad ideas.

In Hollywood, they say it’s easy to make a bad film from a good script but impossible to make a good film from a bad script. Though you won’t always recognize a bad idea straightaway, once you do, never assume that you’ll make it work or believe that you’ve put in too much effort to change course. The sunk cost fallacy is real.

For large or high-risk projects (what is large depends on your organization) it should be mandatory to do business case validation before you dive headfirst into executing the project. The minute you recognize that an idea won’t work, you have to pull the plug. It’s usually impossible to fix a bad idea, and you’ll only waste time, money and energy trying to put lipstick on a pig.

Related: No validation? No project!

10. No real decisions and death by committee.

A project often has multiple parties interested in its outcome and groups may even have divergent goals and expectations. I once spent months working with multiple enterprise and solution architects on a new data analysis and modelling platform, only to have our carefully crafted design dismantled by the company’s various heads of department demanding changes to meet their individual needs. Our problem was that we failed to establish who really owned the project and therefore didn’t deal with potential conflicts and disappointments in advance.

Feedback and governance are an essential part of any project. But decision-by-committee rarely leads to the best outcome. Every project should start by establishing clear, workable goals and giving one person the ultimate ownership and accountability for meeting them.

Related: Many decisions are no decisions (and this makes projects difficult)


Wednesday, August 01, 2018

10 Benefits of Agile Project Portfolio Management

Agile Project Portfolio Management (APPM) is a combined system of tools, methodologies, and processes to support your organization in reacting to change with the speed that is necessary for your organization to thrive. It helps you to do the right projects and to do projects right.

APPM does not involve making project-by-project choices based on fixed acceptance criteria. Instead, decisions to add or subtract projects from the portfolio are based on the four goals of APPM:

1) Maximize the value of the portfolio.
2) Seek the right balance of projects, thus achieving a balanced portfolio.
3) Create a strong link to strategy, thus the need to build strategy into the portfolio.
4) Do the right number of projects.

APPM is described in detail in “The Agile Project Portfolio Management Framework Guide” that you can download for free here.

But how does this help your organization? Why does it make sense to implement this? Below you will find ten clearly measurable benefits that you can expect when implementing APPM in your organization. Let’s explore each of these benefits in detail.

1. More insightful decision-making

The first benefit of APPM concerns its ability to drive better business decisions. To make good decisions you need good data, and that’s why visibility is so crucial, both from a strategic, top-down perspective and from a tactical, bottom-up perspective.

Anything that can be measured can be improved. However, organizations don’t always do sufficient monitoring. Few organizations actually track project and portfolio performance against benchmarks. Worse, strategic multiyear initiatives are the least likely to be tracked in a quantitative, objective manner. For smaller organizations, the absence of such a process might be understandable, but for a large organization, tracking is a must.

Not monitoring project results creates a vicious circle: If results are not tracked, then how can the portfolio management and strategic planning process have credibility? It is likely that it doesn’t, and over time, the risk is that estimates are used more as a means of making a project appear worthy of funding than as a mechanism for robust estimation of future results. Without tracking, there is no mechanism to make sure initial estimates of costs and benefits are realistic.

When you have a good handle on past project metrics, it makes it much easier to predict future factors like complexity, duration, risks, expected value, etc. And when you have a good handle on what is happening in your current project portfolio, you can find out which projects are not contributing to your strategy, hindering other more important projects, or not contributing enough value.

Reviewing the data of your project portfolio gives you project history that reflects the symbiotic relationship between money, time, people, value, and projects. Informed decision-making forces you to draw findings based on organization-wide benefits rather than personal interest, contributing to overall portfolio success.

High-value projects are those that align strategically with the organization’s long-term objectives. Effective portfolio management collects information from those project initiatives that performed well in the past and successfully delivered business value. It explores the probability of similar projects flowing in the pipeline, preparing your people to obtain the appropriate briefing and training beforehand.

Your organization is only as good as the data you have. The longer you rely on outdated or irrelevant information, the more you just guess and operate in the dark. APPM gives you a reporting and monitoring strategy that will help you get the insights you need. See “Agile Project Portfolio Management? How to monitor your portfolio” for more details.

2. Better Risk Management

It is important for organizations to create portfolios that reduce risks, but at the same time, it is necessary to take enough risks to move forward and stay competitive. You must target a point on the scale between playing it so safe that you never reach your full potential, and taking too much risk and losing everything.

Where this point is depends on your appetite for risk, the stage of your organization, your industry, and many other factors that you know better than I do. After you have decided on this point, your project portfolio needs to be balanced in such a way that the combined set of projects have the risk profile and upside potential you want.

APPM helps you create this balance by making the risk vs. value balance visible, transparent, and part of the decision-making process. See “Agile Project Portfolio Management: How to evaluate your portfolio” for more details.
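One lightweight way to make the risk vs. value balance visible is to bucket the portfolio on both axes. A sketch, where the 1–10 scoring scale, the threshold, and the project data are all invented for illustration:

```python
# Hypothetical portfolio: (project, value score 1-10, risk score 1-10)
portfolio = [
    ("CRM replacement", 8, 7),
    ("Website refresh", 4, 2),
    ("New market pilot", 9, 9),
    ("Compliance update", 6, 3),
]

# A simple 2x2 view: high/low value vs. high/low risk (threshold 6 is arbitrary)
def bucket(value: int, risk: int) -> str:
    v = "high-value" if value >= 6 else "low-value"
    r = "high-risk" if risk >= 6 else "low-risk"
    return f"{v}/{r}"

for name, value, risk in portfolio:
    print(f"{name}: {bucket(value, risk)}")
```

Counting projects per bucket immediately shows whether the combined set leans toward playing it too safe or betting everything on high-risk initiatives.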

3. Optimized (NOT maximized) resource utilization by doing the right number of projects

Traditional project portfolio management is all about value optimization and maximizing resource allocation. Both are designed in such a way that, in my opinion, they will result in the opposite. As we have seen time and again, running projects at hundred-percent utilization is an economic disaster. Any small amount of unplanned work will cause delays, which become even worse because of time spent on re-planning, and value is only created when it is delivered, not when it is planned. Hence, we should focus on delivering value as quickly as possible within our given constraints.
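Queueing theory makes the hundred-percent-utilization problem concrete. In a simple M/M/1 model (a single server with random arrivals and service times, a simplification I am assuming here, not part of APPM itself), the average time work spends waiting grows as ρ/(1−ρ), where ρ is utilization:

```python
def relative_wait(utilization: float) -> float:
    """M/M/1 queue: average time waiting in queue, in multiples of service time."""
    rho = utilization
    return rho / (1.0 - rho)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.0%}: work waits {relative_wait(u):.0f}x its service time")
```

Waiting grows nonlinearly: going from 50% to 90% utilization multiplies queue time ninefold, and near 100% it explodes, which is why fully loaded teams deliver value so slowly.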

This brings us to lean thinking. Lean is sometimes described as focusing on smaller batch (project) size, shorter queues, and faster cycle time. The focus is on delivering value quickly. In truth, lean is much more than this. The pillars are respect for people and continuous improvement resting on a foundation of manager-teachers in lean thinking. Queue management is just a tool that is far from the essence of lean thinking. But it is a very helpful tool that can have a very positive impact on your project portfolio management process.

APPM gives you a framework and guidelines to help you do this.  See “Doing the right number of projects” for more details.

4. Alignment with the strategy of the organization 

One of the biggest mistakes organizations make is not linking projects with strategic goals. Many organizations have a well-defined and well-scoped strategic process. This can be augmented by better and broader idea capture to provide supportive tactics, but the execution of it is the critical challenge. Indeed, as is widely recognized, weakness in execution, not weakness in strategy, is a primary reason for organizational failure. Knowing this, it is important to link the strategic theory governing the organization to the practice of project management. Without this linkage, either the project portfolio is blind to the needs of the organization or the strategic goals are empty, with no support at the execution level. It is clear that this is an area that organizations must get right for long-term success.

Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat. – Sun Tzu

APPM makes this linkage part of every step of the process. See “Agile Project Portfolio Management: How to categorize your project backlog” and “Agile Project Portfolio Management: How to evaluate your portfolio” for more details.

5. Increased project delivery success

One of the best ways to demonstrate the value of APPM is to show how it creates an environment that leads to repeatable and predictable project success. While not discounting the skills of the Portfolio Team, the essence of an effective APPM is providing a process framework and technology infrastructure that allows you to continuously meet your organization’s objectives. Repeatable success is gained by establishing best practices and proven project management methodologies and enforcing their use throughout the organization.

APPM consists of methods that factor in the scale, complexity, duration, and deliverables of a project. With an effective APPM strategy, you can leverage the processes and lessons learned from previous projects. A central repository of historical and real-time data helps you prioritize projects, preventing them from being wrecked by ‘guesstimations.’ This way you can be a proactive organization, not a reactive one.

Unsuccessful project delivery leads to project failure. Project failure can be caused by many factors such as cost overruns, schedule delays, poorly defined requirements, mismanaged resources, lack of strategy alignment, unresolved issues, or technical limitations. APPM allows organizations to ensure these risk factors are transparent and minimized within project delivery.

While it is easy to see how your projects perform in the present, what matters more is ensuring this success repeats itself in the future. Aggregating your projects gives you a consolidated view, which allows demand to be captured so you can evaluate and prioritize your projects. Roles and responsibilities can then be matched to the workload, which structures your workflow.

Repeatable success is achievable with a framework that sets boundaries to tightly control the project. It mandates the usage and effectiveness of technical infrastructure and establishes practices that improve governance. But repeatable success only equals progress toward your organization’s objectives when the projects were aligned with those objectives in the first place; many projects fail to deliver benefits even when they are executed successfully.

6. Faster project turnaround times

Too many organizations try to save money on projects (cost-efficiency) when the benefits of completing the project earlier far outweigh the potential cost savings. You might, for example, be able to complete a project with perfect resource management (all staff is busy) in 12 months for $1 million. Alternatively, you could hire some extra people and have them sitting around occasionally at a total cost of $1.5 million, but the project would be completed in only six months.

What's that six-month difference worth? Well, if the project is strategic in nature, it could be worth everything. It could mean being first to market with a new product or possessing a required capability for an upcoming bid that you don't even know about yet. It could mean impressing the heck out of some skeptical new client or being prepared for an external audit. There are many scenarios where the benefits outweigh the cost savings (see "Cost of delay" for more details).
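The trade-off above can be worked through with a bit of arithmetic. The costs and durations come from the example; the $200k-per-month benefit figure and the 24-month horizon are assumptions purely for illustration.

```python
# A worked version of the fast-vs-cheap trade-off described above.
# Costs and durations come from the article's example; the monthly
# benefit and the planning horizon are illustrative assumptions.

def net_benefit(cost, duration_months, monthly_benefit, horizon_months=24):
    """Net benefit over a fixed horizon: benefits only start once the project ends."""
    benefit_months = max(0, horizon_months - duration_months)
    return benefit_months * monthly_benefit - cost

slow = net_benefit(cost=1_000_000, duration_months=12, monthly_benefit=200_000)
fast = net_benefit(cost=1_500_000, duration_months=6, monthly_benefit=200_000)
# slow: 12 months of benefit * $200k - $1.0M = $1.4M
# fast: 18 months of benefit * $200k - $1.5M = $2.1M
```

Under these assumptions the "expensive" fast option comes out $700k ahead, despite costing 50 percent more to deliver, because the benefits start flowing six months earlier.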

On top of delivering the project faster, when you are done after six months instead of 12 you can use the existing team for a different project, delivering even more benefits for your organization. So not only do you get the benefits of your original project sooner (and for longer), you also get the benefits of your next project sooner, because it starts earlier and is staffed with an experienced team.

An important goal of your project portfolio management strategy should be to have a high throughput. It’s vital to get projects delivered fast so you start reaping your benefits, and your organization is freed up for new projects to deliver additional benefits.

There are many reasons why APPM can reduce project turnaround times by an average of 10 percent. Lean governance, workflow, and standardization tend to reflect repeatable, proven processes. The defined processes, aligned with APPM technology, allow team members to keep the work flowing and typically increase productivity because they answer two important questions: “What do I stop doing?” and “What do I do next?” As we all know, strategically aligned projects should always result in value for the organization. With shorter time to market, this value can be realized sooner and in many cases can give organizations a head start on their competition. See “Project portfolio throughput: Faster is better” for more details.

7. Maximized organization impact

You will maximize the impact of your projects for your organization when you follow the process of APPM. Since APPM is built around the four goals below, you will do the right projects—namely, the ones that push your strategy forward and deliver actual value whilst fitting your appetite for risk.

1) Maximize the value of the portfolio.
2) Seek the right balance of projects, thus achieving a balanced portfolio.
3) Create a strong link to strategy, thus the need to build strategy into the portfolio.
4) Do the right number of projects.

See “Agile Project Portfolio Management: How to evaluate your portfolio” for more details.

8. Reduced sunk costs

For large or high-risk projects (what is large depends on your organization) it should be mandatory to do business case validation before you dive head-first into executing the project. In the Project Portfolio Funnel of the Agile Project Portfolio Management Framework you will see a phase called "Validation" after selection of a project has taken place. In this phase you typically have a business case validation and/or a technical validation in the form of a proof of concept.

Project Portfolio Management is about doing the right projects. In order to help with this, Agile Project Portfolio Management has adapted the lean product validation from Jon Lay and Zsolt Kocsmárszky (https://leanvalidation.hanno.co/) as an obligatory step in the process.

We will distinguish between four different kinds of validations. Depending on the project that you want to validate the business case for, you do only one, two, three, or all four of them. When your project is about launching a new product or service I would advise you to do all four validations before executing the project. The results of these validations form your business case validation. Or the opposite, of course; they can also show you that you should not do this project.

A. Validate the problem: Is this a problem worth solving? If users don’t think this is a major problem, your solution won’t be appealing.

B. Validate the market: Some users might agree that this is a problem worth solving. But are there enough of them to make up a market for your product or service?

C. Validate the product/service/solution: The problem might exist, but does your product/service/solution actually solve it?

D. Validate the willingness to pay: There might be market demand and a great product or service. But will people actually be willing to reach into their wallets and pay for it?

As you can see, these validations focus on the introduction of a new product or service for your clients. But you can easily reframe them for all of your projects. Your market can be your employees instead of customers. For example, when you are thinking about implementing a new CRM, but only a small (if vocal) number of users see the benefit of it, and most are actually quite happy with what they currently have, your market is too small. Or how about the accounting department’s willingness to pay for a new solution that triples the yearly operational costs? See “No validation? No project!” for more details.

9. Transparency

Agile Project Portfolio Management relies on transparency. Decisions to optimize value and control risk are made based on the perceived state of the artifacts. When transparency is complete, these decisions have a sound basis. When artifacts are incompletely transparent, these decisions can be flawed, value may diminish, and risk may increase.
See “Agile Project Portfolio Management: How to monitor your portfolio” for more details.

10. More and better ideas

Although there is clearly no shortage of ideas within organizations, most organizations unfortunately seldom capture these ideas, except in the few cases where a handful of employees are sufficiently entrepreneurial to drive their own ideas through to implementation. This can happen in spite of the organization, rather than because of it.

Organizations are effective at focusing employees on their daily tasks, roles, and responsibilities. However, they are far less effective at capturing the other output of that process: the ideas and observations that result from it. It is important to remember that these ideas can be more valuable than an employee’s routine work. Putting in an effective process for capturing ideas provides an opportunity for organizations to leverage a resource they already have, already pay for, but fail to capture the full benefit of—namely, employee creativity.

To assume that the best ideas will somehow rise to the top, without formal means to capture them in the first place, is too optimistic.

Providing a simplified, streamlined process for idea submission can increase project proposals and result in a better portfolio of projects. Simplification is not about reducing the quality of ideas, but about reducing the bureaucracy associated with producing them. Simplification is not easy, as it involves defining what is really needed before further due diligence is conducted on the project. It also means making the submission process easy to follow and locate, and driving awareness of it. Agile Project Portfolio Management defines exactly such a process. See “Agile Project Portfolio Management: Demand management” for more details.

Workshop Agile Project Portfolio Management

If you want to learn more about Agile Project Portfolio Management, join one of our workshops on this topic.


Friday, July 27, 2018

How to establish an effective Steering Committee (and not a Project Governance Board)

Without exception, every large project I have been involved with over the past 15 years had a Steering Committee (SC). Broadly speaking, a SC is a group of high-level stakeholders who provide strategic direction for a project, provide governance, and support the Project Manager. Ideally, SCs increase the chances of project success by closely aligning project goals to organizational goals. However, this is unfortunately not always guaranteed.

I have performed project recovery and project management where I was required to report to a SC. I have coached and consulted individual SC members and Project Sponsors on their role, and have even been a voting SC member myself. I have undertaken project reviews ordered by a SC, and by individual members. I have spent considerable time on very large strategic software development and financial modeling projects where I was required to engage with large and diverse SCs, many containing C-level executives. So I believe it is safe to say that I have some experience with SCs and their members.

Many SCs I have worked with were very effective and added tremendous value to the project; some of them were the reason a project got into trouble; some never noticed that a project was in trouble until it was too late; and the large majority were somewhere in between. Note that my experience is biased, because I primarily work with troubled projects for a living.

In this article I will provide my perspective on the definition of a SC, what it should aim to accomplish, who should be part of it, and present my lessons learned in the form of 16 tips on how you can establish a SC that adds value to your project and your organization.

So what exactly is a SC?

The "steering committee" of a project can be described as a "governing device" used to organize key project stakeholders and empower them to "steer" a project (or group of projects) to successful outcomes. So what is ‘steering’? Steering is not managing. Managing seeks to get the job done; steering determines what the job is. The SC members guide the business; the project team and the Project Manager do not. They can direct, control and manage the required changes in the business; the team can only define, plan and support them. If the business isn’t prepared for the project (and the team has done their part) then the steering committee has failed.

Few SC members realize this, but their function is to serve as a resource for the project team, and in particular the Project Manager. They need to be searching for what has to be done to ensure the success of the project, determining what challenges exist, and revealing what other business or external events need to be taken into account and managed. You don’t just want them to turn up once a month, ask a few questions, and engage in meaningless discussion.

Most SCs believe their role is to ‘control the project’. The recent use of the title ‘Project Control Board’ for SCs emphasizes this lack of understanding of the SC’s role. Undoubtedly there is an element of ‘project control’, and yes, there is an element of board-like governance (are you complying with the necessary standards and policies?), but these are minor aspects of the role.

Where the steering committee adds value is by clearing obstacles from the project’s pathway to success. This requires taking action. Many SCs don’t realize this is their critical function, and often they are not prepared to adopt it until enlightened (and, often, pushed).

What is the job of your SC?

As pointed out above, the SC’s general job description is "to steer a project to a successful conclusion through deliberation, decision making, support and action".

This doesn't necessarily mean that every SC’s specific job description is automatically the same. Quite the opposite: specifics can vary greatly based on the following key factors:

Scope: Will the SC have jurisdiction over a single project or a group of projects (program or project portfolio)? This article addresses only project SCs, but much of it is applicable to portfolio and/or program committees as well. I have written extensively about project portfolio management in other articles.

Authority: Will the SC serve as the ultimate authority on the project, or will it advise the ultimate decision-making authority (i.e. the project executive or sponsor)?

Difficulty: What is the degree of project difficulty? When the project has a higher degree of complexity, visibility, sensitivity, cost and risk, the job difficulty increases in direct proportion, which ultimately places a greater burden on the SC members and exposes SC operations to increased scrutiny. Job difficulty goes a long way in determining how a given SC will be organized, who will be appointed, and how it will operate in order to reach the expected results.

Deliverables: What will the SC produce? After all, that's the reason for forming a SC - to produce all the results (analysis, decisions, directives and opinions) needed to support and "steer" a successful project.

These are the factors that will drive job specifics. No steering committee can be expected to function properly without a clear description of job requirements. That's why defining the job is the first and most important action for SC success.

Who is in your SC?

With the right people in the room, your SC can almost guarantee project success. But how do you select the right people? Picking experienced leaders and subject matter experts is beneficial, but it is not always good enough.

Most often, the project sponsor, senior management, key stakeholders and high-level permanent representatives of clients and suppliers are members of a SC. Within the project governance structure, the members of the SC are strategically positioned to effectively promote the goals of their respective organizations.

This sounds good in theory, but SC members are usually chosen for the areas they represent, in a checklist style. It may sound counterintuitive, but having balanced representation can be problematic. Members come to the table with perspectives that are influenced by their vested interests. For example, during a restructuring exercise, the SC can disintegrate into winners and losers: those who got their way and those who did not. In the end, the ultimate loser is the organization.

You can tackle this challenge by involving non-biased members. Identify members who do not have a vested interest in particular outcomes of the project itself. This could be a finance person for a technology project or a product manager in an HR project. A neutral member will not get lost in the details and will not push their own agenda. They can contribute by challenging the biases of others and by helping ensure balanced participation and results.

Some projects are so controversial and political that the SC cannot function objectively. Bringing in professional facilitators adds a critical dimension of neutrality and a focus on achieving objectives.

In my opinion, SCs benefit from a mix of executive leadership and practitioners. This creates a balance between hands-on experience and people who are agents of change. You need people in the SC who are in a position to bring about organizational change. Such a mix will often result in members having unequal levels of power within their own organizations. While some are mid-level managers, others may be top executives, which results in an imbalance in decision-making abilities. I will address this later in the article when discussing the rules of engagement.

Having the optimal membership of the SC is critical to project success. Potential members should:

- Have a known vested interest in making the project and the SC a success.
- Be willing to participate as a SC member and agree to the SC’s job and expectations.
- Have the authority to make decisions on behalf of the organization they represent.
- Be willing and able to work with the other SC members.
- Be able to perform the work in a timely fashion.
- Have a clear line of authority over the project team.

Importantly, the Project Manager is NOT a member of the SC but is essentially “contracted” by the SC to ensure project success as agreed. The Project Manager takes part in SC meetings but should not participate in decision-making; the Project Manager’s role is to update members on the project’s progress, areas of concern, current issues, and options for addressing these issues.

What is the optimal size of the SC?

Ideally a SC is made up of four to seven people, but it can be larger in order to obtain buy-in from all concerned areas of the organization.

A small group, comprised of senior people, can make strategic decisions, give strategic advice, and also give the project influence among the intended users. However, a small group may not represent the necessary breadth of experiences and perspectives needed to ensure success. Moreover, busy senior executives and experts may not be able to give enough time and thought for the tasks at hand.

A larger group (say, up to ten) is manageable when the meetings are well organized and structured. A large group can obviously include a greater range of members, thus tapping into a wider range of experience. However, a larger group can sometimes lose its effectiveness because of its sheer size, and meetings are even more difficult to schedule.

Establishing the Rules of Engagement

Put a number of people in a room, call them the SC, vaguely define their job, and leave them on their own to figure out what it all means and how to get the job done. They might produce results for a while, but sooner or later problems will appear. Perhaps not everyone heard the same message. Perhaps people will struggle to gain control. Perhaps changing circumstances will throw everyone a curveball. These are the types of risks that diminish productivity and can complicate results.

For some, believe it or not, a SC is treated like a stage. Power plays are made behind the scenes. Deals are cut before meetings are held. You can almost see the strings being pulled in the meetings. Suddenly, SC meetings begin to get cancelled, and members become disengaged. Solutions lack vital information and perspectives. The puppets don’t enjoy having their strings pulled and simply walk away from the SC.

This can be avoided when the job (see above) is presented as a roadmap in the form of a documented "SC Charter". The Charter specifies how the SC will be organized and how it will operate, all from a procedural and process point of view. This is a great approach to improve productivity, save time, minimize conflict and set expectations. I recommend that SC members agree, at a minimum, on the following four things:

- How decisions are made
- How participation will be managed
- What happens when there is disagreement / conflict
- What happens between SC meetings

Normally, the members of a SC are selected because they occupy positions in an organization where the ability and authority to make strategic decisions is a given. However, it must also be recognized that regardless of the make-up of the SC, or the position of its members in the organization, it is in most cases not intended to be a “voting democracy”.

As discussed above under the job of your SC, the authority your SC has makes a huge difference in how the rules of engagement are invoked. In theory it would be optimal for the SC to have decision-making authority according to a pre-defined decision-making process. In reality, a SC often exists as a group of individuals who should share a common purpose but whose opinions and agendas may not always be completely aligned.

Therefore, it makes sense to give the final decision power to the SC chair when there is disagreement. It is then, of course, essential that the chair of the SC be an individual with the actual authority and empowerment to make such decisions as may be necessary in the best interests of the organization and the project.

The chairing of the SC is most often done by the Project Sponsor, who should have been selected for those very qualities. In my experience, from time to time the Sponsor, as chair of the SC, will be required to make decisions that run counter to the view of some (or even all) of the other SC members.

16 Tips for a highly effective Steering Committee

Tip #1 Your SC should focus on collaboration, cooperation and communication.

At the end of the day, SC members are just individuals who are appointed to do a difficult (and often thankless) job. The job is made much easier if the surrounding work environment is consistently positive, where every voice is heard, opinions are respected, information is shared, and common sense prevails. This occurs when SC (and project) leadership acts to promote member collaboration, cooperation and communication. Create trust.

Tip #2 You should plan your SC meetings ahead and decide upfront how you handle decision making between meetings.

Simply scheduling a SC meeting can be a problem, as each representative has his or her own priorities and a busy agenda. If the Project Manager needs input before going ahead with an important change, he or she may not be able to get it in time if a SC meeting is required and it can’t be scheduled immediately. Even with the best intentions, the SC might slow the project to a halt due to slow decision-making or excessive analysis. For this reason, it makes sense to plan the meetings a long time ahead; this is where the monthly SC meeting comes from. But this alone is not really helpful. Imagine having an issue a week after your last SC meeting. You have to wait three weeks for the next meeting, and then in that meeting no decision is made because some or all members want additional information or analysis. Suddenly you are waiting for up to eight weeks from issue to decision. Hence you have to discuss upfront how you will handle these situations.

Tip #3 Your SC time should not be spent walking through a status report.

The precious time available during the monthly SC meetings is often occupied by the Project Manager presenting a progress report. This is utterly useless, since it can be done far more efficiently through email, without the demand on the time (and the associated cost and lost productivity) of senior managers and the Project Manager. Address SC members’ need for regular, timely information with a monthly report that is a little more than just a snapshot of the previous month’s performance against known targets. Besides this snapshot, you should add changes in the RAID lists and a forecast of “highlights” anticipated in the month to come (proposed completions, benefits delivery, transitions to operations, handovers to customers, etc.). Provide this information by email and only discuss it during SC meetings when somebody has a question.

Tip #4 Show instead of tell.

A demo of what the project has been building, or an example of what is not working, is so much more powerful than just words. Use this strategy as often as possible. It promotes a better understanding of and awareness for the project and its issues. What I have done at different companies, with great results, is to have one regular SC meeting and then a SC meeting in the form of a “Sprint Review”, where the project team shows the SC members what they have done and the SC members can ask the whole team questions rather than just the Project Manager. These meetings are very helpful for both sides.

Tip #5 Be honest and transparent.

It is a shame this should even be a tip, but more times than I would like to remember I have worked with project teams that want to keep issues and challenges away from the SC because they think it would make them look incompetent or endanger the career of the Project Manager. We have all seen the watermelon reporting tactic: green on the outside and bright red on the inside. This way of handling organizational challenges causes so much trouble for everybody involved. Don’t create drama about small things, but flag issues when they arise, not when it is too late to react. Tell it as it is.

Tip #6 Make decisions, real decisions.

A decision has NOT been made until people know:
- The name of the person accountable for carrying it out;
- The deadline;
- The names of the people who will be affected by the decision, who must be made aware of the consequences, understand the issue, and approve it—or at least not be strongly opposed to it; and
- The names of the people who have to be informed of the decision, even if they are not directly affected by it.
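A decision record that enforces these four elements can be sketched as a small data structure. The field names and the example decision are purely illustrative.

```python
# Hypothetical sketch of a decision record capturing the four elements
# a "real decision" needs. Field names and sample data are invented.

from dataclasses import dataclass, field

@dataclass
class Decision:
    description: str
    accountable: str                                  # who carries it out
    deadline: str                                     # when it is due
    affected: list = field(default_factory=list)      # must understand and approve
    informed: list = field(default_factory=list)      # must be told of the decision

    def is_complete(self) -> bool:
        """A decision is only 'made' once all four elements are filled in."""
        return bool(self.accountable and self.deadline
                    and self.affected and self.informed)

decision = Decision(
    description="Switch CRM vendor for the rollout",
    accountable="Project Sponsor",
    deadline="2018-10-01",
    affected=["Sales", "Support"],
    informed=["Finance"],
)
```

Something as simple as refusing to record a decision until `is_complete()` holds keeps the SC honest about whether it has actually decided anything.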

Tip #7 Your SC should not manage the project.

The more significant a project is to an organization, the more vital it is that the SC actively supports the Project Manager. Paradoxically, it is this role which is most often misunderstood or simply overlooked in many organizations. Too often, because of the importance of a project, the SC seeks a degree of control which should reside in the hands of the Project Manager, with the result that “micro-management” occurs, often manifested through the “monthly meeting”. SCs should steer, not manage.

Tip #8 Your Project Sponsor and Project Manager need to be able to work with each other.

It is critical to ensure a good working relationship between the Project Manager and Sponsor – some conflict or difference of opinion is healthy but if the two roles are constantly at odds, the project will suffer. Before finalizing decisions on either role, the current working relationship between the two should be evaluated to increase the likelihood of success.

Tip #9 Your SC should organize the necessary resources.

SC members should help a project by providing active support to ensure that resources are made available as required, especially in a “matrix” organization where key people reside within functions and are only “loaned” to projects. SC members in charge of such functions should use their position and influence to help the Project Manager overcome the many obstacles that the matrix approach creates, e.g. where a conflict arises between project and functional priorities, it is usually the function that prevails. It is also not uncommon for resources to be withdrawn or reallocated at short (or no) notice.

I am of the opinion that when a project is of major strategic importance to the organization, key people should be withdrawn from the functions and dedicated to the project for the duration required. Such a proposal often meets with considerable resistance, which SC members should seek to overcome if the project is deemed to be of greater significance than the function affected – at least in the short term, but perhaps even in the longer term.

Tip #10 Your SC should establish how project success will be defined and measured.

A project can only be successful if the criteria for quantifying success are clearly defined. And this should ideally occur upfront. Unfortunately, I have seen many projects that skipped this part completely. When starting on a project, it's essential to work actively with the SC to define success across three levels:

1) Project delivery
2) Product or service
3) Business

Project delivery success: Project delivery success is about defining the criteria by which the process of delivering the project is successful. Essentially this addresses the classic triangle of "scope, time, and budget" (plus quality). It is limited to the duration of the project, and success can be measured when the project is officially completed (with intermediary measures being taken, of course, as part of project control processes). Besides the typical project delivery KPIs, you can also look at KPIs like overtime, project member satisfaction, stakeholder satisfaction, and lessons learned (improved project delivery capabilities).

Product or service success: Product or service success is about defining the criteria by which the product or service delivered is deemed successful (e.g. system is used by all users in scope, uptime is 99.99%, customer satisfaction has increased by 25%, operational costs have decreased by 15%, etc.). These criteria must be measured once the product/service is implemented and over a defined period of time. This means it cannot be measured immediately at the end of the project itself.

Business success: Business success is about defining the criteria by which the product or service delivered brings value to the overall organization, and how it contributes financially and/or strategically to the business. For example: financial value contribution (increased turnover, profit, etc.) or competitive advantage (5% market share won, a technology advantage), etc.

Tip #11 Your SC should take responsibility for business success.

When it comes to accountability for success, it differs at each level:

1) Project delivery success: PM (and project team).
2) Product or service success: Product/Service Owner.
3) Business success: Project Sponsor.

But when it comes to responsibility, I am of the opinion that the SC is responsible for business success. They run the business; the team and the Project Manager do not. They can direct, control and manage the required changes in the business; the team can only define, plan and support them. If the business isn't prepared for the project (and the team has done its part), then the SC has failed.

Your SC should monitor business and strategic issues and advise the project team on issues that may present a risk to the project or have an impact on the project rationale or success.

After the project has delivered, the SC should continue to monitor whether the expected benefits are actually realized. A lessons-learned session on each of the three levels should be organized and summarized so that the rest of the organization can learn from both past successes and failures.

Tip #12 Your SC should be a strong advocate for the project.

Your SC members should actively and overtly support the project and act as advocates for its outcomes. If even your SC does not support the project, how can you expect the rest of the organization to?

Tip #13 Your SC has a very active role in maintaining your RAID lists.

A great tool to proactively manage your project is the set of so-called RAID lists. RAID is an acronym for Risks, Assumptions, Issues, and Decisions. Some use the "D" for dependencies instead of decisions, and some use the "A" for actions instead of assumptions. I personally track dependencies on my assumption list (because that is what dependencies are), and I have no need for a separate action list because I track actions in a separate column of each of the other lists. Your SC has a very active role in maintaining these. The members will have meaningful input on the Risks and Assumptions of the project, and the same goes for mitigations. They will need to be aware of the Issues, and every decision that cannot be made by the project team will typically be made by the SC. So instead of spending precious SC time on status reports (see above), use this opportunity to actively engage with the RAID lists.
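To make this concrete, a RAID log need not be more than a structured list with an owner, a status, and an actions column per entry. Below is a minimal sketch in Python; the field names and example entries are my own invention, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RaidItem:
    # category is one of "Risk", "Assumption", "Issue", "Decision"
    category: str
    description: str
    owner: str
    raised: date
    # each list tracks its follow-up actions in its own column,
    # instead of in a standalone action list
    actions: list = field(default_factory=list)
    status: str = "Open"

def open_items(log, category):
    """Items of one category that still need SC attention."""
    return [i for i in log if i.category == category and i.status == "Open"]

log = [
    RaidItem("Risk", "Key SME may be recalled by her function", "PM", date(2018, 9, 1),
             actions=["Ask the sponsor to secure a dedicated allocation"]),
    RaidItem("Decision", "Rollout order for the three regions", "SC", date(2018, 9, 4)),
]

print(len(open_items(log, "Risk")))  # open risks the SC should weigh in on
```

Walking the open items per category in the SC meeting replaces the passive status-report slot with an actual steering discussion.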

Tip #14 Your SC should provide some governance.

Besides the steering there is also the governance role of the SC. Typically this means:

- Approve the business case, project approach and project management methodology;
- Establish delegation authorities and limits for the project management, with regard to cost, time, resource, quality and scope;
- Define the acceptable risk profile and risk thresholds for the project, based on the company’s risk management strategy and review project risks;
- Oversee stakeholder management and change management programs;
- Oversee the project quality assurance program;
- Review and approve or reject project plans;
- Resolve matters of project cost, time, risk, resource, quality and scope escalated to the Committee;
- Monitor project progress against approved business case, project plans and delegations; and
- Approve project closure.

Tip #15 Your SC should be aware of the Dominator Effect.

SCs need leadership, and leaders often have strong personalities. When strong personalities begin to dominate SC agendas, projects can be put at risk. The problem with the dominator effect is twofold:
- The steering committee's direction becomes biased toward the dominator's personal preferences.
- Valuable input and perspectives from other members rarely make it to the table.

As the dominator takes over the SC, participation from the other members will decline. What's the point in attending meetings if you're not heard? In that case, the SC exists in name only. The co-lead model has proven effective in balancing perspectives and helping prevent the dominator effect. Ideally, your co-leads will bring very different perspectives and leadership styles to the table to balance things out. In some instances, even a tri-lead model can be effective for large and complex initiatives.

Tip #16 Your SC should not be afraid to request project reviews.

At regular intervals on multiyear projects, reviews can serve a wide range of stakeholders and fulfill a variety of roles. It's therefore not surprising that organizations undertake several different types of review on their most important projects. From my perspective, there are five distinct types of review, each with its own focus and outcome.

1) Project Review: Can occur at any point through the project. It assesses progress against the original schedule, budget, and deliverables. It looks at the effectiveness of the team, project management, engineering practices, and other related processes. It typically delivers some sort of assessment of the likelihood of project success and identifies areas of concern and corrective actions.

2) Gate Review: Occurs at the end of a project phase or at some other defined point in the project’s lifecycle. It typically represents a decision point, using the outputs from an evaluation to decide whether continued investment in the project is justified.

3) Project Audit: An objective evaluation by a group outside the project team. A project audit is about compliance and about the present. It aims to demonstrate the extent to which your project conforms to the required organizational and project standards. So, if your organization uses PRINCE2 or its own project management methodology, an audit will examine how closely you follow the processes. An audit can take place during or after the project.

4) Project Retrospective: Occurs as the project closes down. It assesses the overall success of the project and identifies what did or didn’t work during its execution, generating lessons learned for the future. Also known as Postmortem or Post-project Review.

5) Benefits Realization Review: Occurs after the organization has had some chance to use the outputs from the project. It evaluates the extent to which the benefits identified in the original business case have been achieved.


Carefully constructing a functioning and effective SC is critical for project success. Don't gamble unnecessarily on the success of your project. Strategically structure your SC to have the right leadership model based on the personalities around the table. Build objectivity into your membership and take the time to ensure everyone is in the same boat before the committee sets sail and starts steering.

Workshop Stakeholder Management & Engagement

If you want to learn more about Stakeholder Management and Stakeholder Engagement, join one of our workshops on this topic.


Wednesday, June 13, 2018

Many decisions are no decisions (and this makes projects difficult)

Effective decision making
Again and again during projects I am confronted with the same situation: I ask for a decision on something, and the answer I get is "Oh, that has already been decided...".

No, it has not. A decision has NOT been made until people know:

- the name of the person accountable for carrying it out;
- the deadline;
- the names of the people who will be affected by the decision and therefore have to know about, understand, and approve it—or at least not be strongly opposed to it; and           
- the names of the people who have to be informed of the decision, even if they are not directly affected by it.

An extraordinary number of organizational decisions run into trouble because these bases aren’t covered.

It’s just as important to review decisions periodically—at a time that’s been agreed on in advance—as it is to make them carefully in the first place. That way, a poor decision can be corrected before it does real damage. These reviews can cover anything from the results to the assumptions underlying the decision. Such a review is especially important for the most crucial and most difficult of all decisions, the ones about hiring, firing and promoting people.

When it comes to making decisions, I am of the opinion that people and organizations should use a simple decision-making process to make them effective. Personally, I use the one outlined by Peter F. Drucker. In 1967, he wrote his famous article on effective decision making in the Harvard Business Review, and it still stands the test of time. You can find it online. Drucker defines the following six steps for effective decisions.

1) Classifying the problem. Is it generic? Is it exceptional and unique? Or is it the first manifestation of a new genus for which a rule has yet to be developed?

2) Defining the problem. What are we dealing with?

3) Specifying the answer to the problem. What are the “boundary conditions”?

4) Deciding what is “right,” rather than what is acceptable, in order to meet the boundary conditions. What will fully satisfy the specifications before attention is given to the compromises, adaptations, and concessions needed to make the decision acceptable?

5) Building into the decision the action to carry it out. What does the action commitment have to be? Who has to know about it?

6) Testing the validity and effectiveness of the decision against the actual course of events. How is the decision being carried out? Are the assumptions on which it is based appropriate or obsolete?


Saturday, May 26, 2018

Understanding your problem is half the solution (actually the most important half)

Problem understanding
Before we can solve a problem, we need to know exactly what the problem is, and we should put a good amount of thinking and resources into understanding it. And because today's problems are so complex, we know they can't be solved simply by being broken down into separate components.

Russell Ackoff (1979) offers one of the most compelling metaphors for complex problems I have encountered so far. He called them "messes". How many times have you heard, or spoken, the phrase "this project is a mess"? I have, countless times. That said, the word "mess" means many things to many people, so without context it means very little. Ackoff defined it as follows:

“Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes. Problems are abstractions extracted from messes by analysis; they are to messes as atoms are to tables and chairs.”

The only real means of achieving a shared understanding of a problem is dialogue. Unfortunately, in this day and age where hours are equated to cash and naïve simplicity reigns, time spent on understanding problems is viewed as time wasted.

“Everything Should Be Made as Simple as Possible, But Not Simpler” – Albert Einstein

Management demands action, not talk and collaborative analysis. Meetings that involve debate and discussion, especially, are seen as "just talk". This is understandable considering the number of meaningless meetings most people experience, but I believe debate and discussion are necessary to create a shared understanding of a problem. I would not use the same time split as Einstein, but that is only because the problems I work on are not saving the world.

“Given one hour to save the world, I would spend 55 minutes defining the problem and 5 minutes finding the solution.” - Albert Einstein

The next time you're in a meeting to address a problem, pay attention to how much time is spent discussing and understanding the problem versus how much time is spent on solutions. If your experience is typical, perhaps only a few minutes of an hour-long meeting will be spent understanding the problem.

When I started paying attention, I realized meeting after meeting that the problem would be briefly summarized and then people would spend a huge amount of energy brainstorming or fleshing out solutions.

“It's so much easier to suggest solutions when you don't know too much about the problem.” - Malcolm Forbes

So, what happens when we don’t understand the problem? When the problem is not well understood, “solutions” only create new problems. In fact, there’s no guarantee the solutions will address the problem at all. Conversely, the more we understand the problem, the more likely we understand the root cause and can create countermeasures so the problem won’t recur. Understanding the problem is the first step of any problem-solving. The second step is defining how you measure success. After all, you would like to know if your solution is actually solving the problem.

“We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.” – Russell L. Ackoff


Tuesday, May 01, 2018

How to review your team’s software development practices

Software Development Practices
An important part of the project reviews I do is a software development practices review. Notice I do not say "best practices". The term "best practices" is meaningless in many contexts: at best, it's vague, subjective, and highly dependent on context.

Wikipedia defines best practices as:

“A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things.”

In other words, a "best practice" is a practice that has somehow been empirically proven to be the best. Although there is considerable research on software development practices, I think it is impossible to derive from this kind of research the practices that would be "best" for all projects.

What I am looking for in my software development practices review is the application of "standard practices" or "accepted practices". A good example is surgeons washing their hands prior to surgery. It is such a widely accepted and standardized practice that it would be scandalous for a surgeon not to follow it. Washing hands has been empirically demonstrated to produce better patient outcomes than not washing hands. Is hand-washing the absolute "best" thing in the universe a surgeon could do prior to surgery? That philosophical question is not especially important. What matters is that hand-washing is beneficial and widely accepted.

The review

The review consists of sitting in a room with the development team and going through the list of practices from the second part of this article. Just ask the team about each practice and let them tell their stories; you might learn a lot of other things too! After talking about a practice, the team should agree on one of the following answers:

1) We do not do this
2) We do not need this
3) We do this, but not enough/consistently
4) We do this, but we do not see the expected benefits
5) We do this and see the expected benefits

After agreeing on an answer, everybody in the team should give input on why this is the case. You could use the 5 whys technique when you feel that it helps you get important information.

Of course there are exceptions based on specific context, and of course there are wildly varying degrees of maturity for each practice. But in general, the more often the team tells you "No, we do not do this", the more room for improvement there is. That is the positive formulation. You could also say: the more "We do not do this" answers you count, the more issues with the software and your project you can expect.

The moment you should listen very carefully is when the team says “No, we do not NEED this”. Here you will learn about your project specific challenges and environment AND you will learn about the mindset of your team.

Answers 3 and 4 indicate possible room for improvement in the implementation of a practice, and with that in the software development process as a whole.

You can combine this review with the delivery team review. The answers to the software development practices review will give you valuable information on team dynamics, mindset, individual skills and knowledge, as well as skills and knowledge as a team.
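If you record the answers, the tally is trivial to automate. The sketch below uses the 1-5 answer scale above; the practice names and scores are an invented example:

```python
from collections import Counter

ANSWERS = {
    1: "We do not do this",
    2: "We do not need this",
    3: "We do this, but not enough/consistently",
    4: "We do this, but we do not see the expected benefits",
    5: "We do this and see the expected benefits",
}

# example outcome of one review session; the practices are from this article
review = {
    "Separate environments": 5,
    "Version control": 5,
    "Branching strategy": 3,
    "Bug tracking": 1,
    "Unit tests": 1,
    "Code reviews": 4,
}

tally = Counter(review.values())
# answer 1 signals a missing practice; 3 and 4 signal implementation gaps
missing = [p for p, a in review.items() if a == 1]
gaps = [p for p, a in review.items() if a in (3, 4)]

print("missing:", missing)
print("gaps:", gaps)
```

The count of answer 1 is the rough "expect trouble" indicator described above; the lists of 3s and 4s become the improvement backlog.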

The practices

Since the creation of the first programming languages in the 1950s, widespread agreement has emerged in the software development community on which software engineering practices help create better software. In the rest of this article I will explain the practices I consider most effective.

1. Separate Development and Deployment Environments

In order to develop and deploy software effectively, you need a number of different environments. This practice seems so obvious, yet I see time and again that essential environments are not available. Let's start with the first environment.

1) Development: The development environment is the environment in which changes to software are developed, most simply an individual developer's workstation. This differs from the ultimate target environment in various ways – the target may not be a desktop computer (it may be a smartphone, embedded system, headless machine in a data center, etc.), and even if otherwise similar, the developer's environment will include development tools like a compiler, integrated development environment, different or additional versions of libraries and support software, etc., which are not present in a user's environment.

2) Integration: In the context of version control, particularly with multiple developers, finer distinctions are drawn: a developer has a working copy of source code on their machine, and changes are submitted to the repository, being committed either to the trunk or a branch, depending on development methodology. The environment on an individual workstation, in which changes are worked on and tried out, may be referred to as the local environment, sandbox or development environment. Building the repository's copy of the source code in a clean environment is a separate step, part of integration (integrating disparate changes), and this environment is usually called the integration environment; in continuous integration this is done frequently, as often as for every version. The source code level concept of "committing a change to the repository", followed by building the trunk or branch, corresponds to pushing to release from local (individual developer's environment) to integration (clean build); a bad release at this step means a change broke the build, and rolling back the release corresponds to either rolling back all changes from that point onward, or undoing just the breaking change, if possible.

3) Test: The purpose of the test environment is to allow human testers to exercise new and changed code via either automated checks or non-automated techniques. After the developer accepts the new code and configurations through unit testing in the development environment, the items are moved to one or more test environments. Upon test failure, the test environment can remove the faulty code from the test platforms, contact the responsible developer, and provide detailed test and result logs. If all tests pass, the test environment or a continuous integration framework controlling the tests can automatically promote the code to the next deployment environment.

Different types of testing suggest different types of test environments, some or all of which may be virtualized to allow rapid, parallel testing to take place. For example, automated user interface tests may occur across several virtual operating systems and displays (real or virtual). Performance tests may require a normalized physical baseline hardware configuration, so that performance test results can be compared over time. Availability or durability testing may depend on failure simulators in virtual hardware and virtual networks.

Tests may be serial (one after the other) or parallel (some or all at once) depending on the sophistication of the test environment. A significant goal for agile and other high-productivity software development practices is reducing the time from software design or specification to delivery in production. Highly automated and parallelized test environments are important contributors to rapid software development.

4) Staging: Staging is an environment for final testing immediately prior to deploying to production. It seeks to mirror the actual production environment as closely as possible and may connect to other production services and data, such as databases. For example, servers will run on remote machines, rather than locally (as on a developer's workstation during development, or on a single test machine during test), which tests the effect of networking on the system. This environment is also known as UAT, which stands for User Acceptance Test.

The primary use of a staging environment is to test all installation/configuration/migration scripts and procedures, before they are applied to production environment. This ensures that all major and minor upgrades to the production environment will be completed reliably without errors, in minimum time.

Another important use of staging is for performance testing, particularly load testing, as this often depends sensitively on the environment.

5) Production: The production environment is also known as live, particularly for servers, as it is the environment that users directly interact with. Deploying to production is the most sensitive step; it may be done by deploying new code directly (overwriting old code, so only one copy is present at a time), or by deploying a configuration change. This can take various forms: deploying a parallel installation of a new version of code, and switching between them with a configuration change; deploying a new version of code with the old behavior and a feature flag, and switching to the new behavior with a configuration change that performs a flag flip; or by deploying separate servers (one running the old code, one the new) and redirecting traffic from old to new with a configuration change at the traffic routing level. These in turn may be done all at once or gradually, in phases.

Deploying a new release generally requires a restart, unless hot swapping is possible, and thus requires either an interruption in service (usual for user software, where applications are restarted), or redundancy – either restarting instances slowly behind a load balancer, or starting up new servers ahead of time and then simply redirecting traffic to the new servers.

When deploying a new release to production, rather than immediately deploying to all instances or users, it may be deployed to a single instance or fraction of users first, and then either deployed to all or gradually deployed in phases, in order to catch any last-minute problems. This is similar to staging, except actually done in production, and is referred to as a canary release, by analogy with coal mining. This adds complexity due to multiple releases being run simultaneously, and is thus usually over quickly, to avoid compatibility problems.
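The canary routing described above can be sketched as a deterministic rule: hash each user to a stable bucket and send only a chosen fraction to the new release. The function below is illustrative, not any particular product's API:

```python
import hashlib

def serves_canary(user_id: str, percent: int) -> bool:
    """Route a stable `percent` of users to the canary release.

    Hashing keeps each user's assignment consistent across requests,
    so nobody flips back and forth between the old and new version.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket 0..99 per user
    return bucket < percent

# ramp up in phases, e.g. 1% -> 10% -> 100%
users = [f"user-{i}" for i in range(1000)]
share = sum(serves_canary(u, 10) for u in users) / len(users)
```

Raising `percent` in configuration, rather than redeploying code, is what makes the phased rollout a pure "flag flip".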

In some exceptional cases you could do without a Test Environment and use the Staging Environment for this, but all other environments should be present.
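One practical consequence of keeping these environments separate is that configuration has to be selected per environment instead of hard-coded. A minimal sketch (the environment names follow the list above; the settings themselves are invented):

```python
import os

# one configuration block per deployment environment
CONFIG = {
    "development": {"db_url": "sqlite:///local.db",     "debug": True,  "workers": 1},
    "test":        {"db_url": "sqlite:///test.db",      "debug": True,  "workers": 1},
    "staging":     {"db_url": "postgres://staging/app", "debug": False, "workers": 4},
    "production":  {"db_url": "postgres://prod/app",    "debug": False, "workers": 16},
}

def load_config(env=None):
    """Pick the config for the current environment (APP_ENV, default development)."""
    env = env or os.environ.get("APP_ENV", "development")
    if env not in CONFIG:
        raise ValueError(f"unknown environment: {env!r}")
    return CONFIG[env]

cfg = load_config("staging")
```

Failing loudly on an unknown environment name is deliberate: a typo should never silently fall back to production settings.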

2. Use of Version Control

Version control is any kind of practice that tracks and provides control over changes to source code. Teams can use version control software to maintain documentation and configuration files as well as source code.

As teams design, develop and deploy software, it is common for multiple versions of the same software to be deployed in different sites and for the software's developers to be working simultaneously on updates. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).

At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the version control process have been developed. This ensures that the majority of management of version control steps is hidden behind the scenes.

Moreover, in software development, legal and business practice and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated version control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.

Version control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.

3. Clear Branching Strategy

Branching strategy has always been one of those sticky topics that causes many questions. Many senior programmers are baffled by the ins and outs of branching and merging, and for good reason; it is a difficult topic. Many strategies exist: main only, development isolation, release isolation, feature isolation, etc.

I've been around many different organizations. I've been the person who was told what the branching strategy was, and I have been the person who designed it. I've seen it done just about every way possible, and after all that, I have come to the following conclusion.

Keep it simple. Working directly off the trunk is by far the best approach in my opinion.

In a future post, I will show you what I think is the simplest and most effective branching strategy: a strategy I have used effectively in the past and have developed over time. It can be summarized as follows:

1) Everyone works off of trunk.
2) Branch when you release code.
3) Branch off a release when you need to create a bug fix for already released code.
4) Branch for prototypes.
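To walk the four rules through end to end, the sketch below drives git in a throwaway local repository. It assumes a `git` binary on the PATH, and the branch names are only examples:

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    """Run one git command in the given repo and return its stdout."""
    done = subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True)
    return done.stdout.strip()

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)
trunk = git("symbolic-ref", "--short", "HEAD", cwd=repo)  # "master" or "main"

app = Path(repo) / "app.txt"

# 1) everyone works off of trunk
app.write_text("v1\n")
git("add", "app.txt", cwd=repo)
git("commit", "-qm", "feature work, committed straight to trunk", cwd=repo)

# 2) branch when you release code
git("branch", "release/1.0", cwd=repo)

# 3) branch off a release for a bug fix to already released code
git("checkout", "-q", "-b", "hotfix/1.0.1", "release/1.0", cwd=repo)
app.write_text(app.read_text() + "hotfix\n")
git("commit", "-aqm", "bug fix for released code", cwd=repo)

# back to trunk for ongoing work (4: prototypes get their own branch the same way)
git("checkout", "-q", trunk, cwd=repo)
```

Note that no long-lived development branches exist at any point: trunk is where work happens, and branches exist only to freeze a release or isolate a fix.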

4. Use of a Bug Tracking System

A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. If your team is not using some kind of system for this, then you are in for a lot of trouble.

Many bug tracking systems, such as those used by most open source software projects, allow end-users to enter bug reports directly. Other systems are used only internally in a company or organization doing software development. Typically bug tracking systems are integrated with other software project management applications.

The main benefit of a bug-tracking system is to provide a clear centralized overview of development requests (including bugs, defects and improvements, the boundary is often fuzzy), and their state. The prioritized list of pending items (often called backlog) provides valuable input when defining the product road map, or maybe just "the next release".

A second benefit is that it gives you very useful information about the quantity, type and environment of the bugs/defects that are discovered. There is a big difference between finding them in the test environment versus the production environment. In general, the later you find them, the more they cost to fix.
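At its simplest, such a system is a prioritized list of pending items that records where each defect was found. A minimal sketch (the fields and example bugs are invented):

```python
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    severity: int   # 1 = critical ... 4 = trivial
    found_in: str   # "test", "staging" or "production"
    status: str = "open"

backlog = [
    Bug("Login fails for SSO users", 1, "production"),
    Bug("Report totals off by rounding", 2, "test"),
    Bug("Typo on settings page", 4, "staging"),
]

# the prioritized list of pending items: most severe first
pending = sorted((b for b in backlog if b.status == "open"),
                 key=lambda b: b.severity)

# escaped defects (found in production) are the expensive ones
escaped = [b for b in backlog if b.found_in == "production"]
```

Tracking `found_in` is what lets you measure whether defects escape to production or are caught in earlier environments.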

5. Collective Code Ownership

Collective Ownership encourages everyone to contribute new ideas to all parts of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs or refactor. No one person becomes a bottleneck for changes. This is easy to do when you have all your code covered with unit tests and automated acceptance tests.

6. Continuously Refactoring

Code should be written to solve the known problem at the time. Often, teams become wiser about the problem they are solving, and continuously refactoring and changing code ensures the code base is forever meeting the most current needs of the business in the most efficient way. In order to guarantee that changes do not break existing functionality, your regression tests should be automated. I.e. unit tests are essential.

7. Writing Unit Tests

The purpose of unit testing is not finding bugs. A unit test is a specification of the expected behavior of the code under test, and the code under test is the implementation of that expected behavior. So the unit test and the code under test check the correctness of, and protect, each other. Later, when someone changes the code under test in a way that alters the behavior expected by the original author, the test will fail. If your code is covered by a reasonable amount of unit tests, you can maintain the code without breaking existing features. That's why Michael Feathers defines legacy code in his book as code without unit tests. Without unit tests, every refactoring effort is a major risk.
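To make the "unit test as specification" point concrete, the sketch below pins down the expected behavior of a made-up pricing helper; anyone who later changes the helper in a way that alters this behavior is told immediately by a failing test:

```python
import unittest

def price_with_vat(net: float, rate: float = 0.21) -> float:
    """Gross price, rounded to whole cents (the behavior the tests pin down)."""
    return round(net * (1 + rate), 2)

class PriceWithVatSpec(unittest.TestCase):
    """Each test documents one expected behavior of the code under test."""

    def test_default_rate_is_applied(self):
        self.assertEqual(price_with_vat(100.0), 121.0)

    def test_result_is_rounded_to_cents(self):
        self.assertEqual(price_with_vat(9.99), 12.09)

    def test_zero_rate_returns_net_price(self):
        self.assertEqual(price_with_vat(50.0, rate=0.0), 50.0)
```

Run with `python -m unittest` in the module's directory. Note how the tests read as a behavioral specification, not as a bug hunt.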

8. Code Reviews

Code review is a systematic examination (sometimes referred to as peer review) of source code. It is intended to find mistakes overlooked in software development, improving the overall quality of software. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.

Code review practices fall into two main categories: formal code review and lightweight code review. Formal code review, such as a Fagan inspection, involves a careful and detailed process with multiple participants and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review.

Lightweight code review typically requires less overhead than formal code inspections. Lightweight reviews are often conducted as part of the normal development process:

1) Over-the-shoulder – one developer looks over the author's shoulder as the latter walks through the code.

2) Email pass-around – the source code management system automatically emails code to reviewers after a check-in is made.

3) Pair programming – two developers work on one piece of code, sharing one keyboard and one monitor. Pairing results in higher-quality output because it greatly reduces wasted time and defects and fosters close collaboration. It is nothing other than continuous code review. Hence, when pairing is in place, you do not need a separate code review before merging your branches, and continuous integration can proceed faster. This is common in Extreme Programming.

4) Tool-assisted code review – authors and reviewers use software tools, informal ones such as pastebins and IRC, or specialized tools designed for peer code review.

A code review case study published in the book Best Kept Secrets of Peer Code Review found that lightweight reviews uncovered as many bugs as formal reviews, but were faster and more cost-effective. In my opinion, it does not matter what kind of code review you do, but no code should go into production that has not been peer-reviewed.

9. Build Automation

Build automation is the process of automating the creation of a software build and the associated processes including: compiling computer source code into binary code, packaging binary code, and creating all necessary artifacts to deploy the application on a target environment.

Build automation is considered the first step in moving toward implementing a culture of Continuous Delivery and DevOps. Build automation combined with Continuous Integration, deployment, application release automation, and many other processes helps move an organization forward in establishing software delivery best practices.
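To make the idea concrete, here is a toy sketch of an automated build in Python. It is purely illustrative (the step functions are stand-ins; a real project would use a dedicated build tool such as Make, Gradle, or Maven), but it shows the essence: an ordered sequence of steps that fails fast on the first error.

```python
# Minimal sketch of an automated build: an ordered list of steps,
# executed in sequence, failing fast if any step raises an exception.

def compile_sources(ctx):
    ctx["binary"] = f"build/{ctx['project']}.bin"   # stand-in for a real compiler call

def run_unit_tests(ctx):
    ctx["tests_passed"] = True                      # stand-in for a real test runner

def package_artifact(ctx):
    ctx["artifact"] = ctx["binary"] + ".tar.gz"     # stand-in for real packaging

BUILD_STEPS = [compile_sources, run_unit_tests, package_artifact]

def run_build(project: str) -> dict:
    ctx = {"project": project}
    for step in BUILD_STEPS:
        step(ctx)  # any exception aborts the build immediately
    return ctx
```

The point is that every artifact needed for deployment comes out of one repeatable, scripted process rather than manual steps.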

10. Automated Tests and Test Automation

In the world of testing in general, and continuous integration and delivery in particular, there are two types of automation:

1) Automated Tests
2) Test Automation

While it might just seem like two different ways to say the same thing, these terms actually have very different meanings.

Automated tests are tests that can be run automatically, often developed in a programming language. Here we talk about individual test cases: unit tests, integration/service tests, performance tests, end-to-end tests, or acceptance tests. The latter are also known as Specification by Example.

Test automation is a broader concept that includes automated tests. From my perspective, it should be about the full automation of test cycles, from check-in up to deployment; this is also called continuous testing. Both automated tests and test automation are important to continuous delivery, but it is really the latter that makes high-quality continuous delivery possible.

11. Continuous Integration

Martin Fowler defines Continuous Integration (CI) in his key article as follows: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." You see, without unit tests and test automation, it is impossible to do CI right. And only when you do CI right can you succeed at Continuous Deployment.

12. Continuous Delivery

Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing. Since every change is delivered to a staging environment using complete automation, you can have confidence the application can be deployed to production with a push of a button when the business is ready. Continuous deployment is the next step of continuous delivery: Every change that passes the automated tests is deployed to production automatically. Continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.

A simple continuous delivery pipeline could look like this:

1) The continuous integration server picks up changes in the source code
2) Runs the unit tests
3) Deploys (automated) to an integration environment
4) Runs automated integration tests
5) Deploys (automated) to an acceptance environment
6) Runs automated acceptance tests
7) Deploys (automated or manual) to production
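The steps above can be sketched as a toy pipeline model in Python. This is a pure illustration (the stage names and always-passing gate functions are invented, and a real pipeline would live in a CI server such as Jenkins or GitLab CI), but it captures the core rule: a change is promoted to the next environment only if the previous gate passes.

```python
# Toy model of a delivery pipeline: each stage is (environment, test gate).
# A change moves forward only while every gate keeps passing.

def unit_tests(change):        return True   # stand-ins for real test suites
def integration_tests(change): return True
def acceptance_tests(change):  return True

PIPELINE = [
    ("build",       unit_tests),
    ("integration", integration_tests),
    ("acceptance",  acceptance_tests),
]

def deliver(change: str) -> str:
    """Return the last environment the change reached."""
    reached = "none"
    for environment, gate in PIPELINE:
        if not gate(change):
            return reached          # gate failed: stop promotion
        reached = environment
    return reached                  # now ready for the push-button production deploy
```

In continuous delivery the final production step stays a button press; in continuous deployment it would simply be one more automated stage.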

13. Configuration Management by Code

The operating system, host configuration, operational tasks etc. are automated with code by developers and system administrators. As code is used, configuration changes become standard and repeatable. This relieves developers and system administrators of the burden of configuring the operating system, system applications or server software manually.
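A core idea behind configuration-as-code tools such as Ansible, Chef, or Puppet is declarative, idempotent configuration: you describe the desired state, and applying it twice changes nothing the second time. A toy Python sketch (the state keys are invented for illustration):

```python
# Desired state is data; applying it is repeatable and idempotent.

DESIRED_STATE = {
    "ntp.enabled": True,
    "sshd.port": 22,
    "max_open_files": 65536,
}

def apply_configuration(current: dict, desired: dict) -> list:
    """Bring `current` to `desired`; return the list of keys that changed."""
    changed = []
    for key, value in desired.items():
        if current.get(key) != value:   # only touch what deviates from desired state
            current[key] = value
            changed.append(key)
    return changed
```

Running it a second time against the same host reports no changes, which is exactly the property that makes configuration by code standard and repeatable.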

14. Code Documentation

Inevitably, documentation and code comments become lies over time. In practice, few people update comments and/or documentation when things change. Strive to make your code readable and self-documenting through good naming practices and known programming style.
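As a small illustration of self-documenting code, compare a comment-dependent version with one where the names carry the meaning (the business rule here is invented):

```python
# Comment-dependent: the comment can rot while the code changes underneath it.
def calc(d, t):
    # discount applies for orders over 100 from customers loyal for 2+ years
    return d > 100 and t >= 2

# Self-documenting: the names state the rule; no comment needed.
LOYALTY_YEARS_FOR_DISCOUNT = 2
MIN_ORDER_TOTAL_FOR_DISCOUNT = 100

def qualifies_for_loyalty_discount(order_total: float, years_as_customer: int) -> bool:
    return (order_total > MIN_ORDER_TOTAL_FOR_DISCOUNT
            and years_as_customer >= LOYALTY_YEARS_FOR_DISCOUNT)
```

When the rule changes, the second version forces the names to change with it, so the "documentation" can never drift out of date.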

15. Step by step development process guide 

This guide is essential for onboarding of new people and inspecting and adapting the way the team works. I work a lot with Kanban and ScrumBan and an important concept of these is to make your process explicit.

16. Step by step deployment process guide 

Somebody who does not usually do this should be able to deploy to production with this guide on the table. You never know when you will need it, but the day will come, and then you will be happy to have it. Of course, the further you move toward Continuous Delivery, the smaller this guide becomes, because the documentation of the process is encoded in your automated pipelines.

17. Monitoring and Logging

To gauge the impact that the performance of applications and infrastructure has on consumers, organizations monitor metrics and logs. The data and logs generated by applications and infrastructure are captured, categorized, and analyzed to understand how users are affected by changes or updates, which makes it easier to detect the sources of unexpected problems. Constant monitoring is necessary to ensure steady availability of services and to keep pace with the speed at which infrastructure is updated. When these data are analyzed in real time, organizations can monitor their services proficiently.

18. Being aware of technical debt

The metaphor of technical debt in code and design can be described as follows: you start at an optimal level of code quality. In the next release, you add a new feature, which would take an effort E (assuming, of course, that estimations are somewhere near reality). If the level of the code was less than optimal, the effort will be E + T, where T is the technical debt. Writing bad code is like going further into debt: you take the loan now and repay it later. The bigger the mess, the larger the delay in the next release.
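The effect compounds: if each release adds a little debt that is never repaid, the extra effort T grows release after release. A toy arithmetic sketch (the numbers are invented):

```python
# Toy model: each release needs base effort E plus the accumulated debt T,
# and skipping the clean-up adds a little more debt for the next release.

def release_efforts(base_effort: float, debt_per_release: float, releases: int) -> list:
    efforts, debt = [], 0.0
    for _ in range(releases):
        efforts.append(base_effort + debt)   # effort = E + T
        debt += debt_per_release             # the debt was never repaid
    return efforts
```

With a base effort of 10 and 2 units of new debt per release, four releases cost 10, 12, 14, and 16: the feature itself never got bigger, only the interest on the loan did.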

The term “technical debt” was first introduced by Ward Cunningham in the early 90s, when the disconnect between development and business was growing bigger and bigger. Business people would urge developers to release untested, ugly code in order to get their product or new features out faster. The developers tried to explain why this was a mistake. Some things never change...

Most products and projects are still released much earlier than the developers would have wanted. Assuming that developers are not just being stubborn (I know, maybe an even bigger assumption than decent estimations), you might think we failed to get the message across to the business. But we have done an awesome job explaining what technical debt is and what its consequences will be. The business people understand it; they are simply willing to take the loan now. Can you blame them? The business wants something out there, in the field, that will sell now.

No problem, just make sure the consequences of these decisions are clear for all parties involved.

19. Good design

Good design is hard to judge, but luckily bad design is easy to “smell”. Software developers are notorious for using different criteria to evaluate good design but, from experience, I tend to agree with Bob Martin and Martin Fowler, who have said that there is a set of criteria engineers usually agree upon when it comes to bad design.

You can't recognise good design until you know what bad design is. And once you know what good design should avoid, you can easily judge whether a given engineering principle has merit or is just fuzz waiting to distract you from your real goal of building software that is useful to people. We therefore use bad design as a starting point for determining whether we have a good design.

A piece of software that fulfils its requirements and yet exhibits any or all of the following traits can be considered to have "bad design":

1) Rigidity: It's too hard to make changes because every change affects too many other parts of the system.
2) Fragility: When you make a change, unexpected parts of the system break.
3) Immobility: It's hard to reuse a chunk of code elsewhere because it cannot be disentangled from its current application/usage.
4) Viscosity: It's hard to do the "right thing" so developers take alternate actions.
5) Needless Complexity: overdesign; building structures the problem does not need.
6) Needless Repetition: copy-and-paste code, also known as "mouse abuse".
7) Opacity: disorganized expression; the intent of the code is hard to read.
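A tiny illustration of the first smell, rigidity, and its cure (the classes are invented): in the rigid version the report generator is hard-wired to one output format, so every new format forces a change to it; depending on a formatter abstraction removes that coupling.

```python
# Rigid: adding a new output format forces a change to this class.
class RigidReportGenerator:
    def render(self, data: dict) -> str:
        return "\n".join(f"{k}: {v}" for k, v in data.items())  # text only, forever

# Flexible: the generator depends on a formatter abstraction instead.
class TextFormatter:
    def format(self, data: dict) -> str:
        return "\n".join(f"{k}: {v}" for k, v in data.items())

class CsvFormatter:
    def format(self, data: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in data.items())

class ReportGenerator:
    def __init__(self, formatter):
        self.formatter = formatter      # new formats plug in; this class never changes

    def render(self, data: dict) -> str:
        return self.formatter.format(data)
```

The flexible version also scores better on immobility: each formatter can be reused elsewhere because it is disentangled from the generator.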


As you may have noticed, the practices described above are “layered”: to do x, you need to do y first. For example, Continuous Integration is not possible without Build Automation, nor is Test Automation without Automated Tests. And so on. Good software development practices start with the foundational layers and then build on top. When the foundation is weak, everything else will be weak as well.
