Advancing the State of the Art for Engineering Leadership

After the team agrees on a goal and formulates an initial plan of attack, a team leader’s primary value is tracking project status, supporting people in achieving their goals, and helping the team course correct as necessary. The following is a non-comprehensive list of questions you might ask yourself to quickly determine whether the project is on track. On many teams, project leads (tech leads, PMs, and engineering managers) will shoulder the burden of identifying and removing roadblocks. However, the best teams have a shared sense of project health and distribute responsibility for ensuring the project is healthy.

Execution

Are there clear milestones and a plan to achieve them? Do we use new information to revise the plan and goals?

For long projects (a month or more), setting intermediate milestones helps the team set expectations for what should be done when and manage its energy more effectively. When people are given a big goal and a deadline far in the future, it’s difficult for them to independently determine how much should be done when. It’s important to revise milestone dates when the team is ahead of or behind schedule. When deciding whether to adjust milestones, the primary constraints are the team’s energy and morale and how urgently the business needs you to ship.

Is the team shipping at sustainable pace?

A sustainable pace is one that the team can maintain indefinitely. For example, working two-week sprints of 12-hour days, 5 days a week, is not sustainable for most people. Working two-week sprints that consist of 2 days of planning and 8 days of work at 8 hours per day is sustainable (most people can comfortably maintain that schedule). The pace at which your team works best depends on the individuals and the type of work you’re doing.

Are we assessing the quality of the project at regular intervals?

Setting goals before the project begins helps us quantify the project’s expected impact. It’s also important to do continual qualitative assessment. Assessing this accurately is largely a matter of collecting feedback (from the team and stakeholders). That may take the form of assessing metrics impact, design reviews, product reviews, or periodic informal walkthroughs you do yourself - the approach depends on the type of project. This is a complement to milestones - complete a chunk of work and use it as an opportunity to assess project quality.

Prioritization

Are we prioritizing working on the most important tasks?

Most projects can be broken down into smaller milestones that deliver value to the user or help the team validate assumptions. For example, product development teams commonly narrow product requirements down to a minimum viable product to determine whether their feature is useful without building out the full vision. And within that milestone there may be several more milestones (e.g. prototyping, feature complete, ship ready). Periodically reassessing task priority in the context of the next milestone is important for helping the team stay focused on the most important tasks.

Do we assess task priority with the correct stakeholders?

Prioritization should not be done in isolation. The product development team owns execution of the product, but all stakeholders should be a strong input into the prioritization exercise.

Impact

Is there new information that affects expected impact? And are we using that information to course correct?

It’s easy to course correct when new information arises that affects the project’s timeline: you either work harder or change the deadline. Responding to changes in expected impact is trickier. In the best case, you should be prepared to ask for more resources and/or time. In the worst case, you should be prepared to change the team’s mission or disband the team altogether.

Can the team see the impact they are having along the way? Is the team excited about their progress?

This is a complement to setting goals and intermediate milestones. When the team reaches a milestone, you should find a way to give them some tangible indicator of progress (e.g. metrics, a lightning talk, a product walkthrough). It’s also important to make a big deal out of finishing milestones, even when they’re not completed on time. Any time the team expends energy to achieve a goal, ensure that there’s a positive emotional payoff.

Project Management

Does the team understand the process for getting things done (status meetings, milestones, triage, etc.)?

Giving people visibility into how their work (especially metawork) fits into the bigger picture ensures that it feels like a good use of their time. Without a thorough understanding of the bigger picture, most people won’t have sufficient motivation to update Jira tickets, complete intermediate milestones, or clean up tech debt in the middle of shipping. Also, the team’s process is collectively owned, and without understanding all of the moving parts the team won’t know enough to help you iterate on that process.

Are all team members clear on how we’re making decisions?

Setting an expectation that everyone’s ideas are welcome often isn’t sufficient for helping the team make decisions collectively. Giving people a clear idea of how decisions are made (e.g. who owns the decision, what the right forums for feedback are, when the decision is being made) gives them a greater sense of autonomy and empowers them to participate in the decision-making process.

Do we understand how well we’re executing relative to our time estimates? Are they realistic?

Understanding how well you’re executing against your estimates is one of the quickest ways to determine whether you need to adjust your plan or approach.
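One rough way to quantify this is to compare actual effort against estimates across completed tasks. The sketch below is a minimal, hypothetical illustration - the task fields and numbers are invented for the example, not part of any particular tracking tool:

```python
# Hypothetical sketch: compare actual effort to estimates to see
# whether the team's plans are realistic.

def estimate_accuracy(tasks):
    """Return the ratio of actual to estimated days across completed tasks.

    A ratio near 1.0 means estimates are realistic; well above 1.0 means
    the team is consistently underestimating.
    """
    estimated = sum(t["estimated_days"] for t in tasks)
    actual = sum(t["actual_days"] for t in tasks)
    return actual / estimated

completed = [
    {"name": "API endpoint", "estimated_days": 3, "actual_days": 5},
    {"name": "schema migration", "estimated_days": 2, "actual_days": 2},
    {"name": "UI polish", "estimated_days": 1, "actual_days": 2},
]

print(round(estimate_accuracy(completed), 2))  # 1.5 -> underestimating by ~50%
```

Aggregating over many tasks matters: individual estimates are noisy, but a persistent ratio well above 1.0 is a clear signal that the plan needs adjusting.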

How much metawork is there?

Communication

Are you giving timely updates to stakeholders?

There are several types of critical updates you should be funneling to your stakeholders: project goals, strategic changes, design/implementation decisions, staffing shifts, and achievement of milestones.

If you’re working on anything non-trivial, frequent course corrections are expected and necessary. Course corrections become disruptive when they’re not communicated with adequate context in a timely fashion. Stakeholders shouldn’t be surprised by major developments in the project, so you should be proactively managing their perception of project status and success. Making the “right” decisions is often not enough. The team needs to do the legwork to get buy-in for every major change in the plan between goal-setting and shipping.

Are we giving people a forum for feedback and using that feedback to inform our decision making?

Sometimes expectations become misaligned because the team is not receiving some critical piece of feedback that would help them course correct. A simple example: you set a goal at the beginning of the project to deliver something for another team, and partway through the project, due to changes in the market, that team discovers it needs something slightly different. Changes in project requirements are not inherently bad; discovering them too late or not responding appropriately is the only failure. Often the change is much more nuanced and difficult to discover than the example above, which is why it’s important to establish clear forums for feedback and tight feedback loops, especially with your most important stakeholders.

Code Quality

Are we shipping code quickly without causing major customer-impacting issues?

All else being equal, faster execution is better. However, there’s often a trade-off between execution speed and execution quality. Keeping an eye on the number of customer-impacting issues and bugs is one of many ways to assess execution quality. If the team is shipping quickly but introducing lots of bugs, it’s a sign that you should work with them on testing or code design. If the team is moving slowly, shipping no bugs, or seeking code perfection, push them to make technical decisions more expeditiously by being a bit more lenient in design reviews or by seeking help from the tech lead or domain experts within the eng team.
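A crude speed-vs-quality signal can be computed if you count shipped changes and customer-impacting bugs per sprint. This is a hypothetical sketch - the function and the sprint numbers are invented for illustration, and real assessments should weigh bug severity too:

```python
# Hypothetical sketch: a rough speed-vs-quality signal, assuming you can
# count shipped changes and customer-impacting bugs for each sprint.

def bugs_per_change(shipped_changes, customer_bugs):
    """Customer-impacting bugs introduced per shipped change."""
    if shipped_changes == 0:
        return 0.0
    return customer_bugs / shipped_changes

# Two example sprints: fast but buggy vs. slower but clean.
fast_sprint = bugs_per_change(shipped_changes=40, customer_bugs=8)  # 0.2
slow_sprint = bugs_per_change(shipped_changes=10, customer_bugs=0)  # 0.0
print(fast_sprint, slow_sprint)
```

A rising ratio suggests investing in testing and code design; a ratio pinned at zero alongside low throughput may suggest the team is over-polishing.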

Do we have clear ways to continually assess code quality?

It’s important to form your own opinion about code quality and to organize conversations among members of the team. For example, reading through design docs and commits to ensure that people are considering the right risks and anticipating future use cases of their code is table stakes. As a leader, you should be producing other leaders, so in general organizing conversations among the team is preferred. For example, write and review design docs as a group, have the team give lightning talks about a hard problem they solved recently, or ask people to proactively identify tech debt and devise mitigation strategies.

Is technical debt accumulating? Are we spending excess time cleaning up after earlier shortcuts?

Introducing technical debt into the codebase is not inherently bad as long as you’re intentional about it. That means building an understanding of the risks of your technical decisions and formulating a plan for mitigating or eliminating those risks. Your goal should almost always be to leave the codebase as good as or better than you found it. The rare exception is projects with tight time constraints and a huge expected impact on the business - in those cases you may be willing to take on some technical debt indefinitely. When unplanned technical debt begins to accumulate, it’s a sign that you should be more hands-on with planning and perhaps set aside more time during the sprint for cleanup and/or discovery.

Are we using the right patterns and technology for the problem we’re solving?

It’s often tempting to use a new project as an opportunity to try out new frameworks or work with technologies you’re unfamiliar with. For projects where maintainability, performance, stability, or efficiency is a concern (which covers pretty much everything outside of rapid prototyping), it’s important to consider the long term implications of your technology decisions. As one example, choosing to redesign a feature and migrate to a new framework is often risky since, toward the end of the project, when you’re trying to assess the impact on business metrics you’ll have trouble separating the impact of a different technology from the impact of a different design.

Are we writing an appropriate number of tests and documentation?

Different types of projects call for different approaches to testing and documentation. At one end of the spectrum, there are projects with murky requirements or questionable impact for the business (e.g. will a new blue “Foo” increase revenue?). The primary goal of these projects is learning. Investing a lot of time in long term maintainability is often a waste since you don’t yet know whether you’re shipping something of value. At the other end of the spectrum, there are projects with clear requirements and clear impact on the business (e.g. infrastructure that stores and retrieves private user data securely). The primary goal of these projects is delivering value, and investing in long term maintainability is paramount.

In other words, projects with well-defined, valuable, reproducible outcomes require you to think more about long term maintainability and stability, while projects that emphasize learning generally require less of that thinking for the first iteration.