In “How Engineering can Contribute to Sales,” I provide an OKRs coaching conversation to illustrate how an engineering team created OKRs to support another team. I now explore how a marketing team can create OKRs when outcomes are dependent on IT.
Most teams can easily draft objectives. However, many of us struggle when drafting key results. Key results often look more like a to-do list than an actual result.
To develop key results for an objective, begin with the basic OKRs question:
Basic OKRs Question
At the end of the quarter (or period), how will we know we’ve achieved the Objective?
This basic question is one of the most important that OKRs coaches can ask. It’s natural to first respond to it by creating a list of tasks that you will want to undertake in order to achieve an objective. But, effective OKRs coaches ask questions that help translate tasks into measurable results.
Defining scoring criteria (sometimes called grading criteria) while drafting key results is an effective way to translate a task into a measurable key result. Scoring lets you calibrate expectations by creating a series of targets that spell out what exceptional, good, and mediocre performance mean. Most organizations score key results using three levels.
Let’s look at how scoring can improve an OKR. A Marketing Manager’s objective is: “Delight customers with relevant offers and communications.” Initially, the manager suggested “Add 3 fields to the marketing database” as the key result. This is clearly a task. The coaching excerpt below shows how it was converted to a key result with scoring.
Detailed OKR Coaching Excerpt
Marketing Manager: Our primary objective is to “Delight customers with relevant offers and communications.”
Ben/OKRs Coach: At the end of the quarter, how will we know you’ve delighted customers with relevant offers and communications?
Manager: We need to add 3 fields to the marketing database. Well, actually, IT needs to add these fields.
Ben (thinking): This is a task. As a coach I have to question the intended outcome of the task.
Ben: OK, assuming we had 3 fields in the marketing database, how would that help us achieve the objective?
Manager: We could then send better emails.
Ben: How do we measure the quality of our emails?
Manager: We run pilot campaigns, say 5 campaigns, using one of the new fields in the marketing database and report the results using A/B testing. I've got it: our key result is to run 5 campaigns.
Ben: We’re getting closer, but can you tell me the intended outcome of the pilot?
Manager: We report the results of the pilot across a number of metrics like open rate, click-through rate, and time spent on site.
Ben: What would be the most amazing possible result of the pilot campaign?
Manager: It would be amazing if we could increase the revenue per email sent, but that’s not fully in our control.
Ben: OK, amazing things are often not completely in our control. What would be the highest possible increase you can imagine?
Manager: The best improvements we've seen in the past are in the 5-10% range for a given campaign, so I'd say a 10% increase would be amazing, but I don't think every campaign could reach that. Then again, we don't know what the outcome will be since we've never run emails based on the missing fields. This is totally new, so we don't have a baseline to compare against.
Ben: OK, what if we just write the key result as “10% increase in revenue per email sent this quarter versus last quarter”? That will represent a 1.0 score.
Manager: We all agree that would be amazing, but it takes a month to really run a pilot so it’s quite possible that we’re not going to report any overall increase in revenue per email sent this coming quarter. I don’t think we can just post a key result like that since it’s pretty likely we’d just score a zero. Also, we depend on IT to get the fields added to the database.
Conversation with IT to address dependency
Context: Separately, we spoke with a database administrator in IT. The IT team added one of the fields to the database. Marketing agreed to start using it right away. IT agreed to add the second field to the database on a fast track. IT did not commit to adding a 3rd field. Now, back to my conversation with the manager.
Ben: OK, so knowing 2 of the 3 fields will be available this quarter, what level of progress do we know we can achieve this coming quarter with business-as-usual effort?
Manager: I’m sure we can run 5 pilot campaigns based on at least one new field in the marketing database. This is one of our priority projects for the quarter.
Ben: Okay, let’s score that as our .3 level of achievement. Something we’re confident we can achieve. What would a target level of achievement look like? This should be difficult, but realistic, like a 50-50 confidence level.
Manager: I’d like to see 1 or 2 campaigns show an impact of 5% increase in revenue per email sent.
Ben: So that will be our .7 score.
Ben: The key result is now written as a clearly defined metric. The 1.0 level is definitely a stretch, perhaps out of reach this quarter, but the pre-defined scoring levels align the team on what measurable progress looks like along the way. Here's what the final key result looks like:
- 1.0 = 10% increase in revenue per email sent this quarter versus last quarter
- .7 = 2 pilot email campaigns show an impact of 5% increase in revenue per email sent over a 4-week period with 5,000 or more emails in each pilot campaign
- .3 = Report results of 5 pilot email campaigns based on fields that IT adds to marketing database
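To make the scoring mechanic concrete, here is a minimal sketch in Python of how a three-level key result like the one above could be graded at quarter's end. The function name, field names, and thresholds are illustrative assumptions for this example, not part of any standard OKRs tool:

```python
# Sketch of three-level key-result scoring (illustrative names and thresholds).
def score_key_result(levels, outcome):
    """Return the highest score whose criterion the outcome satisfies.

    `levels` maps a score (.3, .7, 1.0) to a predicate over the outcome;
    levels are checked from most ambitious to least ambitious.
    """
    for score in sorted(levels, reverse=True):
        if levels[score](outcome):
            return score
    return 0.0

# The marketing key result, encoded as three graded targets.
levels = {
    0.3: lambda o: o["pilots_reported"] >= 5,        # confident, business-as-usual
    0.7: lambda o: o["pilots_with_5pct_lift"] >= 2,  # difficult but realistic
    1.0: lambda o: o["overall_lift_pct"] >= 10,      # amazing / stretch
}

# A quarter where 5 pilots ran, 2 showed a 5% lift, but overall lift fell short.
outcome = {"pilots_reported": 5, "pilots_with_5pct_lift": 2, "overall_lift_pct": 4}
print(score_key_result(levels, outcome))  # 0.7
```

The point of the sketch is that each level is a concrete, checkable criterion agreed on up front, so the team can report a .3 or .7 honestly even when the 1.0 stretch isn't reached.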
The manager feels like this measurable key result focuses the team on the ultimate outcome of driving revenue per email sent. The pre-defined scoring levels enable the key result to reflect high-priority project work as well as tasks. We can now make measurable progress even if we don’t achieve the stretch level of achievement by the end of the quarter.
Get everyone on the same page. Take the time to clearly define the destination and how we will know we're making progress. Defining this destination is a critical step in OKRs coaching. Balancing the tension between stretch key results and more realistic levels of progress aligns the team on what progress looks like. Scoring key results up front is an effective way for OKRs coaches to convert tasks into measurable results.