Prioritising the contents of your feature backlog is a vital part of determining your product roadmap. Tom and infrequent co-host Russ talk us through how to go about backlog weighting in order to work out what you should be building next. They cover:
- Weighting by perceived value – Chatting to your users, stakeholders and frontline customer service team about what people are lacking, longing for or frustrated by.
- Weighting by analytics – Using Google Analytics etc to provide the least subjective picture of what’s needed.
- Weighting by cost – Working out the effort needed from each team to get a feature completed versus how badly it's needed.
- The balance between prioritising fixing bugs and building new things.
And more areas along the way.
We also found out that sorting tasks alphabetically isn’t a sensible framework! Who’d have guessed…
Product Innovation Framework
Part methodology, part manifesto, all value. Download it now! 🙌
Grab our backlog weighting template here (we’ve added some example features and scored them to get you started) 👉 Google Sheets / Excel
Transcript
Tom:
Hey, welcome to the Product Leadership podcast from Lighthouse London, where we talk about how to validate, launch and maintain successful digital products with product owners, innovators, digital experts and founders. Lighthouse London are a digital design and product development team who've spent the last 10 years helping people conceive, build and steer digital products. You can find out more about us and more podcast episodes at wearelighthouse.com. Enjoy the show.
Hey everyone. Cheers for tuning in. We’re back again here talking about something super exciting today. I’m joined by an infrequent co-host, Russ.
Russ:
Hello.
Tom:
Hi there. Remind people of what you do.
Russ:
I am the design director at Lighthouse.
Tom:
Nice, nice and fancy stuff. It’s been a while since you’ve been on air, I think.
Russ:
I don’t know. I got banned for low listenership. How my [crosstalk 00:01:03] were working against me.
Tom:
That is absolutely harsh. So today the topic of conversation is roadmap management, is that right?
Russ:
Yeah. I think we want to talk about the processes you go through to prioritise what’s in your backlog, and obviously your prioritised backlog makes up what your roadmap is going to be.
Tom:
So this comes from projects we’ve worked on in the past. Spoiler alert, there is a download for you to grab at the end of this once you’ve obviously listened to what we’re saying and taken on board all of the lovely knowledge. But yeah, you can go on our website, wearelighthouse.com/backlog, to grab all the goodies you need from this at a later date. So give us a bit of background, Russ, who’s this for and why would you use it?
Russ:
Mm-hmm (affirmative). Yeah, so I think any existing product or even someone who’s looking at canvassing a new product, you’re going to have a backlog and your backlog basically just contains all of the features that you don’t have yet, but you want to build at some point. And I think any kind of healthy business idea or product idea, you probably have loads and loads of things that you could do. And that’s a good thing. And I think that kind of grows over time as well. But with that comes the issue that you only have finite resource and time and you have to work out what you’re going to be building next.
Tom:
But that’s easy because I sort alphabetically and work through one by one. Why do I need this?
Russ:
Yeah, I mean, alphabetically works for some companies strangely, but for the most part you’ll want a slightly different framework than that to work out where to focus your energy. I think it’s probably worth saying that how you determine that is quite specific to you as a business, as an individual, and what your approach is. We’ve come up with a few for different clients over the years. And of course there are other industry and template ones you can have a look at as well.
Tom:
So what we want to get to is a weighted backlog that we can all work off. That could include UX tasks, UI tasks, the sort of stuff that we generally work on. But obviously there’s going to be a load of development stuff in there as well. It’s essentially a list that we can work through with whatever team we’re working with to push things forward and make progress.
Russ:
Yeah. And however you approach it, you want to have some way of essentially determining a score based on factors we’ll get into and that will tell you, great, this thing has 10 out of 10 so that’s the top of the backlog and it’s going to be the next thing that we work on.
Tom:
Cool. So give us some obvious ways in which you’d weight a backlog.
Russ:
Yeah. So obviously, this kind of reflects a lot of what we’ve said in a podcast before, but we want to be designing products for people, for users, or for an existing product, for your customers. And so one of the first things we want to be doing is speaking to those people and finding out what’s important to them.
So a lot of this comes up in and around our discovery phases. When we talk about speaking to people, we don’t just mean your users; you may have a set of stakeholders whose priorities you’re trying to work out as well. That can be one of the balances you’re trying to strike. But yeah, generally it would be silly not to be talking to people who are going to use your product about what they’re lacking at the moment, what frustrates them, or what other products they’ve used where there’s a killer feature that’s drawing them away, that kind of thing.
Tom:
And so you’d probably group that under perceived value from the customer or the user.
Russ:
Yeah, definitely.
Tom:
So that’s probably, I mean, it’s not one where you’re making it up as you talk to them, right? Someone says, I really, really want that, then it gets a 10. There’s no kind of statistical method behind that. You’re just kind of making things up.
Russ:
Yeah. [crosstalk 00:04:43]
Tom:
Not making things up, you know what I mean.
Russ:
Exactly. Yeah. I mean, we’ve had some interesting ways of trying to determine that, as in what is the most popular or most mentioned. That’s actually one of the metrics we used in a previous project for an existing product. Basically we were just tallying up how many mentions a feature got in interviews with customers, and that would give us a percentage of people who mentioned it. So if almost everyone’s mentioning something, it’s getting a high score on mentions. So that’s a good one if you’re talking to customers about their needs.
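As a rough sketch of the mention-tallying Russ describes (the interview data and feature names below are invented, not from the project he mentions), the scoring can be as simple as this:

```python
from collections import Counter

# Hypothetical interview notes: each set lists the features one customer mentioned.
interviews = [
    {"bookmarks", "faster search"},
    {"faster search", "csv export"},
    {"faster search", "bookmarks"},
    {"dark mode"},
]

# Tally how many interviews mentioned each feature.
mentions = Counter(feature for interview in interviews for feature in interview)

# Express each tally as the percentage of interviewees who mentioned that feature.
for feature, count in mentions.most_common():
    print(f"{feature}: mentioned by {count / len(interviews):.0%} of interviewees")
```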
Tom:
What examples have you got of working with internal people to determine stuff like that as well?
Russ:
Yeah, so as you mentioned, if you have a team of stakeholders or a few stakeholders, you’re probably one of those people as well if this is a project that you’re running. And we need some way of determining, again, what their priorities are. What we find when it comes to discovery phases is that stakeholders often have quite different views of what the mission is for the product right now, what the biggest issues are. If someone’s very customer facing, they usually have a quite different perspective from the person who maybe is closer to the financial side of it. So you’ll get very different perspectives. But again, we deal with a similar kind of format: what do you think is the most lacking thing in the product right now? What are you hearing about the most? Just trying to get that perspective.
Even things like the stakeholders’ pains related to the product itself. You don’t usually think of your stakeholders as users, but actually, if you have a content management side to your product and you have a team that’s really struggling to do something in your product internally, that could also, and should, make its way into your backlog too. Because again, that may save you money.
It also makes people on the actual internal team pleased about it.
Tom:
Yeah. Certainly with the more complex tools, you have a lot of stuff to do. You’ve got a marketing site, you’ve got a product, you’ve got all sorts of sides of it that need attention. So trying to figure out what to do next is very tough.
Russ:
Actually the marketing site, that’s a really good point. It’s not something that’s often thought about, but if your business goal at the moment is just to increase signups, you don’t want to be focusing on the inner workings of the product. Your marketing site or social campaigns, however you’re getting the message out there, that should actually have an impact on what you’re building next.
Alice:
Hey, it’s Alice from Lighthouse here. Have you got a product that has users, revenue and traction, but it’s let down by its looks? Our product makeover breathes new life into digital products that are in need of some UX and UI expertise. Talk to us about how our user-focused approach can turn your product around and take it from good to great. Find out more at wearelighthouse.com/make-over.
Tom:
So give us some examples of other factors you might use that are a bit more analytical, that come from numbers we can get from different places.
Russ:
Yeah, so analytics is another great perspective. It’s the least subjective out of these. When you’re talking to people you have biases to deal with, and you have to design tests to make up for those biases. But analytics tend to give you a much clearer picture. So for example, if you’re hearing from your stakeholders that a feature is really slow and sluggish and people hate it, or it’s not used, you can get an objective, quantifiable answer on that by digging into the analytics of a platform. Sometimes that’s as easy as your very basic Google Analytics set-up, like page views, or if you have click events tracked; sometimes that’s a little bit more difficult to get.
If we think about something like a bookmarking feature in your product, where customers can bookmark certain things, you don’t really have an easy way of collecting that data in an analytics platform, but it may actually be hidden in your database: how many users actually have saved bookmarks, that kind of thing. So the analytics gives you a good perspective on whether something’s getting used a lot.
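To Russ’s point about the data hiding in your database rather than your analytics tool, a minimal sketch of that kind of query, assuming a hypothetical `users` table and a `bookmarks` table with a `user_id` column (your schema will differ), might look like this:

```python
import sqlite3

# Hypothetical schema: a users table, and a bookmarks table with one row per saved bookmark.
conn = sqlite3.connect("product.db")

total_users = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
users_with_bookmarks = conn.execute(
    "SELECT COUNT(DISTINCT user_id) FROM bookmarks"
).fetchone()[0]

# What share of the user base actually uses the bookmarking feature?
print(f"{users_with_bookmarks / total_users:.0%} of users have saved at least one bookmark")
```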
Tom:
And I suppose once you’ve got the base level data there, if there are questions you can work that into research as well and ask people how they find certain features, what problems they have with them.
Russ:
Absolutely. Yeah. And also determining which users are actually the people who really use a feature loads and talking to them about that. Because they have the most experience. So yeah. So that can give you a really good perspective on kind of where your product is right now. The other good thing about that is it’s kind of as an aside to your backlog, but having those metrics means that once you actually take on improving a feature, you can actually measure against how much value you’ve added afterwards, which is always nice.
Tom:
So one thing that’s often important is cost, resource, that kind of thing. That’s something you’d expect to factor in here.
Russ:
Yeah.
Tom:
How do you go about determining that?
Russ:
Yeah, so breaking up a feature into the kind of teams that are responsible for it. You may have some features in a backlog which just need one team to make a small change, something that’s development only, or design only. You might have a feature which needs expertise from each of those teams.
So what you’re trying to work out is the cost, how much investment is required from each of those. You’ll often find that something that touches on fewer of those teams is something you can get done a lot quicker. You’re trying to work out how much effort is coming from each team to get a feature completed.
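As a small illustration of that cost weighting (the teams and day estimates here are made up), you might total up rough effort estimates per team for each feature:

```python
# Hypothetical effort estimates, in days, from each team for a single feature.
effort_by_team = {"design": 3, "frontend": 5, "backend": 8}

total_effort = sum(effort_by_team.values())
teams_involved = len(effort_by_team)

# A feature touching fewer teams with less total effort scores better on cost.
print(f"{teams_involved} teams involved, roughly {total_effort} days of effort in total")
```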
Tom:
And that doesn’t mean to say that high effort means it won’t be done, but just making sure that you’re aware of all this stuff so you can evaluate what you choose to work on next.
Russ:
Definitely. Yeah. I mean higher effort is just a higher risk, right? You’re putting more, you’re backing a feature more than others. So you kind of want to make sure that that investment’s worth it.
Tom:
And so there’s one thing we’ve done sometimes around testing, or kind of seeing how you can evaluate this stuff without actually doing anything.
Russ:
Mm-hmm (affirmative).
Tom:
How would that work?
Russ:
Yeah. So this can be a little bit cheeky, but there’s a few different ways of doing it. So we were just mentioning something which is a big investment for your team, a lot of development or design resource; you want to be quite sure that that is the right thing to build. And so one of the ways you can do that is to kind of fake that it’s already there. You can do that in a few ways. It can boil down to something as simple as a button that actually doesn’t do anything yet. Which is kind of why I say it’s a bit cheeky because-
Tom:
Sneaky.
Russ:
It is. It is sneaky. You want to be careful about how you manage the experience of when someone actually clicks on it. But-
Tom:
Just set up a 404 page.
Russ:
Yeah, 404, feature doesn’t exist yet.
Tom:
Browser closes.
Russ:
But yeah, so you want to manage that in some way, but you’re going to get a really good view of how much interest people actually have in that feature before you’ve built it. And if someone clicks on a download button that doesn’t work yet, you can tell the user that this is coming soon, so you’re still being honest about that thing as well. And what you might determine is that actually so few people are clicking on your little experiment that you’ve just saved yourself months of work by not building something that people aren’t interested in.
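As a made-up illustration of reading the results of a fake-door experiment like that (the numbers and the 2% threshold are arbitrary assumptions, not figures from the episode):

```python
# Hypothetical analytics numbers: how many people saw the page containing the
# fake "coming soon" button, and how many clicked it.
page_views = 4800
fake_button_clicks = 36

interest = fake_button_clicks / page_views
print(f"Click-through on the fake-door button: {interest:.1%}")

# Agree a threshold up front so the result feeds back into the backlog weighting.
if interest < 0.02:
    print("Low interest: probably not worth the build cost yet.")
else:
    print("Decent interest: worth weighting this feature up the backlog.")
```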
Tom:
And I guess that could work in a bunch of different ways. So it could allow you to save a portion of your time for these sorts of experiments, running a few where you can before getting to [dev 00:11:50] as a kind of side project, or it could also be worked into what you see as your development or design workflow.
Russ:
Yeah, I think another example we’ve done for a project before is we had quite a few key features for a product and we weren’t entirely sure how to balance which one was the lead feature to sell. So what we did is we just A/B tested some different banners with the signup button. One banner has feature A as the main focus with a signup button, the next one talks about feature B, and we just measured what kind of responses they got based on the feature we mentioned. And again, that shows you the interest that people have. We haven’t actually had to build those features yet, but we’re collecting email addresses so that when we do, we know which way to kind of pivot.

Also, in a product development process, it doesn’t matter how much you’re paying for your development resource, and to be honest design too, sometimes bugs are going to creep up in your product.
Tom:
Uh-oh.
Russ:
Uh-oh indeed. Yeah. And that’s something you can’t avoid, but it also gives you a new issue to deal with, which is do I focus on fixing existing things or do I just focus on building new stuff? And again, I think this is very much down to the kind of stuff that you’re finding. If this is a minor typo that you found somewhere in your terms and conditions page or something, maybe it’s low down on the backlog. But if customers are reporting it, the fact that they’ve not only found it but contacted you to tell you they found it probably means it’s quite important.
Tom:
Yeah.
Russ:
So yeah, in some product development cycles, there’ll be set time, you know, in sprints or something where you just focus on bugs. Yeah. There’s a few different ways of handling that.
Tom:
So you might have a kind of separate weighting for bugs specifically then, based on whether a user found it, these kinds of things.
Russ:
Yeah, you’ll have heard phrases before like critical bug, or [in sprint 00:13:46] critical bug, that kind of thing, which basically means this thing is so important that we should not be working on new things until it’s fixed.
Tom:
I suppose having a way to figure out if it’s really critical is quite important. Otherwise everything is critical and high priority and all that.
Russ:
Yeah.
Tom:
It doesn’t really help.
Russ:
Then you’re just a maintenance team forever.
Tom:
Yeah. So I kind of want to go into a made-up real world example, but maybe, Russ, you could just talk through how you’d set this kind of thing up, how it would look and how you’d use the tool to then figure stuff out.
Russ:
So yeah, for this example we’ll use one of the more industry-standard set-ups here, but again, choose your own. So this one is based around desirability, feasibility and viability. When we talk about desirability, we mean what the customers are looking for. And as we’ve mentioned, the best way for you to do that is to speak to your users and find out what their biggest asks are at the moment. And that will give you a score based on how popular something is coming out of those interviews.
Feasibility is next, and that is around how easy or difficult it is for you to produce this. Is this fitting in with the capabilities that your team currently has? Do we need to hire in specialists to do something like this? If we’re talking about a complex search algorithm or something, that’s actually quite difficult to pull off, and that makes it less feasible for you. So measuring how feasible something is would be your second metric there. And the last one is viability. Is this feature sustainable? Is it going to make us money, or is it actually quite an expensive thing for us to run ongoing, which may have an impact on pricing models, that kind of thing? So that will be our viability.
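To make the three dimensions concrete, here’s a minimal sketch of scoring and sorting a backlog on desirability, feasibility and viability. The features and scores are invented, and the straight sum is just one way to combine them; you could weight the dimensions however suits your business, or do the same thing in the spreadsheet discussed below.

```python
# Made-up features scored 1-10 on the three dimensions described above.
backlog = [
    {"feature": "Bookmarking",    "desirability": 8, "feasibility": 7, "viability": 6},
    {"feature": "Complex search", "desirability": 9, "feasibility": 3, "viability": 7},
    {"feature": "CSV export",     "desirability": 5, "feasibility": 9, "viability": 5},
]

# Combine the three scores into a total; a straight sum keeps things simple.
for item in backlog:
    item["total"] = item["desirability"] + item["feasibility"] + item["viability"]

# Highest total goes to the top of the backlog.
for item in sorted(backlog, key=lambda i: i["total"], reverse=True):
    print(f"{item['feature']}: {item['total']}")
```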
Tom:
Nice. And so where do you put all this information?
Russ:
Yeah, so we’d recommend putting together a spreadsheet for this, quite a simple thing to do.
Tom:
Very simple.
Russ:
Yeah, a set of columns. Don’t need to describe what a spreadsheet looks like. So a spreadsheet, right? It’s like A through Zed, but you can go further than that. So we’ve actually put together just a quick example spreadsheet of this with a few features in it, prioritised in those three dimensions, so you can head over to our website and have a look at that as well.
Tom:
Wearelighthouse.com/backlog. But you’d have each feature in a row and you’re going to score them, like one to five, one to 10, it doesn’t really matter, right? You can do whichever you feel fits.
Russ:
Yeah, I like to go one to 1000, just so I feel like I’m really certain on the score of each one. But I’d recommend maybe one to ten is fine.
Tom:
So it’s totally down to you. No real rules. Obviously some people are really breaking them here. And then it’s just there in front of you. You can use colour-coding, you can use whatever you want, really. It’s quite a simple task, and you can obviously then sort colours by certain amounts, all very simple stuff. But it’s just a bit better than having a big long list, which is kind of sitting in front of you, worrying you. And you can try and somehow evaluate these numbers, see which has the most greens, if that’s what you’re after, and then have a discussion around what to do next. Cool. So remember to head on over to the website to download that. Hopefully Russ will be back soon for another pod.
Russ:
If I’m allowed. Yeah, if we get the listenership.
Tom:
The stats.
Russ:
Otherwise I guess I’ll be out for a few months.
Tom:
We’ll keep a keen eye on downloads of the spreadsheet and listens to the podcast on this one. [crosstalk 00:17:04].
Russ:
Like and subscribe guys. Like and subscribe.
Tom:
Nice one. Until next time. See ya.
Russ:
Cheers.
Tom:
Thanks for listening. If you want more product leadership content, then head over to the Lighthouse site, wearelighthouse.com, for more podcasts and blogs. To find out more about our product leadership framework, check wearelighthouse.com/plf. Find us on Twitter using @wearelighthouse. And if you’ve enjoyed the show, then we’d love a rating in iTunes to help spread the word. Don’t forget to subscribe wherever you get your podcasts to see the archive and get any future shows. Until next time, we’ll see you then.