Death To Automated Virtual Boards
Last Updated October 06, 2020
What if I told you the automation your team has in your virtual board isn't really making you more efficient? On the contrary, I'll argue it's actually slowing you down.
What are virtual boards?
First things first. Let's define our terms. When I talk about virtual boards I'm talking about the JIRAs and Azure DevOps (formerly TFS) of the world. JIRA is extremely popular among developers. And honestly, Atlassian has some great products. I'm not knocking JIRA, Azure DevOps, etc. themselves. I am, however, knocking the overuse of the automation they provide.
The problem
Automated processes appear to make us move faster because we're now moving through what was a manual process at the speed of automation. Effectively, we're skipping steps we would've normally taken ourselves. On the surface it looks like you've saved time by cutting out manual steps. In reality, though, you've actually lost much more.
Teams move at the speed of trust. And we know this. You've likely experienced it. I've seen managers put restraints on teams and individuals in the form of timesheets or needless "update" meetings. Teams even do this to themselves. We add mandatory approvals for pull requests, host "standup" meetings that last half an hour and behave more like "status" meetings, and program virtual boards to prevent deviation from the defined "process".
All these may help keep a team "in line", but they certainly slow it down. In this article I'm going to home in on that last point about programming virtual boards. It's a travesty and it's time for it to die. Here's why.
Automated tasks create division between development and QA
A very popular thing to do with virtual boards is to automate the handoff between development completing a task and QA picking it up to test. It seems like a harmless automation that serves to speed up our development time. But it has unintended consequences.
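If you've never set one of these rules up, here's a rough sketch of what the logic amounts to. The statuses, event shape, and helpers below are hypothetical stand-ins, not any particular tool's API; real boards express the same thing through their automation rule builders or webhooks.

```python
# Hypothetical board automation: when a developer marks a ticket done,
# move it to QA and assign a tester - no conversation required.
# None of these names come from a real tool's API.

def on_status_change(ticket: dict, new_status: str) -> None:
    if new_status == "Dev Complete":
        ticket["status"] = "QA Waiting"            # slide the card across the board
        ticket["assignee"] = next_available_qa()   # pick whichever tester is "free"
        notify(ticket["assignee"], f"{ticket['key']} is ready to test")

def next_available_qa() -> str:
    # Stand-in for a round-robin over the QA group.
    return "qa-team-member"

def notify(user: str, message: str) -> None:
    print(f"@{user}: {message}")
```

Notice what's missing: at no point do the developer and the tester have to talk.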
The problem this automation creates is a virtual wall between development and QA. This problem is so common you'll hear it referred to as "throwing it [a task] over the wall". Anyone else heard that before? 🙋‍♂️ It takes us from a teamwork mentality to one resembling an assembly line, where developers complete a task and hand it off to QA to test, hopefully never to be seen again.
To be fair, this particular automation seems to make sense. After all, it removes human error. Now the QA can't blame the developer for not telling them about a completed task and the developer can't complain about the QA not recognizing a task was ready to test in time. Sounds like a win-win, right? Where's the collaboration, though? It's gone, replaced with an automated script for the sake of convenience. In an attempt to remove the messiness of human interaction we actually just create a bigger mess - isolation.
Now the QAs and developers hardly speak outside of their ceremonious meetings. And what happens when we don't communicate? We don't get a chance to know each other. We miss out on a shared experience. And most importantly, we lose the opportunity to develop trust in one another. And trust is the key to speed, not automation.
Automated boards give way to unnecessary metrics
I've worked at several places over my career, and each place has tracked my daily work in a different way. I've had requirements for my entire day to be logged. I've had to put arbitrary hours on developer tasks. I've even had to log my time in two different places because ...reasons. Now, I'm not saying that time tracking is useless. Certainly, if you're billing by the hour (why?) then you're gonna want to track that. What I am saying, though, is that just because we can doesn't mean we should.
On the other hand, I've also worked in environments where I was not tracked. And, let me tell you, those were the best ones. The teams were able to avoid a lot of unnecessary work and simply focus on the product. Not only was it great for our velocity, but it was great for our morale knowing our leaders trusted us to do our jobs well. Managers simply looked at team burndown charts and took bi-weekly updates from Scrum Masters - that's it - that's all they used to determine the health of a team. Productivity was an afterthought - more on that in a bit.
I've also seen some wild metrics, like the time a task spends in a particular status - QA Waiting, QA Testing, Deploy To Test, and so on and so forth. Some companies track the number of bugs created during a sprint - I know, I've worked at one. The list of metrics goes on and on. But at the end of the day these metrics are a red herring. The idea to track metrics is often well-intentioned. Most managers look at metrics to find areas to improve, not reprimand. Unfortunately, though, the metrics are a trap. Metrics like those mentioned help us measure productivity, sure, but what if productivity isn't what we're after? What if productivity is just another smelly fish dragged along a false trail to throw off our scent?
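For the curious, computing one of these status metrics isn't hard, which is part of why they're so seductive. Here's a minimal sketch, assuming a made-up status-change history for a single ticket:

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical status-change history for one ticket: (timestamp, new status).
history = [
    (datetime(2020, 9, 1, 9, 0),  "In Progress"),
    (datetime(2020, 9, 2, 14, 0), "QA Waiting"),
    (datetime(2020, 9, 3, 10, 0), "QA Testing"),
    (datetime(2020, 9, 3, 16, 0), "Done"),
]

def time_in_status(events):
    """Sum the hours a ticket spent in each status between transitions."""
    totals = defaultdict(float)
    for (start, status), (end, _next_status) in zip(events, events[1:]):
        totals[status] += (end - start).total_seconds() / 3600
    return dict(totals)

print(time_in_status(history))
# {'In Progress': 29.0, 'QA Waiting': 20.0, 'QA Testing': 6.0}
```

A few lines of code and suddenly there's a chart to put in a slide. Whether that chart tells you anything useful is the real question.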
Someone took the time to develop the automated processes and then generate the metrics. We ought to get a return on that time investment, right? And on paper, or screen, it might appear we are. Fewer bugs, less time in statuses, more churn. Congratulations. You now have a successful assembly line. But let's be honest, how often does that actually happen? And even if it did work, is that what we really want?
Productivity is a false god. Product is the one, true king. And product is not measured with metrics. A successful product is measured by revenue and sustained by customer feedback. If it's possible to have high productivity without a successful product, then we shouldn't be measuring productivity and hoping that fine-tuning it will produce a better product. The two do not correlate.
Improvement
Okay, so maybe you're starting to come around. Maybe you're realizing some of these metrics we typically capture are actually harmful to a successful product. But you're still left wondering, how do we know we're improving? And that's a fair question. We've become so metric-minded that without them we're not even sure how to know we're making progress. There are several ways in which we can measure improvement, and, shocker, they don't involve metrics.
The main, central, cornerstone, absolute bedrock of measuring improvement is customer feedback and satisfaction. If your product's customer satisfaction is increasing then you're improving, no question about it. Furthermore, you're improving if your product's customer retention is rising. Happy customers stay, unhappy customers leave. We know this. It's so simple. We apply this principle as consumers every day. Why should our software be any different?
Okay, customers matter. That makes sense. But how do we capture that information? Aren't those metrics? Didn't you just say metrics are bad? I'm confused.
All metrics aren't created equal
Up to this point I've really been driving home the fact that we collect meaningless metrics as development teams, often to our detriment. But that doesn't mean all metrics are bad. In fact, metrics are great! They're only great when they're used properly, though. Proper metrics measure the product, not productivity. That means instead of measuring ourselves we should be measuring our customers - how they use the product, what they think about it, etc. Metrics like that give direct insight into how to improve the product, such as features to add, features to remove, and features to improve. That feedback, derived directly from customers, is then used to drive your stories, tasks, tickets, whatever your team calls them. The bottom line is customer feedback drives product improvement.
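As a concrete (and entirely made-up) example, something as simple as tallying feedback by product area can tell you what the next story should be:

```python
from collections import Counter

# Hypothetical customer feedback, each entry tagged with the product area it concerns.
feedback = [
    {"area": "export", "sentiment": "negative", "note": "CSV export times out"},
    {"area": "search", "sentiment": "negative", "note": "can't filter results by date"},
    {"area": "export", "sentiment": "negative", "note": "export drops custom columns"},
    {"area": "onboarding", "sentiment": "positive", "note": "setup was painless"},
]

# Rank product areas by how often customers complain about them.
# The top of this list is a candidate for the next story - not a judgment on the team.
complaints = Counter(entry["area"] for entry in feedback if entry["sentiment"] == "negative")
for area, count in complaints.most_common():
    print(area, count)
# export 2
# search 1
```

The metric points at the product, and the product points at the backlog. Nobody's timesheet is involved.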
But what about us?!
Okay, you've bought in. Put the customer first. Derive product improvements from their feedback. Got it. But how do we improve? How do we make sure our team is being efficient? How do we measure velocity? And I get that. You want to move faster. We all do. But I don't want that getting in the way of what really makes a great product. Because a fast-moving team is not it. And, unfortunately, that's how we've behaved.
Fear not, though. For there is a solution. There is a way to make sure your gears are greased and all cylinders are firing. But before we go there, I want to really drive home the point that none of what I'm about to say matters if you're not delivering a good product. Remember, product is king. Productivity is a false god. Do not think that what I'm about to say in any way gives you a pass on focusing on customer feedback. This is something you do after you have a good customer feedback / product implementation loop. Repeat after me, after.
Okay, was I unclear? I hope not. Now let's talk about improving our teams. Improving your team is both incredibly simple and incredibly difficult. So, we'll start with the "incredibly simple" piece. Measure team improvement by the number of tasks you complete in an iteration. Woah! What?! It's that simple? All I have to do is count?! I can count! I've been counting a long time! But of course, it doesn't end there. Well, actually it does. That's the goal. That's the end. Count your tasks/stories/tickets and try to do more next time. It's how you get there that's so difficult. Why is that? Because not all tasks are created equal. But that's the goal. Strive to get your tasks broken down to the point where they're all a similar size. A good rule of thumb: if the task cannot be clearly articulated on a sticky note, then the task is too big. What was that? I think I heard the product owners shudder from here.
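In code, the "elementary math" really is elementary. A minimal sketch, with made-up iteration counts:

```python
# Completed tasks per iteration, assuming the team has sliced tasks to a similar size.
completed_per_iteration = [14, 15, 13, 17, 18]  # hypothetical counts

def improving(counts, window=3):
    """Compare the average of the most recent iterations to the ones before them."""
    recent = counts[-window:]
    earlier = counts[:-window]
    return sum(recent) / len(recent) > sum(earlier) / len(earlier)

print(improving(completed_per_iteration))  # True - 16.0 recently vs 14.5 before
```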
Getting to that place takes time and discipline. But, if your team can achieve that level of zen, you can measure improvement with elementary math. There is another, less efficient, more common, and still worthwhile means of measuring improvement on a team. And that's with story points. I won't go into what story points are. That's a topic for another day. But tracking story points is a great way to know if your team is getting faster. Why do I consider story points merely a viable, less-than-optimal alternative? Well, have you ever sat in a room arguing about whether a story should be pointed as a 5 or a 3? I have. It's awful. And it's almost always pointless (get it?). Rarely have such conversations revealed some unknown piece of work that made a difference. Story pointing can also act as a gateway drug to the other misleading and downright useless metrics I ranted about at the beginning of this article. When story pointing, proceed with caution. Often, though, and this is especially true for new teams, it is a necessary, or at least viable, step toward achieving the zen mode we discussed earlier. So, because of the ambiguity, the time suck, and the potentially poor habit-forming nature that comes along with story pointing, I consider it a suboptimal means of measuring team improvement.
Wrapping up
So, there you have it. The problem with automated virtual team boards and how to fix them. I hope you learned something. But let's not stop here! I want to learn, too! If you have anything to add, a story to share (I love stories), or just think I'm full of it, reach out on Twitter. Let's keep the conversation going. Because, let's be honest, that's how we learn best, isn't it?