2021 · BT

Delivering value to our customers more efficiently at BT

Process Design, Product Design, User Research, Agile Methodologies, Delivery Management

Here at BT and EE, it can sometimes take longer than expected to ship value to our customers, and whilst we continue to embrace a more agile mindset across our organisation, it's hard to let go of old habits and more traditional ways of doing things.

Commercially, it's impractical to spend large amounts of time and effort on big bets that might or might not work. Instead, we need to accept that customer behaviour and demand are transient, so we need to move fast and learn as we iterate (nothing new to see here).

To help us move fast and meet that customer demand, we lean on several processes and frameworks (the build-measure-learn loop, five whys, problem framing canvas, etc.), but like any other team, we are still learning how best to use these tools to help us ship great products.

For a handful of squads in my tribe, regularly shipping any form of value (quick wins or big bets) to our customers is a long, complicated and sometimes painful process that can take months.

Clearly, this isn't a sustainable way to operate, and I wanted to help fix these problems as quickly as possible.

What problems were we trying to solve?

To help me better understand the problems we were dealing with, I conducted several squad interviews to learn first-hand what stopped them from shipping value to our customers more efficiently.

Here are the biggest issues we discussed:

  1. We lean on big bet, multi-featured MVPs as our go-to method of releasing value to our customers. Whilst this is necessary in rare circumstances, it's often far too risky to spend large amounts of time and effort on something that we can never be sure will perform as expected.
  2. We've unintentionally created a culture where we deem it essential to test everything before it goes live. Whilst this is somewhat commendable, it isn't always necessary; not only does it slow us down, it also undermines our aspiration to 'build, measure, learn' in a fast-paced environment.
  3. Limited user research capacity means our squads need to book research in advance. Whilst this helps us manage the dependency in the short term, it can delay delivery whilst we wait for availability that fits the squad's timelines.
  4. Starting from scratch whenever we tackle a new problem overlooks the research and insights we already have that might help us move faster and more efficiently.
  5. Access to, and training in, quantitative data tools like Adobe Analytics and Decibel is not readily available to our squads. To avoid starting from scratch when solving new problems, we need to learn how to leverage this data more effectively.
  6. Scheduled releases (once a month) aren't flexible or frequent enough for our squads to embrace a true 'build, measure, learn' way of working. Instead, our squads need to be empowered to push code to production when they need to, not limited by existing governance and traditions.
  7. Sometimes, we forget the basics. When we ship something, we don't always define the success metrics that would tell us whether it's performing as expected.

Time to hit the reset button

The issues I uncovered worried me.

Some of the issues felt like fairly simple skills gaps that we could fill with the right level of training and support. Others felt more institutionalised and habitual, which would be a lot harder to break down and adapt to a new way of working.

I desperately wanted to understand why some of these issues had become common practice, but there wasn't a very clear answer. When I asked the squads to help me better understand these problems, deadlines and technical limitations were suggested as the main reasons why our squads currently work the way they do.

My hunch was that whilst these reasons were a factor in holding our squads back, they weren't the full picture. Varying levels of experience, self-organisation and misaligned expectations for collaboration in each squad also contributed to some of the issues we had identified, and to fix this we needed to hit the reset button and go back to basics.

Piloting a more efficient way of working

Our squads needed a template and much better guidance on how to design and build great products, so I decided to launch a six-week (3 x 2-week sprints) pilot for one squad in our tribe that would help us do just that.

Format

Without wanting to reinvent the wheel, I wanted to return to some of the more foundational ways of working for agile squads where I'd personally seen success in designing and building great products.

I started by revisiting Jeff Patton's guide to dual-track development, where one part of the squad focuses on predictability and quality (the development track) and the other on fast learning and validation (design and discovery).

This dual-track approach is not to be confused with 'duel' track, where the two tracks in the squad are separate or competing in any way. In fact, the very opposite is true: whilst there are two tracks, everybody in the squad is involved in discovery, planning and design tasks where possible.

Each discipline lead (product, design and engineering) also suggested daily activities the squad could be undertaking, alongside helpful tips that might encourage them to try something different and deliver more efficiently.

We also encouraged each squad to share daily diary studies documenting what went well and what could have been better. This data would help us identify areas for improvement when we came to scale this way of working across the rest of our tribe and beyond.

Objectives

I identified two sets of objectives: squad objectives, measurable through common agile metrics (cycle time, lead time, number of deployments and throughput rate), and behavioural objectives, measurable through regular feedback, retros and observation once the pilot was completed.

Squad objectives:
  1. Ship incremental value to our customers every sprint.
  2. Find a more efficient approach to user research in an agile environment.
  3. Use a more varied set of MVPs with clearer success metrics.
Behavioural objectives:
  1. From inertia (worrying about doing the right thing) to initiative (asking for forgiveness, not permission).
  2. From overly ambitious (trying to do too much in one sprint) to a 'ship and learn' mentality (not everything has to be perfect when shipping into production).
  3. From a command & control way of working (waiting to be told what to do) to self-organisation (asking why and making improvements to suit your squad).

Key results

I identified five key results that would help us track the objectives throughout the pilot.

  1. Improved lead time.
  2. Improved cycle time.
  3. Increased throughput rate.
  4. Increased number of deployments per sprint.
  5. Valuable, constructive and useful insights from the squad.

Selecting a squad participant

Our Pay & Control Costs (P&CC) squad were the perfect candidate for this pilot. This squad owns several customer goals, including setting up direct debits, paying for services, managing your billing account and more.

A few members of the squad were still fairly new to BT and EE, so we'd get the benefit of a fresh perspective. They also had existing access to our Loop design system, libraries and staging environments, as well as a meaty problem to solve: every month, our customer service team receives thousands of calls from people who have forgotten their login details and wish to make a payment, or who want to pay on behalf of someone else (e.g. an elderly relative).

The squad put forward a hypothesis: providing an online journey, requiring very little upfront information, for customers to pay off the debt on their account or to pay on behalf of someone else would alleviate the pressure on our call centres and make our customers' lives just that bit easier.

Out with the old, in with the new

This was our chance to throw the rule book out of the window and change our existing way of working to see what might stick.

Here are a few adjustments we made:

  1. Dedicated days of user research support
    Limited user researcher capacity and long lead times for synthesis meant that we needed to try a different approach for how our squads conducted research.

    Instead of booking research in advance (as we do today), we trialled dedicated days of research support per sprint. On the same three days of each sprint (usually the Monday, Tuesday and Wednesday of the second week), the squad would have a dedicated user researcher to help them learn and validate hypotheses.

    In those three days, our researcher would help determine what type of test would be suitable (depending on what the squad wanted to learn), gather participants, conduct the test (with the whole squad in attendance) and share the results with the squad.
  2. Clearer guidance for what and when to test
    Another key aspect of our revised approach to research was a better understanding of when something actually needed testing. To help figure this out, I put together a very simple guide based on effort and commercial impact (which you can see below).

    If something was high commercial impact (checkout, broadband or mobile hub pages) and high effort to implement, then we needed to validate it through some form of evaluative research to mitigate the risk of what we were shipping. If it wasn't high commercial impact or high effort to implement, then there really was nothing stopping us from getting it out into the wild and learning from real customer behaviour.
  3. Clearer success metrics for MVPs
    For reasons that are still a little hazy, we often launch products without ever defining clear metrics for success and/or failure, so we rarely know if an MVP is viable or not.

    The pilot encouraged the whole squad to define a clear metric of success at the very beginning of sprint 0, once they had identified both the customer and business goals/opportunities and generated a hypothesis.
  4. Continuous testing
    Instead of tacking on end-to-end testing after the code is ready, continuous testing involves executing the right set of tests at the right stage of the delivery pipeline, without creating a bottleneck.

    Adopting this new way of testing helped the squad establish a safety net that protected the user experience during an accelerated development process and avoided failures being discovered when it was too late.
  5. Flexible deployment cycles
    At BT and EE, we have typically maintained a fixed monthly release cycle. Whilst this gives our engineering and testing teams the predictability and reassurance of repeated cycle times, it's not nearly frequent enough to enable us to quickly ship and learn about customer behaviour.

    We needed to fix this and find a way that we could ship code when the squad was ready.

The outcome

The squad's hypothesis was tested and validated through a rapid, single-featured MVP that was designed, built and shipped to a small percentage of our customers over four weeks.

Before the test was shipped, the squad established their measure of success as an increase in conversion rate (of customers successfully making a 'logged out' payment) of between 5% and 10%.

After the first day, the conversion rate had increased by a massive 11%!

The results

The results from the pilot were very encouraging. Not only did we drastically reduce both cycle and lead times for the P&CC squad, we also increased their throughput rate and completely re-energised their way of working and approach to iterative product development.

Pre-pilot squad metrics:
  1. Cycle Time: 3 weeks.
  2. Lead Time: 6.5 weeks.
  3. Throughput: 25 tasks.
Post-pilot squad metrics:
  1. Cycle Time: 7 days. 👉 ~67% reduction.
  2. Lead Time: 4 weeks. 👉 ~38% reduction.
  3. Throughput: 42 tasks. 👉 68% increase.
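
For transparency, the percentage changes above are just simple arithmetic on the pre- and post-pilot figures (a minimal sanity check, assuming weeks are counted as seven calendar days):

    # Rough sanity check of the pre/post-pilot percentage changes above.
    # Assumes a week is counted as 7 calendar days for cycle and lead time.
    metrics = {
        "cycle time (days)":  (3 * 7, 7),   # pre: 3 weeks, post: 7 days
        "lead time (weeks)":  (6.5, 4),     # pre: 6.5 weeks, post: 4 weeks
        "throughput (tasks)": (25, 42),     # pre: 25 tasks, post: 42 tasks
    }

    for name, (pre, post) in metrics.items():
        change = (post - pre) / pre * 100
        print(f"{name}: {change:+.0f}%")

    # cycle time (days): -67%
    # lead time (weeks): -38%
    # throughput (tasks): +68%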

OKR review

It was a bit of a mixed bag of results when looking back at our OKRs.

Whilst the squad were able to ship more incremental value to our customers every sprint and use a different type of MVP with clearer success metrics, we weren't able to find a more efficient approach to user research in an agile environment.

The squad themselves had all the support they needed, when they needed it, but the researchers were quite burnt out in the process, and providing dedicated days of support like that clearly wasn't sustainable.

Our researchers were working hard - not smart.

Despite this observation, the squad's change in behaviour was exciting. Without asking for permission, the squad took the initiative to change our existing deployment cycle from monthly to completely unrestricted. This might sound somewhat unremarkable, but it was a massive step forward in being able to ship and learn on a much quicker timescale.

Squad feedback

The squad were generally quite receptive to, and open-minded about, the pilot. They learnt a lot and could see the benefits of what we were trying to achieve.

"Having available user testing resource has been great alongside a very supportive tribe." (Hannah, Scrum Master)
"Testing within three days was stressful and we struggled to get the right kind of participants recruited in time. Having said that, it was great to see the squad organised and focused with clear learning objectives." (Arthur, User Researcher)
"Working on specific tasks in a limited timeframe has allowed us to work in a much more focused way. We seem to be faster at unblocking obstacles and making sure the work gets done." (Tim, Content Designer)
"Overall a great experience, we are working a lot more efficiently as a squad, constantly collaborating and supporting each other." (Jordan, Product Owner)
"If we were able to conduct unmoderated tests ourselves it would be much easier, and we could test more often." (Jon, Product Designer)

What's next?

Since the pilot concluded in mid-October, the P&CC squad have continued to embrace this way of working, to great success.

We've used their story to scale these processes and ways of working to other squads across BT Digital, as well as other CFUs (customer-facing units), by hosting regular lunch-and-learns and drop-in sessions where people can come and learn more about what we achieved and how we did it.

As for a renewed approach to user research in an agile environment, it's back to the drawing board.

Our research ops team and I have since been collaborating toward a longer-term vision of providing user testing capability in each squad by giving them the right training and direct access to usertesting.com. Doing so will hopefully provide the flexibility and independence squads need to deliver efficiently throughout each sprint.

More to come, watch this space!