Who this is for.
You're in an SLT meeting. The programme director identifies groups that the organisation is not currently reaching. Everyone agrees they are relevant. Then someone asks how they will be reached, and the room goes quiet, because there is no route to those groups and no way to hear from them. Not because nobody cares, but because no one has built the infrastructure. This playbook is for ops leaders deciding whether that is their problem to solve and what solving it looks like in practice.
The backstory.
Music in Hospitals & Care (MiHC) brings live music into hospitals, care homes, and community settings across the UK. Before the pandemic, their North office was part of a national programme with 448 professional musicians and over 1,400 healthcare partners, delivering sessions to more than 100,000 people a year.
According to MiHC's Annual Report 2020/21, Covid-19 brought almost everything to a halt. In-person delivery collapsed to 70 sessions across the year. The North office in Manchester closed. The programme that had been built over decades - the venue relationships, the musician networks, and delivery itself - was effectively suspended.
As MiHC began planning a return to in-person delivery under the banner #BackToLive, the North office approached a Cranfield Trust volunteer for support in building a business plan.
Reality check.
New Philanthropy Capital (NPC), the UK charity think tank, has documented the gap between charities collecting feedback and that feedback actually shaping what they do. In research produced with Keystone Accountability (the foundational UK study on this problem), NPC found that charity users typically have little influence over organisational decisions and no defined way to signal what they would like to change, and that the UK domestic sector lags behind the international development sector on user voice. The focus in governance and reporting, NPC found, tends toward funders rather than those meant to benefit.
The NPC finding points to an infrastructure deficit rather than a values one. Most charities that gather feedback gather it from people already engaged with their services: partly a design choice, partly a resource constraint, and partly the result of no one choosing to build anything different. Forms go to session participants. Surveys go to partner staff. Testimonials come from people the programme has already reached. Useful, but structurally unable to surface the voices of groups the charity does not yet reach. The result is an organisation with robust feedback alongside a persistent blind spot: it knows what the people it currently serves think about what it currently offers, but not what the people it does not reach would choose. This is not just a programme issue. It is an infrastructure issue, which means ops has a role in solving it.
The real problem.
For MiHC, the programme team identified three underserved groups: rural communities, children, and working-age adults. The team could not work out who was accountable for closing the gap, mainly because creating operational infrastructure is not what programme teams do.
MiHC's objectives for the period included increasing its knowledge of the ‘musical needs of the people we support’ and ‘developing a cohesive business plan’. MiHC was not unique in this. It is a common challenge across charities of all sizes: the processes in place work for those already in the programme, but no one has built the feedback mechanisms required for those outside it. This is the moment ops either steps in, or the objective rolls forward to the next programme planning cycle unchanged.
The cost of not stepping in is not just uneven reach. It is also an evidence problem. If a charity cannot hear from underserved groups, it cannot design well for them or show funders what has changed as a result. Impact reporting starts much earlier than the report itself. It starts with building the mechanisms that generate the right evidence.
The pandemic pause brought that cost into focus. MiHC's challenge is common when rebuilding programmes: choosing between reconstructing what existed and deciding afresh which groups should be reached. Too often, without a structure in place, the familiar partners get called, the familiar formats get reinstated, and the groups that were missing before are still missing.
What they did.
Cranfield Trust provides pro bono management support to charities through experienced business volunteers. As the case study presents it, the volunteer assigned to MiHC's North office did not tell the team who to target or what to build. Instead, they created the operational framework that ensured the programme team's aspirations could be executed, turning their objectives into a repeatable process with a set of tools and named accountability. Coffee Break Ops reads the intervention as three operational shifts.
The first intervention was a gap analysis: mapping current delivery against potential need to identify which populations were significantly underserved. The ops contribution was repeatability: a consistent starting point that any planning cycle could return to. For MiHC, this analysis identified three priority groups: rural communities, children, and working-age adults.
The second change was deepening the analysis of how MiHC worked with beneficiaries. In this process, MiHC defined a new relationship, seeing their beneficiaries through a 'customer/product' lens: customers, as in any business, have preferences, and MiHC's sessions were products that customers could choose or decline. This shifted the discussion from principle to design, and it changed the questions asked in programme planning meetings: 'What would this group want?' became the default instead of 'Does this group benefit from what we offer?' It is a technique transferable to any planning cycle, where the risk is designing for groups you already know rather than those you don't.
The third intervention was the accountability structure. The Cranfield Trust volunteer maintained an ongoing mentoring relationship after the business plan was written, keeping the priorities connected to live decisions instead of archived as intentions. Without that structure, the business plan would have been written and filed away.
The source case study does not describe in detail how MiHC reached families outside the existing programme: which routes were used, and what questions were asked. It does, however, highlight what the interventions produced, most visibly the Lullaby Hour initiative. Designed around what families with young children said they wanted, Lullaby Hour is described as ‘very successful’, a winner of a national innovation award, and subsequently funded for UK-wide rollout.
The toolkit.
What this requires in practice is simple but often missing: a gap audit, a listening infrastructure map, a route-to-voice map, and a delivery plan with named accountability. The point is not the template itself. It is creating repeatable infrastructure that any function in an organisation can use, so underserved groups can shape decisions before the next planning cycle closes.
Gap audit template
A one-page structured document with three columns: decision area, audience currently informing it, and audience not currently informing it. The focus is on the decisions that make a difference, e.g. programme design choices, funding applications, delivery model changes, and service developments. The goal is a working list of key decisions, not an exhaustive audit of everything. Once completed, the gap audit produces a current record of whose voice is shaping decisions and whose is not. The aim is to prioritise hearing from two to three underserved groups or audiences. If the gap audit runs longer than one page, it has become a research project rather than a diagnostic tool. A sketch of one row follows, with entries invented for illustration rather than taken from the source case study.
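Decision area: next cycle's programme design. Audience currently informing it: session participants and care home activity coordinators. Audience not currently informing it: rural communities, working-age adults, families with young children.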
Listening infrastructure map
This is a short document that answers three questions for any function: what do we currently use to gather input, who does it reach, and what is our current data not telling us? The third question needs to be worked through with a specific scenario, using real data, rather than being answered hypothetically. For MiHC, the scenario would have been this: if a funder asked us to evidence the impact of our work on working-age adults in rural communities, what data would we have? The answer, before the business planning process, was very little because no one in those groups had been asked. Running that question against your own programme or function is likely to reveal the same gap. The feedback you have is only as broad as the people you are already reaching.
Route-to-voice map
For each audience a function isn't currently hearing from, create a maintained list of organisations, roles or individuals with an existing relationship to that audience. The examples will vary by function and context: routes to beneficiary groups might run through health visitors, community centres or housing associations; routes to staff cohorts through trade union reps or staff networks; routes to funders through peer organisations or sector contacts. These route maps are starting points for a conversation, but too often they become substitutes for it, and the listening exercise never reaches the intended audience directly.
One-page delivery plan
Gap. Priority. Pilot. Feedback route. Next step. Those are the column headers for your one-page delivery plan, applicable to any function and kept tight enough to force choices. The delivery plan identifies a specific pilot deliverable within eight weeks and a person accountable for it: not necessarily the project lead, but someone with the authority to ask challenging questions when the deadline arrives and the mandate to act on the answers. The accountable person's role at each check-in is to review both progress and decisions: what has happened, and what needs to be resolved next. The format is the same no matter the function, e.g. programmes designing a new session, HR testing a new staff engagement tool, or finance creating a new budget consultation process. Without an accountable person and a defined pilot end date, the delivery plan becomes a statement of intent rather than a plan. A filled row, again invented for illustration rather than drawn from the source, is sketched below.
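Gap: no input from rural communities. Priority: high, named in two strategy documents. Pilot: two listening sessions arranged through rural health visitor networks. Feedback route: partner staff relay plus short follow-up calls. Next step: findings to the week-eight programme planning meeting, owner named.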
What shifted.
The success of MiHC’s Lullaby Hour suggests that this kind of ops intervention can do more than improve an existing programme. It can create the conditions for something new to emerge. Lullaby Hour did not come from refining a familiar format. It came from building a way to hear from people the organisation was not previously reaching.
MiHC’s Annual Report 2021/22 suggests the longer-term gain was not just one successful initiative, but a more consistent way of working. The evaluation framework and Theory of Change helped make listening part of programme design rather than a one-off exercise. That is the bigger shift: not just a better idea, but a more repeatable way of generating better ideas, better evidence, and better future decisions.
Why it worked.
When MiHC's North office began planning #BackToLive - their internal name for the post-pandemic return to in-person delivery - the default response was tempting: call existing partners, reinstate existing formats, and recreate what had worked before. The Cranfield Trust engagement interrupted that default by creating a process the team had to work through - asking challenging questions and consciously reviewing their unknowns - before making any rebuilding decisions. Without the interventions, the familiar programme would have returned, but with the same gaps.
The ‘customer/product’ lens seems like a mission identity question, but it is not. Its value was practical: it changed the direction of programme planning meetings. And because the prompt is structural rather than mission-based, an ops leader can add it to a planning template without requiring teams to adopt a completely new framework, and it can be used within every function of the organisation.
The mentoring relationship that emerged from the business plan process was a deliberate move away from mentoring as support and towards check-ins that held the team accountable. In a world where plans are commonly delayed by delivery and time pressure, an external relationship turned ‘later’ into a specific date, which meant that when someone asked about the rural community gap, there was an obligation to have an answer ready.
Where this breaks.
The most common failure is building the infrastructure only after the programme team needs it. The gap audit is commissioned when a planning deadline is a week away; the route-to-voice map is assembled the morning before a community engagement session; the delivery plan is created for one specific project rather than maintained as a standard. Everything exists, technically, but it was made under pressure, for a single use, and it will not be there next time. The early warning sign is a programme planning cycle in which someone has to create, from scratch, something that should already exist.
The second issue is treating templates, such as the toolkit described in this case study, as delivered the moment they are handed over: assuming adoption has happened rather than managing it. New organisational infrastructure needs an embedding phase, a period when someone in an ops role stays close enough to notice drift before it becomes the new normal. A simple test is whether the ops lead can still explain what the infrastructure is for, when it was last reviewed, and where adoption is drifting.
Ownership is the final failure, and it is circular. It starts with silence because nobody has assigned the problem, so it has no in-tray to sit in. When someone eventually tries to move it, the ops function points out that deciding what the organisation needs to hear is a programme decision. The programme team points out that building the mechanism to do it is an ops job. Both are right, and neither produces the required infrastructure. The early warning sign is a gap that has appeared in more than one strategy document but does not have a named owner in any of them. The circular argument does not need to be resolved before work starts, but it does need to be named as the reason work cannot start.
Still brewing.
Most ops leaders reading this will do the arithmetic on capacity before they do it on opportunity. Building and maintaining this kind of infrastructure can look like one more thing on an already busy list. But MiHC's case study suggests the issue was not a lack of data. It was that the existing data described the wrong population. The organisation had feedback mechanisms, and a 70-year history showed they worked. What they did not produce was insight into what working-age adults, rural communities, or families with young children would choose.
What the business planning process added was not an entirely new category of work. It created a more structured version of work that many organisations are already doing in fragments: identifying gaps, finding routes in, testing a response, and assigning accountability. The difference is that, once built, those mechanisms can be reused rather than recreated under pressure for each new project.
That is the choice the MiHC story brings into focus: either ops builds the mechanism, or programme aspirations roll forward into the next planning cycle unchanged.
Source for this case study: https://www.cranfieldtrust.org/articles/27-how-a-business-plan-helped-our-charity-become-more-customer-focused
Other sources
NPC, User Voice: Putting People at the Heart of Impact Practice, 2016
Note: the specific document titles used here reflect Coffee Break Ops’ interpretation of the framework’s requirements. The source case study describes the approach and outcomes. Some document names have been inferred from common practice in equivalent implementations.
Every week, a real ops problem from a real charity:
What they did, what it reveals, and what you can take back to your desk.
Sign up to get it in your inbox.
