Who this is for.
You’re sitting in a managers’ team meeting, and someone asks which projects are at risk. You look around the table. Everyone reports differently. One team uses a traffic-light system. Another summarises in a paragraph. A third says nothing because their project doesn’t seem to be on the agenda this month. By the time a problem reaches that table, it’s already urgent.
You’ve tried templates. You’ve tried shared trackers. Adoption lasts a few weeks and then slowly dies, usually because the task of tracking feels larger than the problem it was meant to solve, and no one senior enough is visibly using it. This playbook, drawn from Trussell, is for when inconsistent tracking isn’t just frustrating; it’s starting to produce decisions you can no longer justify.
The backstory.
Trussell coordinates a national network of food banks, relying on over 36,000 volunteers to deliver frontline services across the UK.
The post-COVID years brought a sustained and significant increase in demand, and in 2022/23, Trussell distributed almost 3 million emergency food parcels - a 37% increase over two years.
That growth accelerated operational activity. Trussell was running multiple cross-functional projects simultaneously, including network development, technology, fundraising, and infrastructure, each with its own teams, timelines, and reporting habits. The organisation had governance infrastructure in place, including a strategic plan, annual plans, and delivery reviews, which were designed to track progress and highlight problems early. The question was whether the information reaching those reviews was reliable enough to act on.
Reality check.
Trussell’s 2022/23 accounts record £55.2 million in expenditure, up almost 10% on the previous year, and explicitly described as a response to growing need and an acceleration of work across the network.
The accounts also show significant investment in network support and organisational development during this period. What they don’t show is whether the projects driving that spend were being governed consistently enough to flag problems before they became costs.
An organisation accelerating at that rate, with multiple parallel workstreams and a distributed volunteer network, is taking on financial exposure if its project reviews aren’t based on comparable data.
The real problem.
The problem, as presented, was inconsistency: each team ran projects differently, used different formats, and applied different definitions of risk.
But inconsistencies of this kind make governance hard to act on, because they yield information you can’t fully trust: no one has agreed on what information a project is required to produce. Not what tools to use. Not what format to use. What questions every project, at every stage, should be able to answer. What is this project trying to do? Who is responsible for making it happen? What is currently at risk? What decision is needed, and from whom?
Without that agreement, project reviews can track activity but struggle to compare risk. Leaders know things are moving. What they can’t easily see is which things are stuck, or why, or what it would take to unblock them - not until someone escalates. And escalation, in a fast-moving organisation, is always late.
The surprising thing is that the governance infrastructure existed - that’s usually where a consultant would poke around to diagnose the problem. Here, the question was what was emerging from that infrastructure, and who was accountable for it. This is a pattern that appears repeatedly in growing charities, and it rarely starts with a poor process. It starts with an unclear role. When no one has been explicitly assigned responsibility for a project’s momentum - a role too often confused with overseeing it, or passively listening to updates - inconsistency fills the gap. Teams manage upwards in whatever format feels natural to them, and reviews receive whatever information arrives. The framework, when it eventually comes, is trying to solve a process problem that is really an accountability problem.
Standardising delivery is often mistaken for a project management intervention. It is not. It’s a governance one, and the piece of governance most often missing isn’t the template. It’s a person with the authority to unblock, and the explicit obligation to show up and use it.
What they did.
Trussell introduced a shared project lifecycle with a standard sequence of stages that every project follows from initial idea through delivery to formal closure. This gave teams a common language and a common structure without dictating how each stage was managed internally.
They created a small set of standard documents, including a project definition covering scope, goals, and ownership; a project plan with a shared milestone format; a watch list for tracking risks, issues and dependencies; and a closure note. The documents were designed to be proportionate to the work and light enough that using them felt easier than not using them.
A bespoke handbook was written specifically for Trussell, using real internal projects as examples rather than generic scenarios. This meant the guidance reflected the realities of how the organisation works, including its EDI commitments and values, and reduced the gap between what the handbook described and what teams recognised from their own work.
Adoption was treated as a long-term effort rather than a one-time rollout. Repeated training sessions, structured sponsor involvement, and short feedback loops kept the framework active and enabled refinement in response to its use in practice.
The toolkit.
The following document types are often required for an implementation like this. Some are named directly in the source, whilst others are standard components for this type of framework. Together, they cover best practices for making a project visible and governable.
Project Definition
Can be one page that includes scope, goals, named owner, and named project sponsor.
Project Plan
Timeline and key milestones in a shared format, proportionate to the size of the work.
Watch List
This is a single live document tracking risks, issues, and dependencies. Updated before every review.
Closure Note
A brief record of what was delivered, what was learned, and what should carry forward to the next project.
Handbook
Can be briefer than you think - 4-8 pages will do. Titled ‘How we do projects here,’ this links directly to the documents above and uses real internal examples throughout.
One-Page Status Update
For fortnightly or monthly updates that cover what has progressed, current blockers, and decisions or support required from outside the team.
One-Page Project Lifecycle
The standard stages, with a brief definition of what good completion looks like for each.
Sponsor Drop-in Agenda
This is a structured meeting (30-45 minutes) for the project sponsor that allows for decision-making and unblocking, but crucially not status reporting. The agenda - focused on actions - is essential; without it, the meeting defaults to a briefing.
What shifted.
At Trussell, review meetings could, for the first time, compare progress across projects using the same format and the same risk language. A leader looking across the portfolio could now see where things were stuck without waiting for someone to escalate, because the watch list made it visible as a matter of course, and not as a result of a problem becoming urgent enough to surface.
The sponsor drop-in changed function in a way that directly helped projects move. Before, sponsors were brought up to speed - given a list of what had happened, in excruciating detail - present to receive information rather than to act on it.
But with the changes, the sponsor arrived already informed, because the status update had been circulated in advance. They weren’t in the meeting to hear what had happened; they were there to drill into the decisions and the unblocking. Decisions that had previously taken weeks were made in the room. The project team left with an answer, not another follow-up.
The closure note also created a capability that hadn’t existed before: teams could start a new project without rebuilding context from scratch. Organisational knowledge was no longer held by individuals; it became accessible to whoever took on similar work next. In a network operating at Trussell’s scale, that compounding effect on time and quality is significant.
Why it worked.
The core mechanism at play was proportionality: the effort a process demands, relative to the task it supports. When a process feels heavier than the task it’s there to support, people route around it - not because they’re resistant to process or, as some would say, lazy. They really aren’t. It’s because they’re rational.
The design idea here was to make the right way the easiest way. A one-page status update takes less time than explaining the project from scratch in a meeting. A watch list is faster than an email thread about risk. So the transferable principle is that adoption follows ease, not instruction or buy-in campaigns. If your framework isn’t being used, the first question isn’t ‘how do we train people’, but ‘is using this faster than not using it?’
Grounding the handbook in real internal examples closed off a specific route to non-adoption. When guidance feels imported - from a consultancy, from another sector, from a course - teams read it as advice for a different kind of organisation: abstract, not relevant. Using Trussell’s own projects meant the examples in the handbook were work the teams recognised. Whoever writes your handbook needs access to live internal projects, not hypothetical ones. Generic handbooks only get read once.
The sponsor drop-in worked because it resolved a design problem that had been mistaken for a behavioural one. Sponsors were now visibly and structurally responsible for decisions. The signal this sent to everyone in the projects was as important as the decisions themselves: senior leadership was actively using the new framework. In organisations where senior teams aren’t visibly engaged with a process, adoption below them unsurprisingly collapses. And the answer isn’t more training. The answer is to ensure that senior engagement with the framework is visible. In a smaller organisation, this can be as simple as the CEO referencing the watch list in a team meeting. The tool is less important than the signal.
Where this breaks.
The most common point of failure is the sponsor who holds the role in name only. They appear on the project definition. They attend the drop-in. But the meeting becomes a briefing rather than a decision point, and the project loses the one thing the sponsor role is supposed to provide: the authority to unblock. The early warning sign is simple: if you leave two consecutive drop-ins without a decision made or a blocker removed, the role has already drifted. Don’t wait for delivery to slip before naming the problem.
A related failure is the definition problem. We’ve seen teams that don’t know what counts as a project - not their fault; no one is taught this unless they go looking for it. This produces two behaviours that look opposite but have the same cause. Some teams over-apply the templates to routine tasks and generate paperwork that adds no value; others avoid the templates altogether, even though the work still needs structure.
The early warning sign here is volume. If the framework produces documents for everything, or for nothing, the information isn’t reliable. A simple fix - ideally applied before the problem appears - is to define what counts as a project for your organisation: a clear boundary, a defined endpoint, and an agreed reason for existing. Write that definition down - one sentence will do - and ensure everyone is using it.
Templates can also drift into compliance theatre. The project definition gets filled in, the watch list gets updated, but none of it connects to how decisions are being made. The thing to watch is whether the watch list changes between reviews. Live risk tracking moves. A static watch list means either nothing is at risk (unlikely) or no one is maintaining it honestly (the usual reason). If it hasn’t changed in a month, ask why at the next drop-in.
Still brewing.
The Trussell case study is about process. But the process only embeds when someone is responsible for embedding it - not just monitoring or reporting on it, but actively keeping projects moving.
A question to brew ahead of your next project review: of the projects currently running in your organisation, how many have a named sponsor who knows exactly what they’re there to make happen - not just to oversee, but to unblock and decide? And if you asked those sponsors, would their answers match yours?
Source for this case study: https://www.managementcentre.co.uk/
Note: the specific document titles used here reflect Coffee Break Ops’ interpretation of the framework's requirements. The source case study describes the approach and outcomes. Some document names have been inferred from common practice in equivalent implementations.
Every week, a real ops problem from a real charity:
What they did, what it reveals, and what you can take back to your desk.
Sign up to get it in your inbox.
