CAPABILITY FACTORY · OUR PROCESS
How the work actually runs
Every Capability Factory engagement runs through the same method — a structured arc from problem to measured outcome, with traceability maintained at every step. This page describes it honestly: how the work is shaped, what you'll experience, and what makes the rigor real. No hand-waving, no black box.
Discover. Build. Measure. Operate.
Every engagement moves through four stages.
Discover. We understand your business, structure your problems, and build the capability model. Two 90-minute scoping sessions plus the analytical work the Engine runs between them produce a commitment-grade package: problem register, capability register, outcome qualification, traceability, ROI model, phase plan.
Build. Your leadership validates the model and commits to outcomes. Scope locks. Capabilities get enabled in week-long architect sprints — Monday sprint kick-off, Friday demos, scope adjustments every week.
Measure. After the capabilities are running, we compare the outcomes to the targets your leadership committed to. Some outcomes land in days. Some take months. Some need a full year of operating data. We measure them honestly.
Operate. We stay with you. The measurement from one engagement feeds the next engagement's discovery. Partnership compounds.
What runs through all four stages is the traceability: every problem traces to a capability, every capability traces to an outcome, every outcome is tied to a measurable target you named. Nothing gets built without that chain.
Integrity of intent
What distinguishes Capability Factory's method from traditional consulting engagements is what runs underneath the four stages: a continuously maintained model that connects what you scoped to what gets built to what gets delivered.
The most common failure mode of consulting engagements isn't scope drift during execution — it's that the work gets delivered to spec, but the spec was never connected to the business outcome the buyer needed to move.
Built to spec, wrong spec.
The deliverables ship. The business stays broken.
Our method prevents this because the traceability is structural, not aspirational. As your model evolves through scoping, workshop, and delivery, every connection stays intact. What gets scoped is what gets built. What gets built is what was envisioned. What was envisioned was traced to the outcome from the start.
That's what we mean by integrity of intent. It's the discipline that makes the capability you receive the capability you actually needed.
The Capability Engine runs the method
The method moves at this speed because of the platform.
Two 90-minute scoping sessions producing a commitment-grade analytical package isn't something you can do by hand. Neither is compressing six to eight weeks of traditional discovery into days. The method requires infrastructure, and the infrastructure is our proprietary platform — the Capability Engine.
The architect runs the method. The Engine handles the scale. AI-assisted signal extraction reads the inputs you bring. The Engine clusters, structures, and maintains the connections between every signal, problem, capability, and outcome. The senior architect makes the calls; the Engine handles the analytical labor that supports them.
The Engine is how collaboration happens. If you want to engage with it directly — and most clients do — you work alongside your architect inside the platform. Your leadership interactively defines outcomes. Your stakeholders contribute signals and problems. Your architect refines the model as new information emerges.
The artifacts are the same either way. If you prefer to work with your architect and receive the artifacts in standard form, you get the same analytical package. The Engine is how the work gets produced. It's not a prerequisite for getting value from the method.
Your capability model stays live. As long as we have a relationship, your work in the Engine stays current. When a new problem surfaces, we don't start over — we pick up where we left off. The accumulation is real and it belongs to you.
What you'll see at every stage
Most consulting engagements are opaque by design. The deliverables are slides. The commitments are narrative. The outcomes are claimed but not measured. The client has no way to check whether the work was real.
Our method is the opposite. At every stage, you receive the artifacts that document what's been done and what's been agreed. The package you can expect:
- The problem register — every named problem your stakeholders surfaced, with scope, owner, and cost to the business.
- The capability register — every capability we'll enable, with what it does, who uses it, and what it produces.
- The outcome qualification table — every outcome, with its baseline, its target, the data source it'll be measured against, and when it becomes visible.
- The traceability worksheet — the living reference document that shows how every problem traces to a capability, and every capability to an outcome. Issued to you at scoping; updated as the engagement runs; available to verify at any point.
- The benefit realisation plan — how each outcome will be tracked, when it becomes measurable, and what your business needs to do to realise the benefit.
- The outcome measurement — actuals versus commitments, after the work has run long enough to produce the data.
These aren't slides. They're the evidence that the work was real, the scope was honest, and the outcomes either landed or didn't. The transparency is the point.
How a phase works in practice
The structure above describes the analytical work and the commitments that shape an engagement. The actual delivery work — enabling the capabilities — runs in sprints. Here's what the experience looks like:
Week 1. Discovery and analysis happen in two 90-minute sessions, with the Engine running the analytical work between them. By the end of the week, your problem, capability, and outcome registers exist in draft form.
Week 2. Your leadership reviews the model and commits to outcomes. Scope locks against those commitments. The qualified package gets issued.
Week 3. Scope converts to sprint count. The SOW gets signed. The first sprint begins.
Weeks 4 onward. Delivery in one-week sprints. Monday sprint kick-off, Friday demos, scope adjustments every week. Capabilities come online one at a time, each against the traceability and outcome commitments established earlier.
After delivery. Measurement runs over the appropriate timeframes for each outcome — some weeks, some months, some a full year. The Continuous Fit retainer, if you're on one, keeps the relationship live and your model current in the Engine.
For larger engagements — multi-phase programmes — the same pattern repeats per phase. The first phase of delivery has its own small discover-build-measure loop; the next phase picks up from there, building on what came before.
See how this would run for your business.
Two 90-minute sessions, a real analytical package at the end, and a clear understanding of what the work would look like for you.