Problems, Solutions, Participants, and Choice Opportunities: Evaluation in a Garbage Can

Why evaluations in international organizations go unused — and what it would take to change that

An Inconvenient Starting Point

Evaluation in international organizations rests on a foundational assumption: that producing rigorous evidence about what works and what doesn't will inform and improve organizational decisions. It is a compelling premise. It is also, for the most part, wrong — not because the evidence is poor, but because the assumption about how decisions are actually made is flawed.

Study after study confirms what most evaluation practitioners quietly know: evaluation findings are chronically underused. Reports are acknowledged, management responses are filed, recommendations are "accepted" — and organizational behaviour remains largely unchanged. The profession has spent decades trying to fix this by improving evaluation quality, strengthening follow-up mechanisms, and making reports more accessible. These are worthy efforts. But they address symptoms, not the root cause.

The root cause is structural. It lies in the nature of decision-making itself in complex international organizations.

"We have been trying to fix evaluation use by improving the supply of evidence. The real problem is that we misunderstand the market in which that evidence must compete."

The Garbage Can: How Decisions Actually Happen

In 1972, Michael Cohen, James March, and Johan Olsen published a landmark paper describing decision-making in what they called "organized anarchies" — organizations characterized by ambiguous goals, unclear processes, and fluid participation. They proposed the garbage can model, in which decisions are not the product of orderly problem-solving but the result of four largely independent streams colliding:

Problems — concerns and issues that demand organizational attention, generated inside and outside the organization, often exceeding the organization's capacity to address them.

Solutions — proposals, ideas, tools, and approaches that exist independently of problems. Crucially, solutions often precede problems: they are answers actively searching for questions to attach themselves to.

Participants — individuals who move in and out of decision arenas depending on their time, attention, and competing demands. Who is in the room when a decision is made is often a matter of circumstance, not design.

Choice opportunities — occasions when the organization is expected or required to produce a decision: a board meeting, a strategy review, a budget cycle, a crisis.

A decision occurs when these four streams happen to converge. A problem becomes visible at the same moment a viable solution is available, the right participants are present, and a choice opportunity is open. The coupling is often driven by timing, proximity, and political opportunity — not by systematic analysis.

If this sounds abstract, consider how a new strategic priority suddenly appears in an organization's framework, how a crisis response gets designed in 72 hours, or how a particular programmatic approach gets adopted across an agency. In each case, the garbage can is at work.
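The coupling logic is easier to see in a toy simulation. The sketch below (in Python; every parameter is invented for illustration, and it is not a reimplementation of the original paper's simulation) lets the streams flow independently and registers a decision only when a problem, a solution, and a participant happen to coincide inside an open choice opportunity.

    import random

    random.seed(7)  # reproducible toy run

    TICKS = 120  # e.g. 120 weeks of organizational life
    problems, solutions, participants = [], [], []
    decisions = 0

    for tick in range(TICKS):
        # The four streams flow independently of one another.
        if random.random() < 0.5:    # problems are the least scarce resource
            problems.append(tick)
        if random.random() < 0.1:    # credible, decision-ready solutions are rare
            solutions.append(tick)
        if random.random() < 0.3:    # participants drift in and out
            participants.append(tick)
        # Attention decays: a participant stays only a few ticks.
        participants = [p for p in participants if tick - p < 3]

        # Choice opportunities open on a fixed calendar, e.g. a periodic board meeting.
        if tick % 12 == 0 and problems and solutions and participants:
            problems.pop(0)   # one visible problem couples to...
            solutions.pop(0)  # ...one available solution, and a decision happens
            decisions += 1

    print(f"unresolved problems: {len(problems)}, decisions made: {decisions}")

Run it a few times: problems pile up far faster than decisions dispose of them, and whether a given problem ever gets decided is largely a matter of timing. That is the structural environment evaluation evidence enters.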

Why Evaluations Go Unused: A Garbage Can Diagnosis

Viewed through the garbage can lens, the chronic underuse of evaluation is not a mystery. It is predictable. Evaluation fails to influence decisions for specific, structural reasons:

1. Evaluation enters the wrong stream.

Most evaluations produce detailed problem descriptions — findings about what went wrong, what underperformed, what gaps exist. But in a garbage can, problems are the least scarce resource. Every unit, every stakeholder, every external review generates problems. What is scarce are credible, actionable, politically viable solutions. Evaluation reports that diagnose without prescribing decision-ready options are adding to an already overflowing problem stream while contributing nothing to the solution stream where influence actually happens.

2. Evaluation arrives at the wrong time.

Evaluations are typically timed to project or programme cycles — midterm reviews, final evaluations, ex-post assessments. But organizational decisions follow a different calendar entirely: replenishment rounds, leadership transitions, strategy renewals, governing body sessions, crisis moments. An evaluation that lands three months after a strategy has been approved is an archive document. The same evaluation landing three months before that approval is a strategic input. The mismatch between evaluation timing and decision timing is one of the largest drivers of non-use.

3. Evaluation is absent from the choice opportunity.

The garbage can model shows that who participates in a decision moment matters enormously. Yet evaluation offices are typically absent from the spaces where decisions actually crystallize — strategic planning retreats, budget allocation meetings, crisis response teams, informal leadership consultations. The evidence exists on a shelf or in an inbox. It is not in the room.

And in a garbage can, what is not in the room does not exist.

4. Evaluation lacks a carrier.

John Kingdon, building on the garbage can model, identified the critical role of "policy entrepreneurs" — individuals who actively couple problems, solutions, and political opportunities. Evaluation evidence without a carrier — someone who champions it, translates it, and injects it into decision moments — is orphaned evidence. Most evaluation offices invest heavily in production and almost nothing in brokering.

5. Evaluation fights the ambiguity instead of navigating it.

International organizations operate with famously ambiguous goals: "reduce poverty," "promote peace," "achieve sustainable development." Traditional evaluation tries to pin down clear objectives and measure performance against them. When goals are ambiguous, this approach produces findings that feel disconnected from how the organization actually understands its own work. The evaluation speaks a language of precision in an environment that operates on negotiated ambiguity.

"Evaluation has been solving the wrong problem. The challenge is not producing better evidence. It is getting that evidence into the garbage can at the moment the streams converge."

Changing the Game: Six Shifts to Make Evaluation Matter

If the garbage can is the operating system — not a pathology to be fixed but the reality to be navigated — then evaluation must adapt. Here are six strategic shifts that could fundamentally change whether and how evaluation evidence gets used:

Shift 1: From Problem Description to Solution Packaging

Evaluation must move beyond diagnosis. Every evaluation should produce not just findings and recommendations, but 2–3 decision-ready options — concrete, costed, politically assessed alternatives that decision-makers can pick up and act on. The goal is to enter the solution stream, not just add to the problem stream. This requires evaluators to think like advisors, not just assessors.

Shift 2: From Project-Cycle Timing to Decision-Calendar Timing

Evaluation offices should map the organization's decision calendar — every major governance meeting, strategy review, budget cycle, and leadership transition — and reverse-engineer evaluation timelines from those dates. The question should not be "when does this project end?" but "when is the next decision this evidence could influence?" This single shift could transform evaluation relevance overnight.
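To make the reverse-engineering concrete, here is a minimal sketch, assuming a hypothetical decision calendar and invented durations for conducting an evaluation and brokering its findings: given a decision date, it computes the latest date the evaluation could start and still land in time.

    from datetime import date, timedelta

    # Hypothetical decision calendar; every date and duration here is invented.
    DECISION_CALENDAR = {
        "Strategy renewal goes to the Board": date(2026, 6, 15),
        "Replenishment round concludes": date(2026, 11, 30),
        "Country programme budget allocation": date(2027, 2, 1),
    }

    EVALUATION_DURATION = timedelta(weeks=36)  # assumed: design, fieldwork, clearance
    BROKERING_BUFFER = timedelta(weeks=12)     # assumed: packaging options, briefing decision-makers

    def latest_start(decision_date: date) -> date:
        """Latest date an evaluation can start and still land before the decision."""
        return decision_date - EVALUATION_DURATION - BROKERING_BUFFER

    for decision, when in sorted(DECISION_CALENDAR.items(), key=lambda kv: kv[1]):
        print(f"{when}  {decision}: start evaluation no later than {latest_start(when)}")

The numbers are placeholders; the point is that the start date is derived from the decision date, not from the project cycle.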

Shift 3: From Reactive Reporting to Proactive Agenda-Setting

Evaluation has underused power to shape which problems are visible to the organization. Through synthesis products, evaluation briefs, annual reports, and meta-evaluations, evaluation offices can seed the problem stream strategically — keeping critical issues alive in organizational discourse even between formal decision points. This is not advocacy. It is ensuring that evidence-informed problems compete effectively with politically driven ones for organizational attention.

Shift 4: From Management Response to Evidence Brokering

The management response mechanism — where managers formally respond to evaluation recommendations — creates an illusion of use. Real influence requires ongoing relationships with policy entrepreneurs: the mid-level and senior staff who broker between technical and political spaces and who actively couple streams in the garbage can. Evaluation offices should identify these individuals and invest in sustained advisory relationships, not transactional compliance exchanges.

Shift 5: From the Shelf to the Room

Evaluation heads and senior staff should negotiate standing presence at key decision forums — not to present reports, but to inject evidence in real time when streams converge. A two-minute intervention at a strategy retreat, grounded in evaluation evidence, can have more impact than a 100-page report distributed afterwards. Independence does not require absence from decision spaces. It requires integrity within them.

Shift 6: From Measuring Against Fixed Objectives to Sense-Making

Alongside traditional accountability evaluations, evaluation offices should invest in developmental, utilization-focused, and complexity-aware approaches that help the organization make sense of what it is actually doing in conditions of ambiguity. This means evaluating not just "did we achieve our stated goals?" but "what are we learning, what is emerging, and what does it mean for our direction?" In an organized anarchy, the sense-making function may be evaluation's highest-value contribution.

The Mindset That Needs to Change

Why Evaluations Go Unused (Current assumptions) → How to Change That (Garbage can–adapted approach)

We assume evidence drives decisions → Evidence must actively find decisions
We time evaluations to project cycles → Time evaluations to political and decision windows
We treat recommendations as the output → Produce decision-ready options as the output
We equate independence with distance → Practice independence as trusted presence
We measure success by recommendations accepted → Measure success by evidence present when streams converge
We fight organizational ambiguity → Navigate ambiguity through sense-making
We invest in report production → Invest equally in evidence brokering

A Challenge to the Profession

The evaluation profession in international organizations stands at a crossroads. We can continue to refine our methods, improve our reports, and strengthen our follow-up systems — all within a framework that assumes rational decision-making. Or we can confront the reality that we are operating in organized anarchies, and that our influence depends not on the quality of our evidence alone, but on our ability to navigate the garbage can.

This is not a call to abandon rigor. It is a call to add strategic intelligence to methodological excellence. The best evaluation in the world is worthless if it is not in the right stream, at the right time, in the right form, carried by the right person, when a choice opportunity opens.

• • •

The garbage can is not going away. The four streams — problems, solutions, participants, and choice opportunities — will continue to flow through our organizations in their messy, unpredictable way. The question for evaluation is whether we will keep standing outside, producing evidence that nobody asked for, or step inside and start shaping what comes out.

The evidence on evidence use is clear. It is time we used it.

References and intellectual foundations: Cohen, M.D., March, J.G. & Olsen, J.P. (1972), "A Garbage Can Model of Organizational Choice," Administrative Science Quarterly; Kingdon, J.W. (1984), Agendas, Alternatives, and Public Policies; Patton, M.Q. (2008), Utilization-Focused Evaluation; Weiss, C.H. (1979), "The Many Meanings of Research Utilization," Public Administration Review.

💬 I'd welcome a conversation: How does your evaluation office navigate the organized anarchy? What has worked to get evidence into the room when it matters? What hasn't? Let's move beyond lamenting non-use and start strategizing about it.

#Evaluation #DecisionMaking #InternationalDevelopment #OrganizedAnarchy #EvidenceUse #GarbageCanModel #PublicPolicy #MonitoringAndEvaluation

