
Breaking the Misconceptions Part 4: Misconceptions About Governance

Even when process design is sound, assets are improving, and teams are strong, performance can stall if the plant cannot reliably convert uncertainty into decisions, across shifts, across functions, and under pressure. Governance is not cadence for its own sake; it is the operating design that routes problems when local loops don’t converge, forces recurring loss into prevention rather than management, and makes trade-offs decidable instead of negotiable.

By Ozan Ozaskinli, Eda Kocakarin, and Sercan Aldatmaz

1. Introduction

Food manufacturing has real headroom: capacity gains, waste reduction, and more stable performance. Yet many plants struggle to turn that headroom into sustained progress. As the earlier articles in this series outlined, the constraint is often not equipment or technology, but the misconceptions that quietly shape how work is managed across six dimensions: Process, Physical Assets, People, Governance, Systems & KPIs, and Planning. Each article in this series challenges a specific set of assumptions that limits performance and shows where capability is hiding in plain sight.

Part 1 focused on misconceptions about process optimization. Part 2 shifted the focus to people, the most overlooked driver of operational performance: the beliefs that shape who stays, who learns, how teams are staffed, and how supervisors lead. Part 3 turned to physical assets, where the cost of a belief is paid in minutes, yield, and reliability rather than in arguments: small repairs crowd out lifecycle thinking, bottlenecks get managed from memory while conditions shift, and changeovers get treated as work at the machine while prerequisites and support remain unmanaged. In each case, the damage is rarely dramatic failure. It is drift, created by decision rules that sound reasonable in isolation and survive because they feel familiar.

Part 4 moves to Governance: the steering layer that determines whether capability compounds or dissipates. The pieces in this section focus on four control points at a high level: escalation as a method switch, forums that decide and close instead of multiplying coordination, continuity that carries operational state across shifts, and decision rights at intersections so seams don’t turn into queues. The aim is straightforward: repeatable control, so improvements in process, assets, and people translate into stable output, predictable quality, and reliable service.

Misconception 1: “We run regular meetings, so control is in place.”

One pattern that reliably flags governance issues is the meeting cycle: performance slips, meeting time expands to “regain control,” and then the system collapses under its own weight without producing more control.

Here’s what that cycle looks like. A miss triggers an extra check-in, a longer daily, a new forum. For a while it helps. Then agendas sprawl, the same topics return with new dates, and meetings begin consuming the time needed to solve what they surface. Attendance becomes selective, sessions are shortened, and the operation drifts back toward improvisation. When results slip again, structure gets added again.

You don’t need months to confirm it. The pattern announces itself in the first week: the calendar is “full,” everything is “urgent,” and recurrence is treated as background. That language is the fingerprint.

When we recognize that signature, we sit in the meetings because governance is observable there. Even the smallest forum reveals the system: what gets named, what gets deferred, and whether there is a reliable place for it to be closed.

You can see it in a 6:30 a.m. production meeting. Planning opens with a schedule gap: yesterday finished short, today is tight, and an outbound load is at risk unless the first line runs clean. Operations flags staffing: two absences on packaging, a trainee on the filler, and a mid-shift changeover that compresses the window. Quality brings up an open deviation from the night, a temperature excursion that needs containment and extra checks. Maintenance is present because the capper has been the same recurring irritation all week; small stops have become background.

The room moves quickly into action. A supervisor reallocates an operator to cover packaging. Planning reshuffles the run order to protect the outbound load. Quality negotiates a hold point and sampling without delaying startup. Someone mentions the capper again, but it stays a “watch it” item; maintenance does a quick adjustment and agrees to keep an eye on it. The day is staffed, sequenced, and defended.

That is a useful meeting. It stabilizes the shift. But notice what it cannot do: it cannot reduce the probability that the same capper stop shows up tomorrow. The issue is acknowledged, managed, and normalized. Nothing forces a prevention decision, assigns ownership for eliminating recurrence, or creates a closure point where the organization must return with an answer. The plant remains coordinated around repeat loss.

What’s missing isn’t effort. It’s a control loop. A governance system needs a forum that can decide, an owner who will execute, a mechanism to close the result, and inputs strong enough to support learning.

Forum design matters because decisions have horizons. A daily meeting can coordinate the next 24 hours; it cannot redesign standards or allocate investment. But it should know where recurring issues go next, what question is being handed off, and when that decision returns. Without a defined ladder (daily control, weekly learning, monthly trade-offs), plants discuss everything everywhere and resolve nothing with finality.

Decision rights matter because actions are not decisions. In the scene, work was assigned, but no one had explicit authority or obligation to decide how the recurring capper stop will be eliminated, which prevention path is chosen, and what resources are committed. Authority stays ambiguous; “watch it” becomes the operating model.

Closure matters because governance is memory. If the plant cannot answer “what did we decide last time, did it work,” recurrence becomes permanent and meetings become a place to re-coordinate around what should have been prevented.

Data is both an enabler and an obstacle. Most plants have metrics; fewer use them as steering inputs. When data becomes a courtroom, people defend numbers rather than diagnose. When data is weak, leaders avoid it and steer by anecdote and seniority. Either way, the room struggles to separate noise from repeatable patterns worth a decision.

Leadership is the multiplier. Architecture can point to the right forum; only leaders can enforce what the room will and will not accept: vague “look into it,” recycled issues without consequence, or “watch it” as an operating model. Culture follows that conduct. If surfacing structural issues creates friction or punishment, people narrow the conversation to what is safe: recover-the-day actions and defensible explanations.

Changing this is hard because governance doesn’t fail on design; it fails on stickiness. Under pressure, teams revert to behaviors that keep the shift intact. And once people learn that structural topics go nowhere, or make life harder, they stop bringing them.

Making it stick requires behavior installation. The meeting ladder has to be explicit, with a clear decision expected at each rung, and a few non-negotiables held in the room: clear questions, explicit owners, closure checks, and disciplined handoffs when the horizon or authority isn’t present. Follow-through must remain visible so governance becomes a system that remembers, not a conversation that resets daily.

This works best with a deliberate handoff. The behavior is modeled early, then leaders run the forum, a temporary Shadow Chair holds the discipline that prevents reversion, and outside support tapers as the habit becomes self-sustaining. If leadership is the constraint (bad news is punished, accountability is refused, recurrence is tolerated), no meeting architecture will hold. Coaching can close a skill gap; it cannot replace willingness.

The first impact is stability: fewer repeat disruptions, fewer “known issues” managed indefinitely, and less time spent coordinating around predictable loss. Over time, that stability becomes capacity, because the plant spends more of its attention preventing recurrence and less of it re-organizing the same day.

Misconception 2: “Escalation is a sign of failure.”

Plants prize ownership. Problems are supposed to be solved where they occur, by the people closest to them. That instinct is healthy until containment turns into improvisation. When uncertainty stops shrinking and the plant keeps running, “ownership” becomes paying for learning at scale.

Escalation is a mode switch: route a stubborn problem to the capability that can shorten time-to-truth, and change how the plant tests so uncertainty stops turning into scrap, lost capacity, and service risk. Repeated use does something more important than a single save. It builds a detection-and-response muscle: triggers get clearer, handoffs get faster, and the organization starts reacting to drift instead of waiting for failure. Early on, escalation limits damage. Over time, it becomes prevention because the threshold moves: teams escalate on the first signal that uncertainty isn’t collapsing, not after the third full-batch miss.

A muffin producer learned the distinction the hard way. Muffins left the oven looking normal, then collapsed on cooling. The team ran a batch and saw the failure. Someone spotted an obvious explanation: the wrong recipe. They corrected it and ran again. The second batch collapsed the same way.

That should have triggered a method change. Instead the plant stayed in production mode: full batches, parameter tweaks, and shift-to-shift handoffs of theories. Day shift pointed to cooling speed and oven temperature. Night shift inherited “try variations,” adjusted within limits, and watched the collapse repeat. By morning the loss wasn’t just scrap; it was capacity, time, and a threatened schedule.

Only then did the method change. Operations tried smaller tests. R&D came to the floor, then into the lab where variables could be isolated. They suspected an ingredient or utility effect but couldn’t pin it down. Eventually the supplier’s technical team was pulled in. The cause was simple: site water pH had drifted low enough to destabilize the product. Correcting the pH restored stability.

The point isn’t that the plant “should have known” pH. The point is that the site stayed inside a local loop that couldn’t converge, long enough for a technical question to become an operational one.

Late escalation usually isn’t a single mistake. It has multiple roots, and they sit at different depths.

Sometimes escalation isn’t a real operational concept on the floor. People may have the word, but not the model: what conditions qualify, what escalation is intended to trigger, and what changes when it happens. Without that, teams don’t “refuse” escalation; they keep working because that’s the only visible option.

More often escalation exists, but it’s vague. “Use judgment” becomes policy, and escalation becomes personality and habit. The confident operator tweaks longer. Supervisors keep running full batches because stopping feels like defeat. Hypotheses get handed off because handing them off preserves motion.

Then come the penalties. Escalation creates extra work, documentation, coordination, disruption, and personal risk if it’s treated as exposure: an invitation to explain what you missed, publicly, under time pressure. If escalation reliably produces hassle, scrutiny, or blame, people delay until they can defend themselves.

The most corrosive penalty is learned futility. Plants escalate and nothing changes. Help arrives late. Decisions stall. Over time, people stop escalating early not because they don’t care, but because early and late feel identical: both bring hassle, neither reliably brings closure.

Fixing this isn’t primarily about stricter rules. It’s about building a process that reliably changes method when uncertainty persists and removing the penalties that keep people from using it. That process needs a clear trigger and a defined method shift, and because these events are low-frequency, it has to be rehearsed, not just documented.

Leaders have to make escalation safe and decisive: fast support, learning without blame, and reliable closure. Otherwise the floor will delay until the loss is already sunk.

Escalation isn’t a sign of failure. It’s insurance against low-probability, high-loss events, the ones that don’t happen every week, but when they do, they destroy capacity, service, and confidence. Plants that build this muscle don’t just reduce scrap. They start catching weak signals while there’s still room to pause, test, and recover, before the plant has paid for the answer in full batches.

Misconception 3: “Shift handovers are for attendance checks.”

Most plants accept the baseline: night shift underperforms. Scrap is higher, recoveries are slower, variability is wider. The reasons are familiar: more call-outs, thinner benches, fatigue, lighter maintenance and quality coverage. They’re all real. The mistake is treating those constraints as permission to stop designing.

Two patterns usually travel together. Nights drift. Then day shift gets hit by failures that were seeded overnight. Plants file these as separate events (“night had a bad run,” “day had a quality incident”), but they’re often one chain in a coupled operation.

The mechanism is continuity failure. Operating state isn’t carried across the boundary, so each shift restarts from partial truth. “State” is practical: readiness conditions, constraints, open loops, containment actions, equipment condition, and pending decisions. When those don’t transfer, the incoming shift reconstructs the situation on the fly. Nights pay more because fewer resources exist to collapse uncertainty quickly.

Day supervisor and incoming night leader meet for overlap. The time disappears into staffing: two absences, who covers the mixer, whether a trainee can run a critical step, how to float an operator without breaking another station. The plant uses its continuity window to assemble a roster.

Meanwhile, the night’s run has a hard constraint. The butter was pulled from cold storage late and needs time to temper. If they start mixing immediately, texture risk spikes. That fact never makes it across the handover.

The roster gets settled and day leaves. Night starts the mixer on schedule without that constraint in view. Texture fails; the first batches are scrapped. The run began from the wrong start condition.

Once continuity is weak, the night/day pattern follows. Night is a different configuration: thinner diagnostic capability, slower access to maintenance and quality judgment, fewer people who can separate noise from a signal worth stopping for. Open loops migrate forward: a marginal parameter, an unresolved equipment behavior, a containment action without closure, a decision deferred. Day shift encounters the risk under higher throughput pressure, when options are narrower and the consequence becomes service.

The night/day divide then hardens into behavior. Day blames night. Night expects blame and softens constraints into vague notes and “watch items.” Day learns the handover is incomplete and rebuilds its own picture, discounting what it receives. If surfacing issues doesn’t lead to closure, people stop surfacing them early. Trust drops, transfer gets thinner, and “that’s just how nights are” becomes the accepted explanation.

There are two valid designs for nights, depending on the objective.

If nights are mainly for risk control, schedule to the capability you have. Put high-ambiguity work (new materials, tight specs, heavy changeovers, engineering-dependent setups) where support exists. Use nights for stable runs. Day shift’s job is to remove uncertainty before handoff: stage, verify readiness, lock the plan, and carry open loops to a clear status so night isn’t improvising at startup.

If the plant needs the capacity, raise night capability on purpose: rotate your best operators into key roles, train against night losses (startups, changeovers, chronic micro-stops), and build coverage that isn’t one absence away from chaos. Give nights clear stop rights and a support route that responds (duty engineer/remote windows/trigger list). If you can’t staff engineers at night, package engineering into the day with standards, approved ranges, and simple playbooks.

The boundary needs a lightweight mechanism that doesn’t collapse into paperwork. Carry over only the exceptions that can change output in the next shift: blockers, active constraints, open deviations, recurring micro-stops you’re tolerating, and deferred decisions. Keep the list brutally small (~five items max) and visible. The cap does the work: it forces prioritization, prevents “watch it” from becoming a hiding place, and keeps closure executable in minutes. Once the list grows, it stops getting read, turns into narrative, and the handover reverts to improvisation: exactly the conditions under which night drifts and day shift inherits surprises.

When plants invest to lift night capability, the payoff is not subtle. We’ve seen like-for-like night output move from roughly 50% of day shift to about 80%, a step change in capacity driven by design: capability where it’s needed, support that responds, and closure that prevents the same uncertainty from being re-paid every night.

Misconception 4: “Functional excellence leads to business success.”

Plants can stall at intersections. Cross-functional work slows down, decisions arrive late, and the same disputes return under new dates. In steady flow the interface is usually fine; specialization is doing its job. The stall is conditional: it appears when conditions are ambiguous or degraded and a trade-off has to be chosen quickly.

That is the intersection problem. The plant has not defined how trade-offs are decided when objectives collide: who can decide, what constraints apply, and how accountability is carried.

Inside a function, ownership usually leads to action. At an intersection, ownership runs into limits because the decision is not “do the task.” It is an allocation decision: which objective yields, by how much, and under what safeguards. When that allocation is unclear, decisions slow down even while activity continues: requests for approvals, demands for more evidence, and handoffs aimed at moving liability to someone else. The surface looks like bureaucracy. The underlying logic is risk management.

The intersections that break most often share a common shape: the upside is shared or delayed while the downside is local and visible. Spend now to prevent downtime later. Accept a bounded release risk to protect a shipment. Take a schedule hit to stabilize quality. Even when the enterprise case is clear, the person asked to absorb the downside has to justify it against a local scoreboard.

That scoreboard is what turns intersections into stalemates. When functional KPIs are used as judgment, they behave like veto rights. People learn what kinds of losses are punishable and which are defensible. In that environment, uncertainty becomes dangerous. “No data” becomes a shield because acting under ambiguity is treated as a personal bet. And even when data exists, it doesn’t settle the decision unless the organization has agreed what evidence counts and who adjudicates disputes; otherwise the argument shifts from the decision to the inputs.

The stall is often quiet. It looks procedural: more signatures, more analysis, more “alignment.” Sometimes it shows up later as friction (support slows down, interpretations tighten, exceptions disappear) because the intersection never produced a durable decision, only a temporary concession. Over time that becomes equilibrium: local exposure is priced higher than enterprise value.

Pressure amplifies it. Schedule stress rewards motion and punishes verification. Slow feedback raises exposure because uncertainty lasts longer. Power imbalance makes disagreement expensive; intersections turn political because challenging a strong head carries risk. Blame cultures push constraints underground until the cost is already sunk. Clear rules help, but they do not survive tolerated overrides. If leaders allow powerful functions to ignore the decision contract, the system reverts to waiting.

Fixing intersections is not “better collaboration.” It is making trade-offs decidable.

You don’t prethink everything. You pre-own the collisions that recur: release under missing conditions, downtime versus service, deviation containment versus plan, spend-now versus avoid-later, act-now versus wait-for-data. Each needs three elements. A default when conditions aren’t met, so the first hour isn’t consumed by negotiation. A bounded decider, so choices don’t depend on who is loudest or most risk-averse. And a return path, once, so the intersection learns and next time is cheaper.

Bounded decision rights are the ingredient most plants skip. Without them, the safe moves are delay or veto. With them, the plant can move without gambling: conditional operation with explicit limits, time boxes, extra checks, rollback rules. Uncertainty triggers a controlled path instead of a fight about who is allowed to act.

Credit is the other hinge. If avoided loss is treated as imaginary while visible spend is treated as real, the plant will buy fragility. Maintenance will defer replacements because it is easier to defend a lower cost line than to defend an uplift. Planning will favor aggressive targets with later explanations over realistic targets that require upfront trade-offs. Quality will default to veto when downside is personal and upside is diffuse. Those are not moral failures. They are predictable outputs of the evaluation system.

When intersections close properly (what was chosen, what it protected, what it cost, what rule changes next time), the stall shrinks. Decisions arrive earlier, reversals happen less, and time stops disappearing into waiting-for-signoff and re-litigation under worse conditions.

Strong functions are necessary. They are not sufficient. If trade-offs at intersections are not designed to be decided quickly and defensibly, functional excellence will keep competing with itself, and the plant will stay busy while the system stays slow.

Authors


Ozan Ozaskinli

Partner and Managing Director

Ozan.Ozaskinli@valuegeneconsulting.com


Eda Kocakarin

Consultant

Eda.Kocakarin@valuegeneconsulting.com


Sercan Aldatmaz

Principal

Sercan.Aldatmaz@valuegeneconsulting.com