Stakeholder

Engaging actively. Experiencing value. Validating outcomes.



Introduction

We are stakeholders.

We engage actively. Not passively.

We participate in research. Prototyping. Refinement. Testing. Sprint Reviews. Continuous usability testing.

We collaborate. We work together.

We experience value firsthand. We validate outcomes through use.

We are customers. Users. Decision-makers. Financial sponsors. Subject matter experts. Legislators. Governance.

Each of us is different. Different needs. Different power. Different influence.

But we share common ground.

Active engagement. Evidence-based validation. Timely feedback.


On Critical Expectations & Limits

Each stakeholder has critical expectations. Quantified. Measurable.

We define what "good enough" means. For us. In numbers where possible.

Not vague wishes. Not unmeasurable hopes. Specific thresholds.

A customer expects response time under 2 seconds. Not "fast."

A decision-maker expects ROI above 15% within 18 months. Not "profitable eventually."

A user expects task completion in 3 clicks or fewer. Not "easy to use."

A financial sponsor expects operating cost reduction by 30%. Not "cost savings."

These are critical expectations. Below these limits, the product fails for us.

We also have limits. Hard constraints.

A legislator defines maximum data retention at 90 days. Not "reasonable retention."

Governance requires security audit completion within 5 days of incident. Not "promptly."

A subject matter expert limits technical debt to 15% of total codebase. Not "manageable."

These limits are non-negotiable. They represent boundaries we cannot cross.

We quantify both. Expectations and limits. We make them measurable.

We communicate them early. We validate against them continuously.

Teams need to know our critical thresholds. Our hard constraints.

Not after building. During discovery. During refinement. Throughout.

Quantification creates clarity. "Fast" means nothing. "Under 2 seconds" means everything.
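One way to see why quantification matters: measurable thresholds can be checked mechanically. The sketch below is illustrative only; the metric names and numbers mirror the examples above and are not part of any standard.

```python
# Hypothetical sketch: critical expectations expressed as measurable checks.
# Metric names, measured values, and limits are illustrative assumptions.

critical_expectations = {
    "response_time_seconds": {"measured": 1.7, "limit": 2.0, "direction": "max"},
    "roi_percent_18_months": {"measured": 18.0, "limit": 15.0, "direction": "min"},
    "clicks_to_complete_task": {"measured": 3, "limit": 3, "direction": "max"},
    "operating_cost_reduction_percent": {"measured": 32.0, "limit": 30.0, "direction": "min"},
}

def meets(check):
    """Return True when the measured value satisfies the threshold."""
    if check["direction"] == "max":
        return check["measured"] <= check["limit"]
    return check["measured"] >= check["limit"]

for name, check in critical_expectations.items():
    status = "meets" if meets(check) else "FAILS"
    print(f"{name}: {status} (measured {check['measured']}, limit {check['limit']})")
```

"Fast" cannot be checked this way; "under 2 seconds" can. That is the whole point of quantifying.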


On Active & Intentional Engagement

For successful adoption, regular intentional interactions between stakeholders and teams are crucial.

Not status meetings. Not presentations. Not reports.

Collaborative sessions. Working together.

We participate in research sessions. User interviews. Observation studies. Discovery workshops.

We review prototypes. Low-fidelity. High-fidelity. We provide feedback early.

We engage in refinement. We help clarify needs. We answer questions. We reduce ambiguity.

We test Increments. Hands-on. Real usage. We validate whether our jobs get done.

We participate in Sprint Reviews. As collaborative working sessions.

We inspect the Increment and the learnings from experiments, result feedback, and side effects.

We provide real feedback. We collaborate on what comes next.

Not status reports. Not PowerPoint presentations. Working sessions.

We engage in continuous usability testing. Throughout development. Not just at the end.

We observe. We experience. We report. We help teams learn.

We bring openness. Transparency. We share what we see. What we experience.

We embrace uncertainty. We don't demand false certainty where none exists.

We recognize that engagement often reduces risk. More effectively than planning alone.

Regular interaction surfaces problems early. Creates shared understanding. Reduces surprises.

Our active engagement is risk management. Through collaboration. Through transparency.

This is active engagement.


On Needs, Limits, & Context

We have needs. Wants. Expectations.

We have limits. Hard boundaries. Non-negotiable constraints.

We experience satisfaction gaps.

A satisfaction gap is the difference between the experience we have now and the experience we wish we had.

We know our jobs to be done. Our struggling moments.

We articulate what success looks like for us.

We communicate our limits clearly. Regulatory boundaries. Legal constraints. Technical thresholds.

These aren't preferences. These are hard limits we cannot cross.

Sometimes we discover needs through experience. Through prototyping. Through testing.

We communicate what we expect. We make our constraints clear.

We understand trade-offs. We prioritize what matters most.

We don't demand certainty where none exists.

We embrace empiricism. Evidence over assumptions.


On Value Experience

We use the product. We experience it directly.

We validate value through actual use. Not assumptions. Not proxies.

We observe whether our jobs get done better and whether our "struggling moments" become less of a struggle.

We notice changes in our work or life.

Value is what we experience. Not what teams assume. Not what reports claim.

We provide evidence. Value realized or not realized.

We confirm whether outcomes meet our needs. Through observation. Through measurement.

The best validation is our actual experience. Result feedback.

We notice unintended consequences. Side effects. Both positive and negative.

We report unexpected results. We help teams understand full impact.

We discover what we didn't know we needed. Through prototypes. Through testing. Through use.

We learn what's possible. We adapt our expectations informed by evidence.

Value remains an assumption until we confirm it.

Through observation. Through measured outcomes. Through experience.


On Influence, Power, and Behavior

We make choices. We approve. Select. Authorize.

We decide what to fund or support.

We choose among options presented to us.

We use our power appropriately. Not to command. To enable.

We model the behavior we want to see.

If we want teams to embrace uncertainty, we embrace it first.

If we want teams to be transparent, we're transparent first.

If we want teams to learn from failure, we demonstrate learning from our own failures.

If we want teams to collaborate, we collaborate actively.

Our behavior matters. Teams watch what we do more than what we say.

We advocate for what matters.

We shape direction through our voice and choices.

We embrace continuous adaptive strategy for competitive advantage.

Not rigid long-term plans. Adaptive strategies that evolve with learning.

We don't demand fixed roadmaps when markets shift rapidly.

We fund experiments. We support hypothesis-driven planning.

We adapt our strategies based on evidence and emerging opportunities.

Continuous adaptation isn't indecision. It's intelligent response to reality.

We understand our limits.

Legal boundaries. Regulatory constraints. Organizational policies.

We communicate these constraints clearly.

We don't expect teams to guess our constraints.

We provide funding when we can. Access. Expertise.

We remove obstacles within our power.

We enable teams to succeed.

We don't micromanage how teams work.

We respect their professionalism.


Types of Stakeholders

Customers

We purchase or select the product.

We may be buyers. Decision-makers. End users.

Sometimes the customer is not the same as the end-customer.

B2B2C. B2B2B.

Our satisfaction matters. We pay for value.

Users

We interact directly with the product.

We experience it firsthand.

Our feedback is crucial for ongoing improvement.

We may or may not have purchasing power.

But our adoption and engagement are essential.

Sometimes the user is not the same as the end-user.

Decision-Makers

We have authority to approve, select, or authorize adoption.

We evaluate options. We make final calls.

We consider needs of users and broader organization.

We may or may not use the product ourselves.

But our choices directly impact which Products are adopted.

For successful Scrum adoption, it's often better to proceed with imperfect information and capture emerging result feedback.

Financial Sponsors

We provide funding and resources.

We assess viability, value, and feasibility.

We invest informed by potential to deliver continuous value.

We influence Product vision, strategy, and goals.

We ensure alignment with return on investment.

We support long-term sustainability.

For successful Scrum adoption, a flexible attitude and flexible funding matter as new information comes to light.

Subject Matter Experts

We contribute deep knowledge or unique skills.

Technology. Design. Compliance. Specific domains.

We support usability, feasibility, professionalism.

We help address satisfaction gaps.

We don't get in the way of self-managing Scrum Teams.

For successful Scrum adoption, regular transfer of learning from subject matter experts to and across the Scrum Team is crucial.

Legislators

We establish rules, policies, and boundaries.

External or internal policy makers.

We define legal, regulatory, or organizational frameworks.

We shape how Products operate.

We ensure alignment with mandates.

For successful Scrum adoption, it's crucial not to exaggerate or underestimate legal requirements.

Governance

We provide structures, standards, regulations, norms, processes, and practices.

We guide Product direction, decision-making, and accountability.

We foster transparency. We guide adherence to standards.

We provide mechanisms for managing risks.

We support Product adaptation to changing needs.

For successful Scrum adoption, governance must be coherent with Scrum.

Single point of contact per governance area. Intentional interactions around quality and compliance. Regular inspection and adaptation of governance. No surprises.


On Working with Teams

We don't expect teams to read our minds.

We communicate clearly. We engage regularly. Actively.

We show up for research. For prototyping reviews. For refinement sessions.

We test Increments. We participate in Sprint Reviews as working sessions.

We provide feedback quickly. While it matters.

We validate value through our actual experience.

Not through proxies. Not through status reports. Not through presentations.

We work alongside teams. Not above them. Not distant from them.

We respect Product Owner decisions about what gets built.

We respect Product Developer professionalism about how it's built.

We respect the facilitation style of the process.

We understand that teams need to inspect and adapt.

Plans change. Requirements evolve. We learn together.

We don't demand certainty where none exists.

We support an empirical approach. Evidence over opinion. Experience over assumption.

We participate in Sprint Reviews as collaborative working sessions.

Not status meetings. Not executive reviews. Not approval gates.

We inspect the Increment and the learnings from experiments, result feedback, and side effects.

We provide real feedback. We collaborate on what comes next.

We avoid Taylorist behaviors. Command-and-control. Status reporting. Approval hierarchies.

We embrace adaptive behaviors. Collaboration. Evidence. Inspection. Adaptation.

We reduce distance between those who present problems and those solving them.

We're willing to be close to the work.


On Being Good Stakeholders

We show up. We engage actively.

We participate in research. In prototyping. In refinement. In testing.

We engage in Sprint Reviews as working sessions. Not status meetings.

We participate in continuous usability testing.

We provide timely, honest feedback. Evidence-based. Experience-based.

We make our needs and constraints clear.

We validate value through our experience. Not through reports. Not through proxies.

We support teams with our influence and resources.

We respect professional boundaries. We don't micromanage.

We embrace learning and adaptation. We don't demand false certainty.

We understand we may not always know what we need.

We discover through prototypes. Through testing. Through use.

We're open to learning what we didn't expect.

We trust teams to solve problems well.

We provide direction, not solutions. Context, not commands.

We measure success by outcomes we experience. Not outputs teams produce.

We recognize teams need psychological safety to do their best work.

We don't punish experiments that fail. We share the learning back.

We build and protect trust. The kind of trust that comes from openness. Healthy collaboration. Courage. Commitment. Respect without blame. Focus. Humility. Caring about collective results.

We avoid Taylorist behaviors. Status meetings. Approval gates. Command hierarchies.

We embrace adaptive behaviors. Collaboration. Evidence. Transparency.

We understand adoption reduces distance between those who present problems and those solving them.

We're willing to be close to the work. To engage actively. To collaborate meaningfully.

This is what good stakeholders do.


On Our Responsibility

Value creation requires effective, constructive collaboration.

We're part of that collaboration. Active participants.

We can't delegate our engagement to others.

Status reports don't replace our direct experience.

Presentations don't substitute for our validation.

We need to show up ourselves. Actively.

We participate in research. In prototyping. In refinement. In testing.

We engage in Sprint Reviews. As working sessions.

We test continuously. Throughout development.

We provide our own feedback. Evidence-based. Experience-based.

We make our own decisions. Informed by our experience.

We're accountable for articulating our needs. Clearly. Completely.

We're accountable for our part of delivery.

Business readiness. Prerequisites. Dependencies. Blockers. Uncertainty.

These are our responsibilities. Not the team's.

Teams are accountable for delivering valuable outcomes.

But we're accountable for validating value.

For confirming whether outcomes meet our needs.

For providing evidence that helps teams learn.

For creating the conditions where teams can succeed.

Without our engagement, teams work in the dark.

Without our feedback, teams make assumptions.

Without our validation, teams build the wrong things.

Without our accountability for our part, teams fail despite doing good work.

With our active engagement, teams can deliver what truly matters.

With our timely feedback, teams can adapt quickly.

With our evidence-based validation, teams can learn and improve.

With our accountability for our part, delivery succeeds.

This is our responsibility.

Active engagement. Evidence-based feedback. Experience-based validation.

And accountability for business readiness, prerequisites, dependencies, and embracing uncertainty.

Not status meetings. Not approval gates. Not passive observation.

Collaboration. Evidence. Adaptation. Accountability.


On Stakeholder Accountability

Delivery is not all on the team. We have our part to play.

We're accountable for business readiness planning.

Are systems ready? Are processes in place? Are people trained?

Teams can build the product. But if the business isn't ready to receive it, value fails.

We're accountable for putting business prerequisites in place.

Legal approvals. Compliance clearances. Infrastructure changes. Vendor contracts.

These are not team responsibilities. These are stakeholder responsibilities.

We're accountable for resolving crucial dependencies and blockers in a timely manner.

When teams surface blockers, we act. Quickly.

Not "we'll look into it." Not "I'll escalate."

We resolve. We remove. We enable.

Dependencies outside the team's control? Our responsibility to address.

We're accountable for embracing uncertainty.

We embrace outcome-oriented roadmaps. Not feature lists. Not delivery dates.

We embrace hypothesis-oriented planning. Not commitments to solutions.

We understand that outcomes are uncertain. We can't control them.

But we can run experiments. We can test hypotheses. We can learn.

We don't demand certainty from teams. We provide certainty of support.

Certain we'll fund learning. Certain we'll act on blockers. Certain we'll adapt as we learn.

We're accountable for our decisions and their consequences.

We chose to fund this work. We chose this direction. We chose these priorities.

If outcomes don't materialize, we own that. Not the team.

Teams are accountable for delivery quality. For professional work. For learning.

We're accountable for business readiness. For prerequisites. For dependencies. For enabling conditions.

Different accountabilities. Both essential.

Delivery success requires both.


On Continuous Adaptive Strategy

We embrace continuous adaptive strategy. For competitive advantage.

Strategy is not a plan. Strategy is a direction of travel. Continuously adapted.

Markets change. Competitors move. Customer needs evolve. Technology advances.

Static strategy fails. Adaptive strategy wins.

We don't create five-year plans. We create adaptive strategic direction.

We set direction. We run experiments. We learn. We adapt.

We use evidence to inform strategic choices. Not opinions. Not assumptions.

What worked last quarter may not work this quarter.

What works for competitors may not work for us.

We test our hypotheses about strategy. We validate through evidence.

We adapt strategy based on what we learn.

Continuously. Not annually. Not quarterly. Continuously.

This creates competitive advantage.

Competitors with static plans can't adapt fast enough.

We adapt faster. We learn faster. We move faster.

We embrace uncertainty in strategy. We don't pretend to know the future.

We create options. We test options. We keep what works.

We embrace outcome-oriented strategic roadmaps. Not feature-based plans.

Outcomes we want to achieve. Hypotheses we want to test. Experiments we want to run.

Not commitments to specific solutions. Not detailed delivery schedules.

We measure strategic success by outcomes achieved. Not plans followed.

Did we achieve the outcome? Did we create competitive advantage?

Not: Did we stick to the plan?

We provide certainty of strategic support. Not certainty of strategic plan.

Certain we'll fund learning. Certain we'll adapt based on evidence. Certain we'll move quickly.

Continuous adaptive strategy requires courage. To change direction. To admit we were wrong.

It requires humility. We don't know everything. The market will teach us.

It requires evidence. Data over opinion. Observation over assumption.

It requires speed. Adapt quickly. Move quickly. Learn quickly.

This is how we create and maintain competitive advantage.

Through continuous adaptation. Through evidence-informed strategy. Through speed.

Static plans in dynamic markets fail. Adaptive strategy wins.


What We Don't Do

We don't attend status meetings. There aren't any.

We don't wait for perfect information before engaging. We engage early. Continuously.

We don't accept vague requirements. "Fast" isn't specific. "Under 2 seconds" is.

We don't hide behind intermediaries. We engage directly with teams.

We don't punish learning. We don't blame experiments that fail.

We don't demand certainty in complex environments. We embrace empiricism.

We don't confuse outputs with outcomes. Delivered features aren't validated value.

We don't create approval bottlenecks. We collaborate in flow.

We don't use our power to command. We use our influence to enable.

We don't measure success by adherence to plan. We measure by outcomes achieved.

We don't pretend to know everything upfront. We learn through discovery.

We don't treat teams as order-takers. We respect their professionalism.

We don't skip research, prototyping, or testing to "save time." These activities reduce risk.

We don't ignore side effects. We notice them. We report them.

We don't validate assumptions with more assumptions. We validate with evidence.

We don't separate discovery from delivery. Both happen together.

We don't postpone engagement until the end. We engage from the start.

We don't confuse activity with progress. Busy doesn't mean valuable.

We don't build walls between problem-presenters and problem-solvers. We reduce distance.

We don't blame teams for our failures. If business isn't ready, that's on us.

We don't expect teams to resolve our dependencies. That's our job.

We don't demand feature-based roadmaps when uncertainty requires hypothesis-based planning.

We don't push all accountability onto teams. We own our part of delivery.

We don't create static five-year plans. We embrace continuous adaptive strategy.

We don't stick to plans when evidence suggests a better direction. We adapt.

We don't measure strategic success by plan adherence. We measure by competitive advantage achieved.

This is what we don't do.


Closing Thoughts

We are stakeholders.

Each of us is different. Different needs. Different power. Different influence.

But we share common ground.

We all want Products that deliver value.

We all need to engage actively. Intentionally.

We all validate through our experience. Not through assumptions.

We all have responsibility to collaborate well.

We all embrace continuous adaptive strategy for competitive advantage.

Successful adoption depends on us.

Active participation in research, prototyping, refinement, testing.

Engagement in Sprint Reviews as working sessions.

Continuous usability testing throughout development.

Timely feedback informed by evidence and experience.

Clear communication of needs, limits, and constraints.

Validation of value through actual use.

Accountability for our part of delivery.

Embracing uncertainty and outcome-oriented roadmaps.

Continuous adaptation of strategy based on evidence.

We're not passive recipients of deliverables.

We're not approval gates in a command hierarchy.

We're active participants in value creation.

We're collaborators in discovery and delivery.

We're validators through experience and evidence.

We're adapters of strategy for competitive advantage.

Our engagement shapes outcomes.

Our feedback enables adaptation.

Our validation confirms value.

Our adaptive strategy creates competitive advantage.

This is the stakeholder role.

Engaging actively. Experiencing value. Validating outcomes.

Creating stakeholder (and other) outcome improvements.
