The Only Three Best Practices
Experimentation, learning, and situational awareness will take you most anywhere you want to go.
For years, I have had a quote living in my head rent-free: “There are only three best practices: learning, experimentation, and situational awareness.”
[ I attribute this quote to Donald G. Reinertsen, but I can’t find the exact reference. Everyone should read “Managing the Design Factory” and “Principles of Product Development Flow: Second Generation Lean Product Development” at least once. ]
Best practice, you say?
I like using Dave Snowden’s Cynefin framework to disabuse leaders of the fantasy that best practices work anywhere through the simple act of adoption. It describes which kinds of practices work as complexity increases.
Cynefin describes five domains of complexity:
Clear - Events are known, understood, and predictable. Clear activities, processes, and systems can often be identified as toil and automated into extinction.
Complicated - Things are a little more tricky. There are more rules, regulations, and governance considerations. Expertise and analysis factor much more for problems and situations in this domain.
Complex - It’s essentially impossible to predict cause-and-effect relationships in this domain. Complex is where you must experiment and develop constraints enabling said experimentation. Come with an informed opinion, but be ready to be wrong and try again.
Chaotic - All hell has broken loose. There’s no predicting, only preparing. Think natural disasters. You can only act, get behind the blast shield, and hope nothing blows up.
Confused - The area in the middle. You have no idea what domain of complexity you’re in. Unpack subsystems and categorize them.
A common problem is people naively operating as if they’re in the clear domain.
Anything involving groups of people or distributed systems has a strong pull toward the complex domain—the power grid isn’t always predictable, servers crash, and people flake. These examples underscore that systems are composed of subsystems and relate to other systems, each of which can live in its own domain of complexity.
Back to the problem: if a group selects best practices thinking they’re in the clear or complicated domain when they’re actually in the complex or chaotic domain, where being experimental and adaptive is essential, their system will quickly become confused.
[ Want to go deeper? Dave Farley recently did a nice deep dive into Cynefin on his YouTube channel. ]
TDD and The Big Ball of Mud
I spent years training and coaching software development teams in Test-driven Development (TDD).
I’d show up to a client whose manager wanted their team to adopt TDD. At that point, they had already purchased a two-day workshop. It was a good workshop, full of mindfully designed exercises to elevate awareness of the practice and overcome skepticism through hands-on practice. I did this for years and mastered this tiny universe of my own curation.
After the workshop, it was common to follow up with the teams, helping to adapt the practice to their situations, code, etc. The majority of the time, I’d get a pretty rude awakening. Their code was legacy—very few tests, highly coupled, and hard to understand and navigate.
TDD is a design tool. Specifically, it’s a design feedback system that pairs extremely well with emergent design: you use TDD to design programs in a bottom-up, emergent style. But what I found were systems that had already been designed. They were envisioned top-down and thrown over a wall by a solution architect or some other technical alpha, then inherited by a team acting as a group of soloists with no shared, explicit design strategy. Over the years, these systems slowly, persistently, and unknowingly grew into a big ball of mud. From the authors of the pattern, Brian Foote and Joseph Yoder:
A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated.
Do you see the problem here?
TDD is a design approach that works best in Cynefin’s clear and complicated domains. It’s not the first place to start when your codebase is complex. Getting into a flow with TDD on a big ball of mud often requires a lot of refactoring. More relevant techniques exist to start experimenting with, including introducing a testable facade/seam, Strangler Fig, or Characterization Testing.
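Characterization Testing deserves a quick illustration. The sketch below is hypothetical (the function and its “discount” rule are invented): instead of asserting what the code *should* do, you record what it *currently* does, giving you a safety net for the refactoring a legacy codebase demands.

```python
# A minimal characterization-test sketch. `legacy_price` is a hypothetical
# stand-in for an untested legacy routine; the tests pin down its *current*
# behavior, right or wrong, so we can refactor safely.
import unittest

def legacy_price(quantity, unit_cost):
    # Imagine tangled legacy logic we don't fully understand yet.
    total = quantity * unit_cost
    if quantity > 10:
        total *= 0.9  # an undocumented bulk discount, discovered by observation
    return round(total, 2)

class CharacterizationTests(unittest.TestCase):
    """Each assertion records observed behavior; it's a safety net, not a spec."""

    def test_small_order(self):
        self.assertEqual(legacy_price(2, 3.50), 7.00)

    def test_bulk_discount_kicks_in(self):
        # Observed: orders over 10 units get 10% off.
        self.assertEqual(legacy_price(20, 1.00), 18.00)

if __name__ == "__main__":
    unittest.main()
```

Even if a pinned behavior turns out to be a bug, the test has done its job: any refactoring that changes the output now fails loudly instead of silently.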
TDD is about simple (clear) design, building up components one failing test at a time. Big Ball of Mud is an adjunct architectural style (complex) that often emerges from a lack of an explicit design strategy. These are very different situations requiring very different practices.
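To make “one failing test at a time” concrete, here is the TDD loop in miniature, using an invented `slugify` function. The order of operations is the point: the test comes first and fails (red), the simplest passing implementation follows (green), then you refactor under the test’s protection.

```python
# The TDD loop in miniature. `slugify` and its behavior are invented for
# illustration; what matters is the sequence: test first, then code.
import unittest

# Red: this test is written before `slugify` exists, so the first run fails.
class SlugifyTests(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: the simplest implementation that passes the test.
def slugify(title):
    return title.lower().replace(" ", "-")

# Refactor: with the test as a safety net, improve the design,
# then write the next failing test to grow the component.

if __name__ == "__main__":
    unittest.main()
```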
How useful do you think that two-day TDD training was when the team couldn’t put that learning to immediate use?
Experimentation
In my TDD example, I mentioned that we discovered we’d have to experiment with different practices to bring a legacy codebase into a state where we could design new functionality test-driven. These experiments are categorically technical.
Product experiments are also important. Does this feature create a user behavioral outcome aligned to a business objective? Does anyone care about this functionality? Does it solve a problem? Would a large enough population of customers hire this tool to perform a job they need done? Which color or copy yields a higher percentage of clicks on this CTA?
Mature product engineering teams think in experiments, whether they’re experimenting with product ideas or their workflow. Experimentation is their default behavior. Sure, sometimes there will be intuitively good things to do that don’t need experimentation, but the standard behavior is “let’s test that idea.”
Once groups establish this mindset, usually through practice, they’ll start peeling back deeper dimensions of experimentation, such as cost and time. “How might we test this idea cheaply and quickly?”
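As a toy illustration of “cheaply and quickly,” here is how the earlier CTA question might be checked after a short split test. All counts are made up, and the two-proportion z-score is just a standard quick gut-check on whether an observed lift is noise.

```python
# A toy sketch of a cheap, quick product experiment: which of two CTA
# variants gets more clicks? The counts are hypothetical.
from math import sqrt

def conversion_rate(clicks, views):
    return clicks / views

# Hypothetical results from a short split test.
a_clicks, a_views = 120, 2400   # variant A
b_clicks, b_views = 150, 2400   # variant B

p_a = conversion_rate(a_clicks, a_views)
p_b = conversion_rate(b_clicks, b_views)

# Two-proportion z-score: a rough check on whether the lift is real.
p_pool = (a_clicks + b_clicks) / (a_views + b_views)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_views + 1 / b_views))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
```

A z-score near 1.9, as here, is suggestive but not conclusive; the cheap answer might be to run the test a little longer rather than commission a bigger study.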
Welcome to Uncertainty
A willingness to experiment comes with the acknowledgment of uncertainty. It’s a profound paradigm shift. In Cynefin terms, you’re starting to operate from a complex domain. More bluntly, you’re trading hubris for humility.
It’s not that you have bad ideas. You will have bad ideas. It’s that you’ve constructed a system that rejects bad ideas quickly so the good ones can attract more attention and investment!
Introducing people to the experimental way of living is harder than you’d think. I’ve been lucky to lead teams whose members were willing to experiment (influencing hiring helps). That said, I find there’s a limit to how much experimentation people will tolerate. Lack of follow-through is a good signal that you’ve hit that limit. My best advice is to start small and increase the experimental ceiling over time.
Learning & Practice
Experiments yield learning. Either our experiment worked, or it didn’t. Sometimes, we can pivot based on a better-informed hunch. Other times, we need to go back to the drawing board.
Keeping a tight loop with small and cheap experiments increases our likelihood of finding something that works. This workflow works equally well for product and engineering challenges.
The journey from learning to behavior change gets slept on a bit. I like what David A. Garvin wrote in his article “Building a Learning Organization” in Harvard Business Review:
A learning organization is an organization skilled at creating, acquiring, and transferring knowledge, and at modifying its behavior to reflect new knowledge and insights.
Learning that doesn’t impact behavior tends to evaporate. Someday, it’ll condense back into drops of sweet, sweet wisdom, but for now, it’s lost.
I think of your standard team retrospective, where new insights or observations manifest as cathartic moments. We might find agreement or closure, but what do we do about these signals? Many times, we stop at acknowledgment at the expense of follow-through. I’ll bet you that insight comes back up in a few months.
Learning has a close sibling: practice. When we find something out, we may want to repeat it or avoid it. That involves changing our behavior.
When we decide to act on an observation or insight, we might practice with a clear goal and intentional rigor. Consciously executing an experiment aimed at improving performance, with regular periods of reflection, is called deliberate practice. It comes from the domains of psychology and sports, where a coach joins the practitioner to give feedback and stoke motivation.
Learning alone is not enough. You have to put what you have learned into action. That’s when new habits and behaviors will emerge, and that’s when mindset starts to take hold.
Situational Awareness
Situational awareness refers to a company's ability to understand and react to internal and external events and influences that could affect its current and future operations.
We implement situational awareness in our work at Nerd/Noir through our core principle, “Visual is valuable.”
It’s not enough for me to have my own picture in my head. We must share and evolve a collective picture. The data structure we prefer to work from isn’t so much a list (backlog) as a bitmap (whiteboard).
I attribute some of our collaborator teams’ success to our diligence in jointly modifying a shared map that reflects our understanding of client organization design, obstacles, needs, and so on: experienced people creating a shared picture and staying in regular communication.
We are drawn to mapping technologies that let us plan our work as a journey: C4 model (particularly with dynamic diagrams), Event Storming, Story Mapping, Impact Mapping, and Wardley Maps. There’s a map for every terrain. Once we have that map, we plot journeys with waypoints to gain feedback (through deployment or lo-fi sharing).
When we combine the artifact with the collaboration of keeping that artifact up to date and good communication, we have situational awareness. It’s never the artifact itself; it’s the communication and collaboration around it where efficacy happens.
[ I have much more to say on this topic, particularly what situational awareness means for leadership teams. Stay tuned. ]
Leverage Uncertainty
You’re probably operating in a complex environment, whether you know it or care to admit it. This means approaching practices as an emergent set of behaviors and discovering constraints. These aren’t easy tasks, but the investment in rigor pays dividends in results.
Buzzword bingo and the illusion of “I can just purchase it” will either fail or push you toward chaos as you bang your head against the wall, trying to make it work. Avoid the sales pitches. Avoid the industry hype. Think adaptation over adoption.
For leaders, work backward when trying to change something about your product or engineering organizations. What’s the result that you want? These outcomes shouldn’t be “we’re doing practice XYZ,” but they may lead you to a place where you can start experimenting and learning which practices make sense.
Discovering relevant best practices for your context requires experimentation, learning, and practice. Situational awareness is a key ingredient and a force multiplier in this endeavor. Leadership should value experimentation and create enablers in the form of strategic constraints and time to focus.
Start there, and the right practices for you will follow.