The duality of knowledge and control

In experimentation, you leverage control to gain knowledge; in engineering, you leverage knowledge to gain control.

I recently used this paraphrase of a Claude Shannon quote during a conversation with a team considering changes to their experimentation tooling. The framing forces you to clarify what you mean to do - something that is generally useful.

A more general take on this quote replaces "engineering" with "delivery". That is, in delivery you should be leveraging what you know to effect a change in a system - for instance, a new entry point in a product.

In most contexts it’s useful to be explicit about which mode you are operating in: delivery or experimentation. I’ve seen this done incredibly well (when I worked at a very large social network, for instance) and also very poorly (unfortunately, this is much more common).

In experimentation mode, you craft an intervention that is under your control: a change in a user flow, a new ranking model, or the color of a button. You should only change what you can deterministically control. You can’t directly account for the things outside your control, so quite often you’ll use some form of random sampling to balance them across groups. In this setting, you have leveraged what you control - the change in the user flow - to learn something new; in this case, something about how users move through your product.
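In practice, the random sampling above is often implemented as deterministic hash-based bucketing: hashing the user ID together with the experiment name gives a split that behaves like a random draw but is stable across sessions. A minimal sketch, assuming a string user ID (the experiment name and split ratio here are made up for illustration):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing user_id together with the experiment name gives a stable,
    effectively uniform split that is independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits onto [0, 1).
    position = int(digest[:8], 16) / 0x100000000
    return "treatment" if position < treatment_share else "control"

# The same user always lands in the same bucket for a given experiment.
assert assign_variant("user-42", "new-entry-point") == assign_variant("user-42", "new-entry-point")
```

Keying the hash on the experiment name means running two experiments at once doesn’t correlate their groups - a small design choice that matters when you attribute outcomes to the intervention.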

In engineering (or delivery) mode, you use what you know (sometimes learned through experimentation) to maximise an outcome you care about: less friction in a user flow, less risk in an interaction with compliance implications, and so on.

So, when experimenting:

Ideally, your experiment should express a specific hypothesis that, if true, you can immediately exploit by deploying the intervention. This is not easy or cheap; the fidelity and polish of your intervention need to be quite high, because if it is successful you will literally deploy it.

This is ideal for two reasons:

  • You get to value sooner - the time from a positive result to capturing the intervention’s potential value is shorter
  • What you learned in your experiment most closely matches reality - your results won’t be partially invalidated by changes to the intervention (when you polish it, for instance) or by the passage of time (especially relevant in adversarial environments, where actors learn to circumvent your interventions)

However, you might have a broader ‘experiment’ on your hands, such as questioning whether a product will work at all. In such cases, you are likely trying to reduce risk by investing less initially, perhaps rolling out a minimal version to test for fit. This approach can be seen as deploying a ‘minimum viable product’ (MVP). The MVP strategy is about learning from the market response to a bare-bones version of your product, rather than about experimenting in a controlled environment.

This distinction is crucial to understand. Experimentation should be designed to test specific hypotheses under controlled conditions, where variables can be managed and outcomes can be directly attributed to the changes made. This process informs your engineering or delivery strategy by providing concrete data and insights.

If you are building an MVP, don’t confuse a reduction in scope (to explore fit) **with unstable, buggy, or broken experiences**. Otherwise you’ll be unable to determine whether poor customer engagement is a response to the value your product offers, or to a poorly built experience. This is especially true in 2023.

When you build something for production:

Your focus shifts to reliability, scalability, and meeting the defined requirements. This mode is about applying the knowledge gained from experiments (and other sources) to create a robust, functional product or feature.

It’s easy, and can be quite costly, to pretend you are in one mode (experimentation) when you are actually in the other (delivery). It’s also easy to avoid this mistake! Actively try to do so!

Key Takeaways:

  • Clear Objective: Understand whether you’re experimenting to gather data or delivering a finished product, even if it’s a bare-bones one
  • Resource Allocation: Allocate time, energy and resources appropriately based on the mode you’re operating in
  • Outcome Expectation: Clearly articulate (first to yourself) expectations about the results based on whether you’re in an experimental or delivery phase
  • Learning from Experiments: Use the insights gained from experiments to inform and enhance your delivery efforts; don’t forget the negative results - log them for posterity so you can leverage what you’ve learned later

Recognizing the differences between these two modes is a hallmark of effective engineering and product teams. Remember: The goal is not just to build or test, but to do so with a purpose and a clear understanding of the desired outcome.