Sunday, 6 August 2017

Quality-Value: Heuristic or Oracle?

The other week James Thomas wrote down some thoughts related to “Quality is value to some person”. I added some diverse thoughts - a mini-brain dump. This post is an expansion on one of those thoughts.

Start point
I made the assertion that the heuristic/statement (“quality is value to some person”) had been primarily used in a software testing context as a tool to illustrate that stakeholders are the gatekeepers of quality and not necessarily testers.

To me this is OK - but it is ultimately not a proactive approach to helping the team, company or organisation work out how to delight its current and potential customers with good/acceptable/superior value and quality.

Point to explore
The point I made on James’ blog that I want to expand on is:
“4. In what scope is the “quality is value to some person” used in SW testing? I don’t know if it really matches Jerry’s original intent. I think it has been used to find a responsible stakeholder to discuss test results & objectives with, and probably also to aid testers to explain that they are not the sole arbiters of quality.  
4.1. I read the intent (from Jerry’s QSM) as highlighting a relationship and a perspective (i.e. subjective rather than objective) - which by nature can’t (usually) be static. I haven’t really seen/read of anyone applying it from this perspective to SW testing. I wonder how it would look…”
So, the question I will explore: if this is a transient and subjective statement, what use can it have for software testing, or software development as a whole? I make some claims (statements and opinions) and pose some questions - partly re-anchoring the context in which the statement* could be used:
  1. Who is making the statement*?
  2. When is it useful to make the statement*?
  3. To what scope does it apply?
  4. It’s bigger than software testing

Who is making the statement*?
I assert that it makes no sense for only certain people in a development team or project to have this view (“quality is value to some person”) - therefore, this is a team or project view of quality, or preferably a team together with a product owner or even customer view on quality.

This starts to imply a synchronised view on acceptable goals for a product, or feature, or vertical slice of a feature - as a goal for the team/group rather than individuals using it as a reminder (or in the worst case a defensive and passive position).

When is it useful to make the statement*?
I assert it is a useful reminder at the start of work (goals of a product, or feature) - these may be preliminary goals used in a prototyping activity or a hypothesis on what a customer might want. Doing this at the start is establishing a common goal for the team (or project or program, etc). This is not a statement useful for gatekeeping but useful for goal alignment - alignment of subjective views if you will.

To what scope does it apply?
I assert this applies to the whole development and delivery chain. Therefore, this implies that synchronisation between development and delivery teams (or even a DevOps set-up) would be desirable. Again, the alignment is about aligning common goals, not gatekeeping.

It’s bigger than software testing
Hopefully, this point is obvious?
Applying the statement* to product development (and delivery), I assert that it soon becomes clear(?) that it concerns much more than software testing and should run through the whole chain. It doesn't need to be a defence mechanism used by testers if alignment on goals has been achieved within the team, program, organisation or company.

Flip side? From Heuristic to Oracle
Suppose individuals or teams are using the statement* as a reminder/defence mechanism to illustrate that one or more stakeholders need to take a view on quality - what could this mean? I'd interpret it as a symptom of the team/organisation and its maturity with regard to delivering synchronised development & deployment quality.

Another way to look at it: it's a canary call for silos and local optimisations. You could say it serves as an oracle for spotting an organisational problem! More on that in another post…

*Statement: "Quality is value to some person"

Tuesday, 25 April 2017

Where is Testing ...?

It's in people's nature to notice change and differences. It's also in people's nature to make assumptions (or stop and ask questions) when they expect something and, to them, it is missing.

So, when trying to optimise a number of teams working together for the benefit of customers, if something people are familiar with is not obviously there, then people can get tripped up.

Because of this, it can become difficult to discuss and explore new models or approaches - questions arise about what is missing, or rather, what is not visible, not obvious, not understood.

TL;DR -> Aim (Why): Delight Customers; Plan (How): Where to invest effort, observe and act; Execute (What): Experiments to perform and data to gather - to match the "How". These are universal "test skill" attributes - i.e. "testing" is everywhere in well-functioning product development, delivery and operation.

Consider a model for product development, ideally optimised to get feedback from users of the product and work on customer needs. This can be many software components, many software systems and many configurations. Assume the aim for the company producing a product is: to delight the customers & users of the product.

Product Inception, Development, Delivery and Monitoring

Note, that this model can be applied on a team or company level.

Potential problem: Interpretation
If 10 people look and study this model there will be more than 10 interpretations. Very often this is a result of how someone frames the object to study (or problem to solve). This can be a factor of where they "sit" in the model/company, what influence they have and what they "want" to do.

Potential question: Where's testing? 
Some people might look at the model and wonder where "testing" is done. This can be a leading question - sometimes a function of thinking of "testing" as separate from other activities, sometimes a function of what someone is used to anchoring to. Sometimes it might be a worry - how do we know the customer is getting what they asked for?

I don't see that this model reflects a particular development model, or even a type of "testing school". Conversely, I'm not sure any testing school has put in the work to support such a model (for optimising feedback from customers and working on customer needs).

Potential Approach: Find a place for testing.
One way is to find how testing contributes to each box of the model. There is a trap with this - if the whole is not also considered (or at least adjacent boxes) then this approach will tend to a local-optimisation in each box and not necessarily between boxes. It's an approach that tends to place testing in boxes - in the extreme it creates separated testing boxes. In the ultra extreme it creates a standard for SW testing detached from SW development.

Potential assumption: Specialised testing is not needed.
If you can't see it in the model then it's not needed, right? But, note - I haven't spelled out product architectural and system design. That doesn't mean they are not needed. So, that leads to the question - what is the model conveying? This model is not a WYSIATI (what you see is all there is) model - or rather, it sits above the level of practices.

Ok, so what use is such a model?

My take?
Yes, the model is on a very high level, but that's the point - use an example where it appears as though the thing you want to talk about is absent...

When I discuss a picture like the one above and a discussion about testing comes up, my approach is: "think how testing helps each of the above boxes" - or really, think about what each box contains.

To give a number of questions, for example:

Product in Use & User Feedback
- How to observe or get data about the product in use? How to get data about customer opinions, complaints or new wishes and needs?
- How to make judgements and derive opinions about the data (form hypotheses)?
- How to create experiments to gauge and observe the product performance in use or product usage?
- How to evaluate the results of those experiments, data and results?
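As one concrete illustration of "evaluate the results of those experiments": a team might compare two variants of a feature by their observed conversion rates. The sketch below uses a standard two-proportion z-test with only the Python standard library; the function name and all numbers are hypothetical illustrations, not something from this post.

```python
# A minimal, hypothetical sketch of evaluating an experiment on product
# usage: do two feature variants convert at genuinely different rates?
from math import sqrt, erfc

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Return (z, p) for H0: both variants convert at the same rate."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical usage data gathered from the product in use
z, p = two_proportion_z(success_a=120, total_a=1000,
                        success_b=150, total_b=1000)
print(f"z={z:.2f}, p={p:.3f}")  # a small p suggests a real difference
```

The point is not the statistics but the loop: the hypothesis ("variant B converts better") is stated before the experiment, and the evaluation step decides whether the data supports it.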

Product Backlog & Development
- How will observability of the product and product usage be prioritised, developed and in what circumstances?
- How should the product architecture and supporting environment look to be observable?
- How should the product architecture support (fast) feedback on product changes? (hypothesis)
- How should the supporting environment support product changes? (hypothesis)
- How should experiments be created to observe and gather data on the product, its usage and performance?
- How to understand the results and data and whether the experiments are giving data on the hypotheses?

Product Delivery
- How to observe and understand product delivery and deployment?
- How to understand if a product delivery will delight or disappoint a customer (new or old)?
- How to create experiments to gather data on product delivery and potential response from customers?
- How to evaluate the data from the experiments of product delivery? What does the experimental data indicate about product delivery and potential (or actual) customer reaction?
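One way the delivery questions above can be made concrete is a canary gate: deliver a release to a small slice of users, gather data, and only promote it if the observed error rate has not degraded. This is a hypothetical sketch - the function name, metric and threshold are illustrative assumptions, not from this post.

```python
# A hypothetical sketch of an experiment on product delivery: a canary
# gate that compares a new release's error rate against the baseline
# before rolling it out further.
def canary_gate(baseline_errors, baseline_requests,
                canary_errors, canary_requests,
                max_relative_increase=0.10):
    """Promote the canary only if its error rate is at most 10%
    (relative) worse than the baseline's."""
    baseline_rate = baseline_errors / baseline_requests
    canary_rate = canary_errors / canary_requests
    return canary_rate <= baseline_rate * (1 + max_relative_increase)

# Hypothetical delivery data: 0.6% vs a 0.5% baseline -> hold back
print(canary_gate(50, 10000, 6, 1000))   # False
# 0.5% vs 0.5% -> within tolerance -> promote
print(canary_gate(50, 10000, 5, 1000))   # True
```

Again, the mechanics matter less than the alignment: the team agrees up front what "will delight or disappoint a customer" means in observable terms, and the delivery pipeline acts on that shared definition.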

And finally.... the whole:

Product Inception, Development, Delivery and Operation
- How do we observe product usage and customer satisfaction?
- How do we create an understanding of what the customer wants and is happy with?
- How do we create experiments to understand our understanding during development, delivery and operation? Do we have consistency of hypothesis through development & delivery?
- How do we evaluate the data from development, delivery and operation to a consistent picture? Do we have data to help understand delivery to customers, customer perception, feedback to the product development teams? Do we have data to understand what can be improved?

Most of these questions are "how" questions. They are predicated on supporting a model that optimises feedback from customers and provides a product that a customer wants - the why. The "what", the implementation, is the least important - although it is still important.

Sometimes "where's testing?" questions are really about "what" rather than the purpose and meaning. This is a check observation to keep in mind.

And So....
  1. Notice, all of the above might be more recognisable as test and fact-based advocacy (observations), test and fact-finding analysis and design (hypothesis & experiments), test and fact-finding execution (experiments and iterating on experiments) and test and (qualitative) data advocacy and reporting (sense making).
  2. Notice, it is everywhere in the SW development, delivery and operations loop. You might want to be ultra-specialised and constrain your "test" advocacy-design-execution-reporting skills to a small subset of the whole. Or, you might realise that those same observation-hypothesis-experimentation-sensemaking skills are needed (and can be used) everywhere. The trick is to realise that, and then to balance the time you spend on a small subset of product development activity - whether as a team, separate team or individual - against applying those skills elsewhere.

So - the testing skill sets tied to observation, hypothesis forming, experimentation, evaluation and sense making are vitally important all through the product inception, development, delivery and operations flow! In my experience, successful teams and organisations have these skill sets in multiple places, not isolated.

Of course, if your practical skill set (or comfort zone) limits you to a small subset, you might want to work on expanding those boundaries - at least for the good of the people and teams around you.
