Relevant Parameters for Testing

I’m working on reporting for my Model-based Testing Workbench.  As part of that, I’ve been considering how to best store test parameters (inputs to the test and results coming out of the test).

I was just kind of going off my intuition: I need some input parameters in the model domain and in the real-world domain.  I need the same for outputs.  And then I have some expected results.  That makes five.  Do I need more?

I decided to break down the characterization into smaller groups: input/output, logical/physical and expected/actual.  If we then fill in a table with the values, we get:

                input/output  logical/physical  expected/actual
(1)             input         logical           expected
LogicalInput    input         logical           actual
(2)             input         physical          expected
PhysicalInput   input         physical          actual
ExpectedOutput  output        logical           expected
LogicalOutput   output        logical           actual
(3)             output        physical          expected
PhysicalOutput  output        physical          actual

We see that the five identified types fit in the table, and there are three rows left open, denoted (1)-(3).  Let's consider these.

(1) makes little sense in a testing setting.  It would represent an expected logical input, as opposed to an actual logical input.  We control the input values we feed into the test, so the expected value is the same as the actual value.  (1) is not relevant for testing.

(2) and (3) are expected physical inputs/outputs.  The MBT paradigm works entirely in the model domain, so we do not concern ourselves with the physical values.  We translate physical outputs to logical outputs using the output mapping (working in the opposite direction of the input mapping, which translates logical inputs to physical inputs), so we make no expectations about either physical inputs or outputs.

It would be a valid choice to instead translate actual logical outputs to expected physical outputs.  This would be useful for stubbing the implementation using a model (or generating a model-based implementation), but that has been put on the shelf for the MBT Workbench for now.  If we were to do that, the LogicalOutput in the table would no longer be relevant.  (2) would still not be relevant, as it would still be an expected input, which we control and can therefore set equal to the actual value.

Being systematic about the categorization of the variable types has given me confidence that the five types of values I have identified are all the relevant types, and it has also given insight into the consequences of reversing the output mapping.

As an amusing aside, (1) and (2) might make sense if we were given a (logical or physical) result of a service and were to try to estimate the input parameters that caused the output.
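To make the categorization concrete, here is a minimal Python sketch of how the five relevant types could be encoded along the three axes (all names here are hypothetical illustrations, not the MBT Workbench's actual representation):

    from dataclasses import dataclass
    from enum import Enum

    # The three binary axes from the table above.
    class Direction(Enum):
        INPUT = "input"
        OUTPUT = "output"

    class Domain(Enum):
        LOGICAL = "logical"
        PHYSICAL = "physical"

    class Role(Enum):
        EXPECTED = "expected"
        ACTUAL = "actual"

    @dataclass(frozen=True)
    class ParameterKind:
        direction: Direction
        domain: Domain
        role: Role

    # The five combinations that turned out to be relevant for testing.
    LOGICAL_INPUT   = ParameterKind(Direction.INPUT,  Domain.LOGICAL,  Role.ACTUAL)
    PHYSICAL_INPUT  = ParameterKind(Direction.INPUT,  Domain.PHYSICAL, Role.ACTUAL)
    EXPECTED_OUTPUT = ParameterKind(Direction.OUTPUT, Domain.LOGICAL,  Role.EXPECTED)
    LOGICAL_OUTPUT  = ParameterKind(Direction.OUTPUT, Domain.LOGICAL,  Role.ACTUAL)
    PHYSICAL_OUTPUT = ParameterKind(Direction.OUTPUT, Domain.PHYSICAL, Role.ACTUAL)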

The success of a test is just comparing all ExpectedOutputs with all LogicalOutputs, so testing goes:

        input mapping        service        output mapping
Logical Input -> Physical Input -> Physical Output -> Logical Output

and at the same time:

            model
Logical Input -> Expected Output

finishing by comparing the two end-results.
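As a sketch of that flow (assuming Python, with hypothetical names for the mappings, the service, and the model; the MBT Workbench's actual interfaces may differ), the whole test could look like:

    from typing import Any, Callable, Mapping

    Values = Mapping[str, Any]  # named parameter values

    def run_test(
        logical_input: Values,
        input_mapping: Callable[[Values], Values],   # logical input   -> physical input
        service: Callable[[Values], Values],         # physical input  -> physical output
        output_mapping: Callable[[Values], Values],  # physical output -> logical output
        model: Callable[[Values], Values],           # logical input   -> expected output
    ) -> bool:
        # Actual path: through the implementation under test.
        physical_input = input_mapping(logical_input)
        physical_output = service(physical_input)
        logical_output = output_mapping(physical_output)

        # Expected path: through the model, entirely in the logical domain.
        expected_output = model(logical_input)

        # The test succeeds exactly when the two end results agree.
        return logical_output == expected_output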
