RFC: Conveniences in CPN Tools

CPN Tools offers a very rich environment in which almost anything can be realized.  Colored Petri nets have no need for inhibitor arcs, ordered or bounded places, reset arcs, etc.  But sometimes it would be convenient if we didn’t have to jump through hoops to do something that is conceptually easy.  It would also free time and effort otherwise spent implementing inhibitor arcs for the nth time, letting modelers focus on actual model logic.

Some extensions are easy to conceptualize, some are easy to implement, and some are trivial in concept but difficult to realize in a nice, efficient way that suits colored nets.  For example, prioritized transitions (much more about those here).  Ideally, one should be able to define the priority of a transition as any expression over the variables of the transition, but this breaks a fundamental property making the CPN Tools simulator efficient, so instead we only allow closed expressions in the implementation.  Some extensions are not even easy to conceptualize.

Here, I propose some more or less syntactical conveniences.  Some of my proposals raise concerns that I would love to hear comments on, and for some I am not sure whether I capture all the cases that occur often enough in practice to be worth handling.  This is a request for comments from practitioners, theoreticians, and teachers: what do you think of these features, and how should I prioritize them?

I treat bounded places, ordered places, inhibitor arcs, reset arcs, multiset variables, and volatile transitions.  At the end I’ll sum up my concerns, but before we get there, let us go through the different concepts and the concerns each of them raises.

Bounded Places

Bounded places only allow a certain number of tokens on a place.  In its simplest form, it would be something like this:

Here, A is not enabled until B fires, as the capacity of p1 is 5, p1 already has 5 tokens, and executing A would produce another token on p1.

That is the easy non-colored case.  We could also envision multiset bounds, e.g.:

Here, B is enabled but A is not.  A would produce something making the marking of p1 exceed its bound (4`1 ++ 2`2 vs. 3`1 ++ 3`2) whereas B would stay within the bound (3`1 ++ 3`2 <= 3`1 ++ 3`2).

So far, so good.  We can generalize this even further by allowing any predicate on the place:

Here, evenNumberOfTokens is a function which returns true if the number of tokens is even.  The intuition is that the predicate must hold in the initial marking and after firing each transition, but not necessarily in the middle of firing a transition (so a transition may consume a token from p1 and produce 3 tokens, for example).  A and C are not enabled as they would make the number of tokens odd (9 for A or 7 for C).  This is clearly more general than the two previous solutions, but has the disadvantage that it breaks monotonicity.  This means that a transition can become disabled by adding more tokens to a place, and that consuming tokens from a place can enable a transition.  This can probably be worked around to retain efficiency at least to some degree.
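
To make this concrete, here is a minimal SML sketch, assuming a marking is represented as a plain list of token values (a simplification for illustration, not CPN Tools’ internal representation):

    (* Predicate over a marking represented as a list of token values. *)
    fun evenNumberOfTokens ms = List.length ms mod 2 = 0

    (* The predicate is only checked in the marking reached after a full
       step, not in intermediate markings during firing. *)
    fun stepAllowed predicate markingAfter = predicate markingAfter

    val ok  = stepAllowed evenNumberOfTokens [1, 2, 3, 4]  (* true  *)
    val bad = stepAllowed evenNumberOfTokens [1, 2, 3]     (* false *)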

This is not very colored, though.  Consider the example:

Here, the intended meaning (the syntax is not valid) is that on p1 we can only have 3 tokens (or, analogously, a multiset or predicate bound) for each process.  Thus, A { p = 2 } is not enabled, but A { p = 1 } is.  Unfortunately, places do not have a concept of variables, so it is not obvious how to do this with variables.  We could of course solve it using the general predicate, but that would be cumbersome.  An alternative would be to use general predicates and provide convenience functions, so we would write, e.g., something like:

Here, each is a function which takes a comparator and a bound and returns a predicate on pairs, checking for each group of tokens sharing the same first component that the number of tokens satisfies the comparator and the bound.  As can be seen, this would require an extensive library (separate functions for each token structure and for the three different kinds of bounds presented above).  This could be extended on an as-needed basis, though.
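
As a rough illustration of what such a library function could look like, here is an SML sketch of a hypothetical each, again over markings represented as plain lists of token pairs:

    (* each compare bound: a predicate on markings of pairs, checking
       that every group of tokens sharing a first component has an
       acceptable size. *)
    fun each compare bound ms =
        let
          val keys =
              foldr (fn ((p, _), ks) =>
                        if List.exists (fn k => k = p) ks then ks else p :: ks)
                    [] ms
          fun count p = List.length (List.filter (fn (p', _) => p' = p) ms)
        in
          List.all (fn p => compare (count p, bound)) keys
        end

    (* At most 3 tokens per process: *)
    fun atMost3 ms = each (op <=) 3 ms
    val ok = atMost3 [(1, 10), (1, 11), (2, 12)]  (* true *)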

For this construct I only have two concerns:

Concerns

  • Are general predicates needed?  They do impact efficiency (though the impact can be limited to the places where they are used).  The other bounds can be implemented using this construct, but more efficient implementations are possible.
  • Is the colored solution sufficient and useful?  Are there better ways of doing this (I assume yes, as this solution is quite kludgy)?

Ordered Places

Ordered places impose an order on the tokens of a place.  This can be queue (FIFO; first in, first out), stack (LIFO; last in, first out), or priority queue behavior.  This could look as follows:

Here, we see queues, stacks, and priority queues respectively.  Assume that in each case we execute the sequence A { n = 2 }, A { n = 3 }, A { n = 1 }.  For the queue, this would mean that we’d have to execute the sequence B { n = 2 }, B { n = 3 }, B { n = 1 }.  For the stack, we would execute B { n = 1 }, B { n = 3 }, B { n = 2 }, and for the priority queue, we’d execute B { n = 1 }, B { n = 2 }, B { n = 3 }.  We note that only the priority queue needs any configuration, namely a function comparing two elements.  We could also map each element to an integer, but that is less important.
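
The three disciplines can be sketched in SML over a simple list representation (front of the list holds the oldest token); this only illustrates the intended behavior and is not a proposed implementation:

    fun dequeue []        = raise List.Empty      (* FIFO: first in, first out *)
      | dequeue (x :: xs) = (x, xs)

    fun pop [] = raise List.Empty                 (* LIFO: last in, first out *)
      | pop xs =
          let val n = List.length xs - 1
          in (List.nth (xs, n), List.take (xs, n)) end

    fun removeFirst _ [] = []
      | removeFirst y (x :: xs) =
          if x = y then xs else x :: removeFirst y xs

    fun extractBest _    []        = raise List.Empty   (* priority queue *)
      | extractBest less (x :: xs) =
          let val best = foldl (fn (a, b) => if less (a, b) then a else b) x xs
          in (best, removeFirst best xs) end

    (* After A { n = 2 }, A { n = 3 }, A { n = 1 } the place holds [2, 3, 1]:
       dequeue yields 2, pop yields 1, and extractBest (op <) yields 1. *)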

This is of course the uncolored version, and we would like to add something like the following (we only consider the queue; the others have exactly the same issues):

In this case, we have two processes, and the intuition is that each should have its own queue, so executing A { p = 1, n = 1 }, A { p = 2, n = 2 }, A { p = 1, n = 3 } would permit any interleaving of the two traces B { p = 1, n = 1 }, B { p = 1, n = 3 } and B { p = 2, n = 2 }.  Here, anything with p = 1 does not influence anything with p = 2, as we assume that p is bound from Processes and then (p, n) is consumed from p2.

Another, equally possible interpretation is that the only interleaving is B { p = 1, n = 1 }, B { p = 2, n = 2 }, B { p = 1, n = 3 }, as (p, n) is bound from p2 and then a token is consumed from Processes.  It is important that this is resolved in a consistent manner.  For example, one could say that tokens from ordered places are always consumed either first or last and we allow searching.
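
To illustrate the first interpretation, here is an SML sketch representing the colored ordered place as a list of (process, queue) pairs; dequeueFor is a hypothetical helper, not CPN Tools syntax:

    (* Dequeue the oldest token belonging to process p, leaving all
       other processes' queues untouched. *)
    fun dequeueFor _ [] = NONE
      | dequeueFor p ((p', q) :: rest) =
          if p = p' then
              (case q of
                   []      => NONE
                 | n :: ns => SOME (n, (p', ns) :: rest))
          else
              (case dequeueFor p rest of
                   NONE            => NONE
                 | SOME (n, rest') => SOME (n, (p', q) :: rest'))

    (* After A { p = 1, n = 1 }, A { p = 2, n = 2 }, A { p = 1, n = 3 }
       the place holds [(1, [1, 3]), (2, [2])]; dequeueFor 1 yields
       n = 1, independently of process 2's queue. *)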

Even with a resolution to this, we still get problems when using more than one queue, say in this example:

Suppose that A has executed with 1, 2, 3, and B with 3, 2, 1.  What do we now choose for C?  1 because it is first on p2?  3 because it is first on p4?  2 because the average distance is lowest (though here it is the same as for the others, to keep the example simple)?  Nothing, because p2 and p4 do not have the same head?  Thus our big concerns are:

Concerns

  • How do colored ordered places behave?
  • Do we allow searching (and hence folding)?  Searching breaks the declarativeness of enabling…
  • Are ordered places searched first or last?

Inhibitor Arcs

Inhibitor arcs test that no tokens are present on a given place.  This is useful for case analysis.  In the simple form, it would be something like this:

Here A is disabled as p1 contains tokens, and B is enabled as p2 doesn’t.  Colored inhibitor arcs are fairly straightforward:

Here A is enabled as p2 does not contain a token with value 2, and B is not, as p2 does contain a token with value 1.

The concept of inhibitor arcs is quite simple, and the only difficulty is efficient implementation.  One implementation strategy would be to bind all variables and at the end test whether any tokens matching the expression on the inhibitor arc reside on the inhibited places.  This is probably not the most efficient strategy, however, as a lot of bindings could be ruled out earlier by checking the inhibited place.
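
The naive strategy can be sketched in SML, assuming the marking of the inhibited place is a token list and the compiled arc expression is a predicate:

    (* A binding is inhibited if any token on the inhibited place
       matches the inhibitor arc expression. *)
    fun inhibited matches marking = List.exists matches marking

    fun bindingEnabled otherwiseEnabled matches marking =
        otherwiseEnabled andalso not (inhibited matches marking)

    (* A is enabled: no token with value 2 on the place. *)
    val ok = bindingEnabled true (fn n => n = 2) [1, 1, 3]  (* true *)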

Reset Arcs

Reset arcs consume all tokens from a place and are useful for resetting parts of a model when a choice is made.  In the simplest case it could look like:

When A is executed, it consumes a token from p1 and all tokens from p2.  A is enabled whenever there is a token on p1, regardless of how many tokens are on p2 (including none).

Again, we would like a colored version, e.g., something like:

Here, executing A { p = 1 } would remove 1`(1, 1) ++ 1`(1, 3) from p2 and executing A { p = 2 } would remove 3`(2, 2) from p2.  We would not be able to use n in any expressions on output arcs, however, as it could have multiple values (1 and 3 in the case of A { p = 1 }).  Thus, a more sensible syntax might be:

Here, we use SML’s wildcard notation, underscore, to indicate that the value can be anything.  We could use multiset variables (see later) if we wanted to use the value.
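
For the colored case, the effect of the reset arc could be sketched in SML as a filter over a list marking; resetFor is a hypothetical helper fixing the pattern (p, _) to the bound p:

    (* Remove every token whose first component equals the bound p. *)
    fun resetFor p marking = List.filter (fn (p', _) => p' <> p) marking

    val left = resetFor 1 [(1, 1), (1, 3), (2, 2), (2, 2), (2, 2)]
    (* left = [(2, 2), (2, 2), (2, 2)], i.e., 3`(2, 2) remains. *)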

Next, we need to consider in which order arcs are evaluated.  The most natural would be to evaluate first input arcs (preconditions), then reset arcs, and finally output arcs (postconditions).  Consider this example:

Executing A would here require at least one token on p1.  p3 would also need at least one token, and after executing A, it would have no tokens.  The extra arc from p3 to A consumes one token and the reset arc the remaining.  p2 would not need any tokens, but after executing A, it would contain one.  So far, so good.
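
The proposed order can be made precise with a small SML sketch of this example, treating each place as an int list and the firing of A as a function on markings (the produced token value is chosen arbitrarily in the sketch):

    (* Inputs first (one token from p1 and one from p3), then the reset
       of p3's remaining tokens, then the output on p2. *)
    fun fireA ((t1 :: p1', p2, _ :: _) : int list * int list * int list) =
          (p1', t1 :: p2, [])
      | fireA _ = raise Fail "A is not enabled"

    val (p1, p2, p3) = fireA ([10], [], [7, 8, 9])
    (* p1 = [], p2 = [10], p3 = [] *)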

The question is how to handle double arcs.  Without reset arcs, we can think of them as either test arcs or as a combination of an input and an output arc.  With reset arcs, we can no longer do that.  Consider this example:

If the double arc is a test arc, executing A would require a token on p2 (and p1) and would yield no tokens on p2, whereas treating the double arc as an input/output combination would require a token on p2 and yield one token on p2.
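
The two readings can be contrasted in a small SML sketch (place p2 as an int list, matched token value t; testReading and inOutReading are hypothetical names):

    (* Reading 1, test arc: require t on p2; the reset then empties p2. *)
    fun testReading (t : int) p2 =
        if List.exists (fn x => x = t) p2
        then SOME ([] : int list)    (* p2 ends empty *)
        else NONE

    (* Reading 2, input + output arcs: consume t, reset the rest, then
       produce t again, so exactly one token remains. *)
    fun inOutReading (t : int) p2 =
        if List.exists (fn x => x = t) p2
        then SOME [t]
        else NONE

    val afterTest  = testReading 2 [2]   (* SOME []  *)
    val afterInOut = inOutReading 2 [2]  (* SOME [2] *)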

Another thing that would be interesting is something like this example:

The intuition is that all place instances under the subpage are reset when A is executed.  This would break locality, but no more than fusion places already do.  There are a couple of concerns with this, however.  We would now allow arcs between two transitions; what happens if the subpage tag is deleted?  What is the meaning of adding an inscription to the arc (something like (n, _) from the previous example)?  Must all places on the subpage have the same type?  Or just something that matches the pattern?  Thus, the concerns for reset arcs are:

Concerns

  • Are double arcs test arcs or a combination of input and output arcs?  Should a second kind of double arc be added for the other meaning?  (Probably not.)
  • Are reset arcs from substitution transitions important enough to warrant adding arcs between transitions?
  • What are the meaning and consequences of inscriptions on reset arcs from substitution transitions?  Currently, such inscriptions have no semantics.

Multiset Variables

Design/CPN used to have multiset variables.  These are variables that, instead of just matching a single token value, match an entire multiset.  CPN Tools can (by accident) emulate some of the behavior by using list variables, but it cannot bind multiset variables, so you have to do something like this:

Here, executing A would consume the list from p1 and produce 3 tokens on p2, i.e., 1`1 ++ 1`2 ++ 1`3.  While we can do the above, we cannot flip the arcs, but we can do something like this:

When we execute A, we consume the list from p1 and the three tokens 1`1 ++ 1`2 ++ 1`3 from p3, and produce 3 new tokens on p2.

Real multiset variables would just match any multiset over their type.  You could use them as any other multiset and, e.g., specify that you want to consume 1`1 ++ ii from a place, indicating that at least one token with value 1 must be present and that the other consumed tokens are bound to ii.
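
A sketch of matching 1`1 ++ ii in SML, with the marking as a list of token values; remove1 and bindMaximal are hypothetical helpers, and bindMaximal shows the maximal variant discussed below:

    (* Remove one occurrence of y, if present. *)
    fun remove1 _ [] = NONE
      | remove1 y (x :: xs) =
          if x = y then SOME xs
          else Option.map (fn xs' => x :: xs') (remove1 y xs)

    (* Under maximal semantics, ii is bound to everything that remains
       after the explicit 1`1 has been taken; under the arbitrary
       semantics, any sub-multiset of the remainder would also do. *)
    fun bindMaximal marking =
        Option.map (fn rest => {ii = rest}) (remove1 1 marking)

    val b = bindMaximal [1, 1, 2, 3]   (* SOME {ii = [1, 2, 3]} *)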

For multiset variables there are two concerns, one of efficiency and one of semantics.  There are two possible semantics for multiset variables: maximal multisets and arbitrary multisets.

Maximal multisets would always require that the multiset variable consumes as many tokens as possible.  This would be very useful and could, e.g., implement reset arcs trivially: a reset arc is an arc from a place to a transition with a multiset variable that is discarded (or used; we could in fact allow multiset variables on reset arcs instead of the wildcard if we want to use the value).  Bounded places and inhibitor arcs could also trivially be implemented using multiset variables and double arcs if they are guaranteed to be maximal.  The problem with maximality is what it means, e.g., in this example:

If we just assume that multiset variables match any multiset, we no longer have this problem, but then we can no longer use them to implement reset arcs, inhibitor arcs, and bounded places.  This also makes things less efficient, as there are exponentially many choices for each multiset variable.

Maximality can be alleviated by restricting where multiset variables are allowed, e.g., only allowing them to occur alone on input arcs, but this would diminish their usefulness.  Such a restriction could of course be made compatible with (at least certain) future and all current implementations.

Thus, the concerns are:

Concerns

  • Maximal or any multiset semantics?
  • Syntactically restrict for maximality?

Volatile Transitions

Transitions have some restrictions that most users are not entirely aware of.  For example, you cannot use the time function or reference variables in guards (or, equivalently, on input arcs).  Actually, you can, but the behavior is not as expected.  Here is an example using reference variables:

Here, we have just executed A, setting r to 1.  We would expect B to be enabled, as it effectively consumes a token n where n = 1 from p3.  However, B is not enabled, because enabling is not rechecked after executing A: there is no direct dependency between the two.
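
In plain SML, the situation corresponds to a guard reading a reference cell; this sketches the example, not CPN Tools inscription syntax:

    val r = ref 0

    (* Guard of B for a consumed token n: n must equal the current !r. *)
    fun guardB n = (n = !r)

    (* Firing A sets r to 1.  If B's enabling was computed before this
       assignment and cached, B wrongly appears disabled even though
       guardB 1 now holds. *)
    fun fireA () = r := 1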

Making B volatile would mean that its enabling is rechecked after each step.  This would be easy to implement, but would have to be done in a particular way to avoid allowing users to blow up simulation time.  This is similar to how only simple expressions can appear on input arcs: if variables are used in a complex expression, they must either all be bound elsewhere or be of small color sets.

By making transitions explicitly volatile (like volatile variables in C or Java, where no caching assumptions are made), we can recheck enabling at each step for such transitions.  We can then check that reference variables and the time function are not used on any non-volatile transition.

It would be possible to be more efficient by doing a stronger dependency analysis and basically turning reference variables into places internally.  This is quite complex compared to how often it would be used, so I’d prefer not to go this way.

The concerns for volatile transitions are:

Concerns

  • Should this be explicit?  Nobody understands volatile variables anyway…
  • Is it worth it to detect (at least some) unsafe uses?
  • Is it worth it to do the full dependency analysis?
  • How does this play together with random distribution functions?

Summary

I have presented some syntactical conveniences and some concerns with each.  Some are fairly easy to handle conceptually (inhibitor arcs and volatile transitions) while some are quite complex (ordered places and multiset variables).  Reset arcs are in between and have a simple interpretation, though the colored variant is a bit of a mess.  Some are easy to implement (volatile transitions, reset arcs, and ordered places) while others require more effort (inhibitor arcs and multiset variables).

Which of these would be of most use?  Which do you most often have to implement in your real projects?  Which do your students most often have problems with?  I am not interested in what would be theoretically most fun, as this is all about making practical use more accessible.

Do any theoreticians have suggestions for some of these?  Where multiple interpretations are possible, which one occurs most often in practice?

To summarize, my concerns were:

  • Bounded places: Are general predicates needed?  They do impact efficiency (though the impact can be limited to the places where they are used).  The other bounds can be implemented using this construct, but more efficient implementations are possible.
  • Bounded places: Is the colored solution sufficient and useful?  Are there better ways of doing this (I assume yes, as this solution is quite kludgy)?
  • Ordered places: How do colored ordered places behave?
  • Ordered places: Do we allow searching (and hence folding)?  Searching breaks the declarativeness of enabling…
  • Ordered places: Are ordered places searched first or last?
  • Reset arcs: Are double arcs test arcs or a combination of input and output arcs?  Should a second kind of double arc be added for the other meaning?  (Probably not.)
  • Reset arcs: Are reset arcs from substitution transitions important enough to warrant adding arcs between transitions?
  • Reset arcs: What are the meaning and consequences of inscriptions on reset arcs from substitution transitions?  Currently, such inscriptions have no semantics.
  • Multiset variables: Maximal or any multiset semantics?
  • Multiset variables: Syntactically restrict for maximality?
  • Volatile transitions: Should this be explicit?  Nobody understands volatile variables anyway…
  • Volatile transitions: Is it worth it to detect (at least some) unsafe uses?
  • Volatile transitions: Is it worth it to do the full dependency analysis?
  • Volatile transitions: How does this play together with random distribution functions?

Let me know in the comments what you think!
