The usual solutions

The simple answer is to use ever faster processors - an approach used more often than most designers would admit. This carries an inevitable cost premium, which presents problems for most embedded applications built in volume production.

The problem can be particularly frustrating for designers, as it is seldom necessary to do all of the input/processing/output at every tick.

The most obvious improvement is to split the processing - say between alternate ticks. Or it may be possible to postpone some low-priority tasks, and not execute them at all if the processor detects that a large processing load is about to occur - say at midnight.

This can be at least a partial solution, but it hugely increases the complexity of the analysis required.

Consider the simple solution of having some processing done on even ticks (P0) and the rest only on odd ticks (P1). The first question is whether to update all of the inputs on every cycle. The inputs split likewise, into a set I0 read on even ticks and a set I1 read on odd ticks, with some inputs shared between the two cycles.
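The even/odd split above can be sketched in a few lines of C. This is a minimal sketch, not the deltaFSD implementation: the names P0, P1, I0 and I1 come from the text, while the tick handler and the I/O and processing functions are hypothetical stand-ins.

```c
#include <stdint.h>

static uint32_t tick_count = 0;
static unsigned p0_runs = 0, p1_runs = 0;   /* counters for illustration only */

static void read_inputs_I0(void) { /* read the even-tick input set I0 */ }
static void read_inputs_I1(void) { /* read the odd-tick input set I1 */ }
static void process_P0(void)     { p0_runs++; /* even-tick processing */ }
static void process_P1(void)     { p1_runs++; /* odd-tick processing */ }

/* Called once per timer tick: alternate the two halves of the work. */
void on_tick(void)
{
    if ((tick_count & 1u) == 0u) {   /* even tick: P0 processes I0 */
        read_inputs_I0();
        process_P0();
    } else {                         /* odd tick: P1 processes I1 */
        read_inputs_I1();
        process_P1();
    }
    tick_count++;
}
```

Each half now has almost two tick periods to complete, at the price that each input set is only sampled at half the tick rate - which is exactly where the consistency problems below come from.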

If all of the input is read every time, then the alternate routes will be using different data and so may come up with inconsistent answers. P0 processes I0, detects that a pressure is outside its limits and raises an alarm. By the time P1 processes I1 the pressure has fallen, and this cycle attempts to apply normal processing.

If the system instead tries to freeze the inputs, to guarantee that each path sees the same data, then the response will suffer and critical events might even be lost. Making both P0 and P1 process I0 means that I1 is discarded - an over-pressure condition that only appears in I1 is never detected - and the tick rate might as well be halved. To be useful this approach needs some of I1 to be discarded while the rest is processed, but that creates a situation where there are two versions of the pressure: one from I1 and the other held over from I0.
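The lost-event hazard can be made concrete with a small sketch. All names here are hypothetical, and the limit value is illustrative: if both halves of the processing work from the snapshot frozen on the even tick, an over-pressure that only appears in the odd-tick reading is simply never seen.

```c
#define PRESSURE_LIMIT 100          /* illustrative alarm threshold */

typedef struct { int pressure; } inputs_t;

static inputs_t snapshot;           /* inputs frozen on the even tick (I0) */
static int alarm_raised = 0;

static void check_pressure(const inputs_t *in)
{
    if (in->pressure > PRESSURE_LIMIT)
        alarm_raised = 1;
}

/* Even tick: take the snapshot and process it (P0 on I0). */
void even_tick(inputs_t live)
{
    snapshot = live;
    check_pressure(&snapshot);
}

/* Odd tick: the live reading (I1) is discarded; P1 re-uses the snapshot. */
void odd_tick(inputs_t live)
{
    (void)live;                     /* I1 thrown away - the hazard */
    check_pressure(&snapshot);
}
```

A pressure of 90 on the even tick followed by a spike to 150 on the odd tick leaves the alarm unraised, because the spike existed only in the discarded I1 reading.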

Predicting the response - and ensuring it meets the design requirements - for all possible combinations of inputs is difficult, even when the processor is simply flip-flopping backwards and forwards between alternate cycles.

If the aim is to decide dynamically whether to split the processing, then the task of ensuring consistent data into the processing - and hence of sorting out the acceptable combinations - becomes nearly impossible. Most of the time P0 processes I0 and P1 processes I1; occasionally, though, P1 processes some of I0 and some of I1.

A further problem with these approaches is that they are almost impossible to test. Determining test cases that will detect any problems must be done at the upper, system level, yet requires knowledge of the details of the implementation. P0 and P1 will perform perfectly on their own - and together, provided I0 and I1 are well behaved.

Just don't put the system out in the field, where it is going to see an avalanche of inputs when a digger slices through a cable, or things go haywire for a few seconds because of a brown-out, or an unusual sequence of events arises - say when daylight saving time kicks in.

The approach can work for simple applications, given careful analysis. The alternative - and this is often the only solution when the problem becomes apparent late in the implementation - is to add buffer memory, if it is available and doesn't add to the product cost.
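The buffering option usually means a simple ring buffer between the tick-rate input side and the processing side, so that a burst of events is queued rather than lost. A minimal single-producer/single-consumer sketch, with an illustrative size and event type, might look like this:

```c
#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 16u                 /* illustrative; must be a power of two */

static uint16_t buf[BUF_SIZE];
static unsigned head = 0, tail = 0;  /* free-running counters */

/* Called from the tick/input side: queue one event. */
bool buf_put(uint16_t event)
{
    if (head - tail == BUF_SIZE)
        return false;                /* buffer full: the burst overflows */
    buf[head++ & (BUF_SIZE - 1)] = event;
    return true;
}

/* Called from the processing side: drain one event when time allows. */
bool buf_get(uint16_t *event)
{
    if (head == tail)
        return false;                /* nothing pending */
    *event = buf[tail++ & (BUF_SIZE - 1)];
    return true;
}
```

Note that even this only postpones the analysis: the buffer size still has to be justified against the worst-case burst, and on a real target the put and get sides would typically run in different contexts, so the counters would need ISR-safe or atomic access.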



deltaFSD is now a Sourceforge project

© Copyright DeDf 2003 - 2010