
Module cache

Introduction

The Cache class can be used by modules performing heavy computations to reuse results that cannot have changed from one run to the next.

This scenario is actually quite common, and the immutable nature of the data produced by the modules makes it easy (and safe 😷) to manage.

Here is a hypothetical scenario illustrating the discussion:

            physicalMdle 
                         \
                          combineMdle-->simulatorMdle-->viewerMdle
                         /
 sliderMdle-->solverMdle

In the above workflow:

  • the central module is the simulatorMdle: it takes a physicalModel and a solverModel as inputs to run a simulation. To do so, it computes an intermediate result that relies only on the physicalModel; this computation is the performance bottleneck - e.g. it can take a couple of seconds.
  • the solverMdle returns a solverModel; it relies on the value of some parameters provided by a sliderMdle (the user is expected to aggressively move it from left to right 😈).
  • the physicalMdle emits a physicalModel; it has been tuned at 'building time' (only one version of it is emitted, as the module is not connected to any input).
  • the combineMdle emits a new message [solverModel, physicalModel] with the latest available version of each every time one of its inputs receives a new value (in this case it will always be from the bottom path).
  • the viewerMdle somehow displays the result of the simulation.
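The combination step above can be sketched in plain TypeScript (an illustrative stand-in, not the actual Flux API): a tiny combiner keeps the latest value from each input and re-emits the pair whenever either side updates. Note how the same physicalModel reference is reused, unchanged, across emissions.

```typescript
// Illustrative sketch of the combineMdle behaviour (not the actual Flux API):
// keep the latest value of each input, emit the pair on every update.
type PhysicalModel = { density: number }
type SolverModel = { tolerance: number }

class Combiner {
    private latestPhys?: PhysicalModel
    private latestSolver?: SolverModel
    readonly emitted: Array<[SolverModel, PhysicalModel]> = []

    onPhysicalModel(m: PhysicalModel) { this.latestPhys = m; this.tryEmit() }
    onSolverModel(m: SolverModel) { this.latestSolver = m; this.tryEmit() }

    private tryEmit() {
        if (this.latestPhys && this.latestSolver)
            this.emitted.push([this.latestSolver, this.latestPhys])
    }
}

const combiner = new Combiner()
const phys: PhysicalModel = { density: 1.0 }   // emitted once, at 'building time'
combiner.onPhysicalModel(phys)
combiner.onSolverModel({ tolerance: 1e-3 })    // the slider moves...
combiner.onSolverModel({ tolerance: 1e-4 })    // ...and moves again

// Both emissions carry the exact same physicalModel reference.
const samePhysRef = combiner.emitted[0][1] === combiner.emitted[1][1]
```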

If this scenario is handled naively:

sliderMdle emits a new value -> solverMdle emits a new solverModel -> combineMdle emits a join of this newly created solverModel with the existing physicalModel -> simulatorMdle takes a couple of seconds to emit the simulation result -> viewerMdle displays the result.

Our final user will be frustrated: they were expecting instant feedback on the result while aggressively moving the slider 🤬.

This is unfortunate because the performance bottleneck of the computation relies only on the physicalModel, and this one not only did not change in terms of content, it is actually exactly the same object (same reference: the combineMdle just reused the physicalModel already available each time a new solverModel came in).

🤯 Thanks to immutability, equality of references at different times guarantees that no property of the object has been modified. This is very important for Flux and its 'functional' approach, as it provides a straightforward way to safely (and immediately) retrieve existing intermediate results of computations from references, even for complex & large data structures.
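As a concrete (hypothetical) illustration of this guarantee: if updates always produce new objects instead of mutating existing ones, a simple `===` comparison is a complete change-detection test, however large the data structure is.

```typescript
// Hypothetical illustration: with immutable updates, reference equality
// at two points in time proves the object is unchanged.
type PhysicalModel = { readonly gravity: number; readonly mesh: readonly number[] }

// An 'update' never mutates: it returns a brand-new object.
function withGravity(model: PhysicalModel, gravity: number): PhysicalModel {
    return { ...model, gravity }
}

const m0: PhysicalModel = { gravity: 9.81, mesh: [0, 1, 2] }
const m1 = withGravity(m0, 9.81)   // new reference, even if the values are equal
const m2 = m0                      // same reference => provably unchanged

const unchanged = m2 === m0   // safe to reuse any result computed from m0
const changed = m1 !== m0     // results cached for m0 must not be reused for m1
```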

In the scenario depicted above, by caching the intermediate result against the reference of the physicalModel, the simulatorMdle can immediately retrieve the right intermediate result rather than redoing the computation. This makes our slider-aggressive user happy 🤩: the time-consuming step of the computation has been removed and they can hopefully enjoy a responsive experience while looking at the viewer.
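A minimal sketch of this caching strategy (a hand-rolled stand-in, not the actual Cache store): a WeakMap keyed by the physicalModel reference stores the intermediate result, so the long computation runs only once per model instance, no matter how many solver models flow through.

```typescript
// Hand-rolled stand-in for the Cache store, keyed on the object reference.
type PhysicalModel = { gravity: number }
type SolverModel = { tolerance: number }

const resultCache = new WeakMap<PhysicalModel, number>()
let longComputationRuns = 0

function longComputation(phys: PhysicalModel): number {
    longComputationRuns++          // the expensive step: count how often it runs
    return phys.gravity * 1000     // placeholder for seconds of work
}

function getOrCreate(phys: PhysicalModel): number {
    let cached = resultCache.get(phys)
    if (cached === undefined) {
        cached = longComputation(phys)
        resultCache.set(phys, cached)
    }
    return cached
}

function simulate(phys: PhysicalModel, solver: SolverModel): number {
    return getOrCreate(phys) / solver.tolerance   // only the fast step depends on the solver
}

const phys = { gravity: 9.81 }
// The user aggressively moves the slider: many solver models, one physical model.
const results = [1, 2, 3, 4].map(t => simulate(phys, { tolerance: t }))
```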

The nature of Flux applications, driven by events and commonly using flow-combination modules, means such opportunities for (aggressive) optimization arise quite often. Exploiting them takes a bit more work from the developer; the Cache store has been designed to facilitate this work.

Usage of the cache store in the modules

Let's try to implement a skeleton of the simulatorMdle of the above example:


import { Observable, of } from 'rxjs'
import { delay } from 'rxjs/operators'
// ModuleFlux, Pipe, Context, Cache, ReferenceKey, contract and expectInstanceOf
// come from the Flux core library.

class PhysicalModel{...}
class SolverModel{...}

let inputContract = contract({
     description: 'expect to retrieve physical & solver models',
     requireds:{ 
         physModel: expectInstanceOf(PhysicalModel, ['physicalModel', 'physModel']),
         solverModel: expectInstanceOf(SolverModel, ['solverModel']) 
    }
})

export class Module extends ModuleFlux {

     result$ : Pipe<number>

     constructor( params ){
         super(params) 

         this.addInput({
              contract: inputContract,
              // the cache is provided alongside the triggering message
              onTriggered: ({data, configuration, context}, {cache}) => 
                 this.simulate(data, context, cache)
          })
         this.result$ = this.addOutput()
     }

     simulate( 
         {physModel, solverModel}: {physModel: PhysicalModel, solverModel: SolverModel},
         context: Context,
         cache: Cache ) {

         // the intermediate result is cached against the reference of physModel:
         // the long computation runs only when a new PhysicalModel instance comes in
         cache.getOrCreate$( 
             new ReferenceKey('physModel', physModel),
             () => this.longComputation(physModel)
         ).subscribe( (intermediateResult) => {
             let result = intermediateResult / this.fastComputation(solverModel) 
             this.result$.next({data: result, context})
             context.close()
         })
      }

     longComputation(physModel: PhysicalModel): Observable<number>{
         // a long and hard computation (~2 s)...
         return of(42).pipe(delay(2000))
     }

     fastComputation(solverModel: SolverModel): number{
         // a fast computation...
         return 42
     }
 }

Generated using TypeDoc