
Several suspicious issues have emerged over time with the way the framework operates its options merging and expansion process, and we collect them here as things come to a head ahead of the significant framework rewrite hopefully expected later in the year. It seems that several of the goals and design ideals of the options merging system conflict with one another, and we will need to make some hard decisions and significant changes in order to establish a coherent model.

The JIRA FLUID-5800 contains a lot of discussion of the options merging algorithm itself, and records some vacillation over whether we felt the need to move to a better-attested grade linearization algorithm such as C3 (described in the classic Dylan paper). As it seems to be turning out, whilst our linearization algorithm, which could be described as a "depth-first painter's algorithm", can lead to some peculiar results, it may well be defensible in the light of our overall approach to graphs of authors described in our Onward paper, as well as not being directly implicated in our most serious problems. The JIRA that most clearly gives the flavour of those problems is FLUID-5668, "corruption when material written for members requires merging". An issue very similar to this came to light during work on the Onward paper itself.
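
The "depth-first painter's algorithm" can be pictured with a minimal sketch (not Infusion's actual implementation; the grade names and data layout here are invented for illustration): parents are visited depth-first before the grade naming them, each grade is contributed at most once, and later grades then "paint over" earlier ones key by key.

```javascript
// Hypothetical grade hierarchy: a diamond, to show duplicate handling
const gradeParents = {
    "examples.leaf": ["examples.midA", "examples.midB"],
    "examples.midA": ["examples.base"],
    "examples.midB": ["examples.base"],
    "examples.base": []
};

function linearize(gradeName) {
    const result = [];
    const visit = function (name) {
        (gradeParents[name] || []).forEach(visit); // parents first, depth-first
        if (result.indexOf(name) === -1) {         // contribute each grade at most once
            result.push(name);
        }
    };
    visit(gradeName);
    return result;
}

// "Painter's" merge: later grades in the linearization overwrite earlier ones
function mergeDefaults(order, defaultsTable) {
    return order.reduce(function (acc, name) {
        return Object.assign(acc, defaultsTable[name] || {});
    }, {});
}
```

Note how, for the diamond above, `examples.base` appears only once and earliest in the order, so every other grade's defaults paint over it.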

Problems exposed by mergePolicies

The problem is most clearly exposed when trying to work with "mergePolicies", which are now by far the most immature and undeveloped part of the options merging pipeline, essentially untouched since the days of the ancient framework of 2009. They don't even feature support for global name resolution, implying that any grade definition using them can't even be represented in a JSON store. They haven't been updated primarily because it has never been very clear what to do with them, since almost any use of them seems to lead to difficulties. The framework itself makes use of them in many cases, for example where listeners, modelListeners, modelRelay, model, etc. require something other than the simple-minded "painter's algorithm" combination of options by merging. This in itself is suspicious, since the framework shouldn't need to rely on facilities which are not easily usable by the general user. For example, we consider the ContextAwareness facilities more or less a success, since they can be implemented simply using standard primitives of the framework - in "usermode", as we say in the Onward paper. If the user doesn't like them, equivalent or more powerful facilities can easily be cooked up using new syntax and the same primitives. The same isn't true of the more fundamental framework features listed above.
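
The shape of the requirement can be seen in a toy merge engine (hypothetical code, not Infusion's API): most options combine by simple painter's overwriting, but a key such as listeners needs a per-key policy that accumulates contributions instead of replacing them.

```javascript
// Hypothetical per-key policies; real mergePolicies are richer than this
const mergePolicies = {
    listeners: function (target, source) {
        // accumulate rather than overwrite
        return (target || []).concat(source);
    }
};

function mergeOptions(optionsList) {
    return optionsList.reduce(function (merged, options) {
        Object.keys(options).forEach(function (key) {
            const policy = mergePolicies[key];
            merged[key] = policy
                ? policy(merged[key], options[key]) // policy-driven combination
                : options[key];                     // default: paint over
        });
        return merged;
    }, {});
}
```

Even this toy version hints at the difficulty for the general user: the policy function sees bare values with no idea of where in the pipeline they came from.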

The most direct sign of the rot was the strange fluid.mergingArray "exotic type" implemented as a stopgap solution for FLUID-5668. This type also ended up pressed into service in the Onward paper when we needed a member which accumulated a list of matrices to be multiplied later, rather than each overwriting the previous one. This was necessary because of the failure of the mergePolicy "accumulate" algorithm to be what one could call "properly monadic" - it is unable to take account of where it is in its workflow - that is, whether it has been presented with an argument which has already been processed once, and hence, on some view, is "inside a container", or whether it has been presented with a fresh piece of material from the "outside world". Another huge workflow problem with mergePolicies stems from the rather odd irregularity of our workflow with respect to merging and expansion. This, like mergePolicies, has been somewhat "emergent" and currently takes the following form:

  1. An initial phase of "pure merging" where grade definitions are composited (after the linearization process we spoke of above) into a "merged grade definition" - nothing very unusual occurs during this phase and it was always envisaged that its output might be cached, as itself, pure JSON, although this is now seeming a little unrealistic (especially since we are still not at the stage where all grade definitions themselves can be expressed as pure JSON - most importantly, because of mergePolicies themselves)
  2. A mixed phase of "combined merging and expansion" - this forms the core of the "globally ginger process" brought in by FLUID-4330 (filed in 2011 and delivered in 2013), which essentially marked the birth of the "modern framework"
    1. Material is assembled into "mergeBlocks" (the first of which corresponds to defaults, the others to further sources of options such as subcomponents, the user's direct arguments, and options distributions), each of which has a "source" consisting of unexpanded material and a "target" consisting of expanded material. Each of these blocks is elaborated in a data-driven way as finally-resolved values for options are demanded across the instantiating component tree
    2. The total list of "target" blocks is merged together into the final component's options, again, piecemeal in a data-driven way
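
The shape of phases 2a and 2b can be caricatured as follows. The real process is lazily data-driven as described above; this eager sketch (all names hypothetical, with a stand-in resolver in place of real IoC reference resolution) only shows the source/target structure of the blocks and the final painter's merge.

```javascript
// Phase 2a caricature: expand one block's unexpanded "source" into its "target"
function expand(source, resolve) {
    const target = {};
    Object.keys(source).forEach(function (key) {
        const value = source[key];
        target[key] = (typeof value === "string" && value.startsWith("{"))
            ? resolve(value) // stand-in for IoC reference/expander resolution
            : value;
    });
    return target;
}

// Phase 2b caricature: merge the expanded "target" blocks in painter's order
function mergeBlocks(blocks, resolve) {
    const merged = {};
    blocks.forEach(function (block) {
        block.target = expand(block.source, resolve);
    });
    blocks.forEach(function (block) {
        Object.assign(merged, block.target);
    });
    return merged;
}
```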

The fundamental problem with the design of mergePolicies is that they are completely blind as to whether they are operating as part of the harmless phase 1 (all they were ever really designed for) or the complex data-driven phase 2b. This means that they may encounter material which has either been expanded or not expanded, which implies that they must make NO ATTEMPT WHATSOEVER to inspect their arguments, given their unknown workflow. We make an exception to this structure-agnosticism in the mergePolicies operating on core framework primitives such as listeners, modelListeners etc., in which we insist on a fully knowable top-level key-value structure that the framework's expansion process will not attempt to cross (this insistence is responsible for essentially intractable "bugs" such as FLUID-5208). This is also what makes it essentially impossible for the "average user" to ever effectively craft a new mergePolicy - since they in effect have to operate the expansion part of the workflow (2a.) themselves. This can be seen in all of the core primitives listed, e.g. fluid.mergeModelListeners and fluid.mergeListeners, which both punch through to a custom and rather complex fluid.resolveListenerRecord, fluid.establishModelRelay and the like. This is somewhat forgivable when the core workflow does require so much intricate resolution and even reinterpretation of the meaning of configuration, but unacceptable when the user is trying to do something as simple as accumulating a list of matrices as in onward.imageRenderer.
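
The "monadicity" failure and the fluid.mergingArray stopgap can be sketched concretely. A pairwise accumulate policy cannot tell whether its left argument is its own partially-built container or a fresh array from the outside world; tagging the container with an "exotic type" disambiguates. The class below is a hypothetical stand-in for fluid.mergingArray, not its real implementation.

```javascript
// "Exotic type" marking material the policy itself has already containerised
class MergingArray extends Array {}

function accumulate(target, source) {
    // A naive policy using plain arrays could not make this distinction:
    // a plain array might be a user-supplied value, not our container.
    const container = target instanceof MergingArray
        ? target
        : MergingArray.from(target === undefined ? [] : [target]);
    container.push(source);
    return container;
}
```

For example, accumulating three matrix contributions one merge step at a time yields a single tagged container holding all three, rather than each overwriting the previous one.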

Problems exposed by "source" type options distributions

Another framework primitive that exposes problems caused by the irregular "sandwich" workflow (simple merging, followed by ginger combined expansion and then merging again, described in 1, 2a and 2b in the previous section) is options distributions of the "source" type. These are naturally "cruising for a bruising", since their purpose is to broadcast (presumably already fully expanded and merged) options material in its final form from one location in the tree to another. In practice they do not work this way, but operate directly at the "mergeBlocks" level described in the previous section. On encountering a "source" distribution the framework, rather than attempting to distribute the finally merged result, instead distributes as best it can the raw materials leading to that result - that is, it filters each of the mergeBlocks at their current position for "source" entries matching the distribution, and constructs one, filtered, incarnation of each block at the distribution target site with a filtered "source" record. In theory this approach, being slightly more principled, has the potential for better results than distributing fully processed options (for a start, being capable of dealing with "site of expansion" issues such as FLUID-5258), but in practice the implementation is a little half-baked. Our solution to FLUID-5887, whilst resolving the headline site of the issue in Builder.js, ended up exposing an obnoxious problem in InlineEdits.js, where there is an ancient style of distribution (of the kind we used to have in the Uploader) that attempts a "total source distribution":
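
The block-level filtering can be sketched as follows (hypothetical code: the real implementation handles arbitrarily deep paths, removeSource, re-rooting at the target, and more; this toy version only filters top-level keys and one level of dotted exclusion paths).

```javascript
// Drop excluded keys/paths from one block's raw "source" record
function filterSource(source, exclusions) {
    const filtered = {};
    Object.keys(source).forEach(function (key) {
        if (exclusions.indexOf(key) !== -1) {
            return; // whole top-level key excluded
        }
        const value = source[key];
        if (value && typeof value === "object" && !Array.isArray(value)) {
            const inner = {};
            Object.keys(value).forEach(function (sub) {
                if (exclusions.indexOf(key + "." + sub) === -1) {
                    inner[sub] = value[sub];
                }
            });
            filtered[key] = inner;
        } else {
            filtered[key] = value;
        }
    });
    return filtered;
}

// One filtered incarnation of each block is emitted for the target site
function distributeSource(mergeBlocks, exclusions) {
    return mergeBlocks.map(function (block) {
        return { type: block.type, source: filterSource(block.source, exclusions) };
    });
}
```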

    fluid.defaults("fluid.inlineEditsComponent", {
        gradeNames: ["fluid.viewComponent"],
        distributeOptions: {
            source: "{that}.options",
            // This seems to imply that we've got no option but to start supporting "provenance" in options and defaults - highly expensive.
            exclusions: ["members.inlineEdits", "members.modelRelay", "members.applier", "members.model", "selectors.editables", "events"],
            removeSource: true,
            target: "{that > fluid.inlineEdit}.options"
        }
    });

This, by distributing ALL sources of options, including those derived from fluid.modelComponent itself, caused a double initialisation via the members: modelRelay definition in core fluid.modelComponent:

        members: {
            model: "@expand:fluid.initRelayModel({that}, {that}.modelRelay)",
            applier: "@expand:fluid.makeHolderChangeApplier({that}, {that}.options.changeApplierOptions)",
            modelRelay: "@expand:fluid.establishModelRelay({that}, {that}.options.model, {that}.options.modelListeners, {that}.options.modelRelay, {that}.applier)"
        }

Whilst we were able to fix up the core grade linearization algorithm to ensure that a given grade is contributed at most once to a particular hierarchy, the options distribution process effaces the information as to where particular material has come from, since by this point in the workflow it has ended up in a general mergeBlock bucket of type "defaults" - note that having multiple blocks of type "defaults" was one of the architectural surprises which "mid-level mature options distributions" produced. This particular problem looks like it might be solved by "provenance" - see below, in "ideas" - the general mergePolicy problem, however, looks like it can't be.

Some ideas

One idea that has been kicking around for a while is the use of "provenance" in options. This will actually be highly useful, bordering on essential, once we have visual authoring tools - Clemens asked in one of our chats "how well the user has to understand the merging model", and I replied that different sections of options which have come from different sources "could be painted in different colours", etc. Certainly, the most useful framework diagnostic implies that each option in a finally merged component could be tracked back to the original site of its authorship (with extra marks for showing how it got there). This idea has been kicked around for some time, for example with a short mention in FLUID-4930, our post-rewrite "panic JIRA" about the poorly-characterised failure entitled "retrunking", which will require one of the main clarifications of "expansion policy intention" underlying the upcoming FLUID-4982. If we had "per-options provenance" on every piece of options, this would allow us to resolve issues like the "source distribution" one above, since we could insist that a distribution "only distributes options with provenance that does not arise from one of the grades already present at the target".
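
A minimal sketch of what "per-options provenance" might look like (hypothetical names throughout): alongside the painter's merge we maintain a parallel map recording which grade last wrote each top-level option.

```javascript
// Each contribution pairs a grade name with its options material
function mergeWithProvenance(contributions) {
    const options = {};
    const provenance = {};
    contributions.forEach(function (contribution) {    // painter's order
        Object.keys(contribution.options).forEach(function (key) {
            options[key] = contribution.options[key];
            provenance[key] = contribution.gradeName;  // last writer wins
        });
    });
    return { options: options, provenance: provenance };
}
```

A "source" distribution could then consult such a map and skip any option whose provenance names a grade already present at the target.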

However, this implies, to start with, not only hugely increased storage and computation costs for the options merging process, but also a compromise - previously considered fatal - on the status of "options" and "defaults" as pure JSON. Yet if we require "provenance" even to interact with mergePolicies, this seems painfully unavoidable. The provenance must somehow accompany the data, which implies that it is packaged somehow. This also presumably allows some resolution of our "monadicity" issue, where it is unclear what stage of processing the options have reached. In any case, the "immutable revolution" already represents some kind of compromise, since even the relatively harmless-seeming act of "freezing" all defaults (and eventually, all options) with Object.freeze represents a (smallish) violation of the expectation that options material is pure JSON.

Now, at least at runtime, we can store the provenance map inside the component's shadow, making it accessible via an API. But during options merging, it seems that we have to make some attempt to supply it via direct arguments. This seems to imply some form of "mergePolicy hoisting". The core algorithm inside "onward.arrayConcatPolicy" has already become strongly obscured. This implies that we want to "hoist" the core algorithm (that of concatenating arrays, or whatever else) into a form where it only gets to manipulate options abstractly, hidden behind a kind of armoury of "exotic types" like fluid.mergingArray, of the kind which we have traditionally considered very distasteful. This armoury would let us discover i) the location of the material, ii) its state of expansion, iii) its source, e.g. in grade terms - but to the core algorithm the material would then represent a kind of opaque, immutable token. Pretty unpleasant, but this seems hard to avoid.
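
One possible shape for this "hoisting" (a sketch under assumed names, not a worked design): the core algorithm only ever sees opaque frozen tokens, and any metadata - location, expansion state, source grade - is reachable solely through the "armoury" API.

```javascript
// Metadata lives in a side table the policy cannot reach directly
const tokenMeta = new WeakMap();

function makeToken(meta) {
    const token = Object.freeze({}); // opaque, immutable handle
    tokenMeta.set(token, meta);
    return token;
}

// The "armoury": the only sanctioned way to learn anything about a token
const armoury = {
    isExpanded: function (token) { return tokenMeta.get(token).expanded; },
    sourceGrade: function (token) { return tokenMeta.get(token).gradeName; }
};

// The hoisted core algorithm: pure concatenation over tokens, with no
// ability to inspect or mutate the underlying options material
function hoistedArrayConcat(target, source) {
    return (target || []).concat(source);
}
```

The core algorithm stays trivially testable and structure-agnostic, while the framework retains full knowledge of where each piece of material stands in the workflow.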


