These are some thoughts on the behaviours the openshop protocol should support in order to facilitate resource sharing between widely separated clusters working on common projects.
A major problem with implementing such a scheme in openshop is that some jobs (those above a site split in the tree) must be replicated at more than one site. The framework allows this, provided that one of the copies is the authoritative copy in project space. But what happens if a job has descendants that belong to more than one site and is then rerun? The authoritative copy reruns and updates its internal state, but the replicas retain the old state. Rerunning any child jobs at other sites that depend on the data in question will then cause them to read stale data and become corrupted. This is an instance of the general problem of replicated data networks: cache coherency.

Resolving the problem requires a robust set of job states and interlocks. A good way to avoid illegal or ill-defined conditions like the one described above is to define a set of rules that restrict the conditions under which jobs may change state. The actions that cause a job to change its state are implemented in its methods. At major steps in its evolution, a job is required to request permission for those changes from the job manager, the central component of the openshop framework. The manager decides whether to allow the state change; if permission is granted, it records the job's new state in the project database. This centralized transaction processor is a powerful concept: not only does it make all job objects conform to the same state-machine model, it can also implement complex rules involving interactions between jobs. In particular, the manager can implement the locking rules needed to guarantee project coherency across a multi-sited project.
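To make the permission model concrete, here is a minimal Python sketch of a centralized manager of the kind described above. All names (JobManager, JobState, request_transition) are hypothetical illustrations, not part of any actual openshop API; the transition table and the stale-descendant interlock are assumptions chosen to exhibit the coherency rule, not a definitive design.

```python
from enum import Enum

class JobState(Enum):
    IDLE = "idle"
    RUNNING = "running"
    DONE = "done"
    STALE = "stale"   # results invalidated by a rerun upstream

class JobManager:
    """Hypothetical central transaction processor for job state changes."""

    # The legal state machine: every transition not listed here is denied.
    ALLOWED = {
        (JobState.IDLE, JobState.RUNNING),
        (JobState.RUNNING, JobState.DONE),
        (JobState.DONE, JobState.RUNNING),   # rerun of a finished job
        (JobState.STALE, JobState.RUNNING),  # rerun after invalidation
    }

    def __init__(self):
        self.states = {}         # job id -> JobState
        self.children = {}       # job id -> ids of dependent jobs
        self.authoritative = {}  # job id -> is this the authoritative copy?

    def register(self, job_id, children=(), authoritative=True):
        self.states[job_id] = JobState.IDLE
        self.children[job_id] = list(children)
        self.authoritative[job_id] = authoritative

    def request_transition(self, job_id, new_state):
        """A job calls this before changing state; returns True if allowed."""
        old = self.states[job_id]
        # Rule 1: only the authoritative copy may change project state.
        if not self.authoritative[job_id]:
            return False
        # Rule 2: the change must be a legal state-machine transition.
        if (old, new_state) not in self.ALLOWED:
            return False
        # Rule 3 (coherency interlock): rerunning a finished job marks
        # its finished descendants stale, so they cannot quietly run
        # against data that no longer exists in its old form.
        self.states[job_id] = new_state
        if old is JobState.DONE and new_state is JobState.RUNNING:
            for child in self.children[job_id]:
                if self.states.get(child) is JobState.DONE:
                    self.states[child] = JobState.STALE
        return True
```

With rules like these, the stale-read scenario above is blocked at the manager: once the authoritative parent reruns, its descendants are marked stale and must themselves rerun before they can report results again.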