Version: 0.3
This document contains several sections describing various aspects of the proposed or potential support for improved logical model integration in the Eclipse Platform (bug 37723). The requirements document can be found here and a description of the initial work performed in 3.1 is here. The following list summarizes each item and provides a brief description of the impact on the Eclipse Platform and its clients.
The bulk of the work for this item is in the Problems view itself. Clients would only need to do work if the current filtering is inadequate; the work would involve defining model-specific filters and properties display. For JDT, work is probably not required, as the relationship between resources and Java elements is strong enough that model-specific filters may not be needed.
Support for retargeting would need to be added to the platform and to any client that anticipates that higher level models could be built on top of their model, including JDT.
The work items for this are:
There is little work anticipated here for JDT since their model is similar enough to the file model. Model tooling with models that hide the file structure will need to provide a team participant.
JDT has a custom viewer that handles label updates, so they will need to adapt to any new mechanism for performing label update propagation.
Compare already has the above mentioned support for models that have a one-to-one mapping between files and model elements so only clients who have more complicated mappings would need to provide this additional support. For JDT, there should not be much work here since they already provide a file-based content viewer. Compare will need to make use of any new API in the local history operations.
The bulk of the work for clients here will be providing the synchronization view. This includes JDT as it would be beneficial to see a structure in the synchronize view that matches what appears in the Packages Explorer.
After presenting proposals for each of these areas, we discuss the potential role of EMF and present some generic Team Scenarios that describe how the functionality we are proposing would play out.
Making the problems view more logical model aware has been broken into several pieces, as described in the following sections.
How do we improve the usability of filters in the Problems view? Work has started on this in 3.2 and there are several bug reports on the issue (108013, 108015, 108016). The basic ideas are:
An additional requirement identified by clients is the ability to filter on model specific information. We will need to collect some concrete scenarios on this to better understand the requirement.
Each problem type has different relevant properties. Java errors have a file path and line number. Other models may have other ways of describing a problem (e.g. a resource description and a field name). Ideally, each problem would display its relevant properties. However, the Problems view often contains many different types of problems, each of which may have different relevant properties. Table widgets have a single set of columns, leading to the following possibilities:
Given that users may want to see different problem types in the Problems view at the same time, the most practical approach is to provide a generic set of columns (e.g. Severity, Description, Element, Path, Location) and allow the problem type to dictate what values appear in the columns.
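The generic-columns idea can be sketched as follows. All class and method names here are illustrative stand-ins, not actual Platform API: each problem type supplies its own values for the same fixed set of columns.

```java
import java.util.Map;

// Hypothetical sketch: each problem type fills the same generic columns
// (Severity, Description, Element, Path, Location) with its own values.
public class GenericProblemColumns {

    /** A minimal stand-in for a problem marker's attributes. */
    public record Problem(String severity, String description,
                          String element, String path, String location) {}

    /** A Java compiler problem: element is the type name, location a line number. */
    public static Problem javaProblem(String description, String typeName,
                                      String filePath, int line) {
        return new Problem("Error", description, typeName, filePath, "line " + line);
    }

    /** A hypothetical model problem: element is a field, location is not applicable. */
    public static Problem modelProblem(String description, String fieldName,
                                       String resourceDescription) {
        return new Problem("Warning", description, fieldName, resourceDescription, "n/a");
    }

    /** Render a problem as the generic column set shown by the Problems view. */
    public static Map<String, String> columns(Problem p) {
        return Map.of(
            "Severity", p.severity(),
            "Description", p.description(),
            "Element", p.element(),
            "Path", p.path(),
            "Location", p.location());
    }
}
```

The view keeps one column set; the problem type decides how to populate it, which is the approach recommended above.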
The Problems view currently supports custom Quick Fixes for a problem type. Another useful feature would be the ability to navigate to a model-specific view. There is currently a Show In Navigator action that could be enhanced to support showing the affected model element in a model view (e.g. Show in Packages Explorer for Java problems).
If there was some way of determining what role the user was playing at a particular time, it would be possible to tailor views to that particular task. Such information could be used to enable particular Problems view filters.
The Common Navigator is being pushed down from WTP into the Platform. The view allows model tooling to add a root node to the Navigator and control what appears under that node. Clients that wish to plug into the view will need to provide a root element, content provider, label provider, sorter and action set for inclusion in the navigator. Clients with existing Navigator style views can decide whether to keep their view separate or integrate it into the Common Navigator. For JDT, they will probably want to integrate with the view to remain consistent with the Platform.
One aspect of the Common Navigator that is of particular interest to team operations
is the ability to obtain a content provider that can show a logical model in
a tree viewer. This would allow logical models to appear in team operation views
and dialogs. The Common Navigator proposal defines an extension that provides
this capability. The class for this extension is the NavigatorContentExtension
and it provides the following:
For this API to be useable in Team operations, the NavigatorContentExtension
contributed by the model must have access to the context of the team operation.
Outside the context of a team operation, the content extension only has the
local workspace from which to build its model tree. However, within the context
of a team operation, there may be additional resources involved, specifically,
resources that exist remotely but not locally (i.e. outgoing deletions or incoming
additions). The model's content extension would need access to a team context
so that these additional resources could be considered when displaying the model.
The following list summarizes the requirements that would be placed on a NavigatorContentExtension
when being used to display the model in a team context.
Support for this can either be integrated with the Common Navigator API or made available as Team-specific API (see Model Display in Team Operations). Our preference would be to integrate the team requirements with the Common Navigator requirements so that model providers only need to implement one API. In the rest of this section we will address the following two questions:
How is the team context made available to the NavigatorContentExtension?
What API does the NavigatorContentExtension need in order to support team operations?
The next two sections propose answers to these questions.
In the Common Navigator API description that was available at the time of writing, the NavigatorContentExtension is instantiated for each viewer and has access to the state of that viewer. In the context of a team operation, the team provider would create the viewer that will be used to display the model tree. It could also associate the team context with the viewer so that it was available to the content extension.
A team operation requires the ability to obtain a content provider that can consider the team context when it builds a model tree. Since the tree is built by the content provider, the following method on NavigatorContentExtension would serve this purpose:
ITreeContentProvider getContentProvider()
The returned content provider would need to consult the viewer state to see if a team context is available, where ISynchronizationContext is the interface that defines the team context. The model would be responsible for displaying a model tree that includes relevant model objects that may not exist locally but are part of the team operation.
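As a rough illustration of the idea, the content provider can look up the team context in the viewer state and fall back to local-only behavior when none is present. All types and the property key below are simplified stand-ins, not the proposed API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the team operation stores its synchronization context
// in the viewer state, and the model's content provider consults it when
// deciding which elements to show in the tree.
public class TeamAwareContentProvider {

    /** Stand-in for the team context interface described in this document. */
    public interface ISynchronizationContext {
        boolean existsRemotelyOnly(String path); // e.g. an incoming addition
    }

    /** Stand-in for per-viewer state made available to the content extension. */
    public static class ViewerState {
        private final Map<String, Object> properties = new HashMap<>();
        public void setProperty(String key, Object value) { properties.put(key, value); }
        public Object getProperty(String key) { return properties.get(key); }
    }

    public static final String TEAM_CONTEXT_KEY = "org.example.teamContext"; // assumed key

    /**
     * Decide whether a model element for the given path should appear in the
     * tree: it appears if it exists locally, or if the team context reports
     * it as a remote-only resource (incoming addition / outgoing deletion).
     */
    public static boolean shouldShow(ViewerState state, String path, boolean existsLocally) {
        if (existsLocally)
            return true;
        Object context = state.getProperty(TEAM_CONTEXT_KEY);
        if (context instanceof ISynchronizationContext sync)
            return sync.existsRemotelyOnly(path);
        return false; // outside a team operation: only the local workspace
    }
}
```

Outside a team operation the provider sees only the local workspace; within one, remote-only resources become visible.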
In addition, the ability to decorate model elements with their team state is
required. Adding the following method to NavigatorContentExtension
would provide this capability:
ICommonLabelDecorator getLabelDecorator()
The provided decorator would need to consult the team context that is available from the viewer state in order to determine the proper decorations for each model element.
The other remaining requirement is filtering based on team state. Filtering is not as well defined in the Common Navigator proposal, but an approach similar to the one described for the other two requirements could be used to provide a filter that filters on team state.
The above is intended to provide an idea of what is required rather than the exact solution. The final solution will depend on the final shape of the Common Navigator.
The ISynchronizationContext API below could be used to provide the team context to a model provider. It makes use of the following API pieces:
SyncInfo contains a description of the synchronization state of a file system resource. The synchronization state includes a direction (incoming, outgoing or conflicting) and a change type (addition, deletion or change).
SyncInfoTree contains a description of all of the resources that are out-of-sync.
ISynchronizeScope defines the input used to scope the synchronization. It has a set of root resources and a containment check to define whether a resource that is a child of one of the roots is contained in the scope. Particular subclasses may provide additional information (e.g. the set of resource mappings that define the scope).
The model provider can use this information to determine what model tree to build, the synchronization state of model elements and what additional elements need to be displayed.
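To illustrate how a SyncInfo-style description can combine a direction with a change type, here is a simplified sketch. The constants and bit layout are illustrative and are not the actual org.eclipse.team.core values:

```java
// Simplified sketch of encoding a synchronization state as a single int:
// a direction (incoming, outgoing, conflicting) OR'd with a change type
// (addition, deletion, change). Constants here are illustrative only.
public class SyncState {
    // change type (low bits)
    public static final int ADDITION = 1;
    public static final int DELETION = 2;
    public static final int CHANGE   = 3;
    // direction (high bits)
    public static final int INCOMING    = 4;
    public static final int OUTGOING    = 8;
    public static final int CONFLICTING = INCOMING | OUTGOING;

    public static final int CHANGE_MASK    = 3;
    public static final int DIRECTION_MASK = 12;

    /** Combine a direction constant with a change-type constant. */
    public static int kind(int direction, int changeType) {
        return direction | changeType;
    }

    /** Render a kind as human-readable text, e.g. "incoming addition". */
    public static String describe(int kind) {
        String dir = switch (kind & DIRECTION_MASK) {
            case INCOMING -> "incoming";
            case OUTGOING -> "outgoing";
            case CONFLICTING -> "conflicting";
            default -> "in-sync";
        };
        String change = switch (kind & CHANGE_MASK) {
            case ADDITION -> "addition";
            case DELETION -> "deletion";
            case CHANGE -> "change";
            default -> "none";
        };
        return dir + " " + change;
    }
}
```

A model provider querying the sync-info tree would use masks like these to decide, for instance, whether an element is an incoming addition that must be shown even though it does not exist locally.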
/**
 * Allows a model provider to build a view of their model that includes
 * synchronization information with a remote location (usually a repository).
 *
 * The scope of the context is defined when the context is created. The creator
 * of the scope may affect changes on the scope which will result in property
 * change events from the scope and may result in sync-info change events from
 * the sync-info tree. Clients should note that it is possible that a change in
 * the scope will result in new out-of-sync resources being covered by the scope
 * but not result in a sync-info change event from the sync-info tree. This can
 * occur because the set may already have contained the out-of-sync resource
 * with the understanding that the client would have ignored it. Consequently,
 * clients should listen to both sources in order to guarantee that they update
 * any dependent state appropriately.
 *
 * This interface is not intended to be implemented by clients.
 *
 * @since 3.2
 */
public interface ISynchronizationContext {

    /**
     * Synchronization type constant that indicates that
     * the context is a two-way synchronization.
     */
    public final static String TWO_WAY = "two-way"; //$NON-NLS-1$

    /**
     * Synchronization type constant that indicates that
     * the context is a three-way synchronization.
     */
    public final static String THREE_WAY = "three-way"; //$NON-NLS-1$

    /**
     * Return the scope of this synchronization context. The scope determines
     * the set of resources to which the context applies. Changes in the scope
     * may result in changes to the sync-info available in the tree of this
     * context.
     *
     * @return the set of mappings for which this context applies
     */
    public ISynchronizeScope getScope();

    /**
     * Return a tree that contains SyncInfo nodes for resources
     * that are out-of-sync. The tree will contain sync-info for any out-of-sync
     * resources that are within the scope of this context. The tree
     * may include additional out-of-sync resources, which should be ignored by
     * the client. Clients can test for inclusion using the method
     * {@link ISynchronizeScope#contains(IResource)}.
     *
     * @return a tree that contains a SyncInfo node for any
     *     resources that are out-of-sync
     */
    public SyncInfoTree getSyncInfoTree();

    /**
     * Returns synchronization info for the given resource, or null
     * if there is no synchronization info because the resource is not a
     * candidate for synchronization.
     *
     * Note that sync info may be returned for non-existing resources or for
     * resources which have no corresponding remote resource.
     *
     * This method will be quick. If synchronization calculation requires
     * content from the server, it must be cached when the context is created
     * or refreshed. A client should call refresh before calling this method
     * to ensure that the latest information is available for computing the
     * sync state.
     *
     * @param resource the resource of interest
     * @return sync info
     * @throws CoreException
     */
    public SyncInfo getSyncInfo(IResource resource) throws CoreException;

    /**
     * Return the synchronization type. A type of TWO_WAY
     * indicates that the synchronization information (i.e.
     * SyncInfo) associated with the context will also be
     * two-way (i.e. there is only a remote but no base involved in the
     * comparison used to determine the synchronization state of resources).
     * A type of THREE_WAY indicates that the synchronization
     * information will be three-way and include the local, base (or ancestor)
     * and remote.
     *
     * @return the type of merge to take place
     *
     * @see org.eclipse.team.core.synchronize.SyncInfo
     */
    public String getType();

    /**
     * Dispose of the synchronization context. This method should be
     * invoked by clients when the context is no longer needed.
     */
    public void dispose();

    /**
     * Refresh the context in order to update the sync-info to include the
     * latest remote state. Any changes will be reported through the change
     * listeners registered with the sync-info tree of this context. Changes to
     * the set may be triggered by a call to this method or by a refresh
     * triggered by some other source.
     *
     * @see SyncInfoSet#addSyncSetChangedListener(ISyncInfoSetChangeListener)
     * @see org.eclipse.team.core.synchronize.ISyncInfoTreeChangeEvent
     *
     * @param traversals the resource traversals which indicate which resources
     *     are to be refreshed
     * @param flags additional refresh behavior. For instance, if
     *     RemoteResourceMappingContext.FILE_CONTENTS_REQUIRED
     *     is one of the flags, this indicates that the client will be
     *     accessing the contents of the files covered by the traversals.
     *     NONE should be used when no additional behavior is required
     * @param monitor a progress monitor, or null if progress
     *     reporting is not desired
     * @throws CoreException if the refresh fails. Reasons include:
     *     The server could not be contacted for some reason (e.g.
     *     the context in which the operation is being called must be
     *     short running). The status code will be
     *     SERVER_CONTACT_PROHIBITED.
     */
    public void refresh(ResourceTraversal[] traversals, int flags,
            IProgressMonitor monitor) throws CoreException;
}
Model tools in Eclipse are typically layered. At the very least, there is the model layer (e.g. Java) and the file-system layer (i.e. IResource). However, in some cases, there may be more than two layers (e.g. J2EE<->Java<->IResource).
There is already refactoring participant support in Eclipse which appears to meet several of the requirements logical models have. The original proposal for refactoring participation is described here. The implementation does vary slightly from what is in the proposal but the proposal is still a good description of the concepts involved.
Here is the summary of the features taken from the document:
One possibility was to support participation in operations at all levels. That is, JDT could participate in IResource level operations in order to react to resource level changes. For instance, Java could participate in a *.java file rename in the Resource Navigator and update any references appropriately (thus treating the file rename as a Java compilation unit, or CU, rename). This would lead to the following additional requirements:
Experiments were done by JDT in Eclipse 3.0 and the following observations were made for a package rename vs. a folder rename in which Java participates:
The next section addresses these issues by combining operation retargeting with participation.
To ensure that participants access models in a consistent state, all operations have to be executed on the highest-level model, and the operation has to describe what happens in the lower-level models in order to load the corresponding lower-level participants. For example, when renaming a CU, the rename refactoring also loads participants interested in file renames, since a CU rename renames the file in the underlying resource model. However, the system should help the user keep higher-level models consistent when manipulating lower-level models. One approach would be for the system to inform the user about such situations and allow the triggering of the higher-level operation instead. For example, a rename of a *.java file in the Resource Navigator could show a dialog telling the user that, for model consistency, the file is better renamed using the Java Rename refactoring, and asking whether the user wants to execute that action instead. Doing so has the nice side effect that models are not forced to use the LTK participant infrastructure; how to participate could be left open to the plug-in providing the model operations.
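The confirmation approach could be sketched like this, with all names hypothetical: registered higher-level models are consulted before a low-level rename proceeds, and the first model that claims the file supplies the operation the user is asked to run instead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the confirmation approach: before a plain file
// rename runs, higher-level models are asked whether they would rather
// handle the operation (e.g. a *.java rename becoming a Java CU rename).
public class RenameRetargeting {

    public interface ModelHandler {
        /** Return a higher-level operation name if this model wants the rename. */
        Optional<String> higherLevelRename(String filePath);
    }

    private final List<ModelHandler> handlers = new ArrayList<>();

    public void register(ModelHandler handler) { handlers.add(handler); }

    /**
     * Returns the suggestion the UI should confirm with the user, or empty if
     * the plain file rename can proceed without affecting a higher-level model.
     */
    public Optional<String> suggestionFor(String filePath) {
        for (ModelHandler h : handlers) {
            Optional<String> op = h.higherLevelRename(filePath);
            if (op.isPresent())
                return op;
        }
        return Optional.empty();
    }
}
```

When several peer models claim the same file (the WSDL case discussed below), a registry like this would need a user choice rather than first-wins.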
One potential complication arises when multiple models want to "own" or "control" a resource. This is less of an issue if one is a higher level model built on top of a lower level one. For instance, a J2EE model may override the Java model and assume ownership of any Java files that are J2EE artifacts, such as EJBs. However, problems arise if the two models are peers. For instance, there may be several models that are generated from a WSDL (web services) descriptor file. The user may need to pick which model gets control for operations performed directly on the resource.
Note that this feature area has a great deal of overlap with the Improve Action Contributions work being proposed by the UI team.
It is not clear that operation retargeting is desirable. That is, if a user performs a delete on a file, it may be disconcerting if the delete is actually performed on an EJB that consists of several files. An alternate approach is to detect when an operation on a lower level model may have an effect on a higher level model and ask the user to confirm that they really do want to perform the operation on the lower level model.
The support for having Team operations appear in the context menu of logical elements is based on ResourceMappings. This support was available as non-API in 3.1 and is described in the Support Logical Resources - Resource Mappings document. Here is a summary of what is required for this:
A RemoteResourceMappingContext that gives the model access to the remote state and contents of the files involved in the operation.
The ability for a model to determine the complete set of files included in a mapping, given a RemoteResourceMappingContext. In many cases, this is straightforward and doesn't require the context at all. In others, the model may need to be able to query the file structure or file contents from the context in order to determine which files need to be included.
A RemoteResourceMappingContext
is a means to allow the model to
see the state of the repository at a particular point in time. There are many
different terms used by different repository tools to identify this type of
view of the repository including version, branch, configuration, view, snapshot,
or baseline. The type of operation being performed dictates what file states
are accessible from the RemoteResourceMappingContext
. For example,
when updating the local workspace to match the latest contents on the server,
the context would need to allow the client to access the latest contents for
remote files whose content differs from their local counterparts in order to
allow the model to determine if there are additional files that should be included
in the update. When committing, the context would need to provide the ancestor
state of any locally modified files so that the model could ascertain if there
are any outgoing deletions.
There are still some outstanding issues that need to be solved in this area.
The following sections describe proposed solutions to these issues.
In order to ensure that the proper resources are included as the input to a team operation, we introduce the concept of a model provider. A model provider has the following:
Model providers would be used in the following way to ensure that the proper resources were included in a team operation.
This mechanism can be used to ensure that operations performed directly on files include all the files that constitute a model, and also that the effects can be displayed to the user in a form consistent with the higher-level models that are affected. This will be covered separately in the Displaying Model Elements in Team Operations section.
Most Team operations have multiple steps. To illustrate this, consider an update operation. The steps of the operation, considering the inclusion of resource mappings and other facilities described in this proposal are:
Each of these steps may involve separate calls from the repository tooling to the model tooling. The model would not want to recompute the remote model state during each step but would rather cache any computed state until the entire operation was completed. One means of supporting this is to add listener support to the team context associated with the operation and have an event fired when the operation using the context is completed.
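A minimal sketch of this caching pattern (all names assumed): remote state computed during one step is reused by later steps and discarded when the context signals completion.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the model caches remote state computed during one step of a
// multi-step team operation, and clears the cache when the team context
// fires its "operation completed" event.
public class RemoteStateCache {

    /** Stand-in for a listener the model registers with the team context. */
    public interface ContextListener {
        void operationCompleted();
    }

    private final Map<String, Object> cache = new HashMap<>();
    private int computations = 0;

    /** Listener to register with the team context of the operation. */
    public final ContextListener listener = cache::clear;

    /** Return cached remote state, computing it at most once per operation. */
    public Object remoteState(String path) {
        return cache.computeIfAbsent(path, this::computeRemoteState);
    }

    private Object computeRemoteState(String path) {
        computations++; // stands in for an expensive server round trip
        return "remote-state-of:" + path;
    }

    public int computations() { return computations; }
}
```

Each step of the operation calls remoteState freely; the server round trip happens once, and the cache is released when the operation-completed event fires.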
The contents of views in Eclipse are determined by a content provider. In most cases, the structure of what is displayed matches the model structure but in some cases it does not. One such example is the package explorer when it is in hierarchy mode. In this mode, the children of a package are its files and its subpackages. When the user performs an operation on a package in this view, they may reasonably expect the operation to be performed on the package and its subpackages. However, the package adapts to a resource mapping that only includes the files and not the subpackages.
The simplest solution to this problem is to require that content providers wrap objects when the objects need adaptation to resource mappings and are displayed in a way that does not match the model structure. In our Java example, this would mean creating a new model object (e.g. DeepJavaPackage) whose children were the Java classes and subpackages. The advantage of this approach is that the process of converting a model object to a resource mapping can be performed by the model without any knowledge of the view configuration. Some of the concerns of this approach are:
Another solution to this problem would be to:
The advantage of this approach is that model tooling can still use content providers to provide alternate views of their model without wrapping model objects or providing new model objects. The disadvantages are:
Given the complexity of the second solution, the first is preferable from an implementation standpoint. However, we need to determine if clients can accept this solution.
This section describes the support that is proposed to be added in Eclipse 3.2 to support the decoration of logical model elements. In Eclipse 3.1 and prior, logical model elements could still be decorated. However, the only inter-model adaptability support was for models whose elements had a one-to-one mapping to file system resources (i.e. IResource). Here is a summary of the issues that we are hoping to address in 3.2:
Adaptability of logical model elements to ResourceMapping.
ResourceMapping decoration makes use of the general adaptability mechanism but also requires support for triggering label updates for any logical element whose decoration depends on the state of one or more resources (see bug 86493).
As stated above, point one has already been completed. The following sections describe potential solutions to the remaining two problems. The first two sections describe potential solutions using the existing architecture while the third presents a unified solution that makes use of the team context described in the Common Navigator section.
Some repository decorations are propagated to the root of any views that display elements shared in the repository. This is done in order to provide useful information to the user. For instance, the "shared with CVS" decoration (by default, the icon) should appear on any object on which a CVS operation can be performed. Similarly, the dirty decoration (by default, a ">" prefix) should appear on any view items containing a dirty child in order to help the user find dirty items. For the purpose of discussion, we will use dirty decoration when describing our proposal but the same will hold true for other decorations that require propagation.
When a file becomes dirty, a label change must be issued for any items visible to the user whose dirty state has changed or that are a direct or indirect parent of such an item. When we are dealing strictly with file system resources, this is straightforward. When a file becomes dirty, a label change is issued for the file and the folders and project containing the file. Any views that are displaying these items will then update their labels. It is the responsibility of models that have a one-to-one mapping from files to model elements to update the labels of the corresponding model elements as well. For instance, JDT maps the file, folder and project label changes to label changes on Java model elements such as Compilation Units, Packages and Java Projects so that decorations in the Packages Explorer get updated properly.
However, problems arise for logical model elements that do not have a one-to-one mapping to file resources. For instance, consider a working set that contains several projects. The repository provider does not know that the working set is being displayed to the user, so it does not issue a label update for it. The view displaying the working set does not know when the state of the children impacts the label of the parent. It could try to fake it by updating the working set label whenever the labels of any children are updated, but this could result in many unnecessary and potentially costly updates.
The following points summarize the aspects of the problem that should be considered when showing repository decorations in a model view:
It is interesting to note that the requirement in point 2 can be solved using the Team Operation Participation mechanism described previously. However, addressing the last two points will require additional support. The next two sections describe a potential solution. It is useful to note that any solution we come up with must consider the broader context of the direction in which decoration support in Eclipse will go. We have tried to consider this when drafting this proposal.
Currently, a decoration change is broadcast implicitly by issuing label change events for the elements that need redecoration. From a repository tooling standpoint, this means generating a label change on any changed file resources (and their ancestor resources if the decoration that represents the changed state is propagated to parents). It is then up to the model tooling to translate these label changes on file resources to label changes on the appropriate model elements.
An alternative approach would be to make the decoration change notification explicit. Thus, the repository tooling could issue a decoration change event that contains the resources that need redecoration. It would then be up to any views that are displaying a repository decoration to update the labels of any elements appropriately. This would mean determining the set of elements that correspond to the given resources.
As stated in point 4 above, a possible optimization is to only issue the label change if the state of the decoration has changed. This can be accomplished by including, as part of the change notification event, a property evaluator that evaluates and caches the properties for each element it is provided and indicates whether a change has occurred which requires the item to be redecorated.
In the previous section we mentioned the possibility of having a property evaluator that indicated whether a label change was required. This evaluator could also indicate whether a reevaluation for the parent of the element is required. That is, if the evaluator calculated that the dirty state of the element had changed, it could indicate that the label update was required and that the evaluator should be executed with the parent element as input in order to determine if a label change was required for the parent and if the process should be repeated for the parent element's parent.
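The evaluator-driven walk can be sketched as follows. The structures here are simplified stand-ins for a real resource and model tree: when a file's dirty state changes, the element is re-evaluated, and only if its computed decoration changed does the walk continue to the parent.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of evaluator-driven dirty-state propagation: walk up the parent
// chain, recompute the dirty decoration at each level, and stop as soon as
// a cached decoration is unchanged. Names are illustrative.
public class DirtyPropagation {

    private final Map<String, String> parentOf = new HashMap<>();
    private final Set<String> dirtyFiles = new HashSet<>();
    private final Map<String, Boolean> cachedDirty = new HashMap<>();

    public void setParent(String child, String parent) { parentOf.put(child, parent); }

    /** An element is dirty if it is a dirty file or has a dirty descendant. */
    private boolean computeDirty(String element) {
        if (dirtyFiles.contains(element))
            return true;
        for (Map.Entry<String, String> e : parentOf.entrySet())
            if (element.equals(e.getValue()) && computeDirty(e.getKey()))
                return true;
        return false;
    }

    /**
     * Mark a file dirty and return the elements whose labels need updating,
     * stopping the upward walk as soon as a cached decoration is unchanged.
     */
    public List<String> fileBecameDirty(String file) {
        dirtyFiles.add(file);
        List<String> needsUpdate = new ArrayList<>();
        String element = file;
        while (element != null) {
            boolean dirty = computeDirty(element);
            Boolean previous = cachedDirty.put(element, dirty);
            if (previous != null && previous == dirty)
                break; // decoration unchanged: parents need no update
            needsUpdate.add(element);
            element = parentOf.get(element);
        }
        return needsUpdate;
    }
}
```

The early stop is the optimization from point 4 above: a second dirty file in an already-dirty container triggers only one label update rather than a full walk to the root.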
This calculation could be long running. Thus, it should be performed in a background job with minimal use of the UI thread. This may be a bit tricky as JFace viewers are not threadsafe (i.e. they are mostly invoked from the UI thread). The current JFace viewers persist the tree of elements in the SWT tree items so accessing them needs to be run in the UI thread. Also, label changes need to be run in the UI thread. These factors must be considered when designing a solution.
In this section we present API on ResourceMapping that supports change determination on logical model elements. With this API, the algorithm used by the decorator would be to call the getChangeState method on the resource mapping to obtain a change state, given a remote mapping context that does not allow contact with the server.
In addition to the API on ResourceMapping, it would also be beneficial to provide an abstract lightweight decorator that team providers can use to get the above described behavior.
Here are the proposed API additions to the ResourceMapping
class.
Note that there would be additional API added to ResourceMapping
and RemoteResourceMappingContext
to aid models in their calculation
of the change state.
public abstract class ResourceMapping {

    /**
     * Constant returned by calculateChangeState to indicate that
     * the model object of this resource mapping does not differ from the
     * corresponding object in the remote location.
     */
    public static final int NO_DIFFERENCE = 0;

    /**
     * Constant returned by calculateChangeState to indicate that
     * the model object of this resource mapping differs from the corresponding
     * object in the remote location.
     */
    public static final int HAS_DIFFERENCE = 1;

    /**
     * Constant returned by calculateChangeState to indicate that
     * the model object of this resource mapping may differ from the
     * corresponding object in the remote location. This is returned when
     * getChangeState was not provided with a progress monitor and the remote
     * state of the object was not cached.
     */
    public static final int MAY_HAVE_DIFFERENCE = 2;

    /**
     * Calculate the change state of the local object when compared to its
     * remote representation. If server contact is required to properly
     * calculate the state but is not allowed (as indicated by an exception
     * with the code RemoteResourceMappingContext.SERVER_CONTACT_PROHIBITED),
     * MAY_HAVE_DIFFERENCE should be returned. Otherwise
     * HAS_DIFFERENCE or NO_DIFFERENCE should be
     * returned as appropriate. Subclasses may override this method.
     *
     * It is assumed that, when canContactServer is
     * false, the methods
     * RemoteResourceMappingContext#contentDiffers and
     * RemoteResourceMappingContext#fetchMembers of the context
     * provided to this method can be called without contacting the server.
     * Clients should ensure that this is how the context they provide behaves.
     *
     * @param context a resource mapping context
     * @param monitor a progress monitor or null. If
     *     null is provided, the server will not be
     *     contacted and MAY_HAVE_DIFFERENCE will be
     *     returned if the change state could not be properly determined
     *     without contacting the server.
     * @return the calculated change state: HAS_DIFFERENCE if
     *     the object differs, NO_DIFFERENCE if it does not,
     *     or MAY_HAVE_DIFFERENCE if server contact is
     *     required to calculate the state
     * @throws CoreException
     */
    public int calculateChangeState(RemoteResourceMappingContext context,
            IProgressMonitor monitor) throws CoreException {
        try {
            int changeState = ...
            return changeState;
        } catch (CoreException e) {
            if (e.getStatus().getCode() == RemoteResourceMappingContext.SERVER_CONTACT_PROHIBITED)
                return MAY_HAVE_DIFFERENCE;
            throw e;
        }
    }
}
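A lightweight decorator could then map the three change states to label prefixes, for example. The rendering choices below are assumptions for illustration, not part of the proposal:

```java
// Sketch of how a lightweight decorator could use the calculateChangeState
// result: render ">" for a known difference, and a hedged marker when the
// state could not be determined without contacting the server. The constants
// mirror the proposed ResourceMapping API; the prefixes are assumed.
public class ChangeStateDecorator {
    public static final int NO_DIFFERENCE = 0;
    public static final int HAS_DIFFERENCE = 1;
    public static final int MAY_HAVE_DIFFERENCE = 2;

    /** Prefix applied to a model element's label for a given change state. */
    public static String prefixFor(int changeState) {
        return switch (changeState) {
            case HAS_DIFFERENCE -> "> ";
            case MAY_HAVE_DIFFERENCE -> "? "; // recheck later in a background job
            default -> "";
        };
    }
}
```

A MAY_HAVE_DIFFERENCE result would typically be rechecked in a background job that is allowed to contact the server, as discussed above.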
The complexities described in the previous sections arise because of the separation of models and decorators. An alternate approach would be to use the team context discussed in the Common Navigator section for any model view. Such support would work something like this.
The details would be the same as those discussed in the Common Navigator section.
This would simplify the decorator update story, as the view would then listen to both resource deltas and team deltas and update model elements and labels appropriately. The model will have enough information available from the team context to make decisions about propagation in any way it deems appropriate. The models will also be able to determine the change state of their model elements for themselves, so no additional API on ResourceMapping would be required.
There are two types of merges that can take place: automatic and manual. Automatic merges (or auto-merges) are merges that either do not contain file level conflicts or whose file level conflicts can be resolved without user intervention. Manual merges require the user to inspect the conflicting changes and decide how to resolve them. In either case, involvement of the model is beneficial. For auto-merges, model knowledge can increase the likelihood of an auto-merge being possible, and for manual merges, model involvement can enhance how the merges are displayed and performed.
In this section we describe the API we propose to add to support model merging:

- IResourceMappingMerger: an interface that model tooling implements to allow repository tooling to perform headless merges on resource mappings when possible. The merger will also indicate when headless merges are not possible.
- IResourceMappingEditorInputFactory: an interface that model tooling implements to allow resource mappings to be merged manually.
- MergeContext: an API which allows the model tooling to interact with the repository tooling in order to perform model level merges.

Given a set of resource mappings, the repository tooling needs to be able to obtain the model tooling support classes which will perform the merging. This will require:

- a getModelId method on ResourceMapping to associate a model id with each resource mapping.

The steps for performing an optimistic merge would then look something like this:

- The resource mappings are grouped by model id.
- An IResourceMappingMerger is obtained and invoked for each group.
- For any elements that could not be merged automatically, an IResourceMappingEditorInputFactory is used to obtain a set of editor inputs for these elements.

When the model is asked to merge elements, either automatically or manually, it will need access to the remote state of the model. API for this is also being proposed.
In this section, we propose some API that will allow for model based auto-merging. Before we do that, we should first mention that Eclipse has a pluggable IStreamMerger (introduced in 3.0) for supporting model based merges when there is a one-to-one correspondence between a file and a model object. It is not currently used by CVS (or, as far as we know, any other repository provider), but it can be part of the solution we propose here.
The proposed API to support model level merges consists of the following:

- IResourceMappingMerger: similar to the IStreamMerger but obtained from resource mappings and provided with a MergeContext from which the model can obtain any ancestor and remote file contents that it requires.
- MergeContext: provides access to the ancestor and remote file contents using RemoteResourceMappingContexts and also has helper methods for performing file merges and for signaling the context that a file has been merged so that the file can be marked up-to-date.

Below is what the IResourceMappingMerger would look like. It contains a merge method whose semantics differ depending on the type of the merge context. A merge is performed for three-way synchronizations and a replace occurs for two-way contexts. The model can determine which model elements need to be merged by consulting the merge context, which is presented in the next section.
/**
 * The purpose of this interface is to provide support to clients (e.g.
 * repository providers) for model level auto-merging. It is helpful in the
 * cases where a file may contain multiple model elements or a model element
 * consists of multiple files. It can also be used for cases where there is
 * a one-to-one mapping between model elements and files, although
 * IStreamMerger can also be used in that case.
 *
 * Clients should determine if a merger is available for a resource mapping
 * using the adaptable mechanism as follows:
 *
 *   Object o = mapping.getModelProvider().getAdapter(IResourceMappingMerger.class);
 *   if (o instanceof IResourceMappingMerger) {
 *       IResourceMappingMerger merger = (IResourceMappingMerger) o;
 *       ...
 *   }
 *
 * Clients should group mappings by model provider when performing merges.
 * This will give the merge context an opportunity to perform the merges
 * optimally.
 *
 * @see org.eclipse.compare.IStreamMerger
 * @see org.eclipse.team.internal.ui.mapping.IResourceMappingManualMerger
 * @since 3.2
 */
public interface IResourceMappingMerger {

    /**
     * Attempt to automatically merge the mappings of the merge context
     * (MergeContext#getMappings()). The merge context provides access to
     * the out-of-sync resources (MergeContext#getSyncInfoTree()) associated
     * with the mappings to be merged. However, the set of resources may
     * contain additional resources that are not part of the mappings being
     * merged. Implementors of this interface should use the mappings to
     * determine which resources to merge and what additional semantics can
     * be used to attempt the merge.
     *
     * The type of merge to be performed depends on what is returned by the
     * MergeContext#getType() method. If the type is MergeContext.TWO_WAY,
     * the merge will replace the local contents with the remote contents,
     * ignoring any local changes. For THREE_WAY, the base is used to
     * attempt to merge remote changes with local changes.
     *
     * Auto-merges should be performed for as many of the context's resource
     * mappings as possible. If merging was not possible for one or more
     * mappings, these mappings should be returned in a MergeStatus whose
     * code is MergeStatus.CONFLICTS and which provides access to the
     * mappings which could not be merged. Note that it is up to the model
     * to decide whether it wants to break one of the provided resource
     * mappings into several sub-mappings and attempt auto-merging at that
     * level.
     *
     * @param mergeContext a context that provides access to the resources
     *            involved in the merge. The context must not be null.
     * @param monitor a progress monitor
     * @return a status indicating the results of the operation. A code of
     *         MergeStatus.CONFLICTS indicates that some or all of the
     *         resource mappings could not be merged. The mappings that
     *         were not merged are available using
     *         MergeStatus#getConflictingMappings()
     * @throws CoreException if errors occurred
     */
    public IStatus merge(IMergeContext mergeContext, IProgressMonitor monitor)
            throws CoreException;
}
It is interesting to note that partial merges are possible. In such a case, the merge method must be sure to return a MergeStatus that contains any resource mappings for which the merge failed. These mappings could match some of the mappings passed in, or could be mappings of sub-components of the larger mapping for which the merge was attempted, at the discretion of the implementer.
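A partial merge result might be assembled as sketched below. The type is a simplified stand-in for the proposed MergeStatus; the CONFLICTS code and getConflictingMappings accessor follow the names used above, and mappings are represented as plain strings for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-in for the proposed MergeStatus: records which resource
// mappings (represented here as strings) could not be auto-merged.
class PartialMergeStatus {
    public static final int OK = 0;
    public static final int CONFLICTS = 1; // mirrors MergeStatus.CONFLICTS

    private final List<String> conflicting;

    private PartialMergeStatus(List<String> conflicting) {
        this.conflicting = conflicting;
    }

    public int getCode() {
        return conflicting.isEmpty() ? OK : CONFLICTS;
    }

    public List<String> getConflictingMappings() {
        return Collections.unmodifiableList(conflicting);
    }

    // Build a status from the subset of mappings whose merge failed. The
    // failed mappings may be the originals or model-chosen sub-mappings.
    public static PartialMergeStatus forFailures(List<String> failedMappings) {
        return new PartialMergeStatus(new ArrayList<>(failedMappings));
    }
}
```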
In order for repository tooling to support model level merging, it must be able to provide an IMergeContext. The merge context provides:

- the IMergeContext extends the ISynchronizationContext introduced in the Common Navigator section.
- a ResourceMappingScope that provides access to the resource mappings involved in the merge.

The following are the proposed API methods of the merge context.
/**
 * Provides the context for an IResourceMappingMerger or a model specific
 * synchronization view that supports merging.
 *
 * TODO: Need to have a story for folder merging
 *
 * This interface is not intended to be implemented by clients.
 *
 * @see IResourceMappingMerger
 * @since 3.2
 */
public interface IMergeContext extends ISynchronizationContext {

    /**
     * Method that allows the model merger to signal that the file in
     * question has been completely merged. Model mergers can call this
     * method if they have transferred all changes from a remote file to a
     * local file and wish to signal that the merge is done. This will allow
     * repository providers to update the synchronization state of the file
     * to reflect that the file is up-to-date with the repository.
     *
     * Clients should not implement this interface but should instead
     * subclass MergeContext.
     *
     * @see MergeContext
     *
     * @param file the file that has been merged
     * @param monitor a progress monitor
     * @return a status indicating the results of the operation
     */
    public abstract IStatus markAsMerged(IFile file, IProgressMonitor monitor);

    /**
     * Method that can be called by the model merger to attempt a
     * file-system level merge. This is useful for cases where the model
     * merger does not need to do any special processing to perform the
     * merge. By default, this method attempts to use an appropriate
     * IStreamMerger to merge the files covered by the provided traversals.
     * If a stream merger cannot be found, the text merger is used. If this
     * behavior is not desired, sub-classes may override this method.
     *
     * This method does a best-effort attempt to merge all the files covered
     * by the provided traversals. Files that could not be merged will be
     * indicated in the returned status. If the status returned has the code
     * MergeStatus.CONFLICTS, the list of failed files can be obtained by
     * calling the MergeStatus#getConflictingFiles() method.
     *
     * Any resource changes triggered by this merge will be reported through
     * the resource delta mechanism and the sync-info tree associated with
     * this context.
     *
     * TODO: How do we handle folder removals generically?
     *
     * @see SyncInfoSet#addSyncSetChangedListener(ISyncInfoSetChangeListener)
     * @see org.eclipse.core.resources.IWorkspace#addResourceChangeListener(IResourceChangeListener)
     *
     * @param infos the sync infos to be merged
     * @param monitor a progress monitor
     * @return a status indicating success or failure. A code of
     *         MergeStatus.CONFLICTS indicates that a file contains
     *         non-mergable conflicts and must be merged manually.
     * @throws CoreException if an error occurs
     */
    public IStatus merge(SyncInfoSet infos, IProgressMonitor monitor)
            throws CoreException;

    /**
     * Method that can be called by the model merger to attempt a file level
     * merge. This is useful for cases where the model merger does not need
     * to do any special processing to perform the merge. By default, this
     * method attempts to use an appropriate IStreamMerger to perform the
     * merge. If a stream merger cannot be found, the text merger is used.
     * If this behavior is not desired, sub-classes may override this
     * method.
     *
     * @param info the sync info for the file to be merged
     * @param monitor a progress monitor
     * @return a status indicating success or failure. A code of
     *         MergeStatus.CONFLICTS indicates that the file contains
     *         non-mergable conflicts and must be merged manually.
     * @see org.eclipse.team.ui.mapping.IMergeContext#merge(org.eclipse.core.resources.IFile, org.eclipse.core.runtime.IProgressMonitor)
     */
    public IStatus merge(SyncInfo info, IProgressMonitor monitor);
}
Providing the capability to manually merge a set of model elements requires two things:
The first requirement is met by the team context proposal outlined in the Common Navigator section. The second can be met by giving such a view access to the merge context discussed in the Model Level Merging section. This context provides enough state and functionality to display a two-way or three-way comparison and perform the merge.
There are two types of displays that a Team operation may need:
Both these requirements are met by the team context proposal outlined in the Common Navigator section.
There are two aspects to consider for this feature:
In the following sections we outline some specific scenarios and describe what would be required to support them.
Logical model browsing in the repository would need to be rooted at the project, as that is where the associations between the resources and the model providers are persisted. This leads to the following two requirements:
There are two options for providing the remote project contents to the model provider.
The second option is definitely preferable from a model provider standpoint because of the potential to reuse existing code. There are, however, a few things to consider:
- Providing an IProject does allow the reuse of model building code. However, that code may have been written with the assumption that the file contents are all local. Having an IProject that is a view of remote state may introduce some performance problems.
- Model code may assume that a local file (java.io.File) can be obtained from an IFile using getLocation().toFile().
- The IProject will be read-only. The model code will need to handle this. Ideally, this would be identified up front so the model provider could indicate to the user which operations were not available. In the absence of this, the model provider would need to fail gracefully on failed writes.

Several of the issues mentioned above would benefit from having an explicit distinction between projects that are remote views of a project's state and those that are locally loaded in order to perform modifications.
When browsing a remote logical model version, the user may then want to compare what they see with another version. If the browsing is done using a remote IProject, then the comparison is no different than if it were performed between a local copy of the model and a remote resource mapping context.
The user wants to see the change history of a particular model element. In order to do that, we need the following.
The above is straightforward if there is a one-to-one mapping between files and model elements. Repositories can typically provide the history for a particular file efficiently. The model could then interpret each file revision as a model change (i.e. the model provider could show a list of model element changes using the timestamps of file changes as the timestamps for the model element changes). If a user opened a particular change, the model provider would then load and interpret the contents of the file in order to display it in an appropriate way.
In the case where there are multiple model objects in a single file (many-to-one), the model provider would need to interpret the contents of the file in order to determine if the model element of interest actually changed in any particular file change. This could result in potentially many file content fetches in a way for which repository tooling is not optimized (i.e. repository tooling is optimized to give you a time slice, not to retrieve all the revisions of the same file). One way to deal with this would be to have the model provider use the file history as the change history for the model element, with the understanding that the element may not have changed between entries. Another possibility would be to do the computation once and cache the result (i.e. the points at which each element in the file changed) to be used in the future. As new revisions of a file are released, the cache could be updated to contain the latest change history. This would only need to consider the newest revision, as the history doesn't change. It may even be possible to share this history description in the project.
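The "compute once and cache" idea for the many-to-one case can be sketched as follows: walk the file's revisions once, extract the element of interest from each revision, and record the revisions at which it actually changed. The extraction function is a stand-in for whatever model-specific parsing would be needed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.Function;

// Sketch: derive a model element's change history from a file's revision
// history when the file contains many elements (many-to-one mapping).
class ElementHistory {

    // revisions are ordered oldest-to-newest; extractElement pulls the
    // element's content out of a full file revision (model-specific logic).
    static List<Integer> changedRevisions(List<String> revisions,
            Function<String, String> extractElement) {
        List<Integer> changed = new ArrayList<>();
        String previous = null;
        for (int i = 0; i < revisions.size(); i++) {
            String current = extractElement.apply(revisions.get(i));
            if (!Objects.equals(current, previous)) {
                changed.add(i); // the element differs from the prior revision
            }
            previous = current;
        }
        return changed;
    }
}
```

As new revisions are released, only the newest revision needs to be compared against the last cached element state to extend the result.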
The final case to consider is when a model element spans multiple files (one-to-many). If the files that make up a model element never change, then it is simply a case of looking at the history of each file involved and building the element history from that. However, it becomes more complicated if the number or location of the files that make up a model element can change. The calculation of the change history can then become quite expensive depending on how the files that make up a model element are determined. For example, determining what files make up a model element may require the contents of one or more files to be read. Thus, you end up in the same situation as the many-to-one case. The same solutions proposed for that case could also be used here.
The one-to-many case is interesting for another reason. Different repositories provide different types of history. For instance, CVS only provides file based history. In the UI, the history based CVS operations are only available for files but not for folders or projects. That's not to say that the history couldn't be obtained for a folder or project, it is just that it can be expensive to determine (i.e. would require transferring large amounts of information from the server). Other repositories could potentially provide higher level histories. For instance, Subversion treats commits atomically so the history of a project could be determined by obtaining the set of all commits that intersected with the project.
This is important because supporting Team operations on logical model elements blurs the distinction between files and folders. That is, logical model elements adapt to ResourceMappings which could be a part of a file, a complete file, a set of files, a single folder, a set of folders, etc. The question is whether the ability to see the history of a model element should be available for all model elements or for only some.
Supporting history on arbitrary model elements requires the repository to be able to produce a time slice for each interesting change. This may be possible for some repositories, such as Subversion, since it supports atomic commits. However, for others like CVS, there is no built-in way to determine all the files that belong to a single commit. This could potentially be deduced by looking at a set of file histories and grouping the changes by timestamp, but this would be a prohibitively expensive operation. Another possibility would be to present a reduced set of time slices based on version tags, but this has its own potential failings (i.e. tags are done at the file level as well, so there are no guarantees that a tag represents the complete time slice of a project).
Ideally, users would be able to browse their model structure in the repository and pick those items which they wish to transfer to their workspace (i.e. checkout). In Eclipse, projects are the unit of transfer between the repository and the local workspace. This has the following implications:
The majority of the work here would need to be done by the repository tooling. That is, they would need to provide remote browsing capabilities and support partial project loading if appropriate. The ability to support cross-project references would also need additional API in Team that allowed these relationships to be stated in such a way that they could be shared and converted back to a project.
Although not part of the Platform, it is worthwhile to mention the potential role of EMF in many of the areas touched by this proposal. For EMF models, much of the required implementation could be done at the EMF level, thus simplifying what models would need to do. Some possibilities are:
The following sections mention some of the issues we've come across when prototyping using EMF.
One of the requirements for supporting team operations on logical models is to be able to identify and compare model elements. By default, EMF uses object identity to indicate that two model elements are the same element. This works when you only have one copy of the model. However, for team operations, there can be multiple copies of the model (i.e. local, ancestor and remote). EMF does support the use of GUIDs (i.e. when XMI is used) but it is not the default.
This gives rise to another issue. Team operations can involve up to three copies of a model. Putting and keeping all three models in memory has performance implications. A means of identifying a model element without requiring that the entire model be loaded would be helpful.
Another issue is that EMF objects do not implement IAdaptable, but any object that adapts to a ResourceMapping must. One solution would be to have EObject implement IAdaptable, but this is not possible since EObject cannot have dependencies on Eclipse. This means that the owner of the model must ensure that each of their model objects that adapts to ResourceMapping implements IAdaptable and that its getAdapter method matches that found in org.eclipse.core.runtime.PlatformObject. Another option is to remove the assumption made by clients that only objects that implement IAdaptable can be adapted. This is tricky since anyone can be a client of the adaptable mechanism. We can ensure that the SDK gets updated but can make no guarantees about other clients.
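The first option, giving each model object a getAdapter that matches PlatformObject's, amounts to delegating to a shared adapter registry. The sketch below uses a toy registry in place of Platform.getAdapterManager() to show the shape of that delegation; the types here are simplified stand-ins, not the Eclipse API itself.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of the PlatformObject pattern: model objects implement a
// getAdapter that delegates to a shared registry rather than hard-coding
// knowledge of ResourceMapping. The REGISTRY stands in for the platform's
// adapter manager.
class AdaptableSketch {

    interface IAdaptable {
        Object getAdapter(Class<?> adapterType);
    }

    static final Map<Class<?>, Function<Object, Object>> REGISTRY = new HashMap<>();

    static class ModelElement implements IAdaptable {
        public Object getAdapter(Class<?> adapterType) {
            Function<Object, Object> factory = REGISTRY.get(adapterType);
            return factory == null ? null : factory.apply(this);
        }
    }
}
```

With this shape, adding ResourceMapping support later only requires registering an adapter factory; the model objects themselves do not change.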
In this section, we describe what some Team scenarios might look like with the logical model integration enhancement we have discussed in previous sections. We will describe the scenarios in terms of CVS.
In this scenario, the user selects one or more model elements and chooses Team>Update. Currently what happens is each file that is updated will get its new contents from the server. For files that have both local and remote modifications, the server attempts a clean merge, but if that is not possible, the file will end up containing CVS specific markup identifying the conflicting sections. For binary files, no merge is attempted. Instead, the old file is moved and the new file downloaded. In both these cases, it is the user's responsibility to resolve the conflicts: by editing the file in order to remove any obsolete lines and the CVS conflict markup, or by deciding which version of the binary file to keep, respectively. It should be noted that this "after-the-fact" conflict resolution will not be acceptable for many higher level models.
The goal of a Team>Update is to do an auto-merge if possible and only involve the user if there are conflicts that need to be resolved. For operations in the file model space, this can be done on a file-by-file basis. That is, an auto-merge can be attempted on each file individually, and only those files for which the auto-merge is not possible would require user intervention. This should be fairly straightforward to implement for CVS. The IStreamMerger interface that was added in Eclipse 3.0 can be used to determine whether an auto-merge is possible and to perform the merge if it is. The files for which an auto-merge is not possible could then be displayed in a dialog, compare editor or even the sync view in order to allow the user to resolve any conflicts.
It is not clear that this file-by-file approach would be adequate for merges involving higher level model elements. The reason for this is that it is possible for a model element to span files. Auto-merging one of those files while leaving another unmerged may corrupt the model on disk. The decision about when auto-merge is possible and when it is not can only be made by the model tooling. Therefore, some portion of the merge will need to be delegated to the model.
There are several sub-scenarios to consider:
- If a model has registered an IResourceMappingMerger with the platform, then the merge of the model elements belonging to that model will be delegated to the merger. The model merger will attempt an auto-merge at the model level, thus ensuring that the model on local disk is not corrupted. If an auto-merge of one or more elements is not possible, these will be returned to the Team operation for user intervention. The mechanics of this are described in more detail below.

When updating a model element, the merge may be possible entirely at the model level. In other words, if an IResourceMappingMerger is available for one or more resource mappings, the merge can be performed by the model without ever dropping down to a lower level merge (e.g. a file level merge). This makes the assumption that the model doing the merge will not do anything that corrupts lower level models. However, it does not ensure that higher level models will not be corrupted. Hence, ideally, the Team operation would still need to check for participants at higher model levels in order to include other resource mappings in the merge if required.
If no model level merge is available, the update will need to be performed at the file level. This means that participants at the file level must be queried for additional resource mappings, and the merges can then be performed on these files using the appropriate IStreamMerger.
Manual Merging
Model objects that cannot be merged automatically need to be merged manually. There are two main pieces required to create a UI to allow the user to perform the manual merge:
Both of these pieces must be available given a set of resource mappings. The adaptable mechanism should be adequate to provide these in whatever form they take. If either are absent, the manual merges can still be performed at the file level.
For repositories, check-ins or commits happen at the file level. Here are some considerations when supporting commits on logical model elements.
Ideally, what the user would like to see is all the files and model elements being committed arranged in such a way that the relationships between them are obvious. If additional elements to those that were originally selected are included in the commit, these should be highlighted in some manner.
Tagging in repositories happens at the file level and, at least in CVS, can only be applied to content that is in the repository. This leads to the following two considerations when tagging:
The above two points really require two different views. The first is much like the view used for committing where the user sees any outgoing changes but this time with a message indicating that it is the ancestor state of these elements that will be tagged. The second is just a model based view that highlights those elements that will be tagged but were not in the original selection.
Replacing is similar to Update but is not as complicated, as the local changes are discarded and replaced by the remote contents (i.e. no merging is required). However, there are the following considerations:
The requirements here are similar to tagging, except that the determination of additional elements is based on what the incoming changes are and, hence, could be displayed in a synchronization type view. There are similarities with update in the sense that the existence of an IResourceMappingMerger may mean that extra elements need not be affected at all.
As with Update, Replacing could be performed at the model level if the model has an associated IResourceMappingMerger. The mechanics would be similar to Update except that no manual merge phase would be required. Also, the model merger would either need a separate method (replace) or a flag on the merge method (ignoreLocalChanges) to indicate that a replace was occurring. When performing a replace, the ancestor context is not required.
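The second option, a flag on the merge method, might look like the simplified sketch below. The names and types are hypothetical, and the three-way branch is a naive stand-in; it only illustrates how a single entry point could serve both Update and Replace.

```java
// Hypothetical sketch: one merge entry point serving both Update and
// Replace, distinguished by an ignoreLocalChanges flag as suggested above.
class ReplaceSketch {

    static String merge(String base, String local, String remote, boolean ignoreLocalChanges) {
        if (ignoreLocalChanges) {
            return remote; // Replace: discard local changes; no ancestor needed
        }
        // Update: naive three-way merge stand-in
        if (local.equals(base)) return remote; // only the remote changed
        if (remote.equals(base)) return local; // only the local changed
        return null;                           // conflict: needs a manual merge
    }
}
```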
The ability to provide model support in the synchronize view would be a natural byproduct of several of the requirements discussed above. To summarize, what would be required is:
These are all included as requirements for previously mentioned operations. The only additional requirement for Synchronize view integration is that the synchronization state display must keep itself up-to-date with local file system changes and remote changes. The synchronize view already has infrastructure for this at the file level which the model provider could use to ensure that the model elements in the view were kept up-to-date.
This section presents the requirements on various parties for this proposal. The parties we consider are the Eclipse Platform, Model Providers and Repository Providers.
The 3.2 Eclipse Platform release schedule is:
The Platform work items in this proposal and their target availability dates are:
Target dates are given for all items, but these may be subject to change, especially for those items currently under investigation. For Remote Discovery, we are too early in our investigation to commit to a delivery date.
The model providers will need to do the following work to make full use of the support outlined in this proposal.
- adapt their model elements to ResourceMapping.
- a ModelProvider for determining team operation participation.
- an IResourceMappingMerger for performing model level merges.
- a NavigatorContentExtension for the Common Navigator.

The model can choose whether to provide any, some or all of the above facilities. If they do not, a suitable resource-based default implementation will be used.
Repository providers will need to provide the following:
- a RemoteResourceMappingContext that allows the model to view the ancestor or remote state of the repository.
- an ISynchronizationContext that allows the model to query the synchronization state of the local resources with respect to the ancestor and remote repository state.
- an IMergeContext which supports programmatic invocation of merge operations.

The repository provider can decide which model provider facilities to make use of, and using only a subset may reduce the amount of work the repository provider must do. However, achieving rich integration requires the repository provider to implement everything.
Here are some open issues and questions:
Here are some assumptions we have made or limitations that may exist.
Changes in Version 0.3
Changes in Version 0.2