Friday, April 3, 2009

Eclipse is a Product and that’s a Good Thing (tm)

I know that Bjorn is just trying to stimulate the discussion and challenge the status quo with his recent blog posts. That’s a good thing. I do find myself disagreeing completely with his latest proposal to stop distribution of binaries from Eclipse.org and instead shift the responsibility to various member companies. Basically, we would go from the Firefox model to the Linux model.

Here is a quote from Bjorn’s blog post responding to some of the negative comments about his proposal:

Changing Things Will Kill Eclipse. I just don't see this. For example, The Linux Foundation doesn't distribute a single binary and yet Linux is so popular that it is scaring Microsoft. Wayne also pointed this out.

The important question to ask here is: could Linux have been much more successful if there was a single canonical distribution provided by the Linux Foundation? Yes, Linux is getting more popular, but not nearly as fast as people would have liked, and its adoption curve against competition is not nearly as good as the adoption curve for other major open source projects that follow the product model. Why is that? The dozens of different Linux distros fragment the market, create confusion for new users (you have to make a choice between distros that are potentially radically different when you don’t yet have a clue), create barriers to skill transition (just because you’ve learned how to use Linux on your machine doesn’t mean that you will be able to use Linux on a friend’s machine), and make it significantly more expensive for vendors to deliver new software for the platform (what works on one distro may not work on another). Now compare that to Firefox. It is a much younger project than Linux, but it has already managed to make significantly more progress against competition than Linux. This is not an accident. You don’t have to take my word for it; a number of people have written about these problems with Linux and how they stand in the way of its growth. Why would we want to emulate the Linux model?

Platform alone is not a good strategy. No matter how good the platform is, companies and individuals will only use the platform if it enables them to reach a significant user base. It takes a product to do that. Not dozens of different products that confuse the users and make it more difficult to build on the platform, but a single trusted canonical product. Eclipse as a Product helps ensure success of Eclipse as a Platform.

Ultimately, what is the problem that we would be trying to solve by stopping distribution of binaries at Eclipse.org? Bjorn makes the argument that the Eclipse community delivers such a poor quality product and that users are having such a hard time receiving adequate support on the forums that we need to do something drastic to address the problem. How do we evaluate this argument? You cannot look at forum posts alone. The voices of a few disgruntled individuals drown out the opinions of thousands of satisfied users. After all, people only go to forums when they have problems. I would look at Eclipse adoption curves instead as a true measure of user satisfaction. A product that has significant quality problems would not keep growing. The growth would stall and we should see adoption numbers going down. We are not seeing that with Eclipse. The evidence just does not back up Bjorn’s argument.

I do happen to agree that there is more that we could do to help users get better support through paid channels, but we do not need to resort to drastic measures like what Bjorn is proposing. The harm from going forward with this proposal would be far greater than potential benefits.

Sunday, March 1, 2009

Count me out of the p2 fan club

I don’t make a habit of ranting about technology, but p2 has been driving me up the wall. The old update manager may not have been perfect, but at least it didn’t have the bad habit of preventing installation cases that should work from working.

So we are putting the finishing touches on a new version of Oracle Enterprise Pack for Eclipse (OEPE) and it’s time to test various installation scenarios. Eclipse Ganymede SR2 was also just released last week, so we are verifying compatibility with it. One of the basic installation scenarios that we are testing starts out with an all-in-one kit that includes Eclipse Ganymede GA with the previous version of OEPE. The second step in the scenario is to update all of the Eclipse components in that installation. The new version of OEPE requires at least SR1. I let Eclipse search for updates and install everything that it finds. That works. Presumably at this point, the installation should be equivalent to a fresh Eclipse Ganymede SR2 install. The final step is to install the new version of OEPE from a local update site. P2 thinks for a while, then says there are problems that will prevent the installation from working and refuses to go forward.

WTF? I know perfectly well that the plugins we are installing are compatible with Ganymede SR2. They’ve been built using SR2 as the target platform and they work just fine when simply added to the Eclipse installation. Now what? Taking a look at the problems reported by p2, I find about a hundred messages that look like the following.

Cannot find a solution where both "bundle org.eclipse.wst.validation [1.2.0,1.3.0)" and "bundle org.eclipse.wst.validation [1.1.0,1.2.0)" are satisfied.
Unsatisfied dependency: [ 1.0.204.v200708151945] requiredCapability: osgi.bundle/org.eclipse.core.resources/[3.2.0,3.4.0)
Unsatisfied dependency: [ 1.0.204.v200801222138] requiredCapability: osgi.bundle/org.eclipse.core.resources/[3.2.0,3.4.0)
Unsatisfied dependency: [ 1.0.204.v200711140435] requiredCapability: osgi.bundle/org.eclipse.core.resources/[3.2.0,3.4.0)
Unsatisfied dependency: [org.eclipse.wst.command.env.ui 1.0.203.v200709052219] requiredCapability: osgi.bundle/org.eclipse.core.resources/[3.2.0,3.4.0)

These messages don’t make a whole lot of sense. None of them reference plugins that I am trying to install, and there is no hint in the messages regarding what actually caused the problem. I looked at one of them in detail to make sure I wasn’t missing something obvious. The second message says that a dependency of a plugin cannot be found. That makes sense, since it’s a very old version of the plugin that’s not compatible with Ganymede. The question is why is p2 trying to resolve that plugin at all? I look at my installation and I see versions 1.0.304.v200805140230 and 1.0.306.v200810082309. That makes sense. Those versions correspond to what shipped with Ganymede GA and Ganymede SR2. I do not find version 1.0.204.v200708151945 that p2 is complaining about anywhere.

At this point, I gave up trying to make sense of the problem messages and proceeded to blindly try various changes to the way the OEPE update site is constructed to see if I would get a different result. Two alternatives I tried made this scenario work: (a) removing all version constraints from plugin dependencies and (b) reverting to the old-style update site with a site.xml file and no p2 metadata. We still need to do some more testing, but we probably will go with (b) and give up on p2-enabling our update site.
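
For reference, the old-style update site format is just a site.xml listing the features on the site, with no p2 metadata alongside it. A minimal sketch (the feature id, version, and category names here are placeholders, not the actual OEPE site contents):

```xml
<site>
   <feature url="features/com.example.feature_1.0.0.jar"
            id="com.example.feature" version="1.0.0">
      <category name="examples"/>
   </feature>
   <category-def name="examples" label="Example Tools"/>
</site>
```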

I have had reservations about p2 ever since the poor way in which it was rolled out roughly a year ago, but after a year of fighting with it and this recent experience, I can honestly say (without an ounce of exaggeration) that p2 is the worst regression ever introduced into the Eclipse Platform. I understand the problems that p2 is supposed to solve, but there is just no excuse for destroying the most basic of core scenarios in the process. If it wasn’t ready for Ganymede, it should have stayed as an incubator for a while longer.

Thursday, February 5, 2009

Field-level key bindings

Sometimes you need to create key bindings in Eclipse which are scoped to just a single control. I had one such case today. I have a table cell editor that’s based on the TextCellEditor, but adds a small graphical browse button. It works great as long as you are using a mouse, but I needed to make the browsing function keyboard accessible for people with disabilities.

Eclipse has a nice command framework that lets you define commands in an abstract sense, place them in contexts, define key bindings, and finally associate handlers to do the actual work. Finding an example that puts all of these concepts to work together in a particular way can be challenging, so I thought I would share my solution to the above problem and explain some of this API along the way.

The first step is to define the command. A command is an operation that a user can perform, but we don’t actually specify how to perform that operation when defining the command. That comes later when we add a handler. Every command must belong to a category. Here we define a category as well. You will typically want to create a category for each broad functional area to make it easier for users to find and manage your commands in the preferences.

<extension point="org.eclipse.ui.commands">
    <category
        id="my.category"
        name="My Category"/>
    <command
        id="my.browse.command"
        categoryId="my.category"
        name="Browse"/>
</extension>

The next step is to define the context. The context controls what commands are available via key bindings based on where in the workbench the user is working. Typically views and editors define contexts, but there is nothing stopping you from defining one that is more focused. In this example, we will create a context for fields with browsing capability.

<extension point="org.eclipse.ui.contexts">
    <context
        id="my.browseable.field.context"
        parentId="org.eclipse.ui.contexts.window"
        name="In Browseable Field"/>
</extension>

The final declarative step is to define the key binding. The following assigns Ctrl+L to the browse command in the browseable field context.

<extension point="org.eclipse.ui.bindings">
    <key
        sequence="CTRL+L"
        commandId="my.browse.command"
        contextId="my.browseable.field.context"
        schemeId="org.eclipse.ui.defaultAcceleratorConfiguration"/>
</extension>

And now for the final bit of magic… The following function brings it all together by enabling the browseable field context and associating a handler with the browse command when the specified text field gains focus. When the focus is lost, the context and the handler are deactivated.

public static void addBrowseKeyBinding( final Text textField,
                                        final Runnable browseOperation )
{
    // The handler simply delegates to the supplied browse operation.
    final IHandler browseCommandHandler = new AbstractHandler()
    {
        public Object execute( final ExecutionEvent event )
        {
            browseOperation.run();
            return null;
        }
    };

    final IWorkbench workbench = PlatformUI.getWorkbench();

    final IHandlerService handlerService
        = (IHandlerService) workbench.getService( IHandlerService.class );

    final IContextService contextService
        = (IContextService) workbench.getService( IContextService.class );

    // Single-element arrays let the focus listener pass the activation
    // tokens from focusGained to focusLost.
    final IHandlerActivation[] handlerActivationRef = new IHandlerActivation[ 1 ];
    final IContextActivation[] contextActivationRef = new IContextActivation[ 1 ];

    textField.addFocusListener
    (
        new FocusListener()
        {
            public void focusGained( final FocusEvent event )
            {
                final IHandlerActivation handlerActivation
                    = handlerService.activateHandler( "my.browse.command", browseCommandHandler );

                handlerActivationRef[ 0 ] = handlerActivation;

                final IContextActivation contextActivation
                    = contextService.activateContext( "my.browseable.field.context" );

                contextActivationRef[ 0 ] = contextActivation;
            }

            public void focusLost( final FocusEvent event )
            {
                handlerService.deactivateHandler( handlerActivationRef[ 0 ] );
                contextService.deactivateContext( contextActivationRef[ 0 ] );
            }
        }
    );
}

Sunday, February 1, 2009

Better way to manage dependency version ranges

OSGi provides an extremely powerful and precise mechanism for controlling acceptable version ranges when specifying a dependency on bundles or packages. In theory (as described by policies of various projects at Eclipse), the developer would take into account his plugin’s API and behavior needs, cross-reference that with version information about the bundle in question, and carefully craft the version range in the dependency declaration to accurately reflect his plugin’s actual needs while leaving the version range as open as possible to allow users maximum flexibility when composing an installation. Further, in theory, the developer should be continuously aware of the dependency version ranges specified in his product’s various plugins and how they correlate to the functionality exposed by those plugins. As development progresses, the developer is supposed to be able to spot when he has started depending on functionality that’s not available in the specified min version and reset the min version accordingly.

That’s the theory. In practice, I haven’t met a single developer with sufficient time on their hands or sufficient mental capacity to keep all of the necessary information in their head at all times in order to properly apply this policy. What I’ve seen happen most often is that the min version gets set based on whatever the plugin version happens to be at the time the dependency is first added. PDE helpfully inserts this information in your manifest by default. The max version then gets set by applying a team policy (typically by bumping up either the major or the minor version). This happens when a dependency is first introduced. As the code continues to evolve, the min version is typically not touched again. The max version is incremented when the build gets broken by a dependency bumping its version past a certain point. The cycle repeats.
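
As an illustration (the bundle and the numbers are chosen for the example, not taken from any particular product): if a dependency on org.eclipse.core.resources is first added while that bundle happens to be at version 3.4.0, the manifest typically ends up with something like

```
Require-Bundle: org.eclipse.core.resources;bundle-version="[3.4.0,4.0.0)"
```

with the lower bound frozen at whatever was current at the time and the upper bound produced by the bump-the-major policy.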

After many years of observing this situation, I am convinced that having developers manage version ranges creates a lot of overhead and does not yield satisfactory results no matter how hard people try.  To me, dependency version ranges are most useful when you have shipped your product in binary form. When taken collectively across a component (collection of bundles), they represent a statement of what your team is willing to support as a working configuration. Ideally, this information should be consistent across plugins and as accurate as possible.

Any time you talk about setting version ranges, you are considering three versions:

  1. The version that you developed and did most of your testing with. I call this the “target version”. Typically, this is what you would list as recommended configuration in your documentation.
  2. The minimum version that you are willing to support. The level of testing you can afford to allocate to this version is bound to be less than what you would allocate for the target version, so there is a certain amount of risk that an undetected issue is going to slip through. The further back you go from the target version when setting the minimum version, the greater your risk.
  3. The maximum version that you are willing to support. Since this version will typically not exist at the time of your ship date, setting this version involves an educated guess based on an understanding of what policies your dependencies use when incrementing their versions and the degree to which you are relying on undocumented (internal) code and behaviors. The spread between the target version and the maximum version is where your highest risk lies. On one hand, you’d like to ensure the long viability of your release in the field. On the other hand, the further out you go, the greater the risk that your product will not work and make a liar out of you in the eyes of your users.

Because getting the above version decisions right and consistent across a component is extremely important, it is not a good idea for individual developers to be making these decisions on a plugin-by-plugin basis. In an Open Source environment, this should be a component-wide decision made collectively by the committers. In a commercial environment, this decision is often made higher up in the organization based on availability of resources and target user base considerations.

When the overall decision is made, it is typically expressed in broad terms. For instance… “this version will ship on Ganymede SR1, but should work with all versions of Ganymede starting with GA”. It is then up to developers to translate that requirement into version ranges in the manifest.

That’s a ton of tedious manual work with lots of room for mistakes. In other words, a perfect candidate for automation. A few years ago, I wrote a set of two custom Ant tasks to automate this process. The first task reads an Eclipse installation and produces an inventory file that lists the id and version of every bundle found. The second task takes as input an inventory file representing the minimum platform, an inventory file representing the target platform, and a policy for setting the maximum versions. For every dependency, the task looks up the version from the minimum platform inventory. That becomes the left-hand side of the version range. It then looks up the bundle version in the target platform inventory and applies the policy function to it. Here are some examples of policy functions: “x.y.z -> x+1.0.0”, “x.y.z -> x.y+1.0”, or the extremely conservative “x.y.z -> x.y.z+1”. You can set different policies for different plugins or components based on what you know of their versioning conventions. The version returned by the policy function becomes the right-hand side of the version range.
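
The tasks themselves aren’t shown here, but the policy step can be sketched in plain Java. This is a minimal sketch, not the actual implementation; the class and method names are hypothetical:

```java
// Hypothetical sketch of the max-version policy step described above.
public final class VersionRangePolicy
{
    // Applies a policy to the target platform version to produce the
    // exclusive upper bound of the range. Supported policies correspond
    // to the examples in the text: "major+1" (x.y.z -> x+1.0.0),
    // "minor+1" (x.y.z -> x.y+1.0) and the conservative "micro+1"
    // (x.y.z -> x.y.z+1). Qualifier segments past the micro are ignored.
    public static String applyPolicy( final String targetVersion, final String policy )
    {
        final String[] parts = targetVersion.split( "\\." );
        final int major = Integer.parseInt( parts[ 0 ] );
        final int minor = Integer.parseInt( parts[ 1 ] );

        if( policy.equals( "major+1" ) )
        {
            return ( major + 1 ) + ".0.0";
        }
        else if( policy.equals( "minor+1" ) )
        {
            return major + "." + ( minor + 1 ) + ".0";
        }
        else
        {
            final int micro = Integer.parseInt( parts[ 2 ] );
            return major + "." + minor + "." + ( micro + 1 );
        }
    }

    // Combines the minimum platform version (inclusive lower bound) with
    // the policy-derived maximum (exclusive upper bound) into an OSGi
    // version range string ready to be spliced into a manifest.
    public static String toRange( final String minVersion,
                                  final String targetVersion,
                                  final String policy )
    {
        return "[" + minVersion + "," + applyPolicy( targetVersion, policy ) + ")";
    }
}
```

For example, with a minimum platform version of 3.4.0, a target platform version of 3.4.2, and the "major+1" policy, this would produce the range [3.4.0,4.0.0).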

We have been using these two tasks to automate and improve the quality of our version ranges for several releases of Eclipse tooling products at BEA and now at Oracle. Developers don’t set the versions on the dependencies specified in the bundle manifests stored in the source repository. At the end of every build, a process runs that splices version ranges into the manifests just prior to packaging the bundles for distribution. The target inventory is always generated on the fly based on whatever the product is building against. The minimum platform inventory is generated once when the minimum platform decision is made. The inventory is then stored in the source repository.

This has been an extremely useful process improvement for us. Not only do we have more confidence in the version ranges encoded in our product distributions, but it takes significantly less work for developers to manage all of this. The developers never have to think about dependency versions during the normal course of development, and integrating new versions of dependencies takes less work (since version ranges in the manifests don’t have to be fixed manually to get the build to work).

Saturday, January 31, 2009

Ant Snippets : expand-to-single-dir

I try to feign ignorance about anything Ant-related, but it hasn't been working very well for me. It seems that no matter which way I turn, I am elbows deep in some Ant-based project. I guess I just like automating the tedious aspects of software engineering and I hate shell scripts even more than I dislike Ant. So I decided to do something very evil. I figure if I start spreading some of the knowledge around, other people will pick up some of these automation tasks and I can take a break from Ant for a while.

First, the most powerful tool in your Ant arsenal is the Ant-Contrib package from SourceForge. Pure Ant is declarative in nature and lacks basic code flow constructs such as loops and if statements that most engineers live by. This is intentional. The core Ant premise is that you should describe what you want to accomplish rather than spelling out how to accomplish it. That works for some people, but I am not one of those people. I take my Ant with a good dose of Ant-Contrib.

Now that we’ve got the preliminaries out of the way, I will share a little snippet of Ant that I wrote a while ago to help wrangle various zip-based distributions into usable form. One of the more common chores that you run into is taking a bunch of zip files in a specified directory and expanding them all into a single directory (overwriting files as necessary). In the context of Eclipse, this scenario comes up often when setting up target platforms from integration builds of various relevant projects or when packaging your Eclipse-based product for distribution (aka the all-in-one kit). It is nice to have a utility in your arsenal that can do this easily and without hard-coding the names of all the zips that need to be extracted.

<target name="expand-to-single-dir">
  <check-required-property name="src"/>
  <property name="dest" value="${src}"/>
  <property name="includes" value="*.zip"/>
  <property name="excludes" value=""/>
  <property name="strip.path.prefix" value=""/>
  <expand-to-single-dir
    src="${src}" dest="${dest}" includes="${includes}"
    excludes="${excludes}" strip-path-prefix="${strip.path.prefix}"/>
</target>

<macrodef name="expand-to-single-dir">
  <attribute name="src"/>
  <attribute name="dest"/>
  <attribute name="includes" default="*.zip"/>
  <attribute name="excludes" default=""/>
  <attribute name="strip-path-prefix" default=""/>
  <sequential>
    <mkdir dir="@{dest}"/>
    <for param="component.archive">
      <path>
        <fileset dir="@{src}" includes="@{includes}" excludes="@{excludes}"/>
      </path>
      <sequential>
        <unzip src="@{component.archive}" dest="@{dest}" overwrite="true">
          <mapper type="regexp" from="^@{strip-path-prefix}(.*)$$" to="\1"/>
        </unzip>
      </sequential>
    </for>
  </sequential>
</macrodef>

<macrodef name="check-required-property">
  <attribute name="name"/>
  <sequential>
    <if>
      <not><isset property="@{name}"/></not>
      <then>
        <fail message="Property '@{name}' must be specified."/>
      </then>
    </if>
  </sequential>
</macrodef>

The above snippet defines the expand-to-single-dir macro that you can easily use when writing your Ant targets, and it also defines a target with the same name. The target is useful as a stand-alone tool that can be invoked directly from the command line. Since I am an Eclipse developer these days, the second part of this snippet adds a convenience target that calls the above utility in a way that also strips the typically-unwanted leading “eclipse” directory that is present in all Eclipse distributions.

<target name="expand-eclipse-zips">
  <check-required-property name="src"/>
  <property name="dest" value="${src}"/>
  <expand-to-single-dir
    src="${src}" dest="${dest}" includes="*.zip" strip-path-prefix="eclipse/"/>
</target>