ADTRAN
TECHNOLOGY BLOG

Thoughts on code development, access technologies, and telecommunication networks from developers within ADTRAN’s R&D organization.

Written by:
Nathan Alderson - @nathanalderson

Recently, this amazing video came across my Twitter feed. It shows the cutover process from one switching technology to another at an AT&T central office back in 1984. These cutovers were a delicate process because they involved a service outage during the transition, including emergency service. By the time this video was made, Western Electric had honed the process to a science and could complete a cutover in well under a minute. Seriously, you should watch the video. It involves actual giant wire cutters. And mustaches.

Of course, it was in that same year that the federally mandated breakup of the Bell System went into effect, in which AT&T was forced to divest itself of the regional carriers. Previously, Western Electric had been the sole supplier of all components of the public telephone network—from long distance to the central office to the wiring and telephones in your own home. As a consequence of the 1984 breakup, the regional carriers were no longer required to purchase all of their equipment from Western Electric. Incidentally, ADTRAN was founded in 1985.

These days, at ADTRAN, we have a different kind of cutover problem. We use Scala quite heavily in our Mosaic Cloud Platform product. With Scala, updates from one major version to the next (Scala uses an <epoch>.<major>.<minor> versioning scheme) are source compatible, but are not binary compatible. That means that Scala libraries must be compiled and published separately for each major version of Scala they wish to support. The convention is to indicate which version of Scala a library is compiled against by appending a suffix to the artifact name (for example, my-library_2.11 versus my-library_2.12).
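Without tooling support, this convention means hard-coding the suffix into every dependency declaration and editing each one when moving to a new Scala version. For example, a Gradle build targeting only Scala 2.11 might contain (the coordinates are real published artifacts; the snippet is just illustrative):

```groovy
// Hard-coded for Scala 2.11; every line like this must change for 2.12.
compile "org.scala-lang:scala-library:2.11.8"
compile "org.scala-lang.modules:scala-parser-combinators_2.11:1.0.4"
```

Note that scala-library itself carries no suffix—its version *is* the Scala version—while third-party libraries encode the target Scala version in the artifact name.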

Several years ago, when Scala 2.11 was newly released, we had only a handful of libraries and projects, so we made the decision to perform a wholesale cutover from Scala 2.10 to 2.11. We made a plan. We got out the giant wire cutters. We all grew mustaches. Even then, it was difficult to manage the transition. Things had to be ordered just right. There was even one sorta-circular dependency that had to be treated specially. It definitely took more than a minute. Actually, it took weeks to prepare and several days to execute.

A few months ago, in November 2016, Scala version 2.12.0 was released, followed closely by 2.12.1. Unlike before, we now have dozens of libraries and over a hundred microservices, and it is impractical to force everyone to cut over together. Instead, we wanted to do what many of our upstream libraries do: publish both 2.11 and 2.12 versions of our libraries from the same source code.

But that led to a problem. Most of those upstream Scala libraries that we use are built with sbt, which makes it quite easy to accomplish this. Our build infrastructure is all in Gradle, though, and searching around the internet didn't really reveal any simple solutions. We've been very happy with Gradle as our build tool, and this felt like a major hole in the Gradle+Scala ecosystem to me. So I built a plugin, and we're releasing it as open source.

The plugin allows a project to build against multiple versions of Scala. You declare a list of Scala versions via a project property (e.g. scalaVersions = 2.12.1, 2.11.8), and then declare dependencies like this:

compile "org.scala-lang:scala-library:%scala-version%"
compile "org.scala-lang.modules:scala-parser-combinators_%%:1.0.4"

Notice the placeholder values %scala-version% and _%%. Those will get filled in by the plugin with the appropriate Scala version and suffix, respectively. Now when you run Gradle tasks, they'll run once for each Scala version you listed. For example, if I define a task as follows:

task myTask() {
    doLast { println "myTask for Scala version $scalaVersion" }
}

And then call that task:

~/my-project$ ./gradlew myTask
:myTask
myTask for Scala version 2.12.1
:recurseWithScalaVersion_2.11.8
:my-project:myTask
myTask for Scala version 2.11.8

BUILD SUCCESSFUL

...you can see that myTask ran once for Scala version 2.12.1 and once for 2.11.8.

A few additional features are worth mentioning:

  • You can control which Scala versions get executed by default and override the default from the command line. This is useful if you only want to build one version routinely while developing, but want your continuous integration server, say, to build all versions.
  • Your jar file will have the appropriate suffix added (for example, my-project_2.12-1.2.3.jar).
  • Your POM file will have its dependencies set correctly (no _%% or %scala-version% placeholders present).
  • Multi-project builds are supported.
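As a sketch of how that configuration might look in gradle.properties (the property names `scalaVersions` and `defaultScalaVersions` follow the plugin's README conventions—check the README for the exact names your version supports):

```properties
# Versions this project can be built against
scalaVersions = 2.12.1, 2.11.8

# Build only 2.12.1 during routine development; a CI server can override
# this from the command line, e.g. -PscalaVersions=2.12.1,2.11.8
defaultScalaVersions = 2.12.1
```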

You can find the plugin source code on GitHub, and you can use the plugin via Maven Central or plugins.gradle.org. The README file has detailed instructions for adding and configuring the plugin for your project.

Technical Details

There were a number of technical challenges with getting the plugin to work properly. If you're interested in how the plugin works, please read on. Otherwise, feel free to skip this section.

This plugin is fundamentally built around using GradleBuild tasks to recursively run your build for each version of Scala that you specify. When the plugin is applied, it examines the build properties (typically specified in either gradle.properties or on the command line) to determine which versions of Scala should be targeted for this run. The first (or potentially only) version is run in the current build context. Then for any additional versions of Scala, a new GradleBuild task is created with a name like recurseWithScalaVersion_<version>. The tasks property of this new task is set to match the tasks that were provided on the command line.

Then, those new GradleBuild tasks are appended to the project.gradle.startParameter.taskNames list so that they will be executed as if they had been specified on the command line, too. We have to take care to pass along the parameters from the root build to each recursive build and also to ensure that we don't create an infinite recursion chain. Here's what the task-adding function looks like:

private void addTasks() {
    def recurseScalaVersions = determineScalaVersions()
    if (!project.ext.has("recursed") && !project.gradle.ext.has("recursionTaskAdded")) {
        def buildVersionTasks = recurseScalaVersions.collect { ver ->
            project.tasks.create("recurseWithScalaVersion_$ver", GradleBuild) {
                startParameter = project.gradle.startParameter.newInstance()
                startParameter.projectProperties["scalaVersion"] = ver
                startParameter.projectProperties["recursed"] = true
                tasks = project.gradle.startParameter.taskNames
            }
        }
        def tasksToAdd = buildVersionTasks.collect{ it.path }
        project.gradle.startParameter.taskNames += tasksToAdd
        project.gradle.ext.recursionTaskAdded = true
    }
}

This approach has the advantage of a fairly straightforward implementation, and it creates a really simple user interface: declare your Scala versions, then run tasks like you normally would.

Once I had that working, performing the dependency substitution was quite easy. The following code calls a function replaceScalaVersions() with a DependencyResolveDetails instance for each dependency before Gradle attempts to resolve it:

project.configurations.all { conf ->
    conf.resolutionStrategy.eachDependency { replaceScalaVersions(it) }
}

I did run into an unexpected issue when implementing replaceScalaVersions(). The basic goal is to inspect the requested dependency, substitute the placeholders, and then call useTarget() with the updated name/version. However, I found that calling useTarget() on project dependencies caused issues. There really shouldn't be a need to do any substitutions on these anyway, so a check was added to only call useTarget() if the target was actually changed in some way. Here's the full function:

private void replaceScalaVersions(DependencyResolveDetails details) {
    def newName = details.requested.name.replace(
            project.scalaMultiVersion.scalaSuffixPlaceholder,
            project.ext.scalaSuffix)
    def newVersion = details.requested.version.replace(
            project.scalaMultiVersion.scalaVersionPlaceholder,
            project.ext.scalaVersion)
    def newTarget = "$details.requested.group:$newName:$newVersion"
    if(newTarget != details.requested.toString()) {
        // unnecessarily calling `useTarget` seemed to cause problems in some cases,
        // particularly with `project(...)`-style dependencies.
        details.useTarget(newTarget)
    }
}
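The substitution itself is plain string replacement. As a stand-alone illustration in Python (outside Gradle; the suffix derivation here, "2.12.1" → "_2.12", is my assumption about how the plugin computes its scalaSuffix from the full version):

```python
def replace_scala_versions(requested: str, scala_version: str) -> str:
    """Fill in the plugin's placeholders for one requested dependency.

    `%scala-version%` becomes the full Scala version, and `_%%` becomes
    the artifact-name suffix (e.g. `_2.12` for Scala 2.12.x).
    """
    # Derive the suffix from the first two version components: "2.12.1" -> "_2.12"
    suffix = "_" + ".".join(scala_version.split(".")[:2])
    return (requested
            .replace("_%%", suffix)
            .replace("%scala-version%", scala_version))

print(replace_scala_versions(
    "org.scala-lang.modules:scala-parser-combinators_%%:1.0.4", "2.12.1"))
# org.scala-lang.modules:scala-parser-combinators_2.12:1.0.4
```

As in the Groovy version above, a dependency string without placeholders passes through unchanged, which is what makes the "only call useTarget() when something changed" check possible.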

The final challenge was getting the POM files right. Gradle writes the dependencies section of the POM file using the requested dependencies rather than the resolved dependencies. In this case, that meant that my POM files were being written with the _%% and %scala-version% placeholders, which made them unusable by other projects not using this plugin. Modifying the POM files correctly turned out to be quite challenging. Fortunately, I found the excellent nebula-publishing-plugin from the Nebula project by Netflix, which includes a plugin for putting resolved dependencies in your POM file. It didn't fit exactly what I needed, but I was able to adapt the code easily to fit my use case. In particular, their plugin only applies to projects using the maven-publish plugin, but most of our projects are still on the maven plugin. Here is a useful snippet I worked out which will apply a function (in this case, resolveMavenPomDependencies()) to the POM file generated by both plugins:

private void resolvePomDependencies() {
    project.afterEvaluate {
        // for projects using the maven plugin
        project.tasks.withType(Upload).collectMany {
            it.repositories.withType(MavenResolver)
        }.each { resolver ->
            def poms = resolver.activePomFilters.collect { filter ->
                (filter.name == "default") ? resolver.pom : resolver.pom(filter.name)
            }
            poms.each { pom -> pom.withXml { resolveMavenPomDependencies(it) } }
        }
        // for projects using the maven-publish plugin
        if (project.plugins.hasPlugin("maven-publish")) {
            project.publishing.publications.withType(MavenPublication) {
                pom.withXml { resolveMavenPomDependencies(it) }
                artifactId += project.ext.scalaSuffix
            }
        }
    }
}

You'll also notice that for the maven-publish plugin, I had to update the artifactId to include the Scala version suffix. Otherwise, it defaults to the project name. The maven plugin, on the other hand, seems to use the name of the artifact, which the plugin already set elsewhere.

The actual implementation of resolveMavenPomDependencies() is rather lengthy, so it isn't included here, but you can find it on GitHub. It receives the POM file as an XmlProvider object and walks the dependencies, looking up each one in the resolved dependencies of the compile, runtime, and test configurations. If it finds a match, it rewrites that node in the XML accordingly.
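As a rough stand-alone sketch of that walk (Python's xml.etree standing in for Gradle's XmlProvider, and a hypothetical map from requested artifactIds to resolved coordinates standing in for the lookup into the resolved configurations):

```python
import xml.etree.ElementTree as ET

def resolve_pom_dependencies(pom_xml: str, resolved: dict) -> str:
    """Rewrite each <dependency> whose artifactId appears in `resolved`.

    `resolved` maps requested artifactIds (still containing placeholders)
    to their resolved (artifactId, version) pairs, mimicking a lookup into
    the resolved compile/runtime/test configurations.
    """
    root = ET.fromstring(pom_xml)
    for dep in root.iter("dependency"):
        artifact = dep.find("artifactId")
        version = dep.find("version")
        match = resolved.get(artifact.text)
        if match is not None:
            # Overwrite the placeholder coordinates with the resolved ones.
            artifact.text, version.text = match
    return ET.tostring(root, encoding="unicode")

pom = """<project><dependencies><dependency>
<groupId>org.scala-lang.modules</groupId>
<artifactId>scala-parser-combinators_%%</artifactId>
<version>1.0.4</version>
</dependency></dependencies></project>"""

print(resolve_pom_dependencies(
    pom, {"scala-parser-combinators_%%": ("scala-parser-combinators_2.12", "1.0.4")}))
```

The real plugin does considerably more (handling scopes, multiple configurations, and dependencies it cannot resolve), but the core shape—walk the XML, look up each requested dependency, rewrite the node on a match—is the same.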

Conclusion

We have been using this plugin internally at ADTRAN for some time. Hopefully, by releasing this as open source, other Gradle users in the Scala community will be able to more easily manage the transition from Scala 2.11 to Scala 2.12 and beyond. No giant wire cutters needed.

Feedback is more than welcome! Feel free to file issues or pull requests on the GitHub project or hit me up on Twitter.
