Open Source Tools for Java Deployment
by Bruno Souza and Edson Yanaga
Published May 2014
Step up your game on projects of any size.
Continuous deployment is a set of automation practices that reduces lead time, improves the reliability and quality of software releases, and cuts their overhead. Implementing continuous deployment requires some work, but it has a very positive impact on a project. By establishing a sound deployment pipeline and keeping all of your environments—from development to test to production—as similar as possible, you can drastically reduce risks in the development process and make innovation, experimentation, and sustained productivity easier to achieve.
Originally published in the May/June 2014 issue of Java Magazine.
Developers are usually aware of the basic building blocks for deployment: code repositories, such as Git and Subversion; and build tools, including Ant, Maven, and Gradle. But what other tools can help you step up?
In this article, we present seven open source tools that you can use right now to improve the deployment process on projects big or small. These tools are among the best and most-used tools in their areas; they attract developers who have created a large body of knowledge, plugins, and connectors that can be used in a wide range of situations and integrated with other tools and processes.
But more importantly, these tools can dramatically improve your deployments. They can empower your team to build better and more-innovative software in a less stressful environment.
#1 Jenkins
Start with Continuous Integration
Power and flexibility in Jenkins: The Build Pipeline Plugin makes it easy to chain Jenkins jobs to organize a deployment pipeline.
Originally released as the Hudson continuous integration (CI) server, Jenkins is the most active and most widely used CI server today. (For more on Hudson, check out “Mastering Binaries with Hudson, Maven, Git, Artifactory, and Bintray,” by Michael Hüttermann, in this issue.)
Jenkins is the cornerstone of automation in your project, no matter what language your project is written in. But with its origins—and popularity—in the Java community, Jenkins is particularly well suited for Java projects from desktop installers to remotely deployed web application archive (WAR) files to application servers in the cloud.
Because of its extensive plugin ecosystem, Jenkins can connect to diverse systems. With its flexible and powerful job definition mechanism, you can use any kind of build tool to automate every part of the development process. This functionality creates a system that can grab information from multiple sources, run the build steps to create nearly any type of application you can think of, and connect to other target systems to deploy the application to whatever infrastructure is needed.
To build a good deployment process, you need to grab a few of those plugins. Here are some that can’t be left out of any deployment environment:
- Build Pipeline Plugin. The Jenkins delivery approach involves the chaining of related jobs through build triggers. This flexible functionality lets you create sophisticated pipelines, but it can be difficult to see the pipeline’s “whole picture,” the relationships between jobs, and where in the pipeline the build is at a given moment. The Build Pipeline Plugin solves that problem by providing a nice overview of the build pipeline, so you can easily follow the build as it happens and even decide when things should be automatic or require a manual trigger.
- Parameterized Trigger Plugin. In a build pipeline, developers use the artifact that is generated as output from one job as the input for the next job in the pipeline. The Parameterized Trigger Plugin informs the next job which build result it must use to keep the pipeline moving.
- Copy Artifact Plugin. A complement to the Parameterized Trigger Plugin, the Copy Artifact Plugin uses the parameters received from the previous job to fetch that job’s resulting artifacts, which then become the starting point of the next job.
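Most of this wiring is configured through the Jenkins web interface, but jobs can also be triggered from scripts through the Jenkins CLI. As a minimal sketch (the server URL, job name, and parameter are illustrative, not from any real installation):
$ # Trigger the hypothetical parameterized job "app-deploy"
$ java -jar jenkins-cli.jar -s http://ci.example.com/ build app-deploy -p UPSTREAM_BUILD=42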
#2 Chef
Turn Your Infrastructure into Source Code
Chef is a provisioning automation framework that simplifies the configuration of your development or production environment—whether on-premises or cloud-based. It is a Ruby-based tool that does wonders deploying whatever infrastructure your Java application needs.
By using Chef, you can define a project’s infrastructure with Ruby scripts that are versioned within the project’s source code repository. Chef scripts are called recipes, and they are bundled in cookbooks. With scripts written in Ruby, you have a well-known, generic scripting language to automate infrastructure activities.
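As a minimal sketch of what a recipe looks like (the package and service names are illustrative, not taken from any particular cookbook):
# cookbooks/java-app/recipes/default.rb: install a JDK and keep Tomcat running
package 'openjdk-7-jdk'

service 'tomcat7' do
  action [:enable, :start]
end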
Any time you need to evolve the infrastructure, you can evolve the scripts and rebuild the full environment with the new definitions. By doing so, Chef replaces lots of documented (and undocumented) manual installations, configurations, and ad hoc decisions, and it creates clean, versioned, automated steps to generate the infrastructure. This powerful concept allows the infrastructure definition to evolve with the application and promotes team interaction. Having a common tool, a common repository, and a clear deployment process goes a long way toward integrating the development and operations teams.
Chef handles all kinds of infrastructure components: from your operating system and the software that needs to be installed to any configuration you need to apply, such as users and IP addresses. But Chef can go much further if needed, handling firewalls, network devices, multiple servers, cloud environments, and other components that form your infrastructure. Besides building your application environment, Chef includes a client/server architecture that lets you centrally manage patches and software upgrades on multiple servers.
The easiest way to start using Chef is with its chef-solo client version, which simplifies environment provisioning so you can build the necessary infrastructure for your project with a single command. After you have your cookbooks, you can use knife, Chef’s command-line interface (CLI), extended with the knife-solo plugin, to prepare a server by installing chef-solo on it, as shown in Listing 1. Then you can automatically provision the server from the cookbooks, as shown in Listing 2.
$ knife solo prepare host_username@host
Listing 1
$ knife solo cook host_username@host
Listing 2
#3 Vagrant
Reproduce Development Environments
Of all the tools mentioned in this article, Vagrant is probably the least concerned with deployment per se. Vagrant is focused on the developer, but it bridges development and production, helping to reduce inherent risk by minimizing discrepancies between the different environments of the deployment process.
A common problem developers have is constructing a local development environment in which to run the application they’re building. Target systems are becoming more sophisticated and complex, with multiple web and application servers, databases, library dependencies, queues, service integration frameworks, caches, and load balancers, among other elements. This sophistication makes it hard to maintain development and test environments that closely resemble the final production environment, which in turn makes it difficult to create a reliable build-test-deploy pipeline.
Vagrant generates virtual development environments from a text file definition called the Vagrantfile. You can have this definition checked in with the source code of your project, and every developer can then use Vagrant by running a single command to provision a fully functional development environment identical to everyone else’s. Locally, this environment is created as a virtual machine running within Oracle VM VirtualBox, Oracle’s open source virtualization product.
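As a minimal sketch, a Vagrantfile can be as short as this (the box name and forwarded port are illustrative):
# Vagrantfile: define the base box and expose the application port on the host
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end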
After you have a Vagrantfile configured, you run a single command:
$ vagrant up
Here’s how Vagrant meets deployment: your CI server can use the Vagrantfile to build virtual machines for your test and quality assurance (QA) environments. Then later—especially if Vagrant is integrated with Chef, Packer, or Docker—the Vagrantfile forms the basis for your continuous deployment process to build your final production environment.
At first look, Vagrant might seem similar to Chef, but in fact, they work great together. Chef helps you provision your infrastructure and build your systems from development to production. Vagrant, by contrast, helps you create provisioned virtual machines, and it can use Chef to do the actual provisioning. Even better, Vagrant works amazingly well with Chef’s chef-solo client to build development and test environments: Chef defines your infrastructure, and Vagrant gives you a development environment that’s similar to the target production system. All environments—development, test, QA, and production—can evolve together from the same Chef description, which is versioned, tagged, and kept in your code repository.
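For example, a Vagrantfile can delegate provisioning to chef-solo with a short block like this sketch (the cookbooks path and recipe name are assumptions carried over from the earlier Chef example):
# Inside the Vagrant.configure block: provision the VM with chef-solo
config.vm.provision "chef_solo" do |chef|
  chef.cookbooks_path = "cookbooks"
  chef.add_recipe "java-app"
end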
#4 Packer
Generate Images for Multiple Environments
It’s all in the image: Packer bundles the environment and the application, producing a ready-to-run virtual image appliance.
In a “normal deployment,” you provision, or prepare, your systems for operation. Starting from a base system, you install all the needed software until you have a fully functional environment. Tools such as Chef help you automate the provisioning process, making it repeatable and easy to evolve.
However, virtualization and cloud computing changed normal deployment. Today, your “base system,” which is usually encapsulated into a virtual image, can be prepared with everything you need, all the way to the application level and even the data. What we used to call provisioning could take some time, but now it’s simply starting a virtual machine and can be accomplished in seconds with everything ready to go.
Building new virtual images is a painful process. Because of that, many teams keep large portions of their solution outside the image. Then, after the virtual image is instantiated, they run extra processes to update the application from the repository, download data, and adjust configurations. That way, the base image stays reasonably stable over the course of many deployments. This process is good, but it complicates the evolution of the environment. It can create weird failures when the application is deployed in an image that doesn’t have the latest infrastructure improvements to support it.
This is where Packer comes in. Packer is a powerful command-line utility that automatically generates images for different providers (Oracle VM VirtualBox, VMware, AWS, DigitalOcean, Docker, and so on). It can run at the end of your CI process, and it reuses your Chef provisioning. The result of Packer’s execution—which can take some time, depending on your provisioning needs—is a fully functional, fully configured virtual machine image that’s ready to start.
Because Packer generates the image automatically, you can run it at every release build and include in the image the latest version of your application, already deployed in the configured application server, with the data that needs to be there. You just need to start an instance and your system is up and running in a few seconds. It’s ready to increase the pool in your load balancer, replace a failing server, or take over the job from the server that was running the previous version.
All you need is a Packer template with your image definition, and then you run the following command:
$ packer build template.json
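As a hedged sketch, a skeletal template.json that builds a container image and reuses a Chef cookbook for provisioning might look like the following (the builder choice, base image, cookbook path, and recipe name are all illustrative, and real templates typically need more settings):
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:12.04",
    "commit": true
  }],
  "provisioners": [{
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks"],
    "run_list": ["recipe[java-app]"]
  }]
}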
#5 Docker
Try Self-Sufficient and Portable Lightweight Containers
Creating ready-to-boot virtual machine images is a sophisticated way to use cloud computing so you can scale your application. But dealing with full stacks, complete operating systems, application servers, and the rest might be overkill. In many cases, all you want is to add one more application to your stack.
Platform-as-a-service (PaaS) solutions have been well received by developers, because they can add their application to a predefined stack with little overhead to worry about. Simply push the code, and it runs. But this approach might be too simple. It requires that you run your application in a shared environment inside a cloud provider, with whatever stack is provided, and sometimes with less flexibility than you’d like.
This is where Docker comes in. Built on top of Linux Containers (LXC), Docker creates portable, self-sufficient containers that include your application. Docker can be run locally on your computer or scaled to cloud deployments. By encapsulating everything your application needs in the container abstraction, Docker makes deployment easy. At the same time, because Docker deals with containers, your application doesn’t need to carry all the baggage of a full operating system installation. Docker containers can be extremely lightweight, so you can run many containers—even on small systems.
You can configure a Docker image to include everything you need, as a PaaS provider would. And then, each container can host your application, with only what needs to be changed from application to application. Containers are easy to start, stop, manage, and evolve. They are also isolated, and applications can have their own version of libraries.
Docker can be used both to create a PaaS-like environment and to make your CI process easier by using containers to manage test and QA environments with the same container definition that’s put into production later. Docker also integrates well with Chef and Packer, and you can use those tools to generate a container from your application automatically. This same container can be used locally for builds and tests; it can be run by your CI server for integration and deployment; and it can scale to virtual machines, local servers, and private or public clouds to run your production.
After you have a Dockerfile configured, you can easily start new containers by running the following command:
$ docker run image/name command
For example, Quinten Krijger has an Apache Tomcat image that you can download by running the following command:
$ docker pull quintenk/tomcat7
And then you can use the following command to run the image as a daemon:
$ docker run -d quintenk/tomcat7
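To ship your own application on top of such an image, a short Dockerfile is enough. Here’s a sketch; the WAR name is illustrative, and the webapps path assumes the base image uses Ubuntu’s standard Tomcat 7 layout:
# Dockerfile: layer the application WAR onto the Tomcat base image
FROM quintenk/tomcat7
ADD target/myapp.war /var/lib/tomcat7/webapps/myapp.war
You would then build the image with docker build -t myorg/myapp . and start it with the same docker run -d command shown above.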
#6 Flyway
Database Migrations that Follow Your Applications
Powerful database migrations: Flyway can use the JDBC API to handle schema migrations.
We can’t talk about deployment without mentioning databases. Developers understand how to upgrade an application, maintain compatibility with older versions of APIs, and deprecate functionality. But what about the database? The hardest part of an automated deployment can be underdocumented SQL scripts that need to be run “just before” production, differences between the code and the schema, and evolution and rollback situations.
Flyway shines in these situations. Developed in Java and focused on the needs of Java developers, Flyway is a database migration framework that can migrate your database schema from any version to the latest version. Flyway keeps the database definition of your application safe inside your version control system and turns your database into a clear, precise, and versioned set of instructions that can be re-created or migrated whenever needed.
Flyway has a CLI utility and also integrates with build tools such as Maven and Ant. It can run as part of your CI process to upgrade or create a database before running tests or going into QA. Flyway can run SQL scripts or—if you have sophisticated database needs—specific Java code to handle migrations through the JDBC API. Through its API, you can also manage migrations from inside your application to implement database management functionality. If you are distributing an application, Flyway can handle the database creation or migration on the first run.
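Embedding Flyway in an application takes only a few lines. This sketch uses the Flyway API as it existed around the 2.x and 3.x releases; the JDBC URL and credentials are placeholders:
// import org.flywaydb.core.Flyway (com.googlecode.flyway.core.Flyway before Flyway 3.0)
// Run any pending migrations at application startup
Flyway flyway = new Flyway();
flyway.setDataSource("jdbc:h2:file:./target/appdb", "sa", "");
flyway.migrate(); // applies pending migrations, creating the metadata table on first run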
By turning your database schemas into code in your repository, Flyway provides visibility into the database and helps the whole team participate in its evolution. This integration pushes everyone to work together toward continuous deployment.
With Flyway configured, you can create or migrate your databases using this CLI command:
$ flyway migrate
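A migration itself is just a SQL file that follows Flyway’s versioned naming convention. A sketch, in a file named V1__Create_person_table.sql (the table is illustrative):
-- V1__Create_person_table.sql: the first versioned migration
CREATE TABLE person (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);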
#7 Rundeck
Support the Ops in DevOps
Managing multiple nodes: Rundeck runs commands in several nodes, and aggregates the results.
That brings us to DevOps, a term that means communication, collaboration, and integration between developers and IT operations professionals. As developers, we know how good tools promote collaboration. When we save infrastructure definitions in our repositories or we automate infrastructure creation in a way that benefits both developers and IT operations professionals, we promote DevOps strategies in our projects. Each of these seven tools helps the collaboration between development and operations.
Collaboration is important so we can deliver the software products and services that customers expect. But each group has a different view of what software and services are. While developers think about code, bugs, libraries, and dependencies, IT operations professionals consider servers, nodes, security, and auditing. Developers worry about functionality; operations professionals worry about availability. Developers think that automation means building the source code and generating artifacts for deployment. Operations professionals might call automation the discovery of new nodes and the dispatching of commands to multiple servers. These different views focus on the same servers and the same services, and they’re both responsible for getting working software to the end user.
Rundeck, a Java-based web application, helps the integration and collaboration of these two worldviews by functioning as the operations console for your environment. Rundeck knows about the details of the environment, such as the nodes you have and the services you’re running. It can execute actions in the environment, by either running predefined jobs or executing ad hoc commands on one or more nodes.
Rundeck integrates beautifully with the other tools in this set, and it retrieves detailed environment information from Chef servers or collects node information directly from your cloud provider—so you have an updated view, even in a dynamic cloud environment. Rundeck then executes actions on a single node or on hundreds of nodes. It does that either directly through Secure Shell (SSH) or by integrating with Chef or other tools, and by defining filters to decide what to run and where. The results and outputs are then aggregated, helping make sense of what is happening.
A secure web dashboard provides access control to Rundeck jobs. Jobs are also exposed as API calls that can be invoked from scripts through a CLI or directly from Jenkins jobs through a plugin that further simplifies the integration. This allows operations professionals to define actions that developers can run, creating self-service, on-demand IT services. IT procedures are essentially automated, encapsulated, and made easily available for the build process to call into. Operations professionals remain part of the process and define what can be done, but they don’t become a bottleneck, because the services they define can be run by developers when needed.
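Triggering one of these predefined jobs from a script or from a Jenkins build boils down to a single CLI call. As a hedged sketch, assuming a job named deploy stored in a group named web:
$ run -j 'web/deploy'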
The Rundeck CLI is a powerful management tool. If you want to copy a script to all your UNIX machines and run it, you can simply run the command shown in Listing 3.
$ dispatch -F 'os-family: unix' -s myScript.sh
Listing 3
Conclusion
The tools described in this article are all open source, and you can adopt them in your project today. They can be used to deploy all kinds of applications and are particularly well suited to deploying Java applications and services.
Try these tools and experience how they help you to move toward a full deployment pipeline for your Java projects.
Go: One More Tool
ThoughtWorks has just released Go, a continuous integration and release management server, as an open source, Apache-licensed project. Go is a Java-based server that plays a role similar to that of Jenkins, but the two embody slightly different concepts. Constructed around the full build pipeline rather than around independent jobs, Go focuses on the code-build-test-deploy process. This structure makes continuous deployment a main focus in your project and helps the team think about it from the start. It’s still too early to say whether developers will accept Go and what place it will claim in this market. However, it’s a great concept and a fully functional tool that can easily be integrated with the seven other tools described in this article.
Ready-to-run deployment: Go’s built-in pipeline clearly shows successful, failed, and manually triggered builds.
Bruno Souza is a Java developer and open source evangelist at Summa Technologies and a cloud expert at ToolsCloud. He is the founder and coordinator of SouJava—one of the world’s largest Java user groups—founder of the Worldwide Java User Groups Community, and director of the Open Source Initiative (OSI).
Edson Yanaga is a technical lead at Produtec Informática and a principal consultant at Ínsula Tecnologia. An open source user, advocate, and developer, his expertise encompasses Java, application lifecycle management, cloud computing, DevOps, and software craftsmanship. He is a frequent speaker at international conferences.