Wednesday, 18 July 2018

Step-by-Step High Availability with Docker and Java EE

Start from Java EE...


Reliable, scalable applications are difficult to develop. There are far too many concerns that developers need to take care of: security, session management, component distribution, modularization, dealing with databases, and so much more. This is where the Java EE platform shines. Java EE addresses those issues for large-scale, multitiered, scalable, reliable, and secure network applications.


Java EE is an integrated bundle of standard, defined Java APIs. The platform supports the development of a wide range of server-side application architectures. It also defines a standard packaging format. This allows the development of applications that are then deployed, moved, run, and managed in a portable way.

The "magic" of Java EE portability is inside the Java EE containers. Open source projects such as GlassFish, WildFly, and Apache TomEE are Java EE containers. So are proprietary servers such as Oracle WebLogic Server and IBM WebSphere, just to name a few. All those servers implement the APIs and support the WAR and EAR packaging of Java EE applications.

Java EE does the heavy lifting of creating the application. With it, all you need to do for a highly available application is install and configure a server, install and configure one of those Java EE containers, install load balancers in front, install databases, install this, and configure that... In other words, you "only" need to prepare the whole infrastructure underneath your application.

If you want to scale to multiple servers or migrate this infrastructure to another place, suddenly this whole Java EE thing looks a lot less portable.

Docker to the Rescue!


Java EE allows you to package your application in a specific format and just "run it." Docker, in a different way, also allows you to package your application and then just run it. Docker is an open source project that automates the deployment of the full stack an application needs. It does that by creating another kind of "container."

Docker as a project is just a few years old, but it integrates a whole bunch of technologies that have been around for quite some time. At the lower levels, Docker uses the operating system–level virtualization features of the Linux kernel (originally through Linux Containers, LXC) to create isolated process spaces in which to run applications. It also uses union file systems to build the file system in layers that can be shared across multiple instances. At the upper levels, Docker defines a package format that encapsulates the creation and deployment of the full stack needed by an application. That is probably Docker's most impactful innovation.

During a Docker build process, you create a Docker image, which is a portable representation of your application fully installed and configured. Docker's package format is so important that it quickly became the de facto standard. It is now supported by all relevant players in the industry.

With Docker, you build the complete stack for an application: from operating system configurations, Java EE containers, and even the Java EE application itself. This becomes a nice binary package, the Docker Image. You can then deploy the image inside a container, move it, run it, and manage it in a portable way.

The Importance of Portable Containers


Docker containers are the real, running instance of a Docker image. You can run, pause, move, and even migrate containers to another infrastructure. With Docker, you have a streamlined pipeline to build, test, deploy, and run your application.

You can manage containers in a standard way, regardless of their contents. This makes containers the catalyst for the current movement around DevOps. Developers (Dev) focus on the application development. They choose their stacks, the Java version, libraries, and much more. And they package all those things inside Docker containers. The Operations (Ops) team focuses on guaranteeing the needed infrastructure: security, performance, and management.

Having a clear interface between the Dev and Ops teams is what really allows the implementation of DevOps. Of course, there are other ways to define this interface, but containers are ahead of the pack.

Software Appliances


In Java EE, you have a clear separation between the application server and the application. So, it is a normal practice to freeze the application server and keep updating the application. You deploy new versions of the application to the same application server.

But modifying existing servers is a less effective practice for systems management. A much better solution is to construct immutable servers: servers that are never updated. To install a new version of the application, you deploy a whole new server, and when it is running fine, you destroy the old ones. Virtualization and cloud computing made this whole process much easier. Containers take it a step further.

By rebuilding your image every time you build a new version of the application, you end up with what is called a software appliance: an image that contains the full stack that is needed, including the latest version of your application already installed, configured, and ready to run. You can do this with virtual machine images too, but containers make the process much easier, not to mention a lot faster.

Putting It All Together


So, let's do the following tasks, step by step, in ten minutes:

1. Choose one of your Java EE applications to run on a TomEE server.
2. Create a software appliance.
3. Add a highly available configuration.
4. Run the application on immutable servers.
5. Add a load balancer.

Choose an Application

You can use any Java EE application, so decide which application you would like to use. Later, when we build the image, we will add your chosen application to the Docker image by deploying the EAR or WAR file to TomEE.

Containers can be used to automate the build and also the testing of your Java EE application, but this will need to be discussed in a future article.

Create the Appliance

To run your application, let's create the TomEE Docker image, and deploy the application to it.

One of the biggest advantages of Docker is its ability to promote reuse. You can build images on top of existing images. In our case, there is an official TomEE Docker image that we can simply use.

To this image, we will add your application and build a new image out of it; every time you build the application, we rebuild the Docker image. All we need for that is a simple Dockerfile, which you can get from the GitHub repository by cloning it:

git clone https://github.com/eldermoraes/ha-dockerjavaee.git

After you clone the repository, you'll find the following Dockerfile in the project directory that was created by the git clone command:

# Use TomEE official Docker image as a basis
FROM tomee:8-jre-1.7.2-webprofile

# Configure our server with high availability (HA) settings
ADD server.xml /usr/local/tomee/conf/server.xml

# Add some users to TomEE, so we can log in to the
# admin panel later to see the results
ADD tomcat-users.xml /usr/local/tomee/conf/tomcat-users.xml

# Now we add our application.
# This is the last step, so we can use Docker caching
# capabilities every time we re-create the container
# with a new version of the application

ARG WAR_FILE=warfile.war
ADD ${WAR_FILE} /usr/local/tomee/webapps/${WAR_FILE}

Note that the application is the last thing added to the Docker image. Docker will cache all the steps, so it will only need to rerun the last one when you update the application, making the build incredibly fast. This will make it easy to re-create the image in every build of the application. That's our application appliance.

With this Dockerfile, you can build your image. You can add this to your automated build process. We are doing that by hand here to make it simple:

docker build -t tomee-war --build-arg WAR_FILE=app_test.war .

We can then run our appliance:

docker run --name host1 -p 8080:8080 tomee-war

Add High Availability

If we need to scale the application, we can add more "servers" to it or, in our case, more containers. TomEE uses multicast discovery to find the members of the cluster, so we can run multiple containers and they will automatically join each other in a cluster. The example configuration is already in the server.xml file in the project. For more advanced setups, just alter server.xml; Docker will include the updated file in the image at the next build.
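As a sketch of what that clustering configuration involves: TomEE is built on Tomcat, and Tomcat's default cluster implementation already uses multicast for membership discovery. The snippet below illustrates the idea; it is an assumption about the shape of the configuration, not necessarily the exact contents of the project's server.xml:

```xml
<!-- Goes inside the <Engine> (or <Host>) element of server.xml.
     SimpleTcpCluster's defaults use multicast (address 228.0.0.4,
     port 45564) to discover the other members of the cluster. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```

With the defaults, every container that starts with this element joins the same cluster automatically, which is exactly what makes "just start more containers" work.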

So, in case load becomes an issue, we just start some more TomEE nodes:

docker run --name host2 -p 8081:8080 tomee-war
docker run --name host3 -p 8082:8080 tomee-war

Here, for learning purposes, we are just running multiple containers in a single Docker installation. That is somewhat useful for load balancing, but not for real HA, because the single machine is still a single point of failure. But you can start containers anywhere: you can run those same commands on other machines running Docker, and more-advanced services can handle containers across multiple machines (more about that later).

Run on Immutable Servers

Docker containers are ephemeral: anything modified inside a container simply disappears when you re-create it. So, treat them as immutable servers; you get that behavior for free!
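Treating containers as immutable servers turns redeployment into a rebuild-and-replace operation. The script below is a hypothetical rollout sketch (the version tag, port, and script itself are assumptions, not part of the project); it defaults to a dry run that only prints the docker commands, so you can read the flow without a Docker daemon:

```shell
#!/bin/sh
# Immutable-server rollout sketch for the tomee-war image built earlier.
# DOCKER defaults to "echo docker", so the commands are printed (dry run)
# instead of executed; set DOCKER=docker to run them for real.
DOCKER="${DOCKER:-echo docker}"
VERSION="v2"   # hypothetical tag for the new build

# 1. Bake a brand-new appliance containing the new WAR
$DOCKER build -t "tomee-war:$VERSION" --build-arg WAR_FILE=app_test.war .

# 2. Start the replacement on a free port; never modify the running host1
$DOCKER run --name "host1-$VERSION" -p 8083:8080 "tomee-war:$VERSION"

# 3. Only once the new container answers requests, destroy the old one
$DOCKER stop host1
$DOCKER rm host1
```

The old container is never patched or redeployed to; it is simply retired once its replacement is healthy.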

Add the Load Balancer

So far, so good. We have a cluster, but each one of those containers is responding in an isolated manner. We need a load balancer in front of our containers, to spread the requests to our multiple instances.

Docker comes again to the rescue. There are several premade Docker images that handle load balancing. And you can always create your own. For now, we'll use the excellent NGINX load balancer from Jason Wyatt, which is very easy to run. There's no need to build anything; there is just a simple configuration file. All you have to do is run the following command:

docker run --name loadbalancer -p 80:80 --link host1:host1 --link host2:host2 --link host3:host3 --env-file ./env.list jasonwyatt/nginx-loadbalancer

The env.list file should look like this:

# automatically created environment variables (docker links)
TOMCAT_1_PORT_8080_TCP_ADDR=host1
TOMCAT_2_PORT_8080_TCP_ADDR=host2
TOMCAT_3_PORT_8080_TCP_ADDR=host3

# special environment variables
TOMCAT_PATH=/app_test
TOMCAT_REMOTE_PORT=8080
TOMCAT_REMOTE_PATH=/app_test
TOMCAT_HOSTNAME=loadbalance

And that's it! We now have a fully functional TomEE cluster, running on immutable servers, behind a load balancer.

Next Steps


This was just a simple example, but it shows the power of using Docker containers to run Java EE applications. From this simple start, you can easily do a lot more to create an advanced environment, for example:
  • Create a build pipeline to automate this process. You can use Jenkins to automate the building of your application and the creation of the Docker image. That way, you can always run the latest version of your project with a simple Docker run command.
  • Automate tests. Containers are amazingly useful for running test environments. Using a simple Docker run command, you have a fresh, clean container that is already running the latest version of your application. Just run the tests and then destroy the container in the end.
  • Create a more dynamic environment. TomEE automatically finds its cluster members, so in the example we can simply start more containers when the load increases. But our load balancer configuration is static right now. Using Docker events or other solutions, it is possible to create dynamic configurations, so the load balancer recognizes when containers come alive or disappear.
  • Use databases and other services. With Docker, it is very simple to start any service your application needs, be it databases, message queues, NoSQL databases, and much more. Experiment with existing Docker images available on Docker Hub. You'll be amazed at how extensively you can build your environment from existing configurations!
  • Use multiple machines. You can run this example on multiple machines to have a real HA cluster. But if you run an orchestration platform to manage your containers, you will be able to go much further. How about running the containers on multiple machines managed from a single point? Or migrating containers to machines with less load? Docker Swarm, Jelastic Platform, and Oracle's new acquisition StackEngine are some Docker orchestration products. They can make a multihost solution super easy.
  • Perform cloud migration. With containers, it is possible even to migrate your containers from one cloud vendor to another. In most places, you can simply re-create the containers somewhere else. But Jelastic has a very nifty demo of Docker container migration, without downtime, that is very cool to see. Can you imagine migrating your Java EE application that way?
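As a first step toward that kind of automation, the whole example (three TomEE containers plus the load balancer) can be captured in a single Docker Compose file. The sketch below is an assumption based on the commands we ran by hand; the service names, image, and env.list come from the example, but the Compose file itself is not part of the repository:

```yaml
version: "2"

services:
  # Three identical TomEE appliances, built from the project's Dockerfile.
  # The YAML anchor lets host2 and host3 reuse host1's definition.
  host1: &tomee
    build:
      context: .
      args:
        WAR_FILE: app_test.war
  host2: *tomee
  host3: *tomee

  loadbalancer:
    image: jasonwyatt/nginx-loadbalancer
    ports:
      - "80:80"
    # The balancer reaches the TomEE containers over the internal
    # network, so their ports don't need to be published on the host.
    links:
      - host1
      - host2
      - host3
    env_file:
      - ./env.list
```

With this file in the project directory, a single `docker-compose up -d` brings up the entire stack, and tearing it down again is just as easy.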
