The other day, my colleague Andrew Phillips and I were discussing the reasons that our customers pick XL Deploy over some of our competitors. Before I started at XebiaLabs, I worked for a large midwestern bank, where I was part of the team that evaluated XL Deploy and other products. During the evaluation, the Sales Engineer pointed out that we could see the actual script that XL Deploy was going to execute during our WAS deploy.
As a developer and system administrator who has written a lot of code and worked on many systems, I have come to dislike graphical-only development tools. They always fail somehow, and then you are left trying to figure out what happened. These graphical tools hold a lot of promise, but sometimes they seem to be more trouble than they are worth. Furthermore, systems that use source code work better with existing version control systems.
I like to know that I can find out what my tool is doing. If something goes wrong, it is great to be able to trace the problem back. XL Deploy is great at this. Let's have a look at this feature.
First, I will start a deployment of the NerdDinner Application on some Windows Servers as follows:
Notice that there are eyeballs on the deployment steps in the deployment plan. We can click on those and see the actual script that is going to be run for this deployment step.
After reviewing the script, if you don't like it, you can easily extend the plugin to change the behavior (see our Customization Manual).
The ability to review the deployment scripts, combined with XL Deploy's extensibility, gives XL Deploy customers the power to automate and control their deployments.
If you use Gradle to build your project, you can now automate project deployment using the new XL Deploy plugin for Gradle, which is available in the XebiaLabs community.
With this plugin you get a new task in your Gradle project, "deploy", which installs your application to a given environment. So you can easily deploy a new snapshot version of your project to a development environment, or you could hook the deployment task into your Gradle release process to automatically install a new version in an acceptance environment.
In this post I will demonstrate how you can set up XL Deploy and Gradle to automatically install your applications.
Note that if you have a CI server, you could also use the XebiaLabs CI plugins, such as the Jenkins, Bamboo, or TFS plugins, to deploy successful builds.
How You Can Use It
The Gradle plugin uses XL Deploy to do the heavy lifting of application deployment. This means that you need a running instance of the XL Deploy server, which gives you a lot of benefits. You can configure any type of complex deployment, such as deploying your application to a cluster of nodes behind a load balancer. And once you've done that, you can reuse it for the development, acceptance, and production environments, thus avoiding deployment-level bugs. Moreover, you can deploy to both Windows and UNIX-based platforms, as well as a vast range of supported middleware; see this page for the list.
There are three steps to getting Gradle and XL Deploy setup ready to deploy your applications:
Install XL Deploy if you don’t have it yet.
Configure the environment in XL Deploy where the application will be deployed.
Add the xl-deploy plugin to your Gradle project.
I will show how to do that using the example of a HelloDeployment web application, which will be deployed to a Tomcat server. For simplicity, the Tomcat server will be running on localhost.
If you have never used XL Deploy before, please follow the steps below to install it and configure an environment. Otherwise you can skip ahead.
Install XL Deploy
If you don't have XL Deploy yet, you can install the free trial edition: just go to the download page and follow the instructions. It should take around 5 minutes to install. When finished, XL Deploy will be available at http://localhost:4516/ by default.
Configure An Environment
I need to let XL Deploy know where my Tomcat server is located. To do that, I will configure a new "local" environment; in XL Deploy 4.5.2 this takes three steps:
Add your local machine to the Infrastructure tree of the Repository tab:
You can then test that the configuration is OK by right-clicking on the created Tomcat server and executing the "stop" and "start" commands.
Create an Environment called “local” and add the Infrastructure/localhost/tomcat/localhost virtual host to it. This is where the web application will be deployed.
If you have problems or your Tomcat setup is more complex than described here, you can check this tutorial.
Note that by the time you read this, XL Deploy 5.0.0 may be out, which has a fully revamped UI for configuring environments.
Configure Your Gradle Build
Now that the XL Deploy setup is ready, we get to the point where the xl-deploy Gradle plugin comes into play.
I will be deploying a simple web application called HelloDeployment, which is built using the Gradle war plugin. After adding the xl-deploy plugin, the build.gradle looks like the following:
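The original build file is not reproduced here; the following is a minimal sketch of what such a build.gradle might contain. The plugin name, artifact coordinates, and property names are assumptions; check the xl-deploy plugin README for the exact syntax.

```groovy
// Hypothetical sketch; plugin name, coordinates, and property names are assumptions.
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        // artifact coordinates are an assumption
        classpath "com.xebialabs.gradle:xl-deploy-gradle-plugin:0.1.0"
    }
}

apply plugin: 'war'
apply plugin: 'xl-deploy'

xldeploy {
    // where the XL Deploy server from the previous section is running
    xldUrl = "http://localhost:4516"
    xldUsername = "admin"
    xldPassword = "admin"
}
```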
The xl-deploy plugin adds two tasks to the project: dar and deploy. The dar task packages the application in a format understandable by XL Deploy. A DAR package contains all artifacts that need to be deployed (there may be more than one in your application), plus a special manifest file. The manifest file must be created at the path src/main/dar/XL Deploy-manifest.xml in the project:
The artifact(project.war) part of the manifest file adds the WAR file built by Gradle into the DAR package. You can read more about the "magic" functions you can use in the manifest file in the README of the project.
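For reference, a manifest for this example might look like the following sketch. The element names follow XL Deploy's standard deployment package manifest format, but the exact syntax for embedding artifact(project.war) is an assumption, so consult the project README.

```xml
<!-- Hypothetical sketch of src/main/dar/XL Deploy-manifest.xml -->
<udm.DeploymentPackage version="1.0-SNAPSHOT" application="HelloDeployment">
  <deployables>
    <!-- artifact(project.war) pulls in the WAR produced by the Gradle war plugin -->
    <jee.War name="HelloDeployment" file="${artifact(project.war)}"/>
  </deployables>
</udm.DeploymentPackage>
```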
The deploy task uploads the generated DAR file to XL Deploy and optionally executes the deployment. The environmentId parameter of the task specifies which environment to deploy to. You can find more configuration options listed here.
Finally, you can execute the deployment using the command gradle deploy. Use the -i option to see more details about the deployment tasks being executed:
➜ HelloDeployment git:(master) gradle deploy -i
...
:war
...
:dar
...
:deploy
Importing dar file /Users/bulat/fun/gradle-xld-plugin/src/test/resources/HelloDeployment/build/libs/HelloDeployment-1.0-SNAPSHOT.dar
Application [Applications/HelloDeployment/1.0-20150203-115939] has been imported
... Application not found in deployed => preparing for initial deployment
Deployeds to be included into generatedDeployment:
Creating a task
-> task id: 2b7e0579-5eca-4884-ba33-27b444553235
Executing generatedDeployment task
-----------------------
Task execution plan:
-----------------------
2b7e0579-5eca-4884-ba33-27b444553235 Description Initial deployment of Environments/local/HelloDeployment
2b7e0579-5eca-4884-ba33-27b444553235 State PENDING 0/1
2b7e0579-5eca-4884-ba33-27b444553235 step #1 PENDING Update the repository with your deployment.
-----------------------
Task execution progress:
-----------------------
2b7e0579-5eca-4884-ba33-27b444553235 step #1 DONE Update the repository with your deployment.
Updating the repository... OK
-----------------------
Task execution result:
-----------------------
2b7e0579-5eca-4884-ba33-27b444553235 Description Initial deployment of Environments/local/HelloDeployment
2b7e0579-5eca-4884-ba33-27b444553235 State EXECUTED 1/1
2b7e0579-5eca-4884-ba33-27b444553235 Start 2015/02/03 11:59:42
2b7e0579-5eca-4884-ba33-27b444553235 Completion 2015/02/03 11:59:42
2b7e0579-5eca-4884-ba33-27b444553235 step #1 DONE Update the repository with your deployment.
Updating the repository... OK
BUILD SUCCESSFUL
Total time: 6.918 secs
Based on our experience, we have found there are several land mines that you need to be aware of when evaluating Application Deployment (or Application Release Automation/ARA) tools. As a Senior Principal Architect at a large midwestern bank, I had the opportunity to evaluate XL Deploy and other tools. Some of the key differences and important design considerations that we found were as follows:
Are you planning on implementing the DevOps best practice of treating Configuration as Code? If yes, then do you prefer the approach of copying configurations from one environment to another or would you prefer to define the configuration required independent of the target?
XL Deploy helps you define your deployable CIs independent of the target when possible. In this way, you can always discover, compare, report and apply versioned configurations across your deployment pipeline.
Will the tool’s reporting support continuous improvement?
You should evaluate the reporting capabilities carefully. Having useful reporting is critical to any continuous improvement process.
What is the future of the tool?
Will you need to rebuild your entire CD environment in 6 months?
How hard will it be to manage workflows?
Other tools use workflows. We have found workflows to be problematic and an approach that does not scale. XebiaLabs has a white paper explaining why workflows are not a suitable solution. XL Deploy uses a model-based approach to define your application deployment process. Because XL Deploy uses a model, it can more efficiently deploy only the application components that need to be changed.
Will OS level patching cause agent problems?
Since XL Deploy does not have agents, there is nothing to patch on your target servers, and it is less likely that an OS patch on your target servers will break your CD process.
Do you want an agent on your target servers? XL Deploy is agentless: it uses the same methods you currently use to log on to your servers and install your CIs manually, so you don't need to install any additional software on the targets. XebiaLabs also has a white paper describing why agents are not an ideal approach.
How much Network Traffic will be driven by the agent?
Agents can consume a lot of network traffic calling home to the master server.
How well will the tool integrate with the rest of your environment?
Large software organizations try to lock you into their solutions. Therefore, you may experience problems trying to integrate with other tools. If you are looking for “best of breed” solutions some tools are going to have problems keeping up with your integration points.
Are you comfortable with tool “magic”?
With XL Deploy, you can see our deployment scripts that will be used during a deployment. If you don’t like our approach you can override our behavior. Transparency is a better approach (see my blog post “Transparency In Your App Deployment Tools Is A Good Thing”). XebiaLabs is much more agile than large organizations and can quickly extend our product to support your needs. Furthermore, there is an active community and plenty of examples to guide users on how to customize XL Deploy.
Is the level of complexity of the tool worth the effort?
Because some vendors try to wedge their tool into a broader suite of tools, the level of complexity will go up. We believe the better approach is to choose tools with open APIs that solve delivery challenges across all of your tools.
Do you want to use an app deployment tool that has weak support for state of the art CI tools like Jenkins, Bamboo or TFS?
Integrating Jenkins with some tools is awkward and involves using workflows that inject an unbelievable amount of complexity into a task. Integrating with XL Deploy is simple.
Under the releases section, a pre-built extension JAR file is available for download that can be dropped directly into the $XLD_HOME/plugins folder to use the functionality.
Here's the xl-rule for adding a step that deploys a WAR file to OpenShift:
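The rule itself ships inside the extension JAR; the following is a sketch of the likely shape of such an xl-rule. The rule name, operations, and script path are assumptions based on the deploy-artifact.sh.ftl script discussed later in this post.

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch of the xl-rule; names and paths are assumptions -->
<rules xmlns="http://www.xebialabs.com/xl-deploy/xl-rules">
  <rule name="rh.TomcatWAR.deploy" scope="deployed">
    <conditions>
      <type>rh.TomcatWAR</type>
      <operation>CREATE</operation>
      <operation>MODIFY</operation>
    </conditions>
    <steps>
      <os-script>
        <!-- resolves to the deploy-artifact.sh.ftl template -->
        <script>rh/deploy-artifact</script>
      </os-script>
    </steps>
  </rule>
</rules>
```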
Now create a new infrastructure item for a machine that has the RHC client library installed on it (the RHC tools can be downloaded once you sign up for a free trial with OpenShift). You may even try to log in to that machine and test a deployment manually using the rhc client.
Ensure that the user you're connecting with has the rhc client available and has performed rhc setup with their OpenShift username and password, so that they can easily deploy applications.
Once you have tested the connectivity of the Overthere connection to the remote machine, right-click and create a new container of type rh.OpenShiftClient. Provide your OpenShift credentials for the rhc client connectivity.
Add the infrastructure to a new environment.
Under Applications, create a new application and then a deployment package, 1.0. Under that, you can create an artifact of type rh.TomcatWAR and upload a sample WAR file.
Now start a new deployment and drag and drop the deployment package and environment.
On opening the plan analyzer, you can see the step that will deploy the WAR file to the OpenShift environment.
As you can see in the deploy-artifact.sh.ftl script above, the WAR file is copied to a temporary location; the script then creates the predefined folder hierarchy described in the Red Hat docs, copies the WAR file into the right folder, and bundles everything into a tar.gz. It uses the appName property of the deployed as the name of the application.
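To make that flow concrete, here is a hedged re-illustration of the same logic in Python rather than the actual FreeMarker/shell template; the directory layout and file names are simplified assumptions.

```python
import os
import shutil
import tarfile
import tempfile

def package_for_openshift(war_path, app_name):
    """Mimic the deploy-artifact.sh.ftl steps: copy the WAR to a temporary
    location, build the folder hierarchy expected by OpenShift, and bundle
    everything into a tar.gz named after the appName property."""
    work = tempfile.mkdtemp()
    # OpenShift binary deployments expect the WAR under a deployments/ folder
    deployments = os.path.join(work, "deployments")
    os.makedirs(deployments)
    shutil.copy(war_path, os.path.join(deployments, "ROOT.war"))
    # bundle the hierarchy into <appName>.tar.gz
    archive = os.path.join(work, app_name + ".tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(deployments, arcname="deployments")
    return archive
```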
I’m at the AWS Summit in NYC, which seems a fitting place to talk about the new integration between XL Deploy and AWS CodePipeline. CodePipeline is Amazon Web Services’ cloud-hosted continuous delivery service, officially launched yesterday. As such, it’s similar to XL Release, but in the AWS developer tools ecosystem and with a focus on automated Build -> Deploy -> Test cycles.
CodePipeline’s default deployment providers are AWS CodeDeploy and AWS Elastic Beanstalk. These two are well suited to a variety of application types, but there are also plenty of apps out there – especially existing enterprise applications – that do not currently fit the CodeDeploy or Beanstalk models.
If you’re looking to deploy to enterprise application servers, portals, CMSes, ESBs; if complex database updates are part of your deployment; if you’re looking to orchestrate deployment of on-prem applications that are load-balanced on-prem, and not by ELB; if you’re looking for a consistent deployment, security and audit model that covers applications managed by CodePipeline as well as others that are still released differently – in these and many other situations, you should take a look at the CodePipeline integration with XL Deploy.
The integration is pretty simple to get running: add XL Deploy as a custom action type for your CodePipeline account, select XL Deploy as your deployment provider, configure your XL Deploy server to retrieve deployment tasks from CodePipeline, and your deployment steps in CodePipeline will be picked up and handled by your on-prem XL Deploy server. You can even have multiple deployment tasks (in different CodePipeline pipelines, or even in the same one) submitted to different XL Deploy servers in your datacenters.
“I have not yet met a software developer who is more proud of the ROI of the software they develop than they are of the human response to what they are making. The history of our species has been to find creative ways to use tools to make our lives easier, and software is nothing else. So if I feel that what I am building or hoping to help other people build, isn’t, it becomes a little bit more boring.”
In the midst of product launches and keynote speeches, I was able to sit down with one of the foremost thought leaders in the DevOps world to talk about the whys of an ever-changing industry and the life of a 33-year-old Vice President of Products. Whether you have seen him jumping out of planes or speaking about software delivery on the big stage, Andrew Phillips pushes the limits of the technology world without forgetting the people and tools that led him there. This is Beyond The Laptop: Andrew Phillips.
What did you have for breakfast?
“Some really good oatmeal, almonds and berries. Normally breakfast is a session on the laptop.”
What is your background? Were you always a DevOps nerd?
“I started on the road to becoming a career scientist; my background is in AI and mathematics, working on computational neuroscience: how our brain works as a thinking machine. That was very fascinating until I moved to the Czech Republic (for a girl), where there wasn’t much work in computational neuroscience for a non-Czech speaker, so I ended up in IT instead. As a software developer, I started thinking that while it’s fun to build interesting software, it’s a rather introspective kind of approach. I worked with TomTom for a while, the GPS guys. This was the first time that I was building software that was actually used by people I met socially, and I started thinking about software as compared to other professions and really wanting to do something with software to achieve something.”
So why the CD/DevOps field?
“The thing that motivates me about these two fields is that they are related to this idea that part of being an ethical professional who works with machines is to make these machines do good and useful things for other people, and that means creating a connection with the end user. So for me CD is just a process approach that tries to make that possible.”
Can you define DevOps?
“No I don’t think anybody can. Of course, there are many definitions out there, and that causes a lot of confusion, which isn’t helping. But I think the important point is not to get fixated on any particular definition of DevOps because that implies that meeting that definition is somehow a legitimate goal. DevOps isn’t a goal, it’s a means of trying to achieve your goals. So you should define your business goals and apply practices and approaches that are out there to get closer to that. Whether that means you are “more aligned with” one particular definition of Devops or the other (or none of them!)…who cares?”
Is DevOps a culture or a process?
“Neither, it is a mindset. I think DevOps boils down in many cases to a number of reasonably well defined processes that the very first couple of DevOpsDays featured in a bunch of talks, where some of the first people to talk about this said don’t copy what we do, try to understand what problems we are trying to solve and learn to think the way we think. That’s a statement that is as profound as it is glib.”
When it comes to DevOps, what do you wish people thought more about?
“What I would like people to think about more is not how we do DevOps but why we are doing DevOps. Because from the why you should be able to derive the how. Whereas if you only have the how and not the why, then it is very difficult for you to know whether you are doing things right.”
Can you give me an example of one of these methodologies being taken out of context and used as just a word?
“Something like a DevOps team or a DevOps engineer or anything along those lines. It’s obviously total nonsense. DevOps is not something that an individual does; DevOps is a way that we all want to act together. It’s like saying you have a camping team: you don’t have a camping team, you go camping together. It’s an activity, but beyond that, there’s an ‘outdoorsness’ feel to it; you can’t boil it down to having a camping czar who’s going to do all the camping in the organization. I am just repeating what others have said before me. I think there is a more nuanced way to talk about this.”
You have been speaking a lot recently about containers and microservices. Is this where the future is heading right now? Should everyone be concentrating on microservices and data centers?
“It’s not going to be a surprise that the answer is no. Microservices are a technical thing and they make a lot of sense for some use cases, but until you figure out your problem, there is no use in adopting them for the sake of adopting them. Microservices are the server-side version of distributed computation and they make sense for all the same kinds of reasons. They are more flexible, they are easy to change in isolation, you can experiment with them more nicely, you can make them more portable, you can move things around; there are lots of classic arguments that point to this. This is nothing new. You should bear in mind that this is a very new space and there are many new questions, and like in any new space you need to be aware that if you jump into it now, you are going to have to be prepared to have the emergency care capability that any emerging technology needs.”
What is your favorite part of being the VP of a multinational company, such as XebiaLabs?
“Learning more about PowerPoint than I ever thought? No, I’m kidding. The great thing is that we are doing something that a lot of interesting and cool companies are trying to implement, and they are willing to work with us to make it happen. The other thing that is fun is that we are a very global organization; I lost count the last time we counted how many languages we speak at XebiaLabs. So I think for an organization our size, growing as it is, we are an incredibly diverse bunch and it’s a lot of fun.”
“Try to sleep. In the hours that remain when I am not trying to help customers or building PowerPoint slides, I try to stay as real as I can when it comes to technology. I am a member of the Apache Software Foundation and the VP of the Apache jclouds project right now. I survived writing a book, which was a lot of fun; I have an enormous increase in sympathy for people who go through the editing process. Writing a book is easy but editing a book is painful. My most recent book is called Scala Puzzlers. We use Scala internally and it is one of the most interesting languages out there right now. Other things I do: I work on open source, I try to spend as much time as I can with my girlfriend, and then I relax by jumping out of planes and flying in wind tunnels.”
Where do you see XebiaLabs going in the next year or two?
“What I am saying is not far beyond what people might guess. Ultimately, ‘write better software to make our users happier’ needs to go beyond just the mechanics of how to get software from point A to point B, and it needs to go into the entire process of how do I know what my users want and how do I verify that what I give my users actually makes them happier. For me, the CD maturity chain is first ‘let’s deliver software better’, second ‘deliver better software better’.”
This blog post caters to users who have a good understanding of tasks in XL Release and also know how to code in Python. It talks about how you can make the best use of the enhanced Jython API, available since XL Release 4.6.0, to dynamically generate tasks in a phase inside a release at runtime.
The Jython API can be used in two different ways in XL Release.
It can be injected as a code snippet into a Script Task, which is then executed when the release execution reaches the task. This can be thought of as a macro, for those who have used MS Excel/Project in the past along with VB macros. The downside of putting code snippets inside a Script Task is that it's fragile: it can easily be broken by a single keystroke made by mistake, and it doesn't give a nice appearance with fancy user inputs.
The second, fancier way is to properly wrap your code snippet in a Custom Task and expose the input and output fields as UI text fields. This provides a better way to encapsulate the code snippet, and it's much more stable since the inline code logic can't be tampered with.
Here's a Jython code snippet to create both simple and custom script tasks in XL Release. The following two code snippets, when pasted directly into a Script Task separately and executed, would lead to the creation of a variety of new tasks in the same phase: a manual task, a notification task, a script task, a Deployit task, a webhook task, and a Jenkins task.
Simple Tasks with Jython
import sys, string, time
import com.xhaus.jyson.JysonCodec as json
from com.xebialabs.xlrelease.domain import Task
from com.xebialabs.deployit.plugin.api.reflect import Type
from java.text import SimpleDateFormat
def createSimpleTask(phaseId, taskTypeValue, title, propertyMap):
    parenttaskType = Type.valueOf(taskTypeValue)
    parentTask = parenttaskType.descriptor.newInstance("nonamerequired")
    parentTask.setTitle(title)
    sdf = SimpleDateFormat("yyyy-MM-dd hh:mm:ss")
    for item in propertyMap:
        if item.lower().find("date") > -1:
            if propertyMap[item] is not None and len(propertyMap[item]) != 0:
                parentTask.setProperty(item, sdf.parse(propertyMap[item]))
        else:
            parentTask.setProperty(item, propertyMap[item])
    taskApi.addTask(phaseId, parentTask)
createSimpleTask(phase.id,"xlrelease.Task", "this is cool", {'description':'coolio'})
createSimpleTask(phase.id,"xlrelease.NotificationTask", "this is the title", {'description':'this is the description'})
createSimpleTask(phase.id,"xlrelease.ScriptTask", "this is the title", {'description':'this is the description'})
createSimpleTask(phase.id,"xlrelease.DeployitTask", "this is the title", {'description':'this is the description','server':'localhost'})
Custom Script Tasks with Jython
import sys, string, time
import com.xhaus.jyson.JysonCodec as json
from com.xebialabs.xlrelease.domain import Task
from com.xebialabs.deployit.plugin.api.reflect import Type
from java.text import SimpleDateFormat
def createScriptBasedTask(phaseId, taskTypeValue, title, precondition, propertyMap):
    parenttaskType = Type.valueOf("xlrelease.CustomScriptTask")
    parentTask = parenttaskType.descriptor.newInstance("nonamerequired")
    parentTask.setTitle(title)
    childTaskType = Type.valueOf(taskTypeValue)
    childTask = childTaskType.descriptor.newInstance("nonamerequired")
    for item in propertyMap:
        childTask.setProperty(item, propertyMap[item])
    parentTask.setPythonScript(childTask)
    parentTask.setPrecondition(precondition)
    taskApi.addTask(phaseId, parentTask)
createScriptBasedTask(phase.id,"webhook.JsonWebhook", "this is the title",None,{"URL":"http://myurl", "method":"PUT", "body":"{key1:val1,key2:'value is 2'}", "username": "user", "password":"pass"})
createScriptBasedTask(phase.id,"webhook.XmlWebhook", "this is the title",None,{})
createScriptBasedTask(phase.id,"jenkins.Build", "this is the title",None,{})
Now the next cool thing would be to encapsulate these in a Custom Task wrapper so that we only expose the inputs and outputs.
Steps
Let's think of a use case. My use case for this blog is that I want to be able to generate multiple XL Deploy deployment tasks based on a list of application-to-environment mappings, and users should be able to invoke it either through the UI or the REST API.
So now let's stop our XL Release server and go to the XLR_HOME/ext directory.
In there we'll edit the synthetic.xml to add a new custom task type.
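The exact synthetic.xml entry isn't shown in this post; the following is a sketch of what it might look like, matching the three inputs used later in the walkthrough. The property kinds, labels, and referenced type are assumptions.

```xml
<!-- Hypothetical sketch of the synthetic.xml entry for the custom task -->
<synthetic xmlns="http://www.xebialabs.com/deployit/synthetic">
  <type type="mytype.GenerateDeployments" extends="xlrelease.PythonScript">
    <property name="server" category="input" label="XL Deploy Server"
              kind="ci" referenced-type="xlrelease.XLDeployServer"/>
    <property name="targetPhase" category="input" label="Target Phase"/>
    <property name="deploymentMap" category="input" label="Deployment Map"/>
  </type>
</synthetic>
```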
Then create a new folder called mytype under the XLR_HOME/ext directory.
Under the mytype folder, create a new Python script called GenerateDeployments.py:
import sys, string, time
import com.xhaus.jyson.JysonCodec as json
from com.xebialabs.xlrelease.domain import Task
from com.xebialabs.deployit.plugin.api.reflect import Type
from java.text import SimpleDateFormat
def createSimpleTask(phaseId, taskTypeValue, title, propertyMap):
    parenttaskType = Type.valueOf(taskTypeValue)
    parentTask = parenttaskType.descriptor.newInstance("nonamerequired")
    parentTask.setTitle(title)
    sdf = SimpleDateFormat("yyyy-MM-dd hh:mm:ss")
    for item in propertyMap:
        if item.lower().find("date") > -1:
            if propertyMap[item] is not None and len(propertyMap[item]) != 0:
                parentTask.setProperty(item, sdf.parse(propertyMap[item]))
        else:
            parentTask.setProperty(item, propertyMap[item])
    taskApi.addTask(phaseId, parentTask)

serverId = "Configuration/Deployit/" + str(server)
deploymentList = deploymentMap.split(",")
phaseList = phaseApi.searchPhasesByTitle(targetPhase, release.id)
if len(phaseList) == 1:
    for item in deploymentList:
        itemSplit = item.split(":")
        deploymentPackage = itemSplit[0]
        environment = itemSplit[1]
        createSimpleTask(phaseList[0].id, "xlrelease.DeployitTask", "Deployment of %s to %s" % (deploymentPackage, environment), {'description': "Deployment of %s to %s" % (deploymentPackage, environment), 'server': serverId, 'deploymentPackage': deploymentPackage, 'environment': environment})
Now restart the server and log in using a browser.
Create a new template called MorningReleaseTemplate.
In the properties, make sure to set the Script User and Password to something like admin/admin, or a user that has permissions to execute scripts.
Create two phases: Prepare and Deploy.
Create the first task in the Prepare phase as mytype.GenerateDeployments.
Create the second task as a manual task.
Create another manual task in the Deploy phase.
Set the following input properties in the custom task:
Set XL Deploy Server to a reference to the target XLD server name under Configuration.
Set Target Phase to the name of the future phase to add deployments to; Deploy in this case.
Set Deployment Map to a variable, for now called ${deploymentMap}.
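The GenerateDeployments.py script above parses deploymentMap with split(",") and split(":"), so the variable's value should be a comma-separated list of package:environment pairs. A quick sketch of that format (the package and environment IDs are hypothetical):

```python
# Hypothetical deploymentMap value: comma-separated "package:environment" pairs
deployment_map = ("Applications/PetClinic/1.0:Environments/DEV,"
                  "Applications/PetShop/2.0:Environments/TEST")

# same parsing as in GenerateDeployments.py
pairs = [item.split(":") for item in deployment_map.split(",")]
for package, environment in pairs:
    print("Deployment of %s to %s" % (package, environment))
```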
Now our template is ready to be kicked off remotely. I am using a Firefox REST client to trigger a release. Here's how it's done.
From the browser in which the XLR template is open, copy the last part of the address bar, which looks like this: e.g., Release8717722.
Open a new REST client window, type the following, and submit Send:
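The exact REST call is not reproduced here, and the endpoint varies by XL Release version. The following Python sketch only assembles the URL and JSON body; the /api/v1/templates/.../create path and the payload shape are assumptions, so check the REST API documentation for your version before sending anything.

```python
import json

def build_create_release_request(base_url, template_id, deployment_map):
    """Assemble a hypothetical create-release request for the template ID
    copied from the address bar (e.g. "Release8717722")."""
    # endpoint path is an assumption; verify against your XLR version's API docs
    url = "%s/api/v1/templates/Applications/%s/create" % (base_url, template_id)
    body = {
        "releaseTitle": "Morning release",
        # fills the ${deploymentMap} variable defined in the template
        "variables": [{"key": "deploymentMap", "value": deployment_map}],
    }
    return url, json.dumps(body)

url, body = build_create_release_request(
    "http://localhost:5516", "Release8717722",
    "Applications/PetClinic/1.0:Environments/DEV")
print(url)
```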
The response in XL Release is that it created a new release and triggered it. It also ended up creating deployment tasks dynamically that were initially not part of the template.
In the past, application deployment meant moving lots of components provided by developers onto lots of servers, databases, etc. managed by Operations. With Docker and containers, we often hear statements like: "That all goes away now! Developers simply have to deliver a ready-to-go Docker image, and we're done! No more need for app deployment tools like XL Deploy!"
Having worked with many users moving towards container-based deployments, we have found that that statement simply isn't true: while Docker and containers can certainly make some aspects of packaging and deployment easier, many challenges remain that tools like XL Deploy can help with.
Here’s how:
The packaging of a traditional application depends on how it has been written: the operating system (Linux, Windows), the language (Java, C#, Ruby, Python, JS, ...), its behaviors (frontend, backend, web, processing), and whether or not it uses a database (SQL, NoSQL). The same is true of the associated environment and its infrastructure: the host (virtual, cloud), the operating system (Linux, Windows), the middleware (Tomcat, Apache Httpd, IIS, IBM WebSphere, ...), and the data (MySQL, MongoDB).
In XL Deploy, we would have the following classic deployment where:
the PetPortal version 2.0-89 contains: petclinic (jee.War), petclinic-backend (jee.War), petDatasource (jee.Datasource), sql (sql.SqlFolder), and logger (a custom type to configure log4j.properties)
Docker’s promise is the following: “one single kind of item in the package (docker.Image) will be deployed on a single kind of target (docker.Machine) to become a running Docker container.”
What are the modifications if I package and deploy my PetPortal application using Docker?
in the package, instead of having two jee.War files (petclinic, petclinic-backend), I would have two docker.Image items based on tomcat:8.0 from the Docker Hub, each containing the WAR file and its configuration, but I would keep my 'smoke test' and my 'sql' folder. Moreover, I would need to package the property file externally to my image (an image is like a commit: do not modify it!)
in the environment, I would replace the 'tomcat.Server' and the 'tomcat.VirtualHost' with a single 'docker-machine', and I would keep my 'sql-dev' MySQL SQL client and test-runner-1 (smoketest.Runner).
If we examine the result, there are surprisingly few changes between the original deployment plan generated by XL Deploy, and the ‘dockerized’ version. The main difference is that we are now using a couple of docker commands:
No more ‘war copy’ steps but ‘docker pull’ steps
No more ‘start tomcat process’ steps but ‘docker run’ steps
If we look in detail at the two docker run commands generated by XL Deploy and its xld-docker-plugin:
for the ‘petclinic-backend’ container, the command is as simple as the ones you can read in any Docker blog post or documentation: docker run -d --name petclinic-backend petportal/petclinic-backend:1.1-20150909055821
for the ‘petclinic’ container, the command is a bit more complex because we need to configure it. According to the documentation, the ‘docker run’ command may accept up to 56 parameters, many of which can be added several times. In our simple case, we need:
to link it with the petclinic-backend,
to manage exposed ports,
to mount volumes to apply the configuration,
to set environment variables:
so the generated command is: docker run -d -p 8888:8080 --link=petclinic-backend:petclinic-backend -v /home/docker/volumes/petportal:/application/properties -e "loglevel=DEBUG" --name petclinic petportal/petclinic:3.1-20150909055821
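The mapping from declarative container properties to command-line flags can be sketched in a few lines of Python. This is purely an illustration of the idea, not the actual xld-docker-plugin code; the function name and parameters are hypothetical:

```python
def build_docker_run(name, image, ports=None, links=None,
                     volumes=None, env=None, detach=True):
    """Assemble a 'docker run' command string from container properties."""
    parts = ["docker", "run"]
    if detach:
        parts.append("-d")
    for host_port, container_port in (ports or {}).items():
        parts.append("-p %s:%s" % (host_port, container_port))
    for link in (links or []):
        parts.append("--link=%s:%s" % (link, link))
    for host_path, mount_point in (volumes or {}).items():
        parts.append("-v %s:%s" % (host_path, mount_point))
    for key, value in (env or {}).items():
        parts.append('-e "%s=%s"' % (key, value))
    parts.append("--name %s" % name)
    parts.append(image)
    return " ".join(parts)
```

Feeding it the petclinic properties above reproduces the generated command, which is exactly the point: the deployment tool derives the flags from the model instead of someone hand-writing them per environment.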
As the ‘petclinic’ container needs to be linked to the ‘petclinic-backend’, the xld-docker-plugin takes care to generate the steps in the right order: first run the linked container, then the container that depends on it.
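Ordering steps by container links is essentially a topological sort. A minimal sketch of the idea (illustrative only, not the plugin's implementation):

```python
def order_containers(links):
    """Return container names so that every container appears after
    the containers it links to (depth-first topological order).

    'links' maps each container name to the list of containers it
    links to; containers without links map to an empty list."""
    ordered, visited = [], set()

    def visit(name):
        if name in visited:
            return
        visited.add(name)
        for dep in links.get(name, []):
            visit(dep)  # start dependencies first
        ordered.append(name)

    for name in links:
        visit(name)
    return ordered
```

For our example, `order_containers({"petclinic": ["petclinic-backend"], "petclinic-backend": []})` puts petclinic-backend first.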
As with other middleware, the main pain point is not calling remote commands but generating the right commands with the right configuration in the right order.
Two pain points in particular stand out immediately:
the container configuration, with its more than 50 parameters covering network, OS, and security settings: TCP port mapping, links with other containers, memory, privileges, etc. (cf. the docker run documentation)
volume management. Container configuration is often done by providing a set of files that first need to be uploaded to the docker-machine, after which the volume is mounted in the container. The same story applies when you run a container that manages data (e.g. a SQL database).
In the following scenario, the configuration is managed by the ‘config’ docker.Folder that contains placeholders in .properties files. When a value is modified, the folder should be uploaded and the container restarted.
So managing the configuration for a given environment can be difficult, and the application still needs to be deployed to several environments; XL Deploy’s dictionaries help you here. Moreover, moving to Docker usually implies moving to a microservices architecture, which in turn implies even more deployments. See the XL Deploy Docker Microservice Sample Application.
In short:
Q: Does Docker mean the end of application deployment tools like XL Deploy?
A: No, definitely not!
Q: Does XL Deploy help me and my organization use Docker successfully?
A: It most definitely does! XL Deploy can allow your organization to transition to Docker at your own pace, with minimal disruption to your app delivery process.
At XebiaLabs we build products to help companies visualize, automate and control the process of releasing their software. Of course, we have our own release process too – and like pretty much everyone’s release process, there is room for improvement. Naturally, we turned to our own tools to make that happen. In this post I am going to explain how XL Release has helped us to visualize the release process of XL Deploy, our product for deployment automation, and how we optimized the release process by automating it.
Step 1: Model the process with manual tasks
Our customers use XL Deploy on premise. Releasing XL Deploy for us means compiling and testing all components and plugins, assembling the documentation and other resources, putting everything in a distribution and finally making everything available online for download, including the announcement.
When we first started to use XL Release, we modeled our release flow with only manual tasks. That’s not to say nothing was automated: we had a lot of individual scripts, but there was no flow connecting everything together. Having only manual tasks in XL Release felt a bit disappointing back then, but in hindsight that was perfectly okay, maybe even desirable, as a start. How can you automate something without understanding the bigger picture? A release plan with only manual tasks is very flexible. We have reordered and changed many tasks since then, and learned a great deal. The most important things we learned are:
What is the exact scope of the release? Including what actions belong to which people? How do we break up larger tasks into more concrete actions?
What is the order of events? Which tasks depend on each other? Which tasks can be done in parallel? Which phases do we have?
Which systems do we need to interact with? How do we integrate with those systems?
But most importantly, it started the discussion about release automation.
Step 2: Start automating and wire it together
Our goal is to have a release flow that takes our source code and turns it into a downloadable distribution on the website, with the click of a button. The release process is actually quite technical by nature, so we started by automating the release process of individual components. We ended up with many small automated subtasks. From there on we started wiring everything together.
It is important to understand that we manage our release process at different levels. Let’s look at them from the bottom up:
Gradle is our build tool and focuses just on releasing one specific component. It fetches dependencies, compiles the source code and runs tests, but Gradle is also involved in the release itself. We created Gradle plugins for our low level release tasks like bumping version numbers and creating tags in Git.
Jenkins is our middle layer and is basically where all the hard work happens during the release, like executing the actual build process and delegating the work to build slaves. For the release process, there is not a lot of control or logic in this layer. We like to keep this layer as thin as possible.
XL Release sits on top to orchestrate the flow of the release and delegates work to Jenkins and Gradle. It also interacts with other systems involved in the release process, like JIRA and our download server. XL Release is in full control of the release process.
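To give a feel for what a "low level release task like bumping version numbers" boils down to, here is a tiny Python sketch of the logic. It is purely illustrative; the real implementation lives in our Gradle plugins:

```python
def bump_version(version, part="patch"):
    """Bump a major.minor.patch version string.

    part is "major", "minor", or "patch" (the default)."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return "%d.0.0" % (major + 1)
    if part == "minor":
        return "%d.%d.0" % (major, minor + 1)
    return "%d.%d.%d" % (major, minor, patch + 1)
```

For example, `bump_version("5.5.2")` yields `"5.5.3"`, and `bump_version("5.5.2", "minor")` yields `"5.6.0"`.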
Step 3: Where are we now
Currently most of our high-level release tasks are automated tasks in XL Release. After a couple of releases and many improvements, this is what our release template currently looks like:
To explain in a bit more detail, we distinguish the following phases in our release:
Release dependencies: First we make sure to have all dependencies of the core product available.
Release preparation: In this phase, we do a final check and we put everything in order. This is a very important phase because we try to gather all human input, allowing the rest of the release to proceed in an unattended way.
The core release: If all the builds are green, we build the source code and create the distribution.
Smoke test: We run a final smoke test that integrates all released artifacts together and tests them one last time. A final quality check before we go live.
Go live: After this phase there is no going back. When the product owner accepts the release, everything is published on the website.
Post release: A list of tasks that need to be done to wrap up the release and to prepare development for next release.
Some of our automated tasks include:
Triggering builds in Jenkins: To trigger the release build in Jenkins.
Running remote shell scripts: Triggers a remote shell script to start syncing the distribution from nexus to our download server.
Verifying distributions are downloadable: A custom task that checks whether the HTTP status code of a specific URL is OK.
Marking the released version as released in JIRA with the current date: Yet another custom task that makes REST calls to the JIRA API.
It’s quite easy to create custom tasks for XL Release. Have a look at our community plugins for examples on how to do this.
Focusing on release automation changed our mindset. Aiming for a fully automated release influences the way you make decisions. When we are about to make a change that affects the release, the first question asked is: How can we fit that in our automated release process?
What’s next?
We’ve made a lot of progress with our release automation, but there is always room for improvement. There are still some manual tasks left to automate, like automatically creating maintenance branches together with Jenkins jobs after the release, or announcing the release on Zendesk, etc. There will always be manual tasks for things like reviewing the release notes. It’s just a matter of how you put together your flow. We organized our flow in such a way that the manual tasks go in the early phases. We “prepare” the automated tasks to continue in an unattended way.
XL Release gives us a high-level overview, great flexibility, and control, something that would have been impossible using just Jenkins. The release process changes all the time, especially at the highest (process) level. It helps to have a flexible tool that does not carve the process in stone. For that reason, XL Release has earned a permanent place in our release automation tool belt.
If you would like to try XL Release to start automating your pipeline and gaining control and visibility, you can download it here: https://xebialabs.com/products/xl-release/
In the newer versions of XL Deploy (5.x) there is a new feature that allows you to put the XL Deploy server in maintenance mode. This allows you to prevent users from making changes to the repository while the administrators are trying to work with the repository or to cleanly restart the server. One of our customers noticed that we did not have a CLI way to put the XL Deploy server in maintenance mode. That seemed inconvenient, so in this blog I’m going to show you how to extend the XL Deploy CLI with a new object that will provide all of the ServerService methods from the REST API in the CLI.
Just like the XL Deploy server, the XL Deploy CLI has a few ways it can be extended. As your CLI scripts get more complicated, they will become longer, and you will likely want to start breaking them up into smaller parts or modules. The CLI tool is an implementation of Jython with extensions for XL Deploy. You can use this fact to create modules that can be reused to make your scripts more powerful.
To figure out where you can put your custom modules in the CLI, start the CLI and do the following:
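Since the CLI is Jython underneath, the standard module search path applies. For example (the exact command and output vary per installation; this is just standard Python):

```python
# Print the module search path; the CLI looks for custom modules
# in these locations.
import sys

for path in sys.path:
    print(path)
```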
This is the list of places the CLI is looking for modules. To check this out let’s make a custom module. Since I like pizza I’m going to make a little Pizza class and put it in a Pizza (Pizza.py) module as follows:
class Pizza:
    def __init__(self, toppings):
        self.toppings = toppings

    def listToppings(self):
        for topping in self.toppings:
            print topping
        # End foreach
# End Class
We will save this module file in /opt/xebialabs/xl-deploy-5.0.1-cli/lib/Lib/Pizza.py. Now when we run the CLI, this module will be available to us. We can see how that might work as follows:
That is the first way you can extend the XL Deploy CLI.
Another interesting way of extending the CLI is to put scripts in the ext folder. This is the use case that I was initially interested in. When we start the CLI there are several objects that are loaded to allow the CLI to interact with the XL Deploy server. When you start the CLI you can get a list of these objects by running the help() command as follows:
admin > help() XL Deploy Objects available on the CLI:
* deployit: The main gateway to interfacing with XL Deploy.
* deployment: Perform tasks related to setting up deployments
* factory: Helper that can construct Configuration Items (CI) and Artifacts
* repository: Gateway to doing CRUD operations on all types of CIs
* security: Access to the security settings of XL Deploy.
* task2: Access to the task block engine of XL Deploy.
* tasks: Access to the task engine of XL Deploy. !Deprecated! Use task2 instead.
To know more about a specific object, type <objectname>.help(). To get to know more about a specific method of an object, type <objectname>.help("<methodname>").
admin >
This is useful, but we are missing the serverService object from the REST API (com.xebialabs.deployit.engine.api.ServerService). Instead of building a REST client in my scripts, I want this to be a part of my CLI. To do this we can create another module with a new ServerService class and create an instance of that class that will be available to all of our scripts. The source code for this module is available on Gist. With this file saved in our ext folder we can start the CLI and interact with the serverService object just like other XL Deploy objects in the CLI. Some examples of how we can use this module are as follows:
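Conceptually, such a module just wraps HTTP calls to the server's REST endpoints in a Python class that the CLI exposes. The following is a simplified, hypothetical sketch (the class shape, endpoint paths, and injected HTTP helper are illustrative; see the Gist for the real code):

```python
class ServerService(object):
    """Hypothetical CLI wrapper around server REST endpoints.

    'http_call' is an injected function (method, path) -> response body,
    so the class itself stays independent of any HTTP library."""

    def __init__(self, http_call):
        self._call = http_call

    def state(self):
        return self._call("GET", "/server/state")

    def maintenanceStart(self):
        return self._call("POST", "/server/maintenance/start")

    def maintenanceStop(self):
        return self._call("POST", "/server/maintenance/stop")
```

Dropping a script that instantiates such a class into the ext folder makes the object available in every CLI session.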
admin > serverService.help()
* state
* shutdown
* maintenanceStop
* maintenanceStart
* licenseReload
* info
* gc
admin > serverService.maintenanceStop()
RUNNING
admin > serverService.help("maintenanceStart")
Put server into MAINTENANCE mode (prepare for shutdown).
admin > serverService.maintenanceStart()
MAINTENANCE
admin > serverService.state()
MAINTENANCE
admin > serverService.maintenanceStop()
RUNNING
admin >
There are plenty of possibilities for customizing the CLI with these techniques.
Today we’ll talk about how you can write a Control Task in XL Deploy using Jython for managing the contents of the XL Deploy Repository itself.
Control Tasks
Control tasks are actions that can be performed on middleware or middleware resources; for example, checking the connection to a host or starting/stopping a middleware server. When a control task is invoked, XL Deploy starts a task that executes the steps associated with the control task.
To trigger a control task on a CI in the repository, do the following:
List the control tasks for a CI. In the Repository, locate the CI for which you want to trigger a control task. Right-click the item to see the control tasks.
Execute the control task on a CI. Select the control task you want to trigger. This will invoke the selected action on the CI.
Provide parameters. If the control task has parameters, you must provide them before you start the control task.
Control tasks come both as simple tasks that you just trigger and as more complex ones to which you can pass parameters for the desired behavior.
The links provided below will help you build your knowledge of control tasks.
The third link provides knowledge of the most common types of control tasks available:
The shellScript delegate has the capability of executing a single script on a target host.
The localShellScript delegate has the capability of executing a single script on the XL Deploy host.
The shellScripts delegate has the capability of executing multiple scripts on a target host.
The localShellScripts delegate has the capability of executing multiple scripts on the XL Deploy host.
jythonScript Delegate
If you want to create or update your XL Deploy Repository, the jythonScript Delegate can come to the rescue.
I’ve provided some samples of how to use the jythonScript delegate. Copy/paste the XML content into your synthetic.xml under the XLDeploy_HOME/ext folder, create a utils folder under XLDeploy_HOME/ext, and then create the mentioned scripts with the given content in that utils folder.
Sample 1 use case: Fill a dictionary with values from a properties file
Usage: To use the following task, do the following:
Go to the repository view in UI
Create an empty dictionary anywhere under the Environment hierarchy and save it.
Then right click on the dictionary and select “Add Raw Properties”
In the new tab on the right side, copy/paste the content of a properties file and then execute
The task will fill the dictionary with the key/value pairs from the properties file and print output consisting of key={{key}} lines for all keys. You can paste that back into your properties file and voila! You have a ready dictionary along with a properties file updated with placeholders.
Synthetic.xml
<type-modification type="udm.Dictionary">
<method name="getProperties" label="Add Raw Properties.." description="Pulls dictionary values from raw properties" delegate="jythonScript"
script="utils/addProperties.py">
<parameters>
<parameter name="text" size="large" />
</parameters>
</method>
</type-modification>
utils/addProperties.py
import sys

print "=================================================================="
print "Copy new file contents with replaced placeholders from below here"
print "=================================================================="
properties = [value.strip() for value in params.text.split("\n")]
rep = thisCi._delegate
for prop in properties:
    if prop.find("#") != 0 and len(prop) != 0:
        keyVal = prop.partition("=")
        rep.entries[keyVal[0]] = keyVal[2]
        print keyVal[0] + "={{" + keyVal[0] + "}}"
    else:
        print prop
repositoryService.update(thisCi.id, thisCi)
Sample 2 use case: Create multiple folders under a hierarchy
Usage: To use the following task, do the following:
Go to the repository view in UI
Right click on any parent (Infrastructure/Environment/etc) or any existing Directory
Select the option “Create Folders under”
Then, in the new tab, provide a comma-separated list of folder names and execute
Once you have refreshed the view, you’ll see new folders created under the hierarchy
import sys
from com.xebialabs.deployit.plugin.api.reflect import Type

def createFolder(foldername):
    if repositoryService.exists(foldername):
        return False
    typeObj = Type.valueOf("core.Directory")
    myfolder = metadataService.findDescriptor(typeObj).newInstance(foldername)
    repositoryService.create(myfolder.id, myfolder)
    return True

for folder in params.folderNames.split(","):
    if createFolder(thisCi.id + "/" + folder):
        print "Created Folder : " + thisCi.id + "/" + folder
Sample 3 use case : Perform a control task on multiple items of the same type
Usage: To use the following task, do the following:
Go to the repository view in UI
Go to Infrastructure and right-click on it or on some directory. (NOTE: This code only showcases the capability of running multiple control tasks at once; it references only the overthere.Host type and executes control tasks that don’t take any parameters. If you want to change this behavior, you will have to modify the script and the method signature further.)
Select the option “Perform Bulk Control Task”
Then, in the new tab, first select a list of hosts from the available hosts
Second, type a control task name, like checkConnection, and then execute
In the log section below, you will see the control task iterate over each host and execute a checkConnection
You can see the results later under Reports > Control Tasks by searching for the current date/time range
Synthetic.xml
<type-modification type="internal.Root">
<method name="performBulkControlTask"
description="Performs a control task on multiple CIs of same type"
task-description="Performs a control task on multiple CIs of same type"
delegate="jythonScript"
script="utils/performBulkControlTask.py">
<parameters>
<parameter name="hostList" kind="list_of_ci" referenced-type="overthere.Host" description="Host Selection" />
<parameter name="controlTaskName" kind="string" description="Control Task Name" />
</parameters>
</method>
</type-modification>
utils/performBulkControlTask.py
import time

for host in params.hostList:
    ctrlobj = controlService.prepare(params.controlTaskName, str(host))
    taskid = controlService.createTask(ctrlobj)
    taskBlockService.start(taskid)
    taskobj = taskBlockService.getTask(taskid)._delegate
    print "Executing " + taskobj.description
    time.sleep(5)
    for step in taskobj.steps:
        print step.description + " : " + str(step.state) + "\n" + step.log
    taskBlockService.archive(taskid)
You can change the logic of any of these scripts for a different desired behavior. Hopefully I have been able to showcase the capability of what you can achieve using jythonScript delegate.
One of the great merits of using XL Deploy is that it helps you to maintain one way of working across the different types of middleware and different applications. With the new provisioning features introduced today in XL Deploy 5.5, we now add the same way of working for provisioning infrastructure to this list. Now it really doesn’t matter anymore where you deploy, whether it is to physical boxes, virtual infrastructure or to the many cloud solutions that exist today. You can provision the infrastructure in exactly the same way as you do your deployments.
By using templates for infrastructure, you can easily create standard environments for your development team to use. This feature ensures that you can very easily create a self service portal where your development teams can spin up new infrastructure just as easily as deploying applications to existing infrastructure.
With this new feature, XL Deploy offers a scalable way of orchestrating and governing your favorite cloud or container solutions using its extensible plugin mechanism. You can integrate with any infrastructure you like, preventing vendor lock-in and letting you choose the best infrastructure for the job. Out of the box we support creating EC2 instances in Amazon Web Services (AWS) and preparing your machines with the Puppet provisioner from Puppet Labs. With many cloud solutions you pay per container used. When using XL Deploy, the un-deployment of applications can be linked to the tear-down of environments, which will definitely help you control your cloud costs.
Why Cloud Provisioning with XL Deploy?
You only want to pay for resources that you are actively using.
You want to easily (and temporarily) add more capacity using both cloud and datacenter resources
You want the resources to be automatically created when you need them.
Your backend is hosted in your datacenter but your frontend is in the cloud. You need to deploy them together.
You want to have the same repeatable way of working whether you deploy to the cloud or to traditional environments.
You want the same level of control whether you deploy to the cloud or to traditional resources.
You want to be able to switch clouds easily. XL Deploy lets you stay independent.
Here’s an example. Let’s say you temporarily need more capacity to support the busy Christmas sales. You could temporarily add new cloud infrastructure to supplement your on-prem infrastructure with the same amount of control, using the same way of working. When the high-volume window is over, you can just as easily remove the unneeded capacity and keep the templates ready at hand for next time. Can you imagine setting up a test environment with all its dependencies, deploying the application(s) you want to test, and just as easily tearing everything down after the tests have completed? With the same easy process applied to your development, pre-production and production environments? Many of our customers are contemplating moving (parts of) their infrastructure into the cloud, and XL Deploy can make this process very easy. You create a template for your cloud environment and you just drag the applications onto the cloud environment… and if you change clouds, no problem.
This new feature takes all the advantages of our deployment approach and applies it to the environment provisioning process… not replacing the excellent provisioning tooling of today, but seamlessly adding an enterprise orchestration layer on top.
It’s official, mainframes are here to stay. And that’s a good thing, considering how much we depend on them to this day. Mainframes are used in almost all industries from finance and banking to government, insurance, and much more. The question now is what to do about those good old-fashioned “dinosaur” apps. XebiaLabs’ Sunil Mavadia, Director of Customer Success, discusses this in an article recently published on DevOps.com. According to him, those apps must be modernized or we all stand at risk of falling behind.
But how? One thing for sure is that scripting is not the answer. Updating mainframes requires modifications to code, data interfaces and other components across multiple platforms simultaneously. Scripts on the other hand are impossible to audit and lack end-to-end process visibility.
Another possibility is to throw away existing applications and start from scratch. This, however, is the least efficient option as the enterprise already has an investment in these applications. From an enterprise standpoint, this leaves only one other option.
Building bridges between legacy applications and modern engineering is what Sunil prescribes as the only practical solution as it’s the one way to simultaneously “cut risk, cost and complexity, while improving…responsiveness to ever-changing customer needs.” The best part about this is that enterprises are becoming increasingly open to adapting to DevOps because of the clear benefits it provides them.
We’re excited to announce the beginning of our blog series, “DevOptimus Prime’s Tool Tips!” This series will explore DevOps and Continuous Delivery tools, best practices, how-to’s, and new features. Transform your release pipeline with DevOptimus Prime’s Tool Tips.
XL Deploy includes fine-grained access control that ensures the security of your middleware and deployments. The security mechanism is based on the concepts of principals, roles, and permissions.
Principals, Roles, and Permissions
Principals
A security principal is an entity that can be authenticated in XL Deploy. Out of the box, XL Deploy supports only users as principals; users are authenticated by means of a user name and password. When using an LDAP repository, users and groups in LDAP are also treated as principals.
Roles
Roles are groups of principals that have certain permissions in XL Deploy. Roles are usually identified by a name that indicates the role the principals have within the organization, for example, deployers. In XL Deploy, permissions can only be granted to or revoked from a role.
When permissions are granted, all principals that have the role are allowed to perform some action or access repository entities. You can also revoke granted rights to prevent the action in the future.
Permissions
Permissions are rights in XL Deploy. Permissions control what actions a user can execute in XL Deploy, as well as which parts of the repository the user can see and change. XL Deploy supports global and local permissions.
Global permissions
Global permissions apply to XL Deploy and its repository.
The following table shows the global permissions that XL Deploy supports.
admin: Grants all rights within XL Deploy.
discovery: The right to perform discovery of middleware.
login: The right to log into the XL Deploy application. This permission does not automatically allow the user access to nodes in the repository.
security#edit: The right to administer security permissions.
task#assign: The right to reassign any task to someone else.
task#takeover: The right to assign any task to yourself.
task#preview_step: The right to inspect scripts that will be executed with steps in the deployment plan.
report#view: The right to see all the reports. When granted, the UI will show the Reports tab. To be able to view the full details of an archived task, a user needs read permissions on both the environment and application.
controltask#execute: The right to execute control tasks on configuration items.
Local permissions
In XL Deploy, you can set local security permissions on repository nodes (such as Applications or Environments) and on directories in the repository.
When managing XL Deploy, you need to maintain regular backups of your repository so you can restore it in case of failure. You can simply back up the whole repository or use the CLI-based method for import/export. This method allows you to export the XL Deploy repository tree to a ZIP file that can be imported into the same or another XL Deploy server. The ZIP file contains all configuration item (CI) properties, including artifact files.
For example, you can use this feature to create CIs in a sandbox or test instance of XL Deploy and then import them into a production XL Deploy instance.
Export and import of all the permissions and roles that are applied either globally or on individual hierarchies is not supported out of the box. However, you can use the custom CLI script, Export/Import roles and permissions, which helps with both import and export of all roles/permissions in a JSON file.
Here’s how you can use it:
Download the raw file and save it as a python script.
Go to XL Deploy CLI client.
Copy the script under CLI_HOME/ext folder.
Start the CLI and connect to the target XL Deploy Server.
To export, use the following command: exportSecToFile(absoluteDirectorypath)
e.g., exportSecToFile("/user/myuser/home/") This will write security.json in that folder.
To import into a fresh instance, use the following command: importSecFromFile(absoluteFilepath)
e.g., importSecFromFile("/user/myuser/home/security.json")
NOTE: Make sure you’ve imported the infrastructure, environment, and other hierarchies first before using this script; otherwise you’ll receive an error if it can’t find a hierarchy to apply permissions to.
Continue mastering XL Deploy with our XL Deploy how-to page. It shows users all the tips and tricks they need to optimize their pipeline and start releasing software faster.
One of our customers was kind enough to share with us why they chose XL Deploy for Deployment Automation, rather than a well-known enterprise vendor who offered big discounts for their product. They described how XL Deploy has a number of crucial features that enterprises need in order to deploy software at scale:
Version and dependency management. Advanced dependency management is a key requirement for this customer’s teams. When trying to deploy a microservice, the tool needs to tell you which other services and which other versions must also be deployed.
Ability to control things through the Command Line Interface (CLI). These users like to control things at a very low level, and for them it’s all about automation. So having everything accessible through the CLI is a big win.
Ability to model everything. Another critical feature is the ability to model everything, from the infrastructure and environment, to the application itself and all the configuration data that goes with it.
With XL Deploy’s model-based approach changes are simple to make and propagate to all environments.
Agentless architecture. XL Deploy’s agentless architecture is a huge benefit to this customer as they do not like deploying agents to their 200 nodes!
Satellite module. The XL Deploy satellite module provides a reliable, scalable way to deploy to data centers all over the world.
Visibility into what’s deployed where. Finally, the team finds it crucial to have clear visibility into what’s deployed where. No more guessing about whether or not something has been deployed, whether it’s in the right place and whether the right things have been deployed along with it.
Before implementing XL Deploy, this customer also evaluated a workflow-based deployment automation tool from a large enterprise vendor. “We didn’t go too wide on our evaluation because most products didn’t meet our minimum requirements for the project,” said their CTO. “XL Deploy was the clear choice for our deployment automation solution because it contains a number of features that were critical for us. Even heavy discounts on the other tool didn’t outweigh some significant deficiencies. We made our decision based on value, not price.”
Martin Van Vliet, Vice President of Engineering for XebiaLabs.
Building software since the good old days of Apple ][, Martin van Vliet is a born developer. What began as a high school hobby, turned into a life-long love for creating technical solutions that help people solve real-life problems.
As Vice President of Engineering for XebiaLabs, Martin is passionate about using agile software development and Continuous Delivery to optimize the delivery of software. He’s also a dedicated coach and mentor to his team who enjoys helping them learn and grow in their careers. In this interview, he offers expert advice on how today’s developers can make sure they’re doing fabulous, innovative work that helps their customers, companies—and their own careers.
Martin, how do you see the role of software developers today?
First let me say that it’s a great time to be a developer! Developers today have the best jobs in the world. I see them as having a lot more fun and challenging things to do than ever before. They don’t just write code and hand it off to someone else to deliver to customers. They get to be at the center of shaping product strategy. They get to make sure that the features they work so hard to build make a huge difference in the lives of customers. They have the chance to work on projects that in the end really matter. To me, after the hands-on work of building code, that’s the best thing about creating software—having it affect other people in a really positive way.
What do you mean by working on projects that matter?
Things that matter are great features that help customers. Things that matter are products that catapult the business ahead of the competition. Another thing that matters is each developer’s satisfaction with the work they’re doing and their career growth. How can you grow if you’re not learning new things?
XebiaLabs developers enjoying an important company perk – free food!
Everyone, including developers, wants to look forward to coming into work every day. To do things that are really creative and make a difference to themselves and others. But you need the freedom to do creative, innovative things and that requires thinking strategically so you’re not bogged down solving problems that have already been solved. Day-to-day, it’s so easy for all of us to get distracted by things that don’t have much impact, right? Same thing goes for developers. I mean, developers are very technical people and they love to solve technical problems. But it can be really hard for them to find the time to think about whether it’s the right problem to be solving.
Can you say more about solving the right problems?
Sure can. I think the best way to answer that question is through an example. When I first started out as a developer, I was working for a company in Silicon Valley. One of the system admins built a CI server as a big Perl script. At the time there were no CI servers, so it was a useful problem to solve. It was building code, which is fun, but it also had a very important purpose because there were no alternatives to it. But now absolutely no one, not a single developer, would do that because there are Continuous Integration tools available that do that work for you. There are also tools you can use to help you automatically deploy your applications without having to worry about operational and maintenance issues. So developers today have more freedom to focus on solving technical problems—the right problems—the ones that require new and innovative ways of thinking and that get them excited to come to work every day.
How do you think XebiaLabs tools help developers?
By being their new best friend, of course! Seriously, our tools are developer friendly because while they automate a lot of boring tasks, they’re also very flexible. We use a model-based approach that allows you to standardize your processes but still lets you make adjustments where you want to. It’s more scientific in that it enables repeatability of processes, but you maintain control. In comparison, you have tools like Jenkins, and to a certain extent other open source tools, where just about everything is tweakable.
Now Jenkins, while we’re talking about it, is missing what we call “content.” Content is built-in knowledge of middleware or other tools in your environment that it needs to communicate with. Jenkins is a tool that can run scripts, but you have to supply the scripts to do stuff like copying files, restarting a server, doing a deployment, and on and on. That to me seems like a problem that’s already been solved. A tool like XL Deploy comes with plugins that have deep knowledge of the middleware they talk to. That means you can tell it what you want done, and XL Deploy knows how to do it. You simply don’t need to worry about it. Then, XL Release handles the pipeline orchestration so you don’t have to script it yourself. There’s no reason for you to do it—it’s boring, and it’s definitely not the impactful, fun work you could be doing.
And here’s another thing. Do developers really care about how software is installed on a production server or about orchestrating the whole testing of it in the pipeline? Usually developers want to build cool stuff the user would use. So we take out the drudgery. We take everything that’s needed to ensure the delivery of the software and get it running somewhere—that’s our job. Plus, you get rid of all the errors in deployment. If you’re trying to deliver something to your business or testing team, the worst thing that could happen is the software is ready but the installation fails. All that work, all that creative energy spent just to have it fail. That’s totally frustrating and just the kind of work you don’t need. With our products, you press a button and it works.
What can developers do when they’re freed up from boring tasks?
The good stuff! Like playing around with new features to figure out what’s truly engaging customers before you fully release the software. Developers get to be in the driver’s seat here, shaping product decisions, influencing strategy, and doing the technical work they love to do. Another thing developers can do is tackle scalability issues, like how to scale your software to Internet size. Now that’s what I call challenging! That’s work that matters and that you learn from.
Were you ever in a situation where you could have used products like the ones from XebiaLabs?
I’ll need to jump in the way-back machine for that one, but yes, I once worked for a Dutch national railway company. I worked on the software for the displays that showed train schedules and status. These displays are in every station in The Netherlands. When they first started to go live, a lot of bug fixes were needed. My team had a standby system whereby we could get a call—even in the middle of the night—if there was a failure because we had to get the displays back up and running again.
So there I am at like, 1am, half asleep and having to act fast to make the software work again. And on top of that, the team had to install it into an environment and worry about stuff like, did we have all the right settings? Is everything configured correctly? Did we forget to update something? If we’d had XL Deploy, it would have been like “here you go, we have a new fix for you,” and we would drag and drop it into an environment, and it would be live. We could have done what we needed to do and gone back to bed. Unfortunately, this is a scenario that too many developers can relate to. What I’m saying is that they don’t have to continue working this way.
Can you recommend any resources for developers to learn more about doing impactful work?
One of my favorite reads is a book by Edmund Lau called The Effective Engineer. Lau talks about the concept of leverage. How much effect or gain does a software engineer get versus how much effort they invest? If you have a lot of leverage, you might work on something for five minutes and it makes a huge difference. On the other hand, you could work on something for weeks with little leverage and it makes little or no difference. Lau says that using the right tooling makes for huge leverage because you’re not doing all sorts of unnecessary work to get a project going. This has made such a difference in how I think about developing software and how I motivate and mentor the people on my team, who are doing great work. I’m really proud of everything we’ve been able to accomplish together.
Last question—what advice would you offer developers starting out today?
I’d start with “be curious and learn as much as you can.” Today’s software development is almost always a polyglot environment, meaning that several different programming languages are used. Learning different programming paradigms, languages, frameworks and tools allows you to choose the right tool for the job and provides fresh insight into how to solve complex issues.
And of course, make use of all the wonderful tools and infrastructure out there today that can make your life as a developer easier. Use tools to do the grunt work and focus on the high value, high impact stuff.
As enterprises grow and scale to meet market demand, they’re finding it vital to move away from monolithic applications. Instead, a great number of organizations are transitioning to development architectures with many small components that allow them to release software much more quickly.
Some IT groups, for example, are now using things like modules and microservices for development. While these approaches can speed the development process, they can also introduce new complexities as companies try to scale. That’s because they often split applications into many moving parts that depend on each other. Application dependencies can create problems such as:
Multiple components deploying one after the other, which can make deployments slow.
Deployments happening in the wrong order, for example, a package could be deployed in an environment before a component it depends on, leading to unstable behavior.
Lack of flexibility in a team’s deployment plan to adapt to their use cases.
So, although approaches like microservices may offer a short-term remedy, eventually the complexity of application dependencies will exceed the capabilities of those approaches to manage them. This complexity will only be compounded by increasing market demand for more frequent releases, delivered ever faster.
New and Improved Dependency Management
When it comes to deployments, three things matter:
High availability—Applications/services must always be available. Upgrades can cause downtime to services and interfere with Service Level Agreements (SLAs).
Short deployment times—Deployment time must be reduced. Depending on the number of deployments that happen in a day, lost time can multiply.
Low risk—Failures and other deployment problems present great risk to the organization. The higher the risk of failure, the higher the cost.
XL Deploy provides a wide range of features to help unravel the complexity of deploying multiple packages, ensuring that your applications are highly available, deployed quickly and are at low risk for failure. The latest version of XL Deploy, version 6.0, offers enhancements to its Dependencies feature that allow IT teams to effectively manage application dependencies while meeting the deployment criteria above.
Enhancements to Dependency Orchestration
XL Deploy uses “deployment orchestrators” to handle dependency management so IT teams can optimize deployment plans and deploy more quickly. Deployment orchestrators give you the flexibility to deploy applications as follows:
Sequentially—Work in sequence to ensure order.
In parallel—Work in parallel to reduce step times.
By group—Group tasks under a common theme, for example, by metadata or a server.
XL Deploy v6.0 adds two new orchestrators for application dependencies:
Sequential-by-dependency—Ensures dependencies are deployed in reverse topological order.
Parallel-by-dependency—Ensures that multiple dependency packages can be deployed in parallel.
XL Deploy deploys complex applications in parallel and manages application dependencies.
These new orchestrators offer additional capabilities for application dependency management. For example, for an application that is installed on multiple servers, IT teams can deploy to each container/host in parallel, and, within each group, deploy the dependencies in the correct order and in parallel. This not only ensures that all servers in the environment are updated at the same time, it also optimizes the deployment of the packages within them, so deployments complete more quickly.
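As an illustration of the idea behind these orchestrators (a sketch, not XL Deploy’s actual implementation), the Python snippet below groups a hypothetical set of packages into deployment “waves”: every package’s dependencies land in an earlier wave (the sequential-by-dependency guarantee), and packages within one wave have no mutual dependencies, so they can be deployed in parallel (the parallel-by-dependency guarantee). The package names and dependency graph are made up.

```python
# Hypothetical dependency graph: each package maps to the packages it
# depends on, which must be deployed before it. Names are illustrative.
deps = {
    "frontend": ["auth-service", "catalog-service"],
    "auth-service": ["shared-db"],
    "catalog-service": ["shared-db"],
    "shared-db": [],
}

def deployment_waves(deps):
    """Group packages into waves: each package's dependencies sit in an
    earlier wave; packages within one wave can be deployed in parallel."""
    remaining = dict(deps)
    deployed = set()
    waves = []
    while remaining:
        # Packages whose dependencies have all been deployed already.
        ready = sorted(p for p, d in remaining.items()
                       if all(dep in deployed for dep in d))
        if not ready:
            raise ValueError("circular dependency detected")
        waves.append(ready)
        deployed.update(ready)
        for p in ready:
            del remaining[p]
    return waves

print(deployment_waves(deps))
# [['shared-db'], ['auth-service', 'catalog-service'], ['frontend']]
```

Flattening the waves gives a valid sequential-by-dependency order; deploying each wave’s members concurrently is the parallel-by-dependency case.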
Dependency Resolution: More Flexible than Ever
XL Deploy v6.0 adds greater flexibility to its dependency resolution strategy. Users can now set rules to determine whether a particular version of a dependent package will be updated. For example, if an application only needs v2.0 of a component, and v2.0 is already installed, then v2.5 won’t be deployed even if it’s available. XL Deploy now offers two dependency resolution options:
Latest—Always use the latest version of a package.
Existing—Use what already exists on the environment, if it is valid, to save deployment time.
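The two options can be sketched as a small resolution function. This is only an illustration of the rule described above, not XL Deploy’s internal logic; versions are simplified to tuples and `required` is a hypothetical minimum-version constraint.

```python
def resolve(required, installed, available, strategy):
    """Pick which version of a dependent package to deploy.

    required  -- minimum version the application needs, e.g. (2, 0)
    installed -- version already in the environment, or None
    available -- candidate versions in the repository
    strategy  -- "latest" or "existing"
    """
    if strategy == "existing" and installed is not None and installed >= required:
        return installed  # a valid version is already there: skip the deployment
    # "latest": take the newest available version that satisfies the requirement.
    return max(v for v in available if v >= required)

# The example from the text: v2.0 needed, v2.0 installed, v2.5 available.
print(resolve((2, 0), (2, 0), [(2, 0), (2, 5)], "existing"))  # (2, 0)
print(resolve((2, 0), (2, 0), [(2, 0), (2, 5)], "latest"))    # (2, 5)
```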
The version 6.0 release builds on the already strong Dependency Management functionality of XL Deploy.
Upgrade to XL Deploy 6.0 Today
The latest enhancements to XL Deploy 6.0 offer optimized application dependency capabilities to provide a superior experience for enterprise IT teams, including the ability to:
Scale complex applications that have a variety of dependencies.
Handle complex dependencies at higher speed.
Ensure tasks happen in the right order while still deploying software more quickly.
Improve performance when generating deployment plans that have application dependencies.
Produce reliable deployments with parallel-by-dependency and sequential-by-dependency orchestrators.
Get faster and more flexible deployments by combining the new orchestrators with existing parallel orchestrators.
Update application packages only when necessary (rather than updating packages that aren’t required).
Deploy fewer components, saving time and resources.
Shared values, if you’re doing it right, doom and gloom if you’re not.
Previously, system administrators frequently were responsible for security: applying patches, building firewalls, enforcing security practices. Work often involved a lot of exciting manual labor, unique tooling, and esoteric techniques.
Rare, esoteric techniques
Here Come The DevOps
As time passed, both security operations and maintenance operations have evolved. The DevOps movement emerged as an attempt to build a bridge between the people who write code, the people who maintain the infrastructure to run it, and the people who make the business decisions. These changes have put emphasis on a new set of techniques and values, which can be either beneficial or problematic for the security posture.
New exciting techniques
Through the years, in one way or another, I’ve been responsible for building, running, and enhancing product infrastructures, either for the companies I worked for or for their clients. Many of the things emerging from DevOps seem pretty familiar if looked at from the right angle. Working for the cryptographic engineering company Cossack Labs, where demands on development security are somewhat higher than the usual “hey, we’ve got payment processing data and potentially millions in regulatory fines”, I’ve had the opportunity to rethink many of the ideas behind building secure processes in a modern way. Somehow, many DevOps concepts fit perfectly into the picture.
This post is an inquiry into mental models that make DevOps beneficial for security, instead of being detrimental.
Looked at through its values, DevOps is all about business performance through technical innovation:
Improving value delivery to the customer: delivering continuously, reliably, predictably, via automated scenarios.
Improving change management and value creation: using continuous integration and testing everything, programmatic infrastructure, testable user experience.
In the end, the business is able to charge clients faster and more frequently, simply because more value gets delivered faster and with increasing consistency. All of this is achieved with certain processes and tools. Used blindly, they can do a lot of damage.
Driving Blindly
Security is an invisible showstopper for all the aforementioned values if done wrong:
Real-world impact: Customer data leaks and service disruptions ruin any value you aim to deliver faster. Lawsuits and regulatory fines will not just affect the developers; in many cases they will put the affected companies out of business.
More causes for damaging impact: Quicker changes, done while impairing security and reliability, will introduce more leaks, more service disruptions, and indirect damage as well: your systems will be used to damage others.
Even fixing things quickly is a threat: Failure recovery, which puts “time to get back in production” first, without security considerations, is a frequent path to introduce security inconsistencies into a system.
At the same time, security-conscious DevOps could significantly increase the security posture:
Improve value reliability: Optimize for reliability, repeatability, and predictability of value delivery, and the speed will organically follow. Use programmatic and consistent security controls.
Improve value verification: Optimize change management for a deterministic state of the code, where that state includes a security status that is verified automatically.
Prepare for the bad times: Aggregate metrics for incident response and backup controls for threat mitigation.
Whether your company already has a security policy and security-aware developers and operators, or you’re just looking forward to getting started, DevOps tooling can bring a lot of process and infrastructure safety from day zero, then help evolve it into a consistent security posture.
Learn how release pipeline orchestration solutions help even the most audited of enterprises efficiently manage and optimize their software release pipelines.
Transforming DevOps Practices for Security
Even though tooling is just one part of the DevOps phenomenon, it’s a rather important part.
From the viewpoint of traditional secure development practices, many DevOps approaches are not just “compatible” — they’re a perfect fit. Let’s take a look.
Even though a bit chaotic, DevOps methodologies can be put to a good use!
Test everything!
Functional testing as Dynamic Application Security Testing: First of all, you want to know that the security controls within your application actually work.
Exercising security functions in functional tests, and running specific non-functional tests against known vulnerabilities, ensures that security controls work as expected on every iteration of continuous integration.
Things to try:
BDD-security suite as the testing framework for functional security testing, infrastructure security testing, and application security testing. Integrates with Jenkins!
Gauntlt, a set of Ruby hooks to security tools that integrate into your CI infrastructure.
OWASP ZAP and the OWASP Zapper Jenkins plugin: an automated attack proxy for testing some common attacks.
Mittn, F-Secure’s security testing tooling for CI.
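As a minimal, tool-agnostic sketch of what a functional security test can look like, the snippet below checks a service’s HTTP response headers for a few common security controls. The required header set and the sample response are illustrative, not a complete policy; in a real pipeline the headers would come from an actual request made during CI.

```python
# Required security headers and a validity check for each. Illustrative only;
# a real policy would be broader and stricter.
REQUIRED = {
    "Strict-Transport-Security": lambda v: "max-age=" in v,
    "X-Content-Type-Options": lambda v: v.lower() == "nosniff",
    "Content-Security-Policy": lambda v: len(v) > 0,
}

def missing_security_headers(headers):
    """Return names of required headers that are absent or invalid."""
    lower = {k.lower(): v for k, v in headers.items()}
    return sorted(
        name for name, valid in REQUIRED.items()
        if name.lower() not in lower or not valid(lower[name.lower()])
    )

# A sample (hypothetical) response: two of the three controls are missing.
sample = {"Content-Type": "text/html", "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(sample))
# ['Content-Security-Policy', 'Strict-Transport-Security']
```

A CI step would fail the build whenever the returned list is non-empty, making the check blocking.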
Source code analysis as Static Application Security Testing: it’s worth investing effort in making sure that detectable vulnerabilities get detected automatically.
Scanning sources with static code analysis tools (static application security tests) for possibly vulnerable code during CI iterations reduces vulnerability risk.
Blocking or parallel? Obviously, when developing for security, security tests should be blocking, period.
Collection of audit and operational data
The metrics DevOps teams use to monitor and tune development and production environments are important context for security-relevant audit logs. Gathered together in a secure way, they can make intrusion detection and incident response stellar.
Automatic configurations 101
Over the years, I’ve used automatic configuration on several occasions to build firewalls and access control, and while some of it was taxing for operations, it’s the simplest way to get safe defaults, consistent firewalls, and some confidence in your access control system.
Moreover, positive security for something like an application firewall means enumerating every possible scenario of moving around the HTTP endpoint graph, parameters and all… it’s easy to make manual mistakes, and easy for insiders to hide a channel to leak data out.
Generating firewall rules programmatically, if done right, is a great nerve saver.
Orchestrated automatic configurations
Orchestrating multiple automatic configuration tools is where the magic starts: imagine you know the role and address of every node in the system. Now imagine you can whitelist access based on that role model, and have a positively modeled iptables configuration instantly ruling out a significant portion of threats.
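A toy sketch of that idea in Python: derive an iptables whitelist from a declarative role model. The hosts, roles, and allowed flows here are hypothetical, and real rules would carry more detail (interfaces, connection state, chains per host), but the point stands: the ruleset is generated from one source of truth instead of being edited by hand.

```python
# Hypothetical role model: which host plays which role.
hosts = {
    "10.0.1.10": "web",
    "10.0.1.11": "web",
    "10.0.2.10": "db",
}
# Which role may reach which role, and on which TCP port.
# These rules would be installed on the hosts of the destination role.
allowed_flows = [("web", "db", 5432)]

def iptables_rules(hosts, flows):
    """Generate a positive (whitelist) iptables ruleset from the role model."""
    by_role = {}
    for ip, role in hosts.items():
        by_role.setdefault(role, []).append(ip)
    rules = [
        f"-A INPUT -p tcp -s {src} --dport {port} -j ACCEPT"
        for src_role, dst_role, port in flows
        for src in by_role.get(src_role, [])
    ]
    rules.append("-A INPUT -j DROP")  # default deny closes everything else
    return rules

for rule in iptables_rules(hosts, allowed_flows):
    print(rule)
```

Adding a node is then a one-line change to the role model, and the firewall configuration follows automatically and consistently everywhere.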
Know thy limits
Investing in automation shouldn’t be religious. Preventing attacks and managing ongoing incidents is still a decision-making process of a human being under stress, pretty much like the OODA loop in the military methodology.
Things that can be automated should be. The rest should be orchestrated so that the humans who actually make the important decisions have not only all the relevant metrics and audit logs, but also the means to act quickly, efficiently, and across today’s vast infrastructures.
Getting Serious and Boring
Well-orchestrated DevOps environment
After your infrastructural tools have been adjusted, you’re already far ahead of many development and production environments. However, whatever amount of labor you’ve put into it, it’s worthless if there still is a simple way to mount an attack against your infrastructure. Security is an asymmetric game: you have to defend against every threat, while attackers have to find just one weakness to succeed.
And so, security policy and procedures are still necessary. Having built parts of the necessary infrastructure, the next steps will be simpler: understand the risk model and security objectives, audit the current state, define gaps in processes and techniques, then come up with a development plan and, finally, go back to engineering with an all-encompassing construction plan.
Conclusion
If a DevOps practitioner sees that DevOps is about enhancing business value, and that business value includes business continuity and security — suddenly, the tooling falls into place and helps adjacent operational needs.
If a DevOps practitioner is just excited by tools and fancy lingo — it’s a wreck in the making. Add security to the equation, and the wreck might be more than just inefficient development and maintenance: all the typical scary tales come true.
Organizations across every industry are upgrading core technology platforms to accelerate software delivery. In addition, heavily regulated industries like Insurance demand strict practices to ensure compliance requirements are met. Delivering software to customers quickly while overcoming challenges presented by upgraded solutions is critical. Today, leading insurance providers such as NJM Insurance rely on software to manage the information of their policyholders… and their credibility and reputation are on the line if that software fails.
As NJM upgraded core technology platforms to stay competitive, they found their volume and cadence of work increasing. They quickly reached their limits as they tried to scale their entirely manual software deployment processes. NJM’s number of environments had increased to more than 100. At the same time, NJM experienced considerable growth in the number of deployments necessary (1,000+ per month, all handled manually). To deal with this rising complexity, NJM turned to DevOps, and they set their sights first on Deployment Automation.
The Solution
After analyzing several tools, NJM Insurance ultimately chose the XebiaLabs DevOps Platform to handle the increasing complexity in their deployment processes and environments. Within 6 months of adopting XL Deploy, they saw a staggering 30-50% increase in deployment speed. Their new automated deployments now take minutes instead of days. NJM was handling 1,000 deployments per month manually and has increased to more than 1,500 by automating with XL Deploy. At the same time, deployment reliability and standardization across environments increased substantially as well.
Learn how release pipeline orchestration solutions help even the largest of enterprises efficiently manage and optimize their software release pipelines.
“There is a lot more going on in the new world than the old world. We didn’t need another problem. We needed a solution, and that was the XebiaLabs DevOps Platform,” stated Vito Iannuzzelli, the Assistant VP of IT who was leading the DevOps charge at NJM.
“I wanted to provide a single system of record of what happens from the minute we request a deployment until the minute it is completed—all in a repeatable way with visibility, control, and proper record keeping. With XL Deploy, we have introduced a highly visible, zero-touch process that is fully traceable and auditable.”
“XebiaLabs filled a significant gap and enabled the need for consistent, reliable, and repeatable processes. The entire deployment process has become streamlined and automated, which reduces the impact on our team’s limited resources. XL Deploy provides easy access to the evidence required to satisfy audit controls as applications move from development through production. It ensures all steps are executed in the right order and tracked consistently.”
With the future in mind, Vito and the NJM team are steering their focus towards their entire release pipeline. “We need a tool that automates our entire pipeline, not just deployments. XL Release from XebiaLabs provides end-to-end visibility, reporting, and control across all our tools. We expect XL Release to deliver the same benefits across the entire software release pipeline that we’ve seen for deployments from XL Deploy.” With the XL Release implementation about to begin, it will be the catalyst for ongoing success around software delivery, auditability and compliance at NJM.
“As we continue to move applications to the XebiaLabs DevOps Platform, total cost of ownership continues to go down as productivity increases. If you’re looking to improve, accelerate and streamline your end-to-end software delivery—and enforce compliance steps in a repeatable, auditable process—you want XebiaLabs.”
For more details on NJM’s story and results, you can read the full case study here.
Kommunal Landspensjonskasse (KLP) is Norway’s largest insurance company, delivering safe and competitive financial and insurance services to the public sector, enterprises associated with the public sector and their employees.
KLP’s completely manual deployment processes demanded a significant amount of time to manage software deployments across geographically dispersed teams. With over 160 enterprise applications and an average of 1,600 deployments per month, deploying, un-deploying and releasing new versions was both difficult and time consuming. KLP knew they needed to streamline their process.
After evaluating XL Deploy, KLP chose XebiaLabs because it met their requirements for an easy-to-use tool with drag-and-drop features, a scalable architecture, and simple integration with their environment. KLP now delivers on average 1,600 deployments per month for over 160 applications across 19 environments using XL Deploy. Deployments now take between 15 and 30 minutes, an average time savings of 1.25 hours per deployment.
Results
No more developer time spent on deployments
Over an 80% improvement in production deployment speed
Immediate realization of ROI as a result of moving from manual to automated deployments
Increase in job satisfaction for Operations team members, who no longer work off-hours
Fast and easy maintenance, deployment and un-deployment of new software
No more deployment bottlenecks
Significantly reduced deployment-related errors
Deployment success rate of more than 90%, with any failures due to human error
When it comes to application deployment, “simple ‘n easy” workflows seem quite appealing. However, they suffer from many problems when applied to enterprise deployment automation. Learn why some users choose to use workflows, the drawbacks they struggle with and the top 5 reasons XebiaLabs doesn’t use them.
“We had a completely manual deployment process prior to the XebiaLabs DevOps Platform, so we saw immediate ROI. As soon as we implemented XebiaLabs, our deployment process became completely automated and no longer required a team of people pushing each deployment and updating scripts. Now, we don’t even think about deployments – they just happen. And more than 90% are successful,” said Rune Hellem, Senior Operations Officer, KLP.