Channel: Deployment Automation – XebiaLabs

11 Black Holes of DevOps: Don’t Get Lost in Space


As you scale to create an enterprise DevOps process that works effectively across hundreds of applications and thousands of people, you’ll inevitably discover that things don’t work the same way they did back on the developer’s laptop with a single team, or a single environment…

Is your organization kicking off its DevOps transformation? Aiming for wild success? No doubt you’re headed for all sorts of adventures! If you’re like many organizations, you’re setting off on the first full-blown leg of your DevOps journey and your crew’s excitement is palpable. You’ve had a successful DevOps pilot with a team or two, and now it’s time to reach for the stars. Now you’re going to implement DevOps across your enterprise… and if you’re like many intrepid explorers who have gone before you, you have to plan for hundreds or even thousands of people, hundreds or thousands of applications, and thousands of deployments per month.

And like those previous pioneers, you’ve probably written a lot of scripts to make your pilot projects work—scripts that create infrastructure, scripts that manage deployments, maybe even scripts that try to orchestrate release pipelines or string together your DevOps toolchain.

But as you stretch to implement DevOps at enterprise scale, scripting DevOps can start to consume your teams completely, sucking them into black holes of perpetual maintenance and unmanageability, never to be seen or heard from again. Or at least never to reach the efficiency and productivity your business demands.

As you embark on your enterprise DevOps journey, you’ll find you have many new complexities to consider. Here are 11 core DevOps requirements that turn into black holes when you try to write scripts to meet these needs:

  1. Deployment plans that can span multiple stages and environments
  2. Release orchestration and dependency management that can handle complex application architectures
  3. Software delivery pipelines that support advanced release and deployment patterns
  4. Intelligent, automated rollback from failures
  5. Standardized processes that scale DevOps across your organization
  6. Visibility into your team’s real-world release processes, covering both manual and automated tasks
  7. Collaboration with stakeholders who aren’t script-savvy
  8. Reports that reflect the end-to-end release pipeline
  9. Proactive assessment and mitigation of release risks
  10. Automatic collection of compliance data as a built-in part of the release process
  11. Governance to enforce separation of duties and enterprise access control

As you’ll discover on your journey, DevOps processes that are simple for one project and one team grow drastically in complexity as you scale them across the organization, seek to meet compliance requirements, and make plans for continuous improvement. Scripting release and deployment instructions may work at small scale, but scripts become prohibitively expensive to create and nearly impossible to maintain as your DevOps initiative grows.

If any of these “black hole” requirements are core parts of your DevOps “spacescape,” don’t fall into the trap of scripting them as you scale… or you’ll watch helplessly as your precious resources get sucked into the void!

Want the full details on these 11 black holes and how you can vector around them? This new white paper explains it all!



The post 11 Black Holes of DevOps: Don’t Get Lost in Space appeared first on XebiaLabs Blog.


Why an Enterprise DevOps Framework is Critical for Scalable Container Migration


Today’s forward-thinking companies are investigating container technology as a way to deliver software faster. But most implementations and typical container tools focus primarily on the technology of creating and running containers. Now, as IT teams have learned how to work with containers, companies are challenged with how to run containers at scale and standardize and manage release processes across hundreds of applications.

The ephemeral and disposable nature of containers requires significant effort to keep track of rapidly changing container deployments, manage dependencies between multiple microservices, and enforce security, audit, and compliance processes.

Teams initially resort to writing custom code and scripts to address these challenges. Unfortunately, these scripts are unique to each container environment and instance, so there is no standardization across teams, environments, or stages in a release. Scripting enterprise deployments quickly becomes too expensive and impractical at scale.

Enterprise Framework for Container Use at Scale

As companies expand and scale their use of containers to deploy applications, they need a framework in place to provide mission-critical, enterprise-focused capabilities. This framework should enable these companies to:

  • Create technology-agnostic processes – Design standard and repeatable processes that work for hybrid environments
  • Manage application complexity – Manage and orchestrate complex application release processes and dependencies between microservices
  • Incorporate IT governance automatically – Maintain infrastructure to address compliance, control, security, reporting, and audit requirements
  • Get code-to-production visibility – Provide real-time visibility into all aspects of release processes and components, no matter where they live

 

FREE WHITE PAPER

Enterprise Software Delivery in the Age of Containers

Containers can help simplify and create more agile software delivery environments. But retaining this efficiency becomes difficult as deployments span multiple applications and environments. Download this free whitepaper to learn where containers fall short, and how Release Orchestration and Deployment Automation bridge the gap between the promise of containers and the realities of enterprise application delivery.

 

XebiaLabs DevOps Platform: The Framework for Large-scale Container Deployments

The XebiaLabs DevOps Platform offers orchestration, analytics, and deployment automation functions that are specifically designed to complement container infrastructures, helping companies manage applications deployed in containers at speed and scale. Critical capabilities include:

  • Standardization, automation, and control of complex software release pipelines, deployment processes, and configurations
  • Dependency management between applications and between release processes, and release orchestration across the complete Continuous Delivery pipeline
  • Complete visibility into the software delivery and deployment status across all environments
  • Compliance, security, reporting, governance, and audit trail capture enforced throughout the release process
  • Hybrid deployments that are managed across a mixture of containers, VMs, and traditional environments
  • Release and deployment information that is easily accessible across all teams, both technical and non-technical

XebiaLabs offers deep integrations with most common container tools and related PaaS platforms, including Docker, Docker Compose, Docker Enterprise, Kubernetes, Google Container Engine (GKE), Red Hat OpenShift, Terraform, Cloud Foundry, Helm, Amazon ECS, and Microsoft Azure Container Instances.

Rob Stroud, Chief Product Officer, XebiaLabs, and former Principal Analyst for Forrester Research

“As container technology usage grows beyond the sandbox, companies need to find ways to take advantage of the power of containers without being held back by unscalable, manual processes developed for one-off projects.

Using a holistic enterprise framework will accelerate software development and delivery, while reducing risk and providing deep insights and guidance. This structured approach allows organizations to focus on creating applications that differentiate the business and to deliver true business value rather than writing ‘plumbing code’ that is expensive, introduces risk, and is not scalable.”


The post Why an Enterprise DevOps Framework is Critical for Scalable Container Migration appeared first on XebiaLabs Blog.

Stay Out of the Rain, Episode 2: Adapting to the world of cloud deployments


“Stay Out of the Rain” is a series of essential insights for modern companies looking to release software faster and with higher quality, across ever-changing and complex environments. Written by XebiaLabs Chief Product Officer, former Forrester Research Principal Analyst, and long-time software guru Robert Stroud, this series lays down a clear path of DevOps best practices for moving to the cloud, flags the puddles to avoid, and shares secret shortcuts to get you back on track when the inevitable thunderstorm strikes. Read on and let Rob teach you how to stay out of the rain!

Cloud adoption is accelerating as it becomes increasingly popular with DevOps teams. The cloud offers access to affordable, flexible IT resources that application developers can use to speed up development, often in conjunction with automated delivery and deployment pipelines. According to LogicMonitor’s Cloud Vision 2020: The Future of the Cloud study, 83% of enterprise workloads will be in the cloud by 2020, with “excelling at DevOps” among the most common driving factors (58%), behind digital transformation and IT agility (as reported in “83% Of Enterprise Workloads Will Be In The Cloud By 2020” by Louis Columbus).

Today, many organizations are taking a cloud-first approach to building new applications. Empowering DevOps teams to use cloud resources is a smart way to facilitate process improvements and foster innovation in both application development and operations. However, deploying applications to Production-ready cloud environments at enterprise scale poses some unique—and a few not-so-unique—challenges for DevOps teams.

1. Scripting cloud deployments doesn’t scale

Teams start out small, developing one or two applications for the cloud as an experiment. At this small scale, it’s natural for them to try to automate deployments by writing scripts as they figure out what’s needed to get an application up and running in the cloud.

But as the organization scales and additional teams start transitioning more complex applications to the cloud, they quickly learn that scripting deployments doesn’t scale. Scripts that work for one application or set of services rarely work for others. More often than not, each development team writes its own scripts, leading to a proliferation of processes and a complete lack of standards.

The challenge becomes increasingly complex as the organization grows beyond a simple setup where they only use one cloud provider’s Infrastructure as a Service (IaaS) product. Teams often need to adopt, integrate, and manage multiple cloud services to actually achieve a working application. And at enterprise scale, it’s common to use multiple cloud providers or to adopt a hybrid setup that combines public and private cloud infrastructure.

Scripting, although relatively simple for one-off tasks, erodes development team productivity. As deployments grow in complexity and scale, teams increasingly spend time writing deployment scripts, losing time that could be spent developing value-adding application features.

This problem doesn’t disappear over time; scripts require constant updating as applications grow, cloud APIs change, and cloud providers roll out new services. This perpetual evolution leads to more technical considerations and complications to deal with as teams try to keep cloud deployments manageable.

Rollbacks?

“How do we handle rollbacks?” isn’t a question that’s automatically answered by moving to the cloud. Application deployments still occasionally fail, and you have to be prepared to recover in a fast, automated way—which isn’t something that you can achieve with handwritten scripts.

2. You still have to manage dependencies

Dependencies are a fact of life, whether that means dependencies between applications or microservices, between containers or other virtual machines, or between parts of the infrastructure located in the cloud or on-premises.

Managing dependencies when applications are deployed to different types of infrastructure poses a new challenge: how do you ensure that the right versions of applications are deployed to the right environments, at the right time? Are the versions of microservices in sync with each other? Will your current dependency management strategies work when those environments are cloud-based or a hybrid of cloud and on-premise? And how do you manage dependencies over time as applications and infrastructure change and grow?
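The version-sync questions above can be made concrete as a pre-deployment check. The sketch below is purely illustrative (the function and data shapes are invented for this example, not a real product API): each service declares the peer versions it requires, and deployment proceeds only if the target environment satisfies every constraint.

```python
# Hypothetical pre-deployment dependency check. Each service declares
# the exact peer versions it needs; we compare that against what is
# actually deployed in the target environment.

def dependencies_satisfied(requirements, deployed):
    """requirements: {service: required_version}
    deployed:     {service: currently_deployed_version}
    Returns (ok, missing) where missing maps each unsatisfied service
    to the version that was required."""
    missing = {svc: ver for svc, ver in requirements.items()
               if deployed.get(svc) != ver}
    return (len(missing) == 0, missing)
```

A release gate would run this check per environment and block the deployment (or trigger the deployment of the missing dependency first) when `ok` is false.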

WHITE PAPER

Enterprise Software Delivery in the Age of Containers

How can enterprises take advantage of the benefits of container technology without creating more work for their teams? Download this free whitepaper to learn where containers fall short and how Release Orchestration and Deployment Automation tools can bridge the gap between the promise of containers and the realities of complex enterprise application delivery.

 

3. Provisioning cloud-based environments takes time

Cloud computing promises faster, easier access to the infrastructure that you need to run applications: no more ordering servers, installing them in racks, and waiting for a system administrator to install an OS and other software. But many organizations implement manual reviews and approvals for cloud usage in an attempt to ensure security checks are in place or simply to manage costs.

DevOps teams can’t move faster if they’re hindered by cumbersome, time-consuming approval processes before they can spin up and start using new cloud environments. It’s more efficient to automate cloud instance management so that DevOps teams can optimize their cloud usage. And with the right cloud management solution, you can ensure that automated security checks are a built-in, immutable part of the process.

Enjoy sunny skies with Continuous Delivery and Deployment

The cloud doesn’t have to rain on your Continuous Delivery and Continuous Deployment efforts; you can take control of deployment processes while rapidly testing and releasing applications on the cloud. Release orchestration and deployment automation can enable you to accelerate delivery while delivering enterprise-level scalability, reusability, security, testing, and standardization, so you can optimize the benefits that cloud-based infrastructure has to offer.

Escape the scripting trap with model-based deployments

To escape the scripting trap, you need to future-proof cloud-based application delivery by leveraging a declarative, model-based solution for automating deployments. A model-based approach decouples applications from environments, so you don’t have to manually script each deployment step in every single workflow for every single application.

You specify the components that make up your application and the environment where you want to deploy them—whether it’s on-premises or in the cloud. Model-based deployment automation can determine the steps that are needed to deploy the application and run them in the right order, every time.
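As a rough illustration of the idea (all names here are hypothetical, not any specific product's API), a model-based engine derives an ordered step list from a declarative description of the application and its target environment, instead of you scripting those steps by hand:

```python
# Illustrative model-based deployment planner: the application model says
# WHAT exists, the environment model says WHERE each component type runs,
# and the engine derives the ordered steps.

def plan_deployment(app, environment):
    """Derive an ordered step list from declarative app/env models."""
    steps = []
    for component in app["components"]:
        target = environment["targets"][component["type"]]
        steps.append(f"stop {component['name']} on {target}")
        steps.append(f"copy {component['name']}:{app['version']} to {target}")
        steps.append(f"start {component['name']} on {target}")
    return steps

app = {"version": "1.2", "components": [
    {"name": "web", "type": "webserver"},
    {"name": "api", "type": "appserver"},
]}
env = {"targets": {"webserver": "nginx-prod", "appserver": "tomcat-prod"}}
```

The same application model deploys to a different environment simply by swapping the `env` mapping; no deployment script changes.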

Roll back application deployments intelligently

A significant benefit of cloud-based infrastructure is the ability to destroy cloud instances quickly and easily. In the event of a failed deployment, the ability to rapidly restore the application to a working state instead of completely tearing down and recreating the environment is critical to business success.

To fully ensure application stability, you need a deployment automation solution that supports intelligent rollback to restore the application to a usable state, without impacting other applications that are running in the same environment. Additionally, the rollback needs to work correctly, no matter how much of the deployment was executed successfully.
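The "undo only what actually ran" behavior can be sketched in a few lines, assuming the deployment engine records each completed step together with its inverse. This is an illustrative toy, not XebiaLabs' actual implementation:

```python
# Sketch of intelligent rollback: steps are (do, undo) pairs, and on
# failure only the steps that completed are undone, in reverse order,
# so the environment returns to its prior state however far the
# deployment got.

def run_with_rollback(steps):
    """steps: list of (do, undo) callables. Returns True on success,
    False after a rollback."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):  # undo only what actually ran
            undo()
        return False
    return True
```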

Centralize dependency management across applications

To efficiently manage application, deployment, and infrastructure dependencies across the enterprise, you need to centralize them in a single place that all teams can access. This structure ensures that each team can maintain the dependency information that is related to their area of responsibility.

An integrated release orchestration and deployment automation platform will identify application dependencies and determine the state of the target environment during each stage of the release and deployment pipeline. This type of infrastructure for your release pipeline provides a single source of truth, ensuring all technical dependencies are in place when you upgrade applications that are running on-premises or in the cloud.

Build cloud environment provisioning into your pipelines

It’s important to enable DevOps teams to provision cloud-based infrastructure automatically on demand, so that provisioning becomes an automated, repeatable, auditable part of the software delivery lifecycle instead of a cumbersome, time-consuming manual process. The right deployment automation platform can also automatically tear down cloud instances that are no longer needed, so you don’t pay for cloud resources that aren’t being used.
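One way to picture build-it-into-the-pipeline provisioning is as an ephemeral resource with guaranteed teardown. In this hypothetical sketch, `provision` and `destroy` stand in for whatever your cloud provider's API offers; the point is that teardown is automatic even when a pipeline step fails, so unused instances never linger:

```python
import contextlib

# Illustrative ephemeral-environment helper: provision on entry,
# guaranteed teardown on exit (including when the body raises).

@contextlib.contextmanager
def ephemeral_environment(provision, destroy, spec):
    env = provision(spec)   # e.g. create instances, networks, storage
    try:
        yield env
    finally:
        destroy(env)        # always runs, even on test failure
```

A pipeline step would then read `with ephemeral_environment(provision, destroy, spec) as env: run_tests(env)`, paying for the environment only while the tests run.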

The XebiaLabs DevOps Platform protects you from the storm

The XebiaLabs DevOps Platform is recognized by top industry analysts as a leading Application Release Automation solution, providing both release orchestration and deployment automation for enterprise software delivery pipelines. It solves the unique challenges of cloud deployments in the enterprise by:

  • Standardizing deployments and automating rollback for maximum reusability without manual scripting
  • Centralizing configuration and dependency management for all applications, whether they’re running on-premises or in the cloud
  • Making it easy to build automated cloud environment provisioning into development, testing, release, and deployment processes across the enterprise


The post Stay Out of the Rain, Episode 2: Adapting to the world of cloud deployments appeared first on XebiaLabs Blog.

JHipster Q&A: Guiding Developers on the Journey to Building Complex Apps


If you’re reading this blog, there’s a good chance you’re well versed in developing applications on Java. There’s also a pretty decent chance that you’re at least familiar with JHipster, one of the most popular open source development platforms for generating, developing, and deploying complex applications using Angular/React and the Spring Framework.

In just a few short years, JHipster has become immensely popular in the Java community for easing some of the complexities of building full stack applications with Spring and Angular.

Two of our developers at XebiaLabs are active participants in the JHipster community. Deepu Kesavapillai Sasidharan, Senior Product Developer, is the co-lead of the project, and Sendil Kumar, Full Stack Developer, is a core member of the JHipster team. Having recognized that getting started with JHipster can be a daunting task, Deepu and Sendil set out to give the open source community some guidance.

Full Stack Development with JHipster is the definitive book for guiding developers on their journey to quickly building modern, complex applications with JHipster. With the release of JHipster 5 upon us, we asked Sendil and Deepu to tell us what motivated them to write the book and how it changed their views on software development.

Who is the book for and why does it matter to them?

Sendil: With the wide spectrum of options that JHipster provides, it is difficult for users to understand how to effectively use it. With this book, we tried to help readers embark on a journey, starting with a simple monolith application and driving towards building more complex microservices applications.

Deepu: A lot of developers will benefit from reading the book. Actually, anyone with a basic understanding of building Java web applications and a basic exposure to Spring and Angular/React will find it helpful because it shows people how to use JHipster for cutting-edge full stack development.

For example, this book will be helpful for a full stack developer who wants to reduce the amount of boilerplate they write, especially for greenfield projects.

It will also help a back-end developer who wants to learn full stack development with Angular or React or a full stack developer who wants to learn microservice development. And it will help a developer who wants to quickly prototype web applications or microservices.

WHITE PAPER

Enterprise Software Delivery in the Age of Containers

How can enterprises take advantage of the benefits of container technology without creating more work for their teams? Download this free whitepaper to learn where containers fall short and how Release Orchestration and Deployment Automation tools can bridge the gap between the promise of containers and the realities of complex enterprise application delivery.

What prompted you to write the book?

Deepu: I had been toying with the idea of writing a book for quite some time and was waiting for the right opportunity. I have been part of the JHipster team since its inception and have seen JHipster growing exponentially over the years. I took the lead role so that I could better manage the growing community and serve the growing demands of the Java community.

The project is only a few years old but already has over a million installations, 200+ companies using it, 400+ contributors, and 10,000 GitHub stars — quite impressive for a project in the Java community.

But apart from the documentation, unofficial guides, and courses, proper educational material has been lacking. There is a great mini book by Matt Raible (The JHipster Mini-Book 4.5), but it doesn’t show all the capabilities of the platform.

So, when Packt contacted me about authoring a book, I knew that it was the perfect opportunity. I teamed up with Sendil to write a full-fledged book on full stack development with JHipster so that more developers can benefit from this awesome platform.

Sendil: I would like to quote George Orwell here; “Writing a book is a horrible, exhausting struggle. Like a long bout with some painful illness. One would never undertake such a thing if one were not driven on by some demon whom one can neither resist nor understand.”

The “demon” for me is the awesome community that I am proud to be part of. With all of the amazing work that happens in JHipster, you always learn new things and re-learn things to perfection.

I’m hoping that this book will ignite that passion in readers.

What are the biggest pain points/limitations with JHipster that the book addresses?

Deepu: With this book we hope to ease some of the major challenges faced by JHipster users.

JHipster provides a huge range of technology options while creating new projects. It also provides many different architecture choices for monolithic and microservice application development. This is quite overwhelming to new users due to the complexities involved with the different technologies. The book breaks down and explains the myriad options available and guides readers towards the appropriate architecture choices.

JHipster users also face other challenges around the visibility of available options in the framework and a lack of guides and best practices in certain areas. In the book, we address these challenges by providing examples of best practices for driving continued development, CI/CD, and deployment with applications developed in JHipster.

Sendil: As Deepu said, this book addresses the various JHipster options. We use an example application, which evolves through the chapters, as a way to simplify JHipster and make it easier for developers to understand. In our opinion, a sample application is worth a thousand pages of code.

How does XebiaLabs support JHipster?

Deepu: When I joined XebiaLabs, I immediately noticed how friendly the company was towards open source. Not only do we use a lot of open source software like Spring and Akka, but the company also encourages developers to contribute to the OSS community. This attitude helped me to continue leading JHipster despite a busy work schedule.

XebiaLabs has also been instrumental in the React support that has been added to JHipster. In fact, the addition of React support was the result of some of the product development work that happened at XebiaLabs.

XebiaLabs also supports the JHipster community by sponsoring our attendance at various conferences and events to talk about JHipster, including the JHipster Conference happening in Paris this June.

Sendil: Adding to that, I feel that for any open source project to be successful it needs an awesome community. Reaching out to the community via conferences is vital. Thanks to XebiaLabs’ participation in these conferences we get to spread some JHipster love.


The post JHipster Q&A: Guiding Developers on the Journey to Building Complex Apps appeared first on XebiaLabs Blog.

Take Release Automation to the Next Level, Episode 2: Blaze a Trail with Blue/Green Deployments


The “Take Release Automation to the Next Level” series gives you insights into the benefits and challenges surrounding DevOps deployment patterns. In this series, we’ll look at how different patterns work, the advantages and disadvantages of each one, considerations for implementing them, and best practices when applying them.

As DevOps teams mature beyond their first initiatives to automate software build, test, deploy, and release activities, advanced deployment patterns provide a flexible structure that they can use to speed up the software delivery cycle while maintaining control over the way applications are deployed.

One popular deployment pattern is blue/green deployment. In this pattern, a load balancer directs traffic to the active (blue) environment while you upgrade the standby (green) environment. After smoke testing the application in the green environment and establishing that it is operating correctly, you adjust the load balancer to direct traffic from the blue environment to the green environment.
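The mechanics of the cutover can be sketched as follows; the `LoadBalancer` class and the `deploy` and `smoke_test` callables are placeholders for your real tooling, not any specific product's API:

```python
# Hypothetical sketch of a blue/green cutover. In production, set_target
# would be an API call to the actual load balancer.

class LoadBalancer:
    def __init__(self):
        self.active = "blue"   # environment currently receiving traffic

    def set_target(self, env):
        self.active = env


def blue_green_deploy(lb, deploy, smoke_test, new_version):
    """Upgrade the standby environment, verify it, then switch traffic."""
    standby = "green" if lb.active == "blue" else "blue"
    deploy(standby, new_version)      # upgrade the idle environment
    if not smoke_test(standby):       # verify BEFORE exposing users
        raise RuntimeError(f"smoke test failed on {standby}; traffic unchanged")
    lb.set_target(standby)            # the cutover is a single switch
    return standby
```

Note that a failed smoke test leaves traffic pointing at the old environment, which is exactly what makes the pattern safe.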


Advantages of Blue/Green Deployments

The blue/green deployment pattern provides a safe way to upgrade applications without interrupting their use. Blue/green deployments work particularly well for monolithic applications that can take significant time to deploy, because you have full control over the point at which users can access the new version of the software.

Disadvantages of Blue/Green Deployments

Adopting the blue/green deployment pattern can increase operational overhead because you have to maintain duplicate Production environments with identical infrastructure. Additionally, updating database schemas while following a blue/green approach requires caution, as the new version of the application cannot use the database until it has been upgraded.

Rolling Back a Blue/Green Deployment

Blue/green deployments offer fast and straightforward rollbacks. If the transition to the green environment fails, or if something goes wrong once the green environment is in active use, you simply adjust the load balancer to direct traffic back to the blue environment.


Adopt Blue/Green Deployments with the XebiaLabs DevOps Platform

To adopt the blue/green deployment pattern in a way that can scale across teams, projects, applications, and environments, you must ensure that the full, end-to-end deployment flow is automated: from managing user traffic, to deploying and smoke testing the application, to handling failures in the process. The XebiaLabs DevOps Platform makes it easy to apply the blue/green pattern to your deployments consistently, without requiring you to design or script the deployment flow for every application and environment. With XebiaLabs, you can safely upgrade applications without inconveniencing users. And with XebiaLabs’ support for automated rollback, you can quickly revert application versions without risk.


Best Practices for DevOps: Advanced Deployment Patterns

This white paper gives you insights into the DevOps best practice of advanced deployment patterns. It describes how each pattern works, the advantages and disadvantages of each one, considerations for implementing them, and best practices when applying them. Read more.


The post Take Release Automation to the Next Level, Episode 2: Blaze a Trail with Blue/Green Deployments appeared first on XebiaLabs Blog.

Take Release Automation to the Next Level, Episode 4: Create Stability with Canary Releases


The “Take Release Automation to the Next Level” series gives you insights into the benefits and challenges surrounding DevOps deployment patterns. In this series, we’ll look at how different patterns work, the advantages and disadvantages of each one, considerations for implementing them, and best practices when applying them.

In Episode 3, we looked at rolling updates, which is a deployment strategy that helps teams achieve zero downtime while updating an application. A canary release is a similar strategy that allows you to roll out new features to—and collect feedback from—a targeted group of users.

In a canary release, you upgrade an application on a subset of the infrastructure and allow a limited set of users to access the new version. This approach allows you to test the new software under a Production-like load, evaluate how well it meets users’ needs, and assess whether new features are profitable.

A canary release is similar to a rolling update because it involves releasing a new feature in a staged manner. However, when you perform a rolling update, you only verify that the application is technically stable and functional before moving on to the next stage of the rollout. With a canary release, you evaluate users’ reaction to a new feature or functionality before deciding whether to release it more widely.
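The traffic-splitting side of a canary can be illustrated with a tiny router that sends a configurable fraction of requests to the new version. This is a hypothetical sketch for intuition; real setups usually implement the split at the load balancer or service mesh:

```python
import random

# Illustrative canary router: a configurable fraction of requests goes
# to the canary version, the rest to the stable version. Injecting the
# random source keeps the logic testable.

def make_canary_router(stable, canary, canary_weight=0.05, rng=random.random):
    """Return a per-request backend selector."""
    def route(_request=None):
        return canary if rng() < canary_weight else stable
    return route
```

Widening the rollout is then just a matter of raising `canary_weight` in steps (5% → 25% → 100%) while you watch the metrics from the canary group.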


Advantages of Canary Releases

Canary releases are a good way to release experimental or beta features to users and gather their feedback. For example, you can use a canary release to roll out a new feature to users in a specific geographic region. If those users like the feature, you can deploy a canary to another region and test the user response there; or, you could choose to upgrade all remaining nodes in the environment, possibly by applying another release pattern to minimize application downtime.

Disadvantages of Canary Releases

Canary releases pose similar risks to rolling updates because multiple versions of the same application run in the environment while the canary release is deployed. For that reason, the environment will be somewhat unpredictable during the canary timeframe, and the application architecture must support running in cluster mode so that multiple instances can access the database.

Rolling Back a Canary Release

Rolling back a canary release is relatively easy because you only need to roll back the application version on servers where the new application version is deployed. User traffic can be redirected to nodes running the older version of the application, so users will not experience application downtime.


Implement Canary Releases with the XebiaLabs DevOps Platform

Canary releases pose similar risks to rolling updates, so successfully adopting canary releases requires similar control over the end-to-end deployment process. It’s important to automate control of the load balancer both when deploying a new application version and when rolling back. The XebiaLabs DevOps Platform makes it easy to implement canary releases consistently across different teams and applications. XebiaLabs also makes it easy to combine the canary release pattern with other advanced deployment patterns, so you have maximum flexibility while staying in control of your software delivery pipelines.

Application Release Automation

Best Practices for DevOps: Advanced Deployment Patterns

This white paper gives you insights into the DevOps best practice of advanced deployment patterns. It describes how each pattern works, the advantages and disadvantages of each one, considerations for implementing them, and best practices when applying them. Read more.

Related resources

The post Take Release Automation to the Next Level, Episode 4: Create Stability with Canary Releases appeared first on XebiaLabs Blog.

Take Release Automation to the Next Level, Episode 6: Getting Started with Advanced Deployment Patterns


The “Take Release Automation to the Next Level” series gives you insights into the benefits and challenges surrounding DevOps deployment patterns. In this series, we’ll look at how different patterns work, the advantages and disadvantages of each one, considerations for implementing them, and best practices when applying them.

In the “Take Release Automation to the Next Level” series, we’ve seen how the DevOps best practice of deployment patterns can help you speed up the software delivery cycle while maintaining control over the way your applications are deployed.

Deployment patterns allow you to control the technical deployment of new software versions as well as the rollout of new features to users. You can even release features to limited user groups and test them in Production with much less risk than if you made them generally available.

But while advanced deployment patterns can help you take release and deployment automation to the next level, there are some considerations to keep in mind before you start applying them. There are also best practices that you can follow to reduce the risk of adopting them.

What to Consider When Adopting Advanced Deployment Patterns

1. You might need to change your applications. Minimizing or eliminating downtime often requires multiple versions of the application to be active in Production at the same time, which might require you to change your applications at the code level.

2. Running multiple application versions at the same time means that the database must be compatible with all active versions. You must decouple database updates from application updates. In addition, Development teams must design database changes to be backward compatible. In practice, maintaining compatibility has consequences for both database changes and application logic.
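The expand-contract idea behind backward-compatible database changes can be shown with a small, self-contained sketch (using SQLite purely for illustration): the new column is added as nullable with a default, so the old application version keeps working unchanged while the new version starts using the column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Backward-compatible change: add the new column as nullable with a default,
# deployed separately from (and before) the new application version.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT DEFAULT NULL")

# The old application version still works: it neither selects nor sets `email`.
conn.execute("INSERT INTO users (name) VALUES ('bob')")
old_view = conn.execute("SELECT id, name FROM users").fetchall()

# The new application version populates the column for its own rows.
conn.execute("INSERT INTO users (name, email) VALUES ('carol', 'carol@example.com')")
```

A breaking change such as dropping or renaming a column would have to wait (the “contract” step) until no active version depends on the old shape.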

3. You might need to change your infrastructure. Some deployment patterns require you to create and maintain multiple environments that end users can access. Many DevOps teams achieve a multi-environment setup by using virtual infrastructure in the form of cloud-based environments. They might also containerize applications so that they can more easily stop, start, scale up, scale down, and load-balance them using container management and orchestration tools.

However, shifting from on-premises infrastructure and traditional middleware platforms to containers and the cloud cannot be done overnight. Most applications require architecture changes to run effectively in such environments.

Best Practices When Using Advanced Deployment Patterns

1. Limit the number of application versions that are live at the same time to two. This approach reduces the complexity of deployments, infrastructure management, and database compatibility.

2. Deploy in small increments, only releasing a few features at a time. This approach, which is one of the core principles of DevOps, reduces deployment risk because the more frequently you release software, the more reliable your release process becomes, as teams have a greater incentive to fix problems and improve the process.
Deploying small increments also reduces lead time, which is the time that elapses from the start of feature development until that feature is available to end users. Reduced lead time is one of the key goals of Continuous Delivery, as it enables you to deliver value to users and to the market faster.

3. Manage risk by refactoring monolithic applications into microservices that you can deploy independently. Microservices can give you more control over the impact of changes to the environment. They can also increase development throughput and enable faster development cycles, as new features are typically delivered as one or more microservices and don’t require changing the entire application.

Application Release Automation

Best Practices for DevOps: Advanced Deployment Patterns

This white paper gives you insights into the DevOps best practice of advanced deployment patterns. It describes how each pattern works, the advantages and disadvantages of each one, considerations for implementing them, and best practices when applying them. Read more.

Advanced Deployment with the XebiaLabs DevOps Platform

The XebiaLabs DevOps Platform provides deployment automation and release orchestration for your entire software delivery pipeline. It makes release and deployment practices across the enterprise reusable, scalable, and secure through:

  • Proven templates that work for Production deployments at enterprise scale
  • Integrated support for both automated and manual release activities
  • Visibility into the status of running releases and deployments
  • Proactive notification of delays and bottlenecks in the process
  • Detailed error analysis for troubleshooting failed releases and deployments
  • Fully auditable releases and deployments, with detailed reports and analytics
  • Granular, role-based access control over all applications and environments

Related resources

The post Take Release Automation to the Next Level, Episode 6: Getting Started with Advanced Deployment Patterns appeared first on XebiaLabs Blog.

GE Power Fleet Services Boosts Deployment Speed and Revenues


The GE Power Fleet Services team develops software to remotely monitor, run diagnostics on, and capture analytics for the industrial Internet solutions they deliver. But their development and production teams had hit a wall in their drive toward continuous improvement. Unplanned work interrupted planned work, software release processes moved too slowly to meet business demands, and painful, disjointed handoffs between teams disrupted the release process.

The team wanted to develop higher quality software faster and improve collaboration between development and production. They began by improving the front end of their software development process with Continuous Integration using Jenkins, but realized they had a lot of software sitting on the shelf waiting to be released. To fully realize the benefits of their move from waterfall to Agile methods, GE Power knew they needed release automation.

Developers Deploy Twice as Fast with XL Deploy

After evaluating IBM, CA, and XebiaLabs, GE Power chose the XebiaLabs Enterprise DevOps Platform to automate their deployments and orchestrate and control their release pipelines. A major factor in their decision was XebiaLabs’ agentless architecture.
The team began by using XL Deploy to automate their deployments. Introducing a new technology into the stack with the infrastructure team was typically a lengthy process. But with XL Deploy’s agentless technology, they could easily deploy their applications with no interruption to the infrastructure team.
Automating their deployments solved GE Power’s initial bottleneck. Their next challenge was to define and improve their pipeline between development, QA, and production.


EBOOK

The IT Manager’s Guide to Continuous Delivery

Continuous Delivery allows you to get new features and capabilities to market faster and more reliably. This ebook helps managers understand the principles behind Continuous Delivery, explains the transition to a Continuous Delivery organization, and gives practical advice on how to start benefiting from the dramatic improvements Continuous Delivery provides.

 

Getting Control of the Pipeline with XL Release

Early phases of the GE Power pipeline were not representative of the production environment. For example, the configurations were often different, and they often had to abort entire releases, which meant losing critical release metrics.
Using XL Release, GE Power was able to create a controlled pipeline. The team can now define their processes and understand where to automate and how to standardize each stage of the pipeline. XL Release’s metrics and dashboards also allow them to drive continuous improvement, providing metrics that show release failures, identify problems, and offer information to help them solve problems faster.

The Results

Using the XebiaLabs Enterprise DevOps platform, GE Power experienced great results:

  • Releases that took months now take days and only 1/3 of the resources
  • 25 hours per deployment saved, multiplied by hundreds of deployments per year
  • Higher quality software – deployments go into production “first time right”
  • Elimination of rework increased capacity to innovate, which boosted revenue growth
  • Almost 2X return on their XebiaLabs investment this year and expect 5X return next year
  • Critical release data helps team make quick, data-driven decisions and measure success
  • Migration from legacy process to release automation was accelerated due to XebiaLabs’ ease of use

“We automated over 20 packages and found we saved over 25 hours per deployment. One analytic application that took months to get through the pipeline now takes us only days with XebiaLabs. The team is finally deploying into production ‘first time right,’ with no failures.”
Eric Haynes, Director of Architecture and Engineering, GE Power Fleet Services

Check out the full case study here!

The post GE Power Fleet Services Boosts Deployment Speed and Revenues appeared first on XebiaLabs.


KeyBank Gets Deployment Control and Speed with XebiaLabs and OpenShift


“Our DevOps journey began by incident,” said Tom Larrow, DevOps Automation Engineer at KeyBank, one of the largest banks in the U.S., with revenues of $5.8 billion. The “incident,” which Tom described in a recent presentation at the XebiaLabs DevOps Leadership Summit, was a major outage that took their systems down for almost a day. While it made for flashy news headlines, it was, according to Tom, “a major reputational risk.”

But there’s a silver lining. Inside KeyBank, the outage spawned an investigation into their systems, which revealed just how complex they were. One online banking login, for example, required over 190 network hops across two datacenters and a minimum of 6 seconds to get in.
That kind of complexity is a fact of life for a major financial organization, especially one that’s seen much of its growth come from acquisitions. Last year KeyBank acquired First Niagara, which dramatically increased the number of customers using KeyBank’s systems.
“When you acquire companies and plug their systems into yours, you increase the complexity of your system exponentially,” said Tom.
Because of the complexity of their deployments, along with a lack of standard configurations and automation, they could only release software four times per year. Each release was a big deal—all-hands-on-deck, Sunday at 2am, 60 people on a call. It was, according to Tom, like “mission control, with everybody giving a go, no-go for changes.” Because the cycle was so long, they would do things like rush to include new features in the current release or scrap them until the next one.
But a major project was on the horizon: Tom’s team was tasked with building a new online banking platform, which was scheduled for completion by summer of 2017. And when the CEO announced the First Niagara acquisition—and moved the deadline for the new online system up by several months—the pressure was really on. They had to accelerate their process.


EBOOK

The IT Manager’s Guide to Continuous Delivery

Continuous Delivery allows you to get new features and capabilities to market faster and more reliably. This ebook helps managers understand the principles behind Continuous Delivery, explains the transition to a Continuous Delivery organization, and gives practical advice on how to start benefiting from the dramatic improvements Continuous Delivery provides.

Modernizing the Delivery Process

From a platform perspective, KeyBank knew they needed to change how they did business, and that meant modernizing the software delivery process. They began with an overhaul of their old platform, adding Docker containers to build all their online banking components, Kubernetes for container orchestration, and Red Hat OpenShift due to its strong enterprise support and value-added features.
For Continuous Integration, the team installed Jenkins, which they used in conjunction with OpenShift. This approach was great for improving coding, increasing the number and speed of deployments, and fixing issues faster. It was so effective that it allowed them to deliver the new online banking system to their existing customers a year early. The process also helped them meet their deadline for bringing First Niagara users onto their online services, and to rapidly fix any problems with little to no disruption to their millions of new and existing customers. Based on this success, KeyBank wanted to expand their new process to their other applications. “We had to keep improving,” Tom said. “That’s the whole point of DevOps.” But spreading the success from the online banking system to their other applications demanded more change.

Reaching the Limits of Jenkins

Tom and his team realized that, while Jenkins worked well for Continuous Integration, it had its limits. Jenkins lacked certain software controls, which led to some problems. For example, there were a few incidents where someone would create an extra job to do testing, and the test code found its way into production.
As a bank in a highly regulated industry, they also saw the need for more mature access and process controls. Maintaining proper segregation of duties and ensuring all aspects of delivery were fully auditable was essential. They knew they needed something more robust—a full enterprise tool. That’s where XebiaLabs came in.

Finding an Enterprise Tool

KeyBank chose XebiaLabs’ XL Deploy and XL Release products to help them get control of their delivery process and scale to include their other applications.
While KeyBank still builds Docker images and pushes them into their registry, they now use XL Deploy to deploy into their lower environments. Before XL Deploy, they were pushing a Docker image that was generically tagged “Dev.” Now, each image includes tags such as a build number and unique identifier. From there, XL Deploy creates a deployment package. The deployment packages are available in XL Release so that the team can select and control exactly which version of code is in any environment at any time. They can also easily roll back to a prior version and capture an audit trail.
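The shift from a generic “Dev” tag to uniquely identified images is a small but important detail. A minimal sketch of the idea (the tag format here is purely illustrative; conventions vary per team):

```python
def image_tag(app, build_number, git_sha):
    """Compose a unique, traceable image tag instead of a generic 'Dev' tag,
    so any environment can state exactly which build it is running."""
    return f"{app}:{build_number}-{git_sha[:8]}"

tag = image_tag("online-banking", 1742, "9f8e7d6c5b4a39281706f5e4d3c2b1a0")
# -> "online-banking:1742-9f8e7d6c"
```

With a unique tag per build, the deployment tool can map every environment to an exact code version, which is what makes controlled promotion and rollback possible.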

By adding XL Deploy and XL Release to their delivery platform, KeyBank has achieved their goal of combining speed and agility with fine-grained control over which software versions go through the pipeline.
To hear KeyBank’s DevOps transformation story firsthand and learn about Tom’s “DevOps Do’s and Don’ts,” visit our DevOps Leadership Summit page or go here.

The post KeyBank Gets Deployment Control and Speed with XebiaLabs and OpenShift appeared first on XebiaLabs.

V7.0 Plugin Adds Fine-Grained Control to Microsoft TFS/VSTS Deployments


Last year, we showed you how to use our Team Foundation Server (TFS) XL Deploy plugin to create, import and automatically deploy applications from TFS and Visual Studio Team Services (VSTS).

This plugin allows you to build a package with TFS/VSTS, import the package into XL Deploy, and automatically trigger a deployment to your chosen environment(s). With this integration, you get an easy, consistent way of packaging and deploying your applications from TFS/VSTS to all your target platforms: from legacy enterprise apps running on on-premises middleware, to cloud, PaaS, and Docker.
With the new 7.0 version of the VSTS/TFS plugin for XL Deploy, we’ve improved the plugin even further with the addition of two build/release tasks:

  • Package with XL Deploy – Packages your application
  • Deploy with XL Deploy – Imports the package and deploys it to the chosen environment

Although this functionality was already present in the original XL Deploy “Build” task, breaking it into two separate tasks adds fine-grained control: the “Package with XL Deploy” task can be added to the build pipeline so that you’re only building the package once, while the “Deploy with XL Deploy” task can be added to the release pipeline multiple times so you can deploy that one package to different environments in the pipeline. This ensures that the code you’re deploying to production is exactly the same code that was successfully tested in the lower environments.
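The build-once, deploy-many principle behind this task split can be sketched in a few lines. This is an illustrative sketch, not the plugin itself: the same built package is promoted through each environment, and a checksum guards against it silently changing between stages.

```python
import hashlib

def checksum(package_bytes):
    """Fingerprint the built package so every stage can verify it."""
    return hashlib.sha256(package_bytes).hexdigest()

def promote(package_bytes, environments, deploy):
    """Deploy the SAME built package to each environment in order,
    verifying its checksum never changes between stages."""
    expected = checksum(package_bytes)
    for env in environments:
        assert checksum(package_bytes) == expected, "package changed between stages"
        deploy(env, package_bytes)

deployed = []
promote(b"app-1.4.2.dar", ["test", "acceptance", "production"],
        lambda env, pkg: deployed.append(env))
```

Packaging once in the build pipeline and deploying that artifact repeatedly in the release pipeline is exactly this pattern: production receives the bytes that were tested, not a rebuild of them.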
To demonstrate how these tasks work, I’ll walk you through how to create an XL Deploy package during the build stage and, later on in your release definition, deploy that package to different environments. Let’s get started…

Building the Artifact

As I described in a previous post, we will keep our Visual Studio Build. However, instead of using the XL Deploy build task, we will add the “Package with XL Deploy” task, which you can find in the Package category in the Task catalog.
Microsoft TFS/VSTS plugin for XL Deploy
Once we add the task, we need to point it to the right manifest path, choose whether we want the task to set the version of the created package based on the build number and, in our case, publish the artifact to TFS. If the latter option is selected, once the package is created, it will be uploaded to TFS and associated as an artifact to the current build. At the end, the result should be similar to the following:
Microsoft TFS/VSTS plugin for XL Deploy
 
We can now try the build and, if successful, we should see a DAR package under the build artifacts.
 
Microsoft TFS/VSTS plugin for XL Deploy
Once we see that the artifact has been uploaded, we can move to the next step.

Deploying from the VSTS Release Management

We are now going to create a new release definition and start with an empty template. As the artifact source, we are going to indicate a build, and from the relevant drop-down menu, select the build definition that we created in our previous step. You may also choose a continuous deployment option and a specific build queue for this build.
Microsoft TFS/VSTS plugin for XL Deploy
Now that we’ve created the release definition for the initial environment, we’ll choose Add Task and then select the “Deploy with XL Deploy” build task from the Task catalog.
Microsoft TFS/VSTS plugin for XL Deploy
This task will allow us to import the DAR package into XL Deploy and trigger the deployment for the desired environment. Bear in mind that the plugin will check if the package is already in XL Deploy and, if so, will skip the import. This means that if you are using it multiple times for different environment definitions, it will only be imported the first time.
Now we need to select the XL Deploy Endpoint, which indicates the instance of XL Deploy we are using for this deployment, and fill in the necessary details about the artifact location and the name of our target environment. For further information about selecting the XL Deploy Endpoint, see “Add an XL Deploy endpoint in Visual Studio Team Services.”
If the artifact was published to the TFS server, the release process itself will deliver it from the associated build. If you are not uploading the artifact to the TFS server, you can make it available to the release process from an alternative source, such as a file share; in that case, the release task will copy the file from the UNC path you provide.
Microsoft TFS/VSTS plugin for XL Deploy
And that’s it! Now we can create a new release and let our task delegate the deployment to XL Deploy.

To Learn More

You can find more VSTS-specific information here, and the VSTS extension for XL Deploy here.
 

The post V7.0 Plugin Adds Fine-Grained Control to Microsoft TFS/VSTS Deployments appeared first on XebiaLabs.

The Secret Behind Model-Based Deployments


DevOps teams at any organization aim to deliver software faster and more reliably. However, many deployment automation tools use a workflow approach based on manual scripts, which rapidly become too complex to maintain at scale. As teams leverage automation to streamline and standardize the deployment process, a model-based approach helps meet goals more efficiently.
See why manually scripting deployment workflows fails at enterprise scale…every time.

The post The Secret Behind Model-Based Deployments appeared first on XebiaLabs.

Ullink Doubles Deployments and Achieves Zero Production Failures


Ullink provides global, award-winning, multi-asset trading technology and infrastructure that is trusted by the world’s top-tier banks, brokers, and trading venues, from New York to Tokyo. Since 2001, Ullink has been one of the fastest growing technology companies in the financial industry.

Ullink’s Challenge: Deploying Software Faster While Improving Quality

Ullink’s top-tier financial services customers have zero tolerance for failure: even one deployment-related production failure per month was too much for their mission-critical trading needs. Ullink needed a new approach to automatically deploy more than 1,000 changes in production per month, and they had several stringent requirements:

  • Quality & stability: No deployment-related failures in production
  • Automation: Automate application deployment and eliminate manual operations
  • Standardization & rules: Ability to create and enforce rules across applications and environments
  • Extensible: Open APIs so they could execute their scripts
  • Fine-grained access control: User management controls that extend to the granular level of roles and tasks
  • Compatible with current systems: Easy integration with existing in-house technologies
  • Simplified deployments: Ability for anyone to execute a deployment, regardless of technical skill level
  • Data to prove compliance and support continuous improvement: Traceability of who does what and the ability to measure how long tasks took, both before and after changes

Automate Deployments with Provisioning Tools?

Jonathan Berdah, CTO of Customer Service and Operations, Ullink

“We started by looking at provisioning tools like Chef, Puppet, and Ansible to build out our DevOps plans,” said Jonathan Berdah, CTO of Customer Service and Operations at Ullink. “At a DevOps convention in Paris, we watched a demonstration of the XebiaLabs DevOps Platform. We immediately understood the limitations we would encounter if we tried to repurpose these provisioning tools and write scripts to automate deployments.
“We looked no further, because XebiaLabs met our needs for fully automated and self-service deployments, granular control over user roles and privileges, and integrations with existing tools.”

Double Deployment Capacity within Six Months with XebiaLabs

Ullink discovered that XebiaLabs wasn’t just a solution for deployment automation; it also helped them rethink their entire software delivery system. “XebiaLabs was the cornerstone of the changes we made to rebuild our entire software delivery platform. We moved from micro-sized manual changes to automated full application deployments,” said Jonathan. 
The XebiaLabs DevOps Platform helped Ullink achieve substantial improvements in their software delivery process:

  • No deployment issues in production, down from 1-2 errors per month
  • Easy for any team member to execute a deployment, regardless of skill level
  • Faster delivery of applications and features to clients due to elimination of bottlenecks in Operations
  • Deployments now take an average of 10 minutes, down from an average of 30 minutes. Previous deployments took up to 5 hours
  • Time savings on manual deployments now allow production engineers to focus their time on other work, such as validating the content of changes rather than manually carrying out the procedure to change things
  • Global, singular view and full control over the software delivery process
  • Better task management: easy to do the right tasks and difficult to do the wrong tasks
  • Improved customer experience: since Ullink implemented XebiaLabs, they’ve had no downtime in production and not a single complaint from customers due to deployment issues
  • Doubled the number of deployments within six months: now automatically deploying more than 4,000 changes per month across UAT and Production, up from fewer than 2,000

“Within six months of implementing XebiaLabs, we doubled our deployment capacity to over 4,000 automatic deployments per month, up from less than 2,000 manual deployments per month. And production is error-free,” Jonathan noted with satisfaction.

Read the rest of Ullink’s story here.

Related Resources

The post Ullink Doubles Deployments and Achieves Zero Production Failures appeared first on XebiaLabs.

How Application Release Automation Helps You Deploy to the Cloud


Most things that matter take work, and that includes migrating applications from on-premises infrastructure to the cloud. Many of the promised benefits of the cloud are real: faster delivery of apps, and flexible, scalable infrastructure at lower cost, for example. But for enterprises, moving to the cloud is hard, regardless of whether you’re moving to a full cloud or hybrid solution.

Migrating to the Cloud at Enterprise Scale

Migrating applications to the cloud is not as easy as it may initially seem, because deploying enterprise applications at scale is complex. In fact, the specific steps involved in deploying applications are inherently tied to the infrastructure itself, which compounds the problem. So it makes perfect sense that changing that infrastructure landscape is challenging, full of risk, and something that many organizations try to put off.

Fortunately, Application Release Automation (ARA), also called Continuous Delivery and Release Automation (CDRA) and Application Release Orchestration (ARO), can help you overcome these complexities and put you on the road to moving your applications to the cloud at enterprise scale.

What Does Application Release Automation Do for Deployments?

Along with many other necessary release activities, an Application Release Automation solution defines and places your application components and configurations onto the correct infrastructure targets, whether physical, virtual, or cloud. These components can be things like containers, binary code, connection configurations, database updates, queue definitions, or web service endpoints.

The ARA solution packages your application with its components and dependencies, then automatically installs it on target environments. In most ARA tools, you do two things to make this happen:

1. Define the application in the ARA solution. Usually, you’ll have a deployment package for each version of the application. This package contains the artifacts and configuration items that will make the application run.

2. Specify the environments where you want to deploy the package.

After you’ve created your deployment package and specified the target environments, you can deploy applications to those environments. Automating deployments in this way is the first step in getting ready to migrate applications to the cloud. But to really make your cloud transition smooth, you need to standardize your deployments. This is where an ARA solution with model-based deployment comes in.
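The two steps above can be sketched in miniature. The structure below is a hypothetical illustration of the concept, not any specific ARA tool's format: a deployment package is one versioned bundle of artifacts and configuration items, and environments are named sets of targets.

```python
# Hypothetical shape of a deployment package: one versioned bundle of
# artifacts and configuration items for a single application version.
package = {
    "application": "web-shop",
    "version": "2.3.1",
    "artifacts": ["web-shop.war", "static-content.zip"],
    "configuration": {"datasource": "jdbc/shopDB", "queue": "orders.in"},
}

# Step 2: environments are defined separately from the application.
environments = {
    "test": {"targets": ["test-tomcat-01"]},
    "production": {"targets": ["prod-tomcat-01", "prod-tomcat-02"]},
}

def deploy(package, env_name):
    """Deploy one package version to every target in the named environment."""
    targets = environments[env_name]["targets"]
    return [f"deploy {package['application']}/{package['version']} to {t}"
            for t in targets]
```

Because the package knows nothing about its targets, the same version can be pointed at any environment, which is the basis for the decoupling discussed next.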

Model-based ARA Solutions Remove the Complexity of Moving to the Cloud

A model-based ARA solution decouples applications from target environments, removing the requirement to manually script each deployment step in every single workflow for every single application. Instead, you specify each component file or “ingredient” that makes up your application along with the environment you want to deploy those components onto. That environment contains the container platforms, cloud-based instances, servers, or middleware required to run your application. As long as the application and environment components are supported, the ARA tool determines the steps that are needed to deploy the application and knows the order in which to run them.

Model-based deployments let you easily create and reuse deployment models and change them on the fly without having to rescript. The ability to model your deployments is critical for scaling DevOps across an enterprise, which typically includes hundreds or thousands of applications, multiple teams, and a mix of old and new technologies.
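The core mechanic of a model-based tool is deriving the plan from a comparison of models rather than from a hand-written workflow. A minimal sketch of that idea (simplified; real tools also order steps by dependency and target type):

```python
def plan(desired, deployed):
    """Derive deployment steps by comparing the desired model with what is
    already deployed; no workflow is scripted by hand."""
    steps = []
    for name, version in desired.items():
        if name not in deployed:
            steps.append(f"CREATE {name} {version}")
        elif deployed[name] != version:
            steps.append(f"MODIFY {name} {deployed[name]} -> {version}")
    for name in deployed:
        if name not in desired:
            steps.append(f"DESTROY {name}")
    return steps

# Changing the model changes the plan; nothing is rescripted.
steps = plan({"app.war": "2.0", "config.xml": "1.1"},
             {"app.war": "1.0", "legacy.jar": "0.9"})
```

Swap the target environment in the model and the same comparison produces a different, correct plan, which is why model-based deployment scales where per-application scripts do not.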

To put it bluntly, if you want to expand DevOps enterprise-wide, you must have ARA in place. (See what top analysts Gartner and Forrester have to say about why ARA is so important for scaling DevOps.)

Watch the video below to see what model-based deployment looks like in XL Deploy.

ARA Gives You the Flexibility to Choose and Change Your Cloud Strategy

Transitioning to the cloud requires you to decide how you want to do it—all at once, or piece by piece. Model-based ARA supports a number of different use cases; for example, if you don’t want to jump in head first, you can start small by moving parts of an application, such as static web content and front-end web servers, to the cloud. Or you can go big bang and move the whole app to the cloud if you want. Having the choice matters.

A model-based ARA solution doesn’t care if the targets in an environment are on premises or in the cloud. You just need to be able to interact with the target somehow, so SSH, WinRM, or any web service flavor will do. You don’t even need to define a new workflow or deployment steps because the ARA tool evaluates the changes and figures out the new deployment plan for you.

Deployment issues are one of the things you need to solve on the journey to the cloud, among many others. But solving deployment issues shouldn’t be difficult; implementing the right solution will provide benefits that go beyond your cloud move, such as faster and more reliable deployments, the ability to keep full audit trails and maintain compliance, better visibility into and control of the release process, and intelligence about the entire delivery pipeline so you can continually improve.

Check Out the XebiaLabs DevOps Platform for ARA

The XebiaLabs DevOps Platform is an ideal ARA solution to help you transition to the cloud thanks to its model-based approach and the many features it offers for release orchestration. With the XebiaLabs DevOps Platform, you can change an environment or application without having to change a workflow. Once you can do that, you can see the cloud simply as a different environment and not a scary new world that’s only for new applications.

Related Resources

The post How Application Release Automation Helps You Deploy to the Cloud appeared first on XebiaLabs.

How Model-based Deployment Accelerates Software Delivery


Scripting deployments doesn’t scale

Adopting Continuous Delivery means finding ways to speed up development cycles and automate software delivery so that DevOps teams can release their applications faster and more reliably. But once you start to scale, some Application Release Automation tools that promise to accelerate software delivery actually slow it down. They require teams to spend time scripting out deployment workflows step-by-step, which results in time-consuming, expensive script maintenance in the future.

XebiaLabs’ deployment model generates flawless deployments… without scripts

The XebiaLabs DevOps Platform is different. It uses a model-based approach that streamlines and standardizes the software delivery process without requiring DevOps teams to write and maintain deployment scripts.

XebiaLabs’ model-based approach means that you only have to specify what you want to deploy and where; the model does the rest, from automatically generating deployment plans to handling rollbacks if something goes wrong. The XebiaLabs model standardizes your deployments so you can scale up to new environments, new target technologies, and new platforms with ease. Perpetual script maintenance becomes a thing of the past, so teams can focus on more productive and fun projects.

Check out this video!

Check out this video to learn about the fundamentals of XebiaLabs’ innovative deployment model and how it helps you achieve fast, automated, repeatable deployments.

Learn More about Model-based Deployments with XebiaLabs


Scale Global Deployments Exponentially with XebiaLabs Satellite


The XebiaLabs DevOps Platform’s Satellite feature packs a one-two punch. It helps organizations scale complex, long-running application deployments exponentially—without negatively affecting performance—by offloading deployment work from the XL Deploy server to satellite servers. And for organizations that have data centers located around the world, Satellite provides fault tolerance and continuity in the face of network failures.

The icing on the cake is that Satellite does it all—on-premises or in the public, private, or hybrid cloud—without requiring you to install proprietary agent software on every deployment target. That means faster setup, less maintenance work, no audit concerns, and more security for your infrastructure.

The XebiaLabs DevOps Platform 8.5 introduces even easier satellite management for users across the enterprise

The XebiaLabs DevOps Platform 8.5 introduces a friendly graphical user interface that allows both technical users and business stakeholders to monitor and manage the satellites in their system. This new user interface:

  • Features at-a-glance overviews of satellite health for easier monitoring and troubleshooting
  • Makes it easy to organize satellites into groups for enhanced failover and load balancing capabilities
  • Enables one-click maintenance actions such as restarting satellites and synchronizing plugins
  • Provides comprehensive overviews of which deployments are running on which satellites
  • Includes easy drill-down from high-level satellite overviews to low-level deployment tasks

Developers, Operations staff, and System Administrators can use the new interface to monitor the health of their satellites, to verify that network connections to satellites are working as expected, and to restart and synchronize satellites when required.

QA Testers, Release Managers, and Product Owners can also use the interface to check the status of their application deployments, to verify where their deployments are running, and to collect information needed to troubleshoot slow or failed deployments.


Benefits of XebiaLabs Satellite

XebiaLabs offers the first and only hassle-free, resilient, fault-tolerant global application deployment solution, designed to make enterprise deployments go smoothly. If your network has bandwidth, latency, or reliability issues, there’s no better way to ensure fast and error-free deployments.
XebiaLabs Satellite provides:

  • Infinite scalability for deployment workloads without the overhead of agents
  • Increased fault tolerance and reliability when network connections are unstable
  • Reduced network traffic that substantially reduces costs for global infrastructure
  • Automated staging and clean-up of deployment artifacts
  • Full control over deployment security in remote data centers
  • Simpler deployments across mixed Unix/Windows environments

Learn More



Best Practices for Custom Deployments with the XebiaLabs DevOps Platform


Whenever I give a demo of, or a training class on, the XebiaLabs DevOps Platform, I mention that all actions in deploying an application can be broken down into two categories: moving data (files) and executing commands.

In a typical deployment, the XL Deploy module of the XebiaLabs DevOps Platform performs these two actions on a remote system, to which a connection is made via SSH for Unix/Linux operating systems and WinRM for Windows.

XL Deploy has a convenient control task called “Check Connection” that verifies that these two actions can be performed successfully. It transfers a dummy data file and runs a command to list the contents of a temporary directory, proving that the protocols are correctly configured, firewalls and ports are open, and the login credentials are valid.
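Conceptually, the control task boils down to the two actions in the following sketch (shown locally for illustration; against a real target, the file copy would travel over SCP/WinRM and the command over SSH):

```shell
# Simulate the two actions "Check Connection" verifies, locally for illustration.
check_dir=$(mktemp -d)                                 # temporary directory on the "target"
echo "connection check" > "$check_dir/xld-check.txt"   # action 1: move data (file transfer)
ls -l "$check_dir"                                     # action 2: execute a command (list the temp dir)
cat "$check_dir/xld-check.txt"                         # confirm the dummy file arrived intact
rm -rf "$check_dir"                                    # clean up afterwards
```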

So are we ready to deploy?

Some users are tempted at this point to structure a deployment using plugins that provide these exact two actions in their most basic form: the “command” plugin with its cmd.Command object type and the “file” plugin with its file.File object type.
A user goes into XL Deploy and configures a deployment in this fashion:

  1. Commands that precede the file transfers

cmd.Command object with order = 45
cmd.Command object with order = 55

  2. File transfers

file.File object for the first file with default order 60
file.File object for the second file with default order 60
file.File object for the third file with default order 60
file.File object for the fourth file with default order 60

  3. Commands that follow the file transfers

cmd.Command object with order = 65
cmd.Command object with order = 75


What are the pros and cons of this approach?

As seen in the next two images, the objects involved in the deployment are:

  • Easy to configure
  • Easy to read and comprehend

For the command object, the user simply enters the command to be executed in the command line property. For the file object, the user uploads the file and indicates the target path, at minimum. The example below uses a placeholder for the latter.


On the other hand, this approach has some shortcomings:

  • Fragile. You’re working with an open command line susceptible to errors.
  • Incomplete. The commands don’t take rollbacks and reruns into account effectively.
  • Not portable. You’ll have to rewrite the commands for another OS such as Windows.
  • Doesn’t identify that all these items go together if they are part of a package containing other deployables.
  • Doesn’t provide the benefits of XL Deploy’s object model. For example, if you wanted to “subclass” this configuration for behavior slightly different than this, you have to rewrite it.

Best Practices

XebiaLabs recommends the following best practices for XL Deploy when it comes to deployments not already supported by an existing plugin.

Combine the file artifacts, both text and binary, into a single zip-style archive. 

This follows the same rationale as bundling application files together in a jar, war, or ear file. They move together through your CI/CD pipelines as a single unit, developed and deployed together. Pack them in the build job, and unpack them when they reach their final home on the target system.

Make use of classpath resources when able. 

These might be installation binaries or control templates that don’t change between deployed versions and therefore don’t have to be bundled into the actual application files.

Control the commands required for your deployment with xl-rules and classpath scripts. 

The scripts are easily parameterized, and XL Deploy can send the Linux or Windows version of a script depending on the OS targeted.

A Plugin Example

Now we’ll begin constructing a plugin by working in the XL-DEPLOY-SERVER/ext directory.  As a first step, add a definition to your synthetic.xml:

<?xml version="1.0" ?>
<synthetic xmlns="http://www.xebialabs.com/deployit/synthetic">
    <type type="demo.DeployedBestPractices" extends="udm.BaseDeployedArtifact"
        deployable-type="demo.BestPractices" container-type="overthere.Host">
        <generate-deployable type="demo.BestPractices" extends="udm.BaseDeployableArtifact" />
        <property name="targetdir1" />
        <property name="targetdir2" />
        <property name="targetdir3" />
        <property name="targetdir4" />
    </type>
</synthetic>

Adding a definition to our synthetic.xml gives us the ability to define a custom artifact, along with some properties for the deployment, in this case the target directories for each of the files when we unzip them. Of course, you can define any properties necessary for the deployment, making use of such data-types (kinds) as strings, integers, booleans, key-value maps, or even references to other objects.
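For example, properties with other kinds could be declared like this (a sketch following XL Deploy’s synthetic.xml conventions; the property names and defaults are illustrative, not part of the example above):

```xml
<property name="port" kind="integer" default="8080" />
<property name="cleanupStaging" kind="boolean" default="true" />
<property name="environmentVars" kind="map_string_string" required="false" />
```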
Now, on to defining the deployment behavior via xl-rules:

<?xml version="1.0"?>
<rules xmlns="http://www.xebialabs.com/xl-deploy/xl-rules">
    <rule name="demo.DeployedBestPractices.Create" scope="deployed">
        <conditions>
            <type>demo.DeployedBestPractices</type>
            <operation>CREATE</operation>
        </conditions>
        <steps>
            <os-script>
                <description>Execute script 1</description>
                <script>demo/createBestPractices1</script>
                <order>45</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
            <os-script>
                <description>Execute script 2</description>
                <script>demo/createBestPractices2</script>
                <order>55</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
            <os-script>
                <description>Handle the files</description>
                <script>demo/createBestPracticesUpload</script>
                <order>60</order>
            </os-script>
            <os-script>
                <description>Execute script 3</description>
                <script>demo/createBestPractices3</script>
                <order>65</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
            <os-script>
                <description>Execute script 4</description>
                <script>demo/createBestPractices4</script>
                <order>75</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
        </steps>
    </rule>
</rules>

We have replaced each of our four commands with an os-script section, pointing to a script in the XL Deploy classpath. Each os-script tag represents a step, and for each one there are four properties that will be applied to it: a description, a script path, an order number, and a boolean to tell XL Deploy whether or not to upload the artifact(s) in the object.

The ext directory can now be structured like this:

ext
├── demo
│  ├── createBestPractices1.sh.ftl
│  ├── createBestPractices2.sh.ftl
│  ├── createBestPractices3.sh.ftl
│  ├── createBestPractices4.sh.ftl
│  └── createBestPracticesUpload.sh.ftl
├── readme.txt
├── synthetic.xml
└── xl-rules.xml
1 directory, 8 files

And the four “command” scripts now look like this:

$ cat demo/createBestPractices1.sh.ftl
echo "Executing command 1 on ${deployed.container.name}"

$ cat demo/createBestPractices2.sh.ftl
echo "Executing command 2 on ${deployed.container.name}"

$ cat demo/createBestPractices3.sh.ftl
echo "Executing command 3 on ${deployed.container.name}"

$ cat demo/createBestPractices4.sh.ftl
echo "Executing command 4 on ${deployed.container.name}"

Note the FreeMarker references to the deployed object and to the objects it references. With the appropriate chain of reference pointers, you can reach any object in XL Deploy’s model.

And we have added an upload step to handle the files within our zip archive and move them to their proper directories, taking advantage of FreeMarker templating for properties of the deployed object:

$ cat demo/createBestPracticesUpload.sh.ftl
unzip -o ${deployed.file.path} myfile1 -d ${deployed.targetdir1}
unzip -o ${deployed.file.path} myfile2 -d ${deployed.targetdir2}
unzip -o ${deployed.file.path} myfile3 -d ${deployed.targetdir3}
unzip -o ${deployed.file.path} myfile4 -d ${deployed.targetdir4}

Here is the deployment output for this script, which illustrates how XL Deploy uploads the script and the artifact to a temporary directory and then executes the script from there on the host’s operating system.

Notice that the rules file only specified demo/createBestPracticesUpload, without the .sh or .ftl extensions. Since XL Deploy knows this deployment is going to a Linux system, it looks for the version of the script with the .sh extension. The .ftl extension directs XL Deploy to process the script through FreeMarker.

If we wanted to deploy to Windows, we would have included a .bat or .cmd version of the script.
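A hypothetical Windows counterpart of the upload template might look like the sketch below. This assumes PowerShell 5+ with Expand-Archive available on the target; it extracts the archive to a staging folder and then copies each file into place (the staging path is illustrative):

```bat
rem createBestPracticesUpload.cmd.ftl -- hypothetical Windows counterpart
powershell -NoProfile -Command "Expand-Archive -Force '${deployed.file.path}' 'C:\Windows\Temp\staging'"
copy /Y "C:\Windows\Temp\staging\myfile1" "${deployed.targetdir1}"
copy /Y "C:\Windows\Temp\staging\myfile2" "${deployed.targetdir2}"
copy /Y "C:\Windows\Temp\staging\myfile3" "${deployed.targetdir3}"
copy /Y "C:\Windows\Temp\staging\myfile4" "${deployed.targetdir4}"
```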
So we end up with the same result as we had with the four command objects and the four file objects. And there are many more options available in XL Deploy to make this example fully functional:

  • This type can be subclassed for variations on the core behavior.
  • Rollback behavior can be controlled with additional rules. See the rules reference for all the options available with XL Rules.
  • Parameterization is very flexible with FreeMarker.

Related Resources


Announcing XL JetPack: Deploy Applications to the Cloud in Under 15 Minutes


Enterprises across all industries are attracted to the cloud for its potential to help them cut costs and deliver value faster. But those doing the hands-on work of migrating apps to the cloud know the roadblocks well:

  • Time wasted writing scripts to configure infrastructure and connect pipelines
  • Missed compliance and security requirements as delivery accelerates
  • Lack of cloud expertise on their teams

XebiaLabs has pioneered release orchestration and deployment automation for DevOps and Continuous Delivery across on-premises, hybrid, and cloud environments. Now we’ve wrapped up our best practices and tools into one convenient package—XL JetPack—so you can launch successful, efficient, automated cloud deployments in just 15 minutes or less!

Check in Your Code and Fly

XL JetPack is the only cloud tool to offer the comprehensive release orchestration and deployment automation needed to migrate apps to Production in the cloud at rapid speed. With XL JetPack, developers, cloud architects, and others get everything they need to push their apps to Production in the cloud, with all the correct settings, best-practices, and built-in governance—all in under 15 minutes. XL JetPack’s cloud-centric foundation lets teams:

Push their apps to Production in the cloud in minutes. Automate the most complex pipelines and deployments through easy-to-use declarative DSL—no low-level scripting. It’s just YAML.

Unify mission control with release orchestration. Ensure that every step of a release happens in the right order, with the right information propagated to the right places.

Establish an easily repeatable process for multi-cloud deployment automation. Follow a tested, repeatable process to deploy apps from Dev to Test to UAT to Production environments on any platform, on any cloud, whether Amazon, Google, or Azure.

Go fast with best-practice blueprints. Blueprints are safe, secure, immediately usable release templates and deployment plans for cloud-based resources, such as Amazon ECS and EKS. Get best-practice scaffolding, correct cloud configurations, proven reusable processes, and inter-tool coordination, all ready to go. Easily version and share your apps and reuse delivery pipelines.

Control the launch from the command line. Manage releases in a few keystrokes: invoke blueprints, start releases and deployments, and import and export configurations, release templates, and deployment information for fast spin-up of complete release pipelines and cloud deployments.

Get hands-free connected pipelines. Work with your favorite tools like Jenkins, Ansible, and AWS CloudFormation, and link your pipelines all the way to Production—without scripting.

Collaborate with pipeline views and give visibility to all. Release flow, release relationship, and risk views make it easy for teams to follow release, task, and deployment status. Automatic risk scoring, built-in alerts, and built-in approvals and gates help teams quickly spot problems and ensure everyone sees what they need at exactly the right time.

Monitor launches with comprehensive dashboards. Purpose-built and customizable dashboards based on real data highlight performance, deployment activity, environment status, and security and test coverage, so teams can launch safely, monitor trends, and continuously improve.

Get built-in compliance, security, and audit tracking. Includes full chain-of-custody reporting without extra coding or meetings, and ensures that the right governance processes and security steps are followed, whether for cloud, hybrid, or on-premises deployments. Plus, everything is logged in a form that’s easy for audit and compliance teams to use directly.

According to one XebiaLabs customer, a large, global insurance company: “As we moved apps to the cloud, we hit a wall because of the complexities of cloud deployments and the lack of cloud skills on our development teams. XebiaLabs has given us a core foundation that makes cloud migration painless for developers who just want to build apps and not have to deal with all the configurations, compliance, and other details involved in getting them onto the cloud.”

Easy to Try and Buy

Try XL JetPack for free, and see how fast and secure moving apps to the cloud can be.
XL JetPack is available for purchase at $499 per month for a 10-user development team.

Try XL JetPack for Free!

Try the tool Development teams need to accelerate deployments to AWS, Azure, and Google Clouds!

Learn More


Expedia Gives Developers Self-service Deployments


If you ever travel, chances are you’ve gone to Expedia.com to check out hotel, flight, car and other options. One of the first online booking websites, Expedia, Inc. is now one of the world’s leading travel companies, with an extensive brand portfolio that includes some of the most trusted online travel brands from around the globe.

To maintain their dominance in a fast-paced market, Expedia, Inc. looked to increase software deployment speed and streamline processes while operating at scale across 1,000 machines.


The Challenge

Supporting manual deployments, managing dependencies and manually patching code were taking more time than they wanted, and the Operations team knew they could improve and automate. They sought to streamline their use of multiple tools and manual processes.
Using Chef to manage deployment services resulted in too much script-related maintenance. Whenever they needed to update their services or make changes, they manually edited each script and then redeployed their applications to each environment separately. The Operations team knew there was a better way. Rather than spending time manually executing or maintaining deployments, Operations wanted to give their developers self-service, schedulable, automated deployments.


Developers Deploy Twice as Fast with XebiaLabs

Expedia considered a range of tools to automate their deployments, including Puppet, Chef, Microsoft Orchestrator SCCM and the XebiaLabs DevOps Platform. They chose XebiaLabs because it manages dependencies, has a model-based, low-maintenance architecture, and does not require copious custom scripts. Today, over 200 developers at Expedia use XL Deploy to manage, on average, over 2,500 deployments per month.

Within just one month of implementing XebiaLabs, we realized a 20% increase in release velocity, a 33% increase by month two and later reached a 50% increase.
Ganeshram Iyer, Manager of Release Engineering at Expedia

Game-changing Results with XebiaLabs

In addition to a huge boost in deployments per month, Expedia has also achieved many other game-changing results, including:

  • End-to-end automation and control
  • Seamless, self-service deployments for developers
  • More time for Operations team members to focus on other work, since they are no longer required for deployment activities
  • Standardized and defined release process
  • Ability to assign permissions, roles and controls to granular aspects of the pipeline as needed
  • Consistent steps across all environments
  • A process that guides developers to adopt coding best practices for all environments

Learn more about Expedia’s processes and the improvements they made with XebiaLabs… read the full case study!


Too Many Pipelines: The Real Complexity of Delivering Software


Many enterprises struggle with software delivery when they have to release many different business applications at the same time. Cross-application, cross-project, cross-technology, and cross-team dependencies can slow down technical release pipelines and increase the risk of conflicts or failures. Plus, complicated, large-scale software delivery processes make it hard for managers and other business stakeholders to see what’s happening in real time. Release complexity is something that affects everyone, from developers, QA testers, and system administrators, to development team leads, release managers, and product owners.

Your Pipelines are More Complex than You Think

Teams often think of the software delivery pipeline as a straightforward set of activities that flow from development through testing, acceptance testing, approvals, and deployment to production. Of course, there are actually many software delivery pipelines across the organization, all with their own sets of requirements and tasks that need to be executed.

But software delivery isn’t only complicated because there are multiple delivery pipelines; it’s complicated because those pipelines move at different speeds, require different approval gates, and have different potential bottlenecks and failure risks. In addition, the Continuous Integration cycle of merging code, building software artifacts, and automatically testing them means that every phase of the process goes through many iterations. The real complexity of software delivery looks like this:

Delivery Patterns Help You Manage Release Complexity

XebiaLabs’ delivery patterns feature allows you to track all individual items that contribute to the business delivery of one application as a whole, or of multiple applications that need to be released together. You can manage the business delivery in a single place, and optionally add transition points that provide synchronization of stages across many technical pipelines. With delivery patterns, your software delivery landscape can look like this:

Check Out the Video to Learn More

Learn More

