Provisioning not part of your Application Release Automation process? Think again
You provision for a purpose, so you should automate it
In the early part of the last decade, I ran a global education program at a midsized company called Softwatch, providing education to our partners around the world. One such occasion had me traveling to Paris to teach a class of 20 students from various parts of Europe, so my partner Matt Grob and I flew out on Saturday to spend the weekend creating a complete education lab from scratch.
Granted, it's not incredibly taxing when you're building a lab in Paris, of all places. The process, which was fairly straightforward, took us both days, leaving the nights free to explore the Champs-Élysées, La Concorde, and more, not to mention eating and drinking amazing culinary delights.
However, I digress. The point I am trying to make is that it took us an entire weekend to build 20 computers in an identical manner, and it cost the company several hundred dollars in flights and living expenses that would have been unnecessary if it were 2012 instead of 2002.
But it must be said that provisioning for provisioning's sake is never done – there is always a purpose. The task is performed to provide an environment that can run applications. Maybe the applications are COTS (Commercial Off-The-Shelf) or internally developed, but applications are the raison d'être. Even in my story, we were building the machines so that my company's software could be installed and run by the students during the class.
Provisioning, then, should be considered part of application release.
Many companies have missed the boat: BMC's BladeLogic and CA's Automation Suite for Clouds provision with no larger purpose. They require other solutions to consume the resulting machine, real or virtual, so that it may be incorporated into a bigger picture. But customers no longer want building blocks – they want turn-key solutions – so these formerly bedrock products are now passé.
This is evidenced by the fact that both of these solutions are now buried deep within the websites of their respective owners. I suspect similarly deep locations will be found on the websites of other vendors in this space.
Looking at the bigger picture, however, we see that provisioning is simply a sequence of steps to be followed:
1. Install the OS
2. Install the application server, DBMS, or ESB
3. Configure the application server, DBMS, or ESB
4. Connect it to the rest of the infrastructure
5. Deploy the application components on the box
The above steps define a process that can be automated.
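To make that concrete, the five steps above can be sketched as an ordered workflow in code. This is a minimal illustration, not any vendor's implementation: every function and host name here is hypothetical, and a real tool would drive installers, configuration management, and deployment agents rather than return status strings.

```python
# Hypothetical sketch: provisioning expressed as an ordered, automatable workflow.
# Each step is a placeholder for real installer/configuration logic.

def install_os(host):
    return f"{host}: OS installed"

def install_middleware(host):
    return f"{host}: application server/DBMS/ESB installed"

def configure_middleware(host):
    return f"{host}: middleware configured"

def join_infrastructure(host):
    return f"{host}: connected to DNS, monitoring, load balancer"

def deploy_application(host):
    return f"{host}: application components deployed"

# The ordered list IS the process definition -- which is what makes it automatable.
PROVISIONING_STEPS = [
    install_os,
    install_middleware,
    configure_middleware,
    join_infrastructure,
    deploy_application,
]

def provision(host):
    """Run every step in order and collect the results."""
    return [step(host) for step in PROVISIONING_STEPS]

for line in provision("lab-pc-01"):
    print(line)
```

Had something like this existed in 2002, building 20 identical lab machines would have been a loop over 20 host names rather than a weekend of manual work.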
Automation is automation, regardless of how it's being used. And all automation, regardless of its design, requires core capabilities that have withstood the test of time in terms of stability and scalability.
Let’s take a look at the general capabilities of an automation platform with the overall view of applying those capabilities to application release, service orchestration/provisioning, and workload/job scheduling.
The earliest use of automation can be traced back to IT Operations, where Run Books were used heavily to "begin, stop, supervise and debug the system(s)" (from Wikipedia) in the Network Operations Center (NOC). Run Books were initially index cards containing a set of instructions to accomplish a certain task. These were later listed on 8.5" x 11" paper and ultimately moved into huge three-ring binders as the systems the operator interacted with grew more complex.
At some point, companies such as Opsware, RealOps, and Opalis recognized that well-defined, mature processes were simply a set of repeatable steps with simple branch logic built-in. They built products that were then sold to HP (in 2007), BMC (in 2007), and Microsoft (in 2009) respectively, to allow Run Book scripts to be defined in a computerized form and then initiated via an operator console and, later, via integration with ITSM solutions.
From a capabilities standpoint, all automation looks similar, whether it is Run Book Automation (RBA, now often referred to as Process Automation), Workload Automation, or Release Automation. The required capabilities are listed below:
1. Support for a broad number of platforms
Distributed systems are widely used, of course, but the mainframe also didn't die as many "industry experts" predicted it would in the last decade. Together with the many Unix variants and even lesser-known platforms like IBM's iSeries, these operating systems have enough market share that they cannot be ignored and should be supported.
2. Built-in integration with the surrounding ecosystem
Entering commands manually via a script window is no worse than writing BASH or WSH scripts. Built-in support for commonly used IT infrastructure (e.g. monitoring systems, typical system metrics such as CPU usage or free disk space, and web/application/database servers), however, lets the workflow designer simply enter a few values in a data form while the underlying system takes care of translating the action into the appropriate commands.
3. Parse and react
Taking the output of executed commands or their result codes and either extracting values to be used in subsequent steps or branching based on those values is critical.
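A minimal sketch of parse-and-react, under the assumption that the automation step has captured the text output of a `df`-style disk usage command (the output below is canned for illustration): a value is extracted from the output, and the workflow branches on it.

```python
import re

# Canned output standing in for a captured "df" command result.
df_output = """Filesystem 1K-blocks  Used Available Use% Mounted on
/dev/sda1   10485760 9437184   1048576  90% /"""

# Parse: extract the disk-usage percentage from the command output.
match = re.search(r"(\d+)%", df_output)
use_pct = int(match.group(1))

# React: branch to a different follow-up step based on the extracted value.
if use_pct > 85:
    action = "trigger-cleanup"   # disk nearly full: run a cleanup workflow
else:
    action = "continue"          # plenty of space: proceed as normal

print(action)  # → trigger-cleanup
```

An automation platform does exactly this, but declaratively: the workflow author specifies the extraction pattern and the branch conditions instead of writing the parsing code by hand.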
4. Complex scheduling
Email systems like Outlook standardized the use of calendars for scheduling meetings or tasks to be completed. The use of scheduling in an automation platform, however, needs to be much more capable since automated IT processes are often run according to very complex scheduling rules.
Integration capabilities cannot be emphasized enough. To illustrate: the need to query free disk space (for example) exists no matter what the ecosystem is, so high-level, abstract commands free the author from having to explicitly add support for new platforms as their IT department adopts them. Instead, they can simply drag and drop a step called "query disk space" into their workflow without worrying whether the workflow will run on Windows, UNIX, OS/400, z/OS, etc.
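A small sketch of what such an abstract step looks like to the workflow author. The function name is hypothetical; the point is that one call hides the platform-specific command (`df` on UNIX, `wmic` on Windows, and so on), here illustrated with Python's cross-platform `shutil.disk_usage`.

```python
import shutil

def query_free_disk_gb(path="."):
    """Abstract 'query disk space' step: one call, any platform.

    shutil.disk_usage works identically on Windows and POSIX systems,
    so the workflow author never writes a platform-specific command.
    """
    usage = shutil.disk_usage(path)
    return usage.free / 1024**3  # bytes -> gigabytes

free_gb = query_free_disk_gb(".")
print(f"{free_gb:.1f} GB free")
```

When a new platform shows up in the data center, only the abstraction layer needs a new backend; every workflow using the step keeps working unchanged.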
The ability to support very complex scheduling rules is likewise a "must-have." For example, end-of-month financial reporting may need to run on the last business day of each month unless it is a holiday, in which case it would run on the next business day after that. Rules like this cannot easily be expressed using "Monday, Tuesday, ..." or "every n weeks" criteria of the kind end users are typically familiar with.
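The end-of-month rule above can be sketched in a few lines, which shows why calendar-style criteria fall short. The holiday set here is an illustrative assumption, not a real corporate calendar.

```python
from datetime import date, timedelta

# Illustrative holiday calendar (assumption for the example).
HOLIDAYS = {date(2012, 12, 31), date(2013, 1, 1)}

def is_business_day(d):
    return d.weekday() < 5 and d not in HOLIDAYS

def last_business_day(year, month):
    # Find the last calendar day of the month, then walk back past weekends.
    d = (date(year, month, 28) + timedelta(days=4)).replace(day=1) - timedelta(days=1)
    while d.weekday() >= 5:
        d -= timedelta(days=1)
    return d

def run_date(year, month):
    """Last business day of the month; if that day is a holiday,
    the next business day after it (per the rule in the text)."""
    d = last_business_day(year, month)
    if d in HOLIDAYS:
        d += timedelta(days=1)
        while not is_business_day(d):
            d += timedelta(days=1)
    return d

print(run_date(2012, 11))  # → 2012-11-30 (a Friday, no holiday)
print(run_date(2012, 12))  # → 2013-01-02 (Dec 31 and Jan 1 are holidays here)
```

No "every n weeks" picker can express this; a scheduling engine with business-day and holiday-calendar awareness can.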
Other core capabilities that are not automation-specific include multi-tenancy, Role-Based Access Control (RBAC), High Availability (HA), and auditing. These will not be discussed here because they appear in several other types of IT Operations systems with which you are undoubtedly familiar.
None of these capabilities belongs to one type of automation system alone. Instead, they belong in a core platform that can be utilized by several types of solutions to meet various business needs. Whether the need is general (e.g. job scheduling or application release) or specific (e.g. processing large Hadoop datasets or copying your SAP system from one instance to another), a feature-rich, automation-centric foundation ensures that your operations systems will not only meet your current needs but also grow as those needs do.
More importantly, building business oriented capabilities on top of an enterprise capable automation platform – whether that purpose is for provisioning or the more general application release – not only ensures that you have a solution to meet your immediate needs but also future-proofs your investment by guaranteeing that business-centric capabilities will continue to improve as the underlying platform evolves.