Now that you’ve started to understand System Center 2012 SP1 – Orchestrator and its power, in this post I’m going to discuss Orchestrator’s disk maintenance capabilities, and I’ll demonstrate why Orchestrator is like the IT assistant you never had.
One of the jobs that drives me nuts is the maintenance of logs and data on my hosts. Take Exchange, for example: in my production environment, this software is more than happy to generate over 1.2 GB of logs per server, per day, just for users connecting over HTTPS. As you can appreciate, it doesn’t take many days to start starving the server of storage, and deleting these logs isn’t an option, so they must be archived.
Once a week this process is executed manually by myself or a member of my team. As these are live systems, we need to keep an eye on the servers while an average of 10 GB of logs is archived away, which is like watching paint dry.
Due to the size of this solution, I am going to break the post into two parts, so let’s get this going.
With my trusted friend Orchestrator, I have taken the opportunity to delegate this task to its ever-capable hands. Better still, Orchestrator can run the process for me on a nightly basis, reducing the impact even further. In addition, the list of servers requiring maintenance never appears to shrink, so instead of updating the runbook for each new server, I have chosen to create a central list to work from. And since not every file needs to be archived, I have also added the ability to simply purge files from the system.
Let’s begin with the central list. For this I have decided to use SQL Server, with a database created specifically for my MIS Activities.
I plan to use this database for a number of different projects and tasks, including logging my runbook activity status and, of course, the list of disks that I need to maintain. Let’s begin by creating the table for our disk maintenance work.
| Column Name | Data Type | Allow Nulls |
| --- | --- | --- |
| SourcePath | nvarchar(1024) | No |
| SourceMask | nvarchar(50) | No |
| Action | nvarchar(50) | No |
| Age | numeric(18,0) | Yes |
| TargetPath | nvarchar(1024) | Yes |
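If you would rather script the table than click through SQL Server Management Studio, a quick sketch follows. Note that the SQL Server instance name (pdc-sql-svr01), the database name (MISActivities), and the table name (DiskMaintenance) are placeholders of my own choosing, so substitute your own:

```powershell
# Sketch: create the disk maintenance table via the SQLPS module.
# The instance, database, and table names are placeholders - adjust to suit.
Import-Module SQLPS -DisableNameChecking

$createTable = @'
CREATE TABLE dbo.DiskMaintenance (
    SourcePath nvarchar(1024) NOT NULL,
    SourceMask nvarchar(50)   NOT NULL,
    [Action]   nvarchar(50)   NOT NULL,
    Age        numeric(18,0)  NULL,
    TargetPath nvarchar(1024) NULL
);
'@

Invoke-Sqlcmd -ServerInstance 'pdc-sql-svr01' -Database 'MISActivities' -Query $createTable
```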
Finally, let’s seed the table with some initial pruning and archive work, which we will have Orchestrator process for us.
| SourcePath | SourceMask | Action | Age | TargetPath |
| --- | --- | --- | --- | --- |
| \\pdc-ex-svr01\c$\inetpub\logs\LogFiles\W3SVC1 | *.log | Archive | 10 | \\pdc-fs-smb01\archives |
| \\pdc-ex-svr01\C$\Windows | *.evtx | Purge | 5 | |
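Again, for those scripting along, seeding those two rows looks something like this (using the same placeholder instance, database, and table names as before):

```powershell
# Sketch: seed the table with the archive and purge jobs above.
# Same placeholder instance/database/table names as the previous snippet.
$seedData = @'
INSERT INTO dbo.DiskMaintenance (SourcePath, SourceMask, [Action], Age, TargetPath)
VALUES (N'\\pdc-ex-svr01\c$\inetpub\logs\LogFiles\W3SVC1', N'*.log',  N'Archive', 10, N'\\pdc-fs-smb01\archives'),
       (N'\\pdc-ex-svr01\C$\Windows',                      N'*.evtx', N'Purge',    5, NULL);
'@

Invoke-Sqlcmd -ServerInstance 'pdc-sql-svr01' -Database 'MISActivities' -Query $seedData
```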
With our database now ready, we can proceed to use Orchestrator to create a set of four runbooks for our project. Of course, we could achieve all that we need in a single runbook, but segregating the work into smaller runbooks ensures that we can unit test the solution in bite-sized chunks.
Let’s begin by creating a new folder for our solution. In my case, I am calling this 2. Storage Maintenance.
The first runbook I create will be called 2.2 Archive Files. This runbook will accept the details of the archive job, including the Source, Target, and Age of the files for archival.
On the canvas I will place and hook up the following:
Starting with the Initialize Data activity, I configure the following properties:
Next, on my Archive Files activity I define the following setting:
Finally, select the Pipeline/Link from the Archive Files activity to the Failure activity, and set its properties.
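To make the data flow concrete, here is roughly the logic the Archive Files step performs, sketched as a standalone PowerShell script. The parameter names mirror the runbook’s inputs, but the script itself is my illustration rather than the runbook’s actual implementation:

```powershell
# Illustrative equivalent of the 2.2 Archive Files runbook: move files that
# match a mask and are older than a given age from the source to the target.
param(
    [string]$SourcePath,
    [string]$SourceMask,
    [int]$Age,
    [string]$TargetPath
)

# Files last written before this date are due for archiving.
$cutoff = (Get-Date).AddDays(-$Age)

Get-ChildItem -Path $SourcePath -Filter $SourceMask |
    Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -lt $cutoff } |
    ForEach-Object { Move-Item -Path $_.FullName -Destination $TargetPath }
```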
At this point you are well on the way: our database is in place, and the first of our runbooks for file processing is ready for use.
In the next post, we will continue creating the runbooks, and we will even see how easy it is to leverage PowerShell within the runbook.