Creating an Enterprise Scripting Template in PowerShell

Published: Jan 07, 2011

What we did in the previous articles (Scripting Database Objects in PowerShell and Scripting Database Objects with Advanced Scripting Options in PowerShell) was set the stage for this final piece, which will make everything far more usable.  Don’t get me wrong, those other skills come in handy from time to time and are useful to have.  They also give us a frame of reference, so we can build up the examples gradually and keep them easy to understand.  So what we’re going to talk about this time is taking everything we’ve learned so far and using it to build a scripting process that’s easier to sustain in the different areas where you’re going to need it.  And I suppose it would be good to start by defining what ‘easier to sustain’ actually means, so we’ll know what we’re trying to achieve.

In this case, we’re trying to meet some simple criteria:

  1. Make it easier to manually recover a single object.
  2. Make it easier to automate the recovery of a single object.
  3. Make it easier to compare versions of an object from different days.
  4. Make it easier to allow users to recover their own objects.
  5. Make it easier to recover a specified group of objects.


We’ve got this list of 5 basic things that we want to accomplish.  There could easily be some more discrete tasks we might want to perform as well, but I think they’ll all be derivatives of these.  So how are we going to make this happen?  Well, we’re not going to change the process all that much.  All we’re going to do is change it so that each object is scripted to its own file, and possibly even organized by date.  This is what’s going to allow us to keep a version history by date as well as easily restore a single object or even compare different versions of objects.  If you store everything in a single script, then you’ve got to search through the file to find the object in question and then copy and paste it into an editor to restore it.  And it would be next to impossible to reliably automate the recovery or to allow users to recover their own objects.  So by putting each object in its own file, you ensure that the object is easy to find and easy to run because there’s no other code to watch out for.
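
To give you a concrete picture of criteria 1 and 2, here's a rough sketch of what recovering a single object looks like once each one lives in its own file.  The server, database, and file names are placeholders, and it assumes the Invoke-Sqlcmd cmdlet from the SQL Server snap-in is available; we'll build the actual folder layout below.

>invoke-sqlcmd -ServerInstance "MyServer" -Database "MyDB" -InputFile "C:\DBscripts\20101206\dbo\Tables\Orders.txt"

One file, one object, one command, which is exactly the kind of recovery that's easy to automate or hand off to a user.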

OK, no more talking… let’s code.

We’re going to start off with our original code for reference.

>[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

>$so = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions

>$so.IncludeDatabaseContext = 1

>$so.ScriptDrops = 1

>$so.NoFileGroup = 1

>dir | %{$_.Script($so) | out-file c:\Tables.txt -append}

Note: I went ahead and assumed that we wanted to keep our advanced scripting options in there, especially since it’s a good idea to have a database context in code like this.

Ok, this is where we’re going to start from.  Let’s start making this look like we want it.  First, let’s put the scripts into a text file named after the table itself.

>dir | %{$TableName = $_.Name; $_.Script($so) | out-file "c:\$TableName.txt"}

Here’s what changed.  I assigned the name of the current object to a variable called $TableName.  I didn’t have to do this, but it makes the script easier to read when you see it used later in the line.  If you have a lot of loops inside of loops, it can be hard to keep track of which one you’re in, so seeing a "$_" everywhere can get hard to decipher.  You can also see that I enclosed the file name in quotes and used the variable instead of the hardcoded "Tables.txt" that was there before.  Now the file will be created with the name of the current object.  I also took away the ‘-append’; since we’re creating a new file each time, we really don’t need it anymore.

We still need to deal with the schema though.  We can do that in a couple of ways: we can either create a schema folder or make the schema part of the file name itself.  We’ll go with the schema folder here.  First we need to create a variable to hold the base path for our scripts.  We’re assuming that all of your scripts will be housed in a central location, and if they’re not, you’ll want to make it easy to change where they’re stored.  Keeping a base path variable ensures that you only have a single place to change the path instead of having to search the script for it and possibly making a mistake.  So here’s our base path:

$BasePath = "c:\DBscripts"

Now we need to test whether the schema folder we’re going to write our scripts to already exists, and if not, we’ll create it.  This is very useful because it makes your script more dynamic should someone add a new schema without telling you (and they probably wouldn’t tell you).

$Schema = $_.Schema;

IF ((test-path $BasePath\$Schema\Tables) -eq $False) {MKDIR $BasePath\$Schema\Tables}

Here I test whether the path already exists.  If it doesn’t then I create it.  Notice I specifically scripted it for tables.  This is because any schema can have different types of objects and I want the entire schema scripted under its own folder.  So once I open the schema’s folder I can potentially see folders for Tables, StoredProcedures, Functions, Views, etc.  It’s just a good way to organize things, but you can organize yours however you like.
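
If you do decide to script other object types the same way, the folder check is easy to reuse.  Here's a quick sketch of one way to wrap it up; the function name and its parameters are my own invention, not something from the earlier articles:

>function New-ScriptFolder ($BasePath, $Schema, $ObjectType) {$Path = "$BasePath\$Schema\$ObjectType"; IF ((test-path $Path) -eq $False) {MKDIR $Path | out-null}; $Path}

>$ViewsFolder = New-ScriptFolder "c:\DBscripts" "dbo" "Views"

Run it once per object type and you get the Tables, Views, and StoredProcedures folders built the same way every time.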

Now I need to add the date element.  I prefer to have the date out front because I’m usually looking for things by date, but again, you can organize yours however you like.  Here’s an example of how my path will look when I’m done:

C:\DBscripts\20101206\dbo\Tables

So here’s how to add the date element:

$ExecutionDate = Get-Date -UFormat "%Y%m%d";

Now I need to add that to my test-path line.  This is going to make sure that every day you run this, it’ll create a new path for that day’s date.

$ExecutionDate = Get-Date -UFormat "%Y%m%d";

$Schema = $_.Schema;

IF ((test-path $BasePath\$ExecutionDate\$Schema\Tables) -eq $False) {MKDIR $BasePath\$ExecutionDate\$Schema\Tables}

Again, the new piece here is the $ExecutionDate folder in the path.  And the only thing really left to do is to put the new dynamic path into the loop where the file gets created.

>$FullFilePath = "$BasePath\$ExecutionDate\$Schema\Tables"

>dir | %{$TableName = $_.Name; $_.Script($so) | out-file "$FullFilePath\$TableName.txt"}

Just to make things easier I created a new variable with the full path.  It just makes the loop easier to read.
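
And once you've run this on a few different days, criterion #3 from our list practically falls out for free.  Here's a rough sketch of comparing two days' versions of the same table (the dates and the table name are placeholders I made up):

>compare-object (get-content "C:\DBscripts\20101205\dbo\Tables\Orders.txt") (get-content "C:\DBscripts\20101206\dbo\Tables\Orders.txt")

Anything that comes back with a SideIndicator is a line that changed between those two days.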

Now, since I’ve done a lot of talking in between these lines of code, here’s our final version so you can see it all together.  Notice that in the final version the schema lookup, the folder check, and the full file path all move inside the loop; each table can belong to a different schema, so they have to be worked out per object.

>[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

>$so = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions

>$so.IncludeDatabaseContext = 1

>$so.ScriptDrops = 1

>$so.NoFileGroup = 1

>$BasePath = "c:\DBscripts"

>$ExecutionDate = Get-Date -UFormat "%Y%m%d";

>dir | %{$Schema = $_.Schema; $TableName = $_.Name; $FullFilePath = "$BasePath\$ExecutionDate\$Schema\Tables"; IF ((test-path $FullFilePath) -eq $False) {MKDIR $FullFilePath}; $_.Script($so) | out-file "$FullFilePath\$TableName.txt"}

Ok, so I’ve given you a way that you can script your schemas and some good reasons behind the way I did it.  You’re of course free to organize your folders as you see fit, and you can make this whole process work any way you wish.  What I didn’t do was provide you with the complete solution.  There’s nothing wrong with this code, but you’ll need to decide whether you want to do this for a single server, or for multiple servers.  If you do it for multiple servers you may want to add the server name to your base path.  And you may want to add the database name as well.  These are decisions I can’t make for you, but if you follow the formula I’ve given you, you should be able to put these things into the script with little trouble and be up and running in no time.
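
As a quick sketch of that last idea (the server and database names here are strictly placeholders), you could fold both into the base path and leave everything downstream of it alone:

>$ServerName = "MyServer"

>$DatabaseName = "MyDB"

>$BasePath = "c:\DBscripts\$ServerName\$DatabaseName"

The date, the schema folder, and the Tables folder all hang off $BasePath, so scripts from different servers and databases land in their own trees without touching the rest of the code.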
