System Center Orchestrator: Solving Return Data is blank

System Center Orchestrator (formerly Opalis Orchestrator, and now lovingly called SCORCH by its fans) is a powerful automation tool, but it has a lot of gotchas that make it difficult to begin rolling out in an environment.
For instance, it’s desirable to Return Data back from a Runbook, either to a parent runbook, or to Service Manager or another system that will act on the results. As an example, imagine a runbook that can fork in many places and then returns an exit code, which we send off to a parent runbook to register as an Operations Manager event, or to send in an e-mail. There are a lot of options. Even with this common scenario, people still run into a brick wall when they experience the following.
Symptom
When a runbook has valid data pushed to the SCORCH Databus, adding a Return Data step results in a blank window like the following.
Cause
Oddly enough, the settings for which data is returned from the runbook aren’t configured from anywhere within the runbook, but rather within the runbook properties itself.
Resolution
This one is easy to fix: go up to the top of the Orchestrator window, right-click on the Runbook name itself, and choose Properties.
Next, we browse down to Returned Data and prepare to roll our eyes.
This is actually the place where you enable values for the Return Data activity. I know, I think it’s absolutely horrific from a usability and discoverability standpoint, and one of the many things that make Orchestrator a challenge to use.
If we go back into the runbook itself now, we can check the runbook back in and out, and then our Return Data field will update. Enjoy!
Part I: Building an AD Domain Testlab with DSC

I often rebuild my testlab from the ground up, and have gotten to the point that setting up my Domain, DHCP, DNS and the like is a very quick and easy task. But it wasn’t always this way; in fact, I used to spend hours just trying to get DHCP and a Domain Controller working.
This is post one of a projected three-part series in which we’ll use the magical power of infrastructure as code and embrace the DevOps lifestyle using PowerShell Desired State Configuration. In post one, we’ll start easy and just change the name of our machine and the workgroup, then configure a local admin account in the same doc.
In part II - we’ll configure some Windows Roles, and make this system into a Domain Controller. In part III - we’ll pull out all of the stops and ensure that our DSC configuration handles DHCP and DNS as well, giving us a one-click DSC Testlab.
System Prerequisites
Using Hyper-V or VMware, make a VM with two NICs, one connected to an external and one to an internal virtual switch. For ease, make it a Server 2012 R2 VM.
Then we’ll apply WMF 5.0 to our server, found here.
Kick that bad boy off, and let ‘er reboot. Uh...let him reboot. What I’m trying to say is, regardless of the sex of your system, reboot it.
We’ll need to provide the DSC resources we want to use, so the next step is to download the xComputerManagement script module, provided here.
Now, download and extract this to the following path: the $env:ProgramFiles\WindowsPowerShell\Modules folder.
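If you’d rather script the download and extraction, here’s a sketch of one way to do it. The URL and zip file name below are placeholders; substitute the real download link from the resource’s page.

```powershell
# Hypothetical download URL - substitute the real link from the DSC resource download page
$zip = "$env:TEMP\xComputerManagement.zip"
Invoke-WebRequest -Uri 'https://example.com/xComputerManagement.zip' -OutFile $zip

# Unblock the file so PowerShell will trust the scripts inside it
Unblock-File -Path $zip

# Extract into the module path that DSC searches by default
$dest = "$env:ProgramFiles\WindowsPowerShell\Modules"
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($zip, $dest)
```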
Making our first configuration
In the ISE, we can run the Get-DscResource cmdlet to see if PowerShell detects our new xResource. If you don’t see the following, stop now and make sure you downloaded the xComputerManagement resource before proceeding.
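The check itself is a one-liner; if the module landed in the right place, you’ll get a row back for xComputer:

```powershell
# Verify that the xComputer resource (from xComputerManagement) is visible to DSC
Get-DscResource -Name xComputer
```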
Without digging deeply under the covers, we can see that making a PowerShell configuration is really quite similar to creating a Function. All we’ve got to do is use the Configuration keyword in a format that should look quite familiar.
Configuration TestLab {
    Param($nodeName)

    Import-DscResource -Module xComputerManagement

    Node $nodeName {
    }
}
When we run this, we’ll end up with a compiled Configuration in memory, just like we would when we run a Function. We can call it by typing TestLab, and it will accept a parameter of -NodeName, which is the computer to apply the configuration to.
We’ll compile our Configuration, then execute it, which generates a .mof configuration file. Finally, we run Start-DscConfiguration -Path .\PathToConfiguration (the folder containing the .mof) to apply the changes to our system.
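Put together, the compile-then-apply flow looks something like this. This assumes the configuration above is saved in a file called TestLab.ps1, and note that by default a configuration writes its .mof files into a folder named after itself:

```powershell
# Compile the configuration into memory by dot-sourcing the script that defines it
. .\TestLab.ps1

# Execute it like a function; once it contains a resource, this emits .\TestLab\localhost.mof
TestLab -nodeName 'localhost'

# Point Start-DscConfiguration at the folder containing the .mof
Start-DscConfiguration -Path .\TestLab -Wait -Verbose
```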
Adding Configuration Items to our Configuration
So far, we’ve got the skeleton of a config but it isn’t making any changes to our system.
We’ll now rely on one of the cool features of the ISE: IntelliSense, i.e. super mega auto-complete. Right underneath Node $nodename, let’s start by typing xComputer, then hitting Control+Space to pop up IntelliSense and see which configuration options we can use from this resource.
We see that we can configure a lot of things:
- Credential : If you need special rights to implement this change
- DependsOn : We’ll use this in Part II to order the application of our changes
- DomainName : if we want to join a new Domain, we’d use this configuration
- UnjoinCredential : if we need special rights to pull our machine off of an existing domain
- WorkGroupName: to specify a new workgroup, you specify this setting
So, for part I of our DSC walk-through, we only want to change the MachineName and the WorkGroupName, so let’s drop these values in under $nodeName. I want to name my new system DSCDC01 and my new WorkGroup to be called TestLab.
xComputer NewNameAndWorkgroup
{
    Name          = 'DSCDC01'
    WorkGroupName = 'TestLab'
}
The Next step…wait, That’s all!
Just to reiterate, this is our total script, with some small changes made to add parameter support.
configuration TestLab
{
    param
    (
        [string[]]$NodeName = 'localhost',
        [Parameter(Mandatory)][string]$MachineName,
        [Parameter(Mandatory)][string]$WorkGroupName
    )

    # Import the required DSC Resources
    Import-DscResource -Module xComputerManagement

    Node $NodeName
    {
        xComputer NewNameAndWorkgroup
        {
            Name          = $MachineName
            WorkGroupName = $WorkGroupName
        }
    }
}
To implement this, all we have to do is go to our new DSC client and execute the code, just like we would with a Function. We then run it like we do a cmdlet and provide some params.
TestLab -MachineName DSCDC01 `
-WorkGroupName TESTLAB -Verbose
That will create an output file in .mof format
Directory: C:\TestlabDC\TestlabDC\xComputer
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 3/19/2015 2:23 PM 1670 localhost.mof
The final step here is to apply the configuration to our machine and see what happens.
Start-DscConfiguration -ComputerName localhost -Path .\xComputer `
-Wait -Force -Verbose
And watch the beautiful colors scroll by
Our log output says that a reboot is needed to kick things off, but we can take a look at System Management to see what the setting will be after a reboot.
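If you’d like to confirm the pending rename without waiting on the reboot, one way (a sketch; these are the standard Windows registry locations for the computer name) is to compare the active name against the one queued up for the next boot:

```powershell
# The computer name currently in effect
$active  = (Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\ComputerName\ActiveComputerName').ComputerName

# The computer name that will apply after the reboot
$pending = (Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName').ComputerName

if ($active -ne $pending) {
    "Rename pending: '$active' will become '$pending' after a reboot"
}
```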
That’s it!
Now, join us in our next post in this series to see how we add a few extra paragraphs to make this into a Domain Controller and get our TestLab really going.
- This post - Step 1 Getting Started, renaming our machine and joining a workgroup
- Step 1.5 Creating our first local user
- Step 2 Making our user a local administrator
- Step 3 Making our system a Domain Controller
Orchestrator and PowerShell - Solved: the execution of scripts is disabled on this system

System Center Orchestrator (formerly Opalis Orchestrator) is a wonderful tool for automating heavy lifting in your environment, like managing Exchange server Windows Updates while respecting DAGs, or other repetitive tasks that take a lot of manual labor, so long as the logic for responding to particular circumstances is well understood.
That being said, Orchestrator can be one of the most daunting and gotcha-ridden programs for any System Center DevOps or Wintel admin to wrap their head around.
Take this case: I have a straightforward task that involves running a PowerShell script, but every time I run a script, I run into this…
Problem
When using the Orchestrator activities ‘Run .Net Script’, ‘Run PowerShell Script’, or ‘Run Exchange Management Shell Cmdlet’, the following error occurs, halting the runbook.
Text from error
PowerShell invoke error: There were errors in loading the format data file:
Microsoft.PowerShell, , .format.ps1xml :
File skipped because of the following validation exception: File .format.ps1xml
cannot be loaded because the execution of scripts is disabled on this system.
Reason
This seems fairly straightforward: there’s a problem with the execution policy, so I should go in and launch PowerShell as an administrator, then run ‘Set-ExecutionPolicy RemoteSigned’ or Unrestricted, right?
Well, there’s a gotcha. System Center Orchestrator was developed as an x86 program only. This means that when it launches PowerShell, it calls the 32-bit version of PowerShell. If you’re running Server 2008 or later, chances are you’re on an x64 version of Windows Server, so when you launch PowerShell yourself, by default you’ll be running the 64-bit version.
PowerShell has separate execution policies for 32-bit and 64-bit mode.
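You can see the split for yourself. On a 64-bit server the 32-bit host lives under SysWOW64, so checking the policy from each host (a quick sketch) can return two different answers:

```powershell
# Policy as seen by the 64-bit host (the default PowerShell on x64 Windows)
& "$env:windir\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -Command Get-ExecutionPolicy

# Policy as seen by the 32-bit host - the one Orchestrator actually launches
& "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -Command Get-ExecutionPolicy
```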
Solution
Depending on whether you’re running the Runbook tester, or if you’re launching the Runbook from the Orchestration console:
If running the Runbook Tester: on the local system, launch PowerShell in x86 mode as an admin, and run Set-ExecutionPolicy RemoteSigned.
If launching the Runbook from the Orchestration Console: connect to the Runbook server (as PowerShell and all commands will be executing from there), launch PowerShell in x86 mode as an admin, and run Set-ExecutionPolicy RemoteSigned.
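In either case the fix is the same command, just run against the 32-bit host from an elevated prompt. A sketch follows; the LocalMachine scope is my assumption, so adjust it as your environment dictates:

```powershell
# Set the execution policy for the 32-bit PowerShell host that Orchestrator uses
& "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -Command `
    'Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine -Force'
```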
Summary
This is a relatively simple problem, made more complex by the need to distinguish between where PowerShell is actually running, and by needing to know which version of PowerShell to make the change in.
I hope this saves you some time; it could have saved me about four hours today!
Solved: Getting a user's Distribution Group Memberships

It’s surprisingly hard to get back a listing of all of a particular user’s Exchange Distribution Group memberships. The strange thing is that you can very easily get a list of all of a user’s AD security groups using Get-ADPrincipalGroupMembership. If this works for your purposes, great, but if what you really need is a report of every user or resource mailbox’s Distribution Group membership, I’ve come up with the following.
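For the simple AD security group case, that cmdlet alone does the job. A sketch (the identity 'Stephen' is just a sample account name):

```powershell
# Requires the ActiveDirectory module from RSAT; lists every AD group a user belongs to
Import-Module ActiveDirectory
Get-ADPrincipalGroupMembership -Identity 'Stephen' | Select-Object Name, GroupCategory
```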
Get-DistributionGroup | ForEach-Object {
    $groupName = $_
    Get-DistributionGroupMember -Identity $groupName.Name | ForEach-Object {
        [pscustomobject]@{GroupName = $groupName; GroupMember = $_.Name}
    }
} | Group-Object -Property GroupMember |
    Select-Object Name, @{Name='Groups';Expression={$_.Group.GroupName}}
Whoa! What’s happening there?
Here’s the walkthrough of why this works:
- We’re getting a big list of all of the distribution groups
- For each group, resolving its full membership
- For every entity we discover who is a member of a group, we create a custom object of “username,groupname”
- Once this finishes, we send the results to the Group-Object command to let it pick out every unique user
- Then we gather all of their memberships using a calculated property
- We then can send this on to a CSV file, to get an output like this.
#TYPE Selected.Microsoft.PowerShell.Commands.GroupInfo
"Name","Groups"
"Stephen","Group_1 Group_2 OtherFolks"
"Lenna.Paprocki","Group_2 OtherFolks"
"James.Butt","Group_2 OtherFolks"
"Josephine.Darakjy","OtherFolks"
In my opinion, XML would be a better way to present this info than a CSV. Additionally, it would be very cool to have a lighter-weight cmdlet to return just the Distribution Group membership of one user. If I come up with that approach, I’ll be sure to update this.
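As a rough sketch of that lighter-weight idea (still slow, since Exchange offers no reverse lookup here, and the mailbox name 'Stephen' is just a stand-in), you could filter the same sweep down to one user:

```powershell
# Sketch: find the distribution groups that one user belongs to
$user = 'Stephen'
Get-DistributionGroup | Where-Object {
    (Get-DistributionGroupMember -Identity $_.Name).Name -contains $user
} | Select-Object Name
```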
Hope you enjoy!
Migrating to your own WordPress.org account - things that will suck and how to avoid them

I’ve recently been in the process of migrating my blog off of WordPress.com hosting to my own WordPress.org account. I tried a few things, a number of which did not work well, and I hope to help you avoid them if you try the same thing too.
After installing WordPress on a localhost / Linux LAMP setup (Linux, Apache, MySQL, PHP), you’re prompted for credentials when uploading content or plugins
This one is super annoying. You’ll basically see a message like this whenever you try to install a new plug-in, and you’ll have to put in your Linux credentials.
Whoa whoa whoa, contact my host? I am the host. Uh-oh.
--What’s going wrong
What happens here is that if you follow the instructions on this page for setting up WordPress on a LAMP stack, you’ll end up with WordPress installed to /var/www, with all of the files and folders there owned by your user account. This means that when you try to upload files, Apache (the Linux web service, which runs as the user account www-data) will not have any permission to this path. Hence the prompt for credentials.
--How to fix it
This is simple. In your Linux/Ubuntu system, open a terminal. If you’ve got an Azure Ubuntu VM like me, connect via SSH using PuTTY.
cd to your WordPress directory, most likely /var/www.
Run the following commands
chown www-data *
chown www-data */*
chown www-data */*/*
Thanks to fkoosna for this answer.
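A tidier equivalent (a sketch, assuming the standard www-data user and group and a /var/www install path) is a single recursive chown, run with root rights:

```shell
# Give Apache's service account ownership of the whole WordPress tree
sudo chown -R www-data:www-data /var/www
```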
You set up an Azure WordPress website using the quick create option from the Azure Store, and WP-Admin / Dashboard and page loads are slow
This one really stinks. You’ll get eight or nine second page loads (shoot for under two!), and doing work on your site is a slog, because the pages load so slowly.
When you set up an Azure WordPress site using quick create from the Azure store, you’ll end up with a ‘shared’ website, meaning that your WordPress instance runs alongside other sites competing for resources. You also generally have very little control over the site, as you’re using Platform as a Service (PaaS).
--What’s going wrong
Chances are that one of the other tenants in your shared host is being unruly, with lots of scripts running or other intense server behavior. Azure will attempt to quash bad activity like that, but in the end it can only do so much. When you’re ready to move to the big leagues and have dedicated hosting for your site…
--How to fix it
The fix for this issue is simply to upgrade your instance to a Basic or Standard instance, which moves you into Infrastructure as a Service (IaaS), in which you now have a VM you can work with and tweak.
Log in to the Azure portal and click on Web Sites, then click your desired site and choose Scale.
On this page, keep in mind your budget. As soon as we click away from Free/Shared and move up to Basic, we’re talking about a full VM tier here. This can run about $35 a month. That’s more than twenty dollars (which is my ‘uh, maybe I should pay attention to this’ threshold). If you’re on a totally free account, you’ll have to remove the spending limit, or you’ll basically have a VM immediately turn off and not work anymore. Goodbye, website.
On this page, we can also configure Autoscaling, if we want to allow our site to scale up and out as needed. Keep in mind that you’ll have to do a bit of work to make this function. Azure can’t magically code-in scale-out support for you.
When setting up additional Azure websites after scaling up, the WordPress installer does not launch
This one took me a while to figure out. I was creating blog sites for my wife, and in the process of spinning one up in the Azure portal, I would point it to an existing MySQL (or SQL Server, if you’re using Brandoo WordPress) DB, expecting to colocate all of my blogs in one instance of SQL that I keep clean.
So, I’m making a new WordPress site just to show this to you.
Alright, let’s see if it happens again.
The site is built, let’s see what happens when I go to the URL.
Um, shouldn’t there be something here?
--What’s going wrong?
The first time a WordPress instance is created, it will attempt to connect to MySQL and initialize a few tables, things like that. However, if another blog has already been set up before and has permissions and data in those tables, the WordPress install will stall out! Specifically, there is a setting we need to change to allow our WordPress installer to spin up some new tables in our existing MySQL DB.
--How to fix it?
This was one I never would have solved if not for this blog post (insert link here later when I find it!). We need to edit the wp-config.php file, which has a number of useful settings we can change to help shape and control the way that WordPress runs. We need to address the $table_prefix line in this file, because our first blog instance is already using that prefix, preventing the installer from running.
Assuming you’re using Azure Websites, we’ll use the very, very very useful Webmatrix tool to work with an online copy of our site. From the Website window, click WebMatrix
Now, choose Edit Live Site Directly.
This gives us a nice way to upload files without resorting to FTP, and it can be used to back up your site as well! You can even run the site off of your laptop or PC if you choose to ‘Edit Local Copy’. We need to edit the wp-config.php file, which sets up our blog for us when the WordPress installer runs.
Skip down to about line 62, and look for the line beginning
$table_prefix = 'wp_';
Change this prefix to be anything else in the world, and you’re set!
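For example, the whole change in wp-config.php is just the one line (the prefix 'wp2_' below is arbitrary; any prefix not already in use in that database will do):

```php
// Before: both blogs fight over the same 'wp_'-prefixed tables
$table_prefix = 'wp_';

// After: the new blog gets its own set of tables in the shared database
$table_prefix = 'wp2_';
```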
Hit Control+S to save the change…
then hit F5 in your browser
But wait, there’s more!
I’m still not done with my migration. I’ll let you know if I run into any other really fun problems that destroy my blog along the way. I’ll tell you that deciding to move to my own hosting opened up a huge can of worms with regards to minifying CSS and JavaScript, and suddenly worrying about PageSpeed and things like that. I’m not certain yet if I’ll continue trying to host my own site, but the learning alone has been worth it.
Is your SCCM SQL stuck in Evaluation mode? Don't despair!

Have you ever wondered what happens when you install SCCM 2012 or 2007 on top of SQL and choose ‘Evaluation mode’, then forget to enter the SQL key?
SQL will turn off the SQL Database Service on day 180, and never turn back on until you license it
How’s that for getting your attention?
Recently at a client, my contact was searching for their install keys, and promised to provide them later. No problem, I said, and proceeded to install both ConfigMgr 2012 R2 and SQL 2012 SP1 in evaluation mode. Typically what would happen here is that after a week or so, the VAR would get back to us with some keys… This time, however…
The big rub was that this was an educational institution. Schools are sold licenses in different quantities and at different rates than your typical enterprise or SMB, the sorts of customers I deal with much more often. Those larger firms often buy a bundle of SQL licenses, and I’d just use one of them, which would get re-added / comped to their license pool the next time they re-upped their licenses. Schools don’t typically need SQL Server, relying instead on PostgreSQL or MySQL (shudder) or, even worse, Oracle.
When I contacted Microsoft about my dilemma and posted online, I was told that System Center does include a special SQL installer that won’t prompt you for a key, meaning that there are no install keys for me to find in my Volume License account. What should happen is that if you use the right ISO, it will contain your own license key for SQL pre-embedded, and you can actually see it when you run the installer. However, this installer will detect pre-existing SQL features, and will dump / not display the license info if it detects any.
When I posted online about it people told me to backup my SQL Server DBs, delete my SQL install, then reinstall using the special installer ISO (and hope that SCCM’s accounts restore without any pain).
Thanks to the help of my friend, MCT and SQL Badass Sassan Karai, we found a better way.
The problem
Your System Center license from Microsoft includes SQL install rights, but you’ve installed SQL using different media and don’t have another Enterprise or Standard key to use. Additionally, if you try to run a new install side-by-side, the SQL Server installer will detect the pre-existing instance and NOT display your license key.
The Solution
Set up a clean VM or other server and run the installer, using the special ISO you’ll get from your volume license account under System Center 2012 R2 Client License (or something similar). This time, when the installer runs and gets to the product key step, you can see your special embedded key!
Thank God for this embedded key!
I’m not certain how it’s done, but somehow when you download your ISO from Volume Licensing, your unique key is embedded into the ISO. Very cool technology.
Anyway, if you find yourself with SCCM installed on a SQL instance stuck in evaluation mode, and you don’t have any other keys to use and don’t want to reinstall it all, try this method. It works, can save you a full rebuild, AND it uses the key you’re legally entitled to.
Keep in mind, your license from MS only entitles you to run SQL in service of System Center, and not for any other reason. If you want to use SQL Server for something else, call Microsoft and get a license for it!