My Site - 2014 in review

December 30, 2014 FoxDeploy

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 76,000 times in 2014. If it were an exhibit at the Louvre Museum, it would take about 3 days for that many people to see it.

Click here to see the complete report.

Continue Reading...

Unholy Matrimony - wGet and PowerShell together

December 19, 2014 FoxDeploy

XML has been around for a LONG time. But just like working with HTML, it still kind of stinks. If you want to reach into a file and pull out values within certain tags, you'd better become a pro with XPath or be prepared to create some REALLY ugly regex. For instance, if we wanted to grab the values within the tag we care about for this blog post, the regex would be this simple little number.

<wp:attachment_url\b[^>]*>(.*?)</wp:attachment_url>

You know what they say about using regex…

perl_problems (XKCD #1171 - http://xkcd.com/1171/)

Fortunately for us, PowerShell makes it very easy to work with XML and pull out properties, especially if you take advantage of the built-in .NET System.Xml.XmlDocument object type to help you parse its values.

For this example, I wanted to make a full backup of my family WordPress photo site, but there was no easy way to download all of the photos from my blog posts, which is a necessity to transfer my content to another provider, or at least to maintain my own copy of all of my work. (Note to WordPress.com engineers: it would be totally sweet to have the option to download my entire blog in one nice little .zip!) So if you're curious about how to back up your WordPress blog, including all media, read further!

How to Export your files from WordPress

So, there’s no option to download all of the media, but there is the option to download a Blogtitle.XML file, which you can then import into other blog providers.  Go to your Blog Dashboard, then Settings->Export.

howToExport At the time I took this screen shot, I had a different working title for this post

 

howToExport2

Scraping XML

This XML file describes the overall layout of our site, including posts, forum replies, and things like that.  It also provides the full URL of all of the media actually featured within any of your posts.   If we’re good with XML parsing, we can scrape this document and pull out the URLs to all of our media featured anywhere on our site.  Here’s an example of what the XML looks like, showing the layout of a normal image post on a photo centric blog.

FindingContent

So, now that we've got an XML file, let's get into which commands we have available to work with XML. If you were to search for PowerShell commands that support XML or have the word XML in their name, you'd only find these few:

PowerShell commands with XML in the name

Let's stay away from Import-CliXml; down that path lies only sorrow. This cmdlet is completely the wrong command for parsing someone else's XML files. See, the CliXML commands really expect you to take PowerShell objects, export them using Export-CliXml, and later reimport them using Import-CliXml. That works for a lot of things, but it's not the right command to use to import someone else's XML.

Instead, to import those files, we’ll simply take the content of the XML file, store it in a variable (in this case $xml) and be sure to cast that as [XML] in the process.

#Cast as [xml] while creating the variable
[xml]$xml = Get-Content "C:\Users\Stephen\Downloads\theowenden.wordpress.2014-12-17.xml"

And once that's done, a quick Get-Member will show us that PowerShell is now parsing the content of this file successfully.

So many methods! Thank you PowerShell for making this easy!

We're looking for a method that lets us find certain elements based on the name of their tag (the 'wp:attachment_url' tag). Looking through the list, one jumps out at me. The method in question is $xml.GetElementsByTagName(). Our next step is to take a look at the XML file itself and see which tag contains the data we're interested in.

Here’s a screen shot of my .xml file, I’ve highlighted the tagname we’re interested in.

FindingContent Yep, same screen shot as before. A rarely seen example of code reuse in the wild!

 

Now, let's provide the name of the tag we want as the argument to our .GetElementsByTagName() method. If we get back a long list of image URLs, we're good to go.

CheckingOurTagName
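In text form, the check in that screenshot amounts to something like this (the tag name comes straight from the export file; Select-Object just keeps the output short):

```powershell
# Pull every wp:attachment_url element and peek at the first few values
$xml.GetElementsByTagName("wp:attachment_url") | Select-Object -First 3
```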

It should only take a short PowerShell hop to prepare a skeleton to download each file; we're almost done!

#Cast as [xml] while creating the variable
[xml]$xml = gc "C:\Users\Stephen\Downloads\theowenden.wordpress.2014-12-17.xml"

$count = $xml.GetElementsByTagName("wp:attachment_url").Count
$xml.GetElementsByTagName("wp:attachment_url") | %{
    "Downloading $i of $count"
    $remoteFile = $_."#text"
    $destination = "C:\Users\Stephen\Dropbox\Photos\Catherine\$($remoteFile.Split('/')[5])"
    $destination

    #ToDo:
    #Figure out how to download the file...should be easy, right?
    $i++
}


The wrinkle

I totally couldn’t figure out how to do this next part in PowerShell.

I tried every variation under the sun to do this natively in PowerShell, and was never able to succeed past the WordPress logon page to download these images.  I could logon (Using the FB example), but maintaining my session and using that to execute a download as an authenticated user?  No workey.  I tried the FaceBook method from the Invoke-WebRequest help, I tried making a local $Form.Fields object and populating the values, I tried the System.Net.WebCLient class.  Like Thomas Edison, I merely found 24 ways that wouldn’t work.

XML-Fails

I even went so far as to install Fiddler and proxy my Invoke-WebRequests through it to see why in the world I couldn’t maintain my session after passing authentication.  I gave the form my username and password, what more could it want from me?

Unholy matrimony, wGet and PowerShell together

I took a different approach. I already had the framework in place to pull the image URLs out of the XML file. I had the skeleton there to act on each and save them locally. The last missing piece was a mechanism to handle downloading the files and I knew that the mystical wGet utility from Linux can handle pretty much anything under the sun.

I did it, I merged PowerShell code to automate commands going to wGet. I’m not proud of it, but it worked!

From previous experience, I knew that you could provide a cookies.txt file to wGet and it could ‘Just work’ and pick the right cookie for the right site.  I used the awesome Firefox plug-in ‘Export Cookies’ to dump all of my cookies into a C:\temp\cookies.txt file. If you open up the cookies.txt, you can see a LOT of session data.  You can just cull down to the site you need to perform your task, but I was lazy and provided my full 2KB cookies.txt file to wGet.

Basically, enable the plug-in, go to WordPress and login. Then hit alt on the keyboard to display the toolbar, and choose the ‘Export Cookies’ option.

ExportCookies Awww one of the many photos I wanted to download!

You'll get a cookies.txt. Now, pass this via wGet's --load-cookies parameter, and watch the magic. To execute this command, you'll need to provide the following params to wGet.

wget.exe $remoteFile -O localpath --no-check-certificate --secure-protocol='auto' --keep-session-cookies -U "Mozilla" --load-cookies c:\temp\cookies.txt
  • $remoteFile - the URL to the remote file we want to download
  • -O localpath - the place we want to save the file
  • --no-check-certificate - this one was needed as WordPress didn't have a signed cert, or something to that effect
  • --secure-protocol - let wGet handle figuring out which protocol to use
  • --keep-session-cookies - yep, we need to preserve the data for our session
  • -U - user agent; we're spoofing a Mozilla user agent, as WordPress didn't like me using a PowerShell scripting agent
  • --load-cookies - the path to the cookies.txt file we exported earlier

Start with downloading just a single file!

$xml.GetElementsByTagName("wp:attachment_url") | Select -First 1 | %{
    $remoteFile = $_."#text"
    $destination = "C:\Users\Stephen\Dropbox\Photos\Catherine\$($remoteFile.Split('/')[5])"
    wget.exe --load-cookies c:\temp\cookies.txt $remoteFile -O $destination --no-check-certificate --secure-protocol='auto' --keep-session-cookies -U "Mozilla"
}

The output:

 Resolving 1redfamily.files.wordpress.com.
192.0.72.3
Connecting to 1redfamily.files.wordpress.com|192.0.72.3|:443.
connected.
WARNING: cannot verify 1redfamily.files.wordpress.com's certificate, issued by `/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority -
G2':  Unable to locally verify the issuer's authority.
HTTP request sent, awaiting response.
200 OK
Length:  6766012 (6.5M) [image/jpeg]
Saving to: `C:/Users/Stephen/Dropbox/Photos/Catherine/img_5194.jpg'
     98%  692K 0s  6500K .
     99%  705K 0s  6550K
    100%  COMPLETE

Yay! If one worked, then the same should be true for all of the rest!  Here’s the code!


Just a warning…the output from this will be VERY ugly if you run it from within the ISE. The ISE does not appreciate wGet’s method of displaying process output to the console window. If any of you can figure out how to preserve a session for a php-based logon server and to do it with only PowerShell, please let me know!

#Cast as [xml] while creating the variable
[xml]$xml = gc "C:\Users\Stephen\Downloads\theowenden.wordpress.2014-12-17.xml"
 
$i = 1
$count = $xml.GetElementsByTagName("wp:attachment_url").Count
$xml.GetElementsByTagName("wp:attachment_url") |  %{
    "Downloading $i of $count"
    $remoteFile = $_."#text"
    $destination = "C:\Users\Stephen\Dropbox\Photos\Catherine\$($remoteFile.Split('/')[5])"
    $destination
 
    #ToDo:
    #Figure out how to download the file...should be easy, right? - COMPLETED!
    wget.exe --load-cookies c:\temp\cookies.txt $remoteFile -O $destination --no-check-certificate --secure-protocol='auto' --keep-session-cookies -U "Mozilla"
    $i++
}

Sources

I couldn't have completed this job without the help of the forum post from the Ubuntu discussion board, the 'Export Cookies' plug-in, and the wGet utility.

  • Export Cookies - https://addons.mozilla.org/en-US/firefox/addon/export-cookies/
  • How to use WGET to download from a site with a login - http://askubuntu.com/questions/161778/how-do-i-use-wget-curl-to-download-from-a-site-i-am-logged-into
  • Get wGet - http://gnuwin32.sourceforge.net/packages/wget.htm
Continue Reading...

PowerShell Version 5, What's new!

December 17, 2014 FoxDeploy

PowerShell native switch configuration

I’m not going to dig into this too deeply, instead, read Jeffrey Snover’s great post on the topic here.

APT-GET comes to PowerShell!

The coolest new feature is OneGet, which is PowerShell's adaptation of the community-based software repository Chocolatey.  Chocolatey supports a tremendous catalog of software, allowing you to silently install software directly from the command line.  Some examples of software found in the Chocolatey gallery:

  • 7Zip
  • NotePad++
  • Google Chrome
  • Java
  • Flash
  • VLC Player
  • Microsoft C++
  • PuTTY
  • Fiddler
  • DotNet Framework
  • Virtual Box
  • Virtual Clone Drive
  • Foxit

You can see a full catalog of software here, http://chocolatey.org/packages.

 Sample of using OneGet to install packages

First and foremost, you'll need to temporarily allow remote script execution in order to use this version of OneGet.  That's because, behind the scenes, OneGet installs a program by downloading a Chocolatey install script and executing it, and if your execution policy prohibits it from running, you won't be having any fun. To get started, first install WMF 5.0, available here.  This may or may not require a restart for you.  Now, launch PowerShell and check out the version of PowerShell you're running with Get-Host.

Aw, yeah…Upgrayedd
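If you'd rather not change the machine-wide execution policy, one approach (assuming PowerShell 3.0 or later, which supports scoped policies) is to relax it only for the current session:

```powershell
# Allow scripts for this PowerShell session only; nothing persists after you close it
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope Process -Force

# Confirm: the Process scope should now show Unrestricted
Get-ExecutionPolicy -List
```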

Now, let's import the OneGet module and see what new commands are available.  PowerShell exposes some very nice functionality here.  Out of the box, we're able to add our own corporate PackageSource repository, and do some other interesting things:

Command              Purpose
-------              -------
Add-PackageSource    Add your own Package Source other than Chocolatey
Find-Package         Search your package sources for software
Get-Package          Get information about packages installed locally
Get-PackageSource    Get a listing of Package Sources available
Install-Package      Install a package
Remove-PackageSource Remove a Package Source
Uninstall-Package    Uninstall a package from your system

Let’s say that we needed a tool to work with PDFs, and had never heard of Adobe before.  We might run Find-Package, and pipe that into Where-Object to filter.

03 You could potentially discover software to install from the command line.
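A sketch of what that search might look like; the exact property names (Summary here) may differ between OneGet preview builds, so treat this as illustrative:

```powershell
Import-Module OneGet

# Search the default package source for anything mentioning PDF
Find-Package | Where-Object { $_.Summary -match 'PDF' }
```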

Let’s choose Foxit Reader.  Remember when I said to allow script execution?  Well this is why.  If you try to install without specifying this, you’ll get the following error.

04 The install script can’t run if you don’t allow for UnSigned Scripts during your Install-Package session

This is what is really happening when you use OneGet to install FoxitReader.  PowerShell first downloads the configuration script (C:\Chocolatey\lib\FoxitReader\tools\ChocolateyInstall.ps1), which looks like this:

Install-ChocolateyPackage 'FoxitReader6010.1213_L10N_Setup.exe' 'EXE' '/quiet' 'http://cdn01.foxitsoftware.com/pub/foxit/reader/desktop/win/6.x/6.0/L10N/FoxitReader6010.1213_L10N_Setup.exe'

Which, as you can see, downloads the .EXE from the provider's CDN, then passes quiet install parameters along to the .exe.

EDIT 12/16/2014: As of this writing the problem with Chocolatey packages not installing unless you run with the signing policy as ‘Unrestricted’ has been resolved.  Leaving the below for posterity.

So, hopefully you've launched an administrative session of PowerShell and set your execution policy to Unrestricted for the moment.  Assuming you've done so, you should see the following when you run your install for 7Zip or FoxitReader.  If you're not running as an administrative user, you'll get a UAC prompt, which I personally feel is good behavior, then the install will continue.  Since these scripts are configured by the application owners, some will be silent installs, some will not.

For instance, if you run the install of Visual C++ 2010 from an administrative PowerShell prompt, the application will install with no prompt whatsoever. All in all, very powerful stuff, and this finally brings Apt-Get-like functionality to PowerShell.

10/10 would download again.

EDIT: I've noticed that Install-Package has parameters to pass your switches along to the .exe files, and you can see there are a lot of parameters available.  However, it's early in the game, and as of this writing the help files don't exist for this and other PowerShell v5 resources.

Continue Reading...

Solving the DSC Pull Chicken and Egg problem

December 03, 2014 FoxDeploy

Learning DSC Series

This post is part of the Learning DSC Series, click the link to explore the rest of the posts!

My 100th Post, Thank you!

Hi guys, it’s here, my 100th post anniversary special! I want to thank all of my loyal readers, commenters, and the folks who’ve liked my blog over the last 18 months for their input, critique and exciting ideas.

I’d also like to thank my extremely talented friend Joie Brown for designing this wonderful and festive banner for my site to celebrate the occasion! She is a wonderfully skilled artist, illustrator and designer, and you can find out more about her freelance art here at www.joieart.net. She’s done work for My Little Pony, popular web comics and more, including her own printed comic book! This banner turned out great and I owe her a lot of gratitude for it.

Honestly, the feedback I’ve gotten from Reddit, Twitter and on my site itself is inspiring, and drives me to make better and better content. Thanks for sticking with me, and please, as always, feel free to e-mail me your questions, topic suggestions, or any critique! Stephen.Owen.ii@Gmail.com

DSC’s Chicken and Egg Problem

Part of my series on replacing and reproducing Group Policy using PowerShell Desired State Configuration.

Anyone who's followed my blog or industry news knows that there is a lot of excitement in the Windows world about the growth of PowerShell and the introduction of Desired State Configuration. DSC will surely grow to replace at least Group Policy, and likely also begin chipping away at Configuration Manager (SCCM or ConfigMgr, as it is popularly known), starting with DCM and software distribution. Just my prediction :)

As I’ve covered before on my blog, Desired State Configuration settings currently come to machines in one of two ways: Push or Pull. Here’s the run-down:

  • DSC Push
    ○ A system is built and a devop/admin/sysop pushes a config to the machine by logging on locally and pushing the config down to the system, OR
    ○ A system is built and then an outside construct pushes the DSC config down to the system remotely; this could be a runbook or some other flavor of Service Management Automation (SMA)
  • DSC Pull
    ○ A system is built and then instructed by some mechanism to reach out to an SMB share or IIS server which is configured as a DSC Pull Server, and the system downloads a configuration from there.

The difference between them highlights one of the current challenges you'll run into with DSC: while you could write and push a DSC configuration out for every system created, it would really be better to instruct your VMs or physical infrastructure to automatically look for configuration settings as they're being built.

The Challenges from here

Here’s the problems we need to solve:

In order for a machine to successfully register a DSC Pull server, a DSC Configuration GUID must already exist for that server at the time you make the registration. Knowing that, how do we ensure a configuration exists for a brand new machine? How do we tell our systems about Pull servers while we're building them?

Active Directory Group Policy side-steps these issue entirely by delivering config settings down as part of standard Group Policy when a new system joins the domain. Since most builds of Windows machines will have them joining a domain, it really is a very nice configuration package.

So the DSC Chicken and Egg problem, as I’ve coined it, is this:

In order to register a DSC Pull Server successfully, you must pass a GUID with your Pull settings. However, if you're trying to assign a Pull server for a brand new machine, how do you ensure that a configuration with the appropriate GUID exists? And if you can create a config, how do you return it back to the local machine?

In this article, I'll outline how to configure a machine for DSC Pull while imaging, which can be used in MDT or SCCM Task Sequences. The goal is to give an example of implanting DSC Pull server settings on our systems while they're being built.

Following Along

If you’d like to follow along, I recommend following Jason Helmick’s blog post here on building your own DSC Pull Server. We’ll use his method and setup an SMB based pull-server.

You will need:

  • Virtual or physical test infrastructure, at least two machines
  • (If you'd like to test the baked-in approach using MDT or SCCM) One single-site SCCM 2012 R2 server OR one MDT server

Assuming you have a newly imaged Server 2012 R2 server, you’ll need to install:

Blam, you’ve got a DSC Server.

Pull-ServerDSCInstalled No way, that was too easy!

It was deceptively easy, right? What happened under the covers was that an instance of IIS was spun up and configured to run a web app of the Desired State Configuration Pull Service, with a source directory for configs set up as PROGRAMFILES\WindowsPowerShell\DscService\Configuration, along with an instance of the DSC Compliance Server (which we'll get into in a later post…once I can understand it!). The script from Jason's blog post configured IIS to listen on port 8080, so we'll need to keep that in mind by directing any requests to our DSC server to that port, using the syntax http://ServerName:PortNumber. You can change the bindings in IIS if you'd like, or change them in the script before you launch it.

Alright, and to verify that the service is working… go to http://PullServerName:8080/psDSCPullServer.svc/ Pull-ServerDSCwebBrowser

This isn’t very human-readable, but if you’re following along from home, you should see something like this.  Henceforth, when we’re going to provide a pull service URL in our DSC Configs, we’ll provide this full path as the URL.  Our DSC Pull Server exists, satisfying one part of our goals; next, let’s look at how you instruct a client to pull configs down from a DSC Pull Server.

Into the Local Configuration Manager

Beginning with PowerShell v4, there is a new engine running under the covers: the Local Configuration Manager. You can interact with it by using PowerShell commands like Get-DscLocalConfigurationManager. This is where the comparative magic of DSC takes place, where your system evaluates what it should look like, and also where it takes action or reports if it is out of compliance. This is an incredibly powerful engine, and I fully believe that over the next decade we will spend countless hours coming up with ways to leverage it to our professional and personal success.

Here’s a screenshot of the default state of a DSC endpoint.

AllowModuleOverWrite           : False
CertificateID                  :
ConfigurationDownloadManagers  : {}
ConfigurationID                :
ConfigurationMode              : ApplyAndMonitor
ConfigurationModeFrequencyMins : 15
Credential                     :
DebugMode                      : False
DownloadManagerCustomData      :
DownloadManagerName            :
LCMCompatibleVersions          : {1.0, 2.0}
LCMState                       : Ready
LCMVersion                     : 2.0
MaxPendingConfigRetryCount     :
StatusRetentionTimeInDays      : 7
PartialConfigurations          : {}
RebootNodeIfNeeded             : False
RefreshFrequencyMins           : 30
RefreshMode                    : PUSH
ReportManagers                 : {}
ResourceModuleManagers         : {}
PSComputerName                 :

We will need to modify a few of these values to reflect the settings for our Pull Server. We’ll do that using a DSC configuration resource titled LocalConfigurationManager, which we’ll set also using DSC. You can set your system to pull down a config from a pull server using the following syntax:

Configuration SetPullMode
{
    param([string]$guid, $machineName)

    Node $machineName
    {
        LocalConfigurationManager
        {
            ConfigurationMode         = 'ApplyOnly'
            ConfigurationID           = $guid
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl               = 'http://serverName:8080/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'true'
            }
        }
    }
}

SetPullMode -guid $Guid
Set-DscLocalConfigurationManager -Computer servername -Path ./SetPullMode -Verbose

Make sure you include ‘AllowUnsecureConnection’, otherwise DSC will attempt to query for a settings page on port 443 (https instead of http), and you’ll have a nasty hour or two worth of errors to solve.

Thanks to Pete Zerger and Steven Murawski for their excellent blog posts on DSC Pull which helped me to understand the settings needed here. Thanks also to Jacob Benson’s post on DSC Troubleshooting which helped me realize that you do need ‘AllowUnsecureConnection’.

So far we’ve seen how to build a DSC Pull Server (EASY!) and also how to instruct a single endpoint to look to the DSC Pull Server for a configuration. That’s great and all, but we still haven’t created a DSC Configuration for this machine we’re building, nor have we dealt with some of the problems above, like making sure that a DSC Configuration exists, giving it a GUID, signing it with a checksum, and then registering this GUID in the DSC client at build time.

The Flow: Create a DSC configuration on-demand, before you need it

The overall next steps here are as follows, to be conducted while our machine is building:

  • Ensure a DSC configuration (a .mof file) exists for our newly built machine
  • Generate a GUID for our newly built machine, rename our .mof from $nodename to Guid.mof
  • Assign a checksum to our config and ensure the extension is .mof.checksum (using the New-DSCChecksum cmdlet)
  • Tell our newly built machine to pull down a configuration from a pull server using the Guid and relevant Pull Server settings
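Here's a minimal sketch of those steps from the pull server's side. The paths and node name are illustrative, and note that in some WMF 4.0 builds the checksum cmdlet exposes the path parameter as -ConfigurationPath rather than -Path:

```powershell
# 1. A configuration (.mof) has already been generated for the node, e.g. .\NewMachine.mof

# 2. Generate a GUID and rename the .mof to <guid>.mof
$guid = [guid]::NewGuid().Guid
Rename-Item -Path '.\NewMachine.mof' -NewName "$guid.mof"

# 3. Create the matching <guid>.mof.checksum file beside it
New-DscChecksum -Path ".\$guid.mof" -OutPath '.'

# 4. Hand $guid back to the new machine so its LCM can register with the pull server
$guid
```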

But how would I automate this in production? Great question! We’re solving this hairy problem using PowerShell Sessions and Remoting. During the imaging process, we’ll include a step to run a PowerShell script which will step out to another server and create a unique GUID, generate a new DSC configuration for our machine, renaming the config to the GUID we created. Finally, we’ll use Invoke-Command to pull back the GUID and use that to configure DSC Pull Locally on the new machine. Dsc Flowchart

Assuming we’re using SCCM, what will happen here is that we’ll add one step to our Task Sequence, and we add ‘Run PowerShell Script’. PServerTaskSequence

And here is the code for the script.

Now, what will happen here is that your system will build and then run the remote PowerShell commands to create the .mof for it. Then, it will use DSC to configure itself as a Pull client, and proceed with the Task Sequence. When the TS finishes, within about 30 minutes the Local Configuration Manager will attempt its first Pull, grab the .mof file, and enact the configuration. What we're doing in our demo is simply copying down the source files for an .MSI, and then ensuring the MSI is installed. Assuming you've got an MSI for 7Zip sitting on a share and you run this script, you should see this!

Our old stand-by, installing 7Zip

Wrapping up

If you can run a PowerShell Script while imaging your machine, you can use this approach to bootstrap machines with DSC Pull Settings.  I recognize that not everybody will want to use SCCM to do this procedure, so here are some other ways you could use the same script to attack this problem:

  • With SCCM’s little brother, Microsoft Deployment Toolkit  and its Lite Touch Install
  • Use Windows System Image Manager to configure an unattend.xml file with a first-boot action that looks for a pull server (set up a DNS alias for the pull server)
  • Use Group Policy to run a log-on script to run this PowerShell
  • If you make use of another system building tool, use your Orchestration framework to Invoke this script and impart the DSC pull setting back down.

This was a challenging post for me.  Frankly, DSC is still a very new technology and not that much has been written about it.  What I’ve produced here is an answer to a question that has bothered me since I first became aware of DSC with Jeffrey Snover’s WMF 4.0 talk at TechEd last year.  I may have made some mistakes, and it is possible that I’m over-working the whole problem.  Regardless, this solution has worked in my testing, and I would feel confident deploying this to clients or in my own environment.  If you spot an error, please let me know!  I’d love to make this the perfect solution to the ‘Configuration must be there before you can embed it’ problem.

Thanks!

Sources

Steve Murawski's great blog series on the topic on PowerShell.org - http://powershell.org/wp/2013/11/06/configuring-a-desired-state-configuration-client/

His site on DSC and Chef configuration methodologies - http://stevenmurawski.com/

Jason Helmick's series on ConcentratedTech - http://searchservervirtualization.techtarget.com/tip/How-to-push-out-a-Desired-State-Configuration-Pull-Server

Download link to the configuration script needed to configure a DSC Pull Server - http://cdn.ttgtmedia.com/rms/editorial/1.CreatePullServer.ps1.txt

Pete Zerger's awesome post on start-to-finish deployment of a DSC Pull Server - http://www.systemcentercentral.com/day-1-intro-to-powershell-dsc-and-configuring-your-first-pserver/

Jacob Benson's great post on DSC Troubleshooting - http://jacobbenson.com/?p=296

Mike F Robbins' great blog post about dealing with an error that choked me up (I had a server with WMF 5.0 pushing the image to a PowerShell v4 PC) - http://mikefrobbins.com/2014/10/30/powershell-desired-state-configuration-error-undefined-property-configurationname/

Download link to the newest WMF for PowerShell / DSC - http://www.microsoft.com/en-us/download/confirmation.aspx?id=44987

Download link to the newest DSC Resource Kit - https://gallery.technet.microsoft.com/scriptcenter/DSC-Resource-Kit-All-c449312d

Continue Reading...

Working with Web Services, SOAP, PHP and all the REST with PowerShell

November 19, 2014 FoxDeploy

In order to truly ascend to the next level, every scripter eventually needs to integrate an outside service into the organization, be it Air-Watch, ServiceNow, LogicMonitor, Azure AD or any other popular service.

Some of these services include ready made PowerShell modules which you can easily integrate into the environment, but others instead present something different, an API, or Application Programming Interface.

These sound scary and ‘developery’ but they really aren’t so bad.  And the great thing is that they all adhere to the same standard, er, or set of standards.

Fortunately, most of the services we’ll find will adhere to one of these common standards: SOAP, REST, or PHP.

The goal of this post is to give you an example of how to use each of these standards to interact with the various systems you may run across.

Not only for the web

You may have noticed on the past few posts here that I’m really getting into APIs. “What’s so great about APIs?” you may ask.

APIs allow you to very easily leverage work that someone else has done to quickly create your own functions and get seriously useful output from just a little bit of work. If you're planning to orchestrate workflows in your environment, create runbooks for your data center, or make your own tools, learning how to interact with SOAP, REST and web services will definitely be in your favor.

The difference between your average ‘Scripting’ guy and an Automation Engineer or Consultant is the ability to create your own tools, from scratch, using the APIs provided.  That’s where you start to make the big bucks.

How do I know which standard to use?

Sometimes, we’ll be super lucky and the developers for our desired Service will list which type of API they’re offering.  Of course, sometimes we’re not so lucky and have to look elsewhere.

Fortunately, we can use the URL for the service to help determine which PowerShell cmdlets to use!  The following chart shows the relationships between URL specification and cmdlet.

URL                               Service Type   Cmdlet
---                               ------------   ------
Ends in .asmx or ?WSDL            SOAP           New-WebServiceProxy
Contains API, especially api/v2   REST           Invoke-RestMethod
Ends in .php                      PHP/Form       Invoke-WebRequest

REST vs. SOAP, what's the difference?

This is a great question that came up during our user’s group last night. Both REST and SOAP are simply methods of accessing information presented via web services. It will suffice to say that REST is now in vogue, and is generally believed to be easier to use and manage than SOAP, which tends to be a bit heavier on XML.

The best answer I’ve seen came from Dr. M. Ekelstein, who put it the following way: “A nice analogy for REST vs. SOAP is mailing a letter: with SOAP, you’re using an envelope; with REST, it’s a postcard. ”

In his blog he gives an example, comparing the response you can expect from SOAP vs. the response from REST. In both examples, we’re querying for the results of a user ‘12345’. Note the tremendous verbosity of one reply over the other.

SOAP:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope" soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
  <soap:Body pb="http://www.acme.com/phonebook">
    <pb:GetUserDetails>
      <pb:UserID>12345</pb:UserID>
    </pb:GetUserDetails>
  </soap:Body>
</soap:Envelope>

REST:

http://www.acme.com/phonebook/UserDetails/12345

You can imagine how much work would go into parsing out the real juicy bits from the result on the left, versus the result on the right.

Simply put: if you have the option, use REST; it’s much easier to deal with the returned objects!
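You can see why in PowerShell itself: a REST reply is typically lightweight JSON, which deserializes straight into an object (Invoke-RestMethod does this for you automatically; ConvertFrom-Json shows the same thing offline). The payload below is hypothetical, modeled on the phonebook example:

```powershell
# A hypothetical JSON reply for user 12345 from the phonebook example:
$json = '{"UserID":12345,"FirstName":"John","LastName":"Doe"}'

# ConvertFrom-Json hands back a real object - no envelope to unwrap:
$user = $json | ConvertFrom-Json
$user.LastName   # plain dot-notation access, no XPath or regex required
```

Compare that one-liner to picking the same value out of the SOAP envelope above.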

Working with SOAP Protocol

So, we’ve determined that we’re working with a SOAP API, either because the service API catalog says so, or we used the handy URL trick to determine that the URL of this service ends in .asmx?WSDL, which is short for Web Services Description Language.

The overall flow of accessing resources from a SOAP source is to connect to the source using New-WebServiceProxy, storing the result in a variable. You’ll then run Get-Member to look at the methods your web service offers, and go from there with accessing it.

You can generally view a WSDL in your browser by, uh, browsing to it. It will be human-readable XML code. For this example, we’ll be using the handy Length endpoint from WebServiceX.net, which allows us to convert one unit of length into another. When we open it in a browser, we see the following Service Description.

Fortunately for us, rather than scrolling through pages and pages of XML, PowerShell knows how to interpret this description and let us access it in a PowerShell-y way, using the New-WebServiceProxy cmdlet.

For example:

$url = "http://www.webservicex.net/length.asmx?WSDL"
$proxy = New-WebServiceProxy -Uri $url
$proxy | Get-Member -MemberType Method


   TypeName: .AutogeneratedTypes.WebServiceProxy

Name                      MemberType Definition
----                      ---------- ----------
ChangeLengthUnitCompleted Event
BeginChangeLengthUnit     Method     System.IAsyncResult
ChangeLengthUnit          Method     double ChangeLengthUnit
ChangeLengthUnitAsync     Method     void ChangeLengthUnitAsync
EndChangeLengthUnit       Method     double EndChangeLengthUnit
ToString                  Method     string ToString()

So, this helpful output lets us see some interesting Methods() available, all centered around Changing Length Units. Let’s take a peek at the .ChangeLengthUnit() method.

(Screenshot: the full ChangeLengthUnit method definitions)

Those definition types are super long! It basically abbreviates down to (“NumberOfUnits”,”StartingLengthUnit”,”EndingLengthUnit”)

We can give it a try with the following, to convert 15 Meters into a similar number of International Confusing Headache Increments (INCHEs, for short)

$proxy.ChangeLengthUnit(15,"Meters","Inches") 

 >590.551181102362

Pretty nifty!

Working with REST

REST APIs are the bomb, and totally fly AF.  They’re written when the developers of a service truly have extensibility in mind.

For this example, we’ll refer back to my Get-Weather function I released about a month ago. When I originally wrote that, I was using Invoke-WebRequest (which is effectively just loading the web page and scraping its contents! I’ve since had a come-to-Jesus meeting and fixed my code there).

Here are the most pertinent bits of that function:

$API_key = "$secret"
$url = "https://api.forecast.io/forecast/$API_key/$coords"

#Store the results in $weather
$weather = Invoke-RestMethod -Uri $url -Method Get

#Display the contents of $weather
$weather

latitude  : 33.9533
longitude : -84.5406
timezone  : America/New_York
offset    : -5
currently : @{time=1416415006; summary=Clear; icon=clear-day; nearestStormDistance=235; nearestStormBearing=321; precipIntensity=0;
            precipProbability=0; temperature=38.67; apparentTemperature=36.25; dewPoint=20.8; humidity=0.48; windSpeed=3.54; windBearing=249;
            visibility=10; cloudCover=0.09; pressure=1029.21; ozone=321.84}
minutely  : @{summary=Clear for the hour.; icon=clear-day; data=System.Object[]}
hourly    : @{summary=Partly cloudy starting this afternoon, continuing until this evening.; icon=partly-cloudy-day; data=System.Object[]}
daily     : @{summary=Light rain on Saturday through Tuesday, with temperatures rising to 67F on Monday.; icon=rain; data=System.Object[]}
flags     : @{sources=System.Object[]; isd-stations=System.Object[]; darksky-stations=System.Object[]; madis-stations=System.Object[];
            lamp-stations=System.Object[]; units=us}

So, now that we’ve seen how easy it is to work with these object-oriented services, let’s take a deeper peek under the covers with some PHP/forms manipulation, using PowerShell’s built-in Facebook example.

Working with PHP/Web Forms Objects

Now that we’ve seen how comparatively easy those were, let’s see how we’d attack a .php/forms login.

One of the things to note about using Invoke-WebRequest is that you’ll be getting cozy with the HTTP Verbs of Get, Post, Delete, and others. For this example, we’ll use Get and Post.

We’ll run our test using the easiest .php that I know of, the Form Post Tester service on Hashemian.com. You can post any data you’d like to it in the -Body param of your submission, then pull the data back down later by appending a ‘/’ and a key to the URL, which is handy for testing your HTTP Get.

Here’s an example.

$FormtesterUrl = "http://www.hashemian.com/tools/form-post-tester.php"
$accessCode = "/FoxDeploy"
$URI = $FormtesterUrl + $accessCode
Invoke-WebRequest -Uri $URI -Method Post -Body "Test Message From PowerShell"

If you want to test that it worked, you can open up the full URL in a browser, and see something like this.

(Screenshot: the posted message displayed in the browser)

Now, to pull the data back down, we’ll use the ‘Get’ method instead.

Invoke-WebRequest -Uri http://www.hashemian.com/tools/form-post-tester.php/FoxDeploy -Method Get | Select -expand Content

(Screenshot: the same content returned in the PowerShell console)

In more complex scenarios, you could read the HTML of a page and provide values for all of the fields on a page to log in. If you check the Get-Help example, you’ll find a very ambitious example that allows you to log into Facebook with PowerShell!

Where to go from here

I hope you liked this quick tour through working with various APIs.  For your next steps, you might be interested in how to work with complex authentication, covered here in Using PowerShell and oAuth.

Have a specific question?  I’ve written modules for dozens of APIs, from AirWatch, to ServiceNow and even Imgur and can help you get your API needs sorted.   Leave a message below, or on reddit.com/r/FoxDeploy and we’ll see what we can do!

Continue Reading...

Question Time: when I want a property, PowerShell gives the whole object!

November 13, 2014 FoxDeploy

I’m posting today on a topic I see over and over again in the forums, on reddit, and have run into myself numerous times. Every person I’ve ever taught PowerShell runs into it too, and most authors have covered this at some point, including Don Jones in ‘The big book of PowerShell Gotchas’.

It always happens, and can take a while to troubleshoot.  The problem boils down to this:

In my Script, for some reason when I call $object.Property within double quotes, PowerShell dumps the whole object with all of its properties! How do I get just one value?

And inevitably this leads to ugly, ugly string concatenation like this:

Write-Host ("Operation completed on: " + $object.Property + " at " + (Get-Date))

It’s ugly and a bad way to do things. You can end up with strange errors too, when objects of a different type are shoved into one another.
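For comparison, here is the same message written with double quotes and subexpressions, no concatenation needed. The $object below is just a stand-in I’m creating for the demo:

```powershell
# A stand-in object for the demo:
$object = [PSCustomObject]@{ Property = 'Server01' }

# Subexpressions $( ) are evaluated inside double-quoted strings:
Write-Host "Operation completed on: $($object.Property) at $(Get-Date)"
```

One string, no plus signs, and no risk of mismatched object types sneaking into the concatenation.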

So, even though everyone has had a crack at answering this one, I took my own shot at it.  I’ll show you how you should do this, by merit of explaining it to someone else.

“What’s going on here?”

I’ve been scripting for years in both BASH and Batch but I’m new to PowerShell and object-oriented languages. I want to make sure I understand this before moving on with my script. This is the input csv file:

FirstName,LastName,ID,Dept,Flag
First,Last,5403,Accounting

This works:

$inputfile = Import-Csv "\\san\inputfile.csv"
ForEach ($user in $inputfile) {
    If (Get-ADUser -Filter "mobile -eq $($user.ID)") { Echo $user.ID }
}

This doesn’t work:

$inputfile = Import-Csv "\\san\inputfile.csv"
ForEach ($user in $inputfile) {
    If (Get-ADUser -Filter {mobile -eq $user.ID}) { Echo $user.ID }
}

What is it about the syntax in the first one that makes it work? From <http://www.reddit.com/r/PowerShell/comments/2lzxgm/new_to_powershell_why_does_this_work_and_this_not/>


 

The Reason Why you’re getting too much; PowerShell just wants to help

Here’s the reason why you’re getting that output. It all has to do with the string! When you put a variable and its property inside a $( ) subexpression within double quotes, PowerShell will pull out only that single property when evaluating the string. In your first example, you can see this in action; this is what PowerShell is really doing (I’m only going to include the bit on line 3).

ForEach ($user in $inputfile){Write-Host "(Get-ADUser -Filter mobile -eq $($user.ID))"}

(Get-ADUser -Filter mobile -eq 5403)

So, PowerShell reads through the line, sees the $ marker in front of the parentheses, and evaluates their contents as an expression. Here, it pulls out only the ID property, because it respects the order of operations. Now compare this to your Example 2, and look at what is happening.

ForEach ($user in $inputfile){Write-Host "Get-ADUser -Filter {mobile -eq $user.ID}" }


Get-ADUser -Filter {mobile -eq @{FirstName=First; LastName=Last; ID=5403; Dept=Accounting; Flag=}.ID}

 

See that? PowerShell reads the characters after the -Filter, gets to $user, and just drops in the value of $user, which includes all of those properties. At the end of the contents of $user, it then lamely appends ‘.ID’.

Nice try, PowerShell, you did your best.

So, the real problem here is that if you want to pull out a single property of an object within full quotes, you need to use $($variable.Property) syntax.
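You can prove this to yourself in any console, no Active Directory required. The object below is a mock I’m building to stand in for the imported CSV row:

```powershell
# A mock of the imported CSV row:
$user = [PSCustomObject]@{ FirstName='First'; LastName='Last'; ID=5403 }

"mobile -eq $user.ID"      # the whole object expands, then '.ID' is tacked on as text
"mobile -eq $($user.ID)"   # the subexpression pulls out just the ID property
```

The first string contains the full @{...} dump with a useless “.ID” stuck on the end; the second contains just 5403.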


 

Oh, and in case you’re wondering how I did the PowerShell console output in my post? I used an HTML block and figured out using mspaint that the hex color for the PowerShell window is RGB #013686.

<div style="padding: 12px; background-color: #013686; line-height: 1.4;">
  <span style="color: #ffffff;">
    #YourConsoleOutput here
  </span>
</div>

Results in

#YourConsoleOutputHere

Continue Reading...

Microsoft MVP

Five time Microsoft MVP, and now I work for the mothership

