The situation: after resuming from sleep, my gaming PC would suddenly be unable to resolve any new DNS requests or open any new connections / webpages.
For instance, when trying to open a page in Edge or Chrome, I’d see ERR_NETWORK_CONNECTION. Next, if I tried to invoke a web request from PowerShell to test connectivity outside of a web browser, I’d see this baffling error:
Invoke-restmethod https://google.com
Invoke-RestMethod: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
This was accompanied with these events in Event Viewer:
A request to allocate an ephemeral port number from the global TCP port space has failed due to all such ports being in use.
And
TCP/IP failed to establish an outgoing connection because the selected local endpoint was recently used to connect to the same remote endpoint. This error typically occurs when outgoing connections are opened and closed at a high rate, causing all available local ports to be used and forcing TCP/IP to reuse a local port for an outgoing connection. To minimize the risk of data corruption, the TCP/IP standard requires a minimum time period to elapse between successive connections from a given local endpoint to a given remote endpoint.
I ran the netstat command to see where all of my ports were going!
You’d think Windows would have support for a bajillion ports, since Windows supports communications on ports from 1-65535, right? Well, the first 1024 are reserved, but the other 64,511 should be plenty…right?
But then I remembered that PowerShell has some much better command alternatives to netstat for Windows 10 and up, so I wrote this one-liner to get all of my ports and see which process is taking them all:
Get-NetTcpConnection |
Group-Object -Property OwningProcess |
Select Count,Name | sort Count -Descending
Count Name
----- ----
5143 4556
8 10664
7 8304
6 15284
4 6416
3 0
3 4
2 13184
Hmm. Most processes are using 10 or fewer ports, but one is using more than 5000? That doesn’t smell right.
I expanded the PowerShell a bit more, so that it would also grab the owning process:
Get-NetTcpConnection |
Group-Object -Property OwningProcess |
Select Count,Name | sort Count -Descending |
select -first 4 | % {
Get-Process -PID $_.Name
}
NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
2679 149.43 195.05 18.83 4556 2 EAConnect_microsoft
41 36.77 8.48 0.33 8304 2 XboxGameBarWidgets
206 362.44 432.64 141.23 15284 2 SearchApp
26 23.93 51.54 12.83 10664 2 msedge
I then opened up Process Monitor and filtered to processes called Ea*
and found that EA Connect was crashing rapidly for some reason. The ProcMon dump led me to file access operations, which revealed the location of its log file:
Path: %localappdata%\Electronic Arts\EA Desktop\Logs\EAConnect_microsoft.log
And I saw this message repeating in EAConnect's log file, which perfectly matches the original message from Event Viewer:
TCP/IP failed to establish an outgoing connection because the selected local endpoint was recently used to connect to the same remote endpoint. This error typically occurs when outgoing connections are opened and closed at a high rate, causing all available local ports to be used and forcing TCP/IP to reuse a local port for an outgoing connection. To minimize the risk of data corruption, the TCP/IP standard requires a minimum time period to elapse between successive connections from a given local endpoint to a given remote endpoint.
My fix was to sign into EA Connect, which was installed along with my GamePass subscription, and the issue never returned.
Source for troubleshooting info: Troubleshoot port exhaustion issues
I use GamePass. I tried to install a game via GamePass that is only available through EA, and this action installed the EA agent.
I never carried through with making an EA account or signing into one, but the agent being installed was enough to set up the EA Background Service and the EA Connect process. From checking the logs, it looks like this is meant to check for updates for any EA games installed, so once a second the process checks for any games installed. But it crashes because I never signed in.
Then a second later it checks again.
You’d think one app using many thousands of ports would be a warning sign, and it is.
In fact, within Windows, when a process requests a port, if that port is not used within 4 seconds, it gets recycled back into the pool of available ports.
But in this instance a network request is tried one second later. And so every second, it adds a new port to its list.
And after 64,511 attempts, my outbound ports were totally consumed by this process, and then all new connections failed, but importantly, previously opened connections could continue to function.
And that is why I needed to restart once a day.
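If you want to check for this on your own machine, the port exhaustion troubleshooting doc linked above covers inspecting (and, as a stopgap, widening) the dynamic port range. Here is a minimal sketch of those checks; the widened range is just an example, and the last command needs an elevated prompt:
# Show the ephemeral (dynamic) port range Windows hands out for outbound connections
netsh int ipv4 show dynamicport tcp
# Count how many TCP connections each process currently owns
Get-NetTCPConnection | Group-Object OwningProcess | Sort-Object Count -Descending | Select-Object -First 5 Count, Name
# As a stopgap, widen the range; if a process leaks a port every second, this only delays the inevitable
netsh int ipv4 set dynamicport tcp start=1025 num=64510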
]]>This is another one of those posts inspired by a Stack Overflow question (these things just write themselves!). Here’s the post that inspired it.
Post Outline
In programming, imagine you have some function like this which could presumably bail early if it’s not going to be able to do the work you need done.
public void SomeMethod<T>(string var1, IEnumerable<T> items, int count)
{
if (string.IsNullOrEmpty(var1))
{
throw new ArgumentNullException("var1");
}
if (items == null)
{
throw new ArgumentNullException("items");
}
if (count < 1)
{
throw new ArgumentOutOfRangeException("count");
}
... etc ....
}
There are a lot of reasons we could bail, and the code looks pretty messy because of it.
We can introduce a guard clause here (the technique works in PowerShell or C#) to contain all the messy “stuff that makes my function die” logic.
We can also use them to make our ifs easier to read too!
Some places write Ensure methods to handle bailing on certain conditions, like this.
public static class Ensure
{
    public static void IsNotNull(object val, string arg)
    {
        if (val == null)
            throw new ArgumentNullException(arg);
    }
}
You’d then modify the code to consume it like so:
public void SomeMethod<T>(string var1, IEnumerable<T> items, int count)
{
    Ensure.IsNotNull(var1, nameof(var1));
    Ensure.IsNotNull(items, nameof(items));
    ... etc ....
}
You can even go farther and just stuff all of your ‘sadpath’ logic into one guard.
public static class Ensure
{
    public static void CanProcess(object val, string arg = null)
    {
        if (val == null)
            throw new ArgumentNullException(arg);
        if (val is Array arr && arr.Length < 1)
            throw new ArgumentOutOfRangeException(arg);
    }
}
Then you just bail out early and easily, shortening your code.
public void SomeMethod<T>(string var1, IEnumerable<T> items, int count)
{
    Array.ForEach(new object[] { var1, items, count }, x => Ensure.CanProcess(x));
    ... etc ....
}
But Stephen, can’t we just use parameter validation for these?
Sure, you can and should use parameter validation to ensure your function can work, but there are loads of common scenarios when you will have special handling for special combinations, and guard clauses are an awesome tool for simplifying that logic.
Which brings us to…
In PowerShell specifically, we already have parameter validation, so most people can and should use that to clean up and help our function not have to worry about the sad paths out there.
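For the simple cases, the built-in validation attributes cover a lot of ground on their own; here's a quick sketch of what that looks like (the function and parameter names are just for illustration):
function Invoke-Something {
    param(
        # Dies automatically if the caller passes $null or an empty string
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [string]$Name,

        # Dies automatically if the caller passes anything below 1
        [ValidateRange(1, [int]::MaxValue)]
        [int]$Count = 1
    )
    "Processing $Count item(s) for $Name"
}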
For PowerShell, I love the use case of using these little guard functions to instead return true or false, and then we just plop them directly into an if statement.
These are great because…
Imagine a function like this, which wraps a call to Write-Host (thank you to @IISResetMe for the sample).
#Mathias' original Write-Host wrapper
function Write-CustomHost {
param(
[Parameter(Mandatory = $true)]
$Object,
[Parameter(Mandatory = $false)]
[ValidateSet('Red','Green')]
[string]$ForegroundColor
)
Write-Host @PSBoundParameters
}
Now, imagine we needed some special handling when we got in a Process object or, for simplicity's sake, an [int] object. We could write this kind of an if clause:
if (($Object -is [int]) -and ($ForegroundColor -eq 'Red'))
Which maybe isn’t too hard to read. But what about when we need even more complex behavior, like handling combinations of params coming in?
Cases like these, where our if logic goes into a simple function that gives a [bool] response, save the day.
#guardClause setup
function isRed {
param([string]$ForegroundColor)
$ForegroundColor -eq 'Red'
}
function isInt {
param($object)
$object -is [int]
}
We don’t need much logic at all, just a simple PowerShell comparison statement. Now, we go back to our custom host and….
#updating Mathias to add calling the guard clauses
function Write-CustomHost {
param(
#...
)
if ((isRed $ForegroundColor) -and (isInt $Object)){
return "this is a red Int, so lets do special handling here"
}
Write-Host @PSBoundParameters
}
Write-CustomHost -Object 1 -ForegroundColor Red
Do you need to test these? Yes, of course you do. If code is worth writing, it's worth testing. These are especially easy to test, but proper testing should include a test for both possible conditions for each clause.
At a minimum, there should be:
Describe "GuardClauses" {
It "IsInt should return true when an int" {
IsInt 4 | should -be $true
}
It "IsInt should return false when not an int" {
IsInt "ok" | should -Be $false
}
}
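Drop those into a .Tests.ps1 file next to your guard functions and Pester will pick them up; the file name here is just an example:
# Requires the Pester module (Install-Module Pester)
Invoke-Pester .\GuardClauses.Tests.ps1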
In migrating from Wordpress.com hosting, I noticed that there wasn’t any great one-step solution for the ‘WordPress Stats’ info I got from WordPress’ JetPack Service. In this post, I’ll show you exactly how I recreated both the utility AND, more importantly, the Vanity I got from WordPress Stats (the latter to satisfy my ego!)
Post Outline
If you’re a long time reader of blogs but don’t blog on your own, you might never have paid much attention to little widgets like these on the sites you visit.
But believe me, the authors of those sites definitely know what I’m talking about.
What does it do? Well, it might be a little bit old school but it just keeps track of the page loads of a site. If someone comes and visits an article, then clicks to see some more in the series and ends up looking at four other articles, it would tick up five more times.
If you host your blog on WordPress, one of the nifty features you can enable is a Plugin called JetPack which gives you a number of cool features, one of which is the stat tracker I showed above.
Unfortunately if you migrate away from WordPress…you can’t bring it with you.
Or can we?? I originally looked at monitoring to see if the request to the WordPress.com hosted site had an obvious method to…extract the call to the API used to track hits to the site.
Unfortunately, stats seem to be tracked as part of the page load request, and not a separate API call.
Furthermore, while their API does allow you to query the current traffic stats and milestones, they do not allow you to increment the hits with a simple call that I could find.
This didn’t stop me though, I needed a way to see my pretty numbers counting up!
All I really needed was an API that could increment simply. I thought of writing my own dotnet core app, as I’ve done a bunch of times, like when I wrote a Game of Thrones Deathpool site with logons and passwords, or when I wrote a simple app to check the UV levels of a given day (needed because of my ghastly pallor, due to being a redhead).
But then…I got lazy.
My normal level of energy on a given day
So I hunted for an easy peasey API, and found a great one in countapi.xyz
All you do with this API is register your new counter with an optional reset / control code and then trigger a Get request to increment and retrieve the new value!
So to create a new tracker for a site called mysite.com and then set its starting value at 42, we’d run:
GET https://api.countapi.xyz/create?namespace=mysite.com&value=42
⇒ 200 {"namespace":"mysite.com", "key":"33606dbe-4800-4228-b042-5c0fb8ec8f08", "value":42}
And then, after that, we only need to run a new get request to see that we’re up and running.
Invoke-RestMethod "https://api.countapi.xyz/hit/foxdeployhits"
>2075540
Now that I have a mechanism to track my hits, I just need to actually trigger it when the page is loaded. This needs to happen client side (which sadly means I lose tracking and counts if the user has JavaScript disabled or very strict settings; c'est la vie).
This was really easy. My blog theme, Bulma-Clean-Theme already had a look and feel I liked from my site on WordPress foxdeploy.wordpress.com, so with a minimal amount of configuration, I had something that felt familiar.
It even had a great sidebar function, provided via a file called Latest-Posts.html. Here’s mine if you want to see what it looks like:
when someone looks at my source code
Adding a new node here to mimic the ‘Page Stats’ feature was very easy, just adding a new div with the right class.
<div class="card" style="padding-top:20px;">
<header class="card-header">
<div class="card-header-title">Blog Stats</a>
</header>
<div class="card-content">
<div class="content">
<div id='blogHits'></div>
</div>
</div>
</div>
Note: Pay special attention to the currently empty div called blogHits above; for now it will be empty, but we’ll use it as an anchor to inject a value in just a moment!
Now to reload and see how it shows up…
I can trigger a http request when this element is loaded by adding a bit of JS, like this:
<Script>
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.countapi.xyz/hit/foxdeployhits");
xhr.responseType = "json";
xhr.onload = function() {
document.getElementById('blogHits').innerText = this.response.value + ' hits';
console.log("total hits " + this.response.value);
}
xhr.send();
</Script>
With this code, we prepare a GET request over to countapi.xyz, and then onload (when we have a response), we scan the document object model (the DOM, a code based interpretation of the current webpage, which feels very familiar coming from PowerShell) to find the empty div we setup earlier.
This works BUT….the number is uggo.
Fortunately JavaScript has approximately 857,3028,237 different built-in formatting tools, and I can make use of the convenient .toLocaleString('en') utility to transpose strings into different formats. Because I’m Anglo-centric, I have chosen the right way to do it, using comma separators. Sorry if you like other format types!
With this in place…
Frankly, Google Analytics gives me INCREDIBLE amounts of data and is dramatically far and away much, much better than what I had before with WordPress stats. It even has a load of tools to show me ways I can improve the site, like Accessibility and page loads.
I did a lot of work transcribing accessibility and alt tags onto images, I now have a much better layout for keyboard navigation, I no longer have screenshots of code, and the site loads much faster.
All of these issues were surfaced to me by Google Analytics, which is frankly the bomb, and I will probably write a blog post about it later on!
Did you like this post? Are you interested in what it took to migrate my site from WordPress over to GitHub pages? You’re in luck, I have some more posts planned for that topic! Leave me a message below and lemme know!
🐦 - Shout out to Chris for the excellent Bulma Clean Theme
🐦 - Shout out to this blog post by Rohit from Eyehunts for the tip on how to format numbers in a way that makes me happy.
]]>After years of blogging about PowerShell, ConfigMgr and Automation, I decided to give it all up and now focus the blog solely on my new love, Dogs…
Or not! But if you’re like my friend Ryan Engstrom on Twitter, and wanted me to compile a list of my funny ‘#Important Bernie Size Update’ posts…
Please put together a thread of all your 'dog size update' photos.
— Ryan Engstrom (@ryandengstrom) February 28, 2021
Then check out the April Fools Page, over here, by clicking the link below!
Or click the little Bernie button floating up above!
]]>I have long admired the Report Issue button on Microsoft Docs and sought to recreate it. Then I loved it so much, I added it to the end of every post! Here’s how, and you can just copy and paste it and then take the rest of the day off!
Post Outline
This is a reasonable question to ask anyone, and especially me.
Way back in the day when I was writing a Wiki or docs for work, I knew that if I had a bug or error in my documentation, I could count on my good ol’ buddy Wayne from @waingrositblog to spin in his chair and shout
Yo, this shit is whack, son
before he went back to looking at collectible Transformer figurines and then you knew you had to make some changes to your docs.
But not everyone had a Wayne of their own. So eventually everyone else started rolling comments on their blogs with Disqus or something similar. However they got flooded by Spam and noise and it was hard to stay on top of them.
So now we’re left with needing a way for people to report issues in your posts without using comments which are hard to stay on top of without a lot of effort.
Enter the Report Issue Button, which can be found in a lot of Microsoft Docs, like this page on adding Application Insights to your app.
Clicking this button takes the user to the ‘Report an issue’ page, but also populates the body of the issue with a lot of useful info.
This implementation is great, because it automagically includes a lot of info about the post including which specific URL they viewed and also a link to the source file, so the dev or whoever isn’t left wondering what the heck the person is talking about.
I loved this, and with my new blog hosted on GitHub Pages, I knew I wanted to rely on GitHub Issues as well to keep track of needed changes and feedback.
The first thing I did was right-click and inspect element to get a clue as to how this button works. This revealed that it looks like you can do a lot with simple URL encoded values to set things like the Issue Title and prefill Issue Comments.
https://github.com/MicrosoftDocs/azure-docs/   # GitHub repo
issues/new?                                    # new issue endpoint
title=&                                        # no title, but it looks like that's an option
body=...                                       # URL-encoded issue body
And with a little URL decode on the body, we can see that it translates into exactly what you see in the body when you click ‘Report Issue’:
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 401d2997-0fbd-7966-8996-edbaab1819ff
* Version Independent ID: c3ee3318-4c69-8ded-a8d5-5059d699351b
* Content: [Application Insights API for custom events and metrics - Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics)
* Content Source: [articles/azure-monitor/app/api-custom-events-metrics.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-monitor/app/api-custom-events-metrics.md)
* Service: **azure-monitor**
* Sub-service: **application-insights**
* GitHub Login: @lgayhardt
* Microsoft Alias: **lagayhar**
Now, I liked some of those fields but I didn’t need all of them, so I trimmed down to just these fields.
[Enter feedback here]
---
Document Details
- Content [GitHub Report Issue Button](https://1redone.github.io//blog/github-report-issue-button.html)
- Content Source [_posts/2021-03-15-github-report-issue-button.md](https://github.com/1RedOne/1RedOne.Github.io/blob/master/_posts/2021-03-15-github-report-issue-button.md)
The Content and Content Source items are Markdown links, using the Jekyll Liquid tags page.title and page.path to autofill those values. This will help me see which URL someone was viewing and also what file that correlates to in the GitHub repo as well.
Next, to test the results I just pasted this into some random blog post and fired it up in Jekyll to see the result.
It works! It has both a working Content link, to show me where they clicked from, and the file path so I can edit the post too.
Now I just need to HTML Encode this body and make it into a clickable button.
Do enough web dev work and eventually you won’t see %20 anymore, but automatically interpret it as a space. That’s the power of URL encoding. And of the human brain!
The easiest way to do this in a human-readable form is to make a variable in JavaScript with the plain text, and then call encodeURI and let JS do it all for me.
<div class="alert alert-info" role="alert">
<i>See an Issue with this page? Report it on Github Automatically!</i><br>
<button class="button" id='GitHubButton'>
<i class="fab fa-github fa-lg fa-pull-left"></i> Report Issue
</button>
</div>
<script>
function MakeLink(){
var urlToEncode = `https://github.com/1RedOne/1RedOne.Github.io/issues/new?title=BlogPostIssue&body=
[Enter feedback here]
---
Document Details
- Content [GitHub Report Issue Button](https://1redone.github.io//blog/github-report-issue-button.html)
- Content Source [_posts/2021-03-15-github-report-issue-button.md](https://github.com/1RedOne/1RedOne.Github.io/blob/master/_posts/2021-03-15-github-report-issue-button.md)
`;
return encodeURI(urlToEncode);
}
var gitHubURL = MakeLink();
console.log("setting button for " + gitHubURL);
var item = document.getElementById("GitHubButton");
item.onclick = function(){window.open(gitHubURL);}
</script>
Then I added this manually to a page, refreshed Jekyll…and what did I see?
It was that easy! Now, I could go paste this into every single post I ever wrote, but I am a bit lazier than that…
Much like basically any other web framework (what an astute statement, Stephen) Jekyll supports saving common UI elements as a snippet which can be included and embedded wherever you like, making the perfect composited UI.
All you do is make a new html file and drop it into your _includes folder and you’ll be good to go.
I took this chance to spruce up the button into a full modal element.
Adding this snippet automatically is super easy. With it saved in the _includes folder, we can then reference it within any page manually by using this syntax.
{% include gitHubLink.html %}
So now our final step is to add this to our default post.html layout, which will be found in the _layouts folder.
Merge the new include statement here, after the post content.
And then after a Jekyll Rebuild, it should appear beneath your posts!
]]>The other day I answered a question on StackOverflow about how to cache the results of slow running operations easily in PowerShell.
In answering it, I was reminded that this problem occurs all the time in automation, like when you:
As always, we will approach this with Progressive Automation, step-by-step adding complexity and features till we get something we’re really proud of. So first we’ll look at caching calls just for this instance of PowerShell. Then we’ll build in complexity and add a more persistent cache.
One new thing: I’m going to try to start sprinkling in some more deliberate career / workplace advice throughout my posts, hope you like it. If you hate it, feel free to contact me for a refund.
Outline
For the question on Stack, the user wanted to cache a call to Get-ADGroup for some automation at her workplace. Also, it was good enough to cache the membership when PowerShell opened up. So we started by looking at her code.
She already had a function called Get-ADUsers, which wrapped the normal Get-ADGroup and Get-ADUser cmdlets; I’ll post a snippet here. The gist of this function was to retrieve all nested group members from a parent group.
function Get-ADUsers {
param (
[Parameter(ValuefromPipeline = $true, mandatory = $true)][String] $GroupName
)
[int]$circular = $null
# result holder
$resultHolder = @()
$table = $null
$nestedmembers = $null
$adgroupname = $null
# get members of the group and member of
$ADGroupname = get-adgroup $groupname -properties memberof, members
# list all members as list (no headers) and save to var
$memberof = $adgroupname | select -expand memberof
if ($adgroupname) {
if ($circular) {
$nestedMembers = Get-ADGroupMember -Identity $GroupName -recursive
$circular = $null
}
else {
$nestedMembers = Get-ADGroupMember -Identity $GroupName | sort objectclass -Descending
}
}
#...code continued...
She was looking for some place to cache hits to LDAP. Two lines jumped out at me.
13: $ADGroupname = get-adgroup $groupname -properties memberof, members
20: $nestedMembers = Get-ADGroupMember -Identity $GroupName -recursive
The top line is good to cache if we want this cmdlet to be fast when someone looks at multiple groups in a session. The bottom line is good to cache when we want fast results when the parent group often contains the same nested groups.
You might wonder at this point
why bother analyzing the problem, I wanna code!
This sort of analysis is good to perform before you just start coding. Ideally, our work should be done as part of a team, identifying pain-points and spending our efforts meaningfully.
You want at the end of the week, month and year to have a list of your achievements, and speeding up or improving the performance of something critical and meaningful will help you give your boss the ammo she needs to argue for a higher raise or promotion for you.
BusinessImpact image
TL/DR: Don’t waste engineering hours automating something painless or that no one cares about. Your efforts should be apparent and yell from the roof top “Yo, this Engineer is AWESOME, give him a raise!”
We will begin by caching the nested hits, line 20. Most organizations go BONKERS nesting groups inside groups inside groups, so if we can redirect some of those greedy LDAP hits to our speedy, snappy cache, we will get an immediate speed boost.
20: $nestedMembers = Get-ADGroupMember -Identity $GroupName -recursive
We’ll do this by replacing this function call with another call. We’ll name this Get-CachedADGroupMember
function Get-CachedADGroupMember($groupName){
$cacheKey = "cached_member_$($groupName)"
$cachedResults = Get-Variable -Scope Global -Name $cacheKey -ValueOnly -ErrorAction SilentlyContinue
if($null -ne $cachedResults){
"found cached result"
return $cachedResults
}
else{
"need to cache"
$results = Get-ADGroupMember -Identity $groupName -Recursive
Set-CachedGroupMembership -groupName $cacheKey -value $results
}
}
This is pretty straightforward. The code builds the name of a variable and then checks the environment to see if it exists. If it does, that variable is returned. If not, then we execute the operation and hand off the results to another cmdlet just to store the results. The storage command is very simple. (Hint: it will become less simple once we add storage to disk!)
Function Set-CachedGroupMembership($groupName,$value){
Set-Variable -Scope Global -Name $groupName -Value $value
return $value
}
Already, this will become noticeably faster because of all the cache hits. However, what if our cache becomes stale and we need to update it?
We can provide this feature by just passing in an -Update switch, adding this to our command's params.
function Get-CachedADGroupMember([string]$groupname, [switch]$update){
#...
if(($update) -and ($null -ne $cachedResults)){
Then, to force an update of the cache, we simply append -Update to our function call. Easy peasey!
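In practice that looks something like this; the group name is just an example:
# First call hits LDAP and caches; later calls in this session reuse the cached copy
Get-CachedADGroupMember -GroupName 'App_Server_Admins'
# Force a refresh when you know the membership just changed
Get-CachedADGroupMember -GroupName 'App_Server_Admins' -Update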
Time to modify this function to remove the inherent AD-based focus and turn it into a modular tool that can cache anything. Some small edits are all we need!
function Get-CachedOperation([String]$Name, [ScriptBlock]$command, [Switch]$Force){
$CommandName = "cached_$($Name)"
$cachedResults = Get-Variable -Scope Global -Name $CommandName -ErrorAction SilentlyContinue
if($force -or $null -eq $cachedResults ){
"need to cache, evaluating..."
$results = $command.Invoke()
New-Variable -Scope Global -Name $CommandName -value $results -Force
}
else{
"found cached result"
return $cachedResults
}
}
To actually use it, we use the following
>Get-CachedOperation -Name SlowCommand -command ([ScriptBlock]::Create({start-sleep 2;return 5}) ) | tee-object -var result
>$result.Value
5
One downside to our code as written is that there is no logic to rerun the operation if the results get too stale.
To add time awareness, the quickest way is to make a custom type that has the command name, scriptblock, results and an automatic timestamp. This is actually a perfect use case for PowerShell classes, which past me from like four years ago completely couldn’t understand. Aww, see how cute I was?
Link to my old post
So, here’s the class. We could make a new one if we wanted by calling its constructor directly, as shown just after the class definition.
class CachedOperation
{
# Automatic TimeStamp
[DateTime] $TimeStamp;
# Command Nickname
[string] $Name;
# Command Instructions
[ScriptBlock] $Command;
# Output, whatever it is
[psCustomObject] $Value;
#Constructor
CachedOperation ([string] $name, [ScriptBlock]$scriptblock)
{
$this.TimeStamp = [DateTime]::UtcNow
$this.Name = $name;
$this.Command = $scriptblock
$this.Value= $scriptblock.Invoke()
}
}
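To sanity-check the class, we can new one up directly; the name and scriptblock here are just stand-ins for a slow operation:
# The constructor runs the scriptblock and stamps the result with a UTC timestamp
$cached = [CachedOperation]::new('demo', { Start-Sleep -Seconds 1; Get-Date })
$cached.TimeStamp
$cached.Value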
Now to modify the function to work with this class.
function Get-CachedOperation([String]$Name, [ScriptBlock]$Command, [Switch]$Force){
$CommandName = "cached_$($Name)"
$cachedResults = Get-Variable -Scope Global -Name $CommandName -ErrorAction SilentlyContinue | Select -ExpandProperty Value
if($force -or $null -eq $cachedResults ){
Write-Verbose "need to cache, evaluating..."
$CachedOperation = [CachedOperation]::new($Name, $command)
New-Variable -Scope Global -Name $CommandName -value $CachedOperation -Force
$cachedResults = $CachedOperation
}
else{
Write-Verbose "found cached result"
}
return $cachedResults.Value
}
And in action:
Get-CachedOperation -Name MySlowCommand -command ([ScriptBlock]::Create({start-sleep 1;return 6}) )
VERBOSE: need to cache, evaluating...
6
PS C:\Users\Stephen> Get-CachedOperation -Name MySlowCommand -command ([ScriptBlock]::Create({start-sleep 1;return 6}) )
VERBOSE: found cached result
6
So in the last step, no value was created. We were merely setting up the scaffolding for the next, actually cool step.
Seems silly to have intermediate steps but in the real world, you’ll probably be following a flow like this, creating the scaffolding and supporting functions and then submitting them with their unit tests. Then once that passes muster, you introduce the small feature flag PR that flips things around and starts using that new code.
To actually do something useful, let’s add the time check to see how old the results are.
This is pretty easily done: when we retrieve a cached result, we’ll check to see how old it is, and if it’s older than our cutoff (two minutes in the example below, to make testing quick), we’ll rerun the operation.
function Get-CachedOperation([String]$Name, [ScriptBlock]$Command, [Switch]$Force){
$CommandName = "cached_$($Name)"
$cachedResults = Get-Variable -Scope Global -Name $CommandName -ErrorAction SilentlyContinue | Select -ExpandProperty Value
if($force -or $null -eq $cachedResults -or ($cachedResults.TimeStamp -le [DateTime]::UtcNow.AddMinutes(-2))){
if($null -ne $cachedResults -and $cachedResults.TimeStamp -le [DateTime]::UtcNow.AddMinutes(-2)){
Write-Verbose "Results are too old, reevaluating..."
}
else{
Write-Verbose "need to cache, evaluating..."
}
$CachedOperation = [CachedOperation]::new($Name, $command)
New-Variable -Scope Global -Name $CommandName -value $CachedOperation -Force
$cachedResults = $CachedOperation
}
else{
Write-Verbose "found cached result"
}
return $cachedResults.Value
}
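Putting it together: repeated calls inside the freshness window come back instantly, and you can still force a refresh at any time. The name and scriptblock below are just stand-ins for something slow:
# First call evaluates the scriptblock and caches the result
Get-CachedOperation -Name SlowAdQuery -Command { Start-Sleep 2; Get-Date }
# Calls within the window return the cached value; -Force (or a stale timestamp) re-runs it
Get-CachedOperation -Name SlowAdQuery -Command { Start-Sleep 2; Get-Date } -Force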
But one rough part of it is the syntax for Moq, which requires you to write a handler and specify each input argument, which can get pretty verbose and tiresome. To ease this up, try this function, which will take a method signature and convert it into a sample Mock.Setup or Mock.Verify block, ready for testing
]]>Wouldn’t it be great to have a way to keep them informed of when Daddy or Mommy is in a meeting? Something nice and big and obvious that they can just totally ignore, right?
That’s why I sought to design my own perfect on-air light, to automatically turn on when I joined Teams Meetings. Won’t you join me in this journey together, and you can build your own?
Great question, and if either of these describe you, you can probably just stop and buy one of the off the shelf products that answer this need.
But if you do want to make your own…read on!
With all of the products acquired, let’s get started.
This can be surprisingly hard. If you buy the three or five packs of the Wemo Mini Smart Switch, rectangular style, they will likely be the Mini.82C or F7C063. Depending on your luck and if you buy the bulk packaging, you might end up with ones like I got, which had Firmware so old the Wemo Smart app as of July 2020 would be unable to configure them.
If that happens to you, here’s how to get them going.
Do this for each switch to make your life easier.
I’m calling my device ‘MeetingLight’.
Before moving on, you should have one plug connected to a light or fan or whatever that responds when you turn it on and off with the Wemo app.
As a sneak preview, here’s what turning a light on from PowerShell when I join a meeting looks like!
If-This-Then-That is an awesome resource, an automation engine that provides endless capabilities and is really amazing and wonderful.
I like it. I like it a lot. https://www.youtube.com/watch?v=FD2qrBRy84k&feature=emb_logo
In this section, we’ll create a new flow we can use that starts with a Web Push and ends with asking Wemo nicely to do something for us.
Login to https://ifttt.com/ and click on Create.
Click on ‘This’ and choose ‘Webhooks’:
This is the icon you are looking for.
The Webhook logo is so pretty!
Select ‘Receive a web request’
This is so cool!
Next, choose ‘That’, where we’ll tell IFTTT what to do when this flow happens.
The User interface speaks to me! See, it’s the same logo but now it calls out that the flow begins with a Webhook. Excellent UX.
Search for Wemo Smart Plug and you’ll have to login to an oAuth process to connect the services together.
You’d pick your smart bulb, fan or crockpot if you were turning those on and off when entering a meeting…
Hm, maybe a flow to trigger my George Foreman grill to make some bacon for me?
There are a lot of possibilities here!
Now, pick the smart device we setup way back in section one to enact the action upon.
I am picking ‘Meeting light’.
Finally, click ‘Finish’ on the review and finish page, and go ahead and try it out to confirm your flow works.
💡 Do this one more time to setup a ‘TurnOffTheLight’ flow too! 🤓
My two flows are named meetingStart and meetingStop.
This part is so easy, still within IFTTT, click on ‘Documentation’ from the Maker:Webhooks page.
Clicking here on the Documentation button shows you how to formulate your requests to IFTTT.
I only drew a big box in the screen shot because I, embarrassingly, just couldn’t find it! The next page shows you your API key and how to trigger your events.
This will show you how to formulate your request and the URL to hit.
https://maker.ifttt.com/trigger//with/key/
But aren’t these tokens in the clear?
No, they are not. With HTTPS, as we have discussed before on this blog in The Case of the Spooky Certificate, the full URL path is encrypted along with the rest of the request. Only the target server name, in this case maker.ifttt.com, is transmitted in the clear.
Now let’s make these into the world’s ugliest PowerShell functions.
Function meetingStart { irm https://maker.ifttt.com/trigger/meetingStart/with/key/apiKeyGoesHere -method Post }
Function meetingStop { irm https://maker.ifttt.com/trigger/meetingStop/with/key/apiKeyGoesHere -method Post }
Here you will need the Microsoft Lync 2013 SDK. You don’t have to install it, just open the .exe with 7Zip, then manually run the x86-flavored .msi.
Or if you’re really cool, extract that too and just get this dll file, Assemblies\Desktop\Microsoft.Lync.Model.dll.
You can also just search for it on the web, some folks bundle it on Github with their projects.
Once you have that…
As of this writing, retrieving user presence through the Graph API requires special permissions. Some tenants, like your company's Office 365 tenant, might allow regular users a token to retrieve delegated info, but not all tenants do this. If they don't, then you may require Tenant Admin permissions to hit the Graph API and get presence state back. That felt like overkill just to turn on a light, so I looked to other options.
Wait, what is user presence?
It’s the Office Unified Communications (sometimes called Office UC) term for being Away, Present, Presenting, and so on.
Next up, I had poor luck using the modern OfficeUC SDK to connect directly to Teams to retrieve the status, and gave up after a few hours in the interests of staying true to the ‘pressure's on’ hackathon spirit.
So, to retrieve status, we will query it from Skype4Business! How elegant, right?
The root of our woes is that the presence of a person is protected info, and rightly so. Imagine if a vendor knew the second you sat down at your desk and could call you every time. It would get old, and fast.
To be trusted with presence info, apps like Office, Teams and Skype all had to do some heavy lifting to retrieve and set our Presence state, and we can only view that info about peers if we authenticate and use our account, or are federated, which means using an account. Again, heavy lifting.
So, in order for us to do it in code, here’s what we can do.
Add-Type -Path "C:\Program Files (x86)\Microsoft Office 2013\LyncSDK\Assemblies\Desktop\Microsoft.Lync.Model.dll";
#Gets a reference to the currently running Skype4Business client $lyncclient = [Microsoft.Lync.Model.LyncClient]::GetClient()
#Gets a reference to our special contact object from Skype $myContact = $lyncclient.Self.Contact;
#Calls our contact to update the status and retrieve an \`Availability\` property back $myState = $myContact.GetContactInformation("Availability")
See, even retrieving our own state results in a call that Lync/Skype4Business processes for us.
But it works! Now to bake the whole thing into some code to run…
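The original post embedded the full script; here's a minimal sketch of what that loop might look like, reusing the meetingStart / meetingStop functions and the $myContact reference from above. The sleep interval and the 'Busy'-style matching are assumptions, and depending on the SDK version the availability may come back as an enum name or a numeric code, so adjust the match accordingly:
$lastState = $null
while ($true) {
    # Ask Skype4Business for our current availability
    $availability = "$($myContact.GetContactInformation('Availability'))"
    # Treat anything that looks like a call or meeting as 'on air'
    $inMeeting = $availability -match 'Busy|DoNotDisturb|OnThePhone|InAMeeting'
    if ($inMeeting -and $lastState -ne 'On')      { meetingStart; $lastState = 'On' }
    if (-not $inMeeting -and $lastState -ne 'Off'){ meetingStop;  $lastState = 'Off' }
    Start-Sleep -Seconds 15
}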
And it works! When I join a call or a meeting, in just a few moments, the light outside my door turns on!
Ain’t she a beaut!
I realize that my instructions on how to actually make the On Air light fixture are akin to this.
My wife made the whole thing for me! She used a leftover children’s crafting lunchbox and some black and red vinyl for the graphic, which she cut out using a Cricut machine.
I’ll update this as I find better ways to do it, of course. Wait, before you leave, do you know of a better way!?
Share it in the comments or on our subreddit! Did you make your own? I’d love to see it!
Tag me on Twitter @FoxDeploy and I’ll retweet the coolest on-air lights folks create.
]]>I have really loved these last three years with #BigBank #SpoilersItWasWellsFargoAllAlong and made some great friends and had some awesome experiences creating and sharing sessions at MMS with my friends I made along the way.
My career for the last ten years has been focused on automating, deploying, and managing Microsoft technologies. And now, I’m going to get a chance to help work on them as well!
Starting May 18th, I am happily joining Microsoft’s Azure Compute team as a Developer. I’ll be remaining in Atlanta, and working from home for the foreseeable future.
What to Expect
This blog has always been a place for me to show you how I do it, and I will continue to do the same thing, with my own same flavor and perspective. All thoughts and perspectives will be my own and will not be my employer's.
I’ll update this blog in the coming weeks when I have tips to share about what I’ve been working on, or as post ideas strike me!
]]>This post is part of the Learning GUI Toolmaking Series, here on FoxDeploy. Click the banner to return to the series jump page!
In our previous post in the series, we took a manual task and converted it into a script, but our users could only interface with it by ugly manual manipulation of a spreadsheet. And, while I think sheetOps (configuring and managing a Kubernetes cluster with a GoogleSheets doc!) are pretty cool we can probably do better.
So in this post, I’ll show how I would typically go about building a PowerShell WPF GUI from an existing automation that kind of works OK.
To begin making a UI we need to start by analyzing which values a user will be entering, considering what inputs make sense for that, and then thinking if there is anything the user will need to see in the UI as well, so, looking back to the first post…
To begin with, users have their own spreadsheet they update like this; it’s a simple CSV format.
HostName,Processed,ProcessedDate
SomePC123
SomePC234
SomePC345
Previously, our users were manually adding computers to a list of computer names. That kind of scenario is best handled by the TextBox input. Or, if we hate our users, we can make them provide input with a series of sliders.
Me: The ideal phone number input control doesn’t exis– (Gif credit: Twitter)
So we need at least a TextBox.
We need a confirmation button too, to enter the new items. We also need some textblocks to explain the UI. Finally, a Cancel/Reset button to zero out the text box.
We should also provide feedback of how many items we see in their input, so we should add a label which we can update.
That brings us up to:
A note on TextBoxes: As soon as we provide TextBoxes to users, all kinds of weird scenarios might happen. Expect it!
For instance, users will copy and paste from e-mails in Outlook, or from spreadsheets in Excel. They might also type a list of computers in Notepad, separated by newline (\r\n) carriage returns. Or maybe they’re more of the comma-separated type, and will try to separate entries with commas. These are all predictable scenarios we should account for in our UI, so we should give the user some kind of confirmation of what we see from their typing in the TextBox, and our form should handle most of the weird things they’ll try.
That’s why we need Confirmation. If you provide UI without confirmation, users will hate you and e-mail (or worse, they might call you!!) for help, so be sure to do it the right way and think of their needs from the get go, or you will enjoy getting to hear from them a lot.
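Here's a small sketch of the kind of input normalization I mean (and roughly what the form wires up later on): split on commas and newlines, trim, and throw away anything too short to be a real host name. The sample text is just an example of a messy paste:
# $rawText stands in for whatever the user pasted into the TextBox
$rawText = "SomePC123, SomePC234`r`nSomePC345,`r`n"
# Split on commas and newlines, trim whitespace, drop anything too short to be real
$devices = $rawText.Split(',').Split([System.Environment]::NewLine).Trim().Where({ $_.Length -ge 3 })
"$($devices.Count) devices found: $($devices -join ', ')"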
Don’t make UI that will make your users hate you, like this one: https://twitter.com/FoxDeploy/status/1256953579186905090
With all of these components in mind, time to get started.
We’re going to open up Visual Studio, pick a WPF app and then do some drag and dropping. If you are getting a bit scared of how to do it, or what you should do to install it, check out some of the previous posts in my GUI Series, here!
You should end up with something like this:
Which will look like this when rendered!
Easily the ugliest UI we’ve done so far
To wire up the buttons, I wrote a few helper functions for the logic for the buttons, which look like this.
function loadListView(){
    $global:deviceList = New-Object -TypeName System.Collections.ArrayList
    $devices = Import-Csv "$PSScriptRoot\devices.csv" | Sort-Object Processed
    ForEach($device in $devices){
        $global:deviceList.Add($device)
    }
    $WPFdevice_listView.ItemsSource = $global:deviceList
}

function cancelButton(){
    $WPFok.IsEnabled = $false
    $wpfdeviceTextbox.Text = $null
    $wpflabelCounter.Text = "Reset"
}

$wpfdeviceTextbox.Add_TextChanged({
    if ($wpfdeviceTextbox.Text.Length -le 5){
        return
    }
    $WPFok.IsEnabled = $true
    $deviceTextbox = $wpfdeviceTextbox.Text.Split(',').Split([System.Environment]::NewLine).Where({$_.Length -ge 3})
    $count = $deviceTextbox.Count
    $wpflabelCounter.Text = $count
})

$WPFCancel.Add_Click({
    cancelButton
})

$WPFok.Add_Click({
    $deviceTextbox = $wpfdeviceTextbox.Text.Split(',').Split([System.Environment]::NewLine).Where({$_.Length -ge 3})
    ForEach($item in $deviceTextbox){
        $global:deviceList.Add([pscustomObject]@{HostName=$item})
    }
    Set-Content "$PSScriptRoot\devices.csv" -Value $($deviceList | ConvertTo-Csv -NoTypeInformation)
    cancelButton
    loadListView
})
To walk through these: we set up an ArrayList to track our collection of devices from the input file in loadListView, then define behavior in the $WPFok.Add_Click method to save the new items to the output .csv file. This is simple, and much harder to mess up than our previous approach of telling users to update a .csv file manually.
🔗Get the complete source here 🔗
You may also notice a new method of loading up the .XAML files.
[void][System.Reflection.Assembly]::LoadWithPartialName('presentationframework')

$xamlPath = "$($PSScriptRoot)\$((Split-Path $PSCommandPath -Leaf).Split(".")[0]).xaml"
if (-not(Test-Path $xamlPath)){
    throw "Ensure that $xamlPath is present within $PSScriptRoot"
}
$inputXML = Get-Content $xamlPath
$inputXML = $inputXML -replace 'mc:Ignorable="d"','' -replace "x:N",'N' -replace '^<Win.*', '<Window'
[xml]$XAML = $inputXML
After some time away from writing PowerShell GUIs, I now think it is unnecessarily verbose to keep your .xaml content within the script, and now recommend letting your xaml layouts live happily next to the script and logic code. So I’ve modified the template as shown here, to automatically look for a matching-named .xaml file alongside the script. Simple and easy to read!
And that’s that! Was this the world’s best GUI? Yes. Yes of course it was!
Join us next time where we explore a whole new world (don’t you dare close your eyes) of ASP.NET Core as an alternative way of approaching automation.
If you’re still looking for something to do, try this out this great walkthrough of terrible UI traits by a UI design consulting firm. Whatever you do, don’t do this in your UI and you’ll be off to a good start.