Let’s Make A Development Environment with PowerShell

Built on the .NET Framework, Windows PowerShell is a task-based command-line shell and scripting language; it is designed specifically for system administrators and power-users, to rapidly automate the administration of multiple operating systems (Linux, macOS, Unix, and Windows) and the processes related to the applications that run on those operating systems.

A big difference between PowerShell and other scripting languages is that it is fully object-based rather than text-based. It is therefore important to keep in mind that what you see as output on your screen is only a representation of the object, not the object itself.

More recently, Microsoft has even made PowerShell open source with PowerShell Core, a cross-platform (Windows, Linux, and macOS) iteration that lets you take full advantage of an automation and configuration tool/framework that works well with your existing tools and is optimized for dealing with structured data (e.g. JSON, CSV, XML), REST APIs, and object models. It includes a command-line shell, an associated scripting language, and a framework for processing cmdlets.

In this post I will explain the basic steps to set up what I would call a “sane” working environment, which arguably gives you an experience similar to a Bash shell.

Specifically, I will discuss the following tools/modules:

  • ConEmu
  • Environment Settings
  • PowerShell Profile

ConEmu

Like many other scriptwriters and developers, I spend quite a bit of time in command line applications (Windows CMD, PowerShell, Terminal on macOS, etc.). Unfortunately, these applications don’t offer a lot in terms of customization. ConEmu is a console emulator with tabs and panes, which is great for those who want easier multi-tasking. It’s a highly customizable, tabbed console emulator that lets you run any shell you want.

Install ConEmu

The first step is to download the latest version of ConEmu. Pick the installer for either the latest preview or stable version.

Run the installer, choose the 32-bit or 64-bit option (depending on which version of Windows you have installed), and keep all the default options.

Configure ConEmu

Once installed, start it up. The first time you run it, you’ll be presented with a fast configuration screen. Everything here can be changed later but it’s a good place to start.

For usage with PowerShell I use the following settings in ConEmu:

  • Startup
    • Tasks
      • Select Item 5: {Shells::Powershell}
      • In the Commands field I set my environment to:
        • “C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -new_console:d:F:\GitHub\PowerShell”

  • Environment > “Set up environment variables” = “set HOME=F:\GitHub\PowerShell”

  • Features
    • Colors
      • <Monokai>

When we break down the above settings, we are ultimately telling ConEmu to ensure that a new shell or tab is spawned with its starting directory set to the folder that contains my PowerShell scripts, and that ConEmu is able to run scripts that reside under this path.

Environment Settings

The PSModulePath environment variable stores the paths to the locations of the modules that are installed on disk. Windows PowerShell uses this variable to locate modules when the user does not specify the full path to a module. The paths in this variable are searched in the order in which they appear.

When Windows PowerShell starts, PSModulePath is created as a system environment variable with the following default value: $home\Documents\WindowsPowerShell\Modules; $pshome\Modules.

Only one environment variable needs to be set for PowerShell, specifically the “PSModulePath” variable. This variable allows the usage of modules (and the functions within those modules) straight from any PowerShell CLI. It doesn’t matter whether it is powershell.exe, PowerShell ISE, or a custom execution from a different path.

To set the variable, do the following:

Source: Microsoft TechNet
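As a sketch, you could append a custom module directory to the machine-wide PSModulePath like this (the `F:\GitHub\PowerShell\Modules` path is just an example from my setup; adjust it to your own layout):

```powershell
# Read the current machine-wide PSModulePath
$current = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")

# Append a custom module directory (example path - adjust to your own layout)
$custom = "F:\GitHub\PowerShell\Modules"
if ($current -notlike "*$custom*") {
    [Environment]::SetEnvironmentVariable("PSModulePath", "$current;$custom", "Machine")
}

# New shells will pick up the change; verify with:
$env:PSModulePath -split ';'
```

New console windows (including ConEmu tabs) will see the updated value; already-open sessions keep the old one.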

PowerShell Profile

If you find yourself using PowerShell as a command line shell, it may be useful to store your functions and customizations in a profile that gets loaded every time you load the console. PowerShell also allows you to specify a set of commands which will run before spawning a new shell. This is convenient for pre-loading modules, setting aliases, and setting the path of the new shell.

Typically (especially if you are doing this for the first time on your workstation), there is no profile file created that pre-loads modules, etc. But to be safe, I would run the following cmdlet to verify. If it comes back as False, then you know you will need to create one:
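The check is a simple path test against the automatic `$PROFILE` variable:

```powershell
# Returns True if a profile script already exists for the current user and host
Test-Path $PROFILE
```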

If one has been created already for some reason, and you want to create a new one right away with no regard for the existing profile’s contents, you can run:
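A one-liner takes care of it:

```powershell
# Creates the profile script (and any missing parent folders), overwriting any existing one
New-Item -Path $PROFILE -ItemType File -Force
```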

Once the file has been created, open up your favorite text editor *Cough* VS Code w/ PowerShell Extension *Cough* and enter something along the lines of:
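For illustration, a minimal profile might look like this (the module name, alias, and starting path below are just examples, not requirements):

```powershell
# Example profile.ps1 - pre-load modules, set aliases, and set a starting path

# Pre-load modules you use often (example: the ActiveDirectory module)
Import-Module ActiveDirectory -ErrorAction SilentlyContinue

# Handy aliases
Set-Alias -Name np -Value notepad.exe

# Start every session in your scripts directory (example path)
Set-Location F:\GitHub\PowerShell

# A friendlier prompt showing just the current folder name
function prompt { "PS [$(Split-Path -Leaf (Get-Location))]> " }
```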

Summary

Now we are talking! Playing around with the settings, I cleaned up the console’s default look and tabs to give it a nice clean appearance while maintaining the search menu in the top right-hand side of the shell:

There are plenty more tweaks you could do to tailor ConEmu to your preferences (the full documentation can be found here), but I find these settings are a good starting point for creating a sleek-looking, effective command line application that is light years ahead in terms of flexibility and customization for your PowerShell command line adventures.

Inspired by: Sudesh JetHoe & Mike Larah

Connecting to Microsoft Online Services via PowerShell Sessions

It can be fairly annoying to have to run several different sets of commands in your PowerShell console window just to connect to the service online that you are working on.

John Weber and I were having a discussion about how annoying it was, and he and I couldn’t help but ask: “Is there an easier way?”

Utilizing Windows PowerShell, we came up with the idea of entering credentials a single time and, in turn, logging into each environment we manage most often, each in its own respective PS console window.

I use ConEmu as my PS CLI interface of choice rather than the native console, and one of the best features of ConEmu, in my opinion, is the tabbed console windows. I like to separate my work so I can better keep track of the commands that I am running, especially when I am running a lot of them. Establishing connections in separate console windows sounded like a great idea.

Therefore, using the powers of PowerShell (pun intended) I put together a script to auto-magically do this for us.

If you would like to download the script, click the link below:

Download Link

“Playtime is over, Star Fox!” Err I mean.. “StarCom!”

Star Fox on N64… those were the good ol’ days. Not only do we have to move on from those lengthy joyful summer days playing Star Fox on N64, but also from our free SSL CA friends at StarCom.

StarCom was bought out by a Chinese CA (WoSign) and was caught backdating certificates and issuing certificates for domains that people didn’t own. StarCom certificates are no longer trusted in Firefox and Chrome. They are in the process of re-issuing new root certs, but for now stay far, far away from them…

StarCom is (well, was) the only competition to Let’s Encrypt in the free certificate space. It is far and away the cheapest direct provider of wildcard certificates (which are impossible to get for free), unless you move into reseller territory. And even their free certificates last four times as long, and don’t require the use of certbot.

Certainly, Let’s Encrypt works great for a lot of peoples’ needs. But for those it doesn’t (and there’s more of them than you might think), this is seriously bad news.

A real bummer – I always liked StarCom because of their approach to charge for verification (with increasing costs for each higher trust level) but not for issuing certs (while still manually checking every cert request, at least for any OV&EV cert in my case).

I used StarCom’s certificates in my labs and even suggested them to a few customers in the past to get around those hefty price tags associated with SSL certificates.

Now that StarCom is SOL, my hand has been forced and I must renew my certificates before my browser starts to yell at me… Let’s Encrypt, let’s see what you got!

Stumbling around on the interwebs to make Let’s Encrypt work for me, I found a nifty GitHub repository, which its author describes as: “A .NET library and client for the ACME protocol (Let’s Encrypt CA).” The handy QuickStart guide served me well, but I want to expand on some of the gotchas that I ran into:

  • The Certificate is only valid for 90 days. You will have to generate a new certificate via this process below to have a valid certificate after the validity period expires.
  • The Root CA is DST Root CA X3
  • The Intermediate CA is Let’s Encrypt Authority X3
  • Signature Algorithm is SHA-256 with RSA Encryption
  • Key Size is 2048 bits
  • Valid CA in common web browsers such as Chrome, FireFox, IE etc.
  • You can have up to 100 SANs
  • Let’s Encrypt Rate Limits

1. ACMESharp Installation

  • First, install the ACMESharp PowerShell module:


  • The workstation I was running on did not like that the module was going to make the command ‘Get-Certificate’ available even though it already was. Since I was doing all this work on a throwaway VM, I chose to AllowClobber.


  • Then, per the QuickStart guide, I loaded the module:
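Putting Step 1 together, the install and import look something like this (the -AllowClobber switch matches my throwaway-VM situation; omit it if you have no conflicting Get-Certificate command):

```powershell
# Install the ACMESharp module from the PowerShell Gallery
Install-Module -Name ACMESharp -AllowClobber

# Load it into the current session
Import-Module ACMESharp
```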

2. Vault Initialization

  • Let’s Encrypt stores your certificates and related artifacts in what they call a Vault. To use Let’s Encrypt, you will have to start by initializing a Vault.
    • Note, if you run as Administrator, your Vault will be created in a system-wide path, otherwise it will be created in a private, user-specific location.
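Initializing the Vault is a single cmdlet:

```powershell
# Creates the local Vault (system-wide if run elevated, per-user otherwise)
Initialize-ACMEVault
```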

3. Register

  • Register yourself with the Let’s Encrypt CA:
    • Provide a method of contact, e.g. an email (note, LE does not support the tel: contact method)
    • Accept their Terms-of-Service (TOS).
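The registration step looks like this (the email address is a placeholder):

```powershell
# Register with the Let's Encrypt CA and accept the Terms-of-Service
New-ACMERegistration -Contacts mailto:somebody@example.com -AcceptTos
```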

4. Set your Domain Identifier

  • Submit a DNS domain name that you want to secure with a PKI certificate.
  • If you want to create a SAN certificate, you will have to repeat this step and Steps 5 and 6 for each “myserver.example.com” you want to include. I recommend writing out all of your PowerShell commands ahead of time to ease this tedious process.
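Submitting the identifier looks like this (the domain is a placeholder; the alias is how later steps refer back to this identifier):

```powershell
# Submit the DNS name you want a certificate for, under a local alias
New-ACMEIdentifier -Dns myserver.example.com -Alias dns1
```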

5. Prove Domain Ownership – DNS Challenge

  • The Quick-Start guide found on the ACMESharp GitHub includes 3 methods to prove domain ownership. For my sake, the easiest way to prove I owned my domain was to complete what is referred to as a DNS Challenge.
  • If you want to handle the DNS Challenge manually, use the following cmdlet to print out the instructions that you need to follow on your DNS server/service of choice. Implement the steps described in the instructions before moving on to the next step.
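With the manual handler, the cmdlet prints the DNS TXT record name and value you need to create at your DNS provider:

```powershell
# Print the DNS TXT record to create for the dns-01 challenge
Complete-ACMEChallenge dns1 -ChallengeType dns-01 -Handler manual
```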

6. Submit the Challenge Response to Prove Domain Ownership

  • Once you have handled the Challenge using one of the methods in Step #5, you need to let the LE server know so that it can perform a verification.
  • I chose to use the DNS Challenge method, so I used this cmdlet to submit my challenge:
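Submitting the response is done against the same alias and challenge type:

```powershell
# Tell the LE server to go verify the DNS record you created
Submit-ACMEChallenge dns1 -ChallengeType dns-01
```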

7. Verify the Status of the Challenge

  • Once the Challenge response is submitted, the validation usually takes anywhere from seconds to minutes. I checked the status of the validation for my domain using the following command.

  • Until the Challenge has been verified, you should see a status of pending.
  • If the Challenge fails for any reason you will see a status of invalid. At this point, you cannot re-attempt the same exact Challenge without first Submitting a new DNS Identifier (Step #4).
  • If the Challenge is successful, you will see a status of valid.


  • Once the Challenge has been successfully validated, you can check the overall status of the Domain Identifier, which should be valid as well.
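Both status checks use Update-ACMEIdentifier; the first inspects the dns-01 challenge itself, the second the overall identifier:

```powershell
# Check the status of the dns-01 challenge (pending / valid / invalid)
(Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges |
    Where-Object { $_.Type -eq "dns-01" }

# Check the overall status of the Domain Identifier
Update-ACMEIdentifier dns1
```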

8. Request and Retrieve the Certificate

  • After you have proved your ownership of the domain name you wish to secure, you can create a new PKI certificate request, and then submit it for issuance by the LE CA.
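Request, submission, and retrieval look something like this (cert1 is just a local alias for the new certificate):

```powershell
# Generate a new key pair and certificate request for the identifier
New-ACMECertificate dns1 -Generate -Alias cert1

# Submit the request to the LE CA for issuance
Submit-ACMECertificate cert1

# Refresh the local copy; repeat until the issuer details are populated
Update-ACMECertificate cert1
```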

Subject Alternative Names (SAN)

If you want to generate a CSR that lists multiple names, you can use the Subject Alternative Names extension of the PKI certificate request to list multiple additional names other than the primary Subject Name. To do so you specify the -AlternativeIdentifierRefs option with a list of one or more additional Identifier references.
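A SAN request just adds the extra identifier aliases; in this sketch, dns2 and dns3 are assumed to be identifiers you already created and validated via Steps 4 through 6:

```powershell
# Request a certificate whose SAN list covers additional validated identifiers
New-ACMECertificate dns1 -Generate -AlternativeIdentifierRefs dns2,dns3 -Alias san-cert1
Submit-ACMECertificate san-cert1
```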

9. Export the Certificate

I personally ended up exporting all of the items below for my certificate to give myself as much flexibility as possible. You can export the certificate in a variety of ways, such as the following:

Export Private Key

You can export the private key in PEM format:
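For example (the output path is arbitrary):

```powershell
# Export the private key in PEM format
Get-ACMECertificate cert1 -ExportKeyPEM "C:\certs\mycert.key.pem"
```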

Export CSR

You can export the Certificate Signing Request (CSR) in PEM format:
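For example:

```powershell
# Export the CSR in PEM format
Get-ACMECertificate cert1 -ExportCsrPEM "C:\certs\mycert.csr.pem"
```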

Export Certificate Issued By LE

You can export your public certificate that was signed and issued by the Let’s Encrypt CA in PEM or DER format:
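For example:

```powershell
# Export the issued certificate in PEM format
Get-ACMECertificate cert1 -ExportCertificatePEM "C:\certs\mycert.crt.pem"

# ...or in DER format
Get-ACMECertificate cert1 -ExportCertificateDER "C:\certs\mycert.crt"
```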

Export Issuer Certificate

You can export the public certificate of the issuer, that is, the CA’s signing intermediary certificate:
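For example:

```powershell
# Export the issuing intermediate CA certificate in PEM format
Get-ACMECertificate cert1 -ExportIssuerPEM "C:\certs\mycert-issuer.crt.pem"
```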

Export PKCS#12 (PFX) Archive

You can export the certificate and related assets in PKCS#12 archive (.PFX used by Windows and IIS):
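For example (the password is a placeholder; pick your own):

```powershell
# Export everything as a password-protected PKCS#12 archive for Windows/IIS
Get-ACMECertificate cert1 -ExportPkcs12 "C:\certs\mycert.pfx" -CertificatePassword 'P@ssw0rd!'
```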

Final Thoughts

All in all, a fairly painless procedure to get yourself a free 90-day trusted SSL certificate for your labs and anything else you see fit, so long as you can live with renewing once every three months. Let’s Encrypt is still fairly new, and may have some exciting stuff for us in the near future as it relates to free SSL certificates. Until then, I’ll be harnessing the powers of PowerShell.

Installing PowerShell and VS Code on Mac OSX

“If someone told me 10 years ago that we (Microsoft) would offer SQL Server on Linux, I would have laughed them out of the room.” – Presenter while at Microsoft HQ Redmond, WA

It’s 2016 and Microsoft is breaking down the walls to its golden garden. With recent announcements coming from all across the board at Microsoft, it really is an interesting time. Never before have the tech giants been so open to sharing software across distributions and vendors.

In the early 1980s, Richard Matthew Stallman began a movement within the software industry, preaching that software should be free.

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”

Why would anyone in their right mind want to give away their software for free? I think the better question is: “How do we enable ourselves as developers and users, while protecting ourselves at the same time?” A user of the software should never be forced to deal with a developer who might or might not support that user’s intentions for the software. The user should never have to wait for bug fixes to be published. Code developed under the scrutiny of other programmers is typically of higher quality than code written behind locked doors (we have been looking at you, Apple and Microsoft).

One of the great benefits of open source software comes from the users themselves. If a user desires or needs a new feature, they can add it to the original program and then contribute it back to the source so that everyone else can benefit from it. This is the very reason the popularity of GitHub has risen so dramatically in the last couple of years.

This line of thinking sprung a desire to release a complete UNIX-like system (Linux) to the public, free of license restrictions.

Just a couple of weeks ago Microsoft released this blog article announcing its latest “customer obsessed” move, to further convince us that “Microsoft loves Linux.” PowerShell and Visual Studio Code can now be run on Linux and macOS.

It’s about time.

I have been running OSX Sierra on my 2015 13-inch MacBook Pro for as long as the public beta has been available, and now I can edit and run my PowerShell scripts directly in my Terminal.

Installing PowerShell

First, you will need to download the PKG package powershell-6.0.0-alpha.9.pkg from the releases page onto your macOS machine.

Either double-click the file and follow the prompts, or install it from the terminal.
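The terminal route is a one-liner (adjust the filename to whatever release you downloaded):

```shell
# Install the downloaded package to the system root
sudo installer -pkg powershell-6.0.0-alpha.9.pkg -target /
```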


Once the package has been installed, running the command powershell puts you directly into a PowerShell session in your Mac OSX terminal:


$PSHOME is /usr/local/microsoft/powershell/6.0.0-alpha.9/, and the symlink is placed at /usr/local/bin/powershell.

Installing VSCode

Installation

  1. Download Visual Studio Code for Mac OS X.
  2. Double-click on the downloaded archive to expand the contents.
  3. Drag Visual Studio Code.app to the Applications folder, making it available in the Launchpad.
  4. Add VS Code to your Dock by right-clicking on the icon and choosing Options, Keep in Dock.

Tip: If you want to run VS Code from the terminal by simply typing ‘code’, VS Code has a command, Shell Command: Install ‘code’ command in PATH, to add ‘code’ to your $PATH variable list.

After installation, launch VS Code. Now open the Command Palette (⇧⌘P) and type shell command to find the Shell Command: Install ‘code’ command in PATH command.


After executing the command, restart the terminal for the new $PATH value to take effect. You’ll be able to simply type ‘code .’ in any folder to start editing files in that folder.

Installing PowerShell Extension

Launch the Visual Studio Code app (on Windows, you can do this by typing code in your PowerShell session), then:

  • Press F1 (or Ctrl+Shift+P), which opens up the “Command Palette” inside the Visual Studio Code app.
  • In the command palette, type ext install and hit Enter. It will show all Visual Studio Code extensions available in the marketplace.
  • Choose PowerShell and click on Install; you will see something like below


  • After the install, you will see the Install button turns to Enable.
  • Click on Enable and OK
  • Now you are ready for editing. For example, to create a new file, click File->New. To save it, click File->Save and then provide a file name, let’s say “helloworld.ps1”. To close the file, click on “x” next to the file name. To exit Visual Studio Code, File->Exit.

For further information, check out Microsoft’s documentation on GitHub:

Managing contact objects in AD when Exchange was never there

Have you noticed that if you have never had an Exchange server in your Active Directory environment, that it becomes extremely annoying to manage contact objects? I just recently came across this nuisance.

Recently tasked with Domino Notes to Exchange Online migrations, I had to create contact objects for the Notes mail users that contained the relevant attributes needed to migrate, which presented me with a truckload of contacts to validate.

Thanks to the powers of automation, the go-to cmdlet that comes to mind when opening up my PowerShell console, equipped with the AD Module, is:
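The one I instinctively reached for was the Exchange-flavored cmdlet:

```powershell
# An Exchange Management Shell cmdlet - without Exchange in the environment,
# PowerShell reports that the term is not recognized
Get-Contact
```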

Psych!


But…. But Microsoft……

Turns out, if you want to manage contact objects in AD using PowerShell without the availability of the EMS, the cmdlet you actually want is:
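That cmdlet is Get-ADObject; filtering on the contact object class gives you the equivalent view. A sketch:

```powershell
# Enumerate contact objects using only the AD module
Get-ADObject -Filter 'objectClass -eq "contact"' -Properties mailNickname, proxyAddresses
```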

I am sure Microsoft has their reasons, but since I wanted to manage ADSI attributes of my contacts, it left me scratching my head. How am I to bulk change attributes for contact objects using the AD Module in PowerShell?

In my case, I needed to fix the mailNickname attribute as it had been appended with the Notes users e-mail address instead of just the syntax of the username and soon to be mail user alias in Exchange Online.

Well, luckily I was able to put something together after prowling the all-knowing Google for answers.

Using the Get-ADObject cmdlet, I was able to target the OU containing the contacts I wanted to manage and select the Name, ObjectGUID, and mailNickname attributes for manipulation. I pipe that into a Set-ADObject for each (fun fact: the “%” sign is an alias for ForEach-Object) of the contacts to replace the mailNickname with whatever it is currently set to, minus the “@” symbol and anything that follows it.
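A sketch of that pipeline (the OU path is a placeholder for your own):

```powershell
# Target the OU holding the contacts and trim everything from '@' onward in mailNickname
Get-ADObject -Filter 'objectClass -eq "contact"' `
             -SearchBase "OU=Contacts,DC=example,DC=com" `
             -Properties Name, mailNickname |
    % { Set-ADObject -Identity $_.ObjectGUID -Replace @{
            mailNickname = ($_.mailNickname -split '@')[0]
        } }
```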

BAM! 

One task complete.

The other attribute fix identified itself as the removal of a proxy address from the proxyAddresses ADSI attribute for each contact object. For this task, I was able to target a specific OU containing all the contact objects in question and remove the bad apple proxy address with the following:

Calling in the LDAP ADSI provider and specifying a ForEach, I was able to get each contact (in my case, only contact objects existed in the OUs I targeted) that had an address like “*@notes.domain.com”. I then pipe that into a ForEach over each contact and call PutEx with control code 4 (ADS_PROPERTY_DELETE) against the proxyAddresses attribute, which strips the matching value out of the list.
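A sketch of that removal (the domain, OU path, and notes.domain.com suffix are placeholders):

```powershell
# Find contacts carrying a Notes proxy address and delete just that value
Get-ADObject -Filter 'objectClass -eq "contact"' `
             -SearchBase "OU=Contacts,DC=example,DC=com" `
             -Properties proxyAddresses |
    ForEach-Object {
        $contact = [ADSI]"LDAP://$($_.DistinguishedName)"
        $bad = @($_.proxyAddresses | Where-Object { $_ -like "*@notes.domain.com" })
        if ($bad.Count -gt 0) {
            # PutEx 4 = ADS_PROPERTY_DELETE: remove the listed values from the attribute
            $contact.PutEx(4, "proxyAddresses", $bad)
            $contact.SetInfo()
        }
    }
```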

If that went way over your head like it did for me the first time I saw the method, don’t worry. Microsoft has a great article that clarifies this in much more detail:

HOW TO: Use ADSI to Set LDAP Directory Attributes

PowerShell managed to save me from hours of manual attribute changes after all, even when Exchange was never there.

As always, happy scripting!

Exchange 2013 – Importing Certificates

In Exchange 2013, installing certificates is fairly straightforward. With a CAS/Mailbox duo set up across three servers in this configuration, I encountered one problem with the certificates that I thought was worth noting.

I wanted to add a certificate to my lab. So I went to one of the free sites (in my case, StartSSL) and got myself a certificate for my lab domain, after submitting my certificate request from my first Exchange server.

Once I had finally received my certificate, I used the Exchange EAC web GUI to import my certificate onto all of my Exchange servers. Exchange seemed happy, notifying me that the import was successful to all servers.

At this point I ran into a weird problem. I went to check the certificate I had just imported on all of my servers through the GUI, and found that the certificate was only visible on my first Exchange server. This was weird to me because I had not received any errors during the import. Naturally I checked the logs, and found no errors on any of my servers.

Logically the next step that came to mind was to try to re-import the certificate on the servers that it was not showing on.

What’s interesting is that when I tried to import the certificate directly onto the other servers, it still wasn’t showing up. Exchange told me that the certificate had already been installed, citing its thumbprint. This continued even after exporting and removing the certificate from each of the servers and starting back from square one.

After getting frustrated, I decided to do a little more digging. I loaded up MMC with the Certificates snap-in to take a closer look at the certificate I had just imported on my Exchange server. I was able to see the certificate, but on closer inspection I saw that it didn’t have the private key installed, which in turn makes the certificate useless.

With this newfound information, I removed the certificate from all stores on all of my lab’s Exchange servers. Before removing it from my first Exchange server, however, I exported a copy which included the private key and all extended properties. I then decided to try importing the certificate using the Exchange shell so I could see more of what might be happening on the back end.

What I found worked, could be summarized as follows:

If you have already installed a certificate like me, make sure you have already removed the certificate from each server you imported it to.

Import the certificate using the Import-ExchangeCertificate PowerShell cmdlet:

This command will import the certificate from a file called mycert.pfx on a file share called share (use the UNC path here). The Password parameter prompts for a password, which is required for a file storing both a certificate and its private key.
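In Exchange 2013 the cmdlet takes the file contents as a byte array; a sketch against the share described above (the server name and UNC path are placeholders):

```powershell
# Import the PFX (certificate + private key) onto a server from a UNC path
Import-ExchangeCertificate -Server EX01 `
    -FileData ([Byte[]](Get-Content -Path \\fileserver\share\mycert.pfx -Encoding Byte -ReadCount 0)) `
    -Password (Get-Credential).Password
```

Repeat the command with each -Server value so every server receives the key pair, not just the first one.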

Then use the Get-ExchangeCertificate cmdlet to retrieve a list of the installed certificates. I used the -Server <Server Name> parameter to view each individual server and verify that the certificate was installed.
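Verification per server looks like this (the server name is a placeholder):

```powershell
# List installed certificates on a specific server and confirm the thumbprint
Get-ExchangeCertificate -Server EX01
```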

Next, use the Enable-ExchangeCertificate cmdlet to assign the certificate its roles within Exchange.

This command will enable the certificate with the thumbprint you specify to secure communication over IMAP, POP3, IIS, and SMTP.
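Assigning services is done by thumbprint (the thumbprint below is a placeholder; use the one reported by Get-ExchangeCertificate):

```powershell
# Bind the certificate to the services that should use it
Enable-ExchangeCertificate -Thumbprint 5113AE0233A72F9D2B5C6D7A0939C09369571D30 `
    -Services POP,IMAP,IIS,SMTP
```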

At this point, you should be practically ready to go with your Exchange certificate. After refreshing the view in the Exchange admin console, you should see the certificate you’ve installed listed on all servers you imported it on.

I will note, however, that importing the certificate through PowerShell does not allow you to give it a friendly name, so you will have to assign one through the EAC if you want to easily identify it that way.