
Concentrated
21 Windows PowerShell Tips

By Don Jones

In this guide, I'm collecting some of the best tips that have been published in the blog at ConcentratedTech.com over a period of six months. You could simply read this straight through - especially if you're having trouble sleeping at night! But I recommend that you tackle one tip a week. Work with it, try and use it in real life, and see what you can make of it. In time, by learning in chunks like this, you'll be using PowerShell more effectively than ever.


Table of Contents

Fun WMI Trick Proves You Should Read the Help
Remoting in v2
Fun with RegEx's
Select-String and Regular Expressions
RegEx and Switch
Okay, I've got a shell. What do I do now?
PowerShell Misconceptions
Aliasing Properties
Property Tricks and Parameter Binding
Emitting Objects in PowerShell (aka, the Evolution of Functions)
Faster PowerShell Startup
Do I need .NET, WMI, COM, and all that to use PowerShell?
Write-Host vs. Write-Output
Pipeline Binding
Here-Strings
The Formatting Rules
Error Handling in PowerShell
WMI System Properties
Knowing the WMI Server Name
Dates and Times
Target Computers from AD


Fun WMI Trick Proves You Should Read the Help

Did you ever read the help for Windows PowerShell's Get-WmiObject cmdlet? I mean really read the help?

You've probably noticed the -computerName parameter, but pay close attention to how it's listed in the help:

[‐computerName string[]] 

The outer [square brackets] indicate that the parameter is optional, but the string[] indicates that the parameter can accept either a single string - that is, a single computer name - or an array of strings. This has some pretty cool uses. For example, create a list of computer names, and then retrieve service pack inventory information from each:

$names = "server1","server2","server3" 
Gwmi Win32_OperatingSystem ‐comp $names | ft 
CSName,BuildNumber,ServicePackMajorVersion ‐auto 

You could also pass the output of any cmdlet to -computerName, provided the cmdlet produces an array of computer names. Suppose I have a text file named names.txt, and it contains one computer name per line. I could then use this:

Gwmi Win32_OperatingSystem ‐comp (gc names.txt) | ft 
CSName,BuildNumber,ServicePackMajorVersion ‐auto 

Very cool. Of course, this trick is useful, but the real point is to REALLY READ cmdlet help pages, and take some time to try and understand the implications and uses of each thing you see.


Remoting in v2

If you haven't taken the new PowerShell v2 CTP for a test drive, you really
should. The remoting (which at this point is only working on Vista and
Win2008) is just awesome. The PowerShell team posted a good quick-start
to remoting that you'll want to try; I want to shed some more light on
what's happening in it.

PowerShell remoting uses WinRM (Windows Remote Management), a new


technology first shipped with Win2008 that's specifically designed for
remote management. There's a CTP of a new version of WinRM you'll need
to use with PowerShell v2, though.

Right now, there are two main models of remoting you'll want to explore.
The first is called fan-out, and the second I call point-to-point. There are a
few ways you can use these features; I'm going to focus on a single way just
to provide some clarity to the discussion.

The first new concept you have to learn is the runspace. Essentially, a
runspace is just an instance of PowerShell, such as the interactive runspace
you have when you're using the shell in its console host window. With v2,
you can also spin up remote instances, provided v2 (and WinRM) is
installed on the remote computer. Note that WinRM is very
authentication-sensitive - if you're not in a domain where all computers
have a set of shared credentials to work with, then each machine running
WinRM will need to trust the machines you plan to issue commands from.
This is a common test lab scenario, but making the configuration change is
the most non-obvious thing I've ever seen. You have to run something like:

Winrm set winrm/config/client @{TrustedHosts="(address)"} 

Where (address) is either a hostname or an IP address. Once your remote servers are set up to trust your client (e.g., (address) needs to be your client
name or IP), you're ready to begin.

I like to start by spinning up a collection of remote runspaces. Assuming I've got computer names in an array:

$names = "server1","server2","server3" 

OR I load a text file of names (with the file containing one computer name
per line):
$names = gc names.txt 

OR I use the Quest AD cmdlets to perhaps query a bunch of computers from the domain:

$names = Get‐QADComputer ‐city LasVegas | select name 

(The syntax is something like that)... then I can get a runspace on each of
them by using this:

$rs = new‐runspace ‐comp $names 

I can add the -credential parameter to specify a single set of alternate credentials which can be used to connect to each computer, such as a
Domain Admin account. With those runspaces in hand, I can do two
distinct things. The first is the fan-out scenario, where I push a command
(or even an entire script) out to multiple computers at once:

Invoke‐Command { Get‐Process } ‐runsp $rs 

This runs the command synchronously, meaning I have to wait for it to finish on all computers. You could put ANYTHING, as complicated as you
like, in place of Get-Process. The command runs locally on each of the
computers in the $rs variable - that is, on each of the remote runspaces you
created. If you want the command to run in the background, just add the -
AsJob parameter.
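
Here's a quick sketch of how alternate credentials and background execution fit together. This uses the CTP syntax described above, and the account name is just a placeholder:

# Prompt once for a Domain Admin (or other) account
$cred = Get-Credential COMPANY\Administrator

# Spin up runspaces on each computer in $names using those credentials
$rs = New-Runspace -comp $names -credential $cred

# Fan the command out and let it run as a background job
Invoke-Command { Get-EventLog System -Newest 10 } -runsp $rs -AsJob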

The other thing you can do is use the point-to-point connection. For this,
you need to get a single runspace, not a collection of them. You can do this
by either referring to a single computer:

$rs[0] 

Gets the first runspace, for example. Or, you can create a unique variable
for each computer. I like to do this because it helps me keep track of what's
what. So, I could either do this:
$server1 = new‐runspace ‐comp "Server1" 

Or, if Server1 was already in a collection of runspaces (say, the first item in
the collection):

$server1 = $rs[0] 

Then I can start interactively using the remote shell - a la SSH:

Push‐Runspace $server1 

The shell prompt even indicates what remote computer you're using. To get
back to your local runspace:

Pop‐Runspace 

Pretty cool stuff. It's worth playing with this now, so that you can offer
feedback.

Fun with RegEx's



I've had some fun working with regular expressions in PowerShell recently.
Take this beauty:

^\\\\\w+(\\\w+)+ 

It's a UNC path. The ^ anchors the start of the string, making strings like
"this\\Server\path" illegal; the double backslash ("\\\\" in the regex,
because backslash is a special character and has to be escaped with a
second backslash) must start the string. From there, you can have one or
more alphanumeric characters (the "\w+" part). After that, you can have
repeating sets of single backslashes and alphanumeric characters (the
"(\\\w+)+" bit). It's actually not a perfect regex for a UNC because UNCs
can technically contain spaces, which \w doesn't permit, but I hate spaces
in UNC paths so in my world this enforces what I want.

Here's a fun one for an IP address:


^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$ 

One to three digits, a period, and then repeat that pattern to get all four
octets of the IP address in place. Why are these useful? Well, data
validation is an obvious one. If you ask someone to type an IP address as
input to a script, you can validate at least the format of it:

$input ‐match "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$" 

If you have a specific range of IP addresses - maybe the first two octets are
standardized in your environment - then you can hardcode those:

$input ‐match "^192\.168\.\d{1,3}\.\d{1,3}$" 

You can also have great fun with the Select-String cmdlet, or even the
Switch construct.

Select-String and Regular Expressions

So here's where that last tip on regular expressions came into a real-world
scenario for me. I needed to comb through an IIS log file and find every IP
address that was hitting a particular Web page on a Web site. Actually, I
needed to comb through a bunch of log files, not just one. Fortunately,
Select-String can wade through an entire folder of files - just give it a path,
and a string to look for.

Select‐String ‐path c:\logs\*.log ‐pattern "/download.aspx" 
‐simplematch ‐showall 

Because my "pattern" is a literal string, and not a regex, I used the -simpleMatch switch. The result of this is a set of match objects, which have
a Line property. So I could just pull that property:

Select‐String ‐path c:\logs\*.log ‐pattern "/download.aspx" 
‐simplematch ‐showall | Select Line 

And very easily pull out the information I want. I could go a bit further to
narrow down the output - and I will, in my August 2008 TechNet Magazine
column [grin]!

RegEx and Switch


I recently needed to write a script that accepted domain credentials. The
problem is that there are two ways of providing these in Windows: The old
DOMAIN\USER way from NT, and the newer user@domain UPN
format that Active Directory introduced. The code I was writing could use
either, but needed to do something a bit differently depending on which
was provided. The easy way to tell the difference? A regex and the switch
construct:

$cred = Read-Host "Enter domain credential"
switch -regex ($cred) {
  "^\w+\\\w+$" {
    # Old-style credential
  }
  "^\w+@\w+$" {
    # New-style credential
  }
}

The switch construct attempts to match $cred against the two regular
expressions I provided, and executes a different code block depending on
which one matches. Because it's impossible for a single string to match
both expressions, only one of these two will execute - although switch will
still check both of them (normally, it'll execute each matching code block).

Okay, I've got a shell. What do I do now?
One of the neat things about a GUI is that right from the start, you can start
figuring stuff out. Take the new Office 2007 applications: The Ribbon is
designed to put major functionality right in your face, so you can see what's
available. The "jewel" icon pulses, encouraging you to click it (supposedly)
and see what functionality is hidden within. We've all learned to right-click
stuff and see what comes up on the context menu.
A downside of a CLI like Windows PowerShell is that you don't get that
discoverability built in. You open the shell and... it's blue. Yay. Now what?

The PowerShell team did build in several things designed to help with
discoverability, although they themselves aren't always that discoverable.
Here are the major ones:

 Help. Ask for help on anything. Use wildcards. Run Help alone to
see all the things it knows how to help you with. This is a great way
of revealing functionality and learning resources. Get SAPIEN's
Graphical PowerShell Help tool and you can see all the help topics in
a tree view, and then view any one of them with a quick click.
 Get-Command, which has an alias (or nickname) Gcm. Run Gcm
* to see all the commands available, or Gcm *service* to see
everything having to do with a service. If you've got an idea of what
you want to do, Gcm can help you find the command that does it.
 Get-Member, which has an alias Gm. PowerShell is object-
oriented, and in order to effectively utilize objects you need to know
what properties (or attributes) they have, and what actions (or
methods) they are capable of taking. Pipe an object to Gm to find
out: Get-Process | Gm will show you what you can do with a
Process object.
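For example, here's each of those three discovery tools in action (the wildcard patterns are just illustrations - substitute whatever you're hunting for):

Help *wmi*           # list help topics that mention WMI
Gcm *service*        # find commands that deal with services
Get-Process | Gm     # see what a Process object can do
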
There's no question that using a CLI takes more education than learning to
use a GUI. The payback for that learning is greater efficiency through
automation, and in many cases greater efficiency by not being restricted to
the tasks and workflows that the GUI's author envisioned - you can do
anything, in any order, from the CLI. Knowing some of the discoverability
features in PowerShell - and using them - is a key to becoming more
effective and learning to use the shell better.

PowerShell Misconceptions
SAPIEN's Jeff Hicks blogs about what he feels are the top 5 misconceptions
related to Windows PowerShell - things admins don't know or just plain get
wrong when they get started. Worth a read: These are important. If you
find yourself doing one or more of them, get it fixed.
Aliasing Properties
Last week I looked at a way to get raw strings into an object by creating a
custom property. It's a great way to help bind those strings to a particular
cmdlet parameter via pipeline input. But what if your problem is that your
input objects have the key data in a property that doesn't match the
property name a cmdlet is looking for? Here's an example:

Say I have a cmdlet, Our-Cmdlet, which has a -computerName parameter that accepts pipeline input ByPropertyName. In other words, to pipe in a
computer name, I need to pipe in an object that has a computerName
property containing the computer name text. Let's say that I have an input
object, but that the computer's name is in a property called CN, instead of
computerName - a common situation if you've retrieved the computers
from Active Directory.

The trick would be to create AliasProperty members. So, if my original object has a CN property, I want to create an AliasProperty that is named
computerName, and which "points to" the original CN property. I've
written a function that does just that.

function Alias-Property {
  param([hashtable]$properties)
  PROCESS {
    $obj = $_
    foreach($key in $properties.keys) {
      $value = $properties.$key
      $obj | Add-Member AliasProperty $key $value
    }
    write $obj
  }
}

The advantage of this technique is that multiple properties can be aliased.


For example, suppose I want to alias the CSName property so that a
ComputerName property is also available. Easy:

gwmi win32_operatingsystem | Alias‐Property( 
@{"computerName"="CSName"} ) | fl * 

But I could also have the BuildNumber property made into a BuildNo
property at the same time:

gwmi win32_operatingsystem | Alias‐Property( 
@{"computerName"="CSName";"BuildNo"="BuildNumber"} ) | fl * 

And so forth. So this technique would allow any number of properties to be 'renamed,' so that they could then be bound to specific parameters on the
next cmdlet.

Property Tricks and Parameter Binding
One neat thing coming in PowerShell v2 is a lot more cmdlets with -
computerName parameters, which in many cases accept pipeline input. The
trick is in getting the pipeline input to those cmdlets in a way that
accomplishes what you want - and it turns out the solution is a handy trick
in PowerShell v1, as well.

This is a trick that comes up with any cmdlet that accepts pipeline input
"ByPropertyName." To see what I'm talking about here, look at the help for
Get-Service - be sure to specify -full:

Help Gsv ‐full 
Notice that the -Name parameter accepts a String, and will accept pipeline
input ByValue or ByPropertyName. That means any String objects which
are piped in will be bound to the -Name parameter; also, if you pipe in
other objects which have a Name property, that property will be bound to
the -Name parameter.

The -inputObject parameter also accepts pipeline input ByValue, and it accepts a ServiceController object. So if you pipe in a ServiceController
object, it'll be bound to the -inputObject parameter.

Any given data type can bind to only one parameter ByValue. That is, the
String type can only be bound to one parameter. If you had two parameters
which accepted a string ByValue, PowerShell wouldn't know which of the
two parameters to actually bind the string to. So, if you have multiple
parameters that can accept a string, all but one of them have to do so
ByPropertyName.
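
You can see this with Get-Service itself. A rough illustration (the service names are just examples, and -WhatIf keeps anything from actually stopping):

# String objects bind ByValue to Get-Service's -Name parameter
"wuauserv","bits" | Get-Service

# A ServiceController object binds ByValue to Stop-Service's -inputObject parameter
Get-Service wuauserv | Stop-Service -WhatIf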

So let's imagine a cmdlet which has a -computerName parameter, which accepts pipeline input only ByPropertyName. We can't just do this:

GC Names.txt | Our‐Cmdlet 

Why not? Because Our-Cmdlet's -computerName parameter doesn't accept a string ByValue - you can't just send it raw strings. Instead, it needs an
object which has a ComputerName property. Unfortunately, the Get-
Content cmdlet (GC) doesn't do that - it only produces raw string objects.
One "fix" is to add the word "ComputerName" to the first line of our
names.txt file, and then import it as a CSV file:

Import‐CSV Names.txt | Our‐Cmdlet 

Because we added that first line, Import-CSV interprets it as a column header, producing objects which have a ComputerName property. Neat
trick. Here's another, in the form of a function I wrote:

function Add-Property {
  param([string]$property = "data")
  PROCESS {
    $obj = New-Object PSObject
    $obj | Add-Member NoteProperty ($property) ($_)
    write $obj
  }
}

Now I can do this:

gc names.txt | Add‐Property("computerName") | Our‐Cmdlet 

My function is taking the raw strings and creating new, blank objects. Each
new object gets a property added, using the property name I specified:
computerName. The value of the piped-in string is added to that new
property, and the new object is output to the pipeline. This is useful when
you've got names, for example, in a file that you can't easily modify for
some reason.

And there's yet a better, more flexible way to do this... coming up next.

Emitting Objects in PowerShell (aka, the Evolution of Functions)

Probably one of the most important things you can learn to do in Windows
PowerShell is write scripts and functions that emit not text, but rather emit
custom objects. For example, let's take a very simple script that uses WMI
to ping a remote computer:

$computer = 'server2'
$results = Get-WmiObject -query "SELECT * FROM Win32_PingStatus WHERE Address = '$computer'"

if ($results.StatusCode -eq 0) {
  Write-Host "$computer is Pingable"
} else {
  Write-Host "$computer is not Pingable"
}

This simply writes a text message directly to the console window. Now,
there are a couple of problems with this approach. One is that the output
isn't terribly re-usable. For example, what if you wanted to use this bit of
code to test a computer's connectivity before attempting to connect to it for
some other purpose, such as remote management? In that case, you might
prefer a True or False output, since that could be used to let a larger script
determine whether or not to try connecting. Another problem with this is
that it doesn't accept pipeline input - the computer name you're pinging is
in a variable, meaning it's hardcoded - it'd be nice to not only have this
parameterized, but also designed to accept many computer names from the
pipeline. So let's make a few revisions.

function Ping-Host {
  param([string]$computer = 'localhost')
  $results = Get-WmiObject -query "SELECT * FROM Win32_PingStatus WHERE Address = '$computer'"
  if ($results.StatusCode -eq 0) {
    Write-Output $True
  } else {
    Write-Output $False
  }
}

This parameterized function can be used as follows:

if (Ping-Host "server2") {
  # do something - it's pingable
}

Or, if we wanted to read a bunch of computer names from a file (which includes one computer name per line), we could do this:

$names = Get-Content c:\computers.txt

foreach ($name in $names) {
  if (Ping-Host $name) {
    # do something - it's pingable
  }
}

In fact the above is a very VBScript-style approach to the problem: Get a bunch of items (computer names) and then enumerate through them one
at a time. But PowerShell is designed to circumvent a lot of that complexity
by offering a pipeline that will work with batches of objects, rather than
forcing you to manually enumerate through them like this. Redoing this
function to support pipeline input isn't difficult:

function Ping-Host {
  PROCESS {
    $results = Get-WmiObject -query "SELECT * FROM Win32_PingStatus WHERE Address = '$_'"
    if ($results.StatusCode -eq 0) {
      Write-Output $_
    }
  }
}

Really, there are only four changes here:

 We've wrapped the function's code in a scriptblock named PROCESS


 We removed the PARAM declaration
 Instead of using the $computer parameter to carry the computer
name, we're using $_ - this variable will be automatically populated
by Windows PowerShell with whatever comes in from the pipeline.
 Rather than writing out True and False values, we're just writing out
the computer names of the computers which are pingable.
Computers which aren't pingable disappear - making our Ping-Host
function act as a "filtering function," filtering out non-reachable
computers.
Now we can use this in a pipeline that accepts a bunch of computer names,
filters out those which aren't reachable, and leaves us with the ones that
are:
Get‐Content c:\computers.txt | Ping‐Host 
We could then pipe those reachable computer names on to some other
cmdlet to actually do something with them. But wait... there are still some
problems, here. For one, we're losing information: If a computer isn't
reachable, the function just drops it like a bad habit. What if we wanted to
do something with only the non-reachable computers, like send a Wake-
On-LAN packet? We'd have to change the way our function works,
meaning the function isn't as reusable as it could be. Let's make a revision.
function Ping-Host {
  BEGIN {
    Write-Output "Computer`tResponseTime`tReachable`tIPAddress"
  }
  PROCESS {
    $results = Get-WmiObject -query "SELECT * FROM Win32_PingStatus WHERE Address = '$_'"
    $template = "{0}`t{1}`t{2}`t{3}"
    if ($results.StatusCode -eq 0) {
      Write-Output ($template -f $_,($results.ResponseTime),$True,($results.ProtocolAddress))
    } else {
      Write-Output ($template -f $_,($results.ResponseTime),$False,($results.ProtocolAddress))
    }
  }
}

This uses some pretty powerful juju, so let's analyze it:

 The BEGIN scriptblock runs first, and emits a header. The `t bits
insert tabs to create columns.
 $template is just a template output string, with {tokens} separated
by tabs
 the -f operator is used to insert data into the {tokens}. We're
inserting the original computer name ($_), the response time from
the ping, either $True or $False if it's reachable or not, and finally
the ProtocolAddress property from the ping - that's the IP address
which responded to the ping.
So in the end we wind up with a formatted table of output. Well, sort of -
using tabs to format output doesn't usually work out great. We'll spend a
lot of time fussing with it, but the fact is we've done a bad thing. We've
got PowerShell outputting text, and we're doing all of the formatting work
ourselves. Why would we do that when PowerShell has a much better
formatting subsystem than we could ever write? Plus, how would we re-use
this output? If we run:
Get‐Content c:\computers.txt | Ping‐Host 
What we get is a bunch of text - in order to filter out the computers which
were (or were not) reachable, we'll have to parse that text, which is a major
pain. We'd have to parse it again to extract the computer names from that
output, in order to use those computer names for some subsequent
process. Outputting text in Windows PowerShell leads to the Dark
Side: More work for you. Who needs more work?
The solution is to have our function output objects, not text.
function Ping-Host {
  PROCESS {
    $results = Get-WmiObject -query "SELECT * FROM Win32_PingStatus WHERE Address = '$_'"
    $obj = New-Object PSObject
    $obj | Add-Member NoteProperty Name $_
    $obj | Add-Member NoteProperty ResponseTime ($results.ResponseTime)
    $obj | Add-Member NoteProperty Address ($results.ProtocolAddress)
    if ($results.StatusCode -eq 0) {
      $obj | Add-Member NoteProperty Responding $True
    } else {
      $obj | Add-Member NoteProperty Responding $False
    }
    Write-Output $obj
  }
}

Welcome to the Light Side.

 We start by creating a new, blank object of the PSObject type. That's basically a blank canvas.
 We then attach four properties to the object: the computer name, the response time, the IP address, and a "Responding" property with either True or False as its value.
 We output the object to the pipeline by using Write-Output.
Now we can get a nicely-formatted table:
Get‐Content c:\computers.txt | Ping‐Host | Format‐Table 
If we just want the computers which aren't reachable:
Get‐Content c:\computers.txt | Ping‐Host | Where { 
$_.Responding ‐eq $False } 
If we want only reachable computers, sorted by response time:
Get‐Content c:\computers.txt | Ping‐Host | Where { 
$_.Responding ‐eq $True } | Sort ResponseTime ‐descending 
If we want to output just the computer names and response times to a CSV
file:
Get‐Content c:\computers.txt | Ping‐Host | Select 
Name,ResponseTime | Export‐CSV Pings.csv 
The point is that by outputting objects which contain all the information
we might ever need, we can reuse this one function in a variety of
situations without ever having to make changes to it. We can utilize
PowerShell's rich functionality to filter and manipulate our output, convert
and export it to other formats, and so forth. So your goal should be to
produce rich objects whenever possible, so that you'll get the biggest long-
term return on your scripting investment.

Faster PowerShell Startup


Windows PowerShell has a minor bug that prevents it from correctly pre-
compiling and caching its .NET Framework assemblies, meaning you take
a performance hit each time you open the console window. The way to fix
this is to simply open the shell (as an Admin - make sure you right-click
and select "Run as Administrator" in Vista or 2008) and run this:

Set-Alias ngen @(dir (join-path ${env:\windir} "Microsoft.NET\Framework") ngen.exe -recurse | sort -descending lastwritetime)[0].fullName
[appdomain]::currentdomain.getassemblies() | %{ngen $_.location}

You actually have to do this per-process, meaning when you do it in the console, you're only speeding things up for the console - this won't affect
any other apps which host the shell, such as PowerGUI or PrimalScript.
But you can paste this code into a script within those tools and run the
script within the tool (so that the tool is capturing the output into an
output pane or something) in order to effect the speed-up for them.

It doesn't hurt to do this multiple times, but it's slow. If you want to test
and see if this has been done, run this:

test-path "$env:windir\assembly\native*\system.management.a#\*\*.ni.dll"
If the result is True, then the NGEN process has been done; otherwise, it
needs to be done. Thanks to Jeffrey Snover and Oisin Grehan for these tips.

Do I need .NET, WMI, COM, and all that to use PowerShell?
PowerShell seems, to many administrators, to offer a huge learning curve.
Browse the various PowerShell related forums or the PowerShell portion of
the blogosphere and you'll see numerous posts that rely on the .NET
Framework being used from within PowerShell, posts that rely heavily on
byzantine WMI classes, and so forth. It's enough to make you turn in your
mouse and go home. Do you really need to learn all that stuff?

Let's be clear that PowerShell is still early in its life. Not every Microsoft or
third-party product has been written in a PowerShell-friendly way,
although they will be (the MS stuff, at least) over the next few years. In the
meantime, you've got early-adopter enthusiasm, and honestly a lot of those
enthusiasts are more developer than admin - they're not interested in
managing systems on a day-to-day basis, they're interested in playing with
the shell and seeing what it can do. That's good, because they push the
envelope; it's bad because it can present a twisted view of what the shell is
all about.

I think it's great that you can access the .NET Framework from within
PowerShell. In theory. However, anytime you have to do so in order to
accomplish an administrative task, then you have found a failing
within the shell. Administrative tasks should be accomplished by running
commands ("cmdlets" in shell-speak), and those commands should align
with tasks like starting services or reading event logs. Whatever .NET
Framework craziness goes on inside the cmdlet is fine; admins should only
have to deal with cmdlets.

Today, of course, there aren't cmdlets for everything. So today, you have a
choice: Use PowerShell for whatever it can do (Exchange, SCVMM, SCOM,
SCDPM, VMWare, and more), and don't use it for other things. Or, learn
enough of the .NET Framework to get the shell to go beyond where it really
is, from an administrative standpoint. Essentially, learn enough .NET
Framework that you don't need a cmdlet, because you've learned to do
what the cmdlet would do. I do not think most admins will want to do this,
have the time to do this, or be remotely interested in doing this; I think
PowerShell won't ultimately rule the world until there are cmdlets for
everything an admin needs. And I think that time will ultimately
come, and it won't be a decade away - it'll be a few years, and we'll get
more cmdlet-based functionality all the time during that few years.

So if you aren't excited about learning the .NET Framework, fine. Using it
from within PowerShell was always (as far as the product team was
concerned) a bonus, and not a primary, intended path for administrators.
PowerShell can't administer everything today using cmdlets - the
difference between PowerShell and things (like VBScript) which have come
before is that someday PowerShell will be able to administer
everything. So there's a bright future.

Write-Host vs. Write-Output


There's a LOT of confusion out there between the Write-Host and Write-
Output cmdlets and how they work - not too surprising, since the
differences aren't really spelled out in PowerShell's help files.

Basically, the difference is this:

 Write-Host writes output directly to the console window, which is considered the "host" when you're running PowerShell.exe.
 Write-Output puts objects (including String objects, if you're just
writing text) into the pipeline.
This is a subtle yet important difference. Consider this:

Write‐Host "Something" > file.txt 

Not only does "Something" not wind up in the file, but File.txt isn't even
created. "Something" displays in the console window, though. Why?
This might be clearer if I stopped using the backward-compatible > thingy
and instead wrote out what PowerShell is actually doing:

Write‐Host "Something" | Out‐File file.txt 

Now, if you consider that Write-Host doesn't write to the pipeline (review
two bullet points above), you might realize that nothing is being piped to
Out-File, and so nothing gets "out-ed" (written) to a file.

Write-Output, on the other hand, does write to the pipeline:

Write‐Output "Something" | Out‐File file.txt 

Puts "Something" into the pipeline, pipes it to Out-File, and gets written
into File.txt as expected. So it makes sense that the legacy syntax also
works:

Write‐Output "Something" > file.txt 

Since that syntax is really just using Out-File behind the scenes. In short,
Write-Host is only useful for displaying output in the console window
(which is why it has options for specifying the -foreground and -
background colors of that output). If you want to do any kind of
redirection, you need to get your text (or whatever) into the pipeline, so
that it can be redirected to a file, printer, or wherever else you need it. It's
also helpful to start migrating away from PowerShell's legacy syntax
support (like > and >>) and to start using cmdlets and pipe characters -
that sometimes makes it clearer what's happening in the shell.
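
As an aside, those color options look like this - purely cosmetic, and only meaningful when there's a console host to paint on (the message is just an example):

Write-Host "Low disk space on C:" -foregroundcolor Yellow -backgroundcolor Black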

Pipeline Binding
I've always said that PowerShell's help files are awesome... but that's only
true if you know what the heck they're talking about. Let's take this as an
example (run it yourself and follow along; I don't want to copy and paste
the whole thing here - it's huge):

Help Stop‐Process ‐full 
Now scroll down to the specific help for the -id parameter. It tells you a lot
about the parameter:

 It is a required parameter
 It occupies position 1 (meaning that you can just run Stop-Process
1 to stop process ID #1 - you don't have to specify the entire
parameter name).
 It does not have a default value
 It accepts pipeline input by property name (more on this in a bit)
 It does not accept wildcard characters
That pipeline input "ByPropertyName" is what I want to focus on. Let's
first contrast it with the -inputObject parameter, which accepts Process
objects and accepts pipeline input "ByValue." That means if you pipe in a
bunch of Process objects, PowerShell knows to "attach" or bind those
objects to the -inputObject parameter. That's why this works:

Get‐Process | Stop‐Process 

Get-Process produces a mess of Process objects, and puts them in the pipeline. Stop-Process receives these objects in its -inputObject parameter,
and it knows to try and stop each object which is passed to -inputObject.
"ByValue" means that the -inputObject parameter will be bound to any
incoming objects of the correct type - that is, of the Process type. A general
rule to remember: Only one parameter of any cmdlet can accept pipeline
input ByValue.

Okay, so what about "ByPropertyName?" Well, if Stop-Process were to receive an object other than a Process object, what does it do with it? If the
incoming pipeline object has a property named "ID," then the value of the
ID property will be bound to the -id parameter. The -name parameter of
Stop-Process has the same capability. For example, let's create a custom
object that has an ID property:

$obj = New-Object PSObject
$obj | Add-Member NoteProperty ID 52
Our object, contained in $obj, has a NoteProperty (which is a kind of
property) named ID, which contains the value 52. We could then do this:

$obj | Stop‐Process 

Although $obj isn't a Process object, this would still result in the process
with the ID #52 being stopped. The ID property of $obj is bound to the -id
parameter of Stop-Process, simply because the property name matches the
parameter name, and because the parameter accepts pipeline input
"ByPropertyName." It's functionally the same as running:

Stop‐Process ‐id 52 

Or:

Stop‐Process 52 

So when is this useful? When you need to pass a lot of objects to a cmdlet
but don't have the entire object it might want. For example, when I don't
have a Process object, but I do have a bunch of process ID numbers or
names. Many cmdlets have multiple parameters capable of accepting
pipeline input ByPropertyName, so it's a useful trick to have up your
sleeve.

Here-Strings
Sometimes you need to work with really long string data - and it can seem
like a pain in the neck. For example, suppose you want to construct an
HTML table, and you want to use a template:

$template = "<tr>`n"
$template += " <td>{0}</td>`n"
$template += " <td>{1}</td>`n"
$template += "</tr>`n"

Yuck to look at. Those `n are needed to add carriage returns (to make the
output more readable), and concatenating all those strings makes the code
hard to follow. Instead, you could use a "here-string:"

$template = @"
<tr>
 <td>{0}</td>
 <td>{1}</td>
</tr>
"@

The here-string delimiter, @" and "@, tells PowerShell that everything in
between is a string. The shell will preserve carriage returns and everything.
That means you can use your new template more easily, and have more
legible code in your script:

$data1 = "Test"
$data2 = "Test2"
$template -f $data1,$data2

So the here-string makes it easier to put longer, more formatted strings into a variable.

The Formatting Rules


Why does PowerShell sometimes display data as a list, and sometimes as a
table? Try running these three commands:

Get‐WmiObject Win32_OperatingSystem 

 Get‐Service  

Get‐Process 

One is a list; the others are tables. Why? And why were those particular
properties displayed?

When PowerShell needs to display objects, it follows a precise, fixed set of rules.

RULE #1: Is there a predefined view? Go into PowerShell's install folder (%systemroot%\System32\WindowsPowerShell\v1.0) and take a look at
DotNetTypes.format.ps1xml (be VERY careful not to modify this file in any
way). If you search for "System.Diagnostics.Process" (the type of object
output by Get-Process), you'll see that a predefined view does, in fact, exist.
It's a table-style view, and it specifies particular columns - which is why
you see the output you do when you run the command.

RULE #2: Is there a default display property set? If there's no predefined view, PowerShell looks at its type information - types.ps1xml, by default.
Search for "Win32_OperatingSystem" and notice that the object type does
have a "DefaultDisplayPropertySet." When one of these exists, PowerShell
uses the defined properties for rule #3. If a set doesn't exist, all of the
object's properties are considered in rule #3.

RULE #3: How many properties are there? If this rule is dealing with
fewer than 5 properties, it will display them in a table. Otherwise, it
displays them in a list.

It's really that simple. Of course, you can override these decisions by piping
to one of the Format cmdlets: Format-Table (ft), Format-List (fl), or
Format-Wide (fw):

Get‐WmiObject Win32_OperatingSystem | Ft 

The Format cmdlets also let you specify which properties to display. Those
cmdlets need to be pretty much the last thing on the pipeline; the only
cmdlets that can deal with the output of a Format cmdlet are the Out
cmdlets - Out-Printer, Out-File, Out-Host, and so forth.
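
A couple of quick examples of overriding the rules and picking your own properties (the property lists here are just illustrations):

Get-Process | Format-Table Name,Id,WS -auto
Get-Service | Format-List Name,Status,DisplayName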


Error Handling in PowerShell


Sometimes, errors are inevitable in a PowerShell script. Here's how to
anticipate them, and handle them gracefully.

To begin with, you need to know that PowerShell has a sort of global error-
handling mode, which, by default, is set to "Continue," which basically
means, "display an error message and keep going." An "error message"
isn't the same thing as an "exception;" error messages show up on the
screen but you can't do anything about them. Exceptions, on the other
hand, are yours to command. So to start with, you need to get your
commands to throw an exception when something goes wrong, and you do
that by adding the -ErrorAction (or -EA) parameter to the command,
giving it the value "Stop."

Get‐WmiObject Win32_Service ‐computerName Server2 ‐EA Stop 

When an exception occurs, PowerShell looks to see if your script has already defined a trap handler:

trap {
  # deal with the error here
}

Within the trap handler, you have the special $error object, and $error[0]
will contain the error (exception) that resulted in the trap being executed.
The special $_ variable will also contain the error that you trapped, which
is convenient. Remember: Your trap needs to appear in your script
BEFORE the command which throws the exception!

Exceptions still display error messages, which you might want to shut off.
To do so within a script, and without affecting the rest of the shell, simply
add this to the top:

$ErrorActionPreference = "SilentlyContinue" 

That actually sets the default -ErrorAction for all cmdlets, so you won't see
ANY error messages. A cmdlet which overrides that with a local -
ErrorAction of "Stop" will produce an error message and throw a trappable
exception; the script-level $ErrorActionPreference will act to suppress the
error message without preventing your trap from working.

Within your trap, do whatever you want. At the end, specify one of two
keywords:

 Break will exit the current scope and pass the original exception up
to the calling scope, which must either deal with it in its own trap or
go ahead and display it.
 Continue will skip over the line of code which threw the exception
and continue execution on the next line - but will stay within the
same scope as the trap.
Understanding these two puppies can be a little complex. Let's set up a
scenario:

 You're in the shell. You haven't defined any trap handlers in the
shell.
 You run a script named ScriptA.ps1. It contains a trap handler, which
ends with the "Continue" keyword. On line 5 of the script, you
execute a second script, ScriptB.ps1.
 ScriptB.ps1 also defines a trap handler which ends with the Break
keyword. On line 10 of ScriptB.ps1, a cmdlet runs with the -EA Stop
parameter.
 Line 10 of ScriptB.ps1 has a problem, and so the cmdlet throws an
exception.
 The trap handler in ScriptB.ps1 executes, does its thing, and ends
with the Break keyword. This exits ScriptB.ps1, passing the original
exception to ScriptA.ps1.
 ScriptA.ps1 sees an exception on line 5, because that's where it
executed ScriptB.ps1, and ScriptB.ps1 passed an exception. So the
trap handler in ScriptA.ps1 executes.
 That trap handler ends in the keyword Continue - so ScriptA.ps1
resumes executing on line 6. It doesn't re-enter ScriptB.ps1 and
resume execution on line 11 (the line after the one where the original
exception happened). We've exited ScriptB.ps1 at this point, and
there's no going back.
Read that through a few times if necessary to really "get it." And then try
writing a trap handler of your own for a simple script!
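
Here's a minimal sketch to get you started. It assumes Server2 is a computer you can't actually reach, so the WMI query fails and the trap fires:

# Suppress the on-screen error message for this script only
$ErrorActionPreference = "SilentlyContinue"

# The trap must come before the command that might fail
trap {
  Write-Output "Could not query Server2: $($_.Exception.Message)"
  continue
}

Get-WmiObject Win32_Service -computerName Server2 -EA Stop
Write-Output "Still running - the trap ended with Continue."
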
WMI System Properties
Try this in PowerShell: Get-WmiObject Win32_Service | Format-
List * - see all those funny-looking properties that start with a double
underscore? What are those? What good are they?

They're called WMI system properties, and you'll typically see them
anytime PowerShell's built-in formatting system doesn't have some kind of
default set of properties to show you. For the most part, you can safely
ignore them. You can even hide them by specifying a list of properties that
you want to see, instead of "*," with the Format-xxxx cmdlets. But there is
one useful property in that list that you'll want to know about - I'll save that
for the next tip and show you something really cool.
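
For example, simply naming the properties you want hides all of the double-underscore noise (these three are just a sample):

Get-WmiObject Win32_Service | Format-List Name,State,StartMode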

Knowing the WMI Server Name


How can WMI system properties be useful? Here's the scenario: You want
to query disk inventory information from multiple computers, and produce
some sort of simple, formatted report. You've got the computer names in a
text file, which lists one computer name per line. You might start with this
simple command:

Get‐WmiObject Win32_LogicalDisk ‐filter "DriveType=3" ‐
computerName (Get‐Content C:\Computers.txt) | Format‐Table 
DeviceID,Size,FreeSpace 

Believe it or not, your targeted computers can be running any version of Windows back to NT 4.0! The problem is that your output won't contain
the computer name - meaning you won't know which disk goes with which
computer. That's where WMI's system properties come into play: Just add
__SERVER to the list of properties (that's two underscores on the front,
there):

Get‐WmiObject Win32_LogicalDisk ‐filter "DriveType=3" ‐
computerName (Get‐Content C:\Computers.txt) | Format‐Table 
__SERVER,DeviceID,Size,FreeSpace 
Because each WMI object is tagged with the name of the computer it came
from, you can simply use that information as part of your output. Want the
table in a file?

Get‐WmiObject Win32_LogicalDisk ‐filter "DriveType=3" ‐
computerName (Get‐Content C:\Computers.txt) | Format‐Table 
DeviceID,Size,FreeSpace | Out‐File c:\DiskReport.txt 

Prefer a CSV file?

Get‐WmiObject Win32_LogicalDisk ‐filter "DriveType=3" ‐
computerName (Get‐Content C:\Computers.txt) | Select 
DeviceID,Size,FreeSpace | Export‐CSV c:\DiskSpace.csv 

Or an HTML Table?

Get‐WmiObject Win32_LogicalDisk ‐filter "DriveType=3" ‐
computerName (Get‐Content C:\Computers.txt) | Select 
DeviceID,Size,FreeSpace | ConvertTo‐HTML | Out‐File 
C:\DiskInventory.html 

PowerShell's flexibility lets you accomplish all this by just stringing together a few commands - no need to write a script at all!

Dates and Times


There are often times when you'll need to work with dates. Perhaps you're
archiving information and need to establish a cutoff date, or perhaps you're
examining dates from AD and need to calculate how many days it's been
since a certain date. PowerShell generally makes it pretty easy to do.

Need to get a date?

$date = Get‐Date 

That also gets the current time. Try:


$date | gm 

To see what all you can do with that date once you've got it. Need a short-
formatted date?

$date.toshortdatestring() 

Need to see what date it was a year ago?

$date.addyears(‐1) 

It's really easy to work with dates using their built-in methods!
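
And here's the "how many days since" calculation mentioned above - a sketch, with a made-up date standing in for whatever you pulled from AD:

$lastLogon = Get-Date "January 15, 2008"
((Get-Date) - $lastLogon).Days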

Oh, and see the TechNet Script Center for more details on date formatting,
if you're interested.


Target Computers from AD


Need to execute a command against every computer in a domain? Easy:
Here's how to do it using nothing more than what's built into PowerShell:

$strFilter = "computer"
$objDomain = New-Object System.DirectoryServices.DirectoryEntry
$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = $objDomain
$objSearcher.SearchScope = "Subtree"
$objSearcher.PageSize = 1000
$objSearcher.Filter = "(objectCategory=$strFilter)"
$colResults = $objSearcher.FindAll()

foreach ($i in $colResults) {
  $objComputer = $i.GetDirectoryEntry()
  # Your command here - use $objComputer.Name
}

You'll use $objComputer.Name where indicated to refer to each computer.


An easier way is to get the Quest AD Cmdlets and use the Get-
QADComputer cmdlet:

$computers = Get-QADComputer
foreach ($computer in $computers) {
  # use $computer.cn to get the computer name
}

With that technique it's also easier to specify a specific starting OU; just
add the appropriate parameter to Get-QADComputer.

