
PowerTip: View PowerShell Formatting Data


Summary: Learn how to view the formatting data for a Windows PowerShell type.

Hey, Scripting Guy! Question How can I use Windows PowerShell to view the formatting data for a particular type?

Hey, Scripting Guy! Answer Examine the output from Get-Member to find the type you are interested in, and then use the Get-FormatData cmdlet. Store the returned object in a variable and explore it the way you do any other Windows PowerShell object.

$f=Get-FormatData -TypeName ModuleInfoGrouping

$f.FormatViewDefinition.control

$f.FormatViewDefinition.Control.headers


Troubleshooting PowerShell Scheduled Jobs


Summary: Microsoft Scripting Guy, Ed Wilson, talks about troubleshooting Windows PowerShell scheduled jobs.

Microsoft Scripting Guy, Ed Wilson, is here. This has been an awesome week at TechEd 2014 in Houston. I can avoid the "Houston we have a problem" joke for the simple reason that the Scripting Wife and I are on our way home to Charlotte, N.C. We saw a tremendous number of Windows PowerShell fans this week, and I can tell you that being the Scripting Guy is a lot of fun when representing a technology as awesome as Windows PowerShell. I mean, dude (or dudette), Windows PowerShell rocks, and it helps solve some difficult problems.

There were three Windows PowerShell resources that I was excited to talk to people about when they came to the Scripting Guys booth at TechEd:

Check them out. All three of these resources can help make your life easier and better. Of course, there is the Hey, Scripting Guy! Blog—but hey, you are already here.

Today I want to talk a bit about troubleshooting scheduled jobs. Here are some things to help you when you have to troubleshoot.

Import the module  

To get the results of a Windows PowerShell scheduled job, you use the Get-Job cmdlet. But if you use that cmdlet and it shows no results from any scheduled jobs, don’t panic. It probably means that you have not imported the PSScheduledJob module.

Often I do not need to import it because it automatically loads when I use any cmdlet from the module, such as Get-ScheduledJob. But if I open a fresh Windows PowerShell console and immediately type Get-Job, I might not see the results I want. So remember, first import the module as shown here (ipmo is an alias for Import-Module):

ipmo PSScheduledJob

Look in the scheduled job Output directory  

If you are not finding the results you seek, it is time to look in the scheduled job Output directory. This is located at the following path:

$home\AppData\Local\Microsoft\Windows\PowerShell\ScheduledJobs

On my computer, this resolves to the following:

C:\Users\ed\AppData\Local\Microsoft\Windows\PowerShell\ScheduledJobs

Under ScheduledJobs, you will find a folder for the scheduled job and an Output folder as shown in the following image:

Image of menu

In the Output folder, there should be a results.xml file and a time-stamped folder for each execution of the scheduled job. If someone clears the job history, these folders are deleted. If there are no folders and XML files in the output directory, the job results cannot be displayed.
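
A quick way to check is to list the contents of the ScheduledJobs directory from Windows PowerShell. This is a minimal sketch that uses the path shown earlier:

Get-ChildItem "$home\AppData\Local\Microsoft\Windows\PowerShell\ScheduledJobs" -Recurse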

Use –Keep with Receive-Job  

To receive the results from a scheduled job (assuming you do not write the output to a file or some other disk storage mechanism), you use the Receive-Job cmdlet, just as you would for any other Windows PowerShell job.

But if you use Receive-Job and nothing comes back, it is possible that you have (or someone else has) already used Receive-Job to receive the output, and the –Keep parameter was not used. As a best practice, I always use –Keep with Receive-Job to keep myself from inadvertently deleting the job output.
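
For example, the following retrieves the output of a scheduled job named getservice (the name used later in this post) without deleting it:

ipmo PSScheduledJob

Get-Job -Name getservice | Receive-Job -Keep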

Check the ExecutionHistoryLength value

If job results are still not forthcoming for a job, it is possible that the number of job instances exceeds the ExecutionHistoryLength value for that particular job. The default value for a Windows PowerShell scheduled job is 32. To double-check this value, you can use the following command:

(Get-ScheduledJob getservice).ExecutionHistoryLength

     Note  Yesterday I talked about setting this value in Advanced PowerShell Scheduled Jobs.

Check if Start-Job was used

The Windows PowerShell scheduled job may have been executed (immediately) by using Start-Job. Start-Job runs a normal Windows PowerShell background job, and the execution history and results are not stored on disk.

Check if the job is disabled

If a Windows PowerShell scheduled job does not run, it might be disabled. By default, all newly created Windows PowerShell scheduled jobs are enabled. But it might have been created as disabled, or it might have been disabled at a later date.

Use the Get-ScheduledJob cmdlet to see if the scheduled job is enabled. The Disable-ScheduledJob cmdlet disables a Windows PowerShell scheduled job, and the Enable-ScheduledJob cmdlet enables a scheduled job. The following command illustrates enabling a scheduled job:

Get-ScheduledJob | Enable-ScheduledJob

The following image shows checking and enabling a scheduled job:

Image of command output

Check if the job failed

It is possible that the scheduled job failed. To see if it failed, use the Get-WinEvent cmdlet, for example:

Get-WinEvent -LogName Microsoft-Windows-TaskScheduler/Operational |

Where {$_.Message -like "*fail*"}

Check the permissions

Permissions are one common cause for scheduled jobs failing. Scheduled jobs run with the permissions of the user who created the job or the permissions of the user who is specified by the Credential parameter in the Register-ScheduledJob or Set-ScheduledJob command. If that user does not have permissions to run the commands, the job fails.
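
For example, here is a sketch of registering a scheduled job that runs under alternate credentials (the script block and trigger are illustrative):

Register-ScheduledJob -Name getservice -ScriptBlock { Get-Service } `
   -Trigger (New-JobTrigger -Daily -At 3am) -Credential (Get-Credential)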

That is all there is to troubleshooting Windows PowerShell scheduled jobs.  Join me tomorrow when I will have a guest post about Best Practices for enterprise Windows PowerShell scripting.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Use PowerShell to Find Status of Hyper-V


Summary: Learn to use Windows PowerShell to find the status of Hyper-V on your Windows 8.1 laptop.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find the status of Hyper-V on my laptop running Windows 8.1?

Hey, Scripting Guy! Answer Use the Get-WindowsOptionalFeature cmdlet and choose the –Online option:

Get-WindowsOptionalFeature -Online -FeatureName *hyper*

Weekend Scripter: Best Practices for PowerShell Scripting in Shared Environment


Summary: Microsoft PFE, Dan Sheehan, shares Windows PowerShell scripting best practices for a shared environment.

Microsoft Scripting Guy, Ed Wilson, is here. Today I would like to welcome a new guest blogger, Dan Sheehan.

Dan recently joined Microsoft as a senior Premier Field Engineer on the U.S. Public Sector team. Previously, he served as an Enterprise Messaging team lead, and he was an Exchange Server consultant for many years. Dan has been programming and scripting off and on for 20 years, and he has been working with Windows PowerShell since the release of Exchange Server 2007. Overall, Dan has over 15 years of experience working with Exchange Server in an enterprise environment, and he tries to keep his skillset sharp in the supporting areas of Exchange, such as Active Directory, Hyper-V, and all of the underlying Windows services.

Here's Dan…

I have been working and scripting (using various technologies) in enterprise environments where code is shared, updated, and copied by others for over 20 years. Even though I don’t consider myself a Windows PowerShell expert, I find myself assisting others with their Windows PowerShell scripts with best practices and speed improvement techniques, so I thought I would share them with the community as a whole.

This blog post is centered on the best practices I find myself sharing and championing the most in shared environments (I include all enterprise environments). In my next blog post, I will be discussing some Windows PowerShell script speed improvement techniques.

But before we try to speed up our script, it’s a good idea to review and implement coding best practices as a form of a code cleanup. Although some of these best practices can apply to any coding technology, they are all relevant to Windows PowerShell. For another good source of best practices for Windows PowerShell, see The Top Ten PowerShell Best Practices for IT Pros.

The primary benefit of these best practices is to make it easier for others who review your script to understand and follow it. This is especially important for the ongoing maintenance of production scripts as people change jobs, get sick, or get hit by a bus (hopefully never). They also become important when people post scripts in online repositories, such as the TechNet Gallery, to share with the community.

Some of these best practices may not provide a lot of value if the script is small or will only be used by one person. However, even in that scenario, it is a good idea to get into a habit of using best practices for consistency. You never know when you might revisit a script you wrote years ago, and these best practices can help you save time refamiliarizing yourself with it.

Ultimately, the goal of the best practices I discuss in this post is to help you take messy script that looks like this:

Image of script

…and turn it into a functionally identical, but much more readable, version like this:

Image of script

Note  I format my Windows PowerShell script for landscape-mode printing. It is my personal opinion that portrait-mode causes excessive line wraps in script, which makes the script harder to read. This is a personal preference, and I realize most people stick to keeping their script to within 85 characters on a line, which is perfectly fine if that works for them. Just be consistent about wherever you choose to wrap your script.

Keep it simple (or less is more)

The first best practice, which really applies to all coding, is to keep the script as simple and streamlined as possible. The first thing to remember is that most humans think in a very linear fashion, in this case from the top to the bottom of a script, so you want to keep your script as linear as possible. This means you should avoid making someone else jump around your script to try to follow the logical outcome.

Also during the course of learning how to do new and different things, IT workers have a tendency to make script more complex than it needs to be because that’s how some of us experiment with and learn new techniques. Even though learning how to do new and different things in Windows PowerShell scripting is important, learning exercises should be separate from production script that others will have to use and support.

I’m going to use Windows PowerShell functions as an example of a scenario where I see authors unnecessarily overcomplicating script. For example, if a small, simple block of code will accomplish what needs to occur, don’t go out of your way to turn that script into a function and move it somewhere else in the script where it is later called…just because you can. Unnecessarily breaking the linear flow of the script just to use a function makes it harder for someone else to review your script linearly.

I was discussing the use of functions with a coworker recently. He argued that modularizing his script into functions and then calling all the functions at the end of the script made the script progression easier for him to follow.

I see this type of modularization behavior from those who have been full-on programming (or taught by a programmer)—all the routines, voids, or whatever in the code are modularized. Although I appreciate that we all have different coding styles, and ultimately you need to write the script in the way that works best for you, the emphasis in this blog post is writing your script so others can read and follow it as easily as possible.

Although using a couple of single-purpose functions in a script may not initially seem to make it hard for you to follow the linear progression of the script, I have also seen script that calls functions inside of other functions, which compounds the issue. This nesting of functions makes it exceedingly difficult for someone else to follow the progression of events because they have to jump around the script (and script logic) quite a bit.

To be clear, I am not picking on all uses of functions because there is definitely a time and place for them in Windows PowerShell. A good justification for using a function in your script is when you can avoid listing the same block of code multiple times in your script and instead store that code in a single function that is called multiple times. In this case, reducing the amount of code people have to review will hopefully make it easier for them to understand.

For example, in the Mailbox Billing Report Generator script I wrote at a previous job, I used a function to generate Excel spreadsheets because I was going to be reusing that block of code in the script multiple times. It made more sense to have the code listed once and then called multiple times in the script. I also tried to locate the function close to the script where it was going to be called, so other people reviewing the script didn’t have to go far to find it.

Let's take the focus off of functions and back to Windows PowerShell scripting techniques in general…

Ultimately, when you are thinking about using a particular scripting technique, try to determine if it is really beneficial. A good way to do this is by asking yourself whether the technique adds value and functionality to the script, or whether it will unnecessarily confuse another person reading it. Remember that just because you can use a certain technique doesn’t mean you should.

Use consistent indentation

Along with keeping the script simple, it should be consistently organized and formatted, including indentation when new loop or conditional check constructs are used. Lack of indentation or, even worse, inconsistent indentation makes script much harder to read and follow. One of the worst examples I have seen was when someone pasted examples (including the example indentation levels) from multiple sources into their script, and the indentation seemed to be randomly chosen. I had a really hard time following that particular script.

The following example uses the Tab key to indent the script after each time a new If condition check construct is used. This is used to represent that the script following that condition check is executed only if the outcome of the condition check is met. The Else statement is returned to the same indentation level as the opening If condition check, because it represents closure of the original condition check outcome and the beginning of the alternate outcome (the condition check wasn’t met). Likewise, the final closing curly brace is returned to the same level of indentation as the opening condition check because the condition check is now completely finished.

Image of script

If you add another condition check inside of an existing condition check (referred to as “nesting”), then you should begin indenting the new condition check at the current indentation level to show it is nested inside a “parent” condition check. The previous example shows a second If condition check on line #6, which is nested inside a parent If condition check where everything is already indented one level. The nested If condition check then indents a second level on line #7 for its condition check outcome, but then it returns to the first indentation level when the condition check outcome is complete.
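
The script in the images is not reproduced here, but a minimal sketch of the indentation pattern just described (with a hypothetical $Mailbox variable and condition checks) looks like this:

If ($Mailbox.Database -eq "DB01")
{
    If ($Mailbox.UseDatabaseQuotaDefaults)
    {
        Write-Output "Default limits"
    }
    Else
    {
        Write-Output "Custom limits"
    }
}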

Indentation should be used any time you have open and close curly braces around a block of code, so the person reading your script knows that block of code is part of a construct. This applies to ForEach loops, Do…While condition check loops, or any block of code between the opening and closing curly braces of a construct.

The use of indentation isn’t limited to constructs; it can also be used to show that a line of script is a continuation of the line above it. For example, as a personal preference, whenever I use the backtick character ( ` ) to continue the same Windows PowerShell command on the next line in a script, I indent that next line so that as I am reviewing the script, I can easily tell that line is part of the command on the previous line.
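
For example (a sketch with a hypothetical Exchange query):

Get-Mailbox -ResultSize Unlimited `
    -Database "DB01"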

Note  Different Windows PowerShell text editors can record indentations differently, such as a Tab being recorded as a true Tab in one editor and multiple spaces in another editor. It’s a good idea to check your indentations if you switch editors and you aren’t sure they use the same formatting. Otherwise, viewing your script in other programs (such as cutting and pasting the script into Microsoft Word) can show your script with inconsistent indentations.

Use Break, Continue, and other looping controls

Normally, if I want to execute a large block of code only if certain conditions are met, I would create an If condition check in the script with the block of code indented (following the practices I discussed previously). If the condition wasn’t met, the script would jump to the end of the condition check where the indentation was returned back to the level of the original condition check.

Now imagine you have a script where you only want the bulk of the script to execute if certain condition checks are met. Further imagine you have multiple nested condition checks or loops inside of that main condition check. Although this may not seem like an issue because it works perfectly fine as a scripting method, nesting multiple condition checks and following proper indentation methods can cause many levels of indenting. This, in turn, causes the script to get a little cramped, depending on where you chose to line wrap.

I refer to excessive levels of nested indentation as “indent hell.” The script is so indented that the left half of the screen is wasted on white space and the real script is cramped on the right side of the screen. To avoid “indent hell,” I started looking for another method to control when I executed large blocks of code in a script without violating the indentation practice.

I came across the use of Break and Continue, and after conferring with a colleague infinitely more versed in Windows PowerShell than myself, I decided to switch to using these loop processing controls instead of making multiple gigantic nested condition checks.

In the following example, I have a condition check that is nested inside of a ForEach loop. If the first two condition checks aren’t met, the Windows PowerShell script executes the Continue loop processing control, which tells it to skip the rest of the ForEach loop.

Image of script
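
The script in the image is not shown here; the following sketch (with hypothetical mailbox properties) illustrates the pattern of using Continue to skip an iteration instead of nesting deeper:

ForEach ($Mailbox in $GatheredMailboxes)
{
    # Skip any mailbox that doesn't meet the conditions, rather than
    # wrapping the rest of the loop in another indentation level.
    If ($Mailbox.RecipientTypeDetails -ne "UserMailbox") { Continue }
    If ($null -eq $Mailbox.ProhibitSendQuota) { Continue }

    # The bulk of the loop continues here at the first indentation level.
    Write-Output $Mailbox.DisplayName
}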

Using these capabilities in your script isn’t ideal for every situation, but they can help reduce “indent hell” by helping to streamline and simplify some of your script.

For more information about these Windows PowerShell commands, see:

Use clear and intelligently named variables

Too often I come across scripts that use variable names such as $j. This name has nothing to do with what the variable is going to be used for, and it doesn’t help distinguish its purpose later in the script from another variable, such as $i.

You may know the purpose of $j and $i at the time you are writing the script, but don’t assume someone else will be able to pick up on their purposes when they are reviewing your script. Years from now, you may not remember the variables’ purposes either, and you will have to backtrack through your own script to reeducate yourself.

Ideally, variables should be clearly named for the data they represent. If the variable name contains multiple words, it’s a good idea to capitalize the first letter of each word so the name is easier to read because there are no spaces in a Windows PowerShell variable name. For example, the variable name of $GatheredMailboxes is easier to read quickly and understand than $gatheredmailboxes.

From what I have seen, providing longer and more intelligently named variables does not adversely affect Windows PowerShell performance or memory utilization. So arguments about saving memory space or improving speed should not impede the adoption of this practice.

In the following example, all mailbox objects gathered by a large Get-Mailbox query are stored in a variable named $GatheredMailboxes, which should remove any ambiguity as to what the variable has stored in it.

Image of script

Building on this example, if we wanted to process each individual mailbox in the $GatheredMailboxes variable in a ForEach loop, we could additionally use a clear purpose variable with the name of $Mailbox like this:

Image of script
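
A sketch that approximates both images (the Get-Mailbox query is illustrative):

$GatheredMailboxes = Get-Mailbox -ResultSize Unlimited

ForEach ($Mailbox in $GatheredMailboxes)
{
    Write-Output $Mailbox.DisplayName
}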

Using longer variable names may seem unnecessary to some people, but it will pay off for you and others working with your scripts in the long run.

Leverage comment-based Help

Sometimes known as the “header” in Windows PowerShell scripts, a block of text called comment-based Help allows you to provide important information to readers in a consistent format, and it integrates into the Help function in Windows PowerShell. Specifically, if the proper tags are populated with information, and a user runs Get-Help YourScriptName.ps1, that information will be returned to the user.

Although a header isn’t necessary for small scripts, it is a good idea to use one to track information in large scripts, for example, version history, changes, and requirements. The header can also provide meaningful information about the script’s parameters, as well as examples, so your script users don’t have to open and review the script to understand what the parameters are or how they should use them.
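
A minimal comment-based Help header might look like the following sketch (it is not the actual header from the Get-GroupMembership script mentioned below):

<#
.SYNOPSIS
    Returns the group membership for a specified user.

.DESCRIPTION
    Queries the directory and returns each group that the specified
    user is a member of.

.PARAMETER UserName
    The user whose group membership should be returned.

.EXAMPLE
    .\Get-GroupMembership.ps1 -UserName ed
#>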

For example, this is the Get-Help output from a Get-GroupMembership script I wrote:

Image of command output

If the –detailed or –full switches are used with the Get-Help cmdlet, even more information is returned.

For more information about standard header formatting, see WTFM: Writing the Fabulous Manual.

Place user-defined variables at top of script

Ideally, as the script is being written, but definitely before the script is “finished,” variables that are likely to be changed by a user in the future should be placed at the top of the script, directly under the comment-based Help. This makes it easier for anyone making changes to those script variables, because they don’t have to go hunting for them in your script. This should be obvious to everyone, but even I occasionally find myself forgetting to move a user-defined variable to the top of my script after I get it working.

For example, a user might want to change the date and time format of a report file, where that file should be stored, who an email is to be sent to, and the grouping of servers to be used in the script:

Image of script
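
The image is not reproduced here; a sketch of such a user-defined variable section (the names and values are illustrative) might look like this:

# User-defined variables: change these to suit your environment.
$ReportDateFormat = "yyyy-MM-dd_HH-mm"
$ReportFolder     = "C:\Reports"
$EmailRecipient   = "admin@contoso.com"
$ServerGroup      = "EXCH01","EXCH02"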

There are no concrete rules as to when you should place a variable at the top of a script or when you should leave it in the middle of the script. If you are unsure whether you should move the variable to the top, ask yourself if another person might want or need to change it in the future. When in doubt, and if moving the variable to the top of the script won’t break anything, it’s probably a good idea to move it.

Comment, comment, comment

Writing functional script is important because, otherwise, what is the point of the script, right? Writing script with consistent formatting and clearly labeled variables is also important; otherwise, your script will be much harder for someone else to read and understand. Likewise, adding detailed comments that explain what you are doing and why will further reduce confusion as other people (and your future self) try to figure out how, and sometimes more importantly why, specific script was used.

In the following detailed comment example, we are figuring out if a mailbox is using the default database mailbox size limits, and we are taking multiple actions if it is True. Otherwise we launch into an Else statement, which has different actions based on the value of the mailbox send limit.

Image of script
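
The commented script in the image is not reproduced here; the following sketch (hypothetical properties and array names, with the arrays assumed to be initialized earlier in the script) shows the commenting style being described:

# Check whether the mailbox uses the default database size limits.
#   If it does, it gets the base charge; otherwise, the charge is
#   based on the mailbox send limit.
If ($Mailbox.UseDatabaseQuotaDefaults)
{
    $MailboxesBothLimits += $Mailbox
}
ElseIf ($Mailbox.ProhibitSendQuota -le 5GB)
{
    $StandardUpgradeMailboxes += $Mailbox
}
Else
{
    $CustomLimitMailboxes += $Mailbox
}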

This level of detailed commenting of what you are doing and why can seem like overkill until you get into a habit of doing it. But it pays off in unexpected ways, such as not having to sit with a co-worker and explain your script step-by-step, or having to remember why a year ago you made an array called $MailboxesBothLimits. This is especially true if you are doing any complex work in the script that you have not done before, or you know others will have a hard time figuring it out.

I prefer to err on the side of caution, so I tend to over comment versus under comment in my script. When in doubt, I pretend I am going to publish the script in the TechNet Gallery (even if I know I won’t), and I use that as a gauge as to how much commenting to add. Most Windows PowerShell text editors will color code comments in a different color than the real script, so users who don’t care about the comments can skip them if they don’t need them.

When it comes to inline commenting, where comments are added at the end of a line of script, my advice is to strongly avoid the practice. When people skim script, they don’t always look to the end of a line to see if there is a comment. Also, if others start modifying your script, you could end up with old or invalid comments in places where you didn’t expect them, which could cause further confusion.

Note  There are different personal styles of Windows PowerShell commenting, from starting each line with # to using <# and #> to surround a block of comment text. One way is as good as another, and you should use a personal style that makes sense to you (be consistent about it). For example, in my scripts, the first line of a new block of commenting always gets a # followed by one space. Each additional line in the continued comment block gets a # followed by three spaces. You can see this demonstrated in the second and third lines of script in the previous example. I like using this method because it shows me when I have multiple separate comments next to each other in the script. The important point is that you are putting comments in your script.

Avoid unnecessary temporary data output and retrieval

Occasionally, I come across a script where the author is piping the results of one query to a file, such as a CSV file, and then later reading that file information back into the script as a part of another query. Although this certainly works as a method of temporarily storing and retrieving information, doing so takes the data out of the computer’s extremely fast memory (in nanoseconds) and slows down the process because Windows PowerShell incurs a file system write and read I/O action (in milliseconds).

The more efficient method is to temporarily store the information from the first query in memory, for example, inside an array of custom Windows PowerShell objects or a data table, where additional queries can be performed against the in-memory storage mechanism. This skips the 2x file system I/O penalty because the data never leaves the computer’s fast memory where it was going to end up eventually.
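
As a sketch of the contrast (the CSV detour on top is what to avoid; the path and queries are illustrative):

# Slower: round-trips the data through the file system.
Get-Mailbox -ResultSize Unlimited | Export-Csv C:\Temp\mb.csv
$GatheredMailboxes = Import-Csv C:\Temp\mb.csv

# Faster: keep the first query's results in memory and query them there.
$GatheredMailboxes = Get-Mailbox -ResultSize Unlimited
$LargeMailboxes = $GatheredMailboxes | Where-Object {$_.ProhibitSendQuota -ne "Unlimited"}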

This may seem like a speed best practice, but keeping data in memory if at all possible avoids unnecessary file system I/O headaches such as underperforming file systems and interference from file system antivirus scanners.

~Dan

Thank you, Dan, for a really helpful guest post. Join us tomorrow when Dan will continue his discussion.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use Comment-Based Help in Shared Scripts


Summary: Learn how to leverage Windows PowerShell comment-based Help.

Hey, Scripting Guy! Question How can I make my scripts easier to use when I share them with friends or colleagues?

Hey, Scripting Guy! Answer People do not need to open and analyze your script if you use the comment-based Help capabilities of Windows PowerShell. You can provide a description, explain the use of parameters, and provide a number of examples of script execution.

Use the Get-Help cmdlet to access comment-based Help, use the name of your script, and choose from a number of optional switches:

Get-Help .\YourScriptName.ps1 [-Detailed | -Full | -Examples | -Parameter <String>]

Weekend Scripter: PowerShell Speed Improvement Techniques


Summary: Microsoft PFE, Dan Sheehan, talks about speeding up Windows PowerShell scripts.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back Dan Sheehan, who debuted yesterday as a new guest blogger with his post Best Practices for PowerShell Scripting in Shared Environment.

Dan recently joined Microsoft as a senior Premier Field Engineer on the U.S. Public Sector team. Previously, he served as an Enterprise Messaging team lead, and he was an Exchange Server consultant for many years. Dan has been programming and scripting off and on for 20 years, and he has been working with Windows PowerShell since the release of Exchange Server 2007. Overall, Dan has over 15 years of experience working with Exchange Server in an enterprise environment, and he tries to keep his skillset sharp in the supporting areas of Exchange, such as Active Directory, Hyper-V, and all of the underlying Windows services.

Here's Dan…

In my last blog post, we established some best practices for organizing and streamlining our script so it is more usable by others and ourselves in the future. Now let me turn your attention to some speed improvement techniques I picked up at my last job before joining Microsoft.

Years ago, I wrote the 1.X version of the Generate-MailboxBillingReport, which generated monthly billing reports in Excel for the customers for which my team provided messaging services. Although this process worked well and did everything we needed it to, it was taking over 2 hours and 45 minutes to complete, as you can see here:

Image of report

When I first started writing the script, I focused on the quality and detail of the output versus the speed. But after some jovial harassment by my coworkers to improve the speed, I sat down to see what I could do. I focused on the optimization techniques covered in this post, and as a result of the script reconfiguration, I was able to come up with the Exchange Mailbox Billing Report Generator v2.X, which reduced the execution time:

Image of report

That’s right…I removed about two and a half hours of processing time from the script by implementing the speed improvement techniques I discuss in this post. I hope some of them also provide some benefit for you.

Get your script functional first

This may seem obvious, but before you try to optimize your script, you should first clean it up and make sure it’s fully functional. This is especially true if you have never used the optimization techniques discussed here, because you could end up making more work for yourself trying to implement something brand new at the same time you are trying to make your script functional.

Not to mention…you don’t want to waste time trying to optimize script that won’t work, because that is just an exercise in frustration. In other words, don’t try to bite off more than you can chew. These recommendations can be big bites, depending on your skill level.

Using my mailbox billing report generation script as an example, implementing the script optimization techniques discussed here would have been much harder and taken longer if I tried them in the beginning when I was also trying to get the script to produce the output we wanted.

Leverage Measure-Command to time sections of code

Learning how long certain parts of your script take to execute, such as a loop or even a part of a loop, will provide you valuable insight into where you need to focus your speed improvements. Windows PowerShell gives you a couple of ways to time how long a block of script takes to execute.

Until recently, I used the New-Object System.Diagnostics.Stopwatch command to track the amount of time taken in various parts of my scripts. Although I still prefer to use this method to track the overall execution time of the entire script because it doesn’t cause you to indent your script, it is awkward to use multiple times in the middle of a script.
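
A sketch of the Stopwatch approach for timing an entire script:

$ScriptTimer = New-Object System.Diagnostics.Stopwatch
$ScriptTimer.Start()

# ...the body of the script runs here...

$ScriptTimer.Stop()
"Total execution time: $($ScriptTimer.Elapsed)"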

When I was looking for other ways to time small amounts of script, I learned about the Measure-Command {} cmdlet, which will measure the time it takes to complete everything listed within the curly braces, as shown here:

Image of command output

I used this cmdlet in my script to quickly learn where the majority of time was being spent in the big mailbox processing loop. I did this by using multiple instances of this cmdlet inside the loop, essentially dividing the loop into multiple sections, which showed me how long each part of that loop was taking. The results for each section varied in the reported milliseconds, but some stood out as taking more time, which allowed me to decide which sections to focus on.
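
For example, dividing a loop body into sections with Measure-Command might look like this sketch (the section contents are placeholders, and $GatheredMailboxes is assumed to hold the collection being processed):

ForEach ($Mailbox in $GatheredMailboxes)
{
    $Section1 = Measure-Command { <# gather per-mailbox data here #> }
    $Section2 = Measure-Command { <# build the report row here #> }
    "Section 1: {0} ms; Section 2: {1} ms" -f $Section1.TotalMilliseconds, $Section2.TotalMilliseconds
}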

I encourage you to use this cmdlet any time you want to quickly check how long part of your script is taking to execute. For more information, see Using the Measure-Command Cmdlet.

Query for multiple objects: Same time or individually?

One of the major time delays in my script was when it was gathering information for each mailbox from multiple locations, one mailbox at a time. In this case, after the script gathered all of the mailboxes in one large Get-Mailbox query, it would perform individual queries for each mailbox by using the Get-MailboxStatistics, Get-ActiveSyncDevice, and Get-CSUser (Lync) cmdlets.

Performing these multiple individual queries one mailbox object at a time was very costly in regards to execution time because individual data query sessions were being opened and closed serially (one at a time) per cmdlet per mailbox.

To put this in perspective, let’s say that you have 100 mailboxes you need to gather the mailbox statistics for, and using Get-MailboxStatistics takes one second per mailbox to query and return the information. One second may not sound like a long time for each individual mailbox, but doing that for all 100 mailboxes takes 1 minute 40 seconds. What if you could query all 100 mailboxes in a single query by using the Get-MailboxStatistics –Server switch, and this single query takes 30 seconds?

Now imagine if you had two more queries to perform for each mailbox (ActiveSync and Lync) that also take one second each per mailbox, or 30 seconds as bulk queries. As you can see, the individual queries can add up, both in the number of queries you have to run and the number of objects you have to run them for.

Therefore, the more time-efficient approach (if the cmdlet supports it and you can leverage the output in your script) is to gather as many objects as possible at the same time in a single cmdlet call. Going back to my example script, simply switching from using Get-MailboxStatistics to query one mailbox at a time to bulk querying all of the mailbox data on a per-server basis shaved about 45 minutes off the script execution time.

In the 1.X version of my script, the mailbox statistics lookup was performed one mailbox at a time as a part of a large ForEach loop that processes each mailbox individually:

Image of script

In the 2.X version of my script, the mailbox statistics lookup was performed for all non-disconnected mailboxes housed on all of the database availability group (DAG) Exchange servers, and temporarily stored in a data table:

Image of script

And then back in the per-mailbox ForEach loop, the individual mailbox data was pulled from the data table instead of from individual Get-MailboxStatistics queries. As a safety measure, the Get-MailboxStatistics cmdlet was used only if the mailbox didn’t exist in the data table for whatever reason (such as it was moved to a temporary database):

Image of script
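
The script in the images is not reproduced here; the following sketch shows the bulk-then-lookup pattern with a hashtable standing in for the data table (the server names are illustrative):

# Gather the mailboxes once, and bulk-query statistics per server.
$GatheredMailboxes = Get-Mailbox -ResultSize Unlimited
$StatsTable = @{}
ForEach ($Server in "MBX01","MBX02")
{
    Get-MailboxStatistics -Server $Server |
    ForEach-Object { $StatsTable[$_.MailboxGuid] = $_ }
}

# In the per-mailbox loop, pull from memory first; fall back to a
# direct query only if the mailbox is missing from the table.
ForEach ($Mailbox in $GatheredMailboxes)
{
    $Stats = $StatsTable[$Mailbox.ExchangeGuid]
    If (-not $Stats) { $Stats = Get-MailboxStatistics $Mailbox }
}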

Although this new method of querying the mailbox statistics requires a lot more script and requires that it is broken into two separate pieces (an initial data gathering and then a query against the gathered data), the speed increase of the script was well worth it. At least it got my old coworkers off my back.

Perform multiple queries at the same time

Performing multiple queries at the same time is also known as “multithreading” in Windows PowerShell. It essentially consists of running multiple actions (in our case, data queries) at the same time through multiple Windows PowerShell jobs. Each new job runs in its own Windows PowerShell.exe instance (session). Subsequently, the job obtains its own pool of memory and CPU threads, which makes for more efficient use of a computer’s memory and CPU resources.

Moving queries into separate jobs has two major areas that deserve special consideration. The first is that the data passed back into the main script from the job is “deserialized” through the Receive-Job cmdlet. This means the output of the job is passed back to the variable you assign to the output by using XML as a temporary transport.

Passing the data through XML can cause the attributes of an object being passed to be altered from their current type to another data type, such as a string. If you aren’t prepared for this, it can wreak havoc on your existing functional code until you script around it. If you aren’t sure whether an attribute has changed as a result of being passed through the Receive-Job cmdlet, you can always call the GetType() method on it to check. For more information, see How objects are sent to and from remote sessions.

The second area of consideration is that it is possible to run too many jobs in your script at the same time, subsequently overwhelming the computer running the script. This will slow everything down. You can handle this by making sure you limit or throttle the number of jobs executing at the same time. In my script, I only spawned three separate jobs, so this wasn’t a real concern for me. But if you plan to spawn tens or hundreds of jobs, you should read more about this scenario in Increase Performance by Slowing Down Your PowerShell Script.
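
A sketch of the job-based approach for three concurrent data pulls (this assumes the Exchange and Lync cmdlets are available inside each job session, which typically requires loading them in the script block):

$StatsJob = Start-Job { Get-MailboxStatistics -Server "MBX01" }
$EasJob   = Start-Job { Get-ActiveSyncDevice }
$LyncJob  = Start-Job { Get-CsUser }

# Wait for all three jobs, and then collect the (deserialized) output.
$null  = Wait-Job $StatsJob, $EasJob, $LyncJob
$Stats = Receive-Job $StatsJob
$Eas   = Receive-Job $EasJob
$Lync  = Receive-Job $LyncJob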

Another method of multithreading is runspaces. I haven’t had a chance to try them yet, but testing by others has shown they are faster than jobs, and they can pass variables between the job and the main script (presumably bypassing the deserialization concern). If you are interested in this, you can read more about it in Multithreading Powershell Scripts.

Ultimately, whatever method you chose, being able to execute multiple data pulls at the same time will help reduce the overall execution time of the script because the script won’t be waiting to start one data collection until another one finishes.

Avoid extracting variable attributes multiple times

I previously thought that it was unnecessary to create a new variable based on the data extracted from an object’s “.” attribute (for example, $Mailbox.DisplayName). It appeared to waste script when the information was already in the object’s attribute. The only time I extracted that type of information out of an object was if I was worried it would change and I wanted to save a point-in-time copy, or if some other cmdlet or function didn’t like using the $Object.Attribute extraction method.

Through research, I found that every time you ask Windows PowerShell to reference an object’s attribute (which forces a data extraction each time), it takes longer than if that information was saved to and referenced from a standard variable. For example, it will take Windows PowerShell longer to reference the information in the $Mailbox.DisplayName string five times than it will if you set $DisplayName = $Mailbox.DisplayName once and then reference the $DisplayName string variable five times.

You may not notice a speed difference if you are only saving yourself a couple of extractions in a script. However, this approach becomes extremely important in ForEach loops where there could be thousands of extra unnecessary object attribute enumerations, as evidenced by this really good post: Make Your PowerShell For Loops 4x Faster.

For example, I was using a loop to process 20,000+ mailboxes, and I was using the Write-Progress cmdlet that extracted the same $GatheredMailboxes.Count value every loop. I noticed a difference when I switched to extracting the mailbox count only once before the loop into a variable named $GatheredMailboxesCount, and then used that variable in my Write-Progress cmdlet.
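
Here is a sketch of that change:

# Extract the count once, before the loop, instead of on every iteration.
$GatheredMailboxes = Get-Mailbox -ResultSize Unlimited
$GatheredMailboxesCount = $GatheredMailboxes.Count
$Processed = 0

ForEach ($Mailbox in $GatheredMailboxes)
{
    $Processed++
    Write-Progress -Activity "Processing mailboxes" `
        -PercentComplete (($Processed / $GatheredMailboxesCount) * 100)
}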

Subsequently, I recommend that if you are going to reference an object’s attribute information more than once (loop or no loop), you should save the attribute information in a variable and reference the variable. This allows Windows PowerShell to extract the information only once, which can become exponentially important in the amount of time it takes to process a script.

Prioritize condition checks from most common to least

This may seem obvious, but the order in which multilevel If conditions are checked can have an impact on a script’s speed. This is based on how many condition checks you have and which condition is likely to occur the most often. Windows PowerShell will stop checking the remaining conditions when a condition is met. To take advantage of this processing logic, you want to check the condition that is most likely to occur first, so the rest of the condition checks never have to be considered.

For example, the following small script construct probably doesn’t look like there will be a performance difference one way or another in checking the condition of the $Colors variable:

Image of script

But when the array $Colors is expanded in the example, and Yellow becomes the predominant color, the construct must then check and dismiss the first three conditions in the script before the Yellow condition is met:

Image of script

In this simple code block, swapping the positions of the Yellow and Green checks improved the execution time by only a few milliseconds. That might not seem like a big difference, but now imagine performing this check 10,000+ times. Next imagine that you are checking multiple conditions in each If statement with the -and and -or operators. Finally, imagine that you are extracting a different object’s attribute with each check (such as If $Mailbox.Displayname –like “HR”). All of these can add up to noticeable time delays, depending on how complex your script is.
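
The screenshots above are not reproduced; a self-contained sketch of the reordered checks (with illustrative sample data and counters) looks like this:

$Colors = "Yellow","Yellow","Green","Yellow","Red","Yellow"
$YellowCount = $GreenCount = $RedCount = $OtherCount = 0

ForEach ($Color in $Colors)
{
    # Yellow is the predominant value, so it is checked first.
    If     ($Color -eq "Yellow") { $YellowCount++ }
    ElseIf ($Color -eq "Green")  { $GreenCount++ }
    ElseIf ($Color -eq "Red")    { $RedCount++ }
    Else                         { $OtherCount++ }
}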

All of this points to making sure that if you know a particular condition is going to be met most of the time (for example, most mailboxes are one default size), you should put that condition check first in a group of condition checks so Windows PowerShell doesn’t waste time checking conditions that are less likely to occur.

Likewise, if you know another condition is likely to occur second most often (such as most non-default mailbox sizes are a standard larger size), you should put that condition check second…and so on. Although you may not always see a speed improvement from this prioritization, it’s a good habit to get into for when it will make a difference.

Eliminate redundant checks for same condition

For those of us who think linearly (which is often the case with IT pros), when we are writing script we often think about condition checks leading to outcomes for different purposes. Because each condition check causes Windows PowerShell to stop and make a decision, it is important for the sake of speed to eliminate multiple checks for the same condition.

For example, when I wrote the following two snippets, I was originally thinking about determining a bottom-line charge for a mailbox, which depended on whether the mailbox was also associated with a BlackBerry user. Later in the script, I was focused on whether the user’s BlackBerry PIN and separate charge should be listed in the report (if they were a BlackBerry user):

Image of script

Then during a code review, I realized that I was checking the same condition twice, and I combined them into the following single check:

Image of script
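
A sketch of the combined check (the property and variable names are hypothetical):

# One BlackBerry check handles both the billing and the report output.
If ($Mailbox.IsBlackBerryUser)
{
    $MailboxCharge += $BlackBerryCharge
    $ReportRow.BlackBerryPIN = $Mailbox.BlackBerryPIN
}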

Like with prioritizing condition checks, these changes might not immediately show any increased speed in testing. But when you are processing 20,000+ objects and performing multiple redundant checks, it all adds up.

Use ForEach over ForEach-Object

The difference between the ForEach loop construct and ForEach-Object cmdlet is not well understood by those beginning to write Windows PowerShell scripts. This is because they initially appear to be the same. However, they are different in how they handle the loop object enumeration (among other things), and it is worth understanding the difference between the two—especially when you are trying to speed up your scripts.

The primary difference between them is how they process objects. The cmdlet uses pipelining and the loop construct does not. I could try to go into detail about the difference between the two, but there are individuals who understand these concepts much better than myself. The following blog post has some good information and additional links: Essential PowerShell: Understanding ForEach. I highly encourage you to spend some time reviewing the posts to better understand the differences.

My recommendation is that you only use the ForEach-Object cmdlet in the following situations:

  • If you are concerned about saving memory while the loop is running (because only one of the evaluated objects is loaded into memory at a time).
  • If you want to start seeing output from your loop sooner (because the cmdlet starts the loop the second it has the first object in a collection, versus waiting to gather them all like the ForEach construct does).

You should use the ForEach loop construct in the following situations:

  • If you want the loop to finish executing faster (notice I said finish faster, not start showing results faster).
  • If you want to use Break or Continue to exit the loop (because you can’t with the ForEach-Object cmdlet).

This is especially true if you already have the group of objects collected into a variable, such as a large collection of mailboxes.
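
As a sketch of the two forms (assuming $GatheredMailboxes holds the output of an earlier Get-Mailbox query; the DisplayName output is just for illustration):

# ForEach-Object: objects stream down the pipeline one at a time.
Get-Mailbox -ResultSize Unlimited | ForEach-Object { $_.DisplayName }

# ForEach construct: the collection is already in memory, the loop
# finishes faster, and Break/Continue are available.
ForEach ($Mailbox in $GatheredMailboxes) { $Mailbox.DisplayName }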

Like with all rules or recommendations, there are exceptions for when the ForEach-Object cmdlet might finish faster than the ForEach loop construct, but this is going to be under unique scenarios such as starting and pulling the results of multiple background jobs. If you think you might have one of these unique scenarios, you should test both methods by using the Measure-Command cmdlet to see which one is faster.

~Dan

Thank you, Dan, for a great post. I look forward to seeing your next contribution to the Hey, Scripting Guy! Blog. Join us tomorrow when I begin Windows PowerShell Profile Week.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Time Parts of PowerShell Scripts


Summary: Learn how to time different parts of your Windows PowerShell scripts.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find which parts of my script are taking the longest to complete so I can focus on improving them?

Hey, Scripting Guy! Answer Use the Measure-Command cmdlet to time any section of script in between two curly braces:

Measure-Command {Start-Sleep -Seconds 5}

Windows PowerShell automatically outputs the time for that section of script:

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 5

Milliseconds      : 7

Ticks             : 50072804

TotalDays         : 5.79546342592593E-05

TotalHours        : 0.00139091122222222

TotalMinutes      : 0.0834546733333333

TotalSeconds      : 5.0072804

TotalMilliseconds : 5007.2804

What's In Your PowerShell Profile? Here Are a Few of My Favorites


Summary: Microsoft Scripting Guy, Ed Wilson, shares a few favorites from his Windows PowerShell profile.

Microsoft Scripting Guy, Ed Wilson, is here. I was at the Windows PowerShell Summit a couple weeks ago. When Jason Shirk, a manager on the Windows PowerShell team, was doing a demo, I was able to glance into his Windows PowerShell profile. I was amazed by the number of cool things that were there. I thought, “Hmmmmm...”

I have had a Windows PowerShell profile since the early beta days—back when it was called Monad. I hardly consider myself to be a noob, but I am also the first to admit that I do not know everything there is to know about Windows PowerShell. In fact, I learn new stuff every single day. One of the best ways to learn is to talk to other Windows PowerShell enthusiasts and gurus. That is one of the reasons the Scripting Wife and I are such big supporters of Windows PowerShell User Groups. I firmly believe that if you get a bunch of Windows PowerShell people together, great things will happen.

So, that is the source of this week’s blog posts. I decided it would be fun to have people submit some of their favorite functions, tips, and tricks from their Windows PowerShell profile. I am also asking you to post your profile on the Scripting Guys Script Repository, or to paste your favorite function, tip, or trick into the comments section of this week’s series of posts. It will be good stuff.

Note  I have nearly twenty Hey, Scripting Guy! Blog posts where I talk about Windows PowerShell profiles. I recommend you review them prior to getting too bogged down with this week’s series.

Over the years, I have had wild and wooly Windows PowerShell profiles, and simple and serene Windows PowerShell profiles. I generally add the following components into my Windows PowerShell profile. I like to create a section for each:

  • Aliases
  • PS Drives
  • Functions
  • Variables
  • Initialization commands

When I am in the midst of making a lot of changes to my Windows PowerShell profile, I like to back it up each time I open Windows PowerShell. It is pretty easy to do; all I need to do is call the following command and specify a name and a destination:

Copy-Item –Path $profile –Destination <backup path and name>

One of the things that people sometimes talk about is the need to check their profile to ensure it has not inadvertently changed—either by accident or by intention. For example, I recently downloaded a module—a reputable module from a reputable author. Come to find out, he modified my Windows PowerShell profile to call his module with specific parameters. It would have been nice if he had told me he was going to do this. Not that it caused any problems. It is just that I would have liked to have been prompted to do this myself, or given the option to allow him to do it. Know what I mean?

Some people talk about signing the Windows PowerShell profile, but that means that every time it is modified, it needs to be re-signed. An easier way to detect changes is to save a file hash of the profile, and then compare the file hash on startup.

Get-ProfileHash function

So I decided to create a Get-ProfileHash function. I added this to my ISE profile and to my Windows PowerShell console profile. I added logic to the function so that it will detect which profile is in use and check that one. You can find the complete function on the Script Center Repository: Get-ProfileHash function.

I am using the Get-FileHash function from Windows PowerShell 4.0 in Windows 8.1, but there are other ways to get a file hash. They are just not quite as easy. Because of this, I add a check for Windows PowerShell 4.0 as shown here:

#requires -version 4.0

Function Get-ProfileHash

{

Now I need to get the name of the current Windows PowerShell profile. To do this, I use the Split-Path cmdlet and choose the –Leaf parameter. But this returns a string, and it includes the .PS1 file extension. So I cast the string into a [system.io.fileinfo] object type, and I then choose only the basename property. I could have parsed the string, but casting to an object and selecting a specific property works better. Here is that line of script:

$profileName = ([system.io.fileinfo](Split-Path $profile -Leaf)).basename

Now I want to get the folder that contains the profile. That is easy: I use the –Parent parameter of Split-Path:

$profileFolder = Split-Path $profile -Parent

I need to build up a string for the profile name with an XML file extension. I use the Join-Path cmdlet and choose the profile folder, and I create a file name based on the profile name with the .xml file extension. This is shown here:

$HashPath =

  Join-Path -Path $profileFolder -ChildPath ("{0}.{1}" -f $profileName,"XML")

I want to see if there is already an XML representation of the file hash. If there is, I will read it. Then I will get a new file hash for the current profile, and use Compare-Object to examine the hash property. This portion of the function is shown here:

if(Test-Path -Path $HashPath)

   { $oldHash = Import-Clixml $HashPath

     $newHash = Get-FileHash -Path $profile

     $diff = Compare-Object -Property hash -ReferenceObject $oldHash `

     -DifferenceObject $newHash

If there is a difference, I open Windows PowerShell with the –noprofile switch, and I open the current profile in Notepad. In this way, I can easily look at the profile in a safe environment. This script is shown here:

If($diff)

      {

powershell -noprofile -command "& notepad $profile"

When I am done, I delete the old file hash of the profile, and I create a new file hash of the current Windows PowerShell profile. I store the hash in the XML file as shown here:

Remove-Item $HashPath

       Get-FileHash -Path $profile |

       Export-Clixml -Path $HashPath -Force }

   }

If there is no stored file hash, I simply take a file hash and store it in the XML file. It will be referenced the next time I call the function:

  Else { Get-FileHash -Path $profile |

       Export-Clixml -Path $HashPath -Force }

The complete function is shown here:

#requires -version 4.0

Function Get-ProfileHash

{

 $profileName = ([system.io.fileinfo](Split-Path $profile -Leaf)).basename

 $profileFolder = Split-Path $profile -Parent

 $HashPath =

  Join-Path -Path $profileFolder -ChildPath ("{0}.{1}" -f $profileName,"XML")

 if(Test-Path -Path $HashPath)

   { $oldHash = Import-Clixml $HashPath

     $newHash = Get-FileHash -Path $profile

     $diff = Compare-Object -Property hash -ReferenceObject $oldHash `

     -DifferenceObject $newHash

     If($diff)

      {

powershell -noprofile -command "& notepad $profile"

       Remove-Item $HashPath

       Get-FileHash -Path $profile |

       Export-Clixml -Path $HashPath -Force }

   }

 Else { Get-FileHash -Path $profile |

       Export-Clixml -Path $HashPath -Force }

}

All that remains to do is to add my Get-ProfileHash function to my profiles, and then add a line in the profile that calls the function. The rest is automatic.

I have uploaded the complete function to the Script Center Repository: Get-ProfileHash function. It is best to copy it from there rather than from this blog, because copying from the blog can pick up stray formatting characters.

So this is something cool from my Windows PowerShell profile. Windows PowerShell Profile Week will continue tomorrow when I will bring together a collection of tips and tricks from various Microsoft PowerShell enthusiasts. Hey, what’s in your Windows PowerShell profile? Let me know.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


PowerTip: Use PowerShell to Add Hyper-V to Windows 8.1 Laptop


Summary: Learn how to use Windows PowerShell to add optional features to your laptop running Windows 8.1.

Hey, Scripting Guy! Question How can I use Windows PowerShell to install Hyper-V on my laptop running Windows 8.1?

Hey, Scripting Guy! Answer Use the Enable-WindowsOptionalFeature cmdlet, and choose the –Online option. The feature name is case sensitive, and must be typed exactly, for example:

Enable-WindowsOptionalFeature -FeatureName Microsoft-Hyper-V-All -Online -All -NoRestart

What's in Your PowerShell Profile? Users' Favorites Part 2


Summary: Microsoft Scripting Guy, Ed Wilson, talks to various Microsoft Windows PowerShell users about what is in their profile.

Microsoft Scripting Guy, Ed Wilson, is here. Today I have more items that Microsoft employees have in their profiles.

Michael Lyons shared this:

To make my Windows PowerShell profile easier to use, I like to customize it with:

  • Some additional aliases to give a quick shorthand to common commands.
  • A prompt that has color and times every command. It shows the time if it took more than 3 seconds to run, and it beeps if it took more than 20 seconds (so if I’m reading email or browsing the web, I can tell when it’s done). A sketch of this idea follows the list.
  • A replacement for “dir” and “rd” that call “cmd.exe” so that the syntax is the same. It’s too burnt into my brain. If I’m currently in a PS drive, it will mount it to a valid drive letter first (because cmd.exe must be called from a valid drive).
  • A hosts function to edit $env:windir\system32\drivers\etc\hosts.
  • Some Update-FormatData changes. For example, file sizes shown with gci have thousands separators, junctions will show that they are junctions, and certificates will show if they have a private key.
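
Here is a minimal sketch of a timing prompt in that spirit (my illustration, not Michael's actual code; it uses the StartExecutionTime and EndExecutionTime properties that Get-History exposes):

function prompt
{
    $last = Get-History -Count 1
    if ($last) {
        $elapsed = $last.EndExecutionTime - $last.StartExecutionTime
        if ($elapsed.TotalSeconds -gt 20) { [console]::Beep() }   # audible alert for long commands
        if ($elapsed.TotalSeconds -gt 3) {
            Write-Host ("[{0:N1}s] " -f $elapsed.TotalSeconds) -ForegroundColor Yellow -NoNewline
        }
    }
    "PS $($executionContext.SessionState.Path.CurrentLocation)> "
}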

Here’s my dir function:

function GLOBAL:dir

{

    $cwd = & cmd.exe /c cd 2> $null

    if ($cwd -eq $pwd.Path)

    {

        & cmd.exe /c dir $args

    }

    else

    {

        $cwd = $pwd.ProviderPath

        pushd $home

        & cmd.exe /c pushd "$cwd" `& dir $args

        popd

    }

}

Tiger Wang provided the following functions from a module called DllOptimizationControl.psm1. Each function recursively turns the optimization on or off for each DLL under the current path. For more information about .NET Framework debugging control, see Making an Image Easier to Debug.

function Disable-DllOptimization($path="$PWD")
{
    # Write an <assembly>.ini file beside each DLL. AllowOptimize=0 tells the
    # CLR to skip JIT optimization, which makes the image easier to debug.
    $debugInfo = @'
[.NET Framework Debugging Control]
AllowOptimize=0
GenerateTrackingInfo=1
'@

    Get-ChildItem $path -Recurse | Where-Object {$_.Name.Contains(".dll")} |
        ForEach-Object {$_.FullName.Replace(".dll", ".ini")} |
        ForEach-Object {$debugInfo | Out-File $_}
}

function Enable-DllOptimization($path="$PWD")
{
    # Remove the .ini files so the CLR returns to its default optimized behavior.
    Get-ChildItem $path -Recurse | Where-Object {$_.Name.Contains(".dll")} |
        ForEach-Object {$_.FullName.Replace(".dll", ".ini")} |
        ForEach-Object {Remove-Item $_ -Force}
}
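
Usage is straightforward; the build path here is hypothetical:

Disable-DllOptimization -path 'C:\Build\Output'   # write the .ini files for debugging
Enable-DllOptimization -path 'C:\Build\Output'    # remove them again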

Tim Dunn provided a link to a recent blog post he wrote called $PROFILE and RDP Sessions. I am not going to explain his entire post, but I wanted to point out one item. He has a really cool method of mapping his home drive when he accesses resources by remoting into machines in different domains. It is pretty clever, and it is worth a read.

Rahul Duggal wrote:

"I usually work with SQL Server, and I’ve found the following function to be useful in my profile."

Write-Host "Loading SQLPS Module, Hold your horses!" -Fore Green # To tell user, why PS is not showing prompt yet

Import-Module SQLPS -DisableNameChecking

Write-Host "Welcome $env:USERNAME , Let's Automate !!!" -Fore yellow  # Small motivation

Set-Location E:\PowerShell #I change default directory to avoid accidental damage to important folders in C: drive

$host.PrivateData.ErrorForegroundColor = "gray"  # RED colour in error messages stresses me out :)

Tomorrow we will see what is in the profiles of some Windows PowerShell MVPs.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell and WMI to Change Disk Label


Summary: Learn how to use Windows PowerShell and WMI to change a disk label.

Hey, Scripting Guy! Question How can I use Windows PowerShell with WMI to change the disk label on a logical disk?

Hey, Scripting Guy! Answer Beginning with Windows PowerShell 3.0, use the Get-CimInstance cmdlet to retrieve
          the logical disk, and then use the Set-CimInstance cmdlet to set the VolumeName:

Get-CimInstance win32_logicaldisk -Filter "deviceID = 'c:'" |

Set-CimInstance -Property @{volumename = 'SSD'}

Note  This technique requires you to start the Windows PowerShell console with elevated rights.

What’s in Your PowerShell Profile? PowerShell MVPs Favorites


Summary: Microsoft Windows PowerShell MVPs share some of their favorite functions from their Windows PowerShell profiles.

Microsoft Scripting Guy, Ed Wilson, is here. Today I will be sharing some profile goodies provided by Windows PowerShell MVPs.

Claus Nielsen stated that in his day-to-day work, he does not use a Windows PowerShell profile. There are several reasons for this; for example, he forgets to move it when he reinstalls his workstation. When he does demos, he has a script that puts line numbers in the Windows PowerShell console, which makes accessing history easier.

Boe Prox shares a couple of pretty cool ideas. He creates a modules drive. He also starts the transcript, and stores it in a specific location, but rather than letting the transcripts collect dust, he deletes them if they are more than 14 days old. He also creates a shortcut to the Windows PowerShell type accelerator, and he has a cool function that gets the constructor for classes. Like many people, Boe creates an alias for his custom functions.

## Module PSDrive

# Create Modules directory if it doesn't exist

New-Item -Path ($env:PSModulePath -split ';')[0] -ItemType Directory -ErrorAction SilentlyContinue

New-PSDrive -Name PSModule -PSProvider FileSystem -Root ($env:PSModulePath -split ';')[0]

 

## Transcript

Write-Verbose ("[{0}] Initialize Transcript" -f (Get-Date).ToString()) -Verbose

If ($host.Name -eq "ConsoleHost") {

    $transcripts = (Join-Path $Env:USERPROFILE "Documents\WindowsPowerShell\Transcripts")

    If (-Not (Test-Path $transcripts)) {

            New-Item -path $transcripts -Type Directory | out-null

            }

    $global:TRANSCRIPT = ("{0}\PSLOG_{1:dd-MM-yyyy}.txt" -f $transcripts,(Get-Date))

    Start-Transcript -Path $transcript -Append

    Get-ChildItem $transcripts | Where {

        $_.LastWriteTime -lt (Get-Date).AddDays(-14)

    } | Remove-Item -Force -ea 0

}

 

## Type accelerator shortcut

$accelerator = [PSObject].Assembly.GetType('System.Management.Automation.TypeAccelerators')

$null = $accelerator::Add('accelerator',$accelerator)

 

## Get-Constructor function and alias

Function Get-Constructor {

    <#

        .SYNOPSIS

            Displays the available constructor parameters for a given type

 

        .DESCRIPTION

            Displays the available constructor parameters for a given type

 

        .PARAMETER Type

            The type name to list out available contructors and parameters

 

        .PARAMETER AsObject

            Output the results as an object instead of a formatted table

 

        .EXAMPLE

            Get-Constructor -Type "adsi"

 

            DirectoryEntry Constructors

            ---------------------------

 

            System.String path

            System.String path, System.String username, System.String password

            System.String path, System.String username, System.String password, System.DirectoryServices.AuthenticationTypes aut...

            System.Object adsObject

 

            Description

            -----------

            Displays the output of the adsi contructors as a formatted table

 

        .EXAMPLE

            "adsisearcher" | Get-Constructor

 

            DirectorySearcher Constructors

            ------------------------------

 

            System.DirectoryServices.DirectoryEntry searchRoot

            System.DirectoryServices.DirectoryEntry searchRoot, System.String filter

            System.DirectoryServices.DirectoryEntry searchRoot, System.String filter, System.String[] propertiesToLoad

            System.String filter

            System.String filter, System.String[] propertiesToLoad

            System.String filter, System.String[] propertiesToLoad, System.DirectoryServices.SearchScope scope

            System.DirectoryServices.DirectoryEntry searchRoot, System.String filter, System.String[] propertiesToLoad, System.D...

 

            Description

            -----------

            Takes input from pipeline and displays the output of the adsi contructors as a formatted table

 

        .EXAMPLE

            "adsisearcher" | Get-Constructor -AsObject

 

            Type                                                        Parameters

            ----                                                        ----------

            System.DirectoryServices.DirectorySearcher                  {}

            System.DirectoryServices.DirectorySearcher                  {searchRoot}

            System.DirectoryServices.DirectorySearcher                  {searchRoot, filter}

            System.DirectoryServices.DirectorySearcher                  {searchRoot, filter, propertiesToLoad}

            System.DirectoryServices.DirectorySearcher                  {filter}

            System.DirectoryServices.DirectorySearcher                  {filter, propertiesToLoad}

            System.DirectoryServices.DirectorySearcher                  {filter, propertiesToLoad, scope}

            System.DirectoryServices.DirectorySearcher                  {searchRoot, filter, propertiesToLoad, scope}

 

            Description

            -----------

            Takes input from pipeline and displays the output of the adsi contructors as an object

 

        .INPUTS

            System.Type

       

        .OUTPUTS

            System.Constructor

            System.String

 

        .NOTES

            Author: Boe Prox

            Date Created: 28 Jan 2013

            Version 1.0

    #>

    [cmdletbinding()]

    Param (

        [parameter(ValueFromPipeline=$True)]

        [Type]$Type,

        [parameter()]

        [switch]$AsObject

    )

    Process {

        If ($PSBoundParameters['AsObject']) {

            $type.GetConstructors() | ForEach {

                $object = New-Object PSobject -Property @{

                    Type = $_.DeclaringType

                    Parameters = $_.GetParameters()

                }

                $object.pstypenames.insert(0,'System.Constructor')

                Write-Output $Object

            }

 

 

        } Else {

            $Type.GetConstructors() | Select @{

                                Label="$($type.Name) Constructors"

                                Expression={($_.GetParameters() | ForEach {$_.ToString()}) -Join ", "}

                    }

        }

    }

}

New-Alias gctor Get-Constructor

Dave Wyatt shares two functions. The first is New-ISETab, and the second is Get-ProxyCode. The first function opens code on a new tab in the Windows PowerShell ISE, and the second function is a helper function that makes it easier to write a proxy function. Both are cool.

gpc Send-MailMessage | nt

(aliases for: Get-ProxyCode –Name Send-MailMessage | New-ISETab)

function New-ISETab {

    [CmdletBinding()]

    param(

        [Parameter(Mandatory=$false, Position=1, ValueFromPipeline = $true)]

        [System.String[]]

        $Text,

        [Parameter(Mandatory=$false)]

        [System.Object]

        $Separator

    )

   

    begin {

        if (!$psISE) {

            throw 'This command can only be run from within the PowerShell ISE.'

        }

 

        if ((!$PSBoundParameters['Separator']) -and (Test-Path 'variable:\OFS')) {

            $Separator = $OFS

        }

 

        if (!$Separator) { $Separator = "`r`n" }

 

        $tab = $psISE.CurrentPowerShellTab.Files.Add()

       

        $sb = New-Object System.Text.StringBuilder

    }

   

    process {

        foreach ($str in @($Text)) {

            if ($sb.Length -gt 0) {

                $sb.Append(("{0}{1}" -f $Separator, $str)) | Out-Null

            } else {

                $sb.Append($str) | Out-Null

            }

        }

    }

 

    end {

        $tab.Editor.Text = $sb.ToString()

        $tab.Editor.SetCaretPosition(1,1)

    }

}

 

Set-Alias -Name nt -Value New-ISETab

function Get-ProxyCode {

    [CmdletBinding()]

    [OutputType([String])]

    param (

        [Parameter(Mandatory=$true, Position=0)]

        [System.String]

        $Name,

        [Parameter(Mandatory=$false,Position=1)]

        [System.Management.Automation.CommandTypes]

        $CommandType

    )

    process {

        $command = $null

        if ($PSBoundParameters['CommandType']) {

            $command = $ExecutionContext.InvokeCommand.GetCommand($Name, $CommandType)

        } else {

            $command = (Get-Command -Name $Name)

        }

 

        # Add a function header and indentation to the output of ProxyCommand::Create

       

        $MetaData = New-Object System.Management.Automation.CommandMetaData ($command)

        $code = [System.Management.Automation.ProxyCommand]::Create($MetaData)

 

        $sb = New-Object -TypeName System.Text.StringBuilder

 

        $sb.AppendLine("function $($command.Name)") | Out-Null

        $sb.AppendLine('{') | Out-Null

 

        foreach ($line in $code -split "\r?\n") {

            $sb.AppendLine('    {0}' -f $line) | Out-Null

        }

 

        $sb.AppendLine('}') | Out-Null

 

        $sb.ToString()

    }

}

Set-Alias -Name gpc -Value Get-ProxyCode 

David Moravec states that in his Windows PowerShell profile, he likes to use various Windows PowerShell drives to access important folders. One custom Windows PowerShell drive points to his user profile and to a folder called Projects: 

New-PSDrive -Name Projects -PSProvider FileSystem -Root "$env:USERPROFILE\Projects" | Out-Null

He also has code that will load all scripts from a specific folder. This is a pretty cool idea, and works well if you organize your scripts according to a specific project. You can then use Get-ChildItem to open the scripts:

# Load all scripts

Get-ChildItem (Join-Path 'Dropbox:\PowerShell\Profile' 'Scripts') |
    Where-Object { $_.Name -notlike '__*' -and $_.Name -like '*.ps1' } |
    ForEach-Object { . $_.FullName }

He also has functions that simplify common tasks, such as sending an email to his wife:

function MailToAndrea

{

    param($s, $b)

 

    $prop = @{

        From = 'David.Moravec@mainstream.cz'

        SmtpServer = $SMTPServer

        To = 'andrea.moravcova@<domain>.com' 

        Subject = $s

        Body = $b

    }

 

    Send-MailMessage @prop

}

Set-Alias -Name m2a -Value MailToAndrea

Jan Egil Ring shared the following tip:

"Here is a generalized script I use in my profile to load different customizations, based on which Active Directory domain I log in to. My ~\Documents\WindowsPowerShell folder is synchronized across my profile in each domain by using OneDrive. This way, I`m able to load different variables (cluster names and so on), based on where I`m working."

# Environment specific set up

 

$PSProfileRoot = Split-Path $MyInvocation.MyCommand.Path -Parent

 

switch ($env:USERDOMAIN)

{

 

'CORP' {

 

    try {

        . (Join-Path -Path $PSProfileRoot -ChildPath Environments\Corp\setup.ps1 -Resolve -ErrorAction stop)

        }

    catch {

        Write-Warning "Corp customization script not available"

        }

    }

 

'AZURE' {

 

    try {

        . (Join-Path -Path $PSProfileRoot -ChildPath Environments\Azure\setup.ps1 -Resolve -ErrorAction stop)

        }

    catch {

        Write-Warning "Azure customization script not available"

        }

    }

 

}

Jeffery Hicks shared the following:

"I take advantage of multiple profiles. Settings that should apply to the ISE and the console go in $profile.CurrentUserAllHosts. I then have host-specific profiles for settings that only make sense in the console or the ISE. Following are a few items that might be of interest."

1. Set some default locations for a few PSDrives.

(get-psdrive c).CurrentLocation="\scripts"
(get-psdrive d).CurrentLocation="\temp"
(get-psdrive hklm).CurrentLocation="\Software\Microsoft\Windows\CurrentVersion"

2. Define a function I can use to kick off a new v2 session.

Function New-V2Session {
#This must be run in an elevated session

Param([switch]$noprofile)

#Modify PowerShell.exe.Config if found
$exeConfig= Join-Path -path $PSHome -ChildPath "PowerShell.exe.config"
if (Test-Path $ExeConfig) {
    [xml]$config = Get-Content -Path $exeConfig
    #set policy to false to allow starting a v2 session
    $config.configuration.startup.useLegacyV2RuntimeActivationPolicy="False"
    $config.Save($exeConfig)
}

#start a new PowerShell v2 session
  if ($NoProfile) {
   $new = Start-Process -file PowerShell.exe -arg ' -version 2.0 -nologo -noprofile' -PassThru
  }
  else {
   $new = Start-Process -file PowerShell.exe -arg ' -version 2.0 -nologo' -PassThru
  }

#give the new processes a chance to start
Do {
 Start-Sleep -Milliseconds 100
} Until ($new)

#wait a few more seconds
Start-Sleep -Seconds 2

#change the config file back
If ($config) {
 #change the config back to avoid breaking PowerShell 3
 $config.configuration.startup.useLegacyV2RuntimeActivationPolicy="True"
 $config.Save($exeConfig)
}
} #end function

3. Call a function to display a quote of the day.

#requires -version 3.0

Function Get-QOTD {
<#
.Synopsis
Download quote of the day.
.Description
Using Invoke-RestMethod download the quote of the day from the BrainyQuote RSS
feed. The URL parameter has the necessary default value.
.Example
PS C:\> get-qotd
"We choose our joys and sorrows long before we experience them." - Khalil Gibran
.Link
Invoke-RestMethod
#>
    [cmdletBinding()]

    Param(
    [Parameter(Position=0)]
    [ValidateNotNullorEmpty()]
    [string]$Url="http://feeds.feedburner.com/brainyquote/QUOTEBR"
    )

    Write-Verbose "$(Get-Date) Starting Get-QOTD" 
    Write-Verbose "$(Get-Date) Connecting to $url"

    Try
    {
        #retrieve the url using Invoke-RestMethod
        Write-Verbose "$(Get-Date) Running Invoke-Restmethod"
       
        #if there is an exception, store it in my own variable.
        $data = Invoke-RestMethod -Uri $url -ErrorAction Stop -ErrorVariable myErr

        #The first quote will be the most recent
        Write-Verbose "$(Get-Date) retrieved data"
        $quote = $data[0]
    }
    Catch
    {
        $msg = "There was an error connecting to $url. "
        $msg += "$($myErr.Message)."

        Write-Warning $msg
    }

    #only process if we got a valid quote response
    if ($quote.description)
    {
        Write-Verbose "$(Get-Date) Processing $($quote.OrigLink)"
        #write a quote string to the pipeline
        "{0} - {1}" -f $quote.Description,$quote.Title
    }
    else
    {
        Write-Warning "Failed to get expected QOTD data from $url."
    }

    Write-Verbose "$(Get-Date) Ending Get-QOTD"

} #end Get-QOTD

#OPTIONAL: create an alias
Set-Alias -name "qotd" -Value Get-QOTD

Tome Tanasovski contributed the following:

"The most essential part of my profile includes two very important lines of code. The intention of these lines is to strike fear in the hearts of men (and women) who are brave enough to watch me start Windows PowerShell.  It also makes me smile every time I see it."

(("1*0x26*1x3*2x4*3x1*4x1*2x2*1x1*2x1*4x4*3x3*2x21*1x1*0x20*1x1*2x2*4x3*5x5*6x2*7x6*1x2*8x5*6x3*5x2*4x1*2x15*1x1*0x17*1x1*2x1*4x1*5x10*6x2*7x1*1x1*9x4*1x1*10x1*1x2*8x10*6x1*5x1*4x1*2x13*1x1*0x16*1x1*4x12*6x2*7x2*1x1*11x1*8x2*5x1*7x1*11x2*1x2*8x12*6x1*4x11*1x1*0x14*1x1*2x1*7x12*6x2*7x3*1x1*9x1*12x2*13x1*14x1*10x3*1x2*8x12*6x1*8x1*2x9*1x1*0x14*1x1*7x13*6x2*9x4*1x2*8x2*7x5*1x2*10x13*6x1*8x9*1x1*0x12*1x1*4x15*6x2*8x4*1x1*9x2*15x1*10x4*1x2*7x15*6x1*4x7*1x1*0x11*1x1*4x17*6x2*8x2*1x1*7x1*1x2*16x1*1x1*8x2*1x2*7x17*6x1*4x7*1x1*0x10*1x1*4x19*6x2*8x1*7x6*1x1*8x2*7x19*6x1*4x5*1x1*0x9*1x1*2x1*6x1*7x1*11x10*6x1*7x1*8x6*6x1*9x3*1x1*7x1*8x3*1x1*10x6*6x1*7x1*8x10*6x1*11x1*8x1*6x1*2x5*1x1*0x9*1x1*11x1*7x1*1x1*11x1*6x1*7x1*8x1*6x1*7x1*8x1*6x1*7x1*8x1*7x2*1x1*8x1*6x1*7x1*8x2*6x1*8x2*1x1*11x2*1x1*11x2*1x1*7x2*6x1*7x1*8x1*6x1*7x2*1x1*8x1*7x1*8x1*6x1*7x1*8x1*6x1*7x1*8x1*6x1*11x1*1x1*8x1*11x5*1x1*0x9*1x1*17x2*1x1*11x1*7x2*1x1*16x2*1x1*16x2*1x1*17x3*1x1*16x2*1x1*8x1*6x1*8x1*11x1*1x1*11x2*1x1*11x1*1x1*11x1*7x1*6x1*7x2*1x1*16x3*1x1*17x2*1x1*16x2*1x1*16x2*1x1*8x1*11x2*1x1*18x5*1x1*0x13*1x1*17x3*1x1*17x2*1x1*17x6*1x1*17x2*1x1*7x1*1x1*11x1*1x1*11x2*1x1*11x1*1x1*11x1*1x1*8x3*1x1*17x6*1x1*17x2*1x1*17x3*1x1*17x7*1x1*0x29*1x1*9x2*1x1*11x1*1x1*11x2*1x1*11x1*1x1*11x2*1x1*10x25*1x1*0x4*1x1*19x1*20x1*15x1*17x1*21x1*1x1*22x1*20x1*23x1*1x1*24x1*25x1*21x1*22x1*23x1*26x1*27x1*28x1*27x1*28x1*27x3*1x2*2x1*8x1*1x1*11x1*1x1*11x2*1x1*11x1*1x1*11x1*1x1*7x2*2x1*1x1*29x1*15x1*30x1*31x1*32x2*33x1*28x1*1x1*22x1*15x1*23x1*34x1*32x2*35x1*36x1*37x1*38x1*25x1*39x1*40x1*41x1*42x1*15x1*38x1*0x27*1x1*9x3*43x1*9x3*16x1*10x1*9x3*16x1*10x3*43x1*10x6*1x1*0x22*1x1*20x2*22x1*44x1*13x2*7x1*44x1*15x1*45x1*23x1*26x1*22x1*15x1*23x1*41x1*45x1*15x1*26x1*46x1*44x1*26x1*23x2*21x1*41x1*42x1*15x1*38x1*0" -split "x") -split "x"|%{ if ($_ -match "(\d+)\*(\d+)") { "$([char][int]("10T32T95T61T45T94T35T47T92T40T41T124T62T58T60T111T86T39T44T87T104T115T116T101T77T97T114T63T33T84T69T78T117T70T110T102T64T103T109T105T108T46T99T118T112T119T100" -split "T")[$matches[2]])" * $matches[1] } }) -join ""

$host.UI.RawUI.WindowTitle = "By the powershell of greyskull"

"If this inspires you, you should read through this post on the Windows PowerShell Forum from 2010: Post your sig thread.

"Besides a multitude of aliases for different languages and tools like Perl, Python, golang, Pester, and Git, I keep the following function, which allows me to quickly grep through the history of commands I have recently typed."

function hgrep {

    param(

        [Parameter(Mandatory=$false, Position=0)]

        [string]$Regex,

        [Parameter(Mandatory=$false)]

        [switch]$Full

    )

    $commands = get-history |?{$_.commandline -match $regex}

    if ($full) {

        $commands |ft *

    }

    else {

        foreach ($command in ($commands |select -ExpandProperty commandline)) {

            # This ensures that only the first line is shown of a multiline command

            # You can always get the full command using get-history or you can fork and remove this from the gist

            if ($command -match '\r\n') {

                ($command -split '\r\n')[0] + " ..."

            }

            else {

                $command

            }

        }

    }
}

"Finally, my prompt is very simple. I like to add the time that the last command finished to every line. I do this with the following prompt function."

function prompt {

    (get-date).ToString("HH:mm:ss") + " " + $(if (test-path variable:/PSDebugContext) { '[DBG]: ' } else { '' }) + 'PS ' + (Get-Location).ProviderPath + $(if ($nestedpromptlevel -ge 1) { '>>' }) + '> '

}

"Additionally, I run posh-git to ensure that my prompt provides me with Git information for the directory I'm in. My entire profile lives in a GitHub Gist. You can read through it here: toenuff  / profile.ps1."

Bartek Bielawski shared the following:

"First of all, I’m huge fan of script signing. In my opinion, people should make sure that their environment is safe from a malicious profile. After all, this is simply a text file in a documents folder, and nothing (as far as I know) is protecting it. During my Windows PowerShell security talk, I use a “profile” that starts as follows."

Write-Progress -PercentComplete 0 -Activity 'Deleting users from AD..'

Start-Sleep -Seconds 1

foreach ($Percent in 1..99) {

    Write-Progress -PercentComplete $Percent -Activity 'Deleting users from AD..' -Status "Delete so far: $($Percent * 100) :D"

    Start-Sleep -Milliseconds 200

}

Write-Progress -PercentComplete 100 -Activity 'And now I will kill your PC. BSOD!'

Start-Sleep -Seconds 1

"After that, I show an actual "blue screen of death" in Windows 8, blur it out, and finalize it with some silly graphics. To prevent this from happening in real life, I have used the same setup for years, and it’s something I learned from Glenn Sizemore’s blog. (Unfortunately, I can’t provide you with URL to Glenn’s post—his blog is long gone.) The setup includes:

  • The execution policy is configured to AllSigned.
  • Any profile script I use is signed.
  • In the profile itself, before I load any modules, I change the execution policy for the current process to RemoteSigned (a one-line sketch follows this list).
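
That per-process change is a single line at the top of the signed profile (my sketch of the approach, not Bartek's exact code):

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force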

"Now if someone tries to modify my profile, I will get information that it was modified, and it won’t run. At the same time, I can run any script I want. (But hey, I’m doing it on purpose, so if the script is bad, I’m the one to blame.) With profiles, it’s not always a voluntary run.

"Before Tobias Weltner made the ISESteroids add-in public, I had a different issue: Whenever I updated my profile, I needed something to update its signature too. That’s how I ended up with the following handy little function. It updates the digital signature of any script, but it defaults to my profile."

function Update-ScriptSignature {

param (

    [string]$FilePath = $profile,

    [Security.Cryptography.X509Certificates.X509Certificate2]$Certificate = $(

        Get-ChildItem -Path Cert:\CurrentUser\My -CodeSigningCert |

            Where-Object { $_.Verify() } |

            Select-Object -First 1)

)

    $Params = @{

        FilePath = $FilePath

        Certificate = $Certificate

        TimestampServer = 'http://timestamp.digicert.com'

    }

    Set-AuthenticodeSignature @Params

}

"As you can see, it will take the first code-signing certificate I have that can be verified, and sign my script with it. It uses TimestampServer, which is something people tend to ignore. Why would you use it? Well, if someone told you that after your digital signature expires, any script you’ve signed with it will no longer be valid, they lied.

"That’s exactly what a time stamp is for—it’s proof that your script was signed when the signature was valid. That prevents nasty tricks with moving clocks that could keep a yearly certificate valid forever. And that’s why a script withouta time stamp “dies” when the certificate expires.

"Following is a good example. It is a script I signed two years ago, and it is still valid, even though my certificate expired. Without the time stamp, I would have to sign it again or ignore the fact that it has any signature."

<# Provider: FileSystem => Location: E:\PowerShell ID: [25]

#> Get-AuthenticodeSignature .\LocalAdminGUI.ps1 |

    Format-List Status, { $_.SignerCertificate.NotAfter }, TimeStamperCertificate, {Get-Date}

Status                          : Valid

$_.SignerCertificate.NotAfter  : 21-Mar-13 13:00:00

TimeStamperCertificate          : [Subject]

                                    CN=DigiCert Timestamp Responder, O=DigiCert, C=US

                                 

                                  [Issuer]

                                    CN=DigiCert Assured ID CA-1, OU=www.digicert.com, O=DigiCert Inc, C=US

                                 

                                  [Serial Number]

                                    038B96F070D9E21E55A5426792E1C83A

                                 

                                  [Not Before]

                                    04-Apr-12 02:00:00

                                 

                                  [Not After]

                                    18-Apr-13 02:00:00

                                 

                                  [Thumbprint]

                                    51AEC7BA27E71A65D36BE1125B6909EE031119AC

                                  

Get-Date                        : 09-May-14 23:46:26

Thanks to the MVPs who shared these tips. Windows PowerShell Profile Week will continue tomorrow when I will share Windows PowerShell profile tips from Microsoft premier field engineers.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Import All PowerShell Modules


Summary: Learn to easily import all Windows PowerShell modules.

Hey, Scripting Guy! Question How can I import all Windows PowerShell modules into my current Windows PowerShell session?

Hey, Scripting Guy! Answer Use the Get-Module cmdlet with the –ListAvailable switch, and pipe the results to the Import-Module cmdlet:

Get-Module -ListAvailable | Import-Module

You can also use aliases:

gmo -l | ipmo

What’s in Your PowerShell Profile? Microsoft PFEs' Favorites


Summary: Microsoft premier field engineers share some of their favorite functions from their Windows PowerShell profiles.

Microsoft Scripting Guy, Ed Wilson, is here. Today we will look at some profile excerpts from a few Microsoft premier field engineers (PFEs).

Michael Wiley offers the following idea:

"I actually got this from Ashley McGlone, but I use it extensively for giving presentations and demos. My $Profile for ISE contains the following lines to make error text more readable on projectors and to increase the font size."

# Make error text easier to read in the console pane.

$psISE.Options.ErrorBackgroundColor = "red"

$psISE.Options.ErrorForegroundColor = "white"

 

# Make text easier to read at larger resolutions

$psISE.Options.Zoom = 125

"And I use the following for the $Profile in console."

$a = (Get-Host).PrivateData

$a.ErrorBackgroundColor = "red"

$a.ErrorForegroundColor = "white"

"I adjust the font setting on my console to 18pt Lucinda Console, and the Layout is 80 wide by 35 high. I also apply these same settings inside all of my virtual machines. On a typical 1024x768 projector, this makes the text very readable for students."

Ashley McGlone (@GoateePFE) provided this additional tip:

"As part of my work, I teach many Windows PowerShell workshops for our customers. I’ve been using Windows PowerShell for four years, and I run a fairly minimal profile for both the console and the ISE. I do this for two reasons:

1. I am a purist, and I like things clean and simple.

2. I want to have a default environment when I am teaching students.

"Here is what I use in my console profile."

# Clean out small files from my transcript folder

dir C:\Users\asmcglon\Documents\PowerShell\_Transcripts\*.txt | ? length -lt 1kb | remove-item

# Log a transcript for every console session in case I need to go back and find a command later

Start-Transcript -Path "C:\Users\ashley\Documents\PowerShell\_Transcripts\$(Get-Date -Format yyyyMMddHHmmss).txt" | Out-Null

# Set the console error colors to white text on red background.

# This is much easier to read on a projector.

$a = (Get-Host).PrivateData

$a.ErrorBackgroundColor = "red"

$a.ErrorForegroundColor = "white"

"Here is what I use in my ISE profile."

# Make error text easier to read in the console pane.

$psISE.Options.ErrorBackgroundColor = "red"

$psISE.Options.ErrorForegroundColor = "white"

# Make text easier to read at larger resolutions

$psISE.Options.Zoom = 125

"That’s it. These very basic settings help me as I teach Windows PowerShell."

Funtrol Ready says his main areas of focus are SharePoint and Windows PowerShell.

"I use a really basic profile that connects me to Office 365 where I can manage, demo, or troubleshoot the suite of products by using Windows PowerShell.

"In my ISE profile, first I capture my credentials and then connect to Office 365. This requires the installation of the Microsoft Online Services Sign-in Assistant and the Windows Azure Active Directory Module for Windows PowerShell. If I am using Windows PowerShell 2.0, I must import the module. Windows PowerShell 3.0 or later will automatically load them for me."

$o365cred = get-credential -UserName admin@<my-tenant>.onmicrosoft.com -Message cloudO365demo

Connect-MsolService -Credential $o365cred

"Next I connect to SharePoint Online. This requires downloading the SharePoint Online Management Shell."

connect-sposervice -url https://<my-tenant>-admin.sharepoint.com -Credential $o365cred

 "Next I connect to Lync Online. This requires the installation of the Lync Online Connector Module."

Import-Module LyncOnlineConnector

$session = New-CsOnlineSession -Credential $o365cred

Import-PSSession $session

"And finally, I connect to Exchange Online by using a remote session."

$exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Credential $o365cred -Authentication "Basic" -AllowRedirection

Import-PSSession $exchangeSession

Georges Maheu says that he uses Windows PowerShell every day. Most of the time, he uses Windows PowerShell with scripts, but he uses the console to test things and to do research. Here are some of the functions from his profile.

"My first function loads my profile to facilitate editions. This is very useful when you are creating your profile. The second function pipes Help text to More, which reduces scrolling. The third function lists WMI classes (excluding CIM classes). The last function is my favorite, it overwrites the default Prompt function. This provides the following:

1. Displays a timestamp to gauge timespans between commands. I use Measure-Command for more precise timing, but this provides me with a rough order of magnitude.

2. Displays the current path on its own line, which provides more real estate to type commands.

3. The green color and ‘---‘ line provides a visual clue that separates commands."

#--------------------------------------------------------------

function profile  

{

notepad $profile

} #function profile

 

#--------------------------------------------------------------

function moreHelp($what)

{

get-help -name $what -full | more

} #function moreHelp

 

#--------------------------------------------------------------

function getWMI($what)

{

get-wmiobject -list |

  where-object {$_.name -match $what -and

         $_.name -notmatch "CIM_"}

} #function getWMI

 

#--------------------------------------------------------------

function prompt()

{

Write-Host -ForegroundColor Green @"

PS $(get-date) $(Get-Location)

--------------------------------------------------------------

"@

 

$(if ($nestedpromptlevel -ge 1) { '>>' }) + '> '

} #function prompt()

 

# Initialise PowerShell =======================================

 

(get-host).UI.RawUI.windowTitle = "George's PowerShell"

Ian Farrclaims to be a PowerShell addict…

"There, I’ve said it! I also teach Windows PowerShell and help my customers with their scripts. I’ve got a lot in my $profile. Here’s some of the publishable stuff.

"I check to see whether the console has been started with ‘Run as Administrator.’ If it has, I give it a different color than that of my standard console, so I instantly know what context I’m in. I also kick off a background update of my Help files because this can only be achieved with admin permissions."

#Check for admin privs

If (([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(`

   [Security.Principal.WindowsBuiltInRole] "Administrator"))

{

 

  #Change the console display

  $UI = (Get-Host).UI.RawUI

  $UI.BackgroundColor = "blue"

  $UI.ForegroundColor = "white"

  $UI.WindowTitle = "Ian's Admin PowerShell"

  Set-Location c:\

  cls

 

  #Update Help files

  Start-Job -ScriptBlock {Update-Help -Force}

}

Else

{

  #Change the console display

  $UI = (Get-Host).UI.RawUI

  $Size = $UI.WindowSize

    $Size.Width = 120

    $Size.Height = 60

  $UI.WindowSize = $Size

  $UI.BackgroundColor = "black"

  $UI.ForegroundColor = "darkgreen"

  $UI.WindowTitle = "Ian's PowerShell"

  cls

}

"The following function automatically adds -AutoSize for when I use Format-Wide or Format-Table. (I should add more of these $PsDefaultParameterValues statements because they’re awesome!)"

#Set Autosize switch for both Format-Table and Format-Wide cmdlet

$PSDefaultParameterValues['Format-[wt]*:Autosize'] = $True
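
In the same spirit, more defaults can be registered. Two illustrative examples (my additions, not from Ian's profile):

$PSDefaultParameterValues['Export-Csv:NoTypeInformation'] = $True
$PSDefaultParameterValues['Receive-Job:Keep'] = $True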

Rene Mau likes to use Notepad and the console to write scripts.

"When I need information about a class or object, I’m reading the MSDN library. So the most useful profile function for me is the following. I add a ScriptMethod QuerymsdnClassInfo() to every object in the session. This method will open the MSDN website for extended information for the class."

#Microsoft.PowerShell_profile.ps1
$ProfilePath = Split-Path -Path $PROFILE -Parent
$TypesFile = Join-Path -Path $ProfilePath -ChildPath MyTypes.ps1xml

try
{
            Update-TypeData -Path $TypesFile -EA Stop
}
catch [System.Management.Automation.ItemNotFoundException]
{
            Write-Host "Update TypeData failed. Could not find $TypesFile" -ForegroundColor DarkRed
}
catch
{
            Write-Host "Update TypeData failed. Please check syntax of $TypesFile" -ForegroundColor DarkRed
}

#MyTypes.ps1xml

<Types>
 <Type>

 <Name>System.Object</Name>

 <Members>

  <ScriptMethod>

  <Name>queryMSDNClassInfo</Name>

  <Script>

   $type = $this.GetType().FullName

   switch -Wildcard ($type)

   {

   "System.Management.ManagementObject" { $urilist = "http://msdn.microsoft.com/en-us/library/windows/desktop/aa394554`(v=vs.85`).aspx" }

   "System.__ComObject" { $urilist = "http://www.microsoft.com/com/default.mspx" }

   default { $urilist = "http://msdn.microsoft.com/$PSUICulture/library/$type.aspx" }

   }

   foreach ($uri in $urilist)

   {

   If ($global:iemsdn.Type -ne "HTML Document")

   {

    $global:iemsdn = new-object -comobject InternetExplorer.Application -property @{navigate2 = $uri; visible = $true}

   }

   else

   {

    $global:iemsdn.navigate2($uri,0x1000)

   }

   }

  </Script>

  </ScriptMethod>

 </Members>

 </Type>

</Types>
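
Once the type data is loaded, any object in the session exposes the new method. An illustrative call:

(Get-Process -Id $PID).queryMSDNClassInfo()   # opens the MSDN page for System.Diagnostics.Process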

Stefan Stranger specializes in System Center Operations Manager and Windows PowerShell.

"I love to use Windows PowerShell to automate and inspect systems I am working on. During the many Windows PowerShell workshops that I deliver, I’ve added more in my profile. That’s why I’ve added information about what is in my profile when I start my different Windows PowerShell hosts. In my console profile, I have the following functions."

Write-Host "Loaded in profile: Measure-Script, Scripts drive" -ForegroundColor Yellow

Write-Host "Loaded in profile: PSReadLine" -ForegroundColor Yellow

Write-Host "PSReadline example: Get-Process –<Ctrl+Space> or Get-Process i <Ctrl+Space>" -ForegroundColor Yellow

# Load Module PSProfile Module

# More info http://www.powershellmagazine.com/2013/05/13/measuring-powershell-scripts/

import-module PSProfiler

 

#Go to default Script folder

Set-Location C:\Scripts\PS

 

#Create FileSystem Drive for Script folder

New-PSDrive -Name Scripts -PSProvider FileSystem -Root C:\Scripts\PS | Out-Null

 

# Load Module PSReadLine Module

# More info https://github.com/lzybkr/PSReadLine

import-module PSReadLine

 

"In my ISE profile, I have the following functions."

Write-Host "Loaded in profile: Measure-Script, Scripts drive" -ForegroundColor Yellow

# Load Module PSProfile Module

# More info http://www.powershellmagazine.com/2013/05/13/measuring-powershell-scripts/

import-module PSProfiler

 

#Go to default Script folder

Set-Location C:\Scripts\PS

 

#Set WindowsTitle

((Get-Host).UI.RawUI).WindowTitle = "PowerShell Rocks!"

 

#Create FileSystem Drive for Script folder

New-PSDrive -Name Scripts -PSProvider FileSystem -Root C:\Scripts\PS | Out-Null

#Script Browser Begin

Function Start-ScriptBrowser

{

  Add-Type -Path 'C:\Program Files (x86)\Microsoft Corporation\Microsoft Script Browser\System.Windows.Interactivity.dll'

  Add-Type -Path 'C:\Program Files (x86)\Microsoft Corporation\Microsoft Script Browser\ScriptBrowser.dll'

  Add-Type -Path 'C:\Program Files (x86)\Microsoft Corporation\Microsoft Script Browser\BestPractices.dll'

  #Check if ScriptBrowser is already added to AddOnTools

  if (!($psISE.CurrentPowerShellTab.VerticalAddOnTools.Name -eq "Script Browser"))

    {

      $scriptBrowser = $psISE.CurrentPowerShellTab.VerticalAddOnTools.Add('Script Browser', [ScriptExplorer.Views.MainView], $true)

    }

  #Check if ScriptAnalyzer is already added to AddOnTools

  if (!($psISE.CurrentPowerShellTab.VerticalAddOnTools.Name -eq "Script Analyzer"))

  {

    $scriptAnalyzer = $psISE.CurrentPowerShellTab.VerticalAddOnTools.Add('Script Analyzer', [BestPractices.Views.BestPracticesView], $true)

  }

  if (!($psISE.CurrentPowerShellTab.VerticalAddOnTools.Name -eq "Script Browser"))

  {

    $psISE.CurrentPowerShellTab.VisibleVerticalAddOnTools.SelectedAddOnTool = $scriptBrowser

  }

 

}

#Script Browser End

"That’s it. I write a blog on TechNet: Stefan Stranger's Weblog - Manage your IT Infrastructure. You can also find me on Twitter."

Jason Walker offers the following ideas:

"I like to use my Windows PowerShell profile to start off my day with a laugh. I do this with the Sapi.SPVoice COM object. When Windows PowerShell is launched, my computer is inspired by LMFAO’s “Party Rock Anthem,” and it informs me that it’s time to shuffle.

"In the following script, first I create the Sapi.SPVoice COM object. Then I switch the voice from David to Zira, and I use Get-Date to get the day of the week to use in the greeting. Lastly, I switch the voice back to David and state the second half of the greeting."

#Create Sapi.Spvoice COM object

$Sapi = New-Object -ComObject sapi.spvoice

#Switch Voice from David to Zira

$Sapi.Voice = $Sapi.GetVoices().Item(2)

#State what "day of the week" shuffle it is

$Sapi.Speak("Time for the $((Get-Date).DayofWeek) shuffle")

#Switch the voice back to David

$Sapi.Voice = $Sapi.GetVoices().Item(0)

#Slow the rate of speech down

$Sapi.Rate = -3

#State last half of greeting

$Sapi.Speak("Everyday I'm shuff-a-linn")

"Yes I know…it's kind of cheesy, but it makes me laugh."

Martin Schvartzman provides the following script to configure a transcript and to import a command's history from previous sessions:

#region Transcript and History management

function Bye() {

    Stop-Transcript

    Write-Host "Transcript stopped. Exporting History... " -ForegroundColor Yellow -NoNewline

    Get-History -Count $global:MaximumHistoryCount | Export-Clixml -Path $global:HistoryXmlFilePath

    Write-Host "Finished!" -ForegroundColor Green

    Start-Sleep -Milliseconds 500

    exit

}

 

function Hi() {

    Start-Transcript -Path $global:Transcript -ErrorAction SilentlyContinue

    if (Test-Path $global:HistoryXmlFilePath) {

       Import-Clixml $global:HistoryXmlFilePath | Add-History }

}

 

$global:Transcript = 'C:\Temp\Transcript_{0:yyyyMMddHHmmss}.log' -f (Get-Date)

$global:MaximumHistoryCount = 1500

$global:HistoryXmlFilePath = Join-Path -Path (Split-Path -Path $PROFILE -Parent) -ChildPath PSHistory.xml

$ExitAction = { Bye }

[void](Register-EngineEvent -SourceIdentifier PowerShell.Exiting -Action $ExitAction)

Hi

#endregion

Matt Reynolds doesn’t want his scripts to have accidental dependencies.

"I try to avoid loading things in the profile because I use too many different machines. However, I have a module full of utility functions, which I load and bundle as needed. The following function is a recent addition. It provides memory efficient sums, counts, averages, and so on. It is aggregated by property values. Think of it as a pivot table on a pipeline."

<#

.Synopsis

  Measures averages, sums, counts, etc. over a series of objects with grouping by property values, like a pivot table

.DESCRIPTION

  Measures averages, sums, counts, etc. over a series of objects with grouping by property values, like a pivot table

  Avoids holding all the objects in memory for scalability

.EXAMPLE

  ## Get counts, sums, avgs, etc. for file length by extension

  Get-ChildItem c:\ -Recurse | Measure-MLib__Aggregate -GroupProperty Extension -MeasureProperty Length -OutPipeHt

.EXAMPLE

  ## Get counts, sums, avgs, etc. for multiple numerical columns in a log file while grouping by multiple text columns

  Get-Content -Path somelogfile.txt | ConvertFrom-CSV | Measure-MLib__Aggregate -GroupProperty SourceIp,RequestType -MeasureProperty Size,ResponseTime -OutPipeHt

.INPUTS

  Any object(s)

.OUTPUTS

  Varies depending on -Passthru and other parameters

#>

function Measure-MLib__Aggregate{

  [CmdletBinding()]

  param(

                ## Input any stream of objects (e.g., psobjects, hashtables, etc.) with

                ## properties that you want to count/sum/average

    [Parameter(ValueFromPipeline=$true)]

    [object[]]$InputObject,

    ## Causes the input object(s) to be output instead of the measurement object

                ## Use together with $SideOutputHtVariableName or $SideOutputArrayVariableName

                ## to have the measurements sent to a variable while the input objects continue down output pipeline

                [switch]$Passthru,

                ## See Passthru

    [string]$SideOutputHtVariableName = $null,

                ## See Passthru

                [string]$SideOutputArrayVariableName = $null,

                ## One or more property names by which to aggregate

    [string[]]$GroupProperty,

                ## One or more property names by which to measure after aggregation

    [string[]]$MeasureProperty,

                ## Causes the measurement results to be output as a flattened table / stream

                [switch]$OutPipeFlat,

                ## Causes the measurement results to be output as a hierarchical hashtable

                [switch]$OutPipeHt

  )

  begin{

    $groups = @{}

 

    function New-MLib__MeasurementsHashTable{

      param( [string[]]$propertyNames )

      $outer = @{}

      foreach($propertyName in $propertyNames){

        $outer[$propertyName] = @{
          PropertyName = $propertyName
          Count = 0
          Sum = 0
          Avg = 0
          # Start Max/Min at the numeric extremes so the first value always wins;
          # initializing both to 0 would report a wrong Min for all-positive data.
          Max = [double]::MinValue
          Min = [double]::MaxValue
        }

      }

      $outer

    }

 

                function ConvertFrom-MLib__AggregateMeasure{

                        param(

                                [hashtable]$AggregateMeasure

                        )

 

                        foreach( $aggregateKey in $AggregateMeasure.Keys ){

                                foreach( $propertyMeasure in $AggregateMeasure[$aggregateKey].Values ){

                                        New-Object -TypeName PSCustomObject -Property @{

                                                GroupKey = $aggregateKey

                                                PropertyName = $propertyMeasure["PropertyName"]

                                                Min = $propertyMeasure["Min"]

                                                Max = $propertyMeasure["Max"]

                                                Avg = $propertyMeasure["Avg"]

                                                Count = $propertyMeasure["Count"]

                                                Sum = $propertyMeasure["Sum"]

 

                                        }

                                }

 

                        }

 

                }

  }

 

  process{ foreach( $item in $InputObject ){

    $groupingKey = (@(foreach($propertyName in $GroupProperty){ $item.$propertyName | out-string }) -join ";").Trim()

    if( -not $groups.ContainsKey( $groupingKey )){

      $groups[ $groupingKey ] = New-MLib__MeasurementsHashTable -propertyNames $MeasureProperty

    }

    foreach($measurePropertyName in $MeasureProperty){

      $currentMeasureObject = $groups[$groupingKey][$measurePropertyName]

      $currentMeasureObject.Count++

      $currentMeasureObject.Sum = $currentMeasureObject.Sum + $item.$measurePropertyName

      $currentMeasureObject.Avg = $currentMeasureObject.Sum / $currentMeasureObject.Count

      if( $currentMeasureObject.Min -gt $item.$measurePropertyName ){ $currentMeasureObject.Min = $item.$MeasurePropertyName }

      if( $currentMeasureObject.Max -lt $item.$measurePropertyName ){ $currentMeasureObject.Max = $item.$MeasurePropertyName }

 

    }

                if( $Passthru ){

                        $item

                }

  }}

 

  end{

                ##

                ## TODO FUTURE:

                ## Figure out a way to avoid use of global scope for side output variables.

                ## Every other approach I tried seemed to fail due to crossing of module boundaries

 

                if(-not ([String]::IsNullOrEmpty($SideOutputHtVariableName) ) ){

                        Set-Variable -Name $SideOutputHtVariableName -Value $groups -Scope Global

                }

                if(-not ([String]::IsNullOrEmpty($SideOutputArrayVariableName) ) ){

                        Set-Variable -Name $SideOutputArrayVariableName -Scope Global -Value @(ConvertFrom-MLib__AggregateMeasure $groups)

                }

 

                if( $OutPipeHt ){

                        $groups

                }

 

                if( $OutPipeFlat ){

                        ConvertFrom-MLib__AggregateMeasure $groups

                }

  }

}

That is it for today. Thank you to all the Microsoft PFEs who took time out to contribute to this project. It was fun and educational. Join me tomorrow as I hit up the Windows PowerShell team for what is in their profiles. By the way, what’s in your Windows PowerShell profile? Add a comment in the following text box and let us know.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Display Date Formats


Summary: Use Windows PowerShell to easily change the way the date displays on your computer.

Hey, Scripting Guy! Question How can I use Windows PowerShell to display the date on my computer as a two-digit month, two-digit day,
          and two-digit year?

Hey, Scripting Guy! Answer Use the ToString() method on the object returned by the Get-Date cmdlet. Here are two examples:

PS C:\> (get-date).tostring("MM/dd/yy")

05/19/14

PS C:\> (get-date).tostring("MMddyy")

051914


What’s in Your PowerShell Profile? PowerShell Team Favorites


Summary: Microsoft Scripting Guy Ed Wilson talks to members of the Windows PowerShell team about what is in their Windows PowerShell profile.

Microsoft Scripting Guy, Ed Wilson, is here. I'm wrapping up the week with profile excerpts from the Windows PowerShell team. There is some really cool stuff here.

Hemant Mahawar indicated that he does not use a profile.

This is quite common for people who work in Windows PowerShell core. This is also a common theme I have run across from people who teach or otherwise work with customers on a regular basis. The danger, of course, is inadvertently picking up something that is in your profile, and expecting it to be in a default Windows PowerShell installation.

Lee Holmes shared a couple of really cool techniques.

"Here are two killer techniques I use in my profile."

Use PSDefaultParameterValues on Out-Default

"This is only supported on Windows PowerShell 4.0 and later in Windows 8.1), but it lets you capture the output of the last command into a variable (I use $0)."

123 [C:\windows\system32]

>> $PSDefaultParameterValues["Out-Default:OutVariable"] = "0"

 

124 [C:\windows\system32]

>> Get-Command Watch-Clipboard

 

CommandType     Name                                               Version    Source

-----------     ----                                               -------    ------

ExternalScript  Watch-Clipboard.ps1                                           d:\documents\tools\Watch-Clipboard.ps1

 

125 [C:\windows\system32]

>> ise $0.Source

Auto-register scheduled jobs

"My Windows PowerShell jobs are constantly being blasted from my machines. So I put their registration in my profile. When Windows PowerShell launches, it lets me know if the job is missing. If it is, it re-registers it."

if(-not (Get-ScheduledJob -Name PowerFix -EA Ignore))

{

    # New-JobTrigger (not New-ScheduledTaskTrigger) produces the trigger type that Register-ScheduledJob expects
    $trigger = New-JobTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 15) -RepetitionDuration ([TimeSpan]::MaxValue)

    $null = Register-ScheduledJob -Name PowerFix -Trigger $trigger -ScriptBlock { powercfg -x -standby-timeout-ac 0 } -MaxResultCount 1

}

Steve Lee is a principal test lead in Windows PowerShell and WMI, and he provided links to several resources.

Abhik Chatterjee provided a useful function to use when you are dealing with nested directories:

"I don’t have much in my Windows PowerShell profile, but I find this function really useful because when I am in a very nested directory, it puts the prompt right below it so I do not have to scroll a lot."

function prompt

{

            Write-Host("PS: " + "$pwd" + ">")

}

Jason Shirk offered the following suggestion for changing directories.

"Here is a 'command not found' handler that I use to change directories without typing cd."

 $ExecutionContext.InvokeCommand.CommandNotFoundAction =

{

    param([string]$commandName,

          [System.Management.Automation.CommandLookupEventArgs]$eventArgs)

 

    # Remove the 'get-' prefix; what remains after it may be a
    # path-like token that Test-Path can evaluate.

    if ($commandName.StartsWith('get-'))

    {

        $commandName = $commandName.Substring(4)

    }

 

    # Replace sequences of 3 or more dots with the correct path syntax, e.g.

    #     ... -> ..\..

    #     .... -> ..\..\..

    $normalizedPath =

        ([regex]"\.{3,}").Replace($commandName, { ('..\' * ($args[0].Length - 2) + '..') })

 

    # If the command looks like a location, just switch to that directory

    if (Test-Path -Path $normalizedPath)

    {

        $eventArgs.CommandScriptBlock = { Set-Location -LiteralPath $normalizedPath }.GetNewClosure()

        return

    }

 

    if ($commandName -ne $normalizedPath)

    {

        # Maybe the command is an external script/native exe specified with ....

        $eventArgs.Command = Get-Command -Name $normalizedPath -ea Ignore

    }

}
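
With the handler loaded, a hypothetical session looks like this (each "command" is really a directory change):

PS C:\Windows\System32\drivers> ...        # same as Set-Location ..\..
PS C:\Windows> System32                    # same as Set-Location System32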

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: See if You Have a PowerShell Profile


Summary: Easily determine if you have a Windows PowerShell profile.

Hey, Scripting Guy! Question I am not sure if I have a Windows PowerShell profile. How can I easily find out?

Hey, Scripting Guy! Answer Use Test-Path and the $profile automatic variable:

Test-Path $PROFILE
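
If the command returns False, one way to create a profile for the current user and current host is:

if (-not (Test-Path $PROFILE)) { New-Item -Path $PROFILE -ItemType File -Force }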

Weekend Scripter: Use PowerShell to Find and Disable Webcams


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to find and disable webcams and cameras.

Microsoft Scripting Guy, Ed Wilson, is here. It is really interesting (at least to me) the way certain questions seem to swirl around like triggerfish taking up residence near a coral reef. They keep coming back, although they appear to go away for a certain amount of time. Today’s question is no exception. When webcams first began to make their appearance on laptops, there was a huge rash of activity amongst network admins to remove drivers, disable the devices, or otherwise make them unusable.

This makes sense because many (if not most) corporate users can get along fine without enabling their webcams. In fact, I rarely turn on the webcam on my laptop, even though I routinely spend more than a quarter of my time in meetings via Lync.

The reason? One is bandwidth. I have a hard enough time making the connection stable enough for a multi-hour meeting without adding the additional resource of a webcam. The second reason is that after a given amount of time, one tends to forget that the webcam is broadcasting (assuming that condition one is fixed), and soon everyone in the meeting is watching me take off my glasses, rub my eyes, scratch my head, or yawn for a real long time. Whereas there is a microphone mute button, muting video is more intrusive if I started the meeting with the video on.

Lastly, assuming that conditions 1 and 2 are satisfied, there is an inherent latency that is downright disturbing. People hear me speak, and then a few seconds later, they see my lips move. It is like a bad monster movie from the sixties. Here is an example of the bad monster movie experience. Dude, everyone was scared away.

Image of screen

So how do I disable the webcam? It is pretty easy…

Just the steps

There are only two steps I need to take:

  1. Find the webcam (or camera) Plug and Play device ID. (I will use WMI for this.)
  2. Use Devcon to disable the device. (I happen to have Devcon on my system, but you may need to download it because it is not part of the standard Windows installation.)

Find the camera

I decide that I want to cheat a little bit before I get too carried away. I open Computer Management and click Device Manager. From the list of components, I choose Imaging devices and select Integrated Camera. I right-click the icon and click Properties. Next I go to the Details tab. There are lots of properties displayed. But the one I want is Matching device Id. The menu is shown here:

Image of menu

Now, I want to find the same information from within WMI. I prefer to use the Get-CimInstance cmdlet (available in Windows PowerShell 3.0 and later) because it is faster than Get-WmiObject and because in 95% of the cases, the output returned is easier to read. Here is the command I use to find the camera:

Get-CimInstance Win32_PnPEntity | where caption -match 'integrated camera'

When I have found my camera, the next step is to return only the Plug and Play device ID. To do this, I once again take advantage of the Windows PowerShell 3.0 (or later) features. The output returns the ID of the camera and the ID of the camera audio. In reality, I want to disable both, so I am in good shape here.

Note  Your values and devices may be different than those I have on my laptop. This is why you should first look at sample data in Device Manager before getting too carried away. If you have multiple device makers, you need to use something like the Switch statement to pick up the specific devices you have on your network.

PS C:\> (Get-CimInstance Win32_PnPEntity | where caption -match 'integrated camera').pnpDeviceID

USB\VID_04F2&PID_B2EA&MI_00\7&35A58CE9&0&0000

Now that I know I can get the information I need from WMI, I can go to the next step…

Use Devcon to disable the camera

Devcon is the command-line device-management utility. It was originally developed to help hardware makers who were working with devices, and as such, it was included in the Windows DDK. Because it is also a useful utility for network administrators, it is available via the Microsoft Download Center. It has not been updated in a while, and it is not supported. However, it is just the tool to use when working with hardware devices that do not have a nice cmdlet or WMI class that exposes management interfaces. A good overview of Devcon is presented in article 311272 in the Microsoft Knowledge Base.

Note  There is a 32-bit version and a 64-bit version, and you must use the version that is appropriate to your operating system. Luckily, the download package includes both versions, and when you unzip the package, it creates ia64 and i386 folders.

If you are using a later version of the operating system, you need to download the Windows Driver Kit (WDK) for the appropriate version of the operating system. For example, there is a WDK 8.1 upgrade kit that contains a version of DevCon for Windows 8.1. To get information about the WDK and download it, see WDK and WinDbg downloads.

Unfortunately, a full installation of the WDK is required, and you can only select the destination directory. So you might want to download and install Devcon on a virtual machine, and then after you copy the tools you want, revert the virtual machine. I really wish there was a tools-only installation, but there is not.

Also, the page states that Visual Studio is required, but you can skip that step, and during the installation, you can skip the warning. After all, if you are only after the tools, you do not need Visual Studio. MSDN has a good overview page about DevCon: Windows Device Console (Devcon.exe). It also provides examples about how to use the tool: Device Console (DevCon.exe) Examples.

When using Devcon to work with a device ID, the ID string must be prefixed with an at sign (@). Therefore, I decided to use a format string to do this. Here is the script I have at this point:

$id = (Get-CimInstance Win32_PnPEntity |
    where caption -match 'integrated camera').pnpDeviceID

$ppid = "{0}{1}" -f '@',$id
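
On my laptop, the resulting string looks like this (your device ID will differ):

@USB\VID_04F2&PID_B2EA&MI_00\7&35A58CE9&0&0000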

I change my location to the folder that holds my copy of Devcon. I then get the status of the webcam, disable the camera, and check the status again. If I used Devcon -r, it would reboot the computer if a reboot were required. I do not want to do that, so I leave that switch off. Here are the three Devcon commands:

Set-Location c:\fso

Devcon status $ppid

Devcon disable $ppid

Devcon status $ppid

Here is the script and the output from the script:

Image of command output

In my Computer Management console, I see the camera is disabled:

Image of menu

To enable the camera, I would use:

devcon enable $ppid
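
To keep from repeating these steps, the discovery and the Devcon call can be wrapped together. Here is a minimal sketch, with a couple of assumptions clearly stated: the function name Set-WebcamState is my own invention, the path to Devcon.exe matches my c:\fso folder, and the loop handles the case in which the match returns more than one device ID (for example, the camera and the camera audio):

function Set-WebcamState
{
    param(
        [ValidateSet('status','disable','enable')]
        [string]$Action = 'status',

        # Assumed location of Devcon.exe; adjust for your system
        [string]$DevconPath = 'C:\fso\devcon.exe'
    )
    $ids = (Get-CimInstance Win32_PnPEntity |
        where caption -match 'integrated camera').PNPDeviceID
    foreach ($id in $ids)
    {
        # Devcon requires the instance ID to be prefixed with an at sign
        & $DevconPath $Action ("{0}{1}" -f '@',$id)
    }
}

To disable the camera, I would then call Set-WebcamState -Action disable.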

Well, that is about all there is to using Devcon to disable or enable the webcam.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Discover Laptop Webcam


Summary: Use Windows PowerShell to discover a webcam attached to your laptop.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find a webcam or camera that is attached to my laptop?

Hey, Scripting Guy! Answer Use the Get-CimInstance or the Get-WmiObject cmdlet, examine the Win32_PnPEntity WMI class, and look for something that matches camera in the caption.

By using Get-CimInstance in Windows PowerShell 3.0 or later:

Get-CimInstance Win32_PnPEntity | where caption -match 'camera'

By using Get-WmiObject in Windows PowerShell 2.0:

Get-WmiObject Win32_PnPEntity | where {$_.caption -match 'camera'}

Note  Not all webcams populate exactly the same way, so it may take a bit of searching to find the right string.
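
Because the caption strings vary, a sketch like this lists the caption and the device ID side by side, which makes it easier to spot the right entry:

Get-CimInstance Win32_PnPEntity |
    where caption -match 'camera' |
    Select-Object Caption, PNPDeviceID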

PowerShell Best Practices: Examine the Issues


Summary: Microsoft Scripting Guy, Ed Wilson, talks about examining the issues surrounding Windows PowerShell best practices.

Microsoft Scripting Guy, Ed Wilson, is here. One of the best things about TechEd, no matter where it is held, is the chance to meet up with people who we do not normally get to see. For example, this year in Houston, Texas at TechEd 2014, we had Jaap Brasser in the Scripting Guys booth with us. We first met him at the Dutch PowerShell User Group, and he helped in our booth at TechEd in Madrid last year. But this year when I was speaking at the Dutch PowerShell User Group, he was in China. The cool thing is that he came to Houston for TechEd to help us out.

Another person we do not get to see very often is Honorary Scripting Guy and Windows PowerShell MVP, Sean Kearney. Sean and I have been talking about doing a video since before the Windows PowerShell Summit in Seattle. Well, it finally came about, and we recorded it at TechEd 2014 in Houston. Check it out: Scripting Cmdlet Style video. One word of warning: the video is addictive.

Dudes and dudettes!

Why all the fuss about Windows PowerShell best practices? After all, Windows PowerShell is merely a scripting language. It is really interesting: I wrote three books about VBScript, and I taught literally hundreds of classes worldwide, yet only rarely was there any question about best practices. It might be that most people who were writing VBScript had a development background anyway. All developers study patterns and practices, and most shops have style guides that cover what most people now call best practices.

But interest there is. For example, at the last three TechEds, I (along with Jeffrey Hicks, Don Jones, and Hal Rottenberg) provided a series of talks about Windows PowerShell best practices. In each case, the talks were overflowing, and we had to turn people away. In New Orleans and Orlando, additional sessions were added, and those sessions also sold out. I even wrote a bestselling book on the topic for Windows PowerShell 4.0: Windows PowerShell Best Practices.

What is it about Windows PowerShell best practices? For one thing, Windows PowerShell is easy to use and very powerful. As Voltaire said (and as Uncle Ben repeats in Spider-Man), “With great power comes great responsibility.” So IT pros who use Windows PowerShell want to know the responsible way to use this great power.

The peanut butter and chocolate issue

Windows PowerShell has a dual personality, just like the peanut butter and chocolate issue from the 80s. This dual personality is something that complicates all discussions of Windows PowerShell best practices.

For example, a common Windows PowerShell best practice is to avoid using aliases. Although there is general agreement on this point, not all Windows PowerShell MVPs agree that aliases have no place in scripts. The issue is not quite so simple. For example, I do an awful lot of configuration directly from the Windows PowerShell console, and I do not write as many scripts now as I once did. In fact, with all the cool modules in Windows 8.1, Windows 8, Windows Server 2012 R2, and Windows Server 2012, I do not have to write as much. For example, I can quickly start all virtual machines on my system, local or remote, with a one-liner:

Get-VM | Start-VM
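
For virtual machines on a remote Hyper-V host, a sketch under the assumption that the Hyper-V module is available (HV1 is a placeholder host name) stays just as short:

Get-VM -ComputerName HV1 | Start-VM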

When I can do something this powerful without scripting, there is no reason to place the command in a script and try to parameterize it, add Help, add comments, and blah blah blah. Dude, it is a one-line command, and I think it is a best practice not to write such a script. Besides, with over 8,000 scripts on my laptop, I would have a hard time finding the script in the first place. Difficulty in finding some scripts is why I started typing commands at the console: I can re-create a command quicker than I can find the script.

I am, obviously, not alone in this. Last week in my series about profiles, several people shared functions that relate to managing the transcript or history in their Windows PowerShell console. This tells me that some of the most advanced Windows PowerShell people on earth are spending a decent amount of time working interactively at the Windows PowerShell console.

But what about scripts?

For the sake of this discussion, I break scripts into two categories (although there could be more than two categories). I define them as one-off scripts and tool scripts.

One-off scripts

One-off scripts are written to be used one or two times in a limited set of circumstances. Typically, they will have hard-coded values instead of parameterized input. Quite often, they are straight-line (no functions) scripts.

These types of scripts are little more than a collection of commands one might enter into the Windows PowerShell console. As such, they may use aliases for cmdlets and positional parameters, and comment-based Help is neither expected nor required. Some Help may be present in the form of single-line comments. These comments are generally limited to instructing the user about which values in the script need to be changed to permit it to run in a slightly different environment.

I often write these types of scripts. I am doing so when I want to illustrate a Windows PowerShell technique and I do not want to overly complicate my examples. Or I might write something for myself to automate something that involves more than a single line in the Windows PowerShell console.

In fact, when I am writing for myself, I treat these types of scripts as an extension of the Windows PowerShell console.
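
To make the category concrete, here is a minimal sketch of a one-off script; the computer name and output path are deliberately hard-coded, and aliases are fair game:

# Change the computer name and the output path to suit your environment
$computer = 'Server1'
gsv -ComputerName $computer |
    ? status -eq 'Running' |
    select name, displayname |
    Out-File C:\fso\services.txt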

Tool scripts

Tool (also known as utility) scripts are written in such a fashion as to promote reuse. Typically, all points that could be configured are parameterized in such a way as to facilitate running the script from the Windows PowerShell console or from a scheduled task. Often these scripts take the form of advanced functions, and they are stored in Windows PowerShell modules. The best of this class act and behave just like Windows PowerShell cmdlets—in fact, they are script cmdlets.

The advantage of writing in this fashion is the ability to configure, extend, or modify the way Windows PowerShell behaves in your environment. The scripts are written to be used by others. In this way, they become like additional cmdlets, but they are customized for the environment. They behave exactly the way they need to in order to facilitate the work to be done. For these, we want to use the power of Windows PowerShell to ensure reliability and readability, and to promote discoverability.
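
As an illustration of the shape such a tool script often takes (the function name and parameter here are placeholders, not a prescribed standard), here is a skeleton with comment-based Help, CmdletBinding, and parameterized input:

function Get-RunningService
{
    <#
    .SYNOPSIS
        Lists the running services on one or more computers.
    .EXAMPLE
        Get-RunningService -ComputerName Server1, Server2
    #>
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline=$true)]
        [string[]]$ComputerName = $env:COMPUTERNAME
    )
    process
    {
        foreach ($computer in $ComputerName)
        {
            Get-Service -ComputerName $computer |
                Where-Object { $_.Status -eq 'Running' }
        }
    }
}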

That is an overview of the issues surrounding Windows PowerShell best practices. Best Practices Week will continue tomorrow when I will talk about Windows PowerShell best practices in the Windows PowerShell console.

This week, I will discuss various Windows PowerShell best practice scenarios. Feel free to chime in to the discussion via the comments section, or via Twitter, Facebook, or scripter@microsoft.com. As always, you can post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
