
PowerTip: Find Disk Partition Information with PowerShell


Summary: Learn how to use a Windows PowerShell function in Windows 8.1 to get partition information.

Hey, Scripting Guy! Question How can I use Windows PowerShell on my computer running Windows 8.1 to find disk partition information?

Hey, Scripting Guy! Answer Use the Get-Partition function.
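
For example, the following commands list every partition, or just the partitions on a single disk (disk number 0 here is only an illustration):

# List every partition on the local computer
Get-Partition

# Or focus on one disk and pick a few useful properties
Get-Partition -DiskNumber 0 |
    Format-Table PartitionNumber, DriveLetter, Size, Type -AutoSize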


PowerShell Best Practices for the Console


Summary: Microsoft Scripting Guy, Ed Wilson, talks about Windows PowerShell best practices for the console.

Microsoft Scripting Guy, Ed Wilson, is here. This morning, the Scripting Wife and I decided to head to a new breakfast place that had great reviews on Yelp. We grabbed our Surface 2s and headed into town. Teresa had her new Surface 2 RT with 4G, and I took my new Surface 2 Pro with the power keyboard.

One of the things that got my attention about this restaurant was the statement that they made their scones in house from fresh ingredients instead of from mixes. They also claimed to have 30 different types of tea, so I was in.

Well, they did have scones, but most were covered with ½ inch thick sugar icing. I did find a multiberry one that was not. Most of the teas were fruit or herb, which I am sure you know is not even a real tea. But I did settle on a nice cup of English Breakfast tea. They had a good Internet connection, so our breakfast was worthwhile.

Speaking of worthwhile…

I spend most of my day with the Windows PowerShell console. In fact, I generally have two Windows PowerShell consoles open at the same time. I have one in which I am working, and a second one where I am looking up Help content. If I need elevated permissions, I open a third Windows PowerShell console.

Remember the purpose

The key thing to remember is the purpose of the operation. For example, when I am working interactively at the Windows PowerShell console, I am focusing on commands and quickly getting work done. Here are some of the things that I do (a short profile sketch illustrating several of these items follows the list):

  1. I turn on the Windows PowerShell transcript: Start-Transcript.
  2. I do not use Set-StrictMode.
  3. I use aliases as much as I possibly can.
  4. I use Tab expansion where I cannot use an alias.
  5. I make extensive use of the history: Get-Command –noun history.
  6. I make extensive use of my Windows PowerShell profile.
  7. If a command goes to more than two lines, I move it to the Windows PowerShell ISE, and format it so it is easier to read.
  8. I use positional parameters.
  9. I tend to use rather non-complicated syntax.
  10. I do a lot of grouping, and I select properties from returned collections.
  11. I explore Help for cmdlets and examples in my other Windows PowerShell console. Often I experiment with modifying Help examples.
  12. I use type accelerators when appropriate, but I prefer to use standard Windows PowerShell cmdlets.
  13. I am not shy about using standard command-line utilities inside the Windows PowerShell console if no easy Windows PowerShell equivalent exists.
  14. I like to use Out-GridView to help me visualize data and to explore data relationships.
  15. I prefer to store returned objects in variables, and then sort, filter, and group the data. In this way, I only have to obtain the data once.
  16. I like to set $PSDefaultParameterValues for cmdlets that I always use in a standard way.
  17. I like to store my credentials in a variable. I use Get-Credential early in my Windows PowerShell session, and then I can reuse the credentials. I typically use a variable named $cred so it is easy for me to remember.
  18. I like to create a list of the remote computers I am going to use early in my Windows PowerShell session. Typically, I use the Get-ADComputer cmdlet and filter out what I do not need.
  19. I like to create remote Windows PowerShell sessions to my target computers early in my Windows PowerShell session. I store the sessions in a variable I typically call $session. Most of the time this is a CimSession, but occasionally it is a normal remote Windows PowerShell session.
  20. In my Windows PowerShell profile, I create aliases for the cmdlets I use on a regular basis. My list is now around 20 aliases.
  21. I create several Windows PowerShell drives (PSDrives) in my profile. I like to have a PSDrive for my module location and one for my script library.
  22. I parse my environment variables so that it is easy for me to access resources such as my document library, music library, and photo library. I store the paths in appropriate variables, so I can use $doc instead of C:\Users\ed\Documents\.
  23. I use PSReadline.
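
To make a few of these concrete, here is a hedged sketch of the kind of lines that might live in a profile to support items 16 through 22. The names ($cred, $computers, $session, the PSDrive names, and the AD filter) are only examples, not the exact contents of my profile:

# Item 16: default parameter values for cmdlets I always use the same way
$PSDefaultParameterValues = @{ 'Get-WinEvent:MaxEvents' = 50 }

# Item 17: capture credentials once and reuse them all session long
$cred = Get-Credential

# Item 18: build the list of remote computers early
$computers = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' |
    Select-Object -ExpandProperty Name

# Item 19: create the CIM sessions up front
$session = New-CimSession -ComputerName $computers -Credential $cred

# Item 21: PSDrives for the module location and the script library
New-PSDrive -Name Mod -PSProvider FileSystem -Root "$home\Documents\WindowsPowerShell\Modules"
New-PSDrive -Name Scripts -PSProvider FileSystem -Root "$home\Documents\Scripts"

# Item 22: short variables for common folders
$doc = "$home\Documents"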

That is a quick overview of best practices for working with the Windows PowerShell console. Best Practices Week will continue tomorrow when I will talk about best practices for Windows PowerShell scripts. What are some of the things that you do to make life easier when you are working in the Windows PowerShell console?

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Ensure Compliance with PowerShell Best Practices


Summary: Learn how to force Windows PowerShell to comply with basic best practices.

Hey, Scripting Guy! Question Is there an easy way for me to help Windows PowerShell comply with basic best practices?

Hey, Scripting Guy! Answer Use the Set-StrictMode –Version Latest command to ensure that you cannot reference things such as uninitialized variables and non-existent properties of an object:

 Set-StrictMode -Version latest

PowerShell Best Practices: Simple Scripting


Summary: Microsoft Scripting Guy, Ed Wilson, talks about Windows PowerShell best practices for simple scripts.

Microsoft Scripting Guy, Ed Wilson, is here. "Heyyyyy! Script it, baby!" The words of Scripting Cmdlet Style continue to echo through the house. If you have not seen the latest Windows PowerShell community video by Sean Kearney, you should definitely check it out. It is really well done, and so far, it has received four thumbs up on YouTube.

When I am working with Windows PowerShell and the command begins to wrap to the second line, I start thinking about migrating to the Windows PowerShell ISE. If the command goes beyond two lines, I definitely move to the ISE. The reason is that command-line editing is rudimentary in the Windows PowerShell console, and the commands begin to be difficult to read. Also, as the length of a command increases, my chance of executing the command correctly the first time decreases. If I pollute my Windows PowerShell console history with a bunch of commands that do not work properly, the whole thing becomes an exercise in futility.

Therefore, I view a simple script as only a little different from the Windows PowerShell console itself. It serves the same essential purpose: to allow me to quickly and efficiently execute simple Windows PowerShell commands. As a result, my best practices for simple scripting are much the same as they are for the Windows PowerShell console (a short example follows the list):

  1. A simple script is 15 lines or less.
  2. A simple script is straight line execution (no functions).
  3. I do not feel compelled to use full cmdlet names; aliases are permitted.
  4. I do not feel I need to avoid positional parameters.
  5. I do not set strict mode (Set-StrictMode).
  6. I do not generally initialize all variables prior to use.
  7. I do not generally add comment-based Help.
  8. I add single line comments to clarify usage.
  9. I might add a single line comment that illustrates a sample command.
  10. I generally save the simple script with a reasonable name in my script folder.
  11. I do not bother with version, date, or other "header" types of information.
  12. If I had to look something up on MSDN, I will generally paste the URL in a comment.
  13. Often I will use the ISE to write a quick one-off script, and not save the script. In these cases, it is exactly like using the Windows PowerShell console, except that I have a bit better syntax highlighting.
  14. I use the Script Browser to find sample scripts, but I do not use the Script Analyzer.
  15. I avoid end-of-line comments. Instead, I place comments on the line above the command to be commented, and I do so at the beginning of each line.
  16. I line up related objects and pieces of syntax; therefore, I pay attention to good formatting.
  17. I generally break each line of script at the pipeline character. Most of the time, I do not have more than one pipeline character on a single line.
  18. I leave pipeline characters at the end of the line, to the right of the command.
  19. I will generally pipe to a Foreach-Object, instead of storing in a variable and using the ForEach statement to walk the collection.
  20. I always prefer a Windows PowerShell cmdlet to using a COM object or creating an instance of a .NET Framework class and calling its methods and properties.
  21. I store credentials to a variable, and I use it during my Windows PowerShell sessions in the ISE.
  22. I often close the Windows PowerShell ISE to ensure I am not inadvertently taking advantage of an object created during a different scripting session. This is the best way to clear variables; remove objects; and release PS Drives, COM objects, and other scripting paraphernalia.
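
As an illustration, here is a hedged, made-up example of the kind of quick script these rules describe: it uses aliases and positional parameters, keeps the comment above the command, and breaks at the pipeline character:

# Quick look at the ten processes using the most memory
# Sample usage: .\busyproc.ps1
gps |
  sort ws -Descending |
  select -First 10 Name, Id, WS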

That is an introduction to Windows PowerShell best practices related to quick scripts. Best Practices Week will continue tomorrow when I will talk about functions.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Find a List of PowerShell Type Accelerators


Summary: Use Windows PowerShell to list type accelerators.

Hey, Scripting Guy! Question How can I find a list of type accelerators available in Windows PowerShell?

Hey, Scripting Guy! Answer Use the static Get property of the TypeAccelerators class:

[PSObject].Assembly.GetType("System.Management.Automation.TypeAccelerators")::Get
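
If you want the list sorted and easier to read, a hedged variation on the same one-liner pipes the dictionary through its enumerator:

([PSObject].Assembly.GetType("System.Management.Automation.TypeAccelerators")::Get).GetEnumerator() |
    Sort-Object Key | Format-Table Key, Value -AutoSize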

PowerShell Best Practices: Simple Functions


Summary: Microsoft Scripting Guy, Ed Wilson, talks about Windows PowerShell best practices related to simple functions.

Microsoft Scripting Guy, Ed Wilson, is here. The first time I saw this tea, I thought, "Dude, what an extravagance." I mean, a fine cup of Darjeeling tea is a true treat. It is mellow, light, and lightly aromatic. So why cover up that fine tea with bergamot and make a Darjeeling Earl Grey tea? Well, for one thing, it is so wonderful that it becomes addictive. During my recent trip to Europe, I stopped at nearly a dozen tea shops, and of course, I came back with a couple of very nice bags of Darjeeling Earl Grey tea. This morning I brewed a pot, and I added a little lavender, cinnamon stick, lemon peel, and rock sugar (which I also brought back from Europe). A couple of squares of 90 percent cocoa chocolate and a handful of macadamia nuts complete my morning snack.

Note  If you have not yet seen it, you should check out the new community Windows PowerShell video, Scripting Cmdlet Style. It is inspirational and fun. See how many MVPs you can identify, how many Windows PowerShell team members you see, and if you can spot the Microsoft Scripting Guy. Filmed at TechEd 2014 in Houston, Texas, it is a tribute to the Windows PowerShell community and to the great technology we love.

Image of booth

A simple Windows PowerShell function consists of three parts: the function keyword, the name of the function, and a script block. In fact, I can create a function in the Windows PowerShell console on a single line, for example:

function demo {"hello $input"}

I can now pipe input to the demo function. The command and its output are shown in the following image:

Image of command output

In reality, I do not do this. Although it is theoretically possible to create functions directly inside the Windows PowerShell console, unless you happen to be Bruce Payette, I simply do not think it is a very practical solution. When I write simple functions, I use the Windows PowerShell ISE or a commercial script editor such as PowerShell Studio 2014. In fact, I use the Windows PowerShell ISE as a more advanced and more customizable Windows PowerShell environment.

Here are the best practices I follow when I write simple functions (a short example follows the list):

1. Write simple functions in a script editor, such as the Windows PowerShell ISE.

2. Use the Verb-Noun naming convention. Although it is possible to use very nondescript names (such as a, b, c), I prefer to use Verb-Noun. It does not hurt anything, and it provides a good foundation if I decide to turn the function into an advanced function at a later time.

3. I like to use $input and pipe input to a simple function. This makes things really easy, and I do not have to fool with things like parameters.

4. I do not like to write one-line functions, no matter how compact they may be. They are a bit hard to read. If they are really simple, I put the function keyword and the Verb-Noun name on one line, and the script block on a second line.

5. I like to include a space after the opening of the script block and before the closing of the script block, for example:

{ "hello $input" }

This type of syntax will make it easier when I decide I want to add more stuff to the script block.

6. I like to organize my functions in the order that they will appear in the script. But I do not get too carried away with this, and quite often the order ends up jumbled.

7. Use the regions in the Windows PowerShell ISE. To do this, I begin with the #region tag and end with the #endregion tag. This permits collapsing the region later to facilitate development.

8. A good function should do one thing, and do it well.

9. A good function should be flexible and accept different types of input.

10. A good function should have one way in and one way out.

11. A good function should always return an object, and not formatted text.

12. A good function avoids using things like Write-Host that will not be captured in the output stream.

13. A good function does not pollute the Windows PowerShell environment.

14. Anything that is set in a function should be unset when the function completes running. The best way to do this is to capture the current value and store it in a variable. Then when the function is finished, set the value back to the initial value that is stored in the variable. An example of this is changing the value of $ErrorActionPreference.

15. When you are finished using your function, you may want to remove the function from the Function drive. An example of this is:

Get-Item Function:\demo | Remove-Item

16. If you have made a substantial number of changes to your running Windows PowerShell environment, the easiest way to clean things up is simply to close the Windows PowerShell ISE or console. To do this you can type exit at the Windows PowerShell prompt.

17. You should think about adding some rudimentary input validation to even a simple function. For example, if the function accepts a computer name for input, you might decide to use Test-Connection to ensure the computer is online prior to running the rest of the function. Or you could have an array of permissible computer names, and ensure that the input name is contained in the array.

18. If the function is supposed to do something, and it might fail, you should include script to clean up if it does fail. One easy way to do this is to use transactions, if your provider supports them.

19. Even for a simple function, you should include at least a comment or two about how to use the function. It does not have to be full-blown comment-based Help, but at least include a sample syntax and maybe something about the purpose of the function.

20. I generally like to include a date written in my function, even if I do not add versioning. This gives some hint about the version of the function, should I run across a different function with the same name.

21. In general, if you are following the Verb-Noun construction, it is quite likely that you will run into naming conflicts between different functions. For example, I have over a dozen Get-Bios functions on my computer that I have written over the years. This is the reason for following at least rule #20, and for this rule. It is a good idea to include a major.minor version number in your script, for example: 1.0 for the first version of the script, 2.0 for a rewrite that added new functionality, and 2.1 for a minor revision that corrected a flaw or cleaned up the script a bit.

22. Keep in mind that these are simple functions, and don’t get too carried away. But if you find that the function is becoming really useful, think about moving it to an advanced function.
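
Pulling several of these points together, here is a hedged sketch of a simple function. The Get-Bios name, the date, and the Test-Connection check are only examples of points 17, 19, and 20, not a prescribed pattern:

# Get-Bios
# Purpose: return BIOS information for one or more online computers
# Usage:   'server1','server2' | Get-Bios
# Written: 06/16/2014  Version 1.0
function Get-Bios
{
    foreach ($computer in $input)
    {
        # Rudimentary input validation: only query computers that respond to a ping
        if (Test-Connection -ComputerName $computer -Count 1 -Quiet)
        {
            Get-CimInstance -ClassName Win32_BIOS -ComputerName $computer
        }
    }
}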

That is all there is to using simple functions with Windows PowerShell. Best Practices Week will continue tomorrow when I will talk about advanced functions.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Obtain a List of All Functions in PowerShell


Summary: Learn how to easily find functions in Windows PowerShell.

Hey, Scripting Guy! Question How can I see what functions exist in my current Windows PowerShell session?

Hey, Scripting Guy! Answer Use the Function drive, for example:

dir function:

PowerShell Best Practices: Advanced Functions


Summary: Microsoft Scripting Guy, Ed Wilson, talks about best practices for Windows PowerShell advanced functions.

Microsoft Scripting Guy, Ed Wilson, is here. This morning, I am again enjoying a cup of my Darjeeling Earl Grey tea. Today I added a bit of rose petal, lemon grass, and a cinnamon stick to the mixture. Match it with a homemade cinnamon raisin scone with a little locally sourced butter, and I am certainly in the mood to work on Scripting Guy blog posts.

I was up last night reading Piers Plowman in the original Middle English. The poem (written more than 650 years ago) is certainly interesting, and it has inspired lots of other writers. It can be interpreted primarily as a political document, and many of the passages deal with the difference between what is said, what is done, and what is ideal.

So what does that have to do with Windows PowerShell? Well when dealing with Windows PowerShell best practices, there is often a difference between what is said, what is done, and what is ideal. But in contrast to Piers Plowman, these differences are more a function of expedience than of nefarious intent.

When talking about Windows PowerShell advanced functions, there is a perceived disconnect between the assumed complexity and the potential benefit of such an exercise. I do not think that an advanced function needs to implement every potential feature, such as the 77-line advanced function snippet shown here:

Image of script

There are some things, however, that should happen in an advanced function: comment-based Help is the minimum. Named parameters that use type constraints are another. Structured error handling is also a good one to implement. One of the reasons for writing an advanced function is that it is perfect for adding to a module.

Here are some best practices for advanced functions (a skeleton example follows the list):

  1. All advanced functions should implement a minimal level of comment-based Help. The minimum items to add are the Synopsis, Description, and Example nodes.
  2. All advanced functions should implement named parameters. At a minimum you should provide types for the parameters. Parameter validation should be used to simplify error handling when it makes sense.
  3. You should consider adding parameter aliases if your parameter names are long.
  4. The examples you use in your comment-based Help should be examples that actually work in a wide variety of environments.
  5. You should implement structured error handling in advanced functions by using Try/Catch and Finally.
  6. In your Finally block, you should set your environment back to its base configuration—that is, remove any changes you added.
  7. You should use cmdlet binding and add support for the –debug parameter at a minimum.
  8. Consider adding support for the –verbose and –whatif parameters if it makes sense for your situation.
  9. Give your advanced function a good descriptive Verb-Noun name.
  10. Consider creating aliases for your advanced function that will make it easier to use.
  11. If you have a group of related advanced functions, consider adding them to a module.
  12. In your module, keep your alias names to a consistent length, so all are three, four, or five letters that make sense. You can use a letter for each major syllable, such as GPS for Get-Process.
  13. If you create a module and write an advanced function that does something, consider also writing a function that reports that something, and that undoes the thing that was done. This would typically include the verbs Get, New, and Remove.
  14. When picking out verbs, always use standard verbs. Use Get-Verb to see what verbs are available. Refer to system cmdlets for examples of usage and follow the Windows PowerShell team examples.
  15. Always return an object from your advanced function, not only text.
  16. If you create a module with your advanced functions, make sure you also create a module manifest.
  17. Make sure your script is lined up, indented properly, and easy to read. If you can read and understand your script, you will simplify your debugging process.
  18. Make sure your script fits on a single screen without scrolling. Keep in mind that script that is too wide is difficult to read and to debug.
  19. Use regions to simplify debugging.
  20. Select and run script in sections when debugging a complicated function.
  21. Make sure you test your function in a pristine environment with no dependencies on profiles, admin rights, and versions to ensure the greatest compatibility.
  22. If you are unsure if your function runs in a down-level environment, use the #Requires tag for your current version of Windows PowerShell.
  23. Use Strict mode for your version of Windows PowerShell, and ensure that if you do, it matches the version set for #Requires.
  24. Test your function in a non-elevated environment to see if it requires elevation. If it does, use the Test-IsAdmin function to verify that the environment is elevated.
  25. Create a clean virtual machine for various versions of the operating system, and test your function there to see if there are operating system dependencies in addition to Windows PowerShell version dependencies.
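
Here is a hedged skeleton that pulls several of these practices together. The function name, parameter, and body are placeholders rather than a complete implementation:

#Requires -Version 4.0
# Item 23 suggests matching strict mode to the version you require
Set-StrictMode -Version Latest

function Get-Something
{
    <#
    .SYNOPSIS
        Gets something from one or more computers.
    .DESCRIPTION
        Illustrates comment-based Help, typed named parameters, cmdlet binding,
        and structured error handling.
    .EXAMPLE
        Get-Something -ComputerName localhost
    #>
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [Alias('CN')]
        [string[]]$ComputerName
    )

    $oldEap = $ErrorActionPreference
    try
    {
        $ErrorActionPreference = 'Stop'
        foreach ($computer in $ComputerName)
        {
            Write-Verbose "Querying $computer"
            # Always return an object, never formatted text
            Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $computer
        }
    }
    catch
    {
        Write-Error $_
    }
    finally
    {
        # Put the environment back the way it was
        $ErrorActionPreference = $oldEap
    }
}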

That is all there is to using advanced functions in Windows PowerShell. It also closes Windows PowerShell Best Practices Week. Join me tomorrow when I will have a guest blog post written by Gary Jackson, who will talk about automating user creation.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


PowerTip: Set the PowerShell ISE Zoom Level


Summary: Use a Windows PowerShell command to set the ISE zoom level.

Hey, Scripting Guy! Question How can I ensure that the Windows PowerShell ISE zoom level is set to 100 percent all the time?

Hey, Scripting Guy! Answer Use the $psISE options, and set the zoom level in your ISE profile:

$psISE.Options.Zoom = 100

Weekend Scripter: Use PowerShell to Automate Active Directory Accounts


Summary: Guest blogger, Gary Jackson, shares a technique to automate Active Directory accounts.

Microsoft Scripting Guy, Ed Wilson, is here. I met Gary Jackson at the Windows PowerShell Summit in April. We talked about some cool things he has been working on, so I invited him to write a guest blog post. First a little about Gary…

I am currently a senior database administrator in the healthcare field in Dallas, Texas. I started in the IT industry in 1983 with the Apple IIe. Over the years, I’ve managed various systems, from OpenVMS, Windows, and UNIX to NetWare. Scripting has always been a big part of my daily life because I’m too lazy to do real work. My motto is, “Let the computer do the work.” I was introduced to Windows PowerShell (Monad at the time) in 2005, but I didn’t use it much until Windows PowerShell 2.0. At that point, I made the switch from VBScript to Windows PowerShell.

Creating Active Directory accounts is one of the job duties that most people in IT hate or strongly dislike. And if you’re one of the unlucky IT shops that doesn’t have a provisioning tool, I’d suggest using Windows PowerShell to automate the process. That is what I decided to do about five years ago.

To be effective, the script had to:

  • Create the mailbox (requires the Microsoft.Exchange.Management.PowerShell.Admin snap-in) and the Active Directory account.
  • Place each mailbox in the appropriate storage group, based on the first initial of the user’s last name.
  • Place each Active Directory account in the appropriate organizational unit, based on the user’s department.
  • Create a shared home folder for each user.
  • Apply appropriate group membership for each account.
  • Set appropriate permissions for each home folder.
  • Send an email to the Help Desk to notify them about the account status.

To further automate this process, I had one of our programmers create a hook into the Lawson HR system that would append each new user into a .csv file that my script would then parse to create the new Active Directory account.

User accounts are stored in departmental organizational units with template accounts. The template accounts are used for copying the correct group membership to each user account and for setting the ParentDN so that the new account is placed in the appropriate organizational unit.

By using the Exchange Server module for Windows PowerShell, the following command will create the user mailbox and Active Directory account, and place each mailbox in the appropriate storage group.

Import-Csv \\acmeweb02\c$\employees.csv |
    ForEach-Object {
        $name = $_.name
        $alias = $_.alias
        $user = $alias
        $userprincipalname = "$alias@acme.org"
        $EmpID = $_.EmpID
        $Title = $_.title
        $Dept = $_.department
        $LastNameInit = $_.Lastname
        $LastNameInit = $LastNameInit.substring(0,1)
        $DeptNumber = $_.DeptNumber
        $suffix = "TEMPLATE"
        $tmplateUser = "$DeptNumber$suffix"
        $templateDN = Get-QADUser -IncludedProperties parentContainerDN $tmplateUser |
            Select-Object -ExpandProperty parentContainerDN

        # Pick the mailbox database (storage group) based on the first initial of the last name
        switch -regex ($LastNameInit)
        {
            "[A]"   {$Database="ACMEMX02\6th Storage Group\A Mailboxes"}
            "[B]"   {$Database="ACMEMX02\7th Storage Group\B Mailboxes"}
            "[E]"   {$Database="ACMEMX02\10th Storage Group\E Mailboxes"}
            "[F]"   {$Database="ACMEMX02\11th Storage Group\F Mailboxes"}
            "[G]"   {$Database="ACMEMX02\12th Storage Group\G Mailboxes"}
            "[S]"   {$Database="ACMEMX02\13th Storage Group\S Mailboxes"}
            "[T]"   {$Database="ACMEMX02\14th Storage Group\T Mailboxes"}
            "[U-V]" {$Database="ACMEMX02\15th Storage Group\U-V Mailboxes"}
            "[W-Z]" {$Database="ACMEMX02\16th Storage Group\W-Z Mailboxes"}
            "[H]"   {$Database="ACMEMX02\17th Storage Group\H Mailboxes"}
            "[I-K]" {$Database="ACMEMX02\18th Storage Group\I-K Mailboxes"}
            "[L]"   {$Database="ACMEMX02\19th Storage Group\L Mailboxes"}
            "[M]"   {$Database="ACMEMX02\20th Storage Group\M Mailboxes"}
            "[N-O]" {$Database="ACMEMX02\21st Storage Group\N-O Mailboxes"}
            "[P-Q]" {$Database="ACMEMX02\22nd Storage Group\P-Q Mailboxes"}
            "[C]"   {$Database="ACMEMX02\8th Storage Group\C Mailboxes"}
            "[D]"   {$Database="ACMEMX02\9th Storage Group\D Mailboxes"}
            "[R]"   {$Database="ACMEMX02\23rd Storage Group\R Mailboxes"}
        }

        # $Password is assumed to be defined earlier in the full script
        New-Mailbox -Name $name -Alias $alias -FirstName $_.Firstname -LastName $_.Lastname `
            -UserPrincipalName $userprincipalname -Database $Database `
            -OrganizationalUnit $templateDN -Password $Password
    }

The following function is used to create the home shared folders:

function CreateHomeDir
{
    Param([string]$user)
    $homepath = "f:\Users\$user"          # local path on the file server
    $shareName = "$user$"                 # hidden share name
    $Type = 0                             # 0 = disk drive share
    $pathToShare = "\\acmenet01\f$\Users\$user"
    New-Item -Type directory -Path $pathToShare | Out-Null
    $WMI = [wmiClass]"\\acmenet01\root\cimV2:Win32_Share"
    $WMI.Create($homepath,$shareName,$Type) | Out-Null
}

As mentioned earlier, each departmental organizational unit contains a template account with the appropriate security group membership for that department. We use the template account to copy the group membership to the newly created user account. I use the Quest Active Directory tools (the Quest.ActiveRoles.ADManagement snap-in), but the script can be easily modified to use the Active Directory module. The function is shown here:

function set-Attributes
{
    Param(
        [string]$user,
        [string]$tmplateUser
    )

    AddToCompanyWideGroup -user $user

    $groups = Get-QADUser $tmplateUser | Select-Object -ExpandProperty memberof
    foreach ($Group In $groups)
    {
        Add-QADGroupMember -Identity $Group -Member $user
    }
    $arrAttrs = "department"
    $FirstName = $_.Firstname
    $LastName = $_.Lastname
    $displayName = "$FirstName $LastName"
    $State = "TX"
    $Country = "United States"
    $CountryAbbr = "US"
    $CountryCode = "840"
    $Company = "Acme Corporation"
    $ScriptPath = "ACMELOG"
    $HomeDrivePath = "\\acmenet01\$user$"
    $HomeDrive = "U:"
    # The assignments below expect $user to be bound to a directory entry object
    # (not just the string passed in) so that SetInfo() can write the changes
    $user.st = $State
    $user.scriptpath = $ScriptPath
    $user.company = $Company
    $user.pwdLastSet = 0
    $user.countryCode = $CountryCode
    $user.co = $Country
    $user.c = $CountryAbbr
    $user.employeeID = $EmpID
    $user.title = $Title
    $user.displayName = $displayName
    $user.SetInfo()
    $user.homeDrive = $HomeDrive
    $user.homeDirectory = $HomeDrivePath
    # $UserToCopy (the template account) is defined elsewhere in the full script
    foreach ($Arr In $arrAttrs)
    {
        $updatedAttr = $UserToCopy.Get($Arr)
        $user.Put($Arr,$updatedAttr)
    }
    $user.SetInfo()
    $user.physicalDeliveryOfficeName = $user.department
    $user.description = $user.title
    $user.SetInfo()
}

Now we need to set the appropriate permission for the home folder.

function SetSharePerm
{
    Param([string]$user)
    $shareName = "\\acmenet01\$user$"
    $userName = "acme\$user"
    $SUBINACL = 'c:\subinacl.exe'
    & $SUBINACL /Share $shareName /grant="acme\Domain Admins"=F /grant=$userName=C | Out-Null
}

Finally, we will send an email to the Help Desk to notify them that the accounts have been created.

function mailit
{
    Param(
        [string]$user,
        [string]$FirstName,
        [string]$LastName
    )
    $EmailList = "helpdesk@acme.org"
    $MailMessage = @{
        To = $EmailList
        From = "DONOTREPLY@acme.org"
        Subject = "NEW USER ACCOUNT"
        Body = "A new user account has been created. Initial login information listed below: `n
First name: $FirstName
Last name:  $LastName
Userid: $user
Password: letmein
Thank You."
        SmtpServer = "smtp.acme.org"
        ErrorAction = "Stop"
    }
    Send-MailMessage @MailMessage
}

That’s all it takes. The script can be scheduled to run once a night, making Active Directory account creation totally hands free.

Automating this process gives us time to do more challenging and exciting work—like writing new Windows PowerShell scripts.

The full script can be downloaded from the Script Center Repository: PowerShell: Create AD Users from CSV File.

~Gary

Thanks for sharing your time and knowledge, Gary.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Report Daylight Savings Time


Summary: Use Windows PowerShell to report if it is daylight savings time.

Hey, Scripting Guy! Question How can I use Windows PowerShell to report if it is daylight savings time?

Hey, Scripting Guy! Answer Use the Get-Date cmdlet, and call the IsDaylightSavingTime method from the DateTime object:

(Get-Date).IsDaylightSavingTime()

Weekend Scripter: Predicting the Future of PowerShell


Summary: Microsoft Scripting Guy, Ed Wilson, talks about Windows PowerShell and the future of the technology.

Microsoft Scripting Guy, Ed Wilson, is here. Ahhh…the weekend. This morning, I decided (actually, I decided last night by not setting my alarm clock) to sleep in until the sun gently woke me with its warm embrace streaming in the window. It is a much more pleasant way to awaken than the jarring, blaring, ANK ANK ANK sound of my electronic alarm clock. I seriously believe that the engineer who designed this thing laughed heartily as he imagined the effect it would have on a slumbering Scripting Guy.

Anyway, I beat the evil alarm clock by not placing myself into its clutches, and I am still awake at 6:30. But it is somehow a kinder, gentler 6:30 than the one produced by the aforementioned evil clock. I decided to make a nice pot of English Breakfast tea with a bit of spearmint, peppermint, licorice root, and a cinnamon stick in it. Add a toasted cinnamon raisin bagel, and I call it a good morning indeed.

The future of PowerShell?

Dude, this one is easy. Windows PowerShell is mission critical, and it is central to our management strategy. All enterprise products are adding more Windows PowerShell to their systems. A good way to look into the future is to look at TechEd.

Over the years, I have learned to look at the Microsoft TechEd conference as a very good way to predict what is coming from Microsoft in the next year to 18 months. Often, we even catch glimpses of things two years out. In the fast paced world of technology, that is like 14 years in doggie years (which for whatever reason, bears a reasonable relationship to Internet years).

The predictive power of TechEd

So how important is Windows PowerShell? Well for starters, Windows PowerShell grabbed three of the top ten TechEd 2014 talks in Houston this year. PowerShell.org printed 3,000 Desired State Configuration (DSC) resource books to hand out at the Scripting Guys booth and at presentations. They were gone in two days. In addition, there have been more than 10,000 downloads of the electronic version from the PowerShell.org Free eBooks website.

At the Scripting Guys booth this year, we talked to more than 5,000 people during the week. This equates to about half of all attendees at TechEd. After the first two days, we had nothing to give away, but people still came to talk to Windows PowerShell people. This is incredible.

If you missed TechEd 2014 in Houston, Texas, here is a rundown of the blog posts I published about TechEd. It will give you a really good feel about Windows PowerShell at the conference. (Some people were saying that next year they should change the name of TechEd to Windows PowerShell Summit. But that name is already taken, and the Windows PowerShell Summit in 2015 will be in Charlotte, North Carolina).

There was a lot of very rich Windows PowerShell content at TechEd 2014 this year, and the Scripting Wife had a hard time creating her ideal Windows PowerShell schedule. Interestingly enough, she has been doing this for the last several TechEds, and this is the first time that we actually had people come up and say they followed her schedule. This happened more than once. Way cool.

We were able to arrange for some excellent guests to assist with fielding questions at the Scripting Guys booth. We had MVPs, Microsoft PFEs, Microsoft Windows PowerShell team members, and other luminaries from the scripting world. Jeffrey Snover was scheduled to talk to people for 30 minutes. He ended up staying for nearly an hour and a half—and the people kept coming. At one point Mark Minasi and Jeffrey Snover were sitting side-by-side, tag teaming questions. It was an incredible moment.

One of the things that we demonstrated to our visitors was the way cool Script Browser and Script Analyzer tools. To coincide with our demos, I worked with Scott and Bill to have a blog post announcing the new tools: Introducing Script Browser and Script Analyzer. People were really impressed with the power and how easy they are to use.

During TechEd 2014 in Houston, I wrote seven extra Hey, Scripting Guy! Blog posts about the conference, the people we were seeing, the questions that were asked, and things that were going on. Here is a quick link to those posts. They will give you a feel for the popularity of Windows PowerShell.

Scripting Cmdlet Style

One of the highlights of TechEd was when Windows PowerShell MVP and honorary Scripting Guy, Sean Kearney came by with his camera to film a video called Scripting Cmdlet Style. If you have not seen it, you should check it out. But bear in mind that it is addictive.

Although the video is really funny, the message is very serious, and it highlights the community support for Windows PowerShell, in addition to the power of Windows PowerShell to solve real-world issues. See how many Windows PowerShell MVPs you can count and how many members of the Windows PowerShell team are there.

What is coming next?

If you want to know what is next for Windows PowerShell, check out the preview of the Windows Management Framework 5.0. For a great overview of Windows PowerShell 5.0, check out this blog post: Windows Management Framework 5.0 Preview May 2014 is now available.

One of the really neat new features is PowerShellGet, which will make it easy to discover, install, and update Windows PowerShell modules. Jeffrey Snover also has a great blog post that talks about some of the improvements in the preview: Windows Management Framework V5 Preview.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Test Virtual Hard Disks


Summary: Use Windows PowerShell to test your virtual hard disks.

Hey, Scripting Guy! Question How can I easily test the virtual hard disks on my Windows 8.1 laptop to ensure that they will work?

Hey, Scripting Guy! Answer Get a collection of all .vhd and .vhdx disks, then pipe the resulting objects to the Test-VHD cmdlet:

Get-ChildItem -Path E:\vms\ -Recurse -Include *.vh* | Test-VHD

You can shorten the command to:

gci E:\vms\ -r -in *.vh* | Test-VHD

Use PowerShell to Troubleshoot Defrag Issues


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to troubleshoot defrag issues in Windows 8.1.

Microsoft Scripting Guy, Ed Wilson, is here. This week, I begin a week-long series about looking at the Windows event and diagnostic logs via Windows PowerShell. I have previously written about this, and you may want to look at some of the previous posts.

With the operating system becoming more complicated, and with more demands placed on it, it is inevitable that things will go wrong. However, Windows (especially Windows 8.1) is very robust. This means that a computer may appear to run well, yet there might be tons of things going on in the event log.

In the old days, back when it was typical for a network administrator to be in charge of fewer than 20 servers, it was a best practice to review the event logs (there were only three: Security, Application, and System) on a daily basis. Now, I have more than 20 virtual machines ON MY LAPTOP, and the number of event logs and diagnostic logs has exploded. It could take a person all week to properly review the logs for a single day. And in the end, it would be like painting the Golden Gate Bridge (that is, by the time you finish painting it, it needs painting again).

The neat thing is that Windows PowerShell makes it very easy to review event logs.

A quick look at Event Viewer

I opened the Event Viewer tool (I typed eventvwr inside the Windows PowerShell console), and I filtered the Application log to show only errors. I had a quick glance, and I saw something I was not expecting...a couple of Defrag errors. Hmm…

These errors are shown in the following image:

Image of menu

Well, I certainly was not expecting to see that. I was actually going to look at the application hangs. But I also noticed several other “problems.” Looks like I am going to be busy fixing my laptop this week.

So what is going on with the defrag?

The error says the volume recovery was not optimized because an error was encountered, and it gives a nice hex number. Unfortunately, this particular number only means that a parameter is incorrect, and a quick search reveals that it happens to be thrown by lots of applications.

Dude, where’s my defrag?

My first concern is whether I am getting any defragging at all. I mean, my drive C: is an SSD, but the modern defrag utility is supposed to detect this and perform a trim operation to optimize the drive. But what about my other drives? Clearly, I need to investigate this.

I decide to use the Get-EventLog cmdlet to search the Application log for messages from microsoft-windows-defrag. I sort by TimeGenerated, and then I produce a list with only the TimeGenerated and Message properties. I know that the Get-WinEvent cmdlet is more powerful than the older Get-EventLog cmdlet, but for working with the Application log, I did not think I needed to use Get-WinEvent. Here is the command (it is a single-line command that I broke at the pipe character to make it easier to read):

Get-EventLog -LogName Application -Source "microsoft-windows-defrag" |
    Sort-Object TimeGenerated -Descending | Format-List TimeGenerated, Message

When I run the command, the output in the following image appears:

Image of command output

I see that at least some of the drives are being defragmented. Drive E: and drive D: are both defragmented. On drive C:, my SSD is optimized via the retrim operation. So all is groovy, right?

Well, no. The volume recovery is generating an error. Also, drive D: is a portable drive, and I really do not want to worry about having it defragmented. So clearly, I need to look at some stuff.

I cheat and open the Defrag utility

I could probably do what I need to do by using Windows PowerShell and maybe WMI or something else. But dude, it is my laptop, and as I indicated earlier, there are a lot of issues I need to look at. So I cheat and open the Defrag utility. But I don’t cheat too badly because at least I open it via Windows PowerShell and not the GUI.
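
For the record, this particular task can also stay in Windows PowerShell. Windows 8 and Windows 8.1 ship the Storage module, and its Optimize-Volume cmdlet can retrim or defragment a volume, so a non-cheating approach might look something like this (the drive letters are only examples):

# Retrim the SSD system volume
Optimize-Volume -DriveLetter C -ReTrim -Verbose

# Traditional defragmentation of a data volume
Optimize-Volume -DriveLetter E -Defrag -Verbose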

To be honest, I am not even sure what the utility is called, where I might find the icon to launch the utility, or any of that. But I do know the executable name is dfrgui (as in defrag user interface). So I type that into my Windows PowerShell console, and about a minute later, the user interface appears. I learn that the program is actually called Optimize Drives now, because that is what appears in the title bar. Here is what I see when it finally opens:

Image of menu

I see that the three drives (C, D (or FourTB_BU_Drive), and E) are all optimized. I also see that the utility correctly identifies my drive C: as an SSD. In addition, I now see what my problem is. There are two things I am concerned about:

  • The portable drive
  • The recovery drive

I do not want to defrag my four terabyte portable backup drive because it will take forever. And I do not want to defrag my recovery drive. It is a special partition on my SSD. So I need to look at the settings. Obviously, I click the Change settings box. Next I click the Choose button. The dialog box is shown here:

Image of menu

So all I need to do is clear the check boxes for the backup drive (FourTB_BU_Drive) and the Recovery partition. I also see that I can tell it to not automatically optimize new drives. So by clearing this box, I can keep it from trying to optimize other portable USB drives that I plug in from time to time.

Sweet. So Windows PowerShell pointed me to a problem that I did not even know I had. Nice.

That is all there is to using Windows PowerShell to troubleshoot defrag issues in Windows 8.1. Windows Event Log Week will continue tomorrow when I will talk about more cool things.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Open Printer Dialog Box from PowerShell


Summary: Open Printer and Devices in Control Panel from within Windows PowerShell.

Hey, Scripting Guy! Question How can I use Windows PowerShell to open Printer and Devices in Control Panel so I don't have to use the mouse?

Hey, Scripting Guy! Answer Windows 8 introduced the Show-ControlPanelItem cmdlet, which you can use with a wildcard character:

Show-ControlPanelItem -Name *print*


Use FilterHashTable to Filter Event Log with PowerShell


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using a filter hash table to filter the event log with Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. The weather here in Charlotte, North Carolina has turned hot and humid. As a result, the Scripting Wife decided to migrate north for a while. Actually, she is attending a conference in Cincinnati, Ohio. This has given me a bit of extra time to play around with Windows PowerShell and to work on my laptop.

The most powerful way to filter event and diagnostic logs by using Windows PowerShell is to use the Get-WinEvent cmdlet. Introduced in Windows PowerShell 2.0, the Get-WinEvent cmdlet is not new technology. But most people do not use the Get-WinEvent cmdlet because it seems to be more difficult to use. The Get-EventLog cmdlet that I used yesterday is easy to use, and for a lot of things, it works just fine.

But Get-WinEvent has several ways to filter on the left side of the pipeline. When working with large logs, grabbing everything and sending it down the pipeline to the Where-Object cmdlet is not the most efficient thing to do. In fact, it can be downright slow. An example of this sort of slow command is shown here:

Get-EventLog -LogName application | where source -match 'defrag'

Get-WinEvent the easy way

The easiest way to perform powerful queries by using the Get-WinEvent cmdlet is to use the FilterHashTable parameter. As the parameter name might imply, it accepts a hash table as a filter. A hash table is made up of key/value pairs. Therefore, the trick is to know the permissible key names and what an acceptable value for each key might look like. Here is a table that shows the key names, the data type each accepts, and whether it accepts a wildcard character for the data value (an example using the StartTime and EndTime keys follows the table).

Key name        Value data type    Accepts wildcard characters?
LogName         <String[]>         Yes
ProviderName    <String[]>         Yes
Path            <String[]>         No
Keywords        <Long[]>           No
ID              <Int32[]>          No
Level           <Int32[]>          No
StartTime       <DateTime>         No
EndTime         <DateTime>         No
UserID          <SID>              No
Data            <String[]>         No
*               <String[]>         No
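
The StartTime and EndTime keys are worth calling out because they take DateTime objects, which makes time-boxed queries easy. Here is a hedged example that grabs the last 24 hours of errors from the Application log:

Get-WinEvent -FilterHashtable @{
    LogName   = 'Application'
    Level     = 2
    StartTime = (Get-Date).AddDays(-1)
    EndTime   = (Get-Date)
}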

One thing I like to do when I build a query by using Get-WinEvent is to take it a step at a time. I begin with the LogName. As shown here, this first query is the same as typing Get-EventLog –LogName Application:

Get-WinEvent -FilterHashtable @{logname='application'}

The next thing I want to specify (I do not have to use the order that is presented in my previous table) is the ProviderName. This example returns entries generated by the .NET Runtime source in the Application log:

Get-WinEvent -FilterHashtable @{logname='application'; providername='.Net Runtime' }

The ProviderName is the name that appears in the Source field in the Event Viewer. This is shown here:

Image of menu

I use the –path parameter when I am working with archived event logs. I wrote a good blog post about that: Use PowerShell to Parse Saved Event Logs for Errors.

In my hash table, the next key is the Keywords key name. This sounds like you would be able to use keywords to filter out event log events. But the Data Type field holds an array made up of the [long] value type, and a [long] value type holds a really large number. Here is the maximum value:

PS C:\> [long]::MaxValue
9223372036854775807

Therefore, what Windows PowerShell wants is a number, not a keyword (such as Security). I can use the GUI to see which keywords are permissible. This is shown here:

Image of menu

The problem is that when I attempt to use one of these keywords, I get an error message. This is because these are string values, and not long numbers. So when I have a potential list of keywords that have associated numeric values, I think enumeration. In fact, these map to the StandardEventKeywords enumeration, as shown here:

PS C:\> [System.Diagnostics.Eventing.Reader.StandardEventKeywords] | gm -s -MemberType property

   TypeName: System.Diagnostics.Eventing.Reader.StandardEventKeywords

Name             MemberType Definition
----             ---------- ----------
AuditFailure     Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
AuditSuccess     Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
CorrelationHint  Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
CorrelationHint2 Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
EventLogClassic  Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
None             Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
ResponseTime     Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
Sqm              Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
WdiContext       Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...
WdiDiagnostic    Property   static System.Diagnostics.Eventing.Reader.StandardEventKey...

This enumeration is documented on MSDN (StandardEventKeywords Enumeration), but it does not display the enumeration numeric values. For a function that will do this, take a look at my series of blog posts about enumerations, and in particular, read Enumerations and Values. In fact, my Get-EnumAndValues function is so helpful that it is a function I have in my Windows PowerShell profile. When I use the Get-EnumAndValues function, I retrieve the following results:

Name                           Value
----                           -----
AuditFailure                   4503599627370496
AuditSuccess                   9007199254740992
CorrelationHint2               18014398509481984
EventLogClassic                36028797018963968
Sqm                            2251799813685248
WdiDiagnostic                  1125899906842624
WdiContext                     562949953421312
ResponseTime                   281474976710656
None                           0
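
The exact Get-EnumAndValues function is in the posts linked above; a minimal sketch of the same idea, which produces a listing like the one shown, might look like this:

function Get-EnumAndValues
{
    Param([Type]$enumType)
    # Emit each member name of the enumeration together with its underlying numeric value
    [enum]::GetValues($enumType) |
        ForEach-Object {
            [pscustomobject]@{
                Name  = $_.ToString()
                Value = $_.value__
            }
        }
}

Get-EnumAndValues -enumType ([System.Diagnostics.Eventing.Reader.StandardEventKeywords])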

Now the query looks like the following (this is a one-line command that is wrapped for readability):

Get-WinEvent -FilterHashtable @{logname='application'; providername='.Net Runtime'; keywords=36028797018963968}

Because this is an enumeration, I can also use the actual enumeration static property, but I have to convert it to its numeric value by using the value__ property, rather than relying on the string it returns. To do this, I might use the following script:

$c = [System.Diagnostics.Eventing.Reader.StandardEventKeywords]::EventLogClassic
Get-WinEvent -FilterHashtable @{logname='application'; providername='.Net Runtime'; keywords=$c.value__}

As I have been running my commands, I have been getting increasingly shorter lists of event log records. From that list, I select the particular event ID, which in the filter hash table becomes the ID key. This command is shown here:

Get-WinEvent -FilterHashtable @{logname='application'; providername='.Net Runtime'; keywords=36028797018963968; ID=1023}

I now decide that I want to filter only the errors. This is the Level property. But when I use the command shown here, it generates an error message:

PS C:\> Get-WinEvent -FilterHashtable @{logname='application'; providername='.Net Runtime'; keywords=36028797018963968; ID=1023; level='error'}

I go back to my hash table, and sure enough, I see that Level needs an Int32, or an array of Int32 values. So it wants another number, and not the standard error, warning, or information strings that I am used to using. This smells like another enumeration.

I look back at the MSDN page I had open for the previous enumeration, and I discover that there is a StandardEventLevel enumeration. It works the same way as the previous enumeration. I can reference the class, use Get-Member with the –Static parameter, and use my enumeration value function. Here are the enumeration member names and their associated values:

Name                           Value
----                           -----
Verbose                        5
Informational                  4
Warning                        3
Error                          2
Critical                       1
LogAlways                      0

Armed with this information, I know that an error is level 2. So here is the modified command:

Get-WinEvent -FilterHashtable @{logname='application'; providername='.Net Runtime'; keywords=36028797018963968; ID=1023; level=2}

That is all there is to using Get-WinEvent to look at event log keywords. Event Log Week will continue tomorrow when I will talk about more way cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Find Permissible Event Log Keywords with PowerShell


Summary: Learn to find permissible event log keywords values with Windows PowerShell.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find examples of keywords when I use a FilterHashTable value with Get-WinEvent?

Hey, Scripting Guy! Answer Create an instance of the StandardEventKeywords enumeration, and then look at the Static properties:

[System.Diagnostics.Eventing.Reader.StandardEventKeywords] | gm -s -MemberType property

Data Mine the Windows Event Log by Using PowerShell and XML


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Get-WinEvent in Windows PowerShell with FilterXML to parse event logs.

Microsoft Scripting Guy, Ed Wilson, is here. Today I am sipping a cup of English Breakfast tea. In my pot, I decided to add a bit of spearmint, peppermint, licorice root, lemon peel, orange peel, and lime peel to the tea. The result is a very refreshing cup of tea with a little added zing.

XML adds zing to event log queries

The other day when I opened the event log on my laptop, I noticed all the red stop signs, and I thought, "Dude, I really need to investigate this."

I decided to look at the application hangs. Although I can use the Event Viewer to filter for the Application Hang source, the Error level, and event ID 1002, that is as far as I can go by default. To see what application is hanging, I need to go into the message details box. This is a manual process, and it is shown here:

Image of menu

It is possible to improve this situation, and to filter only on a specific application. This is because the data is stored in the Event Data portion of the message. This section appears when I select XML View from the Details tab, as shown here:

Image of menu

I can use this information to create a custom XML query by clicking Filter Current Log, clicking XML, and then clicking the Edit query manually check box. This is shown here:

Image of menu

In fact, this outlines my process for creating a custom XML filter for the event log: I select as much as I can by using the graphical tools, and then I edit the XML query manually in the dialog box. The downside is that if I do not get the query correct, it either displays no records, displays the wrong records, or tells me that my query is invalid. At least that is what happens to me.

But I do not directly edit the query in the dialog box, because if I get it wrong the first time, I have messed up my query. So I copy the autogenerated XML filter and paste it into a blank Notepad window for safekeeping. I then edit the query. If I mess it up, I simply return to Notepad, retrieve my previous query, and start over. Simple.

Looking for instances of Lync hangs

When I was rummaging around in the Event Viewer, I noticed that several of the hangs were caused by Lync.exe. So, I thought I would create a custom query to look for those instances. To do this, I need to get into the Event Data node and look for Lync.

After I create a generic XML query by using the GUI tools, I copy the query, and turn it into a here string. Here is the basic query:

<QueryList>

  <Query Id="0" Path="Application">

    <Select Path="Application">*[System[Provider[@Name='Application Hang'] and (Level=2) and (EventID=1002)]] </Select>

  </Query>

</QueryList>

To make it a here string, I add @" and "@ around the string, and I assign it to a variable. Now I need to access the EventData node and look for the first Data element that is equal to lync.exe. I add this condition after EventID=1002)]], joining the two parts with and. Here is the completed query:

$query = @"

<QueryList>

  <Query Id="0" Path="Application">

    <Select Path="Application">*[System[Provider[@Name='Application Hang']

    and (Level=2) and (EventID=1002)]]

    and *[EventData[Data='lync.exe']]</Select>

  </Query>

</QueryList>

"@

To run it, all I do is call Get-WinEvent and pass the $query variable as the value for the –FilterXml parameter. This is shown here:

Get-WinEvent -FilterXml $query 

The command and the results are shown in the following image:

Image of command output
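If the default output is more than I need, I can tidy it up. Here is a hedged example that selects a few of the standard EventLogRecord properties:

Get-WinEvent -FilterXml $query |
    Select-Object TimeCreated, Id, ProviderName, Message |
    Format-Table -AutoSize -Wrap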

Without using XML

Without using XML, someone might come up with a command something like the following:

Get-WinEvent -LogName application |

    where { $_.providername -eq 'application hang' -and

    $_.level -eq 2 -and

    $_.ID -eq 1002 -and

    $_.message -match 'lync.exe'}

It works, and it gets the job done. But how well does it perform?

Although the command seems to work pretty well, I will use Measure-Command to see how well. To do this, I add the command to a script block for Measure-Command. Here is what it looks like:

Measure-Command {Get-WinEvent -LogName application |

    where { $_.providername -eq 'application hang' -and

    $_.level -eq 2 -and

    $_.ID -eq 1002 -and

    $_.message -match 'lync.exe'} }

The results? It takes 10.16 seconds as shown here:

Image of command output

And now for the XML query...

$query = @"

<QueryList>

  <Query Id="0" Path="Application">

    <Select Path="Application">*[System[Provider[@Name='Application Hang']

    and (Level=2) and (EventID=1002)]]

    and *[EventData[Data='lync.exe']]</Select>

  </Query>

</QueryList>

"@

Measure-Command {Get-WinEvent -FilterXml $query }

The results take a mere 0.07 seconds. This is an amazing speed increase. Here is an image of the script and the output:

Image of command output

Although this is great performance, and it makes me happy on my laptop, suppose I was trying to run the command against a thousand computers. That ten seconds stretches into more than two and a half hours. I spent less than five minutes making the query. So five minutes to save ten seconds is not a great investment. But five minutes of dev time to save more than two and a half hours is a great ROI.
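Here is the back-of-the-envelope math behind that statement, using the timings from my laptop:

# Where-Object approach at 10.16 seconds per computer
(10.16 * 1000) / 3600     # roughly 2.8 hours for 1,000 computers

# XML filter approach at 0.07 seconds per computer
(0.07 * 1000) / 60        # roughly 1.2 minutes for 1,000 computers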

Spend a little time to work out the syntax for XML filters by using Get-WinEvent. This is an area where a bit of investment in learning will pay off handsomely in the future.

That is all there is to using Get-WinEvent and an XML filter to parse the event log message data. Event Log Week will continue tomorrow when I will talk about more cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to See Latest Log Entry


Summary: Easily see the newest log entry by using Windows PowerShell.

Hey, Scripting Guy! Question How can I use Windows PowerShell to quickly check the most recent entry from a classic event log, such as the application, system, or security log?

Hey, Scripting Guy! Answer Use the Get-EventLog cmdlet and specify the –newest parameter, for example:

Get-EventLog application -new 1

   Or use the Get-WinEvent cmdlet:

Get-WinEvent application -max 1
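For readability in scripts, the full parameter names are –LogName and –MaxEvents, for example:

Get-WinEvent -LogName application -MaxEvents 1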

Using PowerShell to Parse System Log for Windows Updates


Summary: Learn how to use XML and Windows PowerShell to parse the Windows system event log for Windows updates.

Microsoft Scripting Guy, Ed Wilson, is here. Sometimes I come up with a solution, and then I go looking for a problem to fix. Not often, but sometimes. This is usually the result of playing around with Windows PowerShell on the weekend, and trying different things until I come up with something I think is cool.

This is not one of those occasions. In fact, today’s blog post is pretty important for several reasons. The first reason is that I show how to use an XML filter to retrieve Windows event log entries that are written by Windows Update. Not too big of a deal because I did that yesterday—at least, I used XML to filter the event log. In and of itself, this is a very valuable skill to develop because it generates huge performance increases.

The second reason this post is important is that I also show you how to use XML to retrieve a specific node from the event data portion of the event log entry. This is huge, because more and more event log entries store information in the message portion, and that information is well-formed XML. Most of the stuff you see floating around on the Internet takes the long way around, and it ends up doing all sorts of contorted stuff. My approach is relatively straightforward.

First the query

Obviously the first thing to do is to develop the query. I am using a FilterXML query to retrieve Windows Update events from the system event log.

Note  For more information about this technique, refer to Data Mine the Windows Event Log by Using PowerShell and XML.

I did not have to do too much to this query. In fact, it was as simple as clicking the boxes in the Event Viewer, and then switching to XML. The results of this operation are shown here:

Image of menu

I store the query in a here string. Here is my query:

$query = @"

<QueryList>

  <Query Id="0" Path="System">

    <Select Path="System">*[System[Provider

        [@Name='Microsoft-Windows-WindowsUpdateClient']

        and (Level=4 or Level=0) and Task = 1

        and (band(Keywords,8200))]]</Select>

  </Query>

</QueryList>

"@

Now, I need to use the –FilterXML parameter of the Get-WinEvent cmdlet to perform the actual query. I will also store the resulting events in a variable I call $systemEvents. This is shown here:

$systemEvents = Get-WinEvent -FilterXml $query 
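Before I start parsing, a quick sanity check is to see how many events came back:

$systemEvents.Count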

Search the matching events

I now need to search all of the matching Windows Update events that I stored in the $systemEvents variable and look for the names of the updates. To do this, I use a Foreach loop to walk through the events. I store the time the event was created, and I convert the event log entry to XML by calling the ToXml method. This is shown here:

$systemEvents = Get-WinEvent -FilterXml $query

Foreach($event in $systemEvents)

{

 $tc = $event.TimeCreated

 $xml = [xml]$event.toxml()

When I have an XML document, I call the SelectSingleNode method, and I retrieve the @Name='updateTitle' node. When trying to find out what I need to use, I refer to the XML view of the event log entry in the Event Viewer as shown here:

Image of menu

I then pipe the node to the Select-Object cmdlet, and I expand the #text property. I found this whilst playing around with Get-Member and exploring the returned objects from the various commands. I store the update title (from the #text property) in a variable I call $ud.

Note  Because the #text property begins with a pound sign, I have to put quotation marks around it; otherwise, Windows PowerShell views it as the comment character. I should also say that using the grave accent character ( ` ) does not work for this.

Now, I want to return an object, so I use the New-Object cmdlet to create an object that contains the time stamp and the name of the update. This section is shown here:

$ud = $xml.SelectSingleNode("//*[@Name='updateTitle']") |

   Select -ExpandProperty '#text'

   New-Object -TypeName psobject -Property @{"Date"=$tc;"Update"=$ud}

}

When I run the script, the following output appears:

Image of command output

The complete script is shown here:

$query = @"

<QueryList>

  <Query Id="0" Path="System">

    <Select Path="System">*[System[Provider

        [@Name='Microsoft-Windows-WindowsUpdateClient']

        and (Level=4 or Level=0) and Task = 1

        and (band(Keywords,8200))]]</Select>

  </Query>

</QueryList>

"@

 

$systemEvents = Get-WinEvent -FilterXml $query

Foreach($event in $systemEvents)

{

 $tc = $event.TimeCreated

 $xml = [xml]$event.toxml()

 $ud = $xml.SelectSingleNode("//*[@Name='updateTitle']") |

   Select -ExpandProperty '#text'

   New-Object -TypeName psobject -Property @{"Date"=$tc;"Update"=$ud}

}
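If I save the script (for example, as Get-WindowsUpdateEvents.ps1; the file name is just for illustration), I can sort and format the objects it emits like any other Windows PowerShell output:

.\Get-WindowsUpdateEvents.ps1 | Sort-Object Date -Descending | Format-Table Date, Update -AutoSize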

That is all there is to using XML to return event log data from a single node. Event Log Week will continue tomorrow when I will talk about some really cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
