[VMUG Austria] PowerCLI 101

This is the second part of the guideline for my VMUG session PowerCLI 101. In the first part I focused on PowerShell basics.

PowerCLI Installation

Since version 6.5.1.5, PowerCLI is available exclusively from the Microsoft PowerShell Gallery. Until then it was a separate download on the VMware website. The advantage of the PowerShell Gallery is that modules can be installed with a single PowerShell command:

Install-Module

To use this you need a current version of PowerShell, such as 5.1. You can display your version by running:

(Get-Host).Version

To list already installed VMware modules on your PowerShell host, run:

Get-Module VMware* -ListAvailable

If your version of PowerCLI is older than 6.5.1.5, you have to uninstall it before you can install a version from the PowerShell Gallery. To query the versions available in the PowerShell Gallery, run:

Find-Module VMware.PowerCLI
Find-Module VMware.PowerCLI -AllVersions | Select-Object Name, Version, PublishedDate

The first command shows only the latest version; the second shows all available versions.

To install PowerCLI, use this command:

Install-Module VMware.PowerCLI -AllowClobber

The option -AllowClobber allows installing modules that contain cmdlets with names already present on the host. If your host does not have internet access, install PowerCLI on a host that does. Use this command to export the module:

Save-Module -Name VMware.PowerCLI -Path C:\PathToDestination\
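On the offline host, the saved module folders then have to be copied into a directory listed in $env:PSModulePath. A minimal sketch – the source drive letter and the module path are just examples:

```powershell
# Copy the exported module folders (e.g. from a USB drive)
# into the module path of the offline host
Copy-Item -Path E:\PathToDestination\* -Destination "$env:ProgramFiles\WindowsPowerShell\Modules\" -Recurse
```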

To update PowerCLI to the current version – when the installed version already came from the PowerShell Gallery:

Update-Module vmware.powercli

After PowerCLI is installed, it has to be imported into the current session before it can be used. This is done with the command:

Import-Module VMware.PowerCLI

With recent versions on Windows 10 and Server 2016, it is necessary to configure certificate handling in PowerCLI. Otherwise it is not possible to connect to a vCenter or host that uses an untrusted certificate:

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope Session
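This setting with -Scope Session only affects the current session. If you want to keep it permanently for your user account, the User scope can be used instead – a small sketch:

```powershell
# Persist the certificate handling for the current user (instead of the current session only)
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope User -Confirm:$false
```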

Get help

There are built-in cmdlets in PowerShell to get help. The following are the ones I use most often.

Get-Command
Get-Command -Module VMware.VimAutomation.Core
Get-Command -Module VMware.VimAutomation.Core get-*

Cmdlets:

1: Lists all available cmdlets – whether their module is imported or not.
2: Lists all cmdlets of a specific module – in this example VMware.VimAutomation.Core.
3: Lists all cmdlets of a specific module that begin with “Get-”.

To get more information about a cmdlet, use:

Get-Help
Get-Help Get-VM
Get-Help Get-VM -Examples

1: Shows help for Get-Help itself.
2: Shows information about the cmdlet Get-VM.
3: Shows examples for the cmdlet Get-VM.

Object Cmdlets

There are a few cmdlets that are designed to receive their input from the pipeline. Three of them are very common.

Where-Object

With this cmdlet it is possible to filter the result of another cmdlet. This is useful if the cmdlet itself does not offer a filter option – if it does, its own filtering should be preferred.

Get-VM
Get-VM | Where-Object {$_.MemoryGB -gt 8}

1: Show all VMs
2: Show all VMs with more than 8 GB of configured memory.
You can see in this example that the variable $_ refers to the current object coming from the pipeline.
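Since PowerShell 3.0 there is also a simplified Where-Object syntax for single comparisons that works without the script block and $_:

```powershell
# Simplified syntax: property name and comparison operator, no script block needed
Get-VM | Where-Object MemoryGB -gt 8
```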

Select-Object

When running a PowerCLI cmdlet, a pre-defined set of columns is shown. This does not mean there isn’t more to see. Using Select-Object, you can choose which columns to show. A command I use very often:

Get-VMHost | Select-Object Name, Version, Build

Sort-Object

It is easy to imagine what this cmdlet does. Here are two examples:

Get-VM | Sort-Object Name
Get-VM | Sort-Object Name -Descending

1: Show all VMs in ascending order, which is the default.
2: Show all VMs in descending order.

PowerCLI Usage

To use PowerCLI you need to connect to a vCenter instance or a single host. To do so, use this command:

Connect-VIServer vCenter

To show how to use PowerCLI, I demonstrate how to manage VM port groups on standard vSwitches. Doing this manually is time-consuming and error-prone. Automating these tasks with PowerCLI is a way to overcome this – without the need for any additional licenses.

First, we take a look at the vSwitch configuration of a host managed by vCenter:

Get-VMHost
Get-VMHost hostname | Get-VirtualSwitch
Get-VMHost hostname | Get-VirtualSwitch -name vSwitch1 | Get-VirtualPortGroup

1: Shows all ESXi hosts managed by vCenter.
2: Shows all vSwitches of host hostname.
3: Shows all port groups of vSwitch1 of host hostname.

Now we add a new port group to this vSwitch. In our example we call it “DMZ” and use VLAN ID 999. As you can see, the cmdlet New-VirtualPortGroup receives the vSwitch on which to create the port group from the pipeline:

Get-VMHost hostname | Get-VirtualSwitch -name vSwitch1 | New-VirtualPortGroup -Name "DMZ" -VLanId 999

To change the VLAN ID of an existing port group – to 777 in this example – run:

Get-VMHost hostname | Get-VirtualSwitch -name vSwitch1 | Get-VirtualPortGroup -Name "DMZ" | Set-VirtualPortGroup -VLanId 777

To remove a port group, run:

Get-VMHost hostname | Get-VirtualSwitch -name vSwitch1 | Get-VirtualPortGroup -name "DMZ"| Remove-VirtualPortGroup -Confirm:$false

Use the option -Confirm:$false to prevent PowerShell from asking whether you are sure.
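The opposite is also useful: like most destructive PowerShell cmdlets, Remove-VirtualPortGroup supports -WhatIf, which only shows what would happen without actually changing anything – for example:

```powershell
# Dry run: reports what would be removed, but removes nothing
Get-VMHost hostname | Get-VirtualSwitch -Name vSwitch1 | Get-VirtualPortGroup -Name "DMZ" | Remove-VirtualPortGroup -WhatIf
```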

These examples should illustrate how to use PowerCLI. We performed all tasks on a specific host. But how do we do this for, let’s say, all hosts of a cluster? This is exactly where we can see the power of PowerShell and PowerCLI: we just need to select the hosts we want to perform these tasks on and pipe them into the rest of our cmdlets. For example, to do this VLAN management for all hosts in the cluster “Main-Cluster”, these commands can be used:

Get-Cluster "Main-Cluster" | Get-VMHost | Get-VirtualSwitch -name vSwitch1 | New-VirtualPortGroup -Name "delete" -VLanId 999
Get-Cluster "Main-Cluster" | Get-VMHost | Get-VirtualSwitch -name vSwitch1 | Get-VirtualPortGroup -Name delete | Set-VirtualPortGroup -VLanId 777
Get-Cluster "Main-Cluster" | Get-VMHost | Get-VirtualSwitch -name vSwitch1 | Get-VirtualPortGroup -name delete | Remove-VirtualPortGroup -Confirm:$false

If tasks have to be done for all hosts managed by vCenter, start the command line with Get-VMHost.

Tips and Tricks

Checksums are a useful way to check whether a download is error-free. With PowerShell you can easily calculate them for files – for example, an MD5 checksum:

Get-FileHash -Algorithm MD5 file.txt
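To actually verify a download, compare the calculated hash with the checksum published by the vendor. A small sketch – the published value below is just a placeholder:

```powershell
# Calculate the hash of the downloaded file
$hash = (Get-FileHash -Algorithm MD5 file.txt).Hash

# Compare it to the checksum published by the vendor (placeholder value)
$published = "0123456789ABCDEF0123456789ABCDEF"
if ($hash -eq $published) { "Checksum OK" } else { "Checksum mismatch!" }
```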

When you generate an important list and want to send it by mail, you can export the list to an HTML file. For example:

Get-VM | ConvertTo-Html | Out-File -FilePath c:\temp\out.html
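Such a report can then be sent directly from PowerShell with Send-MailMessage. The mail server and addresses below are placeholders:

```powershell
# Send the exported HTML report as a mail attachment (server and addresses are examples)
Send-MailMessage -SmtpServer "mail.example.com" -From "powercli@example.com" -To "admin@example.com" `
    -Subject "VM report" -Body "Current VM list attached." -Attachments "C:\temp\out.html"
```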

Often the default output cuts off the width of columns. To overcome this, try Format-Table with the option -AutoSize:

Get-VMHost | Format-Table -AutoSize

If you are familiar with the ESXi shell, you probably know esxcli. That command supports neither tab completion nor shortened input. With PowerCLI you get tab completion! See this example:

$esxcli = Get-VMHost kbcesx01.kbc.lab | Get-EsxCli
$esxcli.  # try tab
$esxcli.system.hostname.get()

1: Stores the esxcli interface for a host in $esxcli.
2: Try tab completion.
3: A complete command.

To see all the information a cmdlet can show (at the first level), you can use Select-Object:

Get-VMHost
Get-VMHost | Select-Object *

1: Shows the pre-defined data set of Get-VMHost.
2: Shows all available properties. Using “.” you can dive deeper…


vMotion ends in “Cannot connect to host”

Recently I encountered a problem when trying to migrate VMs to new hosts and storage. vMotion within the new environment was already working without problems. After changing the vMotion configuration in the new environment to enable migration from the existing hosts, I got an error when trying to migrate using storage vMotion: “Cannot connect to host”. The vMotion task ended a second after it started. The source host didn’t show any entry in vmkernel.log. It appears vCenter didn’t even try to connect to a host.

Fortunately, the solution is quite simple: re-add the target host to vCenter. To do so without shutting down or moving any VM, you can:

  1. Disconnect host from vCenter
  2. Remove host from inventory
  3. Add host to vCenter and cluster again

It appears vCenter did not recognize the changes in the vMotion configuration. By re-adding the host, vCenter picks up the new settings. Maybe there is a better way?

A downside of this procedure is that each VM on this host gets a new ID generated by vCenter. So any software that uses this ID will treat the VMs as new VMs. For example, you will probably get a full backup as your next backup.
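If you want to document the old IDs before removing the host from the inventory – for example, to check later which tools are affected – a short PowerCLI snippet can export them. The host name and output path are just examples:

```powershell
# Export name, ID and persistent ID of all VMs on the host
# before removing it from the inventory
Get-VMHost hostname | Get-VM | Select-Object Name, Id, PersistentId |
    Export-Csv -Path C:\temp\vm-ids.csv -NoTypeInformation
```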

[VMUG Session] PowerCLI 101 – PowerShell Basics

Introduction

This is the first blog post of the guiding thread for my session at the VMUG Austria meeting on 25th of April 2019. I talked about:

  • PowerShell Basics
  • PowerCLI 101
  • PowerCLI first function
  • Great PowerCLI open source project: vCheck

Some notes about the topics:

  • One of my goals was that every participant – no matter how experienced – should be able to pick up some new information.
  • Another goal is that attendees/readers want to start playing with PowerShell.
  • This lecture should enable you to help yourself.
  • I concentrated on structures used daily (loops, variables, …).

Continue reading “[VMUG Session] PowerCLI 101 – PowerShell Basics”

Ramdisk full errors

These days I had a problem with a full ESXi ramdisk. Running vdf -h showed /tmp as 100% used. Furthermore, there were a lot of errors in vmkernel.log like:

VisorFSRam: 233: Cannot extend visorfs file /var/lib/vmware/hostd/journal/ because its ramdisk (root) is full.

There are some other problems these hosts suffer from time to time:

  • vSphere HA issues, e.g. the cluster master cannot be found/elected.
  • Connection problems with vCenter.

The hosts are ProLiant DL380 Gen8 servers running the current ESXi 6.5 U2 image. After some troubleshooting I found out:

  • /tmp/ql_ima.log consumed nearly all the space in the tmp partition.
  • Some QLogic software logs into this file.
  • ql_ima.log is locked by the process hostd-worker.
  • The entries in ql_ima.log are about a libqima4xxx module. A corresponding component named libqima4xxx.so can be found in /usr/lib/vmware/ima_plugins.

The very strange fact about these findings is that there is no QLogic hardware in this host! So there should be two ways to solve the problem: update or uninstall the corresponding driver. I decided to remove the unnecessary software. I searched for the driver by running:

esxcli software vib list | grep -i ima

and found ima-qla4xxx, version 500.2.01.31-1vmw.0.3.040121. To remove it, I ran:

esxcli software vib remove -n ima-qla4xxx

After a reboot:

  • no /tmp/ql_ima.log any more.
  • no /usr/lib/vmware/ima_plugins/libqima4xxx.so any more.
  • the tmp partition uses less than 200 KB.

Problem removed.

Notes

  • Removing a VIB and then using Update Manager to patch the host without rebooting does not work. After removing a driver, an Update Manager scan shows all patches of the current version (6.5) as missing.
  • When your host suffers from this issue, try restarting the management agents first. Otherwise it can happen that your host does not even reboot gracefully because of the full ramdisk.

Alarm after SDRS recommendation is automatically applied

When you operate a vSphere storage cluster, you know you can set the automation level of Storage DRS to No Automation (Manual Mode) or Fully Automated. When set to fully automated, the cluster applies recommendations automatically. When set to manual, recommendations can be applied manually. By default, an alarm (named Storage DRS recommendation) pops up at cluster level when a recommendation arises.

[Screenshot: SDRS1]

As you can see in the screenshot, the alarm definition includes the possibility to set the alarm back to normal when pending storage recommendations have been applied. And here is the problem: this no longer works in vCenter 6.7. The alarm in a fully automated storage cluster does not return to normal automatically – which it does in 6.5.

According to VMware support it works as designed, which is hard to believe. There were some changes in alarm management – including alarm naming. This leads to the following strange behavior:

  • In a fully automated storage cluster, the default alarm Storage DRS recommendation stays in warning status, even after the recommendation has been applied. It has to be set back to green manually.
  • In a storage cluster in manual mode, recommendations have to be applied manually and the alarm is set back to normal automatically.

There is no plan to change this behavior back to 6.5-style in 6.7 U2.

Currently there are at least three options when running automatic mode:

  • Leave it as it is.
  • Set the cluster to manual mode. The alarm is cancelled automatically, but recommendations are no longer applied automatically.
  • Disable alarm Storage DRS recommendation.

At first glance the last option seems to be a good solution. But there is a problem with it: when a recommendation is generated but cannot be applied – e.g. because of insufficient space on other datastores within the cluster – there will be no alarm on this cluster. But – when enabled – the alarm Storage DRS recommendation would stay at warning level, because the recommendation couldn’t be applied.

When disabling this alarm, I would strongly recommend setting the percentage of Datastore Disk Usage in the alarm Datastore usage on disk to the same value as the space threshold in the storage cluster configuration. That way you at least get alarms when datastores in the storage cluster reach the threshold level. Furthermore, you should regularly take a look at the cluster faults:

[Screenshot: SDRS2]

This can also be queried by PowerCLI:

(Get-DatastoreCluster cluster_name).ExtensionData.PodStorageDrsEntry.DrsFault.FaultsByVm.Fault


What is the size of a Changed Block Tracking block

An interesting question arose some time ago. A customer changed permissions of files on a Windows file server VM. The changes were replicated using DFS. So far so good, but the amount of data processed at the next incremental backup of the target VM was even more than a full backup. Because incremental backup is based on VMware’s Changed Block Tracking (CBT) feature, it suddenly became interesting to know the size of a CBT block. After some research I did not find an answer to this question. But I did find a great script that calculates the CBT incremental backup size. Based on this script, I created the following script to list the number and size of the blocks changed between two snapshots.

Continue reading “What is the size of a Changed Block Tracking block”

3PAR – no connection to quorum witness

Recently I had problems connecting a 3PAR system to a quorum appliance. The system previously had a working connection to the quorum. After HPE support replaced a node, the connection could not be established any more.

The command showrcopy -qw showed the error “below min safe num QAs” and other errors mentioning QAs. In SSMC the quorum state was “Initializing”. When testing the connection to the quorum using the command setrcopytarget witness check witness_ip, the error “No route to Quorum Witness at witness_ip” was shown. It took me some time to figure out that QAs are Quorum Announcers: on every node such a process tries to connect to the quorum. When testing each node using setrcopytarget witness check -remote -node 0|1 witness_ip, an error like “Error: node 0|1 is not registered” appeared.

The solution that worked for me was to restart both nodes using shutdownnode reboot 0|1. It should go without saying that the nodes must be rebooted one after the other: check the successful reboot using shownode, and when the first node is back in the cluster, reboot the second node. It should also be clear that nodes should only be rebooted when no performance problems are expected (new or empty system, CPU usage <50%, AFA; consider setting the system parameter AllowWrtbackSingleNode to continue using write cache during the reboot).