When initializing a new 3PAR, depending on your protection level, at least the size of one disk per disk type is reserved as spare. Spare chunklets are also created when you add disks to a running system; this is done by admithw. For example, when you initialize a system containing 2 SSDs for Adaptive Flash Cache (AFC), the size of one SSD is reserved as spare. So without writing a single byte to the SSDs, 50% is already used! So how can you get the most space for AFC?
- Check the capacity of your physical disks (PDs) by running showpd -c. You will see free and used chunklet counts; spare chunklets are also listed.
- Remove the spare chunklets from a disk by running removespare n:a, where n is the PD ID of your SSD.
- Check the capacity again by running showpd -c. You will see that free is the same as before, but uninit now shows the size of the previous spares.
- Wait a few minutes and you will see uninit decrease and free increase, as the chunklets get initialized automatically.
- When all chunklets are initialized and therefore free, go ahead and create the cache for AFC in the GUI or CLI (createflashcache). SSMC needs a few minutes to see the new free space on the PDs.
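Put together, the whole procedure might look like the following 3PAR CLI session. The PD IDs (4 and 5) and the cache size (64g) are assumptions for illustration – adapt them to the output of showpd -c on your system:

```
# List chunklet counts per PD; note the spare chunklets on the SSDs
showpd -c

# Release all spare chunklets on each SSD (PD IDs 4 and 5 assumed here)
removespare 4:a
removespare 5:a

# Re-check: the former spares now show up as uninit, then move to free
showpd -c

# Once all chunklets are free, create the flash cache (size is an example)
createflashcache 64g
```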
FC AFA (all-flash array) 3PAR systems show quite good latency on reads and writes. When operating an iSCSI AFA 3PAR, it can happen that the system shows rather high write latency on ESXi hosts. In this post you can read how to fix this.
Continue reading “3PAR (iSCSI) – resolve high write latency on ESXi hosts”
Here is a very simple Linux bash script to shut down all VMs of an ESXi host and then the host itself. When a power failure occurs, for example, this script can be used in UPS software. Some time ago I posted how to use such a script in an HPE UPS environment. You can find the post here.
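A minimal sketch of such a script, meant to run on the ESXi host itself (e.g. triggered via SSH by the UPS software), could look like this. It uses the standard vim-cmd tool available on ESXi; the 60-second wait is an arbitrary assumption:

```shell
#!/bin/sh
# Gracefully shut down all running guests, then power off the ESXi host.

# vim-cmd vmsvc/getallvms lists all registered VMs; the first column is the VM ID
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
    # Only shut down VMs that are actually powered on
    if vim-cmd vmsvc/power.getstate "$vmid" | grep -q "Powered on"; then
        # Guest OS shutdown (requires VMware Tools in the guest)
        vim-cmd vmsvc/power.shutdown "$vmid"
    fi
done

# Give the guests some time to shut down cleanly
sleep 60

# Power off the host itself
poweroff
```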
Continue reading “simple linux Script to shut down VMs and ESXi host”
In a 3PAR Peer Persistence configuration, volumes are synchronously replicated between two 3PAR arrays. For each volume, one array acts as the primary or source array; this array exports the volume for read/write access. The other array acts as the secondary, or target, for replication. For performance reasons it can be an advantage if the backup reads data from the target array. This is easily possible with Veeam B&R 9.5.
Continue reading “Backup from snapshot on target 3PAR”
In a 3PAR system it is best practice that the setsize defined in a CPG is a divisor of the number of disks in the CPG. The setsize basically defines the size of a sub-RAID within the CPG. For example, a setsize of 8 in RAID 5 means a sub-RAID uses 7 data disks and one parity disk.
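As an illustration, the usable data fraction of a RAID 5 set follows directly from the setsize (this is a generic shell snippet of mine, not a 3PAR command):

```shell
# RAID 5 uses one parity chunklet per set, so for a setsize of ssz
# the usable data fraction is (ssz - 1) / ssz.
usable_pct() {
    ssz=$1
    echo $(( (ssz - 1) * 100 / ssz ))
}

usable_pct 8   # setsize 8: 7 data + 1 parity -> 87 (percent, rounded down)
usable_pct 4   # setsize 4: 3 data + 1 parity -> 75
```

A larger setsize therefore wastes less capacity on parity, at the price of larger rebuild sets.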
Continue reading “3PAR – does CPG setsize to disk-count ratio matter?”
Removing a cage can be a simple task. But it can also be impossible without help from HPE. Before starting at point 1, you should probably check point 3 first.
Generally I would recommend doing any of these steps ONLY when you
- are very familiar with 3PAR systems!
- know what you are doing!
- know the consequences of your actions!
If you have doubts about any of the following steps, contact HPE! You should also use the 3PAR Command Line Reference Guide to check the commands used.
Continue reading “Remove a cage from a 3PAR array”
Generally it is supported to mix disks of different sizes of the same type within a 3PAR system. For example, you can use 900GB and 1.2TB FC disks – within the same cage and even within the same CPG. When a disk fails, HPE sends a replacement disk. Some time ago, stock of 900GB FC disks seemed to run out, so when a 900GB disk fails, you will probably get a 1.2TB disk instead.
So how should you handle different disk sizes? Here are a few points to consider:
- How does a 3PAR system handle different sizes within the same CPG? The system tries to put the same amount of data on every disk in the CPG – no matter whether the disk sizes differ. When the smaller disks are full, the larger disks continue to fill up. So replacing just a few disks within a CPG with larger disks does not matter – as long as the smaller disks are not running full. Once that happens, only the larger disks receive new data, which can lead to a serious performance problem.
- When talking about SSDs: mixing different sizes will probably not be a problem, even considering point 1. But when your SSDs are near their performance maximum, you can also run into a performance problem once the smaller SSDs are full.
- When you have different CPGs for different disk sizes (how this can be done, you can read here), you must check before replacing a failed disk with a disk of a new size: will the replacement disk become part of the right CPG? If not, you should redefine your CPG disk filter. By the way, this can no longer be done in SSMC – you need the CLI. See point 4.
- What about filtering disks for CPGs by cage, or by position in the cage, instead of by disk size? Since I know that HPE replaces 900GB disks with 1.2TB disks, this is my preferred option when different CPGs are desired.
For example you can use this command to change the disk filter for an existing CPG:
setcpg -sdgs 32g -t r6 -ha mag -ssz 10 -p -devtype NL -cg 2 -saga "-ha -p -devtype NL -cg 2" NL_r6_cage2
The meaning of the different parameters can be found here. Option -devtype is mandatory for option -cg, which selects disks by cage. You can list more than one cage by separating them with commas (1,2), or define a range (1-3). Another option is to define the filter by disk positions. Check the 3PAR Command Line Reference for more information – command:
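For example, the cage filter of the setcpg command above could be varied like this (the CPG names and the elided options are just placeholders):

```
# Single cage
setcpg ... -p -devtype NL -cg 2 NL_r6_cage2

# Several cages, comma-separated
setcpg ... -p -devtype NL -cg 1,2 NL_r6_cage1_2

# A range of cages
setcpg ... -p -devtype NL -cg 1-3 NL_r6_cage1_3
```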