3PAR: configure AFC with SSDs only for AFC

When you initialize a new 3PAR, at least the size of one disk per disk type is reserved as spare space, depending on your protection level. Spare chunklets are also created when you add disks to a running system; this is done by admithw. For example, when you initialize a system containing two SSDs intended for Adaptive Flash Cache (AFC), the size of one SSD is reserved as spare. So without writing a single byte to the SSDs, 50% of the capacity is already used! So how do you get the most space for AFC?

  1. Check the capacity of your physical disks (PDs) by running showpd -c. You will see free and used chunklet counts; spare chunklets are also listed.
  2. Remove the spare chunklets from the disk by running removespare n:a, where n is the PD ID of your SSD.
  3. Check the capacity again with showpd -c. You will see that free is the same as before, but uninit now shows the size of the previous spares.
  4. Wait a few minutes: uninit decreases and free increases as the chunklets are initialized automatically.
  5. When all chunklets are initialized and therefore free, go ahead and create the cache for AFC in the GUI or CLI (createflashcache). SSMC needs a few minutes to see the new free space on the PDs.
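The capacity math behind the steps above can be sketched as follows. The SSD size (480 GiB) and the 1 GiB chunklet size are illustrative assumptions, not values from any specific array:

```python
# Sketch: how spare reservation eats AFC capacity on a two-SSD system.
# Assumption: two 480 GiB SSDs dedicated to AFC, 1 GiB chunklets, and
# admithw reserving roughly one disk's worth of spare chunklets.

def afc_capacity_gib(ssd_sizes_gib, spare_gib):
    """Usable AFC capacity = total SSD capacity minus reserved spares."""
    return sum(ssd_sizes_gib) - spare_gib

ssds = [480, 480]          # two SSDs dedicated to AFC
default_spare = max(ssds)  # ~one disk's worth reserved as spare

print(afc_capacity_gib(ssds, default_spare))  # 480 -> only 50% usable
print(afc_capacity_gib(ssds, 0))              # 960 after removespare
```

This is why removing the spare chunklets doubles the space available for createflashcache in this two-SSD scenario.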

3PAR (iSCSI) – resolve high write latency on ESXi hosts

FC AFA (all-flash array) 3PAR systems show quite good latency on reads and writes. When operating an iSCSI AFA 3PAR, it can happen that the system shows rather high write latency on ESXi hosts. In this post you can read how to fix this.

Continue reading “3PAR (iSCSI) – resolve high write latency on ESXi hosts”


Backup from snapshot on target 3PAR

In a 3PAR Peer Persistence configuration, volumes are synchronously replicated between two 3PAR arrays. For each volume, one array acts as the primary or source array; this array exports the volume for read/write access. The other array acts as the secondary, the target of the replication. For performance reasons it can be an advantage when backup reads data from the target array. This is easily possible with Veeam B&R 9.5.

Continue reading “Backup from snapshot on target 3PAR”


3PAR – does CPG setsize to disk-count ratio matter?

In a 3PAR system it is best practice that the setsize defined in a CPG is a divisor of the number of disks in the CPG. The setsize basically defines the size of a sub-RAID within the CPG. For example, a setsize of 8 in RAID 5 means a sub-RAID uses 7 data disks and one parity disk.
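The best practice can be expressed as a simple check. The disk counts below are illustrative:

```python
# Sketch: is the CPG setsize a divisor of the disk count in the CPG?
# If not, the last sub-RAID set cannot be placed evenly across disks.

def setsize_is_divisor(disk_count: int, setsize: int) -> bool:
    return disk_count % setsize == 0

print(setsize_is_divisor(24, 8))  # True: 24 disks form exactly 3 full sets
print(setsize_is_divisor(20, 8))  # False: 20 disks leave a partial set
```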

Continue reading “3PAR – does CPG setsize to disk-count ratio matter?”


Remove a cage from a 3PAR array

Removing a cage can be a simple task. But it can also be impossible without help from HPE. Before starting at point 1, you should probably check point 3 first.

Generally, I would recommend doing any of these steps ONLY when you

  • are very familiar with 3PAR systems!
  • know what you are doing!
  • know the consequences of your actions!

If you have doubts about any of the following steps, contact HPE! You should also use the 3PAR Command Line Reference Guide to check the commands used.

Continue reading “Remove a cage from a 3PAR array”


3PAR: Considerations when mixing sizes of same disk type

Generally it is supported to mix disks of different sizes of the same type within a 3PAR system. For example, you can use 900GB and 1.2TB FC disks – within the same cage and even within the same CPG. When a disk fails, HPE sends a replacement disk. Some time ago, the stock of 900GB FC disks seems to have run out, so when a 900GB disk fails, you will probably get a 1.2TB disk instead.

So how do you handle different disk sizes? Here are a few points to consider:

    1. How does a 3PAR system handle different sizes within the same CPG? The system tries to put the same amount of data on every disk in a CPG – no matter whether the disk sizes differ. When the smaller disks are full, the larger disks continue to fill up. So replacing just a few disks within a CPG with larger disks does not matter – as long as the smaller disks are not running full. When that happens, only the larger disks receive new data, which can lead to a serious performance problem.
    2. When talking about SSDs: mixing different sizes will probably be no problem, even considering point 1. But when your SSDs are near their performance maximum, you can also run into a performance problem once the smaller SSDs are full.
    3. When you have different CPGs for different disk sizes (how this can be done, you can read here), you must check before replacing a failed disk with a disk of a new size: will the replacement disk become part of the right CPG? If not, you should re-define your CPG disk filter. By the way, this cannot be done in SSMC any more – you need the CLI. See point 4.
    4. What about filtering disks for CPGs by cage or by position in the cage instead of by disk size? Since I know that HPE replaces 900GB disks with 1.2TB disks, this is my preferred option when different CPGs are desired.
      For example, you can use this command to change the disk filter of an existing CPG:
      setcpg -sdgs 32g -t r6 -ha mag -ssz 10 -p -devtype NL -cg 2 -saga "-ha mag -p -devtype NL -cg 2" NL_r6_cage2
      The meaning of the different parameters can be found here. Option -devtype is mandatory for option -cg, which selects by cage. You can list more than one cage by separating them with commas (1,2) or by defining a range (1-3). Another option is to define the filter as disk positions. Check the 3PAR Command Line Reference for more information (command: setcpg).
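The fill behaviour from point 1 can be sketched with a small simulation. The disk sizes and the amount of data written are illustrative assumptions:

```python
# Sketch of point 1: the array spreads writes evenly across all disks
# until the smaller ones are full; after that, only the larger disks
# receive new data. Four 900 GiB and two 1200 GiB disks are assumed.

def distribute(disk_sizes_gib, data_gib):
    """Per-disk usage after writing data_gib evenly, spilling onto
    larger disks once the smaller ones are full."""
    used = [0.0] * len(disk_sizes_gib)
    remaining = float(data_gib)
    while remaining > 1e-9:
        open_disks = [i for i, u in enumerate(used) if u < disk_sizes_gib[i]]
        if not open_disks:
            break  # everything is full
        share = remaining / len(open_disks)
        for i in open_disks:
            add = min(share, disk_sizes_gib[i] - used[i])
            used[i] += add
            remaining -= add
    return used

# Write 5800 GiB: smaller disks stop at 900, larger ones climb to ~1100
print(distribute([900] * 4 + [1200] * 2, 5800))
```

Once the 900 GiB disks are full, every new write lands on only two spindles instead of six – that is the performance cliff the post warns about.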

Create CPG using Disk Filter

Recently I had to create a new 3PAR CPG using just the newly added 1.2TB disks. But the system already contained a cage full of 600GB disks. While it was straightforward to change the existing CPG to use all disks in cage 0 (using the 3PAR Management Console – the StoreServ Management Console (SSMC) does not support this feature any more), it was not possible to create a new CPG for all FC disks in cage 1. The GUI offers no filter for cage number, slot in cage, magazine or chunklet size. Using a simple version of the createcpg command did not work either; all I got was the error: "Error: no available space for given (invalid?) parameters".

[Update] Check also my post (3PAR: Considerations when mixing sizes of same disk type) about disks of same type but different size within an array.

To get the work done, I used this command:

createcpg -sdgs 32g -t r5 -ha mag -ssz 4 -ss 128 -ch first -p -devtype FC -tc_gt 546 -saga "-ha mag -p -devtype FC -tc_gt 546" FC_r5_1200_AO


  • -sdgs 32g
    Growth Increment. Note: use 32g instead of 32768!
  • -t
    Raid-Level (r0, r1, r5, r6)
  • -ha
    Availability (port, cag, mag)
  • -ssz
    Set-size. For example: set-size of 4 when using Raid5 means 3+1
  • -ss
    Step-size (KB)
  • -ch
    Preferred chunklets (first, last)
  • -p
    Selects disks by the following pattern filters:
    • -devtype
      Disktype (FC, NL, SSD)
    • -tc_gt
      Means: "total chunklets greater than". -tc_gt 546 selects all disks that have more than 546 chunklets (size of a chunklet = 1G –> use all disks larger than 600GB)
    • -tc_lt
      Means: “total chunklets less than”.
  • -saga
    This parameter describes the admin space of the CPG. If you don't specify this parameter, the characteristics of the admin space will differ from those of the user space.
  • FC_r5_1200_AO
    Name of the new CPG
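The -p … -tc_gt selection logic can be illustrated with a short sketch. The disk inventory and the 1.2TB chunklet count below are made-up values; on a real array, showpd -c reports the actual counts:

```python
# Sketch of the -p -devtype FC -tc_gt 546 filter: pick candidate disks
# whose total chunklet count exceeds the threshold. Disk data is
# invented for illustration (1 GiB chunklets assumed).

disks = [
    {"id": 0, "type": "FC", "total_chunklets": 546},   # ~600GB FC disk
    {"id": 1, "type": "FC", "total_chunklets": 1117},  # ~1.2TB FC disk
    {"id": 2, "type": "NL", "total_chunklets": 1862},  # wrong devtype
]

def matches_filter(disk, devtype="FC", tc_gt=546):
    """Mimics -devtype and -tc_gt: strictly *greater than* tc_gt."""
    return disk["type"] == devtype and disk["total_chunklets"] > tc_gt

selected = [d["id"] for d in disks if matches_filter(d)]
print(selected)  # only the 1.2TB FC disk qualifies
```

Note that the comparison is strictly greater than, which is why -tc_gt 546 excludes the 600GB disks (546 chunklets) themselves.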