This post collects detailed information about HP 3PAR storage. Most of it also applies to the new 8000 series. It is not an introduction to 3PAR, though: you should already know 3PAR terminology to get useful information out of this post.
- Performance-based sizing: size the system based on Fast Class (FC) disks.
- Space-based sizing: check the available space in the CPG.
- Check the cache hit rate: a hit rate of 100% indicates sequential reads, so SSDs are not needed.
- Use a CPG/region IO density report to check whether SSDs would be useful or necessary.
- Locality of IOs is critical: many IOs must hit a small amount of data for SSDs to pay off.
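As a rough sketch, these checks map to 3PAR CLI commands like the following (the CPG name `mycpg` is a placeholder, and the IO density report assumes on-node System Reporter data is available):

```
# Space-based sizing: free space the CPG can still draw on
showspace -cpg mycpg

# Region IO density report; a large share of IOs hitting a small
# share of the regions indicates that an SSD tier would pay off
srrgiodensity
```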
- Because of the ASIC, RAID 5 volumes can deliver very high performance. RAID 5 should therefore be the default design for new volumes.
- Full-mesh passive backplane.
- The system does wide striping at the controller-node level.
- Tasks of the ASIC:
- Zero detection
- Raid calculation
- Data Integrity Feature
- Dedup (8000 series).
- In the 7400/8400, one controller per node pair can fail without interruption.
- Dedup is available only on SSDs.
- This is because even sequential IO becomes random when dedup is used. From a performance perspective, random reads are the worst case for a storage system, because they have to be served from disk.
- In general: check the restore speed of backup devices with a dedup option, because a restore is random read.
- Writes go to cache first, so writes are normally not a performance problem. The warning "delayed acknowledgement" is an indicator that the write cache is getting full.
- The cache split is always 50% read, 50% write.
- The write part is dynamic and can also be used as read cache.
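As an illustration, cache pressure can be watched with `statcmp` (a sketch; exact column names vary by 3PAR OS version):

```
# Cache memory page statistics per node; rising delayed-ack
# counters indicate the write cache is filling up
statcmp -iter 1
```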
- Every disk in a 3PAR system is associated with one controller node as primary and another one as secondary.
- When a Virtual Volume (VV) is exported, 3PAR creates 4 VLUNs (on the 7200 and 7400, and also on the 8000 series).
- On 7200/7400 systems, availability "magazine" effectively means availability "disk".
- Availability "port" applies to the F-Class series.
- Recommended growth increment:
- For spindles: 32 GB
- For SSDs: 8 GB (i.e. 8 GB per node pair)
- Otherwise many regions end up reserved but unused.
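For example, the growth increment can be set when creating a CPG (RAID type, availability level, and the CPG name below are illustrative placeholders, not recommendations):

```
# CPG on spindles with a 32 GB growth increment
createcpg -t r5 -ha mag -sdgs 32g FC_r5_cpg
```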
- TPVVs (thin provisioned VVs) share LDs (logical disks).
- FPVVs (fat provisioned VVs) have dedicated LDs.
- use CLI:
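As a sketch, thin and fat volumes are created with `createvv` (CPG and volume names are placeholders):

```
# Thin provisioned VV (TPVV): draws space on demand from shared LDs
createvv -tpvv FC_r5_cpg thinvol01 500g

# Fully provisioned VV (FPVV): dedicated LDs allocated up front
createvv FC_r5_cpg fatvol01 500g

# Show the logical disks backing a VV
showld -vv thinvol01
```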
- When an LD is filled to 75%, 3PAR adds a growth increment (vertically or horizontally).
- There is no reason to create more than one CPG with the same settings.
- Compact a CPG to reclaim unused regions.
- Dynamic Optimization enables you to move regions between CPGs.
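A hedged example of both operations (CPG and volume names are placeholders; check the exact `tunevv` syntax for your 3PAR OS version):

```
# Reclaim unused regions from a CPG
compactcpg mycpg

# Dynamic Optimization: move a volume's user space to another CPG
tunevv usr_cpg SSD_r5_cpg myvol01
```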
- When adding additional hardware (disks, nodes):
- After just adding disks, the new capacity is available to new and existing volumes, but the added performance only benefits new volumes!
- To use both capacity and performance for new and existing volumes, run `tunesys`.
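The rebalance after a hardware upgrade is a single command; as a sketch:

```
# Verify the new disks are visible, then re-stripe existing LDs
# across all spindles so existing volumes gain performance too
showpd
tunesys
```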
- Regions in admin space: 64 MB
- Regions in user space: 256 MB
- During Dynamic Optimization, regions are copied as two sub-regions (each half the size of a full region).
- Copy CPG
- Used for snapshots (copy-on-write).
- Also used when replication is enabled: if the connection between the systems breaks, the copy CPG is used to record the delta data.
- You can create a dedicated copy CPG with a grow limit to reduce the risk of filling up all space.
- If the copy CPG runs full, allow stale snapshots (a property of the VV) so the source volume stays online and the snapshots are allowed to become stale (invalid).
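As an illustration (CPG, size, and volume names are placeholders):

```
# Copy CPG with a hard growth limit
createcpg -sdgl 500g FC_copy_cpg

# Point a volume's snapshot space at it and allow stale snapshots,
# so the source VV stays online if the copy CPG runs full
setvv -snp_cpg FC_copy_cpg myvol01
setvv -pol stale_ss myvol01
```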
3PAR and VMware Interoperability
- For a VMware storage cluster with Storage DRS enabled on a 3PAR system with Adaptive Optimization enabled:
- Disable DRS decisions based on performance (latency).
- Best practice is to allow decisions based on capacity.
- VMware Storage IO Control
- VMware Storage IO Control is fully compatible with Adaptive Optimization.
Adaptive Optimization (AO)
- AO follows a trend.
- If a volume is hot on only one day per week, for example, use scheduled Dynamic Optimization instead of AO.
- AO is designed for capacity management, not for performance!
- Interpreting the counters:
- Used: chunklets actually in use
- Reserved: chunklets pre-created for later use
- Windows Server 2012 R2 uses T10 UNMAP to notify the storage that data has been deleted.
- VMware still does not unmap automatically! When zeros are written to reclaim storage, the ASIC's zero detection does not actually write the zeros, but the space is reclaimed.
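On ESXi 5.5 and later, the reclaim can be triggered manually; a sketch (the datastore name is a placeholder):

```
# Reclaim dead space on a VMFS datastore via T10 UNMAP
esxcli storage vmfs unmap -l Datastore01
```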
This licensed feature just means: Host sets + Volume sets.
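As a sketch of what this buys you (host, volume, and set names are placeholders):

```
# Group hosts and volumes, then export the whole volume set
# to the whole host set in a single step
createhostset esx_cluster esx01
createhostset -add esx_cluster esx02
createvvset vmfs_vols vmfsvol01
createvlun set:vmfs_vols 10 set:esx_cluster
```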
- Every host port has a partner port on the pair node.
- For a planned action (e.g. a firmware update), the port fails over to its partner port, so the MPIO software on the hosts does not have to perform a path failover. The state of a port is either native or guest (running on the partner port).
- Available only on 4-node systems.
- When a node crashes, its partner node selects a new partner in the other node pair to mirror its cache. If that node also crashes, its partner writes the cache of the failed node to disk.
- Read the Remote Copy guide to check all possible options and their requirements.
- Link pairs: minimum 2, maximum 8
- Links are used for load balancing and redundancy.
- native IP (RCP Port)
- native FC
- FCIP (FC over IP), using additional devices such as:
- HP 1606 Extension SAN Switch
- MPX 200
- Over one link pair you can replicate either synchronously or asynchronously, not both.
- For these three topologies the volumes must differ: 1:N means one system replicates N different volumes.
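A hedged example of setting up one synchronous group (system, group, and volume names are placeholders; see the Remote Copy guide for the authoritative syntax):

```
# Create a sync Remote Copy group to target system 3par-b,
# add a volume pair, and start replication
creatercopygroup rcg_sync 3par-b:sync
admitrcopyvv myvol01 rcg_sync 3par-b:myvol01_dr
startrcopygroup rcg_sync
```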
- Synchronous Long Distance replication:
- Two systems are synced synchronously.
- A third system is synced periodically.
- Synchronous replication results in higher write latency.
- Requirements for replicating between 3 different sites (native IP):
- Jumbo frames
- Extra subnet.
- Transparent failover licenses necessary
- Peer Persistence and Remote Copy
- OR: Remote Copy suite
- max. latency: 2.6 ms
- NEW with 3PAR OS 3.2.2: max. latency 5 ms