
Showing posts from September, 2015

How to Replace a Drive in Solaris[TM] ZFS

There are three common cases when replacing a disk in Solaris ZFS:

Case 1 - The LUN went offline, then came back online.
Case 2 - Replace a disk with the same target number.
Case 3 - Replace a disk with a different target number.

The steps below illustrate how to proceed in each case.

SOLUTION

Case 1. The drive went offline, then came back online. No hardware problem. (A common situation with SAN LUNs.)

There are two methods we can try:

Method 1. Online the drive once the LUN comes back.

The drive went offline and the zpool became degraded:

        18. c3t54d0 <drive not available>
             /pci@1f,0/pci@1/pci@3/SUNW,qlc@5/fp@0,0/ssd@w22000020370705f1,0

# zpool status -v viper
  pool: viper
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the p...
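The excerpt is cut off before the recovery commands, but Method 1 amounts to onlining the device and rechecking pool status. The sketch below is a dry run, assuming the pool and device names from the example above (viper, c3t54d0); run() echoes each command instead of executing it, since the real commands need a live Solaris system with that pool.

```shell
# Dry-run sketch of Case 1, Method 1. The pool (viper) and device
# (c3t54d0) names are taken from the example output above.
run() { echo "+ $*"; }

# Once the SAN LUN is reachable again, bring the device back online:
run zpool online viper c3t54d0

# Then verify that the pool resilvers and returns to ONLINE:
run zpool status -v viper
```

On a real system you would drop the run() wrapper and execute the zpool commands directly.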

Understanding How ZFS Calculates Used Space

This document explains how ZFS calculates the amount of used space within a zpool and provides examples of how various ZFS features affect that calculation.

DETAILS

A ZFS storage pool is a logical collection of devices that provides space for datasets such as filesystems, snapshots and volumes. Datasets in a pool can be used as filesystems (mounted and unmounted like any other); to take snapshots, which provide read-only copies of the filesystem at a point in time (clones are writable copies of snapshots); and to create volumes that can be accessed as raw or block devices. Properties can be set on any of these datasets. The interaction of these datasets and their properties, such as quotas, reservations, compression and deduplication (Solaris 11 only), plays a role in how ZFS calculates space usage. Neither the du nor the df command has been updated to account for ZFS file system space usage. When calculating ZFS space usage, always use the following command:

# zfs list -r -o space <pool_name...
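To see why the space-oriented listing matters: for a dataset, the USED column reported by zfs list -o space is the sum of the USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD columns, which is exactly the breakdown du and df cannot show. The sketch below uses hypothetical figures (the sizes are made up for illustration) to show that decomposition:

```shell
# Illustrative only: hypothetical figures (in MB) showing how the USED
# column of 'zfs list -o space' decomposes for one dataset.
usedsnap=1024      # USEDSNAP: space held by snapshots of this dataset
usedds=4096        # USEDDS: space used by the dataset's own data
usedrefreserv=0    # USEDREFRESERV: refreservation beyond referenced data
usedchild=2048     # USEDCHILD: space used by descendant datasets

used=$((usedsnap + usedds + usedrefreserv + usedchild))
echo "USED = ${used} MB"
# prints: USED = 7168 MB
```

Deleting a large file therefore may not reduce USED if a snapshot still references the file's blocks; the space simply moves from USEDDS to USEDSNAP.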

How Solaris ZFS Cache Management Differs From UFS and VxFS File Systems

ZFS manages its cache differently from other filesystems such as UFS and VxFS. ZFS's use of kernel memory as a cache results in higher kernel memory allocation than with UFS and VxFS filesystems. Monitoring a system with tools such as vmstat will therefore report less free memory with ZFS, which may lead to unnecessary support calls.

SOLUTION

This is due to ZFS's cache management being different from that of UFS and VxFS. Unlike those filesystems, ZFS does not use the page cache. In the older filesystems, cached pages can be moved to the cache list after being written to the backing store and are then counted as free memory; ZFS instead holds its cache in kernel memory. ZFS therefore affects how the VM subsystem accounts for memory. Monitoring systems with vmstat(1M) and prstat(1M) will report less free memory when ZFS is used heavily, e.g. when copying large files into a ZFS filesystem. The same load running on a UFS filesystem would u...
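When free memory looks low under heavy ZFS use, the usual first step is to check how much of kernel memory is actually the ZFS cache (the ARC), which shrinks under memory pressure. The sketch below is a dry run of two standard Solaris commands; run() echoes them instead of executing them, since both need a live Solaris kernel.

```shell
# Dry-run sketch: Solaris commands for attributing "missing" free memory
# to the ZFS cache.
run() { echo "+ $*"; }

# Report the current ARC size (bytes) from the ZFS arcstats kstat:
run kstat -p zfs:0:arcstats:size

# Break down kernel vs. ZFS vs. free memory pages with the mdb memstat dcmd:
run 'echo ::memstat | mdb -k'
```

If the ARC accounts for most of the gap, the low vmstat free-memory figure is expected behavior rather than a leak.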