vol_checkpt_default
|
The interval at which utilities performing recovery or
resynchronization operations load the current offset into the kernel as a
checkpoint. If a system failure occurs during such an operation, the
operation does not have to restart from the beginning; it can resume
from the last checkpoint reached.
The default value is 20480 sectors (10MB).
Increasing this size reduces the overhead of checkpointing on recovery
operations, at the expense of more work being repeated if a system
failure occurs during a recovery.
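The trade-off above can be stated as simple arithmetic: the worst case after a crash is that everything since the last checkpoint is redone, that is, up to one full interval. A small sketch (a hypothetical helper, not part of VxVM):

```python
def max_rework_mb(checkpoint_sectors, sector_bytes=512):
    """Worst-case recovery work repeated after a crash: everything
    since the last checkpoint, i.e. up to one checkpoint interval."""
    return checkpoint_sectors * sector_bytes / (1 << 20)

# Default of 20480 sectors: at most 10MB is re-done after a failure.
print(max_rework_mb(20480))  # 10.0
```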
|
vol_default_iodelay
|
The count in clock ticks for which utilities pause if they have been
directed to reduce the frequency of issuing I/O requests, but have not
been given a specific delay time. This tunable is used by utilities
performing operations such as resynchronizing mirrors or rebuilding
RAID-5 columns.
The default value is 50 ticks.
Increasing this value results in slower recovery operations and
consequently lower system impact while recoveries are being performed.
|
vol_fmr_logsz
|
The maximum size in kilobytes of the bitmap that Non-Persistent
FastResync uses to track changed blocks in a volume. The number of
blocks in a volume that are mapped to each bit in the bitmap depends on
the size of the volume, and this value changes if the size of the volume
is changed.
For example, if the volume size is 1 gigabyte and the system block
size is 512 bytes, a value for this tunable of 4 yields a map that
contains 32,768 bits, each bit representing one region of 64 blocks.
The larger the bitmap, the fewer blocks are mapped to each
bit. This can reduce the amount of reading and writing
required on resynchronization, at the expense of requiring more
non-pageable kernel memory for the bitmap. Additionally, on clustered
systems, a larger bitmap size increases the latency in I/O performance,
and it also increases the load on the private network between the
cluster members. This is because every other member of the cluster must
be informed each time a bit in the map is marked.
Since the region size must be the same on all nodes in a cluster for a
shared volume, the value of this tunable on the master node
overrides the tunable values on the slave nodes, if these values are
different. Because the value for a shared volume can change, the value of
this tunable is retained for the life of the volume.
In configurations that have thousands of mirrors with attached
snapshot plexes, the total memory consumed by the bitmaps can be
significantly higher than is usual for VxVM.
The default value is 4KB. The minimum and maximum permitted values are 1KB and 8KB.
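The relationship between the tunable value, the volume size, and the region size can be sketched as follows (a hypothetical helper, not part of VxVM; it reproduces the arithmetic of the example above):

```python
def fmr_region_blocks(volume_bytes, logsz_kb, block_size=512):
    """Blocks of a volume covered by each bit of the Non-Persistent
    FastResync bitmap: a logsz_kb kilobyte map holds
    logsz_kb * 1024 * 8 bits, and the volume's blocks are divided
    evenly among those bits."""
    bitmap_bits = logsz_kb * 1024 * 8
    volume_blocks = volume_bytes // block_size
    return volume_blocks // bitmap_bits

# The example from the text: a 1GB volume, 512-byte blocks, and
# vol_fmr_logsz=4 gives 32,768 bits, each covering 64 blocks.
print(fmr_region_blocks(1 << 30, 4))  # 64
# Doubling the map to 8KB halves the region size to 32 blocks.
print(fmr_region_blocks(1 << 30, 8))  # 32
```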
Note:
|
The value of this tunable does not have any effect on Persistent FastResync.
|
|
vol_kmsg_resend_period
|
This is an obsolete tunable parameter. Use vol_kmsg_resend_period_usecs instead. If specified in /kernel/drv/vxio.conf, the value is internally converted to microseconds, and applied to vol_kmsg_resend_period_usecs instead.
|
vol_kmsg_resend_period_usecs
|
The value in microseconds of the kernel message (KMSG) resend period
that is used by the clustering functionality of VxVM with SunCluster.
The default value is 3000000 microseconds (3 seconds).
This tunable should be used instead of vol_kmsg_resend_period from release 5.0 onward as it allows finer granularity to be applied to performance tuning.
|
vol_kmsg_send_period
|
This is an obsolete tunable parameter. Use vol_kmsg_send_period_usecs instead. If specified in /kernel/drv/vxio.conf, the value is internally converted to microseconds, and applied to vol_kmsg_send_period_usecs instead.
|
vol_kmsg_send_period_usecs
|
The value in microseconds of the kernel message (KMSG) send period
that is used by the clustering functionality of VxVM with SunCluster.
The default value is 1000000 microseconds (1 second). This tunable
should be used instead of vol_kmsg_send_period from release 5.0 onward as it allows finer granularity to be applied to performance tuning.
|
vol_max_vol
|
The maximum number of volumes that can be created on the system. The
minimum and maximum permitted values are 1 and the maximum number of
minor numbers representable on the system.
The default value is 131071.
|
vol_maxio
|
The maximum size of logical I/O operations that can be performed
without breaking up the request. I/O requests to VxVM that are larger
than this value are broken up and performed synchronously. Physical I/O
requests are broken up based on the capabilities of the disk device and
are unaffected by changes to this maximum logical request limit.
The default value is 2048 sectors (1MB).
The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.
If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.
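The two constraints above can be checked mechanically. The sketch below is a hypothetical validation helper, not a VxVM utility; sector size of 512 bytes is assumed, as elsewhere in this document:

```python
SECTOR = 512  # bytes per sector, as assumed throughout this document

def check_vol_maxio(vol_maxio, voliomem_maxpool_sz, voldrl_min_regionsz,
                    drl_sequential=False):
    """Check the documented relationships. vol_maxio and
    voldrl_min_regionsz are in sectors; voliomem_maxpool_sz in bytes.
    Returns a list of violated rules (empty if all hold)."""
    problems = []
    if voliomem_maxpool_sz < 10 * vol_maxio * SECTOR:
        problems.append("voliomem_maxpool_sz must be >= 10 * vol_maxio")
    if drl_sequential and voldrl_min_regionsz < vol_maxio // 2:
        problems.append("voldrl_min_regionsz must be >= vol_maxio / 2")
    return problems

# The defaults (vol_maxio=2048 sectors, voliomem_maxpool_sz=128MB,
# voldrl_min_regionsz=1024 sectors) satisfy both rules.
print(check_vol_maxio(2048, 128 << 20, 1024, drl_sequential=True))  # []
```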
|
vol_maxioctl
|
The maximum size of data that can be passed into VxVM via an ioctl
call. Increasing this limit allows larger operations to be performed.
Decreasing the limit is not generally recommended, because some
utilities depend upon performing operations of a certain size and can
fail unexpectedly if they issue oversized ioctl requests.
The default value is 32768 bytes (32KB).
|
vol_maxparallelio
|
The number of I/O operations that the vxconfigd daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ or VOL_VOLDIO_WRITE ioctl call.
The default value is 256. This value should not be changed.
|
vol_maxspecialio
|
The maximum size of an I/O request that can be issued by an ioctl
call. Although the ioctl request itself can be small, it can request
that a large I/O operation be performed. This tunable limits the size of
such I/O requests. If necessary, a request that exceeds this value is
either failed, or broken up and performed synchronously.
The default value is 2048 sectors (1MB).
Raising this limit can cause a deadlock if an I/O request requires
more memory or kernel virtual mapping space than exists. The maximum
limit for this tunable is 20% of the smaller of physical memory or
kernel virtual memory. It is inadvisable to exceed this limit, because
deadlock is likely to occur.
If stripes are larger than the value of this tunable, full stripe I/O
requests are broken up, which prevents full-stripe read/writes. This
throttles the volume I/O throughput for sequential I/O or larger I/O
requests.
This tunable limits the size of an I/O request at a higher level in
VxVM than the level of an individual disk. For example, for an 8 by 64KB
stripe, a value of 256KB only allows I/O requests that use half the
disks in the stripe; thus, it cuts potential throughput in half. If you
have more columns or you have used a larger interleave factor, then your
relative performance is worse.
This tunable must be set, as a minimum, to the size of your largest stripe (RAID-0 or RAID-5).
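The stripe arithmetic above can be sketched with a hypothetical helper (not part of VxVM), reproducing the 8 by 64KB example:

```python
def columns_covered(io_limit_kb, ncols, stripe_unit_kb):
    """Number of stripe columns a single I/O bounded by io_limit_kb
    can span; the full stripe needs ncols * stripe_unit_kb."""
    return min(ncols, io_limit_kb // stripe_unit_kb)

# The example from the text: an 8-column stripe with a 64KB stripe
# unit. A 256KB limit reaches only 4 of the 8 disks, halving the
# potential throughput; 512KB covers the full stripe.
print(columns_covered(256, 8, 64))  # 4
print(columns_covered(512, 8, 64))  # 8
```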
|
vol_subdisk_num
|
The maximum number of subdisks that can be attached to a single plex.
There is no theoretical limit to this number, but it has a default
value of 4096, which can be changed if required.
|
volcvm_smartsync
|
If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups.
|
voldrl_max_drtregs
|
The maximum number of dirty regions that can exist on the system for
non-sequential DRL on volumes. A larger value may result in improved
system performance at the expense of recovery time. This tunable can be
used to regulate the worst-case recovery time for the system following a
failure.
The default value is 2048.
|
voldrl_max_seq_dirty
|
The maximum number of dirty regions allowed for sequential DRL. This
is useful for volumes that are usually written to sequentially, such as
database logs. Limiting the number of dirty regions allows for faster
recovery if a crash occurs.
The default value is 3.
|
voldrl_min_regionsz
|
The minimum number of sectors for a dirty region logging (DRL) volume
region. With DRL, VxVM logically divides a volume into a set of
consecutive regions. Larger region sizes tend to cause the cache
hit-ratio for regions to improve. This improves the write performance,
but it also prolongs the recovery time.
The default value is 1024 sectors.
If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.
|
voliomem_chunk_size
|
The granularity of memory chunks used by VxVM when allocating or
releasing system memory. A larger granularity reduces CPU overhead due
to memory allocation by allowing VxVM to retain hold of a larger amount
of memory.
The default value is 64KB.
|
voliomem_maxpool_sz
|
The maximum memory requested from the system by VxVM for internal
purposes. This tunable has a direct impact on the performance of VxVM as
it prevents one I/O operation from using all the memory in the system.
VxVM allocates two pools that can grow up to this size, one for RAID-5 and one for mirrored volumes.
A write request to a RAID-5 volume that is greater than one tenth of
the pool size is broken up and performed in chunks of one tenth of the
pool size.
A write request to a mirrored volume that is greater than half the
pool size is broken up and performed in chunks of one half of the pool
size.
The default value is 5% of memory up to a maximum of 128MB.
The value of voliomem_maxpool_sz must be greater than the value of volraid_minpool_size.
The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.
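The chunking rules above determine how many passes a large write takes. The sketch below is a hypothetical illustration of that arithmetic, not VxVM code:

```python
import math

def write_chunks(request_bytes, pool_bytes, mirrored=False):
    """Number of chunks a write is split into, per the rules above:
    RAID-5 writes larger than pool/10 proceed in pool/10 chunks;
    mirrored writes larger than pool/2 proceed in pool/2 chunks."""
    chunk = pool_bytes // 2 if mirrored else pool_bytes // 10
    return max(1, math.ceil(request_bytes / chunk))

# With the default maximum pool of 128MB, a 60MB RAID-5 write is
# performed in ~12.8MB chunks, while the same mirrored write fits
# under the half-pool threshold and goes through in one pass.
print(write_chunks(60 << 20, 128 << 20))                 # 5
print(write_chunks(60 << 20, 128 << 20, mirrored=True))  # 1
```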
|
voliot_errbuf_dflt
|
The default size of the buffer maintained for error tracing events.
This buffer is allocated at driver load time and is not adjustable for
size while VxVM is running.
The default value is 16384 bytes (16KB).
Increasing this buffer can provide storage for more error events at
the expense of system memory. Decreasing the size of the buffer can
result in an error not being detected via the tracing device.
Applications that depend on error tracing to perform some responsive
action are dependent on this buffer.
|
voliot_iobuf_default
|
The default size for the creation of a tracing buffer in the absence
of any other specification of desired kernel buffer size as part of the
trace ioctl.
The default value is 8192 bytes (8KB).
If trace data is often being lost due to this buffer size being too
small, then this value can be tuned to a more generous amount.
|
voliot_iobuf_limit
|
The upper limit to the size of memory that can be used for storing
tracing buffers in the kernel. Tracing buffers are used by the VxVM
kernel to store the tracing event records. As trace buffers are
requested to be stored in the kernel, the memory for them is drawn from
this pool.
Increasing this size can allow additional tracing to be performed at
the expense of system memory usage. Setting this value to a size greater
than can readily be accommodated on the system is inadvisable.
The default value is 4194304 bytes (4MB).
|
voliot_iobuf_max
|
The maximum buffer size that can be used for a single trace buffer.
Requests of a buffer larger than this size are silently truncated to
this size. A request for a maximal buffer size from the tracing
interface results (subject to limits of usage) in a buffer of this size.
The default size for this buffer is 1048576 bytes (1MB).
Increasing this buffer can provide for larger traces to be taken without loss for very heavily used volumes.
Care should be taken not to increase this value above the value of the voliot_iobuf_limit tunable.
|
voliot_max_open
|
The maximum number of tracing channels that can be open
simultaneously. Tracing channels are clone entry points into the tracing
device driver. Each vxtrace process running on a system consumes a single trace channel.
The default number of channels is 32.
The allocation of each channel takes up approximately 20 bytes even when the channel is not in use.
|
volpagemod_max_memsz
|
The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata.
The default value is 6144KB (6MB).
The valid range for this tunable is from 0 to 50% of physical memory.
The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.
Setting the value below 512KB fails if cache objects or volumes that
have been prepared for instant snapshot operations are present on the
system.
If you do not use the FastResync or DRL features that are implemented
using a version 20 DCO volume, the value can be set to 0. However, if
you subsequently decide to enable these features, you can use the vxtune command to change the value to a more appropriate one:
# vxtune volpagemod_max_memsz value
where the new value is specified in kilobytes. A value set with the vxtune command does not persist across system reboots unless you also adjust the value that is configured in the /kernel/drv/vxio.conf file.
|
volraid_minpool_size
|
The initial amount of memory that is requested from the system by
VxVM for RAID-5 operations. The maximum size of this memory pool is
limited by the value of voliomem_maxpool_sz.
The default value is 16384 sectors (8MB).
|
volraid_rsrtransmax
|
The maximum number of transient reconstruct operations that can be
performed in parallel for RAID-5. A transient reconstruct operation is
one that occurs on a non-degraded RAID-5 volume that has not been
predicted. Limiting the number of these operations that can occur
simultaneously removes the possibility of flooding the system with many
reconstruct operations, and so reduces the risk of causing memory
starvation.
The default value is 1.
Increasing this size improves the initial performance on the system
when a failure first occurs and before a detach of a failing object is
performed, but can lead to memory starvation.
|