Posts

Showing posts from January, 2014

Rock Ridge extensions ... pfs_mount?

Do you have problems mounting an Oracle CD, or perhaps one burned on Windows? Here are the minimal steps to mount a Rock Ridge CD on HP-UX 11i, and how to export it to another machine. On the system where you have the CD, do the following:

0.- mysystem:/#mkdir -p /mnt/pfs_cdrom (use your own)
1.- mysystem:/#nohup /usr/sbin/pfs_mountd &
2.- mysystem:/#nohup /usr/sbin/pfsd &
3.- Add the filesystem to /etc/pfs_fstab. Sample: /dev/dsk/cXtYdZ /mnt/pfs_cdrom pfs-rrip xlat=unix 0
3.1.- Since mounting Rock Ridge CDs is not usual, I prefer to do it manually rather than through pfs_fstab.
4.- If you want to export to another system, perform steps 5 and 6; if not, go to 7.
5.- Put the right entry for your filesystem in /etc/pfs_exports (man pfs_exports(5)).
6.- mysystem:/#pfs_exportfs -av
7.- mysystem:/#pfs_mount /mnt/pfs_cdrom

Now it is time to mount on the remote system, so you must perform a few steps there:

0.- remotesystem:/#mkdir -p /mnt/pfs_cdrom (use your own)
1.- remotesystem:/#nohup /usr/sbin/pfs_mountd...
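
For reference, hypothetical sample entries for the two files mentioned above. The device name and the remote host are placeholders, and the export option is assumed to follow exports(4)-style syntax; verify against pfs_fstab(4) and pfs_exports(5) on your release:

```
# /etc/pfs_fstab: device  mount-point  fs-type  translation  mount-order
/dev/dsk/c1t2d0  /mnt/pfs_cdrom  pfs-rrip  xlat=unix  0

# /etc/pfs_exports: directory and optional access list (placeholder host)
/mnt/pfs_cdrom -access=remotesystem
```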

Timestamp .sh_history

Do you want to timestamp or IP-sign every .sh_history entry? Try this; it's simply fantastic:

trap 'date "+# %c" | read -s' debug

or...

trap 'who am i -R | read -s' debug

...you can try any other command you can imagine to sign your history.
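
Note this trick relies on ksh behavior: in ksh, `read -s` saves the line it reads into the history file, so the debug trap writes a comment line before each command (in bash, `read -s` means "silent" instead, and bash users would normally set HISTTIMEFORMAT). A minimal, portable check of the timestamp line the trap would record:

```shell
# The ksh trap records a line like "# Mon Jan  6 12:00:00 2014" before each
# command. Verify the comment-prefixed timestamp format portably:
stamp=$(date "+# %c")
echo "$stamp"
case "$stamp" in
  "# "*) echo "format OK: line starts with a history comment marker" ;;
  *)     echo "unexpected format" ;;
esac
```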

Kernel not Relocatable - VPAR - Ignite

If you are doing something like...

vparcreate -p <name> / -a cpu::<x> / -a cpu:::<x>:<x> / -a cpu:0.10 -a cpu:<xx.yy> / -a mem::<nnnn> / -a io:<n.n.n...> / -a io:<n.n.n...> / -a io:<n.n.n...> -a io:<n.n.n...>:boot

...and you get an error similar to...

[MON] Booting Kernel file is not relocatable
File address is 0x0000000000020000 and memory address is 0x0000000004000000
Error reading program segments
Failed to load (n.n.n.n.....)/stand/vmunix

...when you are trying to install the OS by cloning another machine from Ignite, then in my experience it is because the original server captured with Ignite is not a vPar system: the original image simply doesn't have the vPar software installed, or has an older version of it. So in order to load the OS you must:

- Create a depot with your OS and vPar software:
make_depots -r <version ??B.11.11??> -s <source> (/dev/cdrom)
make_depots -r <version...

How to Enable Asynchronous I/O Disk HP-UX 11.11

To use async I/O with Oracle, Sybase, etc.:

a) From the SAM Kernel Configuration menu, choose Drivers and set the Pending State for asyncdisk to In.
b) From the Actions menu, rebuild the kernel.
c) At the Unix prompt, execute the following as root:
# insf
You should now have these entries in /dev:
crw-rw-rw- 1 bin bin 101 0x000000 Jan 23 12:41 /dev/async
crw-rw-rw- 1 bin bin 101 0x000000 Jan 23 12:41 /dev/asyncdsk
If you don't, try:
# insf -e
Simple, no? Another way is to do it manually; it's not difficult. Suppose you are doing it for Oracle:

1. cd /stand/build
2. /usr/lbin/sysadm/system_prep -s system
3. vi /stand/build/system and add the following 2 lines: asyncdsk & asyncdsk_included
4. mk_kernel -s /stand/build/system
5. cp /stand/system /stand/system.prev
6. cp /stand/build/system /stand
7. kmupdate /stand/build/vmunix_test
8. cd /
9. shutdown -r -y now
10. After the system reboots...
11. Use SAM to change the value of the param...
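
A quick way to sanity-check the device node afterwards is to parse the major number out of `ls -l /dev/async`; on HP-UX 11.11 the asyncdsk driver should show major 101. The sample line from the listing above is used here so the parsing can be demonstrated anywhere; on the real box you would set `line=$(ls -l /dev/async)`:

```shell
# Parse the major number (5th field of ls -l output for a character device)
line='crw-rw-rw- 1 bin bin 101 0x000000 Jan 23 12:41 /dev/async'
major=$(echo "$line" | awk '{print $5}')
echo "major=$major"
if [ "$major" = "101" ]; then
    echo "async driver major number looks right"
fi
```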

VXVM not started/ VXVM disabled during boot

Many times we get an error like "VXVM not started" in Solaris 10. To correct this problem, remove the file below and reboot the system; VxVM should then start:

rm /etc/vx/reconfig.d/state.d/install-db

Often vxconfigd is running but in disabled mode:

bash-3.00# ps -ef|grep vx
    root   748     1   0 20:30:19 ?           0:05 /sbin/vxesd
    root   626     1   0 20:30:13 ?           0:02 vxconfigd -m disable

In this state it does not bring any disks into the OS:

bash-3.00# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
bash-3.00# vxdg list
NAME      ...
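
The check and fix above can be wrapped in a small helper. This is a sketch: the optional prefix argument is only there so the function can be exercised against a scratch directory tree instead of the live root; on a real system you would call it with no argument and then reboot after removing the file:

```shell
# vxconfigd starts in disabled mode while this flag file exists (typically
# left behind by an incomplete VxVM install). Report its state.
check_installdb() {
    f="${1:-}/etc/vx/reconfig.d/state.d/install-db"
    if [ -f "$f" ]; then
        echo "install-db present: remove it ('rm $f') and reboot"
        return 1
    fi
    echo "install-db absent: VxVM should start at boot"
}

check_installdb    # inspect the live system
```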

Solaris 9 server rebooting again and again (bootblk missing)

After a routine reboot, the server kept rebooting again and again. The problem was a missing bootblk: the system had 2 mirrored disks with metadevices, and the missing bootblk caused the endless reboot cycle. The error was as below:

SunOS Release 5.9 Version Generic_118558-28 64-bit
Copyright 1983-2003 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Cannot mount root on /pseudo/md@0:0,0,blk fstype ufs
panic[cpu0]/thread=140a000: vfs_mountroot: cannot mount root
0000000001409970 genunix:vfs_mountroot+70 (0, 0, 0, 200, 14a8270, 0)
  %l0-3: 000000000149bc00 000000000149bc00 0000000000002000 00000000014e5b00
  %l4-7: 00000000014eb800 0000000001422568 000000000149c400 000000000149f400
0000000001409a20 genunix:main+90 (1409ba0, f000d348, 1409ec0, 399604, 2000, 500)
  %l0-3: 0000000000000001 000000000140a000 0000000001423728 0000000000000000
  %l4-7: 0000000078002000 000000000039c000 00000000014fdb18 0...
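
The usual repair (booted from CD or network into single-user mode) is to reinstall the bootblk with installboot. The sketch below only builds and prints the commands rather than executing them; the disk names are placeholders, and on an SVM-mirrored root the bootblk must be reinstalled on the underlying raw slice of each submirror:

```shell
# Dry-run: print the installboot command for a given raw root slice.
installboot_cmd() {
    echo "installboot /usr/platform/\`uname -i\`/lib/fs/ufs/bootblk $1"
}

# One call per side of the mirror (placeholder slices):
installboot_cmd /dev/rdsk/c0t0d0s0
installboot_cmd /dev/rdsk/c0t1d0s0
```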

Does not ping with hostname

Faced a surprising problem: the backup server was not able to ping certain hosts by short hostname. From the backup server, all host servers must be pingable to enable their backups. Problem: not able to ping by hostname. Pinging by FQDN/IP works, and nslookup works fine, but ping and traceroute by short hostname fail. Solution: there had been changes in the hosts distribution file; a few host IPs were changed and the changes were reflected in the backup server's hosts file. But ping/traceroute did not pick up the new addresses from /etc/hosts. /etc/hosts had the correct entry, but after a lot of head scratching I found that /etc/inet/ipnodes still had the older entry, and that was causing it. Looks pretty simple, but it consumed a lot of time :(
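
A small consistency check along these lines can catch the stale entry quickly. This is a sketch (the helper functions are hypothetical, not from the post); the file paths are parameters so it can be run against copies, and on the affected box you would call it as `check_name /etc/hosts /etc/inet/ipnodes <hostname>`:

```shell
# Print the first address whose name list contains the given hostname.
host_ip() {
    awk -v h="$2" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$1"
}

# Compare what a name maps to in two hosts-format files.
check_name() {
    a=$(host_ip "$1" "$3")
    b=$(host_ip "$2" "$3")
    if [ "$a" = "$b" ]; then
        echo "consistent: $3 -> $a"
    else
        echo "MISMATCH: $3 is ${a:-missing} in $1 but ${b:-missing} in $2"
    fi
}
```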

removing file system from veritas cluster

We need to remove a file system from the Veritas cluster configuration:

/dev/vx/dsk/testdg/testvol 62G   38G   24G  62% /db/US41/data/test

We must check the resource dependencies before starting to act:

root@river01# hares -dep testdb_orus41_test_mnt
#Group       Parent                  Child
testsg       testdb_US41_ora         testdb_orus41_test_mnt
testsg       testdb_orus41_test_mnt  testdb_orus41_test_vol

and in the same way find the dependencies for the volume resource.

Method 1: check the cluster configuration file main.cf for resource dependencies and work out the further steps from there. But this becomes tedious if the cluster is complex, as in my case, so the best bet then is Method 2.

Method 2:
1) Copy all cf files to /var/tmp
2) Run hacf -cftocmd /var/tmp which will create...
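
For the mount resource itself, the removal usually boils down to offlining it, unlinking both dependencies shown above, and deleting it. The sketch below is a dry-run (it echoes the VCS commands instead of executing them); the system name `river01` is taken from the prompt above and the group/resource names from the `hares -dep` output, but double-check them on your cluster:

```shell
# Dry-run of the VCS resource removal sequence; set RUN= (empty) on a real
# cluster node to actually execute the commands.
remove_mnt_resource() {
    RUN=echo
    $RUN haconf -makerw
    $RUN hares -offline testdb_orus41_test_mnt -sys river01
    $RUN hares -unlink testdb_US41_ora testdb_orus41_test_mnt
    $RUN hares -unlink testdb_orus41_test_mnt testdb_orus41_test_vol
    $RUN hares -delete testdb_orus41_test_mnt
    $RUN haconf -dump -makero
}

remove_mnt_resource
```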

Veritas volume is not mirroring due to mismatch plex and volume size

An unusual problem where the plex length and the volume size differ. I was trying to mirror this particular volume but kept getting an error. The volume had the status below:

BEFORE:
v  DXarchive90d -            ENABLED  ACTIVE   419430400 SELECT   -        fsgen
pl DXarchive90d-01 DXarchive90d ENABLED ACTIVE 418426880 CONCAT   -        RW
sd asmm7_8-01   DXarchive90d-01 asmm7_8  0      167270400 0         usp006_21 ENA
sd asmm7_9-01   DXarchive90d-01 asmm7_9  0      167270400 167270400 usp006_19 ENA
sd asmm7_10-01  DXarchive90d-01 asmm7_20 0      83886080  334540800 usp006_9  ENA

As can be seen, the plex and volume sizes differ by 1003520 sectors. Trying to add a mirror plex then throws an error. ...
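
The numbers quoted above can be re-derived directly from the vxprint columns (lengths are in 512-byte sectors): the three subdisk lengths sum to the plex length, and the volume length minus the plex length is the gap that blocks the mirror:

```shell
# Sanity-check the sizes from the vxprint output above (512-byte sectors).
vol=419430400
plex=$((167270400 + 167270400 + 83886080))
echo "plex length: $plex"
echo "gap: $((vol - plex)) sectors"
```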

Jumpstart server setup - for x86 client

Note again: this is only if you want to configure a Jumpstart server for x86 clients. So let's first do a quick introduction to the JumpStart server. It consists of three servers (yes, all can be on one machine):

1. Boot server (provides RARP, TFTP and bootparam services)
2. Configuration server (specifies the client's profile, list of software, and begin and finish scripts)
3. Install server (provides the OS/software to be installed on a client)

Basically, the following tasks are necessary for setting up a Jumpstart server:

1. Install the Solaris OS distribution
2. Set up the configuration server with configuration files, and verify them (config file syntax)
3. Share the installation directories (NFS server is running, right?)
4. Make sure the client has access

Installing the OS distribution: mount the Solaris OS DVD ISO image (see how ). Then install the server in a designated location, like /jumpstart/server/x86:

# /mnt/Solaris_10/Tools> ./setup_install_server /jumpstart/server/x86

Verify...
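
The install-server step, plus registering a client, might look like the dry-run below. Everything here is a placeholder sketch: the MAC address, server name `jsserver`, paths and client name are hypothetical, and the exact `add_install_client` options should be checked against the script in the Tools directory of your image:

```shell
# Dry-run: print the JumpStart server/client setup commands (run from the
# Tools directory of the mounted image); set RUN= (empty) to execute.
jumpstart_setup() {
    RUN=echo
    $RUN ./setup_install_server /jumpstart/server/x86
    $RUN ./add_install_client -e 0:14:4f:aa:bb:cc \
        -s jsserver:/jumpstart/server/x86 \
        -c jsserver:/jumpstart/config x86client i86pc
}

jumpstart_setup
```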

RAID-0 (stripe) on Solaris 10 using Solaris Volume Manager

The first step is to prepare the hard drives for RAID-0, so we will create one big partition that spans the whole drive. Run format and select the first drive:

root@jsc-x4100-17:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c2t0d0 /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
1. c2t1d0 /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
2. c2t2d0 /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@2,0
3. c2t3d0 /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@3,0
Specify disk (enter its number): 1
selecting c2t1d0
[disk formatted]

FORMAT MENU:
disk      - select a disk
type      - select (define) a disk type
partition - select (define) a partition table
current   - describe the current disk
format    - format and analyze the disk
fdisk     - run the fdisk program
repair    - repair a defective sector
label     - write label to the di...
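
Once the first drive is partitioned, the label of an identically-sized drive can be copied with prtvtoc piped into fmthard instead of repeating the format dialog for each disk. Dry-run sketch (disk names taken from the listing above; only do this between drives of the same geometry):

```shell
# Dry-run: print the label-copy pipeline for a pair of disks; set RUN=eval
# (or remove the guard) on a real Solaris box to execute it.
copy_label() {
    RUN=echo
    $RUN "prtvtoc /dev/rdsk/${1}s2 | fmthard -s - /dev/rdsk/${2}s2"
}

copy_label c2t1d0 c2t2d0
copy_label c2t1d0 c2t3d0
```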

SVM - RAID 0 - Concatenation and Stripe Volumes

RAID-0 doesn't provide any redundancy but allows you to expand storage capacity. There are three types:

1. RAID-0 Stripe spreads data equally across all components (disk, slice or soft partition).
2. RAID-0 Concatenation writes data to the first available component and then moves to the next one.
3. RAID-0 Concatenated Stripe is a stripe expanded by an additional component.

You will probably use these to create submirrors (in order to create mirrors later). RAID-0 cannot be used for filesystems that are needed during OS upgrade/installation: /, /usr, /var, /opt, swap.

The logical data segment size of a RAID-0 volume (defined in [K|M]bytes or blocks) is called the interlace. Different values can be used to tune application I/O performance. The default is 16 KB (32 blocks).

Creating RAID-0: first you need state database replicas (let's create them on 2 slices, 6 replicas in total):

# metadb -afc 3 c1t0d0s7 c1t1d0s7
# metadb
flags first blk ...
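
The "16 KB (32 blocks)" equivalence above follows from SVM counting the interlace in 512-byte disk blocks; a quick sanity check (the metainit line in the comment is a hypothetical example, not from the post):

```shell
# Default interlace: 16 KB expressed in 512-byte disk blocks.
blocks=$((16 * 1024 / 512))
echo "default interlace: $blocks blocks"
# A hypothetical 2-way stripe with an explicit 32k interlace would be
# created on Solaris with something like:
#   metainit d10 1 2 c1t0d0s0 c1t1d0s0 -i 32k
```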

Veritas DG disabled due to serial split brain

Issue: A disk group cannot be imported because the configuration databases do not agree on the actual and expected serial IDs (SSBs) on all the disks within the disk group. This is a true serial split brain condition, which Volume Manager cannot correct automatically. You must choose which configuration database to use on a specific disk to resolve the issue.

Error:
# vxdg import demodg
VxVM vxdg ERROR V-5-1-10978 Disk group demodg: import failed: Serial Split Brain detected. Run vxsplitlines to import the diskgroup

Environment: All platforms

Cause: The serial split brain condition arises because VERITAS Volume Manager (tm) increments the serial ID (SSB) in the disk media record of each imported disk in the disk group. If the serial IDs on the disks do not agree (due to a network/disk/connection issue) with the expected values from the configuration copies on the other disks in the disk group...