Tuesday, June 10, 2014

LiveUpgrade issue - Solution

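Scenario: creating a new boot environment on SVM metadevices by splitting the existing mirrors (detach,attach,preserve) on a Solaris 10 system with a UFS root. The first attempt fails:
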
root@# time lucreate -c s10u9 -m /:/dev/md/dsk/d210:ufs,mirror -m /:/dev/md/dsk/d12:detach,attach,preserve -m /var:/dev/md/dsk/d230:ufs,mirror -m /var:/dev/md/dsk/d32:detach,attach,preserve -m /opt:/dev/md/dsk/d250:ufs,mirror -m /opt:/dev/md/dsk/d52:detach,attach,preserve -n s10u11 -C /dev/dsk/c3t0d0s0
Determining types of file systems supported
Validating file system requests
The device name </dev/md/dsk/d210> expands to device path </dev/md/dsk/d210>
The device name </dev/md/dsk/d230> expands to device path </dev/md/dsk/d230>
The device name </dev/md/dsk/d250> expands to device path </dev/md/dsk/d250>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <s10u9>.
Creating initial configuration for primary boot environment <s10u9>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c3t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10u9> PBE Boot Device </dev/dsk/c3t0d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <s10u11>.
Source boot environment is <s10u9>.
Creating file systems on boot environment <s10u11>.
Preserving <ufs> file system for </> on </dev/md/dsk/d210>.
Preserving <ufs> file system for </opt> on </dev/md/dsk/d250>.
Preserving <ufs> file system for </var> on </dev/md/dsk/d230>.
Mounting file systems for boot environment <s10u11>.
ERROR: mount: The state of /dev/md/dsk/d210 is not okay
        and it was attempted to be mounted read/write
mount: Please run fsck and try again
ERROR: cannot mount mount point </.alt.tmp.b-fng.mnt> device </dev/md/dsk/d210>
ERROR: failed to mount file system </dev/md/dsk/d210> on </.alt.tmp.b-fng.mnt>
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
WARNING: Unable to mount BE <s10u11>.
Removing incomplete BE <s10u11>.
ERROR: Cannot make file systems for boot environment <s10u11>.

real    3m1.695s
user    0m14.315s
sys     0m24.271s

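The file system on the preserved sub-mirror was split off a live mount, so its state is not marked clean; run fsck against the raw metadevice, as the error suggests:
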
root@# fsck /dev/md/rdsk/d210
** /dev/md/rdsk/d210
** Last Mounted on /
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
127965 files, 4754892 used, 1441355 free (6331 frags, 179378 blocks, 0.1% fragmentation)

NOTE: After the fsck you still have to re-run lucreate. Normally that would mean removing the newly created metadevices, re-attaching the sub-mirrors to their original mirrors, and waiting for them all to resync before splitting them again and re-executing the same lucreate as above. Time consuming? Yes - certainly it is!! So use the trick below!
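
For reference, the long way would look roughly like this (a sketch only - the original mirror names d10, d30 and d50 are assumptions, since they never appear in the output above):

# metaclear d210 d230 d250      (remove the metadevices created for the new BE)
# metattach d10 d12             (re-attach each split sub-mirror to its assumed original mirror)
# metattach d30 d32
# metattach d50 d52
# metastat | grep -i resync     (wait until no resyncs remain in progress)

The trick instead: the metadevices d210, d230 and d250 from the failed attempt still exist, so simply re-run lucreate pointing at them directly, without the detach,attach,preserve options, and let it build fresh file systems and copy the data across: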

root@# time lucreate -c s10u9 -m /:/dev/md/dsk/d210:ufs -m /var:/dev/md/dsk/d230:ufs -m /opt:/dev/md/dsk/d250:ufs  -n s10u11 -C /dev/dsk/c3t0d0s0
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <s10u11>.
Source boot environment is <s10u9>.
Creating file systems on boot environment <s10u11>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d210>.
Creating <ufs> file system for </opt> in zone <global> on </dev/md/dsk/d250>.
Creating <ufs> file system for </var> in zone <global> on </dev/md/dsk/d230>.
Mounting file systems for boot environment <s10u11>.
Calculating required sizes of file systems for boot environment <s10u11>.
Populating file systems on boot environment <s10u11>.
Analyzing zones.
Mounting ABE <s10u11>.
Cloning mountpoint directories.
Generating file list.
Copying data from PBE <s10u9> to ABE <s10u11>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <s10u11>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <s10u9>.
Making boot environment <s10u11> bootable.
Setting root slice to Solaris Volume Manager metadevice </dev/md/dsk/d210>.
Population of boot environment <s10u11> successful.
Creation of boot environment <s10u11> successful.

HTH!!

_________________________________________________________________

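Next issue: luupgrade fails to mount the boot environment, reporting an empty BE name:
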
# luupgrade -n s10u11 -u -s /mnt -k /tmp/sysidcfg

67352 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
ERROR: Unable to mount boot environment <>.

Solution for the above error -

Live Upgrade uses /a as a temporary mount point and directory for some of its actions.  Accordingly, it needs to be an empty directory, or ideally shouldn't exist at all.

To resolve this issue, move your current /a out of the way on both the original and the new boot environment, as follows:

# mv /a /a.orig
# lumount <alt_BE> /mnt
# mv /mnt/a /mnt/a.orig
# luumount <alt_BE>

If you have no need for the contents of /a, you can safely delete the file or directory instead of renaming it.
Once /a has been removed or renamed, you should find that luupgrade operates as expected.
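
For example, the delete variant with the BE name used in this post (destructive - make sure nothing needs the contents of /a first):

# rm -rf /a
# lumount s10u11 /mnt
# rm -rf /mnt/a
# luumount s10u11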

# luupgrade -n s10u11 -u -s /mnt -k /tmp/sysidcfg

[...]

Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <s10u11>.
Package information successfully updated on boot environment <s10u11>.
Adding operating system patches to the BE <s10u11>.
The operating system patch installation is complete.

[...]

The Solaris upgrade of the boot environment <s10u11> is partially complete.
Installing failsafe
Failsafe install is complete.
___________________________________________________________________________

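Another case: on a ZFS root, lucreate fails while mounting the ABE:
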
root@# lucreate -n s10u11 -p rpool
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u11>.
Source boot environment is <s10u7>.
Creating file systems on boot environment <s10u11>.
Populating file systems on boot environment <s10u11>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u11>.
Creating clone for <rpool/ROOT/s10u7@s10u11> on <rpool/ROOT/s10u11>.
Creating snapshot for <rpool/ROOT/s10u7/var> on <rpool/ROOT/s10u7/var@s10u11>.
Creating clone for <rpool/ROOT/s10u7/var@s10u11> on <rpool/ROOT/s10u11/var>.
Mounting ABE <s10u11>.
ERROR: error retrieving mountpoint source for dataset < >
ERROR: failed to mount file system < > on </.alt.tmp.b-.nb.mnt/opt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Failed to mount ABE.
Reverting state of zones in PBE <s10u7>.
ERROR: Unable to copy file systems from boot environment <s10u7> to BE <s10u11>.
ERROR: Unable to populate file systems on boot environment <s10u11>.
Removing incomplete BE <s10u11>.
ERROR: Cannot make file systems for boot environment <s10u11>.

Problem - /opt lives in its own ZFS dataset (rpool/opt) outside the boot environment hierarchy (rpool/ROOT/<BE>), which prevents Live Upgrade from mounting the ABE, as the dataset layout shows:

root@:/root# zfs list -r rpool
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 93.3G  40.6G   100K  /rpool
rpool/ROOT            14.3G  40.6G    21K  legacy
rpool/ROOT/s10u7      14.3G  40.6G  2.29G  /
rpool/ROOT/s10u7/var  11.3G  41.3G  11.3G  /var
rpool/dump            32.0G  40.6G  32.0G  -
rpool/export_home       24K  20.0G    24K  /rpool/export/home
rpool/homevol          218K  20.0G   218K  /homevol
rpool/opt             13.0G  2.04G  7.74G  /opt
rpool/opt/oracle      5.22G  2.04G  5.22G  /opt/oracle
rpool/swap              32G  72.6G    16K  -
rpool/var_log         23.3M  30.0G  23.3M  /var/log

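The fix is to boot into failsafe mode and move /opt into a new dataset under the boot environment (rpool/ROOT/s10u7/opt):
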
root@:/root# init 0

{9} ok boot -F failsafe
Resetting...

(In failsafe mode the root BE is mounted under /a; the working directories noted below are inferred from the commands themselves.)

# mv opt opt_save                              (run from /a)
# zfs set mountpoint=/opt_save rpool/opt
# zfs create rpool/ROOT/s10u7/opt
# mv * ../opt/                                 (run from /a/opt_save)
# zfs set quota=15g rpool/ROOT/s10u7/opt

# df -kh /a/opt
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10u7/opt    15G   7.7G   7.3G    52%    /a/opt

# zfs set mountpoint=none rpool/opt

# zfs set mountpoint=/opt/oracle rpool/opt/oracle

# zfs list -r
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 93.3G  40.6G   100K  /a/rpool
rpool/ROOT            22.0G  40.6G    21K  legacy
rpool/ROOT/s10u7      22.0G  40.6G  2.29G  /a
rpool/ROOT/s10u7/opt  7.74G  7.26G  7.74G  /a/opt
rpool/ROOT/s10u7/var  11.3G  41.3G  11.3G  /a/var
rpool/dump            32.0G  40.6G  32.0G  -
rpool/export_home       24K  20.0G    24K  /a/rpool/export/home
rpool/homevol          218K  20.0G   218K  /a/homevol
rpool/opt             5.22G  9.78G    21K  none
rpool/opt/oracle      5.22G  2.78G  5.22G  /a/opt/oracle
rpool/swap              32G  72.6G    16K  -
rpool/var_log         23.3M  30.0G  23.3M  /a/var/log

# zfs set canmount=off rpool/opt
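
rpool/opt is now just a container for rpool/opt/oracle: its mountpoint is cleared, the oracle dataset keeps an explicit /opt/oracle mountpoint, and canmount=off stops the empty parent dataset from being mounted at boot.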

# init 6

After the reboot:

root@:/root# svcs -vx;uptime;df -kh | grep rpool
  1:50am  up 3 min(s),  1 user,  load average: 1.17, 0.64, 0.27
rpool/ROOT/s10u7       134G   2.3G    41G     6%    /
rpool/ROOT/s10u7/var   134G    11G    41G    22%    /var
rpool/ROOT/s10u7/opt    15G   7.7G   7.3G    52%    /opt
rpool/homevol           20G   217K    20G     1%    /homevol
rpool/opt/oracle       8.0G   5.2G   2.8G    66%    /opt/oracle
rpool                  134G    99K    41G     1%    /rpool
rpool/export_home       20G    24K    20G     1%    /rpool/export/home
rpool/var_log           30G    23M    30G     1%    /var/log

Now let's try running lucreate again:

root@:/root# lucreate -n s10u11 -p rpool
Analyzing system configuration.
ERROR: All datasets within a BE must have the canmount value set to noauto.
/usr/lib/lu/ludefine: cannot return when not in function
ERROR: All datasets within a BE must have the canmount value set to noauto.
/usr/lib/lu/ludefine: cannot return when not in function
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u11>.
Source boot environment is <s10u7>.
Creating file systems on boot environment <s10u11>.
ERROR: All datasets within a BE must have the canmount value set to noauto.
/usr/lib/lu/ludefine: cannot return when not in function
Populating file systems on boot environment <s10u11>.
ERROR: All datasets within a BE must have the canmount value set to noauto.
/usr/lib/lu/ludefine: cannot return when not in function
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u11>.
Creating clone for <rpool/ROOT/s10u7@s10u11> on <rpool/ROOT/s10u11>.
Creating snapshot for <rpool/ROOT/s10u7/opt> on <rpool/ROOT/s10u7/opt@s10u11>.
Creating clone for <rpool/ROOT/s10u7/opt@s10u11> on <rpool/ROOT/s10u11/opt>.
Creating snapshot for <rpool/ROOT/s10u7/var> on <rpool/ROOT/s10u7/var@s10u11>.
Creating clone for <rpool/ROOT/s10u7/var@s10u11> on <rpool/ROOT/s10u11/var>.
Mounting ABE <s10u11>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <s10u11>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <s10u7>.
Making boot environment <s10u11> bootable.
Population of boot environment <s10u11> successful.
Creation of boot environment <s10u11> successful.


It's successful. :)
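
The canmount errors above were noisy but non-fatal here. They appear to come from the rpool/ROOT/s10u7/opt dataset created in failsafe mode, which defaulted to canmount=on; judging by the message text, setting it to noauto should silence them on future runs (an assumption based on the error message, not verified here):

# zfs set canmount=noauto rpool/ROOT/s10u7/opt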

root@:/root# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u7                      yes      yes    yes       no     -
s10u11                     yes      no     no        yes    -
