Friday, July 21, 2023

Out of Place (OOP) Patching of Oracle Restart 21c

A previous post showed OOP patching for 19c Oracle Restart. Things are much simpler in 21c, and the same can be expected for 23c once it is released. The -switchGridHome option is supported for Oracle Restart in 21c (Oracle doc here). As such, OOP patching is simply a matter of installing the new GI home in a different location with the -switchGridHome option.
The current GI home is /opt/codegen/app/oracle/product/21.x.0/grid, and the release patch is shown below.
crsctl query has releasepatch
Oracle Clusterware release patch level is [3414221900] and the complete list of patches [35132583 35134934 35134943 35149778 35222143 35226235 ] have been applied on the local node. The release patch string is [21.10.0.0.0].
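For scripting around patch checks, the patch list and release string can be pulled out of that output with standard text tools. A minimal sketch, using the output above as sample input (in practice the variable would be populated by running crsctl):

```shell
# Sample output of 'crsctl query has releasepatch' (copied from above); in
# practice: out=$("$ORACLE_HOME/bin/crsctl" query has releasepatch)
out='Oracle Clusterware release patch level is [3414221900] and the complete list of patches [35132583 35134934 35134943 35149778 35222143 35226235 ] have been applied on the local node. The release patch string is [21.10.0.0.0].'

# Extract the patch list (bracketed group after "list of patches") and the
# release string (bracketed group at the end)
patches=$(printf '%s\n' "$out" | sed -n 's/.*list of patches \[\([0-9 ]*\)\].*/\1/p')
release=$(printf '%s\n' "$out" | sed -n 's/.*release patch string is \[\([0-9.]*\)\].*/\1/p')
echo "release=$release patches=$patches"
```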


Run gridSetup.sh with the -switchGridHome option, applying the RU and any one-off patches. This step does not result in downtime.
./gridSetup.sh -silent -switchGridHome  -applyRU /opt/installs/patches/35427907
Preparing the home to patch...
Applying the patch /opt/installs/patches/35427907...
Successfully applied the patch.
The log can be found at: /opt/codegen/app/oraInventory/logs/GridSetupActions2023-07-21_01-55-36PM/installerPatchActions_2023-07-21_01-55-36PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /opt/codegen/app/oraInventory/logs/GridSetupActions2023-07-21_01-55-36PM/gridSetupActions2023-07-21_01-55-36PM.log

As a root user, execute the following script(s):
        1. /opt/codegen/app/oracle/product/21.11.0/grid/root.sh

Execute /opt/codegen/app/oracle/product/21.11.0/grid/root.sh on the following nodes:
[ip-172-31-10-193]
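If one-off patches need to go on top of the RU in the same run, gridSetup.sh also accepts -applyOneOffs with a comma-separated list of patch locations. A sketch of assembling the full command (the one-off patch paths shown in the comment are placeholders):

```shell
# Build the switch-home command; set oneoffs to a comma-separated list of
# one-off patch directories if any are needed (placeholder paths below)
ru=/opt/installs/patches/35427907
oneoffs=""   # e.g. "/opt/installs/patches/11111111,/opt/installs/patches/22222222"

cmd="./gridSetup.sh -silent -switchGridHome -applyRU $ru"
[ -n "$oneoffs" ] && cmd="$cmd -applyOneOffs $oneoffs"
echo "$cmd"
```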
When prompted, run root.sh as root. This is where the grid home switch happens, and it results in downtime: in the course of running root.sh, the HAS stack is brought down in the old GI home and started from the new GI home.
/opt/codegen/app/oracle/product/21.11.0/grid/root.sh
Check /opt/codegen/app/oracle/product/21.11.0/grid/install/root_ip-172-31-10-193.eu-west-1.compute.internal_2023-07-21_14-06-15-742998108.log for the output of root script
The output in the log file shows the pre-patch and post-patch steps run on the new GI home.
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/codegen/app/oracle/product/21.11.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/codegen/app/oracle/product/21.11.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/codegen/app/oracle/crsdata/ip-172-31-10-193/crsconfig/hapatch_2023-07-21_02-06-16PM.log
2023/07/21 14:06:18 CLSRSC-347: Successfully unlock /opt/codegen/app/oracle/product/21.11.0/grid
2023/07/21 14:06:18 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /opt/codegen/app/oracle/product/21.11.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/codegen/app/oracle/crsdata/ip-172-31-10-193/crsconfig/hapatch_2023-07-21_02-06-18PM.log
2023/07/21 14:07:16 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2023/07/21 14:08:01 CLSRSC-672: Post-patch steps for patching GI home successfully completed.


Listing the currently running processes shows the GI-related processes running out of the new GI home.
  32070 ?        Ssl    0:02 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
  32281 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/oraagent.bin
  32308 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/evmd.bin
  32312 ?        Ss     0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
  32368 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/evmlogger.bin -o /opt/codegen/app/oracle/product/21.11.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /opt/codegen/app/oracle/product/21.11.0/grid/log/[HOSTNAME]
  32384 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/cssdagent
  32423 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/onmd.bin
  32425 ?        Ssl    0:00 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ocssd.bin
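To confirm programmatically which home the stack is running from, the GI home path can be derived from the ohasd.bin process path. A sketch using one of the lines above as sample input (in practice the line would come from ps):

```shell
# A sample ps line for ohasd.bin (copied from the listing above); in practice:
# line=$(ps -eo pid,tty,stat,time,args | grep '[o]hasd\.bin')
line='32070 ?        Ssl    0:02 /opt/codegen/app/oracle/product/21.11.0/grid/bin/ohasd.bin reboot'

# Derive the GI home by stripping the '/bin/ohasd.bin ...' suffix
gi_home=$(printf '%s\n' "$line" | sed -n 's#.* \(/[^ ]*\)/bin/ohasd\.bin.*#\1#p')
echo "$gi_home"
```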
The release patch is now 21.11.
 crsctl query has releasepatch
Oracle Clusterware release patch level is [1435465441] and the complete list of patches [35428978 35442014 35442022 35442029 35550598 35589155 ] have been applied on the local node. The release patch string is [21.11.0.0.0].

Unlike in 19c, no manual work is needed to update the Oracle inventory. The new GI home is automatically added with CRS="true", and CRS="true" is removed from the old home during the GI home switch.
<HOME NAME="OraGI21Home1" LOC="/opt/codegen/app/oracle/product/21.x.0/grid" TYPE="O" IDX="1"/>
<HOME NAME="OraGI21Home2" LOC="/opt/codegen/app/oracle/product/21.11.0/grid" TYPE="O" IDX="4" CRS="true"/>
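This makes it easy to find the active GI home from the inventory: it is the HOME entry carrying CRS="true". A sketch using the two entries above as sample input (normally the input would be the central inventory's ContentsXML/inventory.xml):

```shell
# Sample inventory.xml HOME entries (copied from above); in practice read
# from the central inventory's ContentsXML/inventory.xml
inventory='<HOME NAME="OraGI21Home1" LOC="/opt/codegen/app/oracle/product/21.x.0/grid" TYPE="O" IDX="1"/>
<HOME NAME="OraGI21Home2" LOC="/opt/codegen/app/oracle/product/21.11.0/grid" TYPE="O" IDX="4" CRS="true"/>'

# The active GI home is the HOME entry flagged CRS="true"
crs_home=$(printf '%s\n' "$inventory" | grep 'CRS="true"' | sed 's/.*LOC="\([^"]*\)".*/\1/')
echo "$crs_home"
```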

Related Posts
Out of Place (OOP) Patching of Oracle Restart

Wednesday, June 14, 2023

Upgrading 11.2.0.4 to 19c Using AutoUpgrade

There is an earlier post which shows how to upgrade an 11.2 Oracle Restart setup to 19c. However, in that post the database was upgraded using DBUA and left as a non-CDB. AutoUpgrade (AU) can also be used to convert a non-CDB into a PDB. This post shows how to perform the upgrade and the conversion in a single step using AU.
Below is the configuration file used in this case.
global.autoupg_log_dir=/home/oracle/upgr_log

# ----- NonCDB to PDB conversion -----
# To upgrade and convert an existing NonCDB database into a PDB of a target CDB,
# use the target_cdb parameter to specify the destination CDB.
# The target_pdb_name and target_pdb_copy_option parameters can be used
# to determine how each PDB is created on the target CDB.
#
# When neither of these options are used, a full upgrade of the source DB/CDB is performed.
#

upg1.sid=testupg
upg1.source_home=/opt/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/opt/app/oracle/product/19.x.0/dbhome_1
upg1.run_utlrp=yes
upg1.timezone_upg=yes

### NonCDB to PDB parameters ###
upg1.target_cdb=testcdb
upg1.target_pdb_name=devpdb
upg1.target_pdb_copy_option=file_name_convert=NONE
The 11.2 DB is called testupg, and its Oracle home is the source_home. The environment already has a 19c CDB, whose Oracle home is the target_home. This CDB is the target_cdb and is called testcdb.
The 11.2 DB will be upgraded and plugged into the target CDB as a PDB named devpdb. The target_pdb_copy_option parameter is set to file_name_convert=NONE so that the upgraded PDB's data files are copied under the target CDB's data file structure.
The 11.2 DB has the final patch set update applied, and the 19c CDB has the 19.19 RU applied.
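Before running analyze, it can be worth sanity-checking that the non-CDB-to-PDB keys are all present in the config file, since (as the config comments note) a run without target_cdb/target_pdb_name performs a plain full upgrade instead. A sketch using a trimmed copy of the config above:

```shell
# Trimmed copy of the AU config from above; in practice:
# cfg=$(cat noncdb11g2_cdb.cfg)
cfg='upg1.sid=testupg
upg1.source_home=/opt/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/opt/app/oracle/product/19.x.0/dbhome_1
upg1.target_cdb=testcdb
upg1.target_pdb_name=devpdb'

# Verify each required key is present for the upg1 entry
missing=0
for key in sid source_home target_home target_cdb target_pdb_name; do
  printf '%s\n' "$cfg" | grep -q "^upg1\.$key=" || { echo "missing: $key"; missing=1; }
done
echo "missing=$missing"
```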
As the first step, run AU in analyze mode to identify any issues.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode analyze

Some precheck findings may require manual intervention to fix. For others, AU provides a fixups mode.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode fixups

When all the fixups are done and the prechecks have passed, run AU in deploy mode to begin the upgrade and the plug-in as a PDB.
java -jar autoupgrade.jar -config noncdb11g2_cdb.cfg -mode deploy

The 11.2 DB is now a PDB inside the target CDB. If there was a database service associated with it, it has to be manually created using srvctl.
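A sketch of the srvctl command for recreating such a service; the service name app_svc is hypothetical, and -pdb maps the service to the newly plugged-in PDB:

```shell
# testcdb and devpdb are the CDB and PDB from this post; app_svc is a
# hypothetical service name
db=testcdb; pdb=devpdb; svc=app_svc

# Assemble the command; on the host this would be run directly, followed by
# 'srvctl start service -db testcdb -service app_svc'
cmd="srvctl add service -db $db -service $svc -pdb $pdb"
echo "$cmd"
```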
One of the things noticed during this upgrade test relates to the timezone. The 19c CDB had timezone file 32 as its current timezone file. The 19.19 RU ships newer timezone files, but they are simply copied into the Oracle home and not applied. This is mentioned in the RU readme.html.
Applying this Release Update (RU) does not change the DST version in any of your databases.
The DST patches are only copied into $ORACLE_HOME/oracore/zoneinfo and no DST upgrade is performed. 
You can decide to upgrade the DST version in your database to a newer DST version independently of 
applying the RU at any time.
During the upgrade, as timezone_upg is set to yes, the 11.2 DB's timezone is upgraded to 41. When it is plugged into the CDB, the PDB will then have a higher timezone version than the CDB. To prevent this, either upgrade the CDB to the latest timezone version before the upgrade, or set timezone_upg to no in the config file and afterwards run the timezone upgrade in the PDB only up to the timezone version used by the CDB.

Copies of the 11.2 data files remain in their original location after the upgrade. This database could be opened as a 19c non-CDB from a 19c home. On the other hand, it could be mounted using the 11.2 home and flashed back to 11.2 using the restore point created by AU. In the latter case (flashback to 11.2), the following error may occur when archivelog cleanup is initiated using RMAN:
ORA-19633: control file record 25 is out of sync with recovery catalog
This is due to a mismatch in letter case (upper case vs lower case used in 11.2 and 19c) in the archivelog file paths. More on this is in MOS note 1105924.1. To fix it, recreate the controlfile from trace:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Related Posts
Unplug-Plug-Upgrade from 19c to 21c Using Autoupgrade
Plugging non-CDB as a PDB - Manual vs Autoupgrade
Upgrading Oracle Single Instance with ASM (Oracle Restart) from 11.2.0.4 to 19c (19.6) on RHEL 7

Wednesday, May 31, 2023

Upgrade from 19c to 21c Using Autoupgrade

This post shows the steps for upgrading a 19c database (CDB + PDB) with the autoupgrade tool. There is an earlier post which shows upgrading a PDB from 19c to 21c using the unplug-plug-upgrade method.
If any privileges were revoked from PUBLIC for security reasons (CIS standard recommendations), grant those privileges back before starting the upgrade process. Once the upgrade is complete, those privileges can be revoked again.
Generate a sample upgrade config file with
java -jar autoupgrade.jar -create_sample_file config normalupgrade.cfg
and customize the "Full DB/CDB upgrade" section.
global.autoupg_log_dir=/home/oracle/autoupgrade

#
# Database number 1 - Full DB/CDB upgrade
#
upg1.log_dir=/home/oracle/autoupgrade/testcdb             # Path of the log directory for the upgrade job
upg1.sid=testcdb                                              # ORACLE_SID of the source DB/CDB
upg1.source_home=/opt/app/oracle/product/19.x.0/dbhome_2  # Path of the source ORACLE_HOME
upg1.target_home=/opt/app/oracle/product/21.x.0/dbhome_1  # Path of the target ORACLE_HOME
upg1.start_time=NOW                                       # Optional. [NOW | +XhYm (X hours, Y minutes after launch) | dd/mm/yyyy hh:mm:ss]
upg1.upgrade_node=ip-172-31-10-91.eu-west-1.compute.internal                                # Optional. To find out the name of your node, run the hostname utility. Default is 'localhost'
upg1.run_utlrp=yes                                  # Optional. Whether or not to run utlrp after upgrade
upg1.timezone_upg=yes                               # Optional. Whether or not to run the timezone upgrade
upg1.target_version=21                      # Oracle version of the target ORACLE_HOME.  Only required when the target Oracle database version is 12.2
The database being upgraded is called testcdb. The source home is the 19c home and the target home is the 21c home. The DB timezone is also upgraded at the same time.
Run autoupgrade in analyze mode to check the upgrade readiness of the database.
java -jar autoupgrade.jar -config normalupgrade.cfg -mode analyze
This will generate a status report (status.html) at the end. Check it for any prerequisite work that needs to be carried out manually before the upgrade.

Run autoupgrade in fixups mode to execute the pre-upgrade fixes.
java -jar autoupgrade.jar -config normalupgrade.cfg -mode fixups

Finally, run autoupgrade in deploy mode to commence the upgrade.
java -jar autoupgrade.jar -config normalupgrade.cfg -mode deploy

If the upgrade is successful, drop the guaranteed restore point (GRP) associated with it. It is important that the GRP is dropped before increasing the compatibility.
To change the compatibility, update the compatible parameter and restart the database.
SQL> alter system set compatible='21.0.0' scope=spfile;
The following will be shown in the alert log.
ALERT: Compatibility of the database is changed from 19.0.0.0.0 to 21.0.0.0.0.
Increased the record size of controlfile section 15 to 104 bytes
Control file expanded from 1156 blocks to 1158 blocks
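The new compatibility value can be picked out of the alert log with a little text processing, for example to verify the change from a script. A sketch using the lines above as sample input (the alert log path under the diag directory is not shown here):

```shell
# Sample alert log lines (copied from above); in practice read from the
# database's alert log under the ADR diag directory
log='ALERT: Compatibility of the database is changed from 19.0.0.0.0 to 21.0.0.0.0.
Increased the record size of controlfile section 15 to 104 bytes'

# Extract the new compatibility value from the ALERT line
new_compat=$(printf '%s\n' "$log" | sed -n 's/.*changed from [0-9.]* to \([0-9.]*\)\..*/\1/p')
echo "$new_compat"
```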
One of the post-upgrade tasks is to run the $ORACLE_HOME/rdbms/admin/auditpostupgrade.sql script (mentioned in MOS note 2659172.1 as well). However, this script is not available in the 21c home (checked on 21.7 and 21.8, the latest RU at the time of this post). After raising an SR, Oracle support confirmed the script exists in 23c and is currently not available in 21c. So for the time being, the execution of auditpostupgrade.sql can be skipped.

Related Posts
Upgrading Oracle Restart from 19c to 21c on RHEL 7
Unplug-Plug-Upgrade from 19c to 21c Using Autoupgrade