Friday, June 28, 2024

Globalization leads to mobilization

Thanks for landing here. I am not going to explain economics here; I will explain, step by step, how to migrate a database character set to AL32UTF8.

Oracle recommends using the DMU (Database Migration Assistant for Unicode) tool to convert the database character set. The tool can be downloaded from the link below.

https://www.oracle.com/database/technologies/appdev/oracle-database-migration-assistant.html

Extract the zip file into a folder where you have admin privileges. After extracting you will see dmu32/dmu64. Launch the executable matching your OS platform and it will ask for the JDK home. Provide the JDK home path and the application will open.

JDK 8 can be downloaded from the link below in case it is not already installed on your system.

https://download.oracle.com/otn/java/jdk/8u202-b08/1961070e4c9b4e26a04e7f5a083f551e/jdk-8u202-windows-x64.exe
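Before starting the steps below, it can also help to confirm the current database character set (a simple check; the output will of course differ per environment):

select parameter, value from nls_database_parameters
where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');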

 

Please follow the steps below to convert the character set.

1.       Execute the pre-requisite script below as SYSDBA in the database so that it creates the SYS.DBMS_DUMA_INTERNAL package.

$ORACLE_HOME/rdbms/admin/prvtdumi.plb
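For example, it can be run from SQL*Plus on the database server like this (a minimal sketch):

sqlplus / as sysdba
SQL> @?/rdbms/admin/prvtdumi.plb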

2.       Compile invalid objects by executing $ORACLE_HOME/rdbms/admin/utlrp.sql

3.       Purge dba_recyclebin
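Steps 2 and 3 can be run from SQL*Plus, for example:

SQL> @?/rdbms/admin/utlrp.sql
SQL> purge dba_recyclebin;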

4.       Take a full backup and create a guaranteed restore point for recovery.
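For example (the restore point name here is just illustrative):

SQL> create restore point before_dmu_conversion guarantee flashback database;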

5.       Gather database dictionary statistics using the command below in SQL*Plus.

Exec dbms_stats.gather_dictionary_stats;

6.       The conversion phase generates a large amount of undo data, so we should proactively add space to the undo tablespace. Use the SQL script below, which creates a job that monitors tablespace free space every 15 minutes and adds a datafile where required.

***********************************************************************************

set serveroutput on;

create or replace procedure add_datafile as
  -- Adds a 10G autoextend datafile to any tablespace that is running low on space.
  sumb    number;          -- currently allocated size (GB)
  summaxb number;          -- maximum possible size (GB)
  vSQL    varchar2(300);
begin
  -- Loop over all tablespaces and their current free space (in GB)
  for c1 in (select tablespace_name, sum(bytes)/(1024*1024*1024) freespace
               from dba_free_space
              group by tablespace_name)
  loop
    -- Only act when less than 1 GB is free
    if c1.freespace < 1 then
      dbms_output.put_line(c1.tablespace_name||':'||c1.freespace);
      select sum(bytes)/(1024*1024*1024), sum(maxbytes)/(1024*1024*1024)
        into sumb, summaxb
        from dba_data_files
       where tablespace_name = c1.tablespace_name;
      -- Add a datafile only when the tablespace is already at 85% of its maximum size.
      -- summaxb can be 0 when no datafile is autoextensible, so guard against dividing by zero.
      if summaxb > 0 and (sumb/summaxb)*100 >= 85 then
        dbms_output.put_line('ADDING DATAFILE IN TABLESPACE '||c1.tablespace_name||'....');
        vSQL := 'ALTER TABLESPACE '||c1.tablespace_name||
                ' ADD DATAFILE ''+DATA'' SIZE 10G AUTOEXTEND ON MAXSIZE 32767M';
        execute immediate vSQL;
      end if;
    end if;
  end loop;
exception
  when others then
    dbms_output.put_line(sqlerrm);
end;
/

BEGIN

  SYS.DBMS_SCHEDULER.DROP_JOB

    (job_name  => 'SYS.ADD_DATA_FILE');

END;

/


BEGIN

  SYS.DBMS_SCHEDULER.CREATE_JOB

    (

       job_name        => 'SYS.ADD_DATA_FILE'

      ,start_date      => TO_TIMESTAMP_TZ('2024/06/07 22:47:25.939518 America/Phoenix','yyyy/mm/dd hh24:mi:ss.ff tzr')

      ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=15'

      ,end_date        => NULL

      ,job_class       => 'DEFAULT_JOB_CLASS'

      ,job_type        => 'STORED_PROCEDURE'

      ,job_action      => 'sys.add_datafile'

      ,comments        => NULL

    );

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'RESTARTABLE'

     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'LOGGING_LEVEL'

     ,value     => SYS.DBMS_SCHEDULER.LOGGING_OFF);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'MAX_FAILURES');

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'MAX_RUNS');

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'STOP_ON_WINDOW_CLOSE'

     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'JOB_PRIORITY'

     ,value     => 3);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'SCHEDULE_LIMIT');

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'AUTO_DROP'

     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'RESTART_ON_RECOVERY'

     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'RESTART_ON_FAILURE'

     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE

    ( name      => 'SYS.ADD_DATA_FILE'

     ,attribute => 'STORE_OUTPUT'

     ,value     => TRUE);


  SYS.DBMS_SCHEDULER.ENABLE

    (name                  => 'SYS.ADD_DATA_FILE');

END;

/

**********************************************************************************
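Once the script is run, the job status can be verified from the scheduler views, for example:

SQL> select job_name, enabled, state, last_start_date, next_run_date
     from dba_scheduler_jobs
     where job_name = 'ADD_DATA_FILE';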

7.       Create a database connection in the DMU tool, providing the host, port, and service name.

8.       Create the migration repository; this creates some internal tables under the SYSTEM schema.



 





9.       In this step we scan the database to find issues that need to be fixed before conversion.



 


 

 


 


 


 

 

 


This step runs with the 8 parallel threads we defined earlier. In cist4 it took around 2 hours.

10.   The next step is to review the scan report and fix issues accordingly. Click on the 'View the scan report' link.


 

 

 

 


I followed the MOS note below to clear the invalid binary representation in WRI$_ADV_OBJECTS:

DMU Flags WRI$_ADV_OBJECTS.ATTR5 as INVALID DATA (Doc ID 2634488.1)

 

1) Drop The Statistics Advisor Task (AUTO_STATS_ADVISOR_TASK) :

DECLARE
  v_tname VARCHAR2(32767);
BEGIN
  v_tname := 'AUTO_STATS_ADVISOR_TASK';
  DBMS_STATS.DROP_ADVISOR_TASK(v_tname);
END;
/



Once the task is dropped, all of its dependent output data is deleted from the WRI$_ADV_OBJECTS table.

Now rerun the scan only for that table, and we can see the invalid-representation error is gone.

11.   Now run bulk cleansing from the object panel, only for the 4 tables identified.

In this step, the length semantics of CHAR/VARCHAR2 columns are converted from byte to character semantics.

This process generates a huge amount of undo data, so it is advisable to allocate at least 300 GB of undo before running it; otherwise the cleansing might fail.

This is done in restricted mode, so no client connections are allowed.
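For context, the change DMU applies here is the same kind of change as switching a column from byte to character length semantics, and the restricted mode can be confirmed from v$instance (the table and column below are hypothetical examples):

-- Hypothetical illustration of the byte-to-character semantics change:
-- ALTER TABLE cisadm.some_table MODIFY (some_column VARCHAR2(30 CHAR));
-- Confirm the instance is in restricted mode while cleansing runs:
SQL> select instance_name, logins from v$instance;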


Check the progress of bulk cleansing using the query below:

CONN / as sysdba
set PAGES 1000
select TARGET, TO_CHAR(START_TIME,'HH24:MI:SS - DD-MM-YY'), TIME_REMAINING, SOFAR, TOTALWORK, SID, SERIAL#, OPNAME
from V$SESSION_LONGOPS
where SID in(select SID from V$SESSION where program = 'Database Migration Assistant for Unicode')
and SOFAR < TOTALWORK
order by START_TIME
/

 

Once bulk cleansing is done, we can see that some rows still have an invalid binary representation, as shown below.


Click on the number link under the Invalid Representation column.

Click on the column heading and change the character set drop-down to see whether the colouring returns to normal.

The most appropriate character set was found to be WE8MSWIN1252.

Check for invalid objects:

select owner, count(*) from dba_objects where status='INVALID' group by owner;

Run utlrp.sql to compile invalid objects before conversion.


 

 

 

 

 


 

 


 

 

 

 

 


To fix this issue, a datafile was added to the SYSTEM tablespace.
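For reference, the datafile can be added along these lines (the diskgroup and sizes simply mirror the earlier script and may differ in your environment):

SQL> alter tablespace system add datafile '+DATA' size 10g autoextend on next 1g maxsize 32767m;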


 

 

 

 


 

 

 

 

 


 


Thursday, June 13, 2024

Explore alternate archivelog location

Let's explore an Oracle feature and test it today.

What is an alternate archive log location, and how do we test this feature?

Basically, we set the archive log location with the log_archive_dest_1 parameter, but what if that location fills up, or the archiver fails to write there due to a network issue? All your transactions will hang until the issue is resolved. Oracle provides a feature to set an alternate archive log location for that scenario, so the archiver can continue writing and we can prevent the database from hanging.

To test it, we will create a considerably large backup table and shrink the FRA so that an archiver-error situation is created.

Preparation:

--Getting size of the table

SQL> select sum(bytes)/(1024*1024*1024) from dba_segments where segment_name='CI_CC';


SUM(BYTES)/(1024*1024*1024)

---------------------------

                     25.875

--Current FRA size

SQL> show parameter db_recovery_file_dest


NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

db_recovery_file_dest                string      +RECO

db_recovery_file_dest_size           big integer 300G

--Current FRA usage

SQL>select * from v$flash_recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID

----------------------- ------------------ ------------------------- --------------- ----------

CONTROL FILE                             0                         0               0          0

REDO LOG                                 0                         0               0          0

ARCHIVED LOG                           4.8                         0               2          0

BACKUP PIECE                           .01                         0               1          0

IMAGE COPY                               0                         0               0          0

FLASHBACK LOG                            0                         0               0          0

FOREIGN ARCHIVED LOG                  1.74                         0               3          0

AUXILIARY DATAFILE COPY                  0                         0               0          0


--Current setting of archive log location

SQL> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

log_archive_dest_1                   string      LOCATION=USE_DB_RECOVERY_FILE_

                                                 DEST VALID_FOR=(ALL_LOGFILES,A

                                                 LL_ROLES) MAX_FAILURE=1 REOPEN

                                                 =5 DB_UNIQUE_NAME=ccbx ALTERNA

                                                 TE=log_archive_dest_10

log_archive_dest_10                  string      LOCATION=+DATA/ccbx/arc10 VALI

                                                 D_FOR=(ALL_LOGFILES,ALL_ROLES)

                                                  DB_UNIQUE_NAME=ccbx ALTERNATE

                                                 =log_archive_dest_1

Note the ALTERNATE attribute in the settings above. If dest_1 is not available, the archiver will attempt to write to dest_10; once dest_1 is available again, the archiver will resume writing to dest_1.
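For reference, this kind of configuration can be set up roughly as follows (a sketch based on the values shown above; paths and DB_UNIQUE_NAME will differ per environment, and depending on the version log_archive_dest_state_10 may also need to be set to ALTERNATE):

SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5 DB_UNIQUE_NAME=ccbx ALTERNATE=log_archive_dest_10' scope=both;

SQL> alter system set log_archive_dest_10='LOCATION=+DATA/ccbx/arc10 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ccbx ALTERNATE=log_archive_dest_1' scope=both;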

ref:https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/LOG_ARCHIVE_DEST_n-parameter-attributes.html#GUID-DD58C7C2-EE9D-4632-94C9-F6C652718006

Existence of the ASM path for log_archive_dest_10:

Let's check whether the ASM path +DATA/ccbx/arc10 already exists. To do that, log in as the grid user and execute the following:

asmcmd ls -l +DATA/ccbx/arc10

If this command does not return an error, the path exists and is valid.
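If the path does not exist yet, it can be created from asmcmd as the grid user (assuming the parent directory +DATA/ccbx already exists):

asmcmd mkdir +DATA/ccbx/arc10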


Plan in action:-

Now we will reduce the FRA to 30G so that it fills up while creating the backup table.


SQL> alter system set db_recovery_file_dest_size=30G;

System altered.


SQL> show parameter db_recovery_file

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

db_recovery_file_dest                string      +RECO

db_recovery_file_dest_size           big integer 30G

SQL> select * from v$flash_recovery_area_usage;


FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID

----------------------- ------------------ ------------------------- --------------- ----------

CONTROL FILE                             0                         0               0          0

REDO LOG                                 0                         0               0          0

ARCHIVED LOG                         47.99                         0               2          0

BACKUP PIECE                           .05                         0               1          0

IMAGE COPY                               0                         0               0          0

FLASHBACK LOG                            0                         0               0          0

FOREIGN ARCHIVED LOG                 17.36                         0               3          0

AUXILIARY DATAFILE COPY                  0                         0               0          0


Now we will create a table ci_cc_bkp, copying all data from cisadm.ci_cc.

create table cisadm.ci_cc_bkp tablespace cists_01 as (select * from cisadm.ci_cc);

After some time, the archiver error appears in the database alert log and, as expected, archived logs start being created in the alternate location. LAD stands for log archive destination.

We can see that it has started writing to LAD:10.

-----Snippet of alert log-------------------------------------------------------------

2024-06-13T02:27:12.327483-07:00

ORACLE Instance ccbx, archival error, archiver continuing

2024-06-13T02:27:18.817944-07:00

ARC3 (PID:20343): Archived Log entry 4 added for T-1.S-4 ID 0x8563c335 LAD:10

---------------------------------------------------------------------------------------
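The destination status at this point can also be checked from the database, for example:

SQL> select dest_id, dest_name, status, error from v$archive_dest where dest_id in (1,10);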

When I checked the ASM path, I could see the archived logs in the alternate location:


 asmcmd ls -l +DATA/ccbx/arc10

Type        Redund  Striped  Time             Sys  Name

ARCHIVELOG  MIRROR  COARSE   JUN 13 02:00:00  N    1_4_1170387009.dbf => +DATA/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_4.572.1171506433

ARCHIVELOG  MIRROR  COARSE   JUN 13 02:00:00  N    1_5_1170387009.dbf => +DATA/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_5.573.1171506493

ARCHIVELOG  MIRROR  COARSE   JUN 13 02:00:00  N    1_6_1170387009.dbf => +DATA/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_6.576.1171506553

ARCHIVELOG  MIRROR  COARSE   JUN 13 02:00:00  N    1_7_1170387009.dbf => +DATA/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_7.577.1171506609

ARCHIVELOG  MIRROR  COARSE   JUN 13 02:00:00  N    1_8_1170387009.dbf => +DATA/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_8.578.1171506663

Now I reverted the FRA back to 300G and checked the destination status:

select DEST_NAME,TARGET,DESTINATION,VALID_ROLE,STATUS,ALTERNATE from v$archive_dest where status!='INACTIVE';


DEST_NAME                      TARGET           DESTINATION                    VALID_ROLE   STATUS    ALTERNATE

------------------------------ ---------------- ------------------------------ ------------ --------- ------------------------------

LOG_ARCHIVE_DEST_1             PRIMARY          USE_DB_RECOVERY_FILE_DEST      ALL_ROLES    VALID     LOG_ARCHIVE_DEST_10

LOG_ARCHIVE_DEST_10            PRIMARY          +DATA/ccbx/arc10               ALL_ROLES    ALTERNATE LOG_ARCHIVE_DEST_1


Also, when I checked the archived log creation status, I could see that the latest archived logs are now being generated in log_archive_dest_1 again:

SQL> select name,archived,completion_time from v$archived_log;

NAME                                                                             ARC COMPLETION_TIME

-------------------------------------------------------------------------------- --- --------------------

+RECO/CCBX/ARCHIVELOG/2024_05_31/thread_1_seq_1.609.1170387619                   YES 31-MAY-2024 03:40:25

+RECO/CCBX/ARCHIVELOG/2024_06_07/thread_1_seq_2.613.1170992223                   YES 07-JUN-2024 03:37:09

+RECO/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_3.598.1171506375                   YES 13-JUN-2024 02:26:21

+DATA/ccbx/arc10/1_4_1170387009.dbf                                              YES 13-JUN-2024 02:27:18

+DATA/ccbx/arc10/1_5_1170387009.dbf                                              YES 13-JUN-2024 02:28:18

+DATA/ccbx/arc10/1_6_1170387009.dbf                                              YES 13-JUN-2024 02:29:18

+DATA/ccbx/arc10/1_7_1170387009.dbf                                              YES 13-JUN-2024 02:30:15

+DATA/ccbx/arc10/1_8_1170387009.dbf                                              YES 13-JUN-2024 02:31:09

+DATA/ccbx/arc10/1_9_1170387009.dbf                                              YES 13-JUN-2024 02:32:00

+DATA/ccbx/arc10/1_10_1170387009.dbf                                             YES 13-JUN-2024 02:32:48

+DATA/ccbx/arc10/1_11_1170387009.dbf                                             YES 13-JUN-2024 02:33:33

+DATA/ccbx/arc10/1_12_1170387009.dbf                                             YES 13-JUN-2024 02:34:21

+DATA/ccbx/arc10/1_13_1170387009.dbf                                             YES 13-JUN-2024 02:35:09

+RECO/CCBX/ARCHIVELOG/2024_06_13/thread_1_seq_14.592.1171507699                  YES 13-JUN-2024 02:48:22

This confirms that failover and failback are automatic, and the archive log location behaves as expected.

Suggestion: once dest_1 is reachable again, do not try to manually set log_archive_dest_state_1 to ENABLE. You would end up with two valid archive log locations in that case.
