About admin

After more than 20 years working with Oracle technologies, I decided to collect in a blog some of the little things that are useful for day-to-day work.

Removing a node from the RAC

Today we are going to look at how to cleanly remove a node from a RAC.
We will assume we want to remove from our cluster the node we called rac1. The steps to carry out are:

Removing the databases running on the node

Suppose we have this database:

ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db
ORACLE_SID=TEST1
DB_NAME=TEST
INSTANCE_NAME=TEST1
NODE_NAME=rac1

For each of the databases, we would stop its instance on this node with:

srvctl stop instance -db $DB_NAME -i $INSTANCE_NAME
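
If several databases run on the node, a small loop can save some typing. A minimal sketch, assuming every instance on rac1 follows the <DB_NAME>1 naming used here:

for DB in $(srvctl config database); do
  srvctl stop instance -db "$DB" -instance "${DB}1"   # hypothetical <DB_NAME>1 naming
done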

And then we would remove it. First we check the database configuration:

rac1.pamplona.name:oracle (TEST:/u01/app/19c/grid) srvctl config database -db $DB_NAME
Database unique name: TEST
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/db
Oracle user: oracle
Spfile: +DATA/TEST/PARAMETERFILE/spfile.357.935866381
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: +DATA,+REDO,+FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: osoper
Database instances: TEST1,TEST2
Configured nodes: rac1,rac2
Database is administrator managed

And we remove the instance from the node with dbca:

dbca -silent -deleteInstance -nodeList rac1 -gdbName $DB_NAME -instanceName $INSTANCE_NAME -sysDBAUserName sys -sysDBAPassword syspass

After this, we would check that no databases are left running on this node with crsctl stat res -t.
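
For example (the -w filter assumes the standard ora.database.type resource type):

crsctl stat res -t
crsctl stat res -w "TYPE = ora.database.type"   # show only database resources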

Removing the node's cluster resources

Once we have removed the databases, we will remove the cluster resources.
The first step is to relocate the mgmtdb and stop the ASM proxy:

srvctl relocate mgmtdb -n rac2
srvctl stop asm -proxy -n rac1
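
We can confirm that the management database is now running on the surviving node with:

srvctl status mgmtdb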

Removing the listener

srvctl disable listener -l LISTENER -n rac1
srvctl stop listener -l LISTENER -n rac1
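
A quick check that the listener is indeed stopped on the node being removed:

srvctl status listener -l LISTENER -n rac1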

Uninstalling the CRS binaries

With the ASM environment loaded, we will uninstall the RAC binaries. For this purpose we will use the deinstall command from the node's $ORACLE_HOME; it takes care of a clean uninstall, removing:

  • Configurations
  • Binaries
  • Virtual network interfaces

$ORACLE_HOME/deinstall/deinstall -local

rac1.pamplona.name:oracle (+ASM1:/u01/app/19c/grid/deinstall) $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/19c/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/19c/grid
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1'
## [END] Install check configuration ##

Traces log file: /u01/app/oraInventory/logs/crsdc_2021-03-23_09-16-26-PM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2021-03-23_09-16-27PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2021-03-23_09-16-27PM.log
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2021-03-23_09-16-27PM.log
Oracle Grid Management database was found in this Grid Infrastructure home
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/19c/grid
The following nodes are part of this cluster: rac1,rac2
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1
Oracle Home selected for deinstall is: /u01/app/19c/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Option -local will not modify any ASM configuration.
Oracle Grid Management database was found in this Grid Infrastructure home
Oracle Grid Management database will be relocated to another node during deconfiguration of local node
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2021-03-23_09-16-21-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2021-03-23_09-16-21-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2021-03-23_09-16-27PM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2021-03-23_09-16-27PM.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2021-03-23_09-16-27PM.log
Network Configuration clean config END
Run the following command as the root user or the administrator on node "rac1".
/u01/app/19c/grid/crs/install/rootcrs.sh -force  -deconfig -paramfile "/tmp/deinstall2021-03-23_09-15-25PM/response/deinstall_OraGI19Home1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------

We run the command as root:

[oracle@rac1~]$ sudo /u01/app/19c/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2021-03-23_09-15-25PM/response/deinstall_OraGI19Home1.rsp"
Using configuration parameter file: /tmp/deinstall2021-03-23_09-15-25PM/response/deinstall_OraGI19Home1.rsp
The log of current session can be found at:
  /u01/app/oraInventory/logs/crsdeconfig_rac1_2021-03-23_09-19-35PM.log
Redirecting to /bin/systemctl restart rsyslog.service
2021/04/09 21:23:06 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

And we continue the process by pressing [ENTER]:

######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Local node configuration of Oracle Grid Management database was removed successfully
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2021-03-23_09-15-25PM/response/deinstall_2021-03-23_09-16-21-PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2021-03-23_09-16-21-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2021-03-23_09-16-21-PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to rac1
Setting CLUSTER_NODES to rac1
Setting CRS_HOME to true
Setting oracle.installer.invPtrLoc to /tmp/deinstall2021-03-23_09-15-25PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/19c/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/19c/grid' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/oracle/product/12.1.0.2/db'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/19c/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/19c/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Review the permissions and contents of '/u01/app/oracle' on nodes(s) 'rac1'.
If there are no Oracle home(s) associated with '/u01/app/oracle', manually delete '/u01/app/oracle' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############

Now we go to one of the remaining nodes and delete node 1:

[root@rac2 ~]$ . oraenv
ORACLE_SID = [root] ? +ASM2
ORACLE_BASE environment variable is not being set since this
information is not available for the current user ID root.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /u01/app/19c/grid

[root@rac2 ~]$ $ORACLE_HOME/bin/crsctl delete node -n rac1
CRS-4661: Node rac1 successfully deleted.

As the oracle user, we check the number of nodes again:

rac2:oracle (+ASM2:/home/oracle) olsnodes -s -t
rac2  Active  Unpinned
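
Optionally, cluvfy can confirm that the node removal completed cleanly:

cluvfy stage -post nodedel -n rac1 -verbose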

More information:

  • How to Add Node/Instance or Remove Node/Instance with Oracle Clusterware and RAC (Doc ID 1332451.1)
  • https://oracledbwr.com/step-by-step-deleting-node-in-oracle-rac-12c-release-1-environment

More RAC posts for dummies:
Basic Oracle RAC commands
Basic RAC commands II
Removing a node from the RAC

Basic RAC commands II: adding databases

Today we continue the series of posts on basic RAC administration commands.
This time we will focus on adding databases and moving cluster resources.

Adding a database

To add a database, its init.ora will need to contain the following parameters:

  • CLUSTER_DATABASE=TRUE
  • CLUSTER_DATABASE_INSTANCES=2
  • TEST1.INSTANCE_NUMBER=1
  • TEST2.INSTANCE_NUMBER=2
  • TEST1.THREAD=1
  • TEST2.THREAD=2
  • TEST1.UNDO_TABLESPACE='UNDOTBS1'
  • TEST2.UNDO_TABLESPACE='UNDOTBS2'

Of course, we must have as many UNDO tablespaces and REDO threads as nodes we are going to have.
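
A minimal sketch of that preparation for the second instance (the tablespace name, redo group numbers, and sizes are illustrative):

    sqlplus / as sysdba <<'EOF'
    -- second UNDO tablespace for instance 2 (illustrative size)
    CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON;
    -- redo thread for instance 2, then enable it
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 ('+REDO1') SIZE 512M, GROUP 6 ('+REDO1') SIZE 512M;
    ALTER DATABASE ENABLE PUBLIC THREAD 2;
    EOF
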
Once we have this, we register it in the CRS with the following commands:

    srvctl add database -db TEST -spfile +DATA/TEST/spfileTEST.ora -diskgroup "DATA,FRA,REDO1,REDO2" -oraclehome $ORACLE_HOME
    srvctl add instance -d TEST -i TEST1 -n rac1.pamplona.name
    srvctl start database -db TEST
    srvctl add instance -d TEST -i TEST2 -n rac2.pamplona.name
    srvctl start instance -db TEST -i TEST2
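
We can then verify the registration and the state of both instances with:

    srvctl config database -db TEST
    srvctl status database -db TEST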
    

More RAC posts for dummies:
Basic Oracle RAC commands
Basic RAC commands II
Removing a node from the RAC

Useful RMAN queries

Today we have a special entry for dummies.
We will use this post to collect some of the useful queries we may need related to RMAN.
I would like to say these queries are mine, but several of them have been gathered over time, from my own work, from Oracle support cases, and from some websites that have helped me; so if anyone finds one and recognizes its owner, I will be happy to link to it if required.

Report of the latest RMAN backups

    set pagesize 200
    set linesize 200
    col COMMAND_ID format a20
    col status format a20
    col GB format 99999999
    col OBJECT_TYPE format a20

    select COMMAND_ID,
           status,
           round(output_bytes/1024/1024/1024, 2) GB,
           start_time,
           end_time,
           OBJECT_TYPE,
           round((end_time - start_time) * 24, 1) DURATION_HOURS
      from v$rman_status
     where OBJECT_TYPE in ('DB FULL', 'DB INCR')
       and output_bytes > 0
     order by start_time desc;
    

Listing the RMAN jobs that have run

    set lines 220
    set pages 1000
    col cf for 9,999
    col df for 9,999
    col elapsed_seconds heading "ELAPSED|SECONDS"
    col i0 for 9,999
    col i1 for 9,999
    col l for 9,999
    col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
    col session_recid for 999999 heading "SESSION|RECID"
    col session_stamp for 99999999999 heading "SESSION|STAMP"
    col status for a10 trunc
    col time_taken_display for a10 heading "TIME|TAKEN"
    col output_instance for 9999 heading "OUT|INST"
    select
      j.session_recid, j.session_stamp,
      to_char(j.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time,
      to_char(j.end_time, 'yyyy-mm-dd hh24:mi:ss') end_time,
      (j.output_bytes/1024/1024) output_mbytes, j.status, j.input_type,
      decode(to_char(j.start_time, 'd'), 1, 'Sunday', 2, 'Monday',
                                         3, 'Tuesday', 4, 'Wednesday',
                                         5, 'Thursday', 6, 'Friday',
                                         7, 'Saturday') dow,
      j.elapsed_seconds, j.time_taken_display,
      x.cf, x.df, x.i0, x.i1, x.l,
      ro.inst_id output_instance
    from V$RMAN_BACKUP_JOB_DETAILS j
      left outer join (select
                         d.session_recid, d.session_stamp,
                         sum(case when d.controlfile_included = 'YES' then d.pieces else 0 end) CF,
                         sum(case when d.controlfile_included = 'NO'
                                   and d.backup_type||d.incremental_level = 'D' then d.pieces else 0 end) DF,
                         sum(case when d.backup_type||d.incremental_level = 'D0' then d.pieces else 0 end) I0,
                         sum(case when d.backup_type||d.incremental_level = 'I1' then d.pieces else 0 end) I1,
                         sum(case when d.backup_type = 'L' then d.pieces else 0 end) L
                       from
                         V$BACKUP_SET_DETAILS d
                         join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
                       where s.input_file_scan_only = 'NO'
                       group by d.session_recid, d.session_stamp) x
        on x.session_recid = j.session_recid and x.session_stamp = j.session_stamp
      left outer join (select o.session_recid, o.session_stamp, min(inst_id) inst_id
                       from GV$RMAN_OUTPUT o
                       group by o.session_recid, o.session_stamp)
        ro on ro.session_recid = j.session_recid and ro.session_stamp = j.session_stamp
    where j.start_time > trunc(sysdate)-&NUMBER_OF_DAYS
    order by j.start_time;
    

Active RMAN operations

    set linesize 220 pagesize 10000
    col OPNAME for a40

    select OPNAME,
           SOFAR/TOTALWORK*100 PCT,
           trunc(TIME_REMAINING/60) MIN_REMAINING,
           trunc(ELAPSED_SECONDS/60) MIN_ELAPSED
      from v$session_longops
     where TOTALWORK > 0
       and OPNAME like '%RMAN%';

    -- The same query, restricted to incremental datafile backups:
    select OPNAME,
           SOFAR/TOTALWORK*100 PCT,
           trunc(TIME_REMAINING/60) MIN_REMAINING,
           trunc(ELAPSED_SECONDS/60) MIN_ELAPSED
      from v$session_longops
     where TOTALWORK > 0
       and OPNAME like '%RMAN: incremental datafile%';
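
To keep an eye on progress from the shell, a small loop can refresh the query periodically. A minimal sketch (the 30-second interval is arbitrary):

    while true; do
      sqlplus -s / as sysdba <<'EOF'
    set linesize 220 pagesize 100
    col OPNAME for a40
    select OPNAME, round(SOFAR/TOTALWORK*100,1) PCT, trunc(TIME_REMAINING/60) MIN_REMAINING
      from v$session_longops
     where TOTALWORK > 0 and OPNAME like '%RMAN%';
    EOF
      sleep 30
    done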
    

In short, as I said at the beginning, this is simply a collection of useful queries.

Problems with the crsgenconfig_params file when trying to extend a RAC

Today we are going to look at a curious case.

While adding a node to the RAC, we ran into this error:

    2021/02/10 20:14:04 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
    oracle.ops.mgmt.cluster.ClusterException: scp: /u01/app/19c/grid/crs/install/crsgenconfig_params: No such file or directory
    
    2021/02/10 20:14:06 CLSRSC-180: An error occurred while executing the command '/u01/app/19c/grid/bin/cluutil -copy -sourcefile /u01/app/19c/grid/crs/install/crsgenconfig_params -fromnodesfile /tmp/7pqMR5nHtf -destfile /u01/app/19c/grid/crs/install/crsgenconfig_params -nodelist rac1'
    2021/02/10 20:14:06 CLSRSC-571: failed to copy file '/u01/app/19c/grid/crs/install/crsgenconfig_params' from node 'rac2' to file '/u01/app/19c/grid/crs/install/crsgenconfig_params' on local node
    Died at /u01/app/19c/grid/crs/install/crsutils.pm line 15809.
    

Fortunately the error is clear: the file /u01/app/19c/grid/crs/install/crsgenconfig_params is missing on the node from which we are extending the RAC, which makes step 4 of the root.sh execution fail.
If we search the internet we will find several posts telling us it is a problem with the decompression of the original packages, which does not help.
The real help comes from the note 12.2 GI Upgrade fails with : CLSRSC-635: Failed to get EXTENDED_CLUSTER_SITE_GUIDS (Doc ID 2259672.1),
which tells us we only have to put the EXTENDED_CLUSTER_SITE_GUIDS in that file and re-run root.sh.

The idea sounds good, but... what should the content of that file be?

The content of that file is the following:

    EXTENDED_CLUSTER_SITE_GUIDS=rac
    

Here, the value of the EXTENDED_CLUSTER_SITE_GUIDS field is the same as the one in the CLUSTER_NAME field of the file /u01/app/19c/grid/crs/install/crsconfig_params:

    CLUSTER_TYPE=DB
    VNDR_CLUSTER=false
    OCR_LOCATIONS=
    CLUSTER_NAME=rac
    NODE_NAME_LIST=rac1,rac2
    PRIVATE_NAME_LIST=
    VOTING_DISKS=
    #VF_DISCOVERY_STRING=%s_vfdiscoverystring%
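
A quick way to build the file (a sketch using the paths above) is to take the value directly from crsconfig_params, and then re-run root.sh on the new node:

    cd /u01/app/19c/grid/crs/install
    awk -F= '/^CLUSTER_NAME=/{print "EXTENDED_CLUSTER_SITE_GUIDS=" $2}' crsconfig_params > crsgenconfig_params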
    

More information:

  • 12.2 GI Upgrade fails with : CLSRSC-635: Failed to get EXTENDED_CLUSTER_SITE_GUIDS (Doc ID 2259672.1)

Restoring a database from a standby backup

Today we are going to see how to anticipate an error in what can be a long recovery.

Suppose we have restored a database from a standby database backup.
If that is the case, when we try to open the database after the recover we will get this error:

    ORA-01666: control file is for a standby database

How do we prevent it?

Let's first see how to prevent this error: it is as simple as adding the primary keyword to the restore command we use to bring back the controlfile from that standby backup:
    rman target /
    restore primary controlfile from '/backup/SID_STBY/standby_controlfile_backup';
    exit;
    

And you will be thinking: this is all very well, but... what if I have already restored the backup?

How do we fix it?

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
    ALTER DATABASE ACTIVATE STANDBY DATABASE;
    SELECT name, open_mode, database_role FROM v$database;
    ALTER DATABASE OPEN RESETLOGS;