Installing Oracle 26ai - Creating ASM

We continue with the Oracle 26ai installation series; this time we go for ASM.

The appliance I am using as a lab box is rather puny, with a single small 120 GB disk, so we will have to make do with what we have.
My disk layout is:

[root@gigabyte u01]# fdisk -l
Disk /dev/sda: 111.79 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SUV400S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00fde38f

Device     Boot     Start       End   Sectors  Size Id Type
/dev/sda1  *         2048 156250111 156248064 74.5G 83 Linux
/dev/sda2       156250112 181415935  25165824   12G 82 Linux swap / Solaris

So we will create two small ~5 GB partitions, /dev/sda3 and /dev/sda4, so we can pretend we have separate DATA and FRA disks (a non-interactive way to create them is sketched after the listing). The disk ends up like this:

[root@gigabyte ~]# fdisk -l
Disk /dev/sda: 111.79 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SUV400S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00fde38f

Device     Boot     Start       End   Sectors  Size Id Type
/dev/sda1  *         2048 156250111 156248064 74.5G 83 Linux
/dev/sda2       156250112 181415935  25165824   12G 82 Linux swap / Solaris
/dev/sda3       181415936 191180799   9764864  4.7G 83 Linux
/dev/sda4       191180800 200945663   9764864  4.7G 83 Linux
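
For reference, the two partitions can also be added without walking through fdisk's interactive prompts by using sfdisk. This is only a sketch: it assumes an MBR ("dos") label and free space right after /dev/sda2, and reuses the start sectors and sizes from the listing above.

# Append two ~4.7 GiB type-83 (Linux) partitions; values are in sectors
sfdisk --append /dev/sda <<EOF
181415936,9764864,83
191180800,9764864,83
EOF
# Ask the kernel to re-read the partition table
partprobe /dev/sda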

We download ASMLib from
https://www.oracle.com/linux/downloads/linux-asmlib-v9-downloads.html and install the packages.
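
A minimal install sketch, assuming the downloaded RPMs sit in the current directory (the file-name globs are illustrative):

# Install the ASMLib support tools and library from the downloaded RPMs
dnf install -y oracleasm-support-*.rpm oracleasmlib-*.rpm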
Once they are installed, we configure oracleasm:

[root@gigabyte ~]# oracleasm configure -i
Configuring the Oracle ASM system service.
This will configure the on-boot properties of the Oracle ASM system
service.  The following questions will determine whether the service
is started on boot and what permissions it will have.  The current
values will be shown in brackets ('[]').  Hitting ENTER without
typing an answer will keep that current value.  Ctrl-C will abort.
Default user to own the ASM disk devices []: grid
Default group to own the ASM disk devices []: asmdba
Start Oracle ASM system service on boot (y/n) [y]: y
Scan for Oracle ASM disks when starting the oracleasm service (y/n) [y]: y
Maximum number of ASM disks that can be used on system [2048]:
Enable iofilter if kernel supports it (y/n) [y]: y
Writing Oracle ASM system service configuration: done

Configuration changes only come into effect after the Oracle ASM
system service is restarted.  Please run 'systemctl restart oracleasm'
after making changes.

WARNING: All of your Oracle and ASM instances must be stopped prior
to restarting the oracleasm service.

[root@gigabyte ~]# systemctl stop oracleasm
[root@gigabyte ~]# systemctl start oracleasm
[root@gigabyte ~]# systemctl status oracleasm
● oracleasm.service - Oracle ASM Service
     Loaded: loaded (/usr/lib/systemd/system/oracleasm.service; enabled; preset: disabled)
     Active: active (exited) since Wed 2026-01-28 17:06:33 CET; 4s ago
    Process: 5027 ExecStartPre=/usr/bin/udevadm settle -t 120 (code=exited, status=0/SUCCESS)
    Process: 5028 ExecStart=/usr/sbin/oracleasm.init start (code=exited, status=0/SUCCESS)
   Main PID: 5028 (code=exited, status=0/SUCCESS)
        CPU: 378ms

Jan 28 17:06:33 gigabyte.pamplona.name systemd[1]: Starting Oracle ASM Service...
Jan 28 17:06:33 gigabyte.pamplona.name oracleasm.init[5042]: Mounting oracleasm driver filesystem: Not applicable with UEK8
Jan 28 17:06:33 gigabyte.pamplona.name oracleasm.init[5053]: Reloading disk partitions: done
Jan 28 17:06:33 gigabyte.pamplona.name oracleasm.init[5053]: Cleaning any stale ASM disks...
Jan 28 17:06:33 gigabyte.pamplona.name oracleasm.init[5053]: Setting up iofilter map for ASM disks: done
Jan 28 17:06:33 gigabyte.pamplona.name oracleasm.init[5065]: Scanning system for ASM disks...
Jan 28 17:06:33 gigabyte.pamplona.name oracleasm.init[5081]: Disk scan successful
Jan 28 17:06:33 gigabyte.pamplona.name systemd[1]: Finished Oracle ASM Service.

How to format the disks

Now we create the ASM disks. In my case, as this is a small test system, they have to be partitions on a single disk; in production we would create a single partition on each dedicated disk.

[root@gigabyte ~]# oracleasm  createdisk DATA01 /dev/sda3
Writing disk header: done
Instantiating disk: done
[root@gigabyte ~]# oracleasm  createdisk FRA01 /dev/sda4
Writing disk header: done
Instantiating disk: done

[root@gigabyte ~]# oracleasm  scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Setting up iofilter map for ASM disks: done
Scanning system for ASM disks...

[root@gigabyte ~]# oracleasm  listdisks
DATA01
FRA01
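
If you want to double-check which device backs each label, oracleasm querydisk can resolve it (the -p flag prints the matching device path):

oracleasm querydisk -p DATA01
oracleasm querydisk -p FRA01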

Avoiding problems with SELinux

To avoid SELinux problems in the steps that follow, we run:

semanage fcontext -a -e /bin /u01/app/grid/bin
semanage fcontext -a -e /lib /u01/app/grid/lib
semanage fcontext -a -e /etc /etc/oracle/scls_scr
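
These rules only register file-context equivalences. To apply them to trees that already exist, the usual follow-up is restorecon (a sketch; /etc/oracle/scls_scr will only exist once the CRS has been configured):

restorecon -RFv /u01/app/grid
restorecon -RFv /etc/oracle/scls_scr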

Configuring the CRS/HAS

Once the disks are created, we proceed to configure the CRS. To do so, as root,
we run:

[root@gigabyte ~]# export GI_HOME=/u01/app/grid
[root@gigabyte ~]#  $GI_HOME/perl/bin/perl -I $GI_HOME/perl/lib -I $GI_HOME/crs/install $GI_HOME/crs/install/roothas.pl
Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/gigabyte/crsconfig/roothas_2026-01-28_05-21-47PM.log
2026/01/28 17:21:52 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2026/01/28 17:23:12 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

gigabyte     2026/01/28 17:25:07     /u01/app/oracle/crsdata/gigabyte/olr/backup_20260128_172507.olr     2107015493
2026/01/28 17:25:09 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
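
At this point we can verify that the Oracle Restart stack is up (a sketch; the exact CRS message varies by version):

$GI_HOME/bin/crsctl check has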

Oracle Net configuration

As the grid user, go to $ORACLE_HOME/network/admin and create the files
sqlnet.ora

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
DIAG_ADR_ENABLED=ON
SQLNET.EXPIRE_TIME= 10
SQLNET.INBOUND_CONNECT_TIMEOUT=60

listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gigabyte.pamplona.name)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
USE_SID_AS_SERVICE_LISTENER=ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER=ON
LOG_FILE_SIZE_LISTENER=50
LOG_FILE_NUM_LISTENER=365

And then we create the listener service

[grid@gigabyte admin]$ $GI_HOME/bin/srvctl add listener -listener LISTENER -oraclehome $GI_HOME

[grid@gigabyte admin]$ $GI_HOME/bin/srvctl start listener

[grid@gigabyte admin]$ $GI_HOME/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       gigabyte                 STABLE
ora.ons
               OFFLINE OFFLINE      gigabyte                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       gigabyte                 STABLE
--------------------------------------------------------------------------------

Creating ASM

Now we create ASM with:

[grid@gigabyte admin]$ $GI_HOME/bin/asmca -silent \
      -configureASM \
      -sysAsmPassword CHANGE.me.26ai \
      -asmsnmpPassword CHANGE.me.26ai \
      -diskString "ORCL:*"  \
      -diskGroupName DATA \
      -disk "ORCL:DATA*" \
      -param ASM_POWER_LIMIT=1 \
      -param DIAGNOSTIC_DEST=$ORACLE_BASE \
      -param AUDIT_SYS_OPERATIONS=TRUE \
      -redundancy EXTERNAL



ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-260128PM053656.log for details.

Then we create the remaining disk groups along these lines:

$GI_HOME/bin/asmca -silent \
      -createDiskGroup \
      -sysAsmPassword CHANGE.me.26ai \
      -asmsnmpPassword CHANGE.me.26ai \
      -diskString "ORCL:*"  \
      -diskGroupName FRA \
      -disk "ORCL:FRA*" \
      -param ASM_POWER_LIMIT=1 \
      -param DIAGNOSTIC_DEST=$ORACLE_BASE \
      -param AUDIT_SYS_OPERATIONS=TRUE \
      -redundancy EXTERNAL
[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-260128PM053938.log for details.

With this, we now have ASM created and running:

[grid@gigabyte admin]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       gigabyte                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       gigabyte                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       gigabyte                 STABLE
ora.asm
               ONLINE  ONLINE       gigabyte                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      gigabyte                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       gigabyte                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       gigabyte                 STABLE
--------------------------------------------------------------------------------
[grid@gigabyte admin]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576      4768     4701                0            4701              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  1048576      4768     4704                0            4704              0             N  FRA/
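
As a final check, we can also connect to the ASM instance directly (a sketch; +ASM is the default SID for a standalone Grid Infrastructure home):

export ORACLE_SID=+ASM
sqlplus / as sysasm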

Next step... creating a 26ai database.

Basic RAC commands II: adding databases

Today we continue with the series of posts on basic RAC administration commands.
This time we will focus on adding databases and moving cluster resources.

Adding a database

To add a database, its init.ora will need to contain the following parameters:

  • CLUSTER_DATABASE=TRUE
  • CLUSTER_DATABASE_INSTANCES=2
  • TEST1.INSTANCE_NUMBER=1
  • TEST2.INSTANCE_NUMBER=2
  • TEST1.THREAD=1
  • TEST2.THREAD=2
  • TEST1.UNDO_TABLESPACE='UNDOTBS1'
  • TEST2.UNDO_TABLESPACE='UNDOTBS2'

Of course, we will need as many UNDO tablespaces and REDO threads as nodes we plan to run; a sketch of that preparation follows.
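
A minimal sketch of that preparation from a running instance (the disk group names and sizes here are illustrative, not taken from the original commands):

sqlplus / as sysdba <<EOF
-- second undo tablespace, to be used by instance 2
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON;
-- redo log groups for thread 2, then enable the thread
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 ('+REDO1') SIZE 512M;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 ('+REDO2') SIZE 512M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;
EOF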
Once we have this, we register the database in the CRS with the commands

    srvctl add database -db TEST -spfile +DATA/TEST/spfileTEST.ora -diskgroup "DATA,FRA,REDO1,REDO2" -oraclehome $ORACLE_HOME
    srvctl add instance -db TEST -instance TEST1 -node rac1.pamplona.name
    srvctl start database -db TEST
    srvctl add instance -db TEST -instance TEST2 -node rac2.pamplona.name
    srvctl start instance -db TEST -instance TEST2
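
To confirm the registration we can then run (output omitted):

    srvctl config database -db TEST
    srvctl status database -db TEST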
    

More RAC-for-dummies posts:
Basic commands in Oracle RAC
Basic RAC commands II
Removing a node from the RAC

The RAC gets stuck in [ROLLING PATCH] state

Today's post covers something that can look very scary but has a very simple solution.
Suppose that after applying a series of patches we check the software version of our RAC and find the following:

    [oracle@rac1~]$ sudo  $ORACLE_HOME/bin/crsctl query  crs  activeversion -f
    Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [724960844].
    
    Oracle Clusterware patch level on node rac1 is [2701864972].
    [oracle@rac1~]$ sudo  $ORACLE_HOME/bin/crsctl query  crs  softwarepatch rac2
    Oracle Clusterware patch level on node rac2 is [387459443].
    

Somehow, in a way we do not quite understand (or maybe we do), after the patching run has finished the patch levels of the two nodes are not the same.

What do we do?

Let's see which patches we actually have installed.
The first thing that comes to mind is to reach for OPatch
and run
$ORACLE_HOME/OPatch/opatch lsinventory
or
$ORACLE_HOME/OPatch/opatch lspatches
But, to our despair, it turns out OPatch reports the same patches installed on both nodes.
What do we do now?

The solution lies in patchgen

Let's look at what is really installed on the nodes.
For that, on both nodes, we use the command
$ORACLE_HOME/bin/kfod op=patches

    [oracle@rac1~]$  $ORACLE_HOME/bin/kfod op=patches
    ---------------
    List of Patches
    ===============
    30489227
    30489632
    30557433
    30655595
    
    [oracle@rac2~]$ $ORACLE_HOME/bin/kfod op=patches
    ---------------
    List of Patches
    ===============
    29517242
    29517247
    29585399
    30489227
    30489632
    30557433
    30655595
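
Rather than eyeballing the two listings, the difference can be extracted with a quick diff (a sketch; it assumes the same ORACLE_HOME path on both nodes and ssh access between them):

    diff <($ORACLE_HOME/bin/kfod op=patches) \
         <(ssh rac2 "$ORACLE_HOME/bin/kfod op=patches")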
    

As we can see, rac2 shows 3 patches that are not present on rac1.
The next step should be to find out what those patches are and decide whether we want to apply them where they are missing, or remove them where they are present.
Although removing a patch is usually more involved than applying one, here we will take the second option and remove those 3 patches from rac2.

To remove them, the first thing we have to do, as root, is:

    . oraenv
    $ORACLE_HOME/crs/install/rootcrs.sh -prepatch
    

And after this, we remove the patches with

    $ORACLE_HOME/bin/patchgen commit -rb 29517242 
    $ORACLE_HOME/bin/patchgen commit -rb 29517247
    $ORACLE_HOME/bin/patchgen commit -rb 29585399
    

Once they are removed, we check again with kfod that only the desired patches remain, and only then do we close the operation with (again as root)

     $ORACLE_HOME/crs/install/rootcrs.sh -postpatch
    

After this, we just have to check that the cluster state is NORMAL and that the versions and patch levels are the expected ones

    [oracle@rac1~]$ crsctl query crs softwarepatch -all
    Oracle Clusterware patch level on node rac1 is [2701864972].
    [oracle@rac1~]$ crsctl query crs activeversion  -f
    Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2701864972].
    [oracle@rac1~]$ crsctl query crs releasepatch
    Oracle Clusterware release patch level is [2701864972] and the complete list of patches [30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].
    [oracle@rac2~]$ crsctl query crs releasepatch
    Oracle Clusterware release patch level is [2701864972] and the complete list of patches [30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].
    

More information, as always, in the Oracle documentation:

    • Troubleshooting OPatchAuto
    • KFOD, KFED, AMDU (Doc ID 1485597.1)
    • KFED Tool For Windows OS (Doc ID 1180491.1)
    • KFED.PL for diagnosing ORA-15036 ORA-15042 ORA-15020 ORA-15033 (Doc ID 1346190.1)
    • Where to find kfed utility before Oracle Grid Infrastructure is installed (Doc ID 1505005.1)

Basic commands in Oracle RAC

Today we go back to the for-dummies posts, this time with the basic RAC commands.

How do we stop a RAC?

The simplest way is, with root permissions, via the commands

    export ORACLE_SID=+ASM1
    export ORAENV_ASK=NO
    . oraenv
    sudo $ORACLE_HOME/bin/crsctl stop crs
    sudo $ORACLE_HOME/bin/crsctl disable crs 
    

To start it back up, we run

    export ORACLE_SID=+ASM1
    export ORAENV_ASK=NO
    . oraenv
    sudo  $ORACLE_HOME/bin/crsctl enable crs
    sudo $ORACLE_HOME/bin/crsctl start crs  
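
Once it is back up, a quick health check (as root; output varies by version):

    sudo $ORACLE_HOME/bin/crsctl check crs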
    

How do we start/stop a database?

    srvctl stop database -d $DB_NAME
    
    srvctl start database -d $DB_NAME
    

How do we start/stop an instance on a node?

We can do it in several ways

    srvctl start instance -d $DB_NAME -n $NODE_NAME
    srvctl start instance -d $DB_NAME -i $INSTANCE_NAME
    

Stopping is similar, just swapping start for stop

    srvctl stop instance -d $DB_NAME -n $NODE_NAME
    srvctl stop instance -d $DB_NAME -i $INSTANCE_NAME
    

Stopping dedicated RAC components

There are some dedicated RAC components that do not work with the standard syntax. These are:

    • Management database
    • ASM proxy

Managing the Management database

The operations we can perform on the management database are start, stop, and relocate

    srvctl start mgmtdb -n  $NODENAME 
    
    srvctl stop mgmtdb -n $NODENAME 
    
    srvctl relocate mgmtdb -n $OTRO_NODO
    

Managing the ASM proxy

    crsctl start res ora.proxy_advm -n $NODENAME

    crsctl stop res ora.proxy_advm -n $NODENAME
    

Commands for the CRS

See the post on checking cluster versions.

Commands for the OCR

See the post Oracle Cluster Registry OCR (grid components).

Commands for the voting disks

See the posts:
Voting disk redundancy in ASM
Voting disk (grid components)

Commands for ADVM

Introduction to ADVM


Uninstalling Grid Infrastructure

After a long period of inactivity, here is a short post on how to uninstall Oracle Grid Infrastructure.

Unlike a plain Oracle database, where deleting the binaries, the inventory, and the oratab leaves a (Unix) system clean enough to carry out a reinstall, a Grid Infrastructure that has to be reinstalled because the installation ran into problems, or that we simply want to remove, cannot be ripped out by brute force.

The way to carry it out is very simple.
We just have to go to $GRID_HOME/deinstall and run ./deinstall
This process has to be run as the grid owner and, just like the installation, at a certain point it will ask us to run a command as root.

    [grid@serverpruebas ]$ cd $GRID_HOME/deinstall
    [grid@serverpruebas ]$ ls
    bootstrap.pl         bootstrap_files.lst  deinstall            deinstall.pl         deinstall.xml        jlib                 readme.txt           response             sshUserSetup.sh      utl
    [grid@serverpruebas ]$ ./deinstall
    Checking for required files and bootstrapping ...
    Please wait ...
    Location of logs /tmp/deinstall2016-11-03_09-31-33PM/logs/
    
    ############ ORACLE DECONFIG TOOL START ############
    
    
    ######################### DECONFIG CHECK OPERATION START #########################
    ## [START] Install check configuration ##
    
    
    Checking for existence of the Oracle home location /opt/app/oracle/product/12.1.0/grid
    Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
    Oracle Base selected for deinstall is: /opt/app/oracle
    Checking for existence of central inventory location /opt/app/oraInventory
    Checking for existence of the Oracle Grid Infrastructure home /opt/app/oracle/product/12.1.0/grid
    
    ## [END] Install check configuration ##
    
    Traces log file: /tmp/deinstall2016-11-03_09-31-33PM/logs//crsdc_2016-11-03_09-32-36PM.log
    
    Network Configuration check config START
    
    Network de-configuration trace file location: /tmp/deinstall2016-11-03_09-31-33PM/logs/netdc_check2016-11-03_09-32-38-PM.log
    
    Network Configuration check config END
    
    Asm Check Configuration START
    
    ASM de-configuration trace file location: /tmp/deinstall2016-11-03_09-31-33PM/logs/asmcadc_check2016-11-03_09-32-39-PM.log
    
    ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
    Automatic Storage Management (ASM) instance is detected in this Oracle home /opt/app/oracle/product/12.1.0/grid.
    ASM Diagnostic Destination : /opt/app/oracle
    ASM Diskgroups :
    ASM diskstring : 
    Diskgroups will not be dropped
     If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you  want to modify above information (y|n) [n]:
    Database Check Configuration START
    
    Database de-configuration trace file location: /tmp/deinstall2016-11-03_09-31-33PM/logs/databasedc_check2016-11-03_09-32-50-PM.log
    
    Database Check Configuration END
    
    ######################### DECONFIG CHECK OPERATION END #########################
    
    
    ####################### DECONFIG CHECK OPERATION SUMMARY #######################
    Oracle Grid Infrastructure Home is: /opt/app/oracle/product/12.1.0/grid
    The following nodes are part of this cluster: null
    The cluster node(s) on which the Oracle home deinstallation will be performed are:null
    Oracle Home selected for deinstall is: /opt/app/oracle/product/12.1.0/grid
    Inventory Location where the Oracle home registered is: /opt/app/oraInventory
    ASM instance will be de-configured from this Oracle home
    Do you want to continue (y - yes, n - no)? [n]: y
    A log of this session will be written to: '/tmp/deinstall2016-11-03_09-31-33PM/logs/deinstall_deconfig2016-11-03_09-32-35-PM.out'
    Any error messages from this session will be written to: '/tmp/deinstall2016-11-03_09-31-33PM/logs/deinstall_deconfig2016-11-03_09-32-35-PM.err'
    
    ######################## DECONFIG CLEAN OPERATION START ########################
    Database de-configuration trace file location: /tmp/deinstall2016-11-03_09-31-33PM/logs/databasedc_clean2016-11-03_09-32-53-PM.log
    ASM de-configuration trace file location: /tmp/deinstall2016-11-03_09-31-33PM/logs/asmcadc_clean2016-11-03_09-32-53-PM.log
    ASM Clean Configuration START
    ASM Clean Configuration END
    
    Network Configuration clean config START
    
    Network de-configuration trace file location: /tmp/deinstall2016-11-03_09-31-33PM/logs/netdc_clean2016-11-03_09-33-02-PM.log
    
    De-configuring Listener configuration file...
    Listener configuration file de-configured successfully.
    
    De-configuring Naming Methods configuration file...
    Naming Methods configuration file de-configured successfully.
    
    De-configuring backup files...
    Backup files de-configured successfully.
    
    The network configuration has been cleaned up successfully.
    
    Network Configuration clean config END
    
    
    ---------------------------------------->
    
    Run the following command as the root user or the administrator on node "serverpruebas-m".
    
    /tmp/deinstall2016-11-03_09-31-33PM/perl/bin/perl -I/tmp/deinstall2016-11-03_09-31-33PM/perl/lib -I/tmp/deinstall2016-11-03_09-31-33PM/crs/install /tmp/deinstall2016-11-03_09-31-33PM/crs/install/roothas.pl -force  -deconfig -paramfile "/tmp/deinstall2016-11-03_09-31-33PM/response/deinstall_OraGI12Home1.rsp"
    
    Press Enter after you finish running the above commands
    
    <----------------------------------------
    
    
    
    ######################### DECONFIG CLEAN OPERATION END #########################
    
    
    ####################### DECONFIG CLEAN OPERATION SUMMARY #######################
    ASM instance was de-configured successfully from the Oracle home
    The stopping and de-configuring of Oracle Restart failed. Fix the problem and rerun this tool to completely remove the Oracle Restart configuration and the software
    Oracle Restart was already stopped and de-configured on node "serverpruebas-m"
    Oracle Restart is stopped and de-configured successfully.
    #######################################################################
    
    
    ############# ORACLE DECONFIG TOOL END #############
    
    Using properties file /tmp/deinstall2016-11-03_09-31-33PM/response/deinstall_2016-11-03_09-32-35-PM.rsp
    Location of logs /tmp/deinstall2016-11-03_09-31-33PM/logs/
    
    ############ ORACLE DEINSTALL TOOL START ############
    
    
    
    
    
    ####################### DEINSTALL CHECK OPERATION SUMMARY #######################
    A log of this session will be written to: '/tmp/deinstall2016-11-03_09-31-33PM/logs/deinstall_deconfig2016-11-03_09-32-35-PM.out'
    Any error messages from this session will be written to: '/tmp/deinstall2016-11-03_09-31-33PM/logs/deinstall_deconfig2016-11-03_09-32-35-PM.err'
    
    ######################## DEINSTALL CLEAN OPERATION START ########################
    ## [START] Preparing for Deinstall ##
    Setting LOCAL_NODE to serverpruebas-m
    Setting CRS_HOME to true
    Setting oracle.installer.invPtrLoc to /tmp/deinstall2016-11-03_09-31-33PM/oraInst.loc
    Setting oracle.installer.local to false
    
    ## [END] Preparing for Deinstall ##
    
    Setting the force flag to false
    Setting the force flag to cleanup the Oracle Base
    Oracle Universal Installer clean START
    
    
    Detach Oracle home '/opt/app/oracle/product/12.1.0/grid' from the central inventory on the local node : Done
    
    Failed to delete the directory '/opt/app/oracle/product/12.1.0/grid'. The directory is in use.
    Delete directory '/opt/app/oracle/product/12.1.0/grid' on the local node : Failed <<<<
    
    Delete directory '/opt/app/oraInventory' on the local node : Done
    
    Failed to delete the directory '/opt/app/oracle/product/12.1.0/grid'. The directory is in use.
    The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is not empty.
    
    Oracle Universal Installer cleanup was successful.
    
    Oracle Universal Installer clean END
    
    
    ## [START] Oracle install clean ##
    
    Clean install operation removing temporary directory '/tmp/deinstall2016-11-03_09-31-33PM' on node 'serverpruebas-m'
    
    ## [END] Oracle install clean ##
    
    
    ######################### DEINSTALL CLEAN OPERATION END #########################
    
    
    ####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
    Successfully detached Oracle home '/opt/app/oracle/product/12.1.0/grid' from the central inventory on the local node.
    Failed to delete directory '/opt/app/oracle/product/12.1.0/grid' on the local node.
    Successfully deleted directory '/opt/app/oraInventory' on the local node.
    Oracle Universal Installer cleanup was successful.
    
    
    Run 'rm -r /etc/oraInst.loc' as root on node(s) 'serverpruebas-m' at the end of the session.
    
    Run 'rm -r /opt/ORCLfmap' as root on node(s) 'serverpruebas-m' at the end of the session.
    Run 'rm -r /etc/oratab' as root on node(s) 'serverpruebas-m' at the end of the session.
    Oracle deinstall tool successfully cleaned up temporary directories.
    #######################################################################
    
    
    ############# ORACLE DEINSTALL TOOL END #############
    

As you can see, remarkably simple if it is done in an orderly way.