A Summary of Commands in an Oracle RAC Cluster

Oracle Clusterware commands fall into the following four layers:
Node layer: olsnodes
Network layer: oifcfg
Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig
Application layer: srvctl, onsctl, crs_stat
Each of these commands is introduced below.

Node layer: olsnodes
The olsnodes command lists the nodes in the cluster. Its available options are shown below.

Display the command's help:

[grid@rac1 ~]$ olsnodes --help
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [ | -l [-p]] [-a] ] | [-f] | [-c] ] [-g] [-v]

     where
             -n print node number with the node name
             -p print private interconnect address for the local node
             -i print virtual IP name or address with the node name
             <node> print information for the specified node
             -l print information for the local node 
             -s print node status - active or inactive 
             -t print node type - pinned or unpinned 
             -g turn on logging 
             -v Run in debug mode; use at direction of Oracle Support only.
             -c print clusterware name
             -a print active node roles of the nodes in the cluster
             -f print historical Leaf Nodes (active and recent)

Check node status:

[grid@rac1 ~]$ olsnodes -s //check the status of both nodes
rac1 Active
rac2 Active

[grid@rac1 ~]$ olsnodes -c
rac19c
[grid@rac1 ~]$ olsnodes -n -i -s -t
rac1 1 Active Unpinned
rac2 2 Active Unpinned
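
Per the help output above, -l and -p can be combined to show the local node together with its private interconnect address; a minimal sketch (output omitted, since it depends on the environment):

[grid@rac1 ~]$ olsnodes -l -p //print the local node name and its private interconnect address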

Network layer: oifcfg
The OIFCFG command defines and manages network interfaces: it can allocate and de-allocate network interfaces to components, direct components to use specific network interfaces, and retrieve component configuration information.
OIFCFG is mainly used to display subnet information. It exists only under the Grid home's bin directory and is used relatively rarely; the root, grid, and oracle users can all run it.

List the subnets of all network interfaces:

[grid@rac1 ~]$ oifcfg iflist
ens192 10.16.35.0
ens224 10.16.16.0
ens224 169.254.0.0
virbr0 192.168.122.0

Show the interfaces registered with the cluster:

[grid@rac1 ~]$ oifcfg getif
ens192 10.16.35.0 global public
ens224 10.16.16.0 global cluster_interconnect,asm
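
Besides iflist and getif, oifcfg can also register and deregister interfaces for the cluster with setif and delif. A hedged sketch that reuses the interface names and subnets from the getif output above (only run these when you actually intend to change the cluster's network configuration):

[grid@rac1 ~]$ oifcfg setif -global ens192/10.16.35.0:public //register ens192 as the public interface cluster-wide
[grid@rac1 ~]$ oifcfg setif -global ens224/10.16.16.0:cluster_interconnect,asm //register ens224 for the interconnect and ASM network
[grid@rac1 ~]$ oifcfg delif -global ens192/10.16.35.0 //deregister an interface definition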

Cluster layer: crsctl (one of the two most frequently used commands), ocrcheck, ocrdump, ocrconfig
CRSCTL is your interface to Oracle Clusterware; it parses and calls the Oracle Clusterware APIs for Oracle Clusterware objects.
crsctl exists only under the Grid home's bin directory and is used to manage the clusterware. Most of the query-type commands are run as the grid user, while operations such as starting and stopping are run as root. See the examples below.

Check the status of all clusterware resources:

 [grid@rac1 ~]$ crsctl stat res -t  //check the status of all clusterware resources
 --------------------------------------------------------------------------------
 Name           Target  State        Server                   State details       
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.LISTENER.lsnr
                ONLINE  ONLINE       rac1                     STABLE
                ONLINE  ONLINE       rac2                     STABLE
 ora.chad
                ONLINE  ONLINE       rac1                     STABLE
                ONLINE  ONLINE       rac2                     STABLE
 ora.net1.network
                ONLINE  ONLINE       rac1                     STABLE
                ONLINE  ONLINE       rac2                     STABLE
 ora.ons
                ONLINE  ONLINE       rac1                     STABLE
                ONLINE  ONLINE       rac2                     STABLE
 
 ...............................................................
 ora.cvu
       1        ONLINE  ONLINE       rac1                     STABLE
 ora.qosmserver
       1        ONLINE  ONLINE       rac1                     STABLE
 ora.rac1.vip
       1        ONLINE  ONLINE       rac1                     STABLE
 ora.rac2.vip
       1        ONLINE  ONLINE       rac2                     STABLE
 ora.racdb.db
       1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/o
                                                              racle/product/19.3.0
                                                              /db_1,STABLE
       2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/o
                                                              racle/product/19.3.0
                                                              /db_1,STABLE
 ora.scan1.vip
       1        ONLINE  ONLINE       rac1                     STABLE
 --------------------------------------------------------------------------------

Query, add, or replace voting disk information:

[grid@rac1 ~]$ crsctl query css votedisk //query voting disk information
##  STATE    File Universal Id                File Name  Disk group
--  -----    -----------------                ---------  ----------
 1. ONLINE   78d77b08c21c4fbebf0cbb8180468fff (AFD:ORS1) [ORS]
 2. ONLINE   d3a7d6b4a07d4f3bbf20faa032ef7b9d (AFD:ORS2) [ORS]
 3. ONLINE   c56e3e7993554f40bf69e3b287edec01 (AFD:ORS3) [ORS]
Located 3 voting disk(s).

[grid@rac1 ~]$ crsctl add css votedisk /sharednfs/vote2
[grid@rac1 ~]$ crsctl replace votedisk +DATA
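
The counterpart for removal is crsctl delete css votedisk, which accepts a file path or the File Universal Id shown by the query above. Note that when the voting files live in an ASM disk group (as in the AFD:ORS listing here), individual add/delete is not allowed and crsctl replace votedisk is the supported way to move them; a sketch for the shared-file-system case:

[grid@rac1 ~]$ crsctl delete css votedisk /sharednfs/vote2 //remove a voting file added on shared storage
[grid@rac1 ~]$ crsctl delete css votedisk 78d77b08c21c4fbebf0cbb8180468fff //or remove it by its File Universal Id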

View server information:

[grid@rac1 ~]$ crsctl status server -f
NAME=rac1
MEMORY_SIZE=7820
CPU_COUNT=8
CPU_CLOCK_RATE=2128
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=0
SITE_NAME=rac19c
STATE=ONLINE
ACTIVE_POOLS=Generic ora.racdb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

NAME=rac2
MEMORY_SIZE=7820
CPU_COUNT=8
CPU_CLOCK_RATE=2128
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=0
SITE_NAME=rac19c
STATE=ONLINE
ACTIVE_POOLS=Generic ora.racdb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

Check CRS status:

 [grid@rac1 ~]$ crsctl check crs 
 CRS-4638: Oracle High Availability Services is online
 CRS-4537: Cluster Ready Services is online
 CRS-4529: Cluster Synchronization Services is online
 CRS-4533: Event Manager is online

Check the Grid Infrastructure version:

 [grid@rac1 ~]$ crsctl query crs activeversion
 Oracle Clusterware active version on the cluster is [19.0.0.0.0]
 [grid@rac1 ~]$ crsctl query crs releaseversion
 Oracle High Availability Services release version on the local node is [19.0.0.0.0]
 [grid@rac1 ~]$ crsctl query crs softwareversion
 Oracle Clusterware version on node [rac1] is [19.0.0.0.0]

Configure cluster autostart (run as root on every node)

If the root account cannot find crsctl and related commands, configure its environment variables:

 [root@rac1 ~]# pwd
 /root
 [root@rac1 ~]# vi .bash_profile   //add the following lines
 export ORACLE_HOME=/u01/app/19.3.0/grid;
 export PATH=$ORACLE_HOME/bin:$PATH;
 [root@rac1 ~]# source .bash_profile 

After that, root can run the grid-related commands as well.

[root@rac1 ~]# crsctl -h
 Usage:
        crsctl add       - add a resource, type or other entity
        crsctl check     - check the state or operating status of a service, resource, or other entity
        crsctl config    - display automatic startup configuration
        crsctl create    - display entity creation options
        crsctl debug     - display or modify debug state
        crsctl delete    - delete a resource, type or other entity
        crsctl disable   - disable automatic startup
        crsctl discover  - discover DHCP server
        crsctl enable    - enable automatic startup
        crsctl eval      - evaluate operations on resource or other entity without performing them
        crsctl export    - export entities
        crsctl get       - get an entity value
        crsctl getperm   - get entity permissions
        crsctl lsmodules - list debug modules
        crsctl modify    - modify a resource, type or other entity
        crsctl pin       - make the leases of specified nodes immutable
        crsctl query     - query service state
        crsctl release   - release a DHCP lease
        crsctl relocate  - relocate a resource, server or other entity
        crsctl replace   - change the location of voting files
        crsctl request   - request a DHCP lease or an action entry point
        crsctl set       - set an entity value
        crsctl setperm   - set entity permissions
        crsctl start     - start a resource, server or other entity
        crsctl status    - get status of a resource or other entity
        crsctl stop      - stop a resource, server or other entity
        crsctl unpin     - make the leases of previously pinned nodes mutable
        crsctl unset     - unset an entity value, restoring its default

Under the hood, changing cluster autostart simply modifies the file /etc/oracle/scls_scr/rac1/root/ohasdstr:

[root@rac1 ~]# cat /etc/oracle/scls_scr/rac1/root/ohasdstr
enable
[root@rac1 ~]# crsctl config crs //check whether the cluster autostarts; here autostart is enabled
CRS-4622: Oracle High Availability Services autostart is enabled.

[root@rac1 ~]# crsctl disable crs //disable cluster autostart
CRS-4621: Oracle High Availability Services autostart is disabled.

[root@rac1 ~]# cat /etc/oracle/scls_scr/rac1/root/ohasdstr //check the file again
disable

[root@rac1 ~]# crsctl enable crs //re-enable cluster autostart
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@rac1 ~]# cat /etc/oracle/scls_scr/rac1/root/ohasdstr
enable

Starting and stopping the cluster stack on a node (crsctl start/stop crs operates only on the local node's resources, including OHAS, so it does affect OHAS)

 [root@rac1 ~]# crsctl stop crs  //stop the cluster stack on the current node
 CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
 CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
 CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac1'
 CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
 CRS-2673: Attempting to stop 'ora.chad' on 'rac1'
 CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'rac1'
 CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac1'
 CRS-2673: Attempting to stop 'ora.ORS.dg' on 'rac1'
 CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
 CRS-2677: Stop of 'ora.FRA.dg' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.ORS.dg' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
 CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
 CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
 CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.chad' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on 'rac1'
 CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'rac1' succeeded
 CRS-33677: Stop of resource group 'ora.asmgroup' on server 'rac1' succeeded.
 CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
 CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
 CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
 CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
 CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
 CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
 CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
 CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
 CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
 CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
 CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
 CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
 CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
 CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
 CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac1'
 CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
 CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
 CRS-2677: Stop of 'ora.driver.afd' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
 CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
 CRS-4133: Oracle High Availability Services has been stopped.
 [root@rac1 ~]# crsctl check crs //check the cluster status on the current node
 CRS-4639: Could not contact Oracle High Availability Services

 [root@rac1 ~]# crsctl start crs  //start the cluster stack on the current node
 CRS-4123: Oracle High Availability Services has been started.
 [root@rac1 ~]# crsctl check crs  //check the cluster stack on the current node
 CRS-4638: Oracle High Availability Services is online
 CRS-4537: Cluster Ready Services is online
 CRS-4529: Cluster Synchronization Services is online
 CRS-4533: Event Manager is online

Starting and stopping cluster resources across nodes (crsctl start/stop cluster with -all acts on all nodes; without it, only the current node is affected, and OHAS must already be running; these commands do not affect OHAS)

 [root@rac1 ~]# crsctl stop cluster
 CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
 CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac1'
 CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
 CRS-2673: Attempting to stop 'ora.chad' on 'rac1'
 CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'rac1'
 CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac1'
 CRS-2673: Attempting to stop 'ora.ORS.dg' on 'rac1'
 CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
 CRS-2677: Stop of 'ora.FRA.dg' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.ORS.dg' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
 CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
 CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
 CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on 'rac1'
 CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'rac1' succeeded
 CRS-33677: Stop of resource group 'ora.asmgroup' on server 'rac1' succeeded.
 CRS-2677: Stop of 'ora.chad' on 'rac1' succeeded
 CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
 CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
 CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
 CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
 CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
 CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
 CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
 CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
 CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
 CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
 CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
 CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
 CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
 CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

 [root@rac1 ~]# crsctl check crs  //the status shows that OHAS was not shut down
 CRS-4638: Oracle High Availability Services is online
 CRS-4535: Cannot communicate with Cluster Ready Services
 CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
 CRS-4534: Cannot communicate with Event Manager

 [root@rac1 ~]# crsctl check cluster
 CRS-4535: Cannot communicate with Cluster Ready Services
 CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
 CRS-4534: Cannot communicate with Event Manager

Stop all nodes:

 [root@rac1 ~]# crsctl stop cluster -all
 CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
 CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac2'
 CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
 CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac2'
 CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
 CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
 CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'rac2'
 CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac2'
 CRS-2673: Attempting to stop 'ora.ORS.dg' on 'rac2'
 CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
 CRS-2673: Attempting to stop 'ora.chad' on 'rac2'
 CRS-2673: Attempting to stop 'ora.qosmserver' on 'rac2'
 CRS-2677: Stop of 'ora.rac1.vip' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.ORS.dg' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.FRA.dg' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
 CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac2'
 CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
 CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.scan1.vip' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2'
 CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.qosmserver' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.chad' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on 'rac2'
 CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'rac2' succeeded
 CRS-33677: Stop of resource group 'ora.asmgroup' on server 'rac2' succeeded.
 CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
 CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
 CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
 CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
 CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.storage' on 'rac2'
 CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
 CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
 CRS-2677: Stop of 'ora.storage' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
 CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
 CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
 CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
 CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
 CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
 CRS-4688: Oracle Clusterware is already stopped on server 'rac1'

Start a specific node:

[root@rac1 ~]# crsctl start cluster -n rac2    
 CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
 CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
 CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
 CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
 CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
 CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
 CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
 CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
 CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2'
 CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
 CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
 CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
 CRS-2672: Attempting to start 'ora.asm' on 'rac2'
 CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
 CRS-2672: Attempting to start 'ora.storage' on 'rac2'
 CRS-2676: Start of 'ora.storage' on 'rac2' succeeded
 CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
 CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded

Similarly, to start or stop a single node:

 [root@rac1 ~]# crsctl start cluster -n rac1
 [root@rac1 ~]# crsctl stop  cluster -n rac2

View server pool information:

[root@rac1 ~]# crsctl status serverpool
 NAME=Free
 ACTIVE_SERVERS=
 
 NAME=Generic
 ACTIVE_SERVERS=rac1 rac2

 NAME=ora.racdb
 ACTIVE_SERVERS=rac1 rac2

Check time synchronization on the current node:

[root@rac1 ~]# crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

ocrcheck

Displays the OCR block format version, the total and used space, the OCR ID, and the configured OCR locations.

[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84528
         Available space (kbytes) :     407156
         ID                       : 1552918072
         Device/File Name         :       +ORS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded
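
ocrcheck can also be pointed at the Oracle Local Registry or limited to showing the configured locations; a brief sketch (run as root, output omitted):

[root@rac1 ~]# ocrcheck -local //check the Oracle Local Registry (OLR) on this node
[root@rac1 ~]# ocrcheck -config //show only the configured OCR locations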

ocrconfig management commands

View manual OCR backups:

[root@rac1 ~]# ocrconfig -showbackup manual
PROT-25: Manual backups for the Oracle Cluster Registry are not available

View automatic OCR backups:

[root@rac1 ~]# ocrconfig -showbackup auto

rac2     2020/05/07 12:58:15     +ORS:/rac19c/OCRBACKUP/backup00.ocr.263.1039784283     724960844
rac2     2020/05/07 08:58:02     +ORS:/rac19c/OCRBACKUP/backup01.ocr.265.1039769871     724960844
rac2     2020/05/07 04:57:48     +ORS:/rac19c/OCRBACKUP/backup02.ocr.261.1039755457     724960844
rac1     2020/05/06 01:35:08     +ORS:/rac19c/OCRBACKUP/day.ocr.259.1039656909     724960844
rac1     2020/04/28 05:23:43     +ORS:/rac19c/OCRBACKUP/week.ocr.260.1038893025     724960844

Take a manual OCR backup:

[root@rac1 ~]# ocrconfig -manualbackup

rac2     2020/05/07 15:00:46     +ORS:/rac19c/OCRBACKUP/backup_20200507_150046.ocr.258.1039791647     724960844 

Configure the OCR backup location:

[root@rac1 /]# ocrconfig -backuploc /ocrbak
PROT-42: The specified location '/ocrbak' designates an invalid storage type for Oracle Cluster Registry backup files.
(In recent releases the backup location must be on shared storage, typically an ASM disk group, which is why the local path is rejected here.)


[root@rac1 /]# ocrconfig -add +DATA  //add a new OCR file on the disk group
[root@rac1 /]# ocrconfig -delete +DATA  //remove an OCR file from the disk group
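
ocrconfig also supports a logical export/import of the OCR and a physical restore from the backups listed by -showbackup. A hedged sketch: the export file name is a placeholder, the restore source is the newest automatic backup shown above, and the clusterware should be stopped on all nodes before an import or restore:

[root@rac1 /]# ocrconfig -export /backup/ocr_export.dmp //logical export of the OCR to a file
[root@rac1 /]# ocrconfig -import /backup/ocr_export.dmp //import a previous logical export
[root@rac1 /]# ocrconfig -restore +ORS:/rac19c/OCRBACKUP/backup00.ocr.263.1039784283 //restore from a physical backup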

ocrdump (viewing command)

This command prints the contents of the OCR in ASCII form. It cannot be used for OCR backup and recovery; the file it produces is for reading only and cannot be used for a restore.

Command format: ocrdump [-stdout] [filename] [-keyname name] [-xml]

Parameters:

    -stdout: print the contents to the screen

    filename: write the contents to a file

    -keyname: print only the specified key and its subkeys

    -xml: print the output in XML format

Example: print the contents of the system.css key to the screen in XML format

[root@raw1 bin]# ./ocrdump -stdout -keyname system.css -xml|more

<OCRDUMP>
<TIMESTAMP>03/08/2010 04:28:41</TIMESTAMP>
<DEVICE>/dev/raw/raw1</DEVICE>
<COMMAND>./ocrdump.bin -stdout -keyname system.css -xml </COMMAND>
......

While it runs, this command writes a log file named ocrdump_.log under the $CRS_HOME/log/<node_name>/client directory; if the command fails, check that log for the cause.
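
Following the command format above, ocrdump can also write to a file instead of stdout; a minimal sketch with a hypothetical output path (with no arguments it writes a file named OCRDUMPFILE in the current directory):

[root@rac1 ~]# ocrdump /tmp/ocrdump.txt //dump the whole OCR to a text file for review
[root@rac1 ~]# ocrdump -stdout -keyname SYSTEM //print just the SYSTEM key tree to the screen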

Application layer: srvctl (one of the two most frequently used commands), onsctl, and the crs_* commands
View the nodeapps configuration (VIP, GSD, ONS):

[oracle@rac1 ~]$ srvctl config nodeapps
 Network 1 exists
 Subnet IPv4: 10.16.35.0/255.255.255.0/ens192, static
 Subnet IPv6: 
 Ping Targets: 
 Network is enabled
 Network is individually enabled on nodes: 
 Network is individually disabled on nodes: 
 VIP exists: network number 1, hosting node rac1
 VIP Name: rac1-vip
 VIP IPv4 Address: 10.16.35.62
 VIP IPv6 Address: 
 VIP is enabled.
 VIP is individually enabled on nodes: 
 VIP is individually disabled on nodes: 
 VIP exists: network number 1, hosting node rac2
 VIP Name: rac2-vip
 VIP IPv4 Address: 10.16.35.63
 VIP IPv6 Address: 
 VIP is enabled.
 VIP is individually enabled on nodes: 
 VIP is individually disabled on nodes: 
 ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
 ONS is enabled
 ONS is individually enabled on nodes: 
 ONS is individually disabled on nodes:

View the nodeapps status (VIP, GSD, ONS):

 [oracle@rac1 ~]$ srvctl status nodeapps
 VIP 10.16.35.62 is enabled
 VIP 10.16.35.62 is running on node: rac1
 VIP 10.16.35.63 is enabled
 VIP 10.16.35.63 is running on node: rac2
 Network is enabled
 Network is running on node: rac1
 Network is running on node: rac2
 ONS is enabled
 ONS daemon is running on node: rac1
 ONS daemon is running on node: rac2
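
nodeapps can also be stopped and started per node with srvctl, which is occasionally needed for VIP/ONS maintenance. A hedged sketch using this cluster's node name; note that stopping nodeapps takes the node's VIP offline:

[oracle@rac1 ~]$ srvctl stop nodeapps -n rac1 //stop the VIP, ONS and network resources on rac1
[oracle@rac1 ~]$ srvctl start nodeapps -n rac1 //start them again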

View the database configuration:

 [oracle@rac1 ~]$ srvctl config database   //list all databases registered in the cluster
 racdb
 [oracle@rac1 ~]$ srvctl config database -d racdb  //view this RAC database's configuration
 Database unique name: racdb
 Database name: racdb
 Oracle home: /u01/app/oracle/product/19.3.0/db_1
 Oracle user: oracle
 Spfile: +DATA/RACDB/PARAMETERFILE/spfile.272.1038940639
 Password file: +DATA/RACDB/PASSWORD/pwdracdb.256.1038938679
 Domain: 
 Start options: open
 Stop options: immediate
 Database role: PRIMARY
 Management policy: AUTOMATIC
 Server pools: 
 Disk Groups: DATA
 Mount point paths: 
 Services: 
 Type: RAC
 Start concurrency: 
 Stop concurrency: 
 OSDBA group: dba
 OSOPER group: oper
 Database instances: racdb1,racdb2
 Configured nodes: rac1,rac2
 CSS critical: no
 CPU count: 0
 Memory target: 0
 Maximum memory: 0
 Default network number for database services: 
 Database is administrator managed

[oracle@rac1 ~]$ srvctl config database -d db_name -a   (adding -a also shows whether the database autostarts with the cluster)

View the database status:

[oracle@rac1 ~]$ srvctl status database -d racdb
 Instance racdb1 is running on node rac1
 Instance racdb2 is running on node rac2

 [oracle@rac1 ~]$ srvctl status instance -d racdb -i racdb1
 Instance racdb1 is not running on node rac1

Stopping and starting the database

In a RAC environment you can still start and stop the database from SQL*Plus, but srvctl is recommended because it keeps the runtime information registered in CRS up to date. Use the start/stop commands to start and stop an object, then use the status command to check its state.

0) Stop the database on all nodes:

 [oracle@rac1 ~]$ srvctl stop database -d racdb  //stop the database on all nodes
 [oracle@rac1 ~]$ srvctl status database -d racdb      //check the database status on all nodes
 Instance racdb1 is not running on node rac1
 Instance racdb2 is not running on node rac2
 1) Start the database (open by default):
 [oracle@rac1 ~]$ srvctl start database -d racdb
 2) Start to a specified state:
 [oracle@rac1 ~]$ srvctl start instance -d racdb -i racdb1 -o mount  //verified to work
 [oracle@rac1 ~]$ srvctl start instance -d racdb -i racdb1 -o nomount   //verified to work

 //[root@rac1 bin]# ./srvctl start database -d raw -i rac1 -o mount
 //[root@rac1 bin]# ./srvctl start database -d raw -i rac1 -o nomount
 3) Stop an object with a specified shutdown mode:
 [oracle@rac2 ~]$ srvctl stop instance -d racdb -i racdb1 -o immediate
 [oracle@rac2 ~]$ srvctl stop instance -d racdb -i racdb1 -o abort
 4) Start a service on a specified instance (creating such a service is sketched after this list):
 [root@rac1 bin]# ./srvctl start service -d raw -s rawservice -i rac1
 --check the service status
 [root@rac1 bin]# ./srvctl status service -d raw -v
 5) Stop a service on a specified instance:
 [root@rac1 bin]# ./srvctl stop service -d raw -s rawservice -i rac1
 --check the service status
 [root@rac1 bin]# ./srvctl status service -d raw -v
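
The rawservice service used in steps 4 and 5 must exist before it can be started; a sketch for creating one, with hypothetical service and instance names and the same single-letter options used throughout this post (-r preferred instances, -a available instances):

 [oracle@rac1 ~]$ srvctl add service -d racdb -s oltp_srv -r racdb1 -a racdb2 //hypothetical service preferring racdb1, failing over to racdb2
 [oracle@rac1 ~]$ srvctl start service -d racdb -s oltp_srv
 [oracle@rac1 ~]$ srvctl status service -d racdb -s oltp_srv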

View the listener configuration and status:

 [oracle@rac1 ~]$ srvctl config listener  //view the configuration
 Name: LISTENER
 Type: Database Listener
 Network: 1, Owner: grid
 Home: <CRS home>
 End points: TCP:1521
 Listener is enabled.
 Listener is individually enabled on nodes: 
 Listener is individually disabled on nodes:

 [oracle@rac1 ~]$ srvctl status listener //view the status
 Listener LISTENER is enabled
 Listener LISTENER is running on node(s): rac1,rac2

View the SCAN listener configuration and status (the SCAN listener cannot be inspected with lsnrctl, so use the two commands below together with srvctl config scan):

 [oracle@rac1 ~]$ srvctl  config scan_listener
 SCAN Listeners for network 1:
 Registration invited nodes: 
 Registration invited subnets: 
 Endpoints: TCP:1521
 SCAN Listener LISTENER_SCAN1 exists
 SCAN Listener is enabled.

 
 [oracle@rac1 ~]$ srvctl status scan_listener  //view the status
 SCAN Listener LISTENER_SCAN1 is enabled
 SCAN listener LISTENER_SCAN1 is running on node rac2

View the SCAN IP configuration and status (i.e., what the SCAN IP address actually is and where it runs):

 [oracle@rac1 ~]$ srvctl config scan  //what the SCAN IP address is
 SCAN name: rac19c-scan, Network: 1
 Subnet IPv4: 10.16.35.0/255.255.255.0/ens192, static
 Subnet IPv6: 
 SCAN 1 IPv4 VIP: 10.16.35.64
 SCAN VIP is enabled.

 
 [oracle@rac1 ~]$ srvctl status scan  //which node the SCAN IP is running on
 SCAN VIP scan1 is enabled
 SCAN VIP scan1 is running on node rac2
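
Since both the SCAN listener and the SCAN VIP above are running on rac2, they can be moved to another node with srvctl relocate; a hedged sketch (the -i value is the SCAN ordinal number from the status output):

 [oracle@rac1 ~]$ srvctl relocate scan_listener -i 1 -n rac1 //move SCAN listener 1 to rac1
 [oracle@rac1 ~]$ srvctl relocate scan -i 1 -n rac1 //move SCAN VIP 1 to rac1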

View the VIP configuration and status:

 [oracle@rac1 ~]$ srvctl config vip -n rac1  //configuration
 VIP exists: network number 1, hosting node rac1
 VIP Name: rac1-vip
 VIP IPv4 Address: 10.16.35.62
 VIP IPv6 Address: 
 VIP is enabled.
 VIP is individually enabled on nodes: 
 VIP is individually disabled on nodes: 


 [oracle@rac1 ~]$ srvctl status vip -n rac1  //status
 VIP 10.16.35.62 is enabled
 VIP 10.16.35.62 is running on node: rac1

Registering a database with the cluster and removing it

srvctl add database -d db_name -o $ORACLE_HOME
or add -p to pin the server parameter file location:
srvctl add database -d db_name -o $ORACLE_HOME -p +XX/spfilevm72.ora
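
For an administrator-managed database registered this way, each instance also needs to be registered, and the removal case mentioned in the heading is the mirror-image command; a hedged sketch keeping the placeholder names from above:

srvctl add instance -d db_name -i instance_name -n node_name --register one instance on a node
srvctl remove database -d db_name --deregister the database from the cluster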

Add a server pool:

srvctl add srvpool -g serverpoolname -n "node1, node2"   (server pools created with srvctl can only be used by databases, not by third-party applications)

Configure whether the database autostarts with the cluster (requires both "Management policy: AUTOMATIC" and "Database is enabled")

Testing shows that with crsctl start crs, once the clusterware on each node has been started individually, the database starts along with it; with crsctl start cluster, after the cluster restarts as a whole the database instances likewise start one by one.

[oracle@rac1 ~]$ srvctl modify database -d  racdb -y AUTOMATIC
[oracle@rac1 ~]$ srvctl enable database -d  racdb
PRCC-1010 : racdb was already enabled
PRCR-1002 : Resource ora.racdb.db is already enabled


srvctl modify database -d db_name -y MANUAL --once set, the database no longer autostarts
srvctl disable database -d db_name --once set, the database no longer autostarts
srvctl enable instance -d db_name -i instance_name --enable autostart for a specific instance
srvctl disable instance -d db_name -i instance_name --disable autostart for a specific instance
srvctl config database -d db_name -a   (adding -a shows whether the database autostarts with the cluster)

View the ASM status:

[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac1,rac2
ASM is enabled.
ASM instance +ASM1 is running on node rac1
Number of connected clients: 2
Client names: rac1:_OCR:rac19c racdb1:racdb:rac19c
ASM instance +ASM2 is running on node rac2
Number of connected clients: 2
Client names: rac2:_OCR:rac19c racdb2:racdb:rac19c
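
Related srvctl queries on the storage side, in the same style as above; a brief sketch (output omitted):

[grid@rac1 ~]$ srvctl config asm -a //show the ASM configuration, including whether ASM is enabled
[grid@rac1 ~]$ srvctl status diskgroup -g DATA //show on which nodes the DATA disk group is running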

————————————————
Copyright notice: this is an original article by CSDN blogger 「看不见阿唱」, published under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_36065860/article/details/106011806
