
Production case 2 (rebuilding the master):

Specify the data directory locations; this is needed when the backup is taken outside the container in a Docker environment.

vim /soft/mysql.cnf
[mysqld]
datadir=/root/.data/mysql/data
log-bin=/root/.data/mysql/data/mysql-bin
log-bin-index=/root/.data/mysql/data/mysql-bin.index

Backup

./xtrabackup --defaults-file=/soft/mysql.cnf --host=10.12.6.229 --user=root --password='MGR_12345678' --port=3308 --backup --target-dir /soft/full-database-20221017

Transfer

scp -r /soft/full-database-20221017 root@10.12.6.233:/soft/

Restore on the node being rebuilt

./xtrabackup --prepare --target-dir=/soft/full-database-20221017
docker stop mysql-mgr-m
rm -rf /root/.data/mysql/data/*
rsync -avrP /soft/full-database-20221017/ --exclude='xtrabackup_*' /root/.data/mysql/data/ > /soft/rsync.log
docker start mysql-mgr-m

Check the last GTID transaction recorded in the backup

cat /soft/full-database-20221017/xtrabackup_binlog_info

mysql-bin.000515 224 aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-182146236:182880967-182916144

Connect to the database and start group replication

docker exec -it mysql-mgr-m /bin/bash

mysql -uroot -p'xxxxxxxx'

stop group_replication;
SET SQL_LOG_BIN=0;
reset master;
set @@GLOBAL.GTID_PURGED= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-182146236:182880967-182916144';
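-- Optional sanity check (not part of the original procedure): the value reported
-- here should match the GTID set taken from xtrabackup_binlog_info above.
SELECT @@GLOBAL.GTID_PURGED;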
SET SQL_LOG_BIN=1;
start group_replication;

Check

select * from performance_schema.replication_group_members;
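Once distributed recovery finishes, the rebuilt node should report MEMBER_STATE = 'ONLINE' in the query above. As an additional, optional check (a sketch, not part of the original write-up), the group replication applier/certifier statistics can be inspected as well:

select * from performance_schema.replication_group_member_stats;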

SQL> alter session set workarea_size_policy=manual; -- allow the sort area to be sized manually instead of being managed automatically through the PGA

Session altered

SQL> alter session set sort_area_size=1073741824; -- enlarge the sort area for the current session

Session altered

SQL> alter session set sort_area_retained_size=1073741824; -- memory retained after the sort for returning the sorted rows

Session altered

SQL> alter session set db_file_multiblock_read_count=128; -- increase the number of blocks read per multiblock I/O

Session altered

SQL> set timing on;
SQL> set serveroutput on;

SQL> create index IDX_PRWAY_BILLID_update_time on TMS_WAY_BILL (update_time) nologging online parallel 4
2 tablespace IDEAS_DATA
3 pctfree 10
4 initrans 10
5 maxtrans 255
6 storage
7 (
8 initial 64K
9 next 1M
10 minextents 1
11 maxextents unlimited
12 ) ;

Index created

Executed in 1435.04 seconds

SQL> alter index IDX_PRWAY_BILLID_update_time noparallel ;

Index altered

Executed in 0.637 seconds

Monitoring index usage:
alter index IX_USERINFO$SALARY monitoring usage ;
alter index IX_USERINFO$SALARY nomonitoring usage ;
select * from v$object_usage;

-- query the monitoring history (V$OBJECT_USAGE only shows indexes owned by the current user)
select u.name owner,
       io.name index_name,
       t.name table_name,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES') monitoring,
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES') used,
       ou.start_monitoring start_monitoring,
       ou.end_monitoring end_monitoring
  from sys.user$ u,
       sys.obj$ io,
       sys.obj$ t,
       sys.ind$ i,
       sys.object_usage ou
 where i.obj# = ou.obj#
   and io.obj# = ou.obj#
   and t.obj# = i.bo#
   and u.user# = io.owner#
 order by 6 desc;

How to stop an impdp/expdp job that is running in the background
If you press Ctrl+C while impdp or expdp is running, only the client session is interrupted; the job itself keeps running in the background, and you can see the dump file continuing to grow.
The correct way to stop an expdp job:

  1. List the running jobs; you will see that your job is still executing:
    select * from dba_datapump_jobs;
  2. Attach to the expdp job using the job_name found above:
    expdp system/123456 attach=SYS_EXPORT_TABLE_01
  3. At the prompt, run:
    stop_job=immediate   (stops the job but keeps it restartable)

    kill_job             (aborts the job and removes it)

Note:
status: shows the job status and lets you monitor its progress

4. Query the jobs again; the job should now show as not running (a PL/SQL alternative is sketched below):
select * from dba_datapump_jobs;
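
For reference, the same job can also be stopped without the interactive client, through the DBMS_DATAPUMP API (a sketch, not part of the original procedure; the job name and owner are the values used in the example above):

declare
  h number;
begin
  -- attach to the running Data Pump job by name and owner
  h := dbms_datapump.attach(job_name => 'SYS_EXPORT_TABLE_01', job_owner => 'SYSTEM');
  -- request an immediate stop, equivalent to STOP_JOB=IMMEDIATE at the Export> prompt
  dbms_datapump.stop_job(handle => h, immediate => 1);
end;
/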


 

1. Monitoring script (note: it does not use sendmail to send the alert mail; if you want to use sendmail, adjust the mail command accordingly)

$cat check_oracle_dg_delay.sh 

#!/bin/sh
source ~/.bash_profile
# define variables
v_datetime=`date +"%Y-%m-%d %H:%M:%S"`
v_username='oracle'
v_hostname=`hostname`
# get the IP address
v_ip=`/sbin/ifconfig bond0|grep 'inet addr'|awk '{print $2}'|cut -d: -f2`
# sender address
v_sendmail='xxxxx@139.com'
# recipient addresses; separate multiple recipients with spaces
v_receivemail='XXXXX@139.com XXXXX@163.com'
# script path
v_path="/home/${v_username}/scripts"
#####define tns#####
v_dbname='XXXXX'
v_standby_tns='xxxxx_dg'
#####alert threshold (seconds)#####
v_delay_time_warning=1800

#####Oracle DG delay collection: begin#####
sqlplus -S "/ as sysdba" <<EOF
--#####primary database####
col delay_time for 9999999999;
col tmp_delay_time new_val tmp_delay_time
set timing off time off verify off heading off trimout on trimspool on linesize 180;

spool ${v_path}/delay_time.log
select (to_date(to_char(scn_to_timestamp(current_scn),
                        'yyyy-mm-dd hh24:mi:ss'),
                'yyyy-mm-dd hh24:mi:ss') -
       to_date(to_char(scn_to_timestamp
                        ((select current_scn
                           from v\$database@phystandby.localdomain)),
                        'yyyy-mm-dd hh24:mi:ss'),
                'yyyy-mm-dd hh24:mi:ss')) * 24 * 3600 tmp_delay_time  
from v\$database;
spool off;

spool ${v_path}/delay_scn_and_time.log
col standby_current_scn for 999999999999999;
col primary_current_scn for 999999999999999;
col standby_current_time for a30;
col primary_current_time for a30;
col delay_time for 9999999999;
set heading on 
select current_scn primary_current_scn,
       (select current_scn from v\$database@phystandby.localdomain) standby_current_scn,
       to_char(scn_to_timestamp(current_scn), 'yyyy-mm-dd hh24:mi:ss') primary_current_time,
       to_char(scn_to_timestamp
               ((select current_scn from v\$database@phystandby.localdomain)),
               'yyyy-mm-dd hh24:mi:ss') standby_current_time,
       (to_date(to_char(scn_to_timestamp(current_scn),
                        'yyyy-mm-dd hh24:mi:ss'),
                'yyyy-mm-dd hh24:mi:ss') -
       to_date(to_char(scn_to_timestamp
                        ((select current_scn
                           from v\$database@phystandby.localdomain)),
                        'yyyy-mm-dd hh24:mi:ss'),
                'yyyy-mm-dd hh24:mi:ss')) * 24 * 3600 "delay_time(s)"
  from v\$database;
spool off;

exit
EOF
#####Oracle DG delay collection: end#####

#####extract the delay value#####
v_delay_time=`cat ${v_path}/delay_time.log|sed '/^$/d'|sed -e 's/[[:space:]][[:space:]]*/ /g'`
#####alert logic: begin#####
if [ ${v_delay_time} -ge ${v_delay_time_warning} ] ; then

echo "############################" > ${v_path}/oracle_dg_delay.log
echo "USER: ${v_username}" >> ${v_path}/oracle_dg_delay.log
echo "HOSTNAME: ${v_hostname}" >> ${v_path}/oracle_dg_delay.log
echo "IP: ${v_ip}" >> ${v_path}/oracle_dg_delay.log
echo "############################" >> ${v_path}/oracle_dg_delay.log
cat ${v_path}/delay_scn_and_time.log >> ${v_path}/oracle_dg_delay.log
${v_path}/bsmtp -f ${v_sendmail} -h smtp.api.localdomain -s "${v_dbname} ORACLE DG DELAY ${v_delay_time}s WARNING!!! - ${v_datetime}" ${v_receivemail} < ${v_path}/oracle_dg_delay.log
fi
#####alert logic: end#####
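
Before scheduling the script, the cross-database-link query at its core can be sanity-checked interactively from the primary (a sketch using the same database link name that appears in the script):

select current_scn from v$database;
select current_scn from v$database@phystandby.localdomain;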



2. Configure crontab to run the check every ten minutes

$crontab -l

#oracle DG delay check
*/10 * * * * /home/oracle/scripts/check_oracle_dg_delay.sh >/dev/null 2>&1

In a RAC environment, a sequence's cache setting can have a decisive impact on performance. The default cache value is 20, which is far too small for RAC.
If there is contention for sequence values, you may see a prominent enqueue wait in the database:
enq: SQ - contention
In RAC, the cache of heavily used sequences can be raised to 10000, or even as high as 50000; both values have been used in real customer environments.
This is basic knowledge for configuring and using RAC and should not be forgotten.
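For example, raising the cache of a busy sequence and then verifying the change could look like this (a sketch; SEQ_ORDER_ID is a hypothetical sequence name):

alter sequence seq_order_id cache 10000;

select sequence_name, cache_size, order_flag
  from dba_sequences
 where sequence_name = 'SEQ_ORDER_ID';
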
The following test shows the impact of the sequence cache on performance:
http://space.itpub.net/14941137/viewspace-629941
Summary:
Two RAC sessions on different nodes concurrently looping to fetch 40,000 values:
    nocache:        2100s
    cache=1000:     55s
The difference is indeed huge.
Single-instance database, single session looping continuously to fetch 10,000-40,000 values (tested on a home laptop):
    nocache:        37.7s     10000
    cache 20:       4.31s     10000
    cache 100:      2.92s     10000
    cache 1000:     5.56s     40000
    nocache:        97.7s     40000
With a cache of 20 or more, performance is basically acceptable; a value of 100 or higher is preferable.
With nocache, performance is really poor; the difference can be as large as 20x.
The ordering option: Oracle's default is NOORDER. Setting ORDER has no impact in a single-instance environment, but in RAC the instances then effectively cache the same sequence values, so when several instances fetch from the sequence concurrently there is brief resource contention to keep them synchronized. Performance is therefore worse than with NOORDER, so do not use ORDER in RAC unless it is really necessary, and in particular avoid the NOCACHE + ORDER combination.
Some versions also contain bugs that can lead to excessive enq: SQ contention.

For example, Oracle Database 11g has an issue where the cache of the IDGEN$ sequence is set too small, which can cause severe contention; increasing that sequence's cache value is recommended.
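
A quick way to check how that internal sequence is currently configured (a sketch; a LIKE filter is used because the exact sequence name can vary between versions):

select sequence_owner, sequence_name, cache_size
  from dba_sequences
 where sequence_owner = 'SYS'
   and sequence_name like 'IDGEN%';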

Sequence-related locks in a RAC environment

To keep sequences consistent in a RAC environment, Oracle uses three kinds of locks: the row cache lock, the SQ lock and the SV lock.
The row cache lock is taken while calling sequence.nextval when the sequence is defined with NOCACHE, to guarantee that values are handed out in order;
the SQ lock is taken while calling sequence.nextval when the sequence is defined with CACHE + NOORDER;
the SV lock (DFS lock handle) is held while calling sequence.nextval when the sequence is defined with CACHE and ORDER (CACHE + ORDER). The purpose of the ORDER option is to guarantee that sequence values are generated in order across the RAC nodes.
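The three combinations can be written out as DDL like this (a sketch; the sequence names are hypothetical, and the comment on each line names the lock described above):

create sequence seq_demo_nocache nocache;              -- row cache lock on nextval
create sequence seq_demo_noorder cache 1000 noorder;   -- SQ lock on nextval
create sequence seq_demo_order   cache 1000 order;     -- SV lock (DFS lock handle) on nextval
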
When a sequence is created with a small cache value, enq: SQ - contention waits tend to increase.
The default cache value is 20, so sequences that are accessed concurrently should be created with a larger cache; otherwise enq: SQ - contention wait events will occur.
When a sequence is created on RAC with a cache size and the NOORDER attribute, each node caches a different range of sequence values in memory.

create sequence TX_SEND_SEQ_ACC
minvalue 1
maxvalue 999999999999999999999999999
start with 673560
increment by 1
cache 20;
Fetching from the sequence on RAC node 1
SQL> select tx_send_seq_acc.nextval from dual;
   NEXTVAL
----------
    673560
SQL> select tx_send_seq_acc.nextval from dual;
   NEXTVAL
----------
    673561
Fetching from the sequence on RAC node 2
SQL> select tx_send_seq_acc.nextval from dual;
   NEXTVAL
----------
    673580
SQL> select tx_send_seq_acc.nextval from dual;
   NEXTVAL
----------
    673581

If both nodes must consume the sequence in strictly increasing order, the ORDER attribute described below must be specified.
For a sequence defined with CACHE + ORDER, Oracle uses the SV lock for synchronization. When SV lock contention occurs, the remedy is the same as for the SQ lock: adjust the cache value appropriately.
In a multi-node RAC environment, a sequence's cache attribute has a major impact on performance. Whenever possible, use CACHE + NOORDER and give the sequence a large enough cache value. If ordering must be guaranteed, CACHE + ORDER is required, but then the instances have to exchange data constantly to preserve the order, so performance is somewhat worse.
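
Applied to the example sequence above, that recommendation would look roughly like this (a sketch; the cache value of 1000 is illustrative):

alter sequence TX_SEND_SEQ_ACC cache 1000 noorder;
-- or, when strict cross-node ordering really is required:
-- alter sequence TX_SEND_SEQ_ACC cache 1000 order;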

The sequence inconsistency problem in an Oracle RAC environment
Sequences in Oracle 10g RAC
Just recently I got a call from a developer. He had a table with a primary key populated by a sequence, a timestamp column with the current date and some other columns. He had a specific set of data that, when ordered by the primary key, had out-of-order timestamps. He was puzzled how this could be. This is a RAC database and the sequence was created with the default values. Not only was the sequence's cache the default of 20, but it was also “noordered”. Being “noordered”, Oracle will not guarantee the order in which numbers are generated.
Example of “noorder” sequence in 10g RAC:
Session 1 on node-A: nextval -> 101
Session 2 on node-A: nextval -> 102
Session 1 on node-B: nextval -> 121
Session 1 on node-B: nextval -> 122
Session 1 on node-A: nextval -> 103
Session 1 on node-A: nextval -> 104
The sequence cache is in the shared pool, therefore sessions on the same node can share the cached entry, but sessions on different nodes cannot. I wonder why Oracle doesn't make “ordered” the default for sequences. So I explained to the developer how sequences work in RAC and how each node has its own “cache”.
We changed the sequence to “ordered” and increased the cache to 1000. Now selecting on either node gets the next number as he expected. I warned him that there would be some performance implications due to cluster synchronization. Being a responsive developer, he asked me what the impact would be, so I tested it out.
How does RAC synchronize sequences?
In Oracle 10g RAC, if you specify the “ordered” clause for a sequence, then a global lock is allocated by the node when you access the sequence. This lock acquisition happens only at the first sequence access for the node (A), and subsequent uses of the sequence do not wait on this lock. If another node (B) selects from that sequence, it requests the same global lock and once acquired it returns the sequence's next value. The wait event associated with this activity is recorded as "events in waitclass Other" when viewed in gv$system_event. So much for event groups; it couldn't be more obscure. That view shows overall statistics for the session.
However, if you look in gv$session_wait_history it shows as “DFS lock handle” with the “p1” parameter being the object_id of the sequence. This second view has a sample of the last 10 wait events for a session.
In a SQL_TRACE with wait events (10046 trace) it will be a "DFS lock handle", but in AWR or statspack reports it will be “events in waitclass Other”. So much for consistency.
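One way to spot these waits from the dictionary (a sketch, relying on the fact mentioned above that p1 carries the object_id of the sequence):

select inst_id, sid, event, p1
  from gv$session_wait_history
 where event = 'DFS lock handle';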
How does that change our example?
Session 1 on node-A: nextval -> 101 (DFS Lock handle) (CR read)
Session 2 on node-A: nextval -> 102
Session 1 on node-B: nextval -> 103 (DFS Lock handle)
Session 1 on node-B: nextval -> 104
Session 1 on node-A: nextval -> 105 (DFS Lock handle)
Session 1 on node-A: nextval -> 106
(more selects)
Session 1 on node-A: nextval -> 998
Session 1 on node-B: nextval -> 999 (DFS Lock handle)
Session 1 on node-B: nextval -> 1000 (CR read)
The cache size also has some RAC synchronization implications. When the cached entries for the sequence are exhausted, the sequence object needs to be updated. This usually causes a remote CR (consistent read) over the interconnect for the block that holds that sequence object. So a bit more activity here.
Test case:
create sequence test_rac;
declare
  dummy number;
begin
  for i in 1..50000 loop
    select test_rac.nextval into dummy from dual;
  end loop;
end;
/
Results:
50 000 loops with cache = 20 (default)
1 node = 5 seconds
2 nodes at same time = 14 seconds
2 nodes at same time ordered = 30 seconds
50 000 loops with cache = 1000
1 node = 1.5 seconds
2 nodes at same time = 1.8 seconds
2 nodes at same time ordered = 20 seconds
With a smaller cache, the “noordered” sequence still has a significant impact, as every 10 fetches (a cache of 20 divided between 2 fetching nodes) it has to synchronize between the 2 nodes.
The conclusion
By default, sequences in 10g RAC are created without ordering. Beware of applications that rely on sequences being ordered when running them in a RAC environment. Consider changing all user sequences to “ordered” as a precaution and increasing the cache size. The default cache value is still very low; a highly active sequence will cause contention even when not ordered and even outside RAC, and in RAC it causes an additional block exchange every 20 values.
For high volume insert operations where ordering is not performed on the value returned from the sequence, consider leaving the sequence “noordered” but increasing the cache size significantly.
Either way, the sequence parameters should be reviewed, as chances are the defaults are not what you need. I remember reading somewhere that in Oracle 9i the “ordered” clause in RAC was equivalent to “nocache”. I can't imagine how bad that would be for concurrent selects from the same sequence. It would be interesting if someone running 9i RAC performed the test case; I would appreciate it if you posted the results in the comments.

 

References:
http://www.aixchina.net/home/space.php?uid=20260&do=blog&id=26752
http://blog.csdn.net/zftang/article/details/6321513
http://www.eygle.com/archives/2012/05/oracle_rac_sequence_cache.html
http://wenku.baidu.com/link?url=GpJLv3Lrl6YxxnYMkZZlMcKa9z9LCaAVyXsp-tjizHlvUluY3jQbPaZBAMiNvw1md9_yZEIWCl9CD2JeakbM--u8ngBz7Bs27lfLSNgEPg_
http://bbs.csdn.net/topics/390310984

This article is original; please credit the source and author when reposting.

Corrections are welcome if you spot any mistakes.

Email: czmcj@163.com