Windows: using pscp to transfer files to a Linux server without a password
Tool preparation

pscp

http://files.iuie.pub:8080/files/1512702236880.exe

puttygen

http://files.iuie.pub:8080/files/1512703827187.exe

Original download page for pscp and puttygen

https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
Transferring files to a Linux server with a password
Copy pscp.exe into C:\Windows\System32 (or any other directory on your PATH)
Open a Command Prompt and run:
pscp e:\ss.txt root@www.iuie.club:/root
<enter password>
Transferring files to a Linux server without a password

  1. Generate a key
    First run puttygen to open the key-generation window.

Click Generate, then click or drag the mouse randomly in the blank area below the progress bar; that random movement is used as entropy to generate the key.

When generation completes, copy the public key shown in PuTTYgen's window and save the private key to the C:\Users\iuie directory as club.ppk. Then log in to your CentOS server, go to the .ssh directory under the user's home directory (/root, or /home/username), create and edit the file authorized_keys, and paste in the public key you just copied. Note that the trailing comment rsa-key-20171208 is not needed; the key content ends with two equals signs.
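The server-side steps above can be sketched as a few shell commands. This demo runs against a scratch directory so it is safe to try anywhere; on the real server, use the user's actual home directory, and note that the key string below is only a placeholder for the one copied from PuTTYgen:

```shell
# Demo in a scratch directory; on the server, replace $DEMO_HOME with $HOME.
DEMO_HOME=$(mktemp -d)

mkdir -p "$DEMO_HOME/.ssh"
chmod 700 "$DEMO_HOME/.ssh"    # sshd ignores keys if .ssh is group/world accessible

# Append the one-line OpenSSH-format public key (placeholder string here);
# the trailing comment such as rsa-key-20171208 may be omitted.
echo 'ssh-rsa AAAAB3NzaC1yc2EplaceholderkeyQ==' >> "$DEMO_HOME/.ssh/authorized_keys"
chmod 600 "$DEMO_HOME/.ssh/authorized_keys"

ls -la "$DEMO_HOME/.ssh"
```

The 700/600 permissions matter: OpenSSH refuses to use authorized_keys when the directory or file is writable by other users.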

  2. Upload files to the server without a password
    pscp -i C:\Users\admin\club.ppk yourfile root@yourhost:/yourname
    Reference: http://blog.csdn.net/ppdouble/article/details/21623547

Reposted from: https://my.oschina.net/u/3307502/blog/1587025

Quick Oracle 19c Data Guard setup

1. Database version

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.11.0.0.0

2. Host information

192.168.3.230 db01 # primary

192.168.3.233 db02 # standby

3. Primary: enable archivelog mode

ALTER DATABASE FORCE LOGGING;

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

ALTER DATABASE ARCHIVELOG;

ALTER DATABASE OPEN;
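As a quick sanity check (not part of the original post), the new mode can be confirmed from v$database:

```sql
-- LOG_MODE should now be ARCHIVELOG and FORCE_LOGGING should be YES
SELECT log_mode, force_logging FROM v$database;
```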

4. tnsnames.ora: must be configured on both the primary and the standby

cdb01 =

(DESCRIPTION =

(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.3.230)(PORT = 1521))

(CONNECT_DATA =

  (SERVER = DEDICATED)

  (SERVICE_NAME = cdb01)

)

)

sbcdb01 =

(DESCRIPTION =

(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.3.233)(PORT = 1521))

(CONNECT_DATA =

  (SERVER = DEDICATED)

  (SERVICE_NAME = sbcdb01)

)

)

5. Using DBCA to create a Data Guard standby

On the standby host, only the Oracle software needs to be installed; the listener does not need to be configured manually, as dbca sets it up automatically.

Run on the standby:

dbca -silent

-createDuplicateDB

-gdbName cdb01 #DB_NAME

-sid sbcdb01 #Standby SID

-sysPassword oracle

-primaryDBConnectionString 192.168.3.230:1521/cdb01

-createAsStandby -dbUniqueName sbcdb01 #db_unique_name

Log output:

[oracle@db02 admin]$ dbca -silent -createDuplicateDB -gdbName cdb01 -sid sbcdb01 -sysPassword oracle -primaryDBConnectionString 192.168.3.230:1521/cdb01 -createAsStandby -dbUniqueName sbcdb01

Prepare for db operation

22% complete

Listener config step

44% complete

Auxiliary instance creation

67% complete

RMAN duplicate

89% complete

Post duplicate database operations

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/sbcdb01/sbcdb011.log" for further details.

6. Enable synchronization

Primary:

alter system set LOG_ARCHIVE_DEST_2='service=sbcdb01 VALID_FOR=(online_logfiles,primary_role) DB_UNIQUE_NAME=sbcdb01';

alter system set log_archive_config='dg_config=(cdb01,sbcdb01)';

alter system set standby_file_management=auto;

Standby:

alter system set log_archive_config='dg_config=(cdb01,sbcdb01)';

alter system set standby_file_management=auto;

alter system set fal_server='cdb01';

alter database recover managed standby database using current logfile disconnect;
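Once managed recovery has been started, a common way to confirm that redo is actually being applied on the standby is to query the standard dictionary views v$managed_standby and v$archived_log (these checks are not in the original post):

```sql
-- Is the managed recovery process (MRP) running, and on which log sequence?
SELECT process, status, sequence# FROM v$managed_standby WHERE process LIKE 'MRP%';

-- Highest applied archive log sequence per thread
SELECT thread#, MAX(sequence#) AS last_applied
  FROM v$archived_log
 WHERE applied = 'YES'
 GROUP BY thread#;
```

If the MRP row shows APPLYING_LOG and last_applied keeps advancing as the primary switches logs, transport and apply are working.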

7. Verify synchronization

Primary: create pdb01.

SQL> create pluggable database pdb01

admin user admin identified by admin

file_name_convert = ('/pdbseed', '/pdb01');

Pluggable database created.

SQL> show pdbs;

CON_ID CON_NAME                       OPEN MODE  RESTRICTED

     2 PDB$SEED                       READ ONLY  NO

     3 PDB                            MOUNTED

     4 PDB01                          MOUNTED


Standby: pdb01 has been synchronized. (The first show pdbs output below was captured before the new PDB's redo had been applied; the second, shortly after.)

[oracle@db02 admin]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 20 23:28:19 2021

Version 19.11.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.11.0.0.0

SQL> show pdbs;

CON_ID CON_NAME                       OPEN MODE  RESTRICTED

     2 PDB$SEED                       READ ONLY  NO

     3 PDB                            READ ONLY  NO

SQL> show pdbs;

CON_ID CON_NAME                       OPEN MODE  RESTRICTED

     2 PDB$SEED                       READ ONLY  NO

     3 PDB                            READ ONLY  NO

     4 PDB01                          MOUNTED




Excerpt from the standby alert log while the PDB was being replicated:

2021-05-20T23:28:46.531506+08:00

PR00 (PID:16461): Media Recovery Log /u01/app/19.3/dbs/arch1_53_1073060659.dbf

Recovery created pluggable database PDB01

Recovery copied files for tablespace SYSTEM

Recovery successfully copied file /data2/CDB01/pdb01/system01.dbf from /data2/CDB01/pdbseed/system01.dbf

PDB01(4):WARNING: File being created with same name as in

PDB01(4):primary. Existing file may be overwritten

PDB01(4):Recovery created file /data2/CDB01/pdb01/system01.dbf

PDB01(4):Successfully added datafile 35 to media recovery

PDB01(4):Datafile #35: '/data2/CDB01/pdb01/system01.dbf'

2021-05-20T23:28:48.972799+08:00

Recovery copied files for tablespace SYSAUX

Recovery successfully copied file /data2/CDB01/pdb01/sysaux01.dbf from /data2/CDB01/pdbseed/sysaux01.dbf

PDB01(4):WARNING: File being created with same name as in

PDB01(4):primary. Existing file may be overwritten

PDB01(4):Recovery created file /data2/CDB01/pdb01/sysaux01.dbf

PDB01(4):Successfully added datafile 36 to media recovery

PDB01(4):Datafile #36: '/data2/CDB01/pdb01/sysaux01.dbf'

Recovery copied files for tablespace UNDOTBS1

Recovery successfully copied file /data2/CDB01/pdb01/undotbs01.dbf from /data2/CDB01/pdbseed/undotbs01.dbf

PDB01(4):WARNING: File being created with same name as in

PDB01(4):primary. Existing file may be overwritten

PDB01(4):Recovery created file /data2/CDB01/pdb01/undotbs01.dbf

PDB01(4):Successfully added datafile 37 to media recovery

PDB01(4):Datafile #37: '/data2/CDB01/pdb01/undotbs01.dbf'

PR00 (PID:16461): Media Recovery Waiting for T-1.S-54 (in transit)

Question: once a table has been KEEPed in memory, will reading that large table still trigger direct path reads?

The production issue
1. The original server had 24 GB of RAM and Oracle's memory was set to 18 GB; the server has now been expanded to 64 GB.

2. Some hot tables have grown past a million rows and are read very frequently, so we considered pinning them in memory with the KEEP pool.

The solution
1. Memory adjustment
Notes:

1. The SGA is normally set to about 75% of physical memory; in production it was set to 48 GB.

2. sga_target must not exceed sga_max_size; the two can be set equal.

3. The SGA plus the PGA and the memory used by other processes must total less than the machine's physical memory.

4. Restart the database after the change for it to take effect.

-- show the current SGA settings
show parameter sga;
-- change the maximum SGA size
alter system set sga_max_size=48000m scope=spfile;
-- change the SGA target
alter system set sga_target=48000m scope=spfile;

2. Sizing the KEEP buffer cache
Notes:

1. Takes effect immediately. In production, 10 GB of KEEP cache was allocated; the tables that actually need to be KEEPed occupy about 5 GB, so 10 GB should be enough for years.

-- check the current KEEP cache size
show parameter db_keep_cache_size
-- set the KEEP cache size
alter system set db_keep_cache_size=10000M scope=both;

3. Adding tables to the KEEP pool
Notes:

1. Takes effect immediately.

2. For detailed per-segment figures, query dba_segments directly.

-- cache the table in the KEEP pool
alter table t_cdeli_wait_processing_info storage(buffer_pool keep);
-- move the table back to the default pool
alter table t_cdeli_wait_processing_info storage(buffer_pool default);
-- total size of the KEEP buffer cache
select component,current_size from v$sga_dynamic_components where component='KEEP buffer cache';
-- tables currently assigned to the KEEP pool
select segment_name from dba_segments where BUFFER_POOL = 'KEEP';
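Before sizing db_keep_cache_size, the footprint of the candidate tables can be measured from dba_segments. A small sketch (the table name is the one from this post; substitute your own candidates):

```sql
-- Total size of segments already assigned to the KEEP pool, in MB
SELECT SUM(bytes)/1024/1024 AS keep_mb
  FROM dba_segments
 WHERE buffer_pool = 'KEEP';

-- Size of a single candidate table before assigning it
SELECT segment_name, bytes/1024/1024 AS mb
  FROM dba_segments
 WHERE segment_name = 'T_CDELI_WAIT_PROCESSING_INFO';
```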
————————————————
Copyright notice: this is an original article by CSDN blogger "wucao110", licensed under CC 4.0 BY-SA; reposts must include the original link and this notice.
Original link: https://blog.csdn.net/wucao110/article/details/125271195

Plan hash value: 2490995645
 
-------------------------------------------------------------------------------------------
| Id  | Operation           | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                     |       |       |   465K(100)|          |
|   1 |  SORT AGGREGATE     |                     |     1 |    38 |            |          |
|   2 |   NESTED LOOPS OUTER|                     | 51621 |  1915K|   465K  (1)| 01:33:05 |
|*  3 |    TABLE ACCESS FULL| BYDD9WMS_PRINT_DATA | 51621 |  1058K|   465K  (1)| 01:33:02 |
|*  4 |    INDEX UNIQUE SCAN| UK_PO_DATA          |     1 |    17 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - filter(("AA"."CODE_DATE" IS NULL AND "AA"."DATECODE" IS NOT NULL AND 
              NVL("AA"."UPDATE_NAME",'X')<>'UPCD2303210001'))
   4 - access("PD"."PO"="AA"."PO" AND "PD"."LINE"="AA"."LINE")
 
Note
-----
   - cardinality feedback used for this statement

The execution plan contains the note "cardinality feedback used for this statement".
Cardinality feedback is a feature introduced in 11g: if the CBO finds that the estimated rows (E-Rows) and the actual rows (A-Rows) diverge too much, it considers the plan inaccurate and regenerates it.
Checking the table's statistics showed they were last gathered three years ago. Evidently they were imported along with the data, and since then the table has changed by less than 10% of its rows, so automatic statistics collection never refreshed them.
Statistics could be re-gathered in an idle business window, but the current wait event is full-table-scan I/O while plenty of CPU is idle, so re-gathering statistics alone will not bring a clear improvement; the SQL itself needs tuning to reduce the number of full scans. The business logic shows this step full-scans an 8-million-row table, so as data grows the statement will only get slower and will eventually become unworkable. Possible measures are to archive historical data, or to convert the table to a partitioned table so queries scan a smaller range.
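When an idle window does open up, the stale statistics mentioned above can be refreshed with dbms_stats. A minimal sketch, where the owner 'APP_OWNER' is a placeholder (the table name is taken from the plan):

```sql
-- Re-gather statistics for the full-scanned table; 'APP_OWNER' is hypothetical
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'APP_OWNER',
    tabname          => 'BYDD9WMS_PRINT_DATA',
    cascade          => TRUE,    -- also gather index statistics
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    estimate_percent => dbms_stats.auto_sample_size);
END;
/
```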