Terabyte-sized archive storage per day
I'm working on an Oracle database. Archived log storage is reaching terabytes per day, and I need to solve this problem. How can I detect which statements (DML + DDL) cause the most changes in the database?
Oracle provides LogMiner, a PL/SQL package you can enable that lets you query the contents of the archived logs with SQL, so you can find out what is generating so much archive log data.
See the following link (but you need DBA privileges to enable LogMiner and examine the contents of the archived logs):
https://docs.oracle.com/cd/B19306_01/server.102/b14215/logminer.htm
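As a rough sketch, a minimal LogMiner session could look like the following. The archived log file path is a placeholder; pick a real one from V$ARCHIVED_LOG.

-- Register one archived log file (replace the path with one of yours).
exec DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/my_archived_log.dbf', OPTIONS => DBMS_LOGMNR.NEW);

-- Start LogMiner, using the online catalog as the dictionary.
exec DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Count redo records per segment and operation to see what fills the log.
select seg_owner, seg_name, operation, count(*) redo_records
from v$logmnr_contents
group by seg_owner, seg_name, operation
order by redo_records desc;

-- End the session to release its resources.
exec DBMS_LOGMNR.END_LOGMNR;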
Scott
While I don't think there is a way to directly measure the archive log space generated by each statement, there are a few simple queries that measure things closely correlated with archive log space.
Use the view GV$SQL
Find the non-SELECT SQL statements that use the most time. The longest-running DML and DDL statements are usually the ones writing the most data. In my experience, just looking at the results of this query is often enough for one of the SQL statements to stand out as the culprit.
select round(elapsed_time/1000000) seconds, gv$sql.*
from gv$sql
where command_type <> 3 --ignore SELECT statements.
order by seconds desc;
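If you are on 11.2 or later, GV$SQL also exposes a PHYSICAL_WRITE_BYTES column, so a variant of the same query can rank statements by write volume directly (a sketch; check that the column exists in your version):

select round(physical_write_bytes/1024/1024) write_mb, gv$sql.*
from gv$sql
where command_type <> 3 --ignore SELECT statements.
order by physical_write_bytes desc;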
Use the view DBA_HIST_SQLSTAT
Find the SQL statements that generated the most physical writes in the last day. These results can also be misleading, because direct-path writes do not generate redo and archive log data. But the culprit is still likely to be near the top of this query's results. (This view requires a license for the Diagnostics Pack; it is part of AWR.)
select sql_id, round(physical_write_bytes_total/1024/1024) mb, dba_hist_sqlstat.*
from dba_hist_sqlstat
where (snap_id, dbid) in
(
--Snapshots in the last 24 hours.
select snap_id, dbid
from dba_hist_snapshot
where begin_interval_time >= sysdate - interval '24' hour
)
order by dba_hist_sqlstat.physical_write_bytes_total desc;
This will give you a graphical view of when log switches occur throughout the day: https://www.morganslibrary.org/reference/log_files.html
set echo on
/*
Look for anomalies in log switch frequency and
switch frequencies greater than 12 per hour.
*/
set echo off
select
to_char(first_time,'MMDD') MMDD,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'999') "23"
from
v$log_history
group by to_char(first_time,'MMDD')
order by 1;
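To quantify the problem itself, you can also sum the archived log volume per day directly from V$ARCHIVED_LOG (a sketch; BLOCKS and BLOCK_SIZE are standard columns of that view, and the product is the size of each archived log):

select trunc(first_time) day,
       round(sum(blocks * block_size)/1024/1024/1024) gb
from v$archived_log
group by trunc(first_time)
order by 1;

Comparing this daily total against the per-statement numbers above helps confirm you have found the real source of the volume.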
Good luck!