Tuesday, March 03, 2009

Materialized View Log Space Requirement

To understand how much space materialized view logs need, I ran a test in a 9i database on my PC.

The test supports this point: the size of the materialized view log is proportional to the number of rows changed by DML statements (insert, update, and delete). In my test, each batch of 100,000 changed rows added roughly 6-7 MB to the log, about 60-70 bytes per row, regardless of the DML type or the number of columns touched.

Here is the summary of the size of the mvlog segment in my test:



Size of the MLOG$_BIG_TABLE segment
-----------------------------------

Condition                                 Size (MB)
----------------------------------------- ---------
At beginning                                 0.0625
After updating 1 column in 100,000 rows      7
After inserting 100,000 rows                13
After deleting 100,000 rows                 19
After updating 2 columns in 100,000 rows    25
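Since the log only shrinks when it is purged, a query like the following can be scheduled to keep an eye on all materialized view log segments at once (a sketch of the same dba_segments query used in the test below, generalized to the MLOG$_ prefix Oracle uses for log tables):

select owner, segment_name, bytes/1024/1024 as size_mb
from dba_segments
where segment_name like 'MLOG$\_%' escape '\'
order by bytes desc;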



The following are more detailed steps and observations:

1. A table called big_table is created in the SCOTT schema from dba_objects:

select count(*) from scott.big_table;

COUNT(*)
----------
1000000

2. Create the materialized view log and measure the size of the segments. With no column list specified, Oracle creates a primary-key based log:

create materialized view log on scott.big_table tablespace mviewlog;


select owner, segment_name, segment_type, bytes/1024/1024 size_m from dba_segments
where segment_name like '%BIG_TABLE%'
and owner='SCOTT';


OWNER SEGMENT_NAME SEGMENT_TYPE SIZE_M
------------ ------------------------- ------------------ ----------
SCOTT BIG_TABLE TABLE 120
SCOTT MLOG$_BIG_TABLE TABLE .0625

SCOTT BIG_TABLE_PK INDEX 17
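Looking at the log table itself shows why its size tracks the row count rather than the data volume: a primary-key log records one row per change, holding just the key value plus a few bookkeeping columns, not the changed column data. (The $$ columns below are the standard ones Oracle creates for a PK log; verify on your version with DESCRIBE.)

desc scott.mlog$_big_table
-- Expected columns for a PK log on big_table:
--   ID (the primary key), SNAPTIME$$, DMLTYPE$$, OLD_NEW$$, CHANGE_VECTOR$$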


 
3. Update one column in 100,000 rows


begin
  for rec in (select id from scott.big_table where rownum <= 100000)
  loop
    update scott.big_table set status = lower(status)
    where id = rec.id;
    commit;
  end loop;
end;
/

OWNER SEGMENT_NAME SEGMENT_TYPE SIZE_M
----------- ----------------------- ------------- -------
SCOTT BIG_TABLE TABLE 120
SCOTT MLOG$_BIG_TABLE TABLE 7
SCOTT BIG_TABLE_PK INDEX 17

 

4. Insert 100,000 rows

declare
  maxid number;
begin
  select max(id) into maxid from scott.big_table;
  for rec in (select * from scott.big_table where rownum <= 100000)
  loop
    maxid := maxid + 1;
    insert into scott.big_table values (
      maxid
      ,rec.OWNER
      ,rec.OBJECT_NAME
      ,rec.SUBOBJECT_NAME
      ,rec.OBJECT_ID
      ,rec.DATA_OBJECT_ID
      ,rec.OBJECT_TYPE
      ,sysdate
      ,sysdate
      ,rec.TIMESTAMP
      ,rec.STATUS
      ,rec.TEMPORARY
      ,rec.GENERATED
      ,rec.SECONDARY
    );
    commit;
  end loop;
end;
/

OWNER SEGMENT_NAME SEGMENT_TYPE SIZE_M
-------------- ------------------ -------------- ----------
SCOTT BIG_TABLE TABLE 128
SCOTT MLOG$_BIG_TABLE TABLE 13
SCOTT BIG_TABLE_PK INDEX 20

 

5. Delete 100,000 rows

begin
  for rec in (select id from scott.big_table where rownum <= 100000)
  loop
    delete from scott.big_table where id = rec.id;
    commit;
  end loop;
end;
/

OWNER SEGMENT_NAME SEGMENT_TYPE SIZE_M
------------- ----------------- -------------- ----------
SCOTT BIG_TABLE TABLE 128
SCOTT MLOG$_BIG_TABLE TABLE 19
SCOTT BIG_TABLE_PK INDEX 20

 

6. Update two columns in 100,000 rows

begin
  for rec in (select id from scott.big_table where rownum <= 100000)
  loop
    update scott.big_table set status = upper(status),
           created = sysdate - 1
    where id = rec.id;
    commit;
  end loop;
end;
/

OWNER SEGMENT_NAME SEGMENT_TYPE SIZE_M
------------ -------------------- -------------- ----------
SCOTT BIG_TABLE TABLE 128
SCOTT MLOG$_BIG_TABLE TABLE 25
SCOTT BIG_TABLE_PK INDEX 20
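One more point worth keeping in mind when sizing the tablespace: Oracle purges log rows only after every materialized view registered against the log has fast-refreshed, so if one dependent view stops refreshing, the log grows without bound. And since the purge is a delete, the MLOG$ segment keeps its allocated extents. A sketch of reclaiming the space, assuming a fast-refreshable view (the name BIG_TABLE_MV is hypothetical) exists on big_table:

-- Fast refresh the dependent view; once all dependents have refreshed,
-- the logged rows are deleted, but the segment stays at its high-water size.
exec dbms_mview.refresh('SCOTT.BIG_TABLE_MV', method => 'F');

-- To actually release the space, drop and recreate the log:
drop materialized view log on scott.big_table;
create materialized view log on scott.big_table tablespace mviewlog;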
