CLOB insertion is very slow

I am experiencing significant performance degradation when inserting rows containing a small string (e.g. 'TinyString') into a CLOB column, compared to inserting the same data into a VARCHAR2. My understanding is that when data shorter than 4000 bytes is stored in a CLOB with ENABLE STORAGE IN ROW, it is effectively stored the same way as a VARCHAR2 (unless it overflows the 4000-byte limit), so there should be no significant performance penalty. However, my test routine * shows that inserting the same data into a CLOB is about 15 times slower than inserting it into a VARCHAR2.

Take a look at the code below:

I have multiple tables, each with a compound trigger like the one below:

CREATE OR REPLACE TRIGGER mdhl_basic_trigger_compound
  FOR INSERT OR UPDATE OR DELETE ON target_table
  COMPOUND TRIGGER

  TYPE EVENTS_HIST IS TABLE OF log_table%ROWTYPE INDEX BY PLS_INTEGER;
  coll_events_hist EVENTS_HIST;
  ctr              PLS_INTEGER := 0;
  my_bgroup        VARCHAR2(3);

  BEFORE EACH ROW IS
  BEGIN
    IF INSERTING OR UPDATING THEN
      my_bgroup := :NEW.BGROUP;
    ELSE
      my_bgroup := :OLD.BGROUP;
    END IF;

    ctr := ctr + 1;
    coll_events_hist(ctr).BGROUP     := my_bgroup;
    coll_events_hist(ctr).TABLE_NAME := 'BASIC_MDHL';
    coll_events_hist(ctr).EVENT_TS   := current_timestamp;
    coll_events_hist(ctr).EVENT_RAW  := 'TinyString';
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FORALL counter IN 1 .. coll_events_hist.count()
      INSERT INTO log_table VALUES coll_events_hist(counter);
  END AFTER STATEMENT;
END mdhl_basic_trigger_compound;


For any DML operation on target_table, the trigger above collects rows in coll_events_hist and writes them to log_table, which is defined as follows:

CREATE TABLE "USERNAME"."LOG_TABLE" 
   (  "BGROUP" VARCHAR2(3) NOT NULL ENABLE, 
        "TABLE_NAME" VARCHAR2(255) NOT NULL ENABLE, 
      "EVENT_TS" TIMESTAMP (7) DEFAULT current_timestamp, 
      "EVENT_RAW" CLOB
   ) 
  SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS" 
 LOB ("EVENT_RAW") STORE AS BASICFILE "EV_RAW_SEG"(
  TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 16384 PCTVERSION 5
  CACHE 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
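 
A standard USER_LOBS query can be used to verify that in-row storage is actually enabled for EVENT_RAW:

SELECT table_name, column_name, in_row, chunk, cache
  FROM user_lobs
 WHERE table_name = 'LOG_TABLE';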


My setup: Windows 7 SP1, Oracle 11g

* My test routine runs 10 iterations, each updating the same 21k rows in target_table, and measures the elapsed time.
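 
A minimal sketch of such a routine, assuming target_table holds the ~21k rows being updated (the exact update statement here is illustrative):

DECLARE
  l_start TIMESTAMP := systimestamp;
BEGIN
  FOR i IN 1 .. 10 LOOP
    -- touch every row so the compound trigger fires once per row (~21k times per pass)
    UPDATE target_table SET bgroup = bgroup;
  END LOOP;
  dbms_output.put_line(systimestamp - l_start);
END;
/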


2 answers


In your case, is "TinyString" always < 32767 bytes?

Your time is being lost in the FORALL part, judging by the timing traps you have put in.

You will find better performance even with plain row-by-row inserts.

For example, on my test system, using your LOB-based compound trigger:

SQL> insert into target_Table select 'ABC' from dual connect by level <= 10000;

10000 rows created.

Elapsed: 00:00:10.49


versus a row-level trigger like:

SQL> CREATE OR REPLACE TRIGGER mdhl_basic_trigger
  2    before INSERT OR UPDATE OR DELETE ON target_table for each row
  3  declare
  4
  5  my_bgroup VARCHAR2(3);
  6
  7    v_timer2 number := 0;
  8    v_timer number;
  9  BEGIN
 10
 11        IF INSERTING OR UPDATING THEN
 12          my_bgroup  := :NEW.BGROUP;
 13        ELSE
 14          my_bgroup  := :OLD.BGROUP;
 15        END IF;
 16
 17        INSERT INTO log_table VALUES(my_bgroup, 'BASIC_MDHL', current_timestamp, 'TinyString');
 18
 19  END mdhl_basic_trigger;
 20  /

SQL> insert into target_Table select 'ABC' from dual connect by level <= 10000;

10000 rows created.

Elapsed: 00:00:01.18


If you KNOW your strings are always < 32k, you can keep the FORALL and improve on this speed further by creating a trigger like:

SQL> CREATE OR REPLACE TRIGGER mdhl_basic_trigger_compound
  2    FOR INSERT OR UPDATE OR DELETE ON target_table
  3
  4     COMPOUND TRIGGER
  5
  6     type events_rec is record (BGROUP VARCHAR2(3),
  7          TABLE_NAME VARCHAR2(255) ,
  8        EVENT_TS TIMESTAMP (7),
  9        EVENT_RAW varchar2(32767));
 10     TYPE EVENTS_HIST IS TABLE OF events_rec INDEX BY PLS_INTEGER;
 11     coll_events_hist EVENTS_HIST;
 12     ctr PLS_INTEGER := 0;
 13     my_bgroup VARCHAR2(3);
 14
 15  v_timer2 number := 0;
 16  v_timer number;
 17    BEFORE EACH ROW IS
 18      BEGIN
 19
 20        IF INSERTING OR UPDATING THEN
 21          my_bgroup  := :NEW.BGROUP;
 22        ELSE
 23          my_bgroup  := :OLD.BGROUP;
 24        END IF;
 25
 26        ctr := ctr + 1;
 27        coll_events_hist(ctr).BGROUP := my_bgroup;
 28        coll_events_hist(ctr).TABLE_NAME := 'BASIC_MDHL';
 29        coll_events_hist(ctr).EVENT_TS := current_timestamp;
 30        coll_events_hist(ctr).EVENT_RAW := 'TinyString';
 31
 32    END BEFORE EACH ROW;
 33
 34    AFTER STATEMENT IS
 35      BEGIN
 36  v_timer := dbms_utility.get_time;
 37        FORALL counter IN 1 .. coll_events_hist.count()
 38             INSERT INTO log_table VALUES coll_events_hist(counter);
 39  v_timer2 := v_timer2 + (dbms_utility.get_time - v_timer);
 40             dbms_output.put_line(v_timer2/100);
 41    END AFTER STATEMENT;
 42  END mdhl_basic_trigger_compound;
 43  /
SQL> insert into target_Table select 'ABC' from dual connect by level <= 10000;

10000 rows created.

Elapsed: 00:00:00.39


i.e. keep the data as VARCHAR2 in the PL/SQL collection and defer the conversion to CLOB until the insert itself.



Even when a CLOB is stored in the row, there is some overhead compared to a plain VARCHAR2, as described in Appendix C of the LOB Performance Guidelines.

When the length of a LOB is less than 3964 bytes, it is stored in the row with a header of about 36 bytes. A VARCHAR2 of length X is stored as X data bytes plus 1 or 2 bytes of length overhead.

I suspect this overhead carries over into memory as well, which means that PL/SQL CLOB variables will be less efficient than a VARCHAR2 of comparable size.

The 34-35 extra bytes per value can be seen with the following script:

SQL> create table test_var(a varchar2(4000));

Table created

SQL> create table test_clob(a clob);

Table created

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    l_time TIMESTAMP := systimestamp;
  3  BEGIN
  4    FOR i IN 1..100000 LOOP
  5      INSERT INTO test_var VALUES (rpad('x', 1000, 'x'));
  6    END LOOP;
  7    dbms_output.put_line(systimestamp - l_time);
  8  END;
  9  /
+000000000 00:00:16.180299000

SQL> DECLARE
  2    l_time TIMESTAMP := systimestamp;
  3  BEGIN
  4    FOR i IN 1..100000 LOOP
  5      INSERT INTO test_clob VALUES (rpad('x', 1000, 'x'));
  6    END LOOP;
  7    dbms_output.put_line(systimestamp - l_time);
  8  END;
  9  /
+000000000 00:00:27.180716000


It takes longer to insert the CLOB due to the extra space consumed:

SQL> EXEC dbms_stats.gather_table_stats(USER, 'TEST_VAR');

PL/SQL procedure successfully completed.

SQL> EXEC dbms_stats.gather_table_stats(USER, 'TEST_CLOB');

PL/SQL procedure successfully completed.

SQL> select blocks, table_name from user_tables where table_name like 'TEST_%';

    BLOCKS TABLE_NAME
---------- ------------------------------
     33335 TEST_CLOB
     28572 TEST_VAR


The problem is compounded when inserting smaller rows:

-- after TRUNCATE tables
SQL> DECLARE
  2    l_time TIMESTAMP := systimestamp;
  3  BEGIN
  4    FOR i IN 1..1000000 LOOP
  5      INSERT INTO test_var VALUES (rpad('x', 10, 'x'));
  6    END LOOP;
  7    dbms_output.put_line(systimestamp - l_time);
  8  END;
  9  /

+000000000 00:00:51.916675000

SQL> DECLARE
  2    l_time TIMESTAMP := systimestamp;
  3  BEGIN
  4    FOR i IN 1..1000000 LOOP
  5      INSERT INTO test_clob VALUES (rpad('x', 10, 'x'));
  6    END LOOP;
  7    dbms_output.put_line(systimestamp - l_time);
  8  END;
  9  /

+000000000 00:01:57.377676000

-- Gather statistics

SQL> select blocks, table_name from user_tables where table_name like 'TEST_%';

    BLOCKS TABLE_NAME
---------- ------------------------------
      7198 TEST_CLOB
      2206 TEST_VAR
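 
Rough arithmetic on those numbers (assuming the default 8 KB block size): (7198 - 2206) blocks * 8192 bytes / 1,000,000 rows ≈ 41 extra bytes per row for the CLOB table, which is consistent with the ~36-byte in-row LOB header described above plus some per-block differences.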

