BULK COLLECT & FORALL vs. CURSOR & FOR LOOP

After reading more and more about BULK COLLECT and FORALL and their performance improvements, I decided to take a closer look myself to see how powerful they really are. So I built a little test case which inserts all entries from the all_objects view into another table. The inserts happen in three different ways:
The first way is a simple cursor over the view and an insert inside a loop, with a FETCH into local variables. This way also shows how long opening the cursor itself takes.
The second way is a simple FOR … IN LOOP which inserts the fields of the cursor record.
And, of course, the third way bulk-collects the rows into collections and inserts them with FORALL. So let's see.

So the target table looks like this (three columns are enough for this test):

SQL> create table temp (owner varchar2(30), name varchar2(30), type varchar2(19));

Table created.

And the three different procedures look like this:

CREATE OR REPLACE PROCEDURE CURSOR_FOR_OPEN_QUERY
 IS
 l_sOwner VARCHAR2(30);
 l_sName VARCHAR2(30);
 l_sType VARCHAR2(19);
 CURSOR cur IS SELECT owner, object_name name, object_type type FROM all_objects;
 BEGIN
 dbms_output.put_line('Before CURSOR OPEN: ' || systimestamp);
 OPEN cur;
 dbms_output.put_line('Before LOOP: ' || systimestamp);
 LOOP
 -- fetch one row at a time into the local variables
 FETCH cur INTO l_sOwner, l_sName, l_sType;
 IF cur%NOTFOUND THEN
 EXIT;
 END IF;
 INSERT INTO temp values (l_sOwner, l_sName, l_sType);
 END LOOP;
 CLOSE cur;
 dbms_output.put_line('After CURSOR CLOSE: ' || systimestamp);
 COMMIT;
 END;
 /

CREATE OR REPLACE PROCEDURE CURSOR_FOR_QUERY
 IS
 BEGIN
 dbms_output.put_line('Before CURSOR: ' || systimestamp);
 FOR cur IN (SELECT owner, object_name name, object_type type FROM all_objects) LOOP
 INSERT INTO temp values (cur.owner, cur.name, cur.type);
 END LOOP;
 dbms_output.put_line('After CURSOR: ' || systimestamp);
 COMMIT;
 END;
 /

CREATE OR REPLACE PROCEDURE BULK_COLLECT_QUERY
 IS
 TYPE sOwner IS TABLE OF VARCHAR2(30);
 TYPE sName IS TABLE OF VARCHAR2(30);
 TYPE sType IS TABLE OF VARCHAR2(19);
 l_sOwner sOwner;
 l_sName sName;
 l_sType sType;
 BEGIN
 dbms_output.put_line('Before Bulk Collect: ' || systimestamp);
 SELECT owner, object_name, object_type
 BULK COLLECT INTO l_sOwner, l_sName, l_sType
 FROM all_objects;
 dbms_output.put_line('After Bulk Collect: ' || systimestamp);
 -- insert all collected rows with a single bulk-bound statement
 FORALL indx IN l_sName.FIRST..l_sName.LAST
 INSERT INTO temp values (l_sOwner(indx), l_sName(indx), l_sType(indx));
 --
 dbms_output.put_line('After FORALL: ' || systimestamp);
 COMMIT;
 END;
 /

OK, then I bounced the database so that no buffers, caches, etc. were populated.

So, the first execution:

SQL> exec cursor_for_open_query
Before CURSOR OPEN: 27-SEP-07 10.56.30.699401000 AM +02:00
Before LOOP: 27-SEP-07 10.56.30.922366000 AM +02:00
After CURSOR CLOSE: 27-SEP-07 10.57.07.699791000 AM +02:00

Just look at the seconds: the loop took 37 seconds, while opening the cursor itself took nearly no time at all! But how many rows were inserted?

SQL> select count(*) from temp;

COUNT(*)
----------
49424

Truncate the table (truncate to free the extents!), bounce the database again, and now the second run:

SQL> exec cursor_for_query
Before CURSOR: 27-SEP-07 10.59.47.848249000 AM +02:00
After CURSOR: 27-SEP-07 11.00.09.072525000 AM +02:00

The whole loop took 22 seconds, which already looks better. But were all rows inserted here as well?

SQL> select count(*) from temp;

COUNT(*)
----------
49424

But now (after truncating and bouncing) the BULK COLLECT run:

SQL> exec bulk_collect_query
Before Bulk Collect: 27-SEP-07 11.01.33.553224000 AM +02:00
After Bulk Collect: 27-SEP-07 11.01.41.874054000 AM +02:00
After FORALL: 27-SEP-07 11.01.42.065753000 AM +02:00

Look at this: bulk-collecting all the rows into the collections took just 8 seconds (for 49,424 rows) and the inserts took less than a second! Unbelievable. Together everything was done in roughly 9 seconds, where the other ways took 22 and 37 seconds!

Now let's execute the bulk load first and then truncate the table again, but without bouncing the database, so that the buffers and caches are still filled:

SQL> exec bulk_collect_query
Before Bulk Collect: 27-SEP-07 11.02.31.257498000 AM +02:00
After Bulk Collect: 27-SEP-07 11.02.41.614205000 AM +02:00
After FORALL: 27-SEP-07 11.02.41.818092000 AM +02:00

PL/SQL procedure successfully completed.

SQL> select count(*) from temp;

COUNT(*)
----------
49423

SQL> truncate table temp;

Table truncated.

SQL> exec cursor_for_query
Before CURSOR: 27-SEP-07 11.04.04.960254000 AM +02:00
After CURSOR: 27-SEP-07 11.04.25.749038000 AM +02:00

OK, so now the bulk run needs 10 seconds, but the cursor run still needs 21 seconds! So the caching doesn't really change the picture. OK, final test on a big system with over 268 thousand rows:

Before Bulk Collect: 27-SEP-07 11.24.17.034732000 AM +02:00
After Bulk Collect: 27-SEP-07 11.24.25.111020000 AM +02:00
After FORALL: 27-SEP-07 11.24.26.129826000 AM +02:00
PL/SQL procedure successfully completed.

COUNT(*)
----------
267985

Table truncated.

Before CURSOR: 27-SEP-07 11.24.29.629354000 AM +02:00
After CURSOR: 27-SEP-07 11.25.02.244549000 AM +02:00
PL/SQL procedure successfully completed.

COUNT(*)
----------
268056

And again, bulking took 8 seconds and the inserts just 1 second! But the run with the cursor took 33 seconds!

So this was just a short test, but it definitely shows that BULK COLLECT and FORALL are much faster than a row-by-row cursor FOR loop! The only disadvantage of FORALL, as you may already have guessed from the code: you can only perform one DML statement per FORALL; there is no “FORALL … END” clause! But bulking is still a very high-performance feature of Oracle, so if you have to run through data collections, use BULK COLLECT!
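
To illustrate that last point: each FORALL drives exactly one DML statement, so a second statement needs its own FORALL over the same collections. A quick sketch of what I mean (temp2 is just a hypothetical second table with the same columns as temp):

CREATE OR REPLACE PROCEDURE BULK_COLLECT_TWO_INSERTS
 IS
 TYPE sOwner IS TABLE OF VARCHAR2(30);
 TYPE sName IS TABLE OF VARCHAR2(30);
 TYPE sType IS TABLE OF VARCHAR2(19);
 l_sOwner sOwner;
 l_sName sName;
 l_sType sType;
 BEGIN
 SELECT owner, object_name, object_type
 BULK COLLECT INTO l_sOwner, l_sName, l_sType
 FROM all_objects;
 -- the first DML statement gets its own FORALL
 FORALL indx IN 1..l_sName.COUNT
 INSERT INTO temp values (l_sOwner(indx), l_sName(indx), l_sType(indx));
 -- a second DML statement needs a second FORALL, there is no FORALL ... END block
 FORALL indx IN 1..l_sName.COUNT
 INSERT INTO temp2 values (l_sOwner(indx), l_sName(indx), l_sType(indx));
 COMMIT;
 END;
 /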

44 thoughts on “BULK COLLECT & FORALL vs. CURSOR & FOR LOOP”

  1. hi venzi,

    cool one… saved me the time of reinventing the same test to see whether bulk collect would help my performance

    thanks a lot


  2. Hi Venzi,
    I have a question on bulk versus regular inserts.
    Can you tell me which one is better?

    Writing an INSERT … SELECT or a bulk insert?
    1) By INSERT … SELECT I mean this:
    insert into temp (owner, name, type)
    SELECT owner, object_name name, object_type type FROM all_objects;

    Thanks in advance.
    Raj


  3. Hi Raj,

    Unfortunately I haven't had time yet to take a closer look at it. I'll provide a little test as soon as I'm back from my vacation, which runs until the middle of October!

    Regards,

    Gerald


  4. Hi Venzi,

    This is very helpful.
    What is the reason for such a vast performance difference between the collection loop and the cursor loop? Do you have any detailed information about it?

    Thanks,
    Mike


  5. Hi Mike,

    The one big advantage of BULK COLLECT/FORALL is: bulking!

    In the cursor loop Oracle must refresh the bind values with the new row data on every iteration, and the INSERT is executed each time, so there is a switch between the PL/SQL and SQL engines for every single row. With BULK COLLECT Oracle can put all the values into a collection at once; there is no need to refresh anything again and again. FORALL then executes the INSERT just once in bulk, handing the whole collection to the SQL engine in a single call, without re-reading the bind values row by row.

    Regards,

    Gerald


  6. Can you tell us what version of Oracle you ran this test on? Oracle improved the performance of the FOR LOOP in 10g so I am trying to confirm whether your test is valid for 10g.

    Regards, James


  7. Gerald,

    There is a subtle flaw in your tests that leads to incorrect conclusions.

    There are two differences between CURSOR_FOR_QUERY and BULK_COLLECT_QUERY. The first difference is that the FOR loop is replaced by BULK COLLECT and the second is that the INSERT statements are replaced by FORALL. In fact, all the performance benefit is gained by the latter change and none by the former.

    In 10gR2 (or 10gR1 with PLSQL_OPTIMIZE_LEVEL set to 2) a FOR loop does a BULK COLLECT under the covers.

    To show that the BULK COLLECT offers no benefit you can either remove the INSERT and FORALL statements (replacing the former with NULL) or you can modify CURSOR_FOR_QUERY to build an in-memory table as follows:

    CREATE OR REPLACE PROCEDURE cursor_for_query
    IS
      TYPE sowner IS TABLE OF VARCHAR2 (30);
      TYPE sname IS TABLE OF VARCHAR2 (30);
      TYPE stype IS TABLE OF VARCHAR2 (19);

      l_sowner sowner := sowner ();
      l_sname sname := sname ();
      l_stype stype := stype ();
      i INTEGER := 1;
    BEGIN
      DBMS_OUTPUT.put_line ('Before CURSOR: ' || SYSTIMESTAMP);

      FOR cur IN (SELECT owner, object_name NAME, object_type TYPE
                  FROM all_objects)
      LOOP
        l_sowner.EXTEND;
        l_sname.EXTEND;
        l_stype.EXTEND;
        l_sowner (i) := cur.owner;
        l_sname (i) := cur.NAME;
        l_stype (i) := cur.TYPE;
        i := i + 1;
      END LOOP;

      DBMS_OUTPUT.put_line ('After CURSOR: ' || SYSTIMESTAMP);

      FORALL indx IN l_sname.FIRST .. l_sname.LAST
        INSERT INTO temp
        VALUES (l_sowner (indx), l_sname (indx), l_stype (indx));

      DBMS_OUTPUT.put_line ('After FORALL: ' || SYSTIMESTAMP);
      COMMIT;
    END;
    /

    I think you will find, as I did, that there is no measurable difference between BULK COLLECT and a FOR loop.

    You are not alone in your conclusions. Steven Feuerstein has written this on OTN:

    http://www.oracle.com/technology/oramag/oracle/08-nov/o68plsql.html

    I will endeavour to engage Steven to get his comments on my thoughts.

    –Tony Hasler

    Welcome!


  8. Hi
    I am using BULK COLLECT with parameterised cursors. I have 3 cursors which run one after the other. With BULK COLLECT the first loop iteration runs fine, but the second time it doesn't find any value and returns the error ORA-06502: PL/SQL: numeric or value error. Whereas if I use the same code with a normal cursor and FOR loop it works perfectly fine and I get my output.

    Can you please suggest whether we can use bulk collect in different loops, or are there any constraints in using them?


  9. Hi Tony,

    You're right, but as I wrote: this was just a small test hacked together in 5 minutes.

    Here is the example about the CURSOR loop and the BULK COLLECT:

    Again the procedures which are just doing the selecting part:

    SQL> CREATE OR REPLACE PROCEDURE CURSOR_FOR_QUERY
    2 IS
    3 BEGIN
    4 dbms_output.put_line('Before CURSOR: ' || systimestamp);
    5 FOR cur IN (SELECT owner, object_name name, object_type type FROM all_objects) LOOP
    6 NULL;
    7 END LOOP;
    8 dbms_output.put_line('After CURSOR: ' || systimestamp);
    9 END;
    10 /

    Procedure created.

    SQL> CREATE OR REPLACE PROCEDURE BULK_COLLECT_QUERY
    2 IS
    3 TYPE sOwner IS TABLE OF VARCHAR2(30);
    4 TYPE sName IS TABLE OF VARCHAR2(30);
    5 TYPE sType IS TABLE OF VARCHAR2(19);
    6 l_sOwner sOwner;
    7 l_sName sName;
    8 l_sType sType;
    9 BEGIN
    10 dbms_output.put_line('Before Bulk Collect: ' || systimestamp);
    11 SELECT owner, object_name, object_type
    12 BULK COLLECT INTO l_sOwner, l_sName, l_sType
    13 FROM all_objects;
    14 dbms_output.put_line('After Bulk Collect: ' || systimestamp);
    15 END;
    16 /

    The optimizer parameter:

    SQL> show parameter plsql_optim

    NAME TYPE VALUE
    ———————————— ———– ——————————
    plsql_optimize_level integer 2
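
    Just to double-check that the procedures were really compiled with that level, the data dictionary can be queried as well (USER_PLSQL_OBJECT_SETTINGS should show the level each unit in your own schema was compiled with):

    select name, plsql_optimize_level
    from user_plsql_object_settings
    where name in ('CURSOR_FOR_QUERY', 'BULK_COLLECT_QUERY');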

    And here the executions of them:

    SQL> set serveroutput on;

    SQL> exec CURSOR_FOR_QUERY;
    Before CURSOR: 30-MAR-09 09.53.15.061996000 AM +02:00
    After CURSOR: 30-MAR-09 09.53.32.602228000 AM +02:00

    –> 17 seconds cold run – nothing in the buffer pool, etc.

    SQL> exec BULK_COLLECT_QUERY
    Before Bulk Collect: 30-MAR-09 09.54.09.204335000 AM +02:00
    After Bulk Collect: 30-MAR-09 09.54.18.715380000 AM +02:00

    –> 9 seconds cold run

    SQL> exec CURSOR_FOR_QUERY;
    Before CURSOR: 30-MAR-09 09.54.47.520029000 AM +02:00
    After CURSOR: 30-MAR-09 09.54.55.230735000 AM +02:00

    –> 8 seconds hot run

    SQL> exec CURSOR_FOR_QUERY;
    Before CURSOR: 30-MAR-09 09.55.01.945848000 AM +02:00
    After CURSOR: 30-MAR-09 09.55.08.162602000 AM +02:00

    –> 7 seconds hot run

    Well, one second isn't really meaningful in a time frame of 8, but as I'm an advocate of “knowing what you're doing” I don't rely on optimizer parameters and things that may or may not happen under the hood. I make sure that they happen, and so I'll always write BULK COLLECT explicitly rather than trusting that it will happen under the hood when I want a bulk collect.

    Thanks,

    Gerald


  10. Hi preeti,

    That issue sounds like a bug in your code. I don't know of any constraint on BULK COLLECT that could produce it.

    Thanks,

    Gerald


  11. OK, the first and main point is that the FOR loop performs comparably to BULK COLLECT unless somebody lowers the optimizer level.

    The FOR loop was provided – as in other languages – to simplify code and provide structure. In the vast majority of cases the BULK COLLECT clause is clumsier and, in my opinion, can generally no longer be justified now that there is no longer a performance benefit.

    There are some rare cases where I would still use BULK COLLECT, however. For example, if I want to process a bunch of rows multiple times I can read them into a collection first. This turns out to be simpler with BULK COLLECT (providing you do not need the limit clause) than a FOR loop but this is the exception that proves the rule.
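
    A minimal sketch of that pattern (the emp table and its columns are used purely for illustration): read the rows once, then walk the collection as often as needed.

    DECLARE
      TYPE t_emp IS TABLE OF emp%ROWTYPE;
      l_emps t_emp;
    BEGIN
      -- read the rows into memory once
      SELECT * BULK COLLECT INTO l_emps FROM emp;

      -- first pass over the collection
      FOR i IN 1 .. l_emps.COUNT LOOP
        DBMS_OUTPUT.put_line('pass 1: ' || l_emps(i).ename);
      END LOOP;

      -- second pass over the same rows, without re-querying the table
      FOR i IN 1 .. l_emps.COUNT LOOP
        DBMS_OUTPUT.put_line('pass 2: ' || l_emps(i).ename || ' earns ' || l_emps(i).sal);
      END LOOP;
    END;
    /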


  12. Hi Tony,

    Yes, they show the same performance with that optimizer level!

    Not sure if I get you right here, but I've never written anything like “use BULK COLLECT/FORALL because the FOR loop is crap”.
    I also don't tell developers which syntax they should use. It depends on the use case anyway.
    But what I wouldn't do is compare the PL/SQL FOR loop with FOR loops in other languages like Java or C++.
    And yes, FOR loops were provided for writing better-structured code, but I know C programs, for example, where you can't read the for loop anymore. So here, too, it depends on the developer how well the code is written!

    OK, so if I understand you correctly, you're saying that you wouldn't use the BULK COLLECT clause anymore.
    That's fine. I say: the developer has to/should know when to use BULK COLLECT and when to use the FOR clause!

    Thanks,

    Gerald


    By the way, I repeated your “cold” tests on my computer and found no difference in performance. Did you bounce the database between your tests? Flushing the buffer cache would be insufficient in this case:

    SQL> exec BULK_COLLECT_QUERY;
    Before Bulk Collect: 06-APR-09 12.19.29.312000000 +01:00
    After Bulk Collect: 06-APR-09 12.19.39.390000000 +01:00

    SQL> exec cursor_for_query;
    Before CURSOR: 06-APR-09 12.21.55.203000000 +01:00
    After CURSOR: 06-APR-09 12.22.05.203000000 +01:00


  14. Gerald,

    Perhaps we have done this to death but I am a little confused. When would you recommend using BULK COLLECT and when would you recommend using a FOR loop?

    This seems like a reasonable question given the title of this BLOG entry.

    It seems like we largely, perhaps completely, agree with each other.

    In case my position remains unclear, my answer to the question is “only use BULK COLLECT in the very rare cases where the code is simpler to read. In nearly all cases the FOR loop will be easier to both write and read”.


  15. Hi Tony,

    For my cold runs I just executed both procedures so that the buffer pool and shared pool got filled. The labelling is a bit unclear there: I just wanted to warm everything up, not test how the procedures behave the very first time they are executed.

    Yeah, it looks like we agree, at least to a certain degree.

    I would say it depends on the use case and the benefit for the developer, so the developer has to know when to use which of these clauses.

    An example for BULK COLLECT would be when you have to call a sub procedure that takes collections as parameters. Then it is better to fill the collections with BULK COLLECT rather than writing a FOR loop and filling the collections yourself.
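
    Just as a rough sketch of that case (owner_pkg and process_owners are of course made up for the example; the collection type has to live somewhere both sides can see it, e.g. a package spec):

    CREATE OR REPLACE PACKAGE owner_pkg IS
      TYPE t_owners IS TABLE OF VARCHAR2(30);
      PROCEDURE process_owners(p_owners IN t_owners);
    END owner_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY owner_pkg IS
      PROCEDURE process_owners(p_owners IN t_owners) IS
      BEGIN
        dbms_output.put_line('got ' || p_owners.COUNT || ' owners');
      END process_owners;
    END owner_pkg;
    /

    -- the caller fills the collection with a single BULK COLLECT instead of a FOR loop
    DECLARE
      l_owners owner_pkg.t_owners;
    BEGIN
      SELECT owner BULK COLLECT INTO l_owners FROM all_objects;
      owner_pkg.process_owners(l_owners);
    END;
    /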

    A FOR loop I would use when I have to go over some data and do a lot with it: for example, reading some records of a table, checking some values, passing some values to a sub routine, having some if/else logic depending on the values of one or more fields. That will be one of the standard scenarios, I guess!

    Thanks,

    Gerald


  16. Hi Venzi,

    Thank you for this test, very interesting.
    I just found one thing strange: why aren't the row counts after the BULK/FORALL run and the CURSOR/FOR run the same?

    Before Bulk Collect: 27-SEP-07 11.24.17.034732000 AM +02:00
    After Bulk Collect: 27-SEP-07 11.24.25.111020000 AM +02:00
    After FORALL: 27-SEP-07 11.24.26.129826000 AM +02:00
    PL/SQL procedure successfully completed.

    COUNT(*)
    ———-
    267985

    Table truncated.

    Before CURSOR: 27-SEP-07 11.24.29.629354000 AM +02:00
    After CURSOR: 27-SEP-07 11.25.02.244549000 AM +02:00
    PL/SQL procedure successfully completed.

    COUNT(*)
    ———-
    268056

    The difference between 267985 and 268056 isn't huge, but there is still a difference. Did you perform the test on a working database (with DML running during the tests)?


  17. I have read the complete discussion, but I still have one doubt, since one question has not been asked or answered by anyone: what should be used when multiple DML statements have to be performed? For example, I have to purge records from around 10 tables. If I go with BULK COLLECT and FORALL, then I would have to write 10 FORALL statements, whereas with a FOR loop I can do the same thing within a single loop. One more question: if I use BULK COLLECT with a FOR loop, would it give any performance benefit? Can someone suggest what the best approach would be? And what would the impact on memory be, since BULK COLLECT uses a collection of data, so it must keep all the data in memory, I guess? Please correct me if I am wrong.


  18. Hi Aashish,

    I'm not sure I get your question here: if you have tables from which you have to purge data, I would do a simple “DELETE FROM table WHERE …” with the specific where clause. I don't see any reason to select the data first and do a BULK COLLECT/FORALL there. And yes, I would use 10 different delete statements, or, if it is generic, one dynamic SQL delete statement.
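
    Something along these lines for the generic variant (the table names and the purge condition are made up, of course):

    DECLARE
      TYPE t_tables IS TABLE OF VARCHAR2(30);
      -- hypothetical list of the tables to purge
      l_tables t_tables := t_tables('TAB1', 'TAB2', 'TAB3');
    BEGIN
      FOR i IN 1 .. l_tables.COUNT LOOP
        -- one dynamic DELETE statement reused for every table
        EXECUTE IMMEDIATE
          'DELETE FROM ' || l_tables(i) || ' WHERE created < :cutoff'
          USING SYSDATE - 30;
      END LOOP;
      COMMIT;
    END;
    /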

    Thx,

    Venzi


  19. How can I use collections to improve performance for the queries below?

    1) insert into table_1 select * from table_2 where rownum < 10000
    and
    2) delete from table_2 where rownum < 10000;

    taking into consideration that table_2 has a large number of rows…


    1. Hi Sagar,

      I don't think that a collection would be the right choice for the insert part. “Insert into … select from” is very well optimized, and parallel query might help here too. Anyway, I would check the code first: selecting from a table with only a rownum in the WHERE clause carries no business logic, you just get some “random” rows into the new table.

      For the delete you can collect the primary keys first with a BULK COLLECT and then execute a FORALL DELETE using those keys. But again: check the code, a rownum in a delete just sounds wrong!
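
      Just as a rough sketch of the delete part (assuming table_2 has a single-column primary key called id, which is made up here; use your real key):

      DECLARE
        TYPE t_ids IS TABLE OF table_2.id%TYPE;
        l_ids t_ids;
      BEGIN
        -- collect the keys of the rows to remove
        SELECT id BULK COLLECT INTO l_ids
        FROM table_2
        WHERE rownum < 10000;

        -- remove them with a single bulk-bound DELETE
        FORALL indx IN 1 .. l_ids.COUNT
          DELETE FROM table_2 WHERE id = l_ids(indx);

        COMMIT;
      END;
      /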

      Thx,

      Venzi


  20. Hi Venzi,

    Actually Venzi, you didn't get my question. I am using the 10 delete statements, but before the deletion I have to get all the eligible records into one table; from that table I read the records, match them against all the tables in the WHERE clauses, and delete them. And I read all the records from that table only once before starting the loop. My question is that I can do this using either bulk collect or a cursor, but with bulk collect I'll have to write 10 FORALL statements, since only one DML operation can be performed per FORALL, whereas with a cursor I'll have to write a single FOR loop with all 10 delete statements inside it.

    I hope my question is clearer to you this time.

    Regards
    Aashish Mor


    1. Hi Aashish,

      I hope I got your question this time:

      You are right! You have to write 10 FORALL statements for those 10 DELETES.
      As I said in my original post: “The only disadvantage of FORALL, as you may already have guessed from the code: you can only perform one DML statement per FORALL; there is no “FORALL … END” clause! But bulking is still a very high-performance feature of Oracle, so if you have to run through data collections, use BULK COLLECT!”

      And also the Oracle documentation says so:
      “The FORALL statement runs one DML statement multiple times, with different values in the VALUES and WHERE clauses. The different values come from existing, populated collections or host arrays. The FORALL statement is usually much faster than an equivalent FOR LOOP statement.”

      But: You have to see FORALL as a DML statement and not as a LOOP!
      It’s just like a DELETE statement or an INSERT statement. So rather than having 10 DELETE statements you have 10 FORALL statements there. The amount is the same but with the FORALL statements you save the whole FOR LOOP!

      Thx,

      Venzi


  21. There is a restriction in bulk collect and forall.

    for example,
    declare
      type t_tab is table of emp%rowtype index by binary_integer;
      l_tab t_tab;
      cursor c is select * from emp;
    begin
      open c;
      loop
        fetch c bulk collect into l_tab;
        forall i in l_tab.first .. l_tab.last
          update emp set sal = sal + 10
          where empno = l_tab(i).empno;
        exit when c%notfound;
      end loop;
    end;

    The above code will fail to execute.

    You need to do it like this:

    declare
      type t_tab is table of emp.empno%type index by binary_integer;
      l_tab t_tab;
      cursor c is select empno from emp;
    begin
      open c;
      loop
        fetch c bulk collect into l_tab;
        forall i in l_tab.first .. l_tab.last
          update emp set sal = sal + 10
          where empno = l_tab(i).empno;
        exit when c%notfound;
      end loop;
    end;

    That means that if there are more WHERE conditions, you need to define more index-by table types, one per column used.

    This is an implementation restriction.


    1. Hi Mahesh,

      Actually, I tested it and it does work – at least with Oracle 11g:

      Here is the salary from one of the employees (empno 7934):

      SQL>
      select sal from emp where empno = 7934;

      SAL
      ———-
      1300

      Now I execute your script and select the salary again:

      SQL> declare
      2 type t_tab is table of emp%rowtype index by binary_integer;
      3 l_tab t_tab;
      4 cursor c is select * from emp;
      5 begin
      6 open c;
      7 loop
      8 fetch c bulk collect into l_tab;
      9 forall i in l_tab.first .. l_tab.last
      10 update emp set sal= sal+10
      11 where empno=l_tab(i).empno;
      12 exit when c%notfound;
      13 end loop;
      14 end;
      15 /

      PL/SQL procedure successfully completed.

      SQL> select sal from emp where empno = 7934;

      SAL
      ———-
      1310

      The salary got updated successfully.

      Actually I note that you have two unnecessary steps in there. You open a loop and exit it when c%notfound is true.
      As you do bulk collect and forall you don’t need to do that; that’s the whole purpose of this functionality. So you can remove those lines and it still works fine:

      SQL> rollback;

      Rollback complete.

      SQL> select sal from emp where empno = 7934;

      SAL
      ———-
      1300

      SQL> declare
      2 type t_tab is table of emp%ROWTYPE index by binary_integer;
      3 l_tab t_tab;
      4 cursor c is select * from emp;
      5 begin
      6 open c;
      7 fetch c bulk collect into l_tab;
      8 forall i in l_tab.first..l_tab.last
      9 update emp set sal=sal+10
      10 where empno=l_tab(i).empno;
      11 close c;
      12 end;
      13 /

      PL/SQL procedure successfully completed.

      SQL> select sal from emp where empno = 7934;

      SAL
      ———-
      1310


  22. hi mike

    is there any limitation on the number of records that can be stored with bulk collect?

    thanks in advance
    rabi


    1. Hi Rabi,

      BULK COLLECT itself doesn't have a limit; it's a PL/SQL feature. However, some collection types do have limits, like VARRAYs. Others, like the one used in my example, have no logical limit, but they do have a physical one: the collection is stored within the PGA of the session process, which means that the bigger the collection gets, the more memory the session process uses. So the size of the collection depends on the amount of physical memory available on the database machine.
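
      If memory ever becomes a concern, you can also fetch in chunks with the LIMIT clause so that only a slice of the rows sits in the PGA at any time. A rough sketch, reusing the tables from the post:

      DECLARE
        CURSOR cur IS SELECT owner, object_name, object_type FROM all_objects;
        TYPE sOwner IS TABLE OF VARCHAR2(30);
        TYPE sName IS TABLE OF VARCHAR2(30);
        TYPE sType IS TABLE OF VARCHAR2(19);
        l_sOwner sOwner;
        l_sName sName;
        l_sType sType;
      BEGIN
        OPEN cur;
        LOOP
          -- fetch at most 1000 rows per round so the collections stay small
          FETCH cur BULK COLLECT INTO l_sOwner, l_sName, l_sType LIMIT 1000;
          FORALL indx IN 1 .. l_sName.COUNT
            INSERT INTO temp VALUES (l_sOwner(indx), l_sName(indx), l_sType(indx));
          -- check %NOTFOUND only after processing the (possibly partial) last batch
          EXIT WHEN cur%NOTFOUND;
        END LOOP;
        CLOSE cur;
        COMMIT;
      END;
      /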

      Venzi


  23. hello,
    but the query takes the time for fetching the records rather than the insert. So what is the main difference between record-by-record inserts in a FOR loop and BULK COLLECT/FORALL? I am still confused. Please clarify.


    1. Hi Shibbu,

      Yes, the query takes the time for fetching the results, that is correct. In the CURSOR/FOR example this time is spent by the FOR loop iterating over the cursor. In the BULK COLLECT/FORALL example this time is spent during the BULK COLLECT.

      The major difference between the INSERT and the FORALL statement is that the INSERT statement gets executed for every single row from your cursor/select statement. So for every row you have another switch from the PL/SQL engine into the SQL engine and another execution of the statement.
      With FORALL the statement is processed only once: the whole collection is handed over to the SQL engine in one go (only one switch), and the statement is executed once in bulk, streaming the whole collection into the table. So in the end the FORALL statement simply does much less work.

      Thanks,

      Venzi


    1. Hi All,

      I am calling a function through a cursor, but I want to do it with a collection instead. How can I do that?
      Can you guys reply to me?

      CURSOR leave_cur
      IS
      (SELECT regexp_substr(Validate_Leave(p_empid),'[a-zA-Z0-9]+',1,level) leave_name
      FROM dual
      CONNECT BY level < regexp_count(Validate_Leave(p_empid),',') + 2
      );


      1. Hi naveen,

        You can simply BULK COLLECT your SELECT as is without having to have a cursor, for example:

        DECLARE
        TYPE t_emp_id IS TABLE OF VARCHAR2(255);
        coll_emp_id t_emp_id;
        BEGIN
        SELECT regexp_substr(Validate_Leave(emp_id),'[a-zA-Z0-9]+',1,level) leave_name
        BULK COLLECT INTO coll_emp_id
        FROM test
        CONNECT BY level < regexp_count(Validate_Leave(emp_id),',') + 2;
        END;
        /

        For more details see: https://livesql.oracle.com/apex/livesql/s/CB3AT8IJ7EJJVOJ3XQKHE1RTG

        Thanks,

        Venzi


  24. I couldn't add an IF condition inside FORALL; it is not allowed. If there is a condition that has to be checked, how is this supposed to be written?


    1. Hi Naveen,

      FORALL is a single PL/SQL statement that executes one DML statement multiple times with the different values of the collection. You can pretty much think of it like an INSERT, UPDATE, or DELETE statement. If you need to handle the logic differently based on values then, depending on what that logic is, you may either be able to put the condition into the WHERE clause, create multiple collections and have multiple FORALL statements, or fall back to conventional DML operations inside a loop.
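
      For example, if the condition can be evaluated against the table data itself, it can simply go into the WHERE clause of that one DML statement. A small sketch against the emp table from the earlier comments (the salary condition is just an example):

      DECLARE
        TYPE t_empnos IS TABLE OF emp.empno%TYPE;
        l_empnos t_empnos;
      BEGIN
        SELECT empno BULK COLLECT INTO l_empnos FROM emp;

        -- the "IF sal < 2000" is handled by the WHERE clause, not by a PL/SQL IF
        FORALL i IN 1 .. l_empnos.COUNT
          UPDATE emp
          SET sal = sal + 10
          WHERE empno = l_empnos(i)
          AND sal < 2000;

        COMMIT;
      END;
      /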

      For more information about FORALL, have a look at the PL/SQL Language Reference.

      Thx,

