From: Constantin Teodorescu <teo(at)flex(dot)ro>
To: pgsql-hackers(at)postgresql(dot)org
Cc: mviorel(at)flex(dot)ro
Subject: Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results
Date: 1998-01-07 08:14:37
Message-ID: 34B3396D.50488A88@flex.ro
Lists: pgsql-hackers
Peter T Mount wrote:
>
> The only solution I was able to give was for them to use cursors, and
> fetch the result in chunks.
Got it!!!
It seems everyone has 'voted' for using cursors.
As a matter of fact, I have tested both a

    BEGIN; DECLARE CURSOR; FETCH N; END;

and a plain

    SELECT ... FROM ...

Both of them keep the tables they use locked against writes until the
end of processing.
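For the record, this is roughly what the cursor loop looks like from
libpq (a minimal sketch only; the connection string, the table name
"mytable" and the cursor name "c" are placeholders I made up, and
error handling is cut down to the bare minimum):

    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            exit(1);
        }

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM mytable"));

        for (;;)
        {
            int i, n;

            /* time A: wait for the next chunk of (at most) 100 rows */
            res = PQexec(conn, "FETCH 100 FROM c");
            if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
            {
                PQclear(res);
                break;
            }

            /* time B: process the chunk only after it has fully arrived */
            n = PQntuples(res);
            for (i = 0; i < n; i++)
                printf("%s\n", PQgetvalue(res, i, 0));

            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }

The FETCH size of 100 is of course tunable.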
Fetching records in chunks (of 100) does speed up the processing a
little. But I am still convinced that if the frontend were able to
process tuples as soon as they arrive, the overall time to process a
big table would be lower.
When fetching in chunks, the frontend first waits for the 100 records
to arrive (time A) and then processes them (time B); A and B cannot
overlap.
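Just to put illustrative numbers on it (these are invented, not
measured): for 10,000 tuples fetched in chunks of 100, if every chunk
takes A = 20 ms to arrive and B = 10 ms to process, the loop needs
100 x (20 + 10) ms = 3 seconds, while a frontend that processed
tuples as they came in could approach max(100 x 20, 100 x 10) ms = 2
seconds.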
Thanks a lot for helping me to decide. Reports in PgAccess will use
cursors.
--
Constantin Teodorescu
FLEX Consulting Braila, ROMANIA