From: | Heikki Linnakangas <hlinnaka(at)iki(dot)fi> |
---|---|
To: | Paul Guo <pguo(at)pivotal(dot)io> |
Cc: | PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Taylor Vesely <tvesely(at)pivotal(dot)io> |
Subject: | Re: Batch insert in CTAS/MatView code |
Date: | 2019-08-01 18:54:56 |
Message-ID: | be75c9bc-89a7-4227-cfb1-690c005cf36b@iki.fi |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Thread: | |
Lists: | pgsql-hackers |
On 17/06/2019 15:53, Paul Guo wrote:
> I noticed that to do batch insert, we might need additional memory copy
> sometimes comparing with "single insert"
> (that should be the reason that we previously saw a bit regressions) so a
> good solution seems to fall back
> to "single insert" if the tuple length is larger than a threshold. I set
> this as 2000 after quick testing.
Where does the additional memory copy come from? Can we avoid doing it
in the multi-insert case?
- Heikki
From | Date | Subject | |
---|---|---|---|
Next Message | Corey Huinker | 2019-08-01 18:57:19 | Re: \describe* |
Previous Message | Alvaro Herrera | 2019-08-01 18:43:33 | Re: [HACKERS] CLUSTER command progress monitor |