From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Noah Misch <noah(at)leadboat(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, Alvaro Herrera from 2ndQuadrant <alvherre(at)alvh(dot)no-ip(dot)org>, Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: logical decoding : exceeded maxAllocatedDescs for .spill files
Date: 2020-02-09 03:48:29
Message-ID: CAA4eK1+NkknWor5H6CQkYJ+uPksXnT7toOY7Tnz9ozNDHetSTg@mail.gmail.com
Lists: pgsql-hackers
On Sat, Feb 8, 2020 at 12:10 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:
> > I performed the same test in pg11 and reproduced the issue on the
> > commit prior to a4ccc1cef5a04 (Generational memory allocator).
> >
> > ulimit -s 1024
> > ulimit -v 300000
> >
> > wal_level = logical
> > max_replication_slots = 4
> >
> > [...]
>
> > After that, I applied the "Generational memory allocator" patch and
> > that solved the issue. From the error message, it is evident that the
> > underlying code is trying to allocate MaxHeapTupleSize bytes for each
> > tuple. So, I re-introduced the following lines (which were removed by
> > a4ccc1cef5a04) on top of the patch:
>
> > --- a/src/backend/replication/logical/reorderbuffer.c
> > +++ b/src/backend/replication/logical/reorderbuffer.c
> > @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)
> >
> > alloc_len = tuple_len + SizeofHeapTupleHeader;
> >
> > + if (alloc_len < MaxHeapTupleSize)
> > + alloc_len = MaxHeapTupleSize;
>
> Maybe I'm being slow here - but what does this actually prove? Before
> the generation contexts were introduced we avoided fragmentation (which
> would make things unusably slow) using a brute force method (namely
> forcing all tuple allocations to be of the same/maximum size).
>
It seems that, for this, we maintained a cache of up to
max_cached_tuplebufs such objects, so we never needed to keep more
than that many MaxHeapTupleSize-sized tuple buffers around; any
buffer beyond that limit was simply returned to aset.c on release.
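
To make sure we are talking about the same scheme, here is a minimal
standalone sketch of how I read the old behavior, with malloc/free
standing in for aset.c and MAX_TUPLE_SIZE / MAX_CACHED_TUPLEBUFS
standing in for MaxHeapTupleSize / max_cached_tuplebufs (the names,
numbers, and structure here are illustrative assumptions, not the
actual reorderbuffer.c code):

#include <stdio.h>
#include <stdlib.h>

#define MAX_TUPLE_SIZE        8160   /* stand-in for MaxHeapTupleSize */
#define MAX_CACHED_TUPLEBUFS  4096   /* stand-in for max_cached_tuplebufs */

typedef struct TupleBuf
{
    struct TupleBuf *next;           /* free-list link while cached */
    size_t      alloc_len;           /* always MAX_TUPLE_SIZE here */
    char        data[];              /* tuple bytes */
} TupleBuf;

static TupleBuf *cached_tuplebufs = NULL;
static size_t nr_cached_tuplebufs = 0;

/* Get a buffer: reuse a cached one if available, else allocate at full size. */
static TupleBuf *
get_tuple_buf(size_t tuple_len)
{
    TupleBuf   *buf;

    if (tuple_len > MAX_TUPLE_SIZE)
        return NULL;                 /* oversized tuples handled separately */

    if (cached_tuplebufs)
    {
        buf = cached_tuplebufs;
        cached_tuplebufs = buf->next;
        nr_cached_tuplebufs--;
    }
    else
    {
        /* every allocation is the same (maximum) size: no fragmentation */
        buf = malloc(sizeof(TupleBuf) + MAX_TUPLE_SIZE);
        if (!buf)
            return NULL;
        buf->alloc_len = MAX_TUPLE_SIZE;
    }
    return buf;
}

/* Return a buffer: cache it up to the cap, otherwise hand the memory back. */
static void
return_tuple_buf(TupleBuf *buf)
{
    if (nr_cached_tuplebufs < MAX_CACHED_TUPLEBUFS)
    {
        buf->next = cached_tuplebufs;
        cached_tuplebufs = buf;
        nr_cached_tuplebufs++;
    }
    else
        free(buf);                   /* beyond the cap, memory goes back */
}

int
main(void)
{
    TupleBuf *a = get_tuple_buf(100);
    TupleBuf *b = get_tuple_buf(2000);  /* both get MAX_TUPLE_SIZE chunks */

    return_tuple_buf(a);
    return_tuple_buf(b);
    printf("cached buffers: %zu\n", nr_cached_tuplebufs);
    return 0;
}

Because every buffer is the same size, a freed chunk can always
satisfy the next request, which is how fragmentation was kept at bay
before the generational allocator took over that job.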
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com