Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records

From: Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-09-05 10:19:58
Message-ID: CAHLDt=_ts0A7Agn=hCpUh+RCFkxd+G6uuT=kcTfqFtGur0dp=A@mail.gmail.com
Lists: pgsql-bugs pgsql-hackers

Greetings!

One of our clients experienced a crash of the startup process with the error
"invalid memory alloc request size 1073741824" on a hot standby, which
ended in a replica reinit.

According to the logs, the startup process crashed while trying to replay a
"Standby/LOCK" record with a huge list of locks (see the attached
replicalog_tail.tgz):

FATAL: XX000: invalid memory alloc request size 1073741824
CONTEXT: WAL redo at 7/327F9248 for Standby/LOCK: xid 1638575 db 7550635
rel 8500880 xid 1638575 db 7550635 rel 10324499...
LOCATION: repalloc, mcxt.c:1075
BACKTRACE:
postgres: startup recovering 000000010000000700000033(repalloc+0x61) [0x8d7611]
postgres: startup recovering 000000010000000700000033() [0x691c29]
postgres: startup recovering 000000010000000700000033() [0x691c74]
postgres: startup recovering 000000010000000700000033(lappend+0x16) [0x691e76]
postgres: startup recovering 000000010000000700000033(StandbyAcquireAccessExclusiveLock+0xdd) [0x7786bd]
postgres: startup recovering 000000010000000700000033(standby_redo+0x5d) [0x7789ed]
postgres: startup recovering 000000010000000700000033(StartupXLOG+0x1055) [0x51d7c5]
postgres: startup recovering 000000010000000700000033(StartupProcessMain+0xcd) [0x71d65d]
postgres: startup recovering 000000010000000700000033(AuxiliaryProcessMain+0x40c) [0x52c7cc]
postgres: startup recovering 000000010000000700000033() [0x71a62e]
postgres: startup recovering 000000010000000700000033(PostmasterMain+0xe74) [0x71cf74]
postgres: startup recovering 000000010000000700000033(main+0x70d) [0x4891ad]
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f0be2db3555]
LOG: 00000: startup process (PID 2650) exited with exit code 1

It looks like the startup process at some point hits the MaxAllocSize
limit (memutils.h), which forbids allocations of more than 1 GB - 1 bytes.

Judging by pg_waldump output, there was a long-running transaction on the
primary that sequentially locked and modified a lot of tables. Right
before the crash there were about 85k exclusively locked tables.

Trying to reproduce the issue, I found out that the problem is not so much
the number of locks as the duration of the transaction. While replaying
"Standby/LOCK" records, the startup process eventually crashes with the
mentioned error if a long transaction holds a large number of locks for
long enough.

I managed to reproduce the situation on 13.7, 14.4, 15beta3 and master
using the following steps:
1) set up a primary and a replica with the following settings:
max_locks_per_transaction = '10000' and max_connections = '1000'
2) create 950k tables
3) lock them in AccessExclusive mode in a transaction and leave it in the
"idle in transaction" state
4) generate some side activity with pgbench (pgbench -R 100 -P 5 -T 7200 -c 1)
In about 20-30 minutes the startup process crashes with the same error.
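
The lock script in step 3 can be sketched as a small generator. This is a hypothetical reconstruction for illustration (table names, counts, and structure are assumptions; the actual 950k_locks.sql attachment may differ), scaled down to 1000 tables:

```python
# Hypothetical sketch of how a lock script like 950k_locks.sql might be
# generated; the real attachment may differ. Scaled down for illustration.
NUM_TABLES = 1000   # the report used 950,000

stmts = [f"CREATE TABLE IF NOT EXISTS lock_t{i} ();" for i in range(NUM_TABLES)]
stmts.append("BEGIN;")
stmts += [f"LOCK TABLE lock_t{i} IN ACCESS EXCLUSIVE MODE;" for i in range(NUM_TABLES)]
# No COMMIT on purpose: the session stays "idle in transaction",
# holding all the AccessExclusive locks across checkpoints.
sql = "\n".join(stmts)
print(sql.splitlines()[0])
```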

As far as I understand, there is a fixed number of AccessExclusive locks in
this scenario: 950k exclusive locks acquired by the "long-running" transaction
and no additional exclusive locks held by pgbench. But the startup process
consumes more and more memory while replaying records that contain exactly the
same list of locks. Could it be a memory leak? If not, is there any way to
improve this behavior?

If you're going to reproduce it, set up a primary and a replica with enough RAM
and simultaneously run on the primary:
$ pgbench -i && pgbench -R 100 -P 5 -T 7200 -c 1 (in one terminal)
$ psql -f 950k_locks.sql (in another terminal)
and observe the startup process's memory usage and the replica's logs.

Best regards,
Dmitry Kuzmin

Attachment Content-Type Size
950k_locks.sql application/sql 706 bytes
replicalog_tail.tgz application/x-compressed-tar 4.2 MB

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-09-05 12:13:13
Message-ID: CAApHDvrDg2rJ-sqa7c=wPoHeEGrox46sQ=CFj=FkXqBx26dr0A@mail.gmail.com
Lists: pgsql-bugs pgsql-hackers

On Mon, 5 Sept 2022 at 22:38, Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com> wrote:
> One of our clients experienced a crash of startup process with an error "invalid memory alloc request size 1073741824" on a hot standby, which ended in replica reinit.
>
> According to logs, startup process crashed while trying to replay "Standby/LOCK" record with a huge list of locks(see attached replicalog_tail.tgz):
>
> FATAL: XX000: invalid memory alloc request size 1073741824
> CONTEXT: WAL redo at 7/327F9248 for Standby/LOCK: xid 1638575 db 7550635 rel 8500880 xid 1638575 db 7550635 rel 10324499...
> LOCATION: repalloc, mcxt.c:1075
> BACKTRACE:
> postgres: startup recovering 000000010000000700000033(repalloc+0x61) [0x8d7611]
> postgres: startup recovering 000000010000000700000033() [0x691c29]
> postgres: startup recovering 000000010000000700000033() [0x691c74]
> postgres: startup recovering 000000010000000700000033(lappend+0x16) [0x691e76]

This must be the repalloc() in enlarge_list(). 1073741824 / 8 is
134,217,728 (2^27). That's quite a bit more than 1 lock per your 950k
tables.
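
The arithmetic can be sanity-checked directly (assuming 8-byte list cells on a 64-bit build and the 1 GB - 1 MaxAllocSize from memutils.h):

```python
# Check that the failed 1073741824-byte request corresponds to 2**27
# pointer-sized list cells, and that it just exceeds MaxAllocSize.
request = 1073741824          # bytes, from the FATAL message
cell_size = 8                 # assumed 8-byte list cell on a 64-bit build
MAX_ALLOC_SIZE = 0x3FFFFFFF   # memutils.h: 1 gigabyte - 1

cells = request // cell_size
print(cells)                      # 134217728, i.e. 2**27
print(request > MAX_ALLOC_SIZE)   # True: one byte over the limit
```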

I wonder why the RecoveryLockListsEntry.locks list is getting so long.

From the file you attached, I see:
$ cat replicalog_tail | grep -Eio "rel\s([0-9]+)" | wc -l
950000

So that confirms there were 950k relations in the xl_standby_locks.
The contents of that message seem to be produced by standby_desc().
That should be the same WAL record that's processed by standby_redo()
which adds the 950k locks to the RecoveryLockListsEntry.

I'm not seeing why 950k becomes 134m.

David


From: Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-09-06 06:38:43
Message-ID: CAHLDt=9uO-mauy6VGX3jbwNCpD3xKGC225QtH25Pcj4hn4BnKA@mail.gmail.com
Lists: pgsql-bugs pgsql-hackers

Thanks, David!

Let me know if there's any additional information I could provide.

Best regards,
Dmitry Kuzmin

On Mon, 5 Sept 2022 at 22:13, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:

> On Mon, 5 Sept 2022 at 22:38, Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com> wrote:
> > One of our clients experienced a crash of startup process with an error
> "invalid memory alloc request size 1073741824" on a hot standby, which
> ended in replica reinit.
> >
> > According to logs, startup process crashed while trying to replay
> "Standby/LOCK" record with a huge list of locks(see attached
> replicalog_tail.tgz):
> >
> > FATAL: XX000: invalid memory alloc request size 1073741824
> > CONTEXT: WAL redo at 7/327F9248 for Standby/LOCK: xid 1638575 db
> 7550635 rel 8500880 xid 1638575 db 7550635 rel 10324499...
> > LOCATION: repalloc, mcxt.c:1075
> > BACKTRACE:
> > postgres: startup recovering
> 000000010000000700000033(repalloc+0x61) [0x8d7611]
> > postgres: startup recovering 000000010000000700000033()
> [0x691c29]
> > postgres: startup recovering 000000010000000700000033()
> [0x691c74]
> > postgres: startup recovering
> 000000010000000700000033(lappend+0x16) [0x691e76]
>
> This must be the repalloc() in enlarge_list(). 1073741824 / 8 is
> 134,217,728 (2^27). That's quite a bit more than 1 lock per your 950k
> tables.
>
> I wonder why the RecoveryLockListsEntry.locks list is getting so long.
>
> from the file you attached, I see:
> $ cat replicalog_tail | grep -Eio "rel\s([0-9]+)" | wc -l
> 950000
>
> So that confirms there were 950k relations in the xl_standby_locks.
> The contents of that message seem to be produced by standby_desc().
> That should be the same WAL record that's processed by standby_redo()
> which adds the 950k locks to the RecoveryLockListsEntry.
>
> I'm not seeing why 950k becomes 134m.
>
> David
>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-04 22:54:08
Message-ID: 2138765.1664924048@sss.pgh.pa.us
Lists: pgsql-bugs pgsql-hackers

[ redirecting to -hackers because patch attached ]

David Rowley <dgrowleyml(at)gmail(dot)com> writes:
> So that confirms there were 950k relations in the xl_standby_locks.
> The contents of that message seem to be produced by standby_desc().
> That should be the same WAL record that's processed by standby_redo()
> which adds the 950k locks to the RecoveryLockListsEntry.

> I'm not seeing why 950k becomes 134m.

I figured out what the problem is. The standby's startup process
retains knowledge of all these locks in standby.c's RecoveryLockLists
data structure, which *has no de-duplication capability*. It'll add
another entry to the per-XID list any time it's told about a given
exclusive lock. And checkpoints cause us to regurgitate the entire
set of currently-held exclusive locks into the WAL. So if you have
a process holding a lot of exclusive locks, and sitting on them
across multiple checkpoints, standby startup processes will bloat.
It's not a true leak, in that we know where the memory is and
we'll release it whenever we see that XID commit/abort. And I doubt
that this is a common usage pattern, which probably explains the
lack of previous complaints. Still, bloat bad.
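
As a toy model (plain Python, not PostgreSQL code) of the bloat described above: each checkpoint re-logs the full set of held locks, and the non-deduplicating per-XID list just keeps growing until a reallocation crosses MaxAllocSize. The 950k lock count is from the report; the 8-byte cell size is an assumption.

```python
# Toy model of the non-deduplicating RecoveryLockLists behavior: each
# checkpoint record re-adds every lock the idle transaction still holds.
HELD_LOCKS = 950_000          # locks held by the long transaction
MAX_ALLOC_SIZE = 0x3FFFFFFF   # 1 GB - 1, from memutils.h
CELL_SIZE = 8                 # assumed bytes per list cell

list_len = 0                  # per-XID lock list length on the standby
checkpoints = 0
while list_len * CELL_SIZE <= MAX_ALLOC_SIZE:   # until an alloc would fail
    list_len += HELD_LOCKS                      # replay one re-logged record
    checkpoints += 1

print(checkpoints)   # 142 checkpoint re-logs are enough to cross the limit
```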

PFA a quick-hack fix that solves this issue by making per-transaction
subsidiary hash tables. That's overkill perhaps; I'm a little worried
about whether this slows down normal cases more than it's worth.
But we ought to do something about this, because aside from the
duplication aspect the current storage of these lists seems mighty
space-inefficient.
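
In spirit (a rough Python analogue, not the actual patch), the per-transaction subsidiary hash table deduplicates re-logged locks like this; the function names mirror standby.c but everything else is illustrative:

```python
# Rough analogue of the first patch: a hash table (here a set) per XID,
# so replaying the same lock record twice adds nothing.
recovery_locks = {}   # xid -> set of (dbOid, relOid)

def standby_acquire_access_exclusive_lock(xid, db, rel):
    recovery_locks.setdefault(xid, set()).add((db, rel))

def standby_release_lock_tree(xid):
    recovery_locks.pop(xid, None)   # drop the whole subsidiary table

# 142 checkpoint re-logs of the same 1000 locks stay at 1000 entries,
# instead of growing to 142,000 list cells.
for _ in range(142):
    for rel in range(1000):
        standby_acquire_access_exclusive_lock(1638575, 7550635, rel)
print(len(recovery_locks[1638575]))   # 1000
```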

regards, tom lane

Attachment Content-Type Size
fix-RecoveryLockLists-data-structure-1.patch text/x-diff 9.9 KB

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-04 23:53:11
Message-ID: 2208668.1664927591@sss.pgh.pa.us
Lists: pgsql-bugs pgsql-hackers

I wrote:
> PFA a quick-hack fix that solves this issue by making per-transaction
> subsidiary hash tables. That's overkill perhaps; I'm a little worried
> about whether this slows down normal cases more than it's worth.
> But we ought to do something about this, because aside from the
> duplication aspect the current storage of these lists seems mighty
> space-inefficient.

After further thought, maybe it'd be better to do it as attached,
with one long-lived hash table for all the locks. This is a shade
less space-efficient than the current code once you account for
dynahash overhead, but the per-transaction overhead should be lower
than the previous patch since we only need to create/destroy a hash
table entry not a whole hash table.
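
A rough analogue of this second design (again illustrative Python, not the patch itself): one long-lived table keyed by the full (xid, dbOid, relOid) triple, so duplicate replays overwrite the same entry and per-transaction work is just entry creation and removal:

```python
# Rough analogue of the second patch: one long-lived hash table keyed by
# the full lock identity. Duplicate replays hit the same key.
recovery_locks = {}   # (xid, dbOid, relOid) -> lock info (here just True)

def standby_acquire_access_exclusive_lock(xid, db, rel):
    recovery_locks[(xid, db, rel)] = True   # re-logged duplicates are no-ops

def standby_release_lock_tree(xid):
    # Release every lock belonging to this transaction at commit/abort.
    for key in [k for k in recovery_locks if k[0] == xid]:
        del recovery_locks[key]

for _ in range(3):   # three re-logged checkpoint records for the same lock
    standby_acquire_access_exclusive_lock(1638575, 7550635, 8500880)
print(len(recovery_locks))   # 1
standby_release_lock_tree(1638575)
print(len(recovery_locks))   # 0
```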

regards, tom lane

Attachment Content-Type Size
fix-RecoveryLockLists-data-structure-2.patch text/x-diff 10.5 KB

From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, Dmitriy Kuzmin <kuzmin(dot)db4(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-05 00:15:31
Message-ID: 20221005001531.GC779730@nathanxps13
Lists: pgsql-bugs pgsql-hackers

On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:
> I wrote:
>> PFA a quick-hack fix that solves this issue by making per-transaction
>> subsidiary hash tables. That's overkill perhaps; I'm a little worried
>> about whether this slows down normal cases more than it's worth.
>> But we ought to do something about this, because aside from the
>> duplication aspect the current storage of these lists seems mighty
>> space-inefficient.
>
> After further thought, maybe it'd be better to do it as attached,
> with one long-lived hash table for all the locks. This is a shade
> less space-efficient than the current code once you account for
> dynahash overhead, but the per-transaction overhead should be lower
> than the previous patch since we only need to create/destroy a hash
> table entry not a whole hash table.

This feels like a natural way to solve this problem. I saw several cases
of the issue that was fixed with 6301c3a, so I'm inclined to believe this
usage pattern is actually somewhat common.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com


From: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
To: nathandbossart(at)gmail(dot)com
Cc: tgl(at)sss(dot)pgh(dot)pa(dot)us, dgrowleyml(at)gmail(dot)com, kuzmin(dot)db4(at)gmail(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-05 01:41:03
Message-ID: 20221005.104103.1560341327554683899.horikyota.ntt@gmail.com
Lists: pgsql-bugs pgsql-hackers

At Tue, 4 Oct 2022 17:15:31 -0700, Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote in
> On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:
> > I wrote:
> >> PFA a quick-hack fix that solves this issue by making per-transaction
> >> subsidiary hash tables. That's overkill perhaps; I'm a little worried
> >> about whether this slows down normal cases more than it's worth.
> >> But we ought to do something about this, because aside from the
> >> duplication aspect the current storage of these lists seems mighty
> >> space-inefficient.
> >
> > After further thought, maybe it'd be better to do it as attached,
> > with one long-lived hash table for all the locks. This is a shade
> > less space-efficient than the current code once you account for
> > dynahash overhead, but the per-transaction overhead should be lower
> > than the previous patch since we only need to create/destroy a hash
> > table entry not a whole hash table.

The first one is a straightforward outcome of the current implementation,
but I like the new one. I agree that it is natural and that the expected
overhead per (typical) transaction is lower than both the first one
and doing the same operation on a list. I don't think that space
inefficiency to that extent matters, since it is the startup
process.

> This feels like a natural way to solve this problem. I saw several cases
> of the issue that was fixed with 6301c3a, so I'm inclined to believe this
> usage pattern is actually somewhat common.

So releasing locks becomes somewhat slower? But it seems to still be
far faster than massively repetitive head-removal in a list.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
Cc: nathandbossart(at)gmail(dot)com, dgrowleyml(at)gmail(dot)com, kuzmin(dot)db4(at)gmail(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-05 15:30:22
Message-ID: 2414481.1664983822@sss.pgh.pa.us
Lists: pgsql-bugs pgsql-hackers

Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> writes:
> At Tue, 4 Oct 2022 17:15:31 -0700, Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote in
>> On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:
>>> After further thought, maybe it'd be better to do it as attached,
>>> with one long-lived hash table for all the locks.

> The first one is a straightforward outcome of the current implementation,
> but I like the new one. I agree that it is natural and that the expected
> overhead per (typical) transaction is lower than both the first one
> and doing the same operation on a list. I don't think that space
> inefficiency to that extent matters, since it is the startup
> process.

To get some hard numbers about this, I made a quick hack to collect
getrusage() numbers for the startup process (patch attached for
documentation purposes). I then ran the recovery/t/027_stream_regress.pl
test a few times and collected the stats (also attached). This seems
like a reasonably decent baseline test, since the core regression tests
certainly take lots of AccessExclusiveLocks what with all the DDL
involved, though they shouldn't ever take large numbers at once. Also
they don't run long enough for any lock list bloat to occur, so these
numbers don't reflect a case where the patches would provide benefit.

If you look hard, there's maybe about a 1% user-CPU penalty for patch 2,
although that's well below the run-to-run variation so it's hard to be
sure that it's real. The same comments apply to the max resident size
stats. So I'm comforted that there's not a significant penalty here.

I'll go ahead with patch 2 if there's not objection.

One other point to discuss: should we consider back-patching? I've
got mixed feelings about that myself. I don't think that cases where
this helps significantly are at all mainstream, so I'm kind of leaning
to "patch HEAD only".

regards, tom lane

Attachment Content-Type Size
startup-stats.patch text/x-diff 853 bytes
results.txt text/plain 5.2 KB

From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, dgrowleyml(at)gmail(dot)com, kuzmin(dot)db4(at)gmail(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-05 19:00:55
Message-ID: 20221005190055.GC201192@nathanxps13
Lists: pgsql-bugs pgsql-hackers

On Wed, Oct 05, 2022 at 11:30:22AM -0400, Tom Lane wrote:
> One other point to discuss: should we consider back-patching? I've
> got mixed feelings about that myself. I don't think that cases where
> this helps significantly are at all mainstream, so I'm kind of leaning
> to "patch HEAD only".

+1. It can always be back-patched in the future if there are additional
reports.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com


From: Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, nathandbossart(at)gmail(dot)com, dgrowleyml(at)gmail(dot)com, kuzmin(dot)db4(at)gmail(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-10 12:24:34
Message-ID: CANbhV-HYn8GgDheVa94EG9g8EtAmjjoYzWC6_r6Hd0mv=EphKg@mail.gmail.com
Lists: pgsql-bugs pgsql-hackers

On Wed, 5 Oct 2022 at 16:30, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> writes:
> > At Tue, 4 Oct 2022 17:15:31 -0700, Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote in
> >> On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:
> >>> After further thought, maybe it'd be better to do it as attached,
> >>> with one long-lived hash table for all the locks.
>
> > The first one is a straightforward outcome of the current implementation,
> > but I like the new one. I agree that it is natural and that the expected
> > overhead per (typical) transaction is lower than both the first one
> > and doing the same operation on a list. I don't think that space
> > inefficiency to that extent matters, since it is the startup
> > process.
>
> To get some hard numbers about this, I made a quick hack to collect
> getrusage() numbers for the startup process (patch attached for
> documentation purposes). I then ran the recovery/t/027_stream_regress.pl
> test a few times and collected the stats (also attached). This seems
> like a reasonably decent baseline test, since the core regression tests
> certainly take lots of AccessExclusiveLocks what with all the DDL
> involved, though they shouldn't ever take large numbers at once. Also
> they don't run long enough for any lock list bloat to occur, so these
> numbers don't reflect a case where the patches would provide benefit.
>
> If you look hard, there's maybe about a 1% user-CPU penalty for patch 2,
> although that's well below the run-to-run variation so it's hard to be
> sure that it's real. The same comments apply to the max resident size
> stats. So I'm comforted that there's not a significant penalty here.
>
> I'll go ahead with patch 2 if there's not objection.

Happy to see this change.

> One other point to discuss: should we consider back-patching? I've
> got mixed feelings about that myself. I don't think that cases where
> this helps significantly are at all mainstream, so I'm kind of leaning
> to "patch HEAD only".

It looks fine to eventually back-patch, since StandbyReleaseLockTree()
was optimized to only be called when the transaction had actually taken
some AccessExclusiveLocks.

The performance loss is minor and isolated to users of such
locks, so I see no problem with it.

--
Simon Riggs http://www.EnterpriseDB.com/


From: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
To: simon(dot)riggs(at)enterprisedb(dot)com
Cc: tgl(at)sss(dot)pgh(dot)pa(dot)us, nathandbossart(at)gmail(dot)com, dgrowleyml(at)gmail(dot)com, kuzmin(dot)db4(at)gmail(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Startup process on a hot standby crashes with an error "invalid memory alloc request size 1073741824" while replaying "Standby/LOCK" records
Date: 2022-10-11 06:48:24
Message-ID: 20221011.154824.2222289551494538331.horikyota.ntt@gmail.com
Lists: pgsql-bugs pgsql-hackers

> On Wed, 5 Oct 2022 at 16:30, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > One other point to discuss: should we consider back-patching? I've
> > got mixed feelings about that myself. I don't think that cases where
> > this helps significantly are at all mainstream, so I'm kind of leaning
> > to "patch HEAD only".

At Mon, 10 Oct 2022 13:24:34 +0100, Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com> wrote in
> It looks fine to eventually backpatch, since StandbyReleaseLockTree()
> was optimized to only be called when the transaction had actually done
> some AccessExclusiveLocks.
>
> So the performance loss is minor and isolated to the users of such
> locks, so I see no problems with it.

At Wed, 5 Oct 2022 12:00:55 -0700, Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote in
> +1. It can always be back-patched in the future if there are additional
> reports.

The third +1 from me.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center