Bugs in pgoutput.c

Lists: pgsql-hackers
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Bugs in pgoutput.c
Date: 2022-01-05 22:11:54
Message-ID: 885288.1641420714@sss.pgh.pa.us

Commit 6ce16088b caused me to look at pgoutput.c's handling of
cache invalidations, and I was pretty appalled by what I found.

* rel_sync_cache_relation_cb does the wrong thing when called for
a cache flush (i.e., relid == 0). Instead of invalidating all
RelationSyncCache entries as it should, it will do nothing.

* When rel_sync_cache_relation_cb does invalidate an entry,
it immediately zaps the entry->map structure, even though that
might still be in use (as per the adjacent comment that carefully
explains why this isn't safe). I'm not sure if this could lead
to a dangling-pointer core dump, but it sure seems like it could
lead to failing to translate tuples that are about to be sent.

* Similarly, rel_sync_cache_publication_cb is way too eager to
reset the pubactions flags, which would likely lead to failing
to transmit changes that we should transmit.

The attached patch fixes these things, but I'm still pretty
unhappy with the general design of the data structures in
pgoutput.c, because there is this weird random mishmash of
static variables along with a palloc'd PGOutputData struct.
This cannot work if there are ever two active LogicalDecodingContexts
in the same process. I don't think serial use of LogicalDecodingContexts
(ie, destroy one and then make another) works very well either,
because pgoutput_shutdown is a mere fig leaf that ignores all the
junk the module previously made (in CacheMemoryContext no less).
So I wonder whether either of those scenarios is possible/supported/
expected to be needed in future.

Also ... maybe I'm not looking in the right place, but I do not
see anything anywhere in logical decoding that is taking any lock
on the relation being processed. How can that be safe? In
particular, how do we know that the data collected by get_rel_sync_entry
isn't already stale by the time we return from the function?
Setting replicate_valid = true at the bottom of the function would
overwrite any notification we might have gotten from a syscache callback
while reading catalog data.

regards, tom lane

Attachment Content-Type Size
clean-up-pgoutput-cache-invalidation.patch text/x-diff 6.7 KB

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Bugs in pgoutput.c
Date: 2022-01-06 04:16:21
Message-ID: CAA4eK1KvyaDpJq7LHAOeZPE76Gf9ZDCnBzPK4_SDGiCj9AySng@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Commit 6ce16088b caused me to look at pgoutput.c's handling of
> cache invalidations, and I was pretty appalled by what I found.
>
> * rel_sync_cache_relation_cb does the wrong thing when called for
> a cache flush (i.e., relid == 0). Instead of invalidating all
> RelationSyncCache entries as it should, it will do nothing.
>
> * When rel_sync_cache_relation_cb does invalidate an entry,
> it immediately zaps the entry->map structure, even though that
> might still be in use (as per the adjacent comment that carefully
> explains why this isn't safe). I'm not sure if this could lead
> to a dangling-pointer core dump, but it sure seems like it could
> lead to failing to translate tuples that are about to be sent.
>
> * Similarly, rel_sync_cache_publication_cb is way too eager to
> reset the pubactions flags, which would likely lead to failing
> to transmit changes that we should transmit.
>
> The attached patch fixes these things, but I'm still pretty
> unhappy with the general design of the data structures in
> pgoutput.c, because there is this weird random mishmash of
> static variables along with a palloc'd PGOutputData struct.
> This cannot work if there are ever two active LogicalDecodingContexts
> in the same process. I don't think serial use of LogicalDecodingContexts
> (ie, destroy one and then make another) works very well either,
> because pgoutput_shutdown is a mere fig leaf that ignores all the
> junk the module previously made (in CacheMemoryContext no less).
> So I wonder whether either of those scenarios is possible/supported/
> expected to be needed in future.
>
> Also ... maybe I'm not looking in the right place, but I do not
> see anything anywhere in logical decoding that is taking any lock
> on the relation being processed. How can that be safe?
>

We don't need to acquire a lock on the relation while decoding changes
from WAL, because decoding uses a historic snapshot to build the
relcache entry, and any later changes to the rel are absorbed while
decoding WAL.

It is also important not to acquire locks on user-defined relations
during decoding; otherwise, it could lead to a deadlock, as explained in
the email [1].

* Would it be better if we moved all the initialization done by the patch
in get_rel_sync_entry() to a separate function, as I expect future
patches might need to reset more things?

*
* logical decoding callback calls - but invalidation events can come in
- * *during* a callback if we access the relcache in the callback. Because
- * of that we must mark the cache entry as invalid but not remove it from
- * the hash while it could still be referenced, then prune it at a later
- * safe point.
- *
- * Getting invalidations for relations that aren't in the table is
- * entirely normal, since there's no way to unregister for an invalidation
- * event. So we don't care if it's found or not.
+ * *during* a callback if we do any syscache or table access in the
+ * callback.

As we don't take locks on tables, can invalidation events be accepted
during table access? I could be missing something, but I see that
relation.c accepts invalidation messages only when the lock mode is not
NoLock.

[1] - /message-id/CAA4eK1Ks%2Bp8wDbzhDr7yMYEWDbWFRJAd_uOY-moikc%2Bzr9ER%2Bg%40mail.gmail.com

--
With Regards,
Amit Kapila.


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Bugs in pgoutput.c
Date: 2022-01-06 19:58:31
Message-ID: 1166732.1641499111@sss.pgh.pa.us
Lists: pgsql-hackers

Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
> On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Also ... maybe I'm not looking in the right place, but I do not
>> see anything anywhere in logical decoding that is taking any lock
>> on the relation being processed. How can that be safe?

> We don't need to acquire a lock on relation while decoding changes
> from WAL because it uses a historic snapshot to build a relcache entry
> and all the later changes to the rel are absorbed while decoding WAL.

That might be okay for the system catalog entries, but I don't see
how it prevents some other session from dropping the table entirely,
thereby causing the on-disk storage to go away. Is it guaranteed
that logical decoding will never try to fetch any on-disk data?
(I can sort of believe that that might be true, but there are scary
corner cases for toasted data, such as an UPDATE that carries forward
a pre-existing toast datum.)

> * Would it be better if we moved all the initialization done by the patch
> in get_rel_sync_entry() to a separate function, as I expect future
> patches might need to reset more things?

Don't see that it helps particularly.

> + * *during* a callback if we do any syscache or table access in the
> + * callback.

> As we don't take locks on tables, can invalidation events be accepted
> during table access? I could be missing something but I see relation.c
> accepts invalidation messages only when lock mode is not 'NoLock'.

The core point here is that you're assuming that NO code path taken
during logical decoding would try to take a lock. I don't believe it,
at least not unless you can point me to some debugging cross-check that
guarantees it.

Given that we're interested in historic not current snapshots, I can
buy that it might be workable to manage syscache invalidations totally
differently than the way it's done in normal processing, in which case
(*if* it's done like that) maybe no invals would need to be recognized
while an output plugin is executing. But (a) the comment here is
entirely wrong if that's so, and (b) I don't see anything in inval.c
that makes it work differently.

regards, tom lane


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Bugs in pgoutput.c
Date: 2022-01-06 20:34:28
Message-ID: CA+TgmoZopS7mn_fCAj5Lmn9r=7E-k71=V1TAOSR1fmPKXaPK=Q@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 6, 2022 at 2:58 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> That might be okay for the system catalog entries, but I don't see
> how it prevents some other session from dropping the table entirely,
> thereby causing the on-disk storage to go away. Is it guaranteed
> that logical decoding will never try to fetch any on-disk data?
> (I can sort of believe that that might be true, but there are scary
> corner cases for toasted data, such as an UPDATE that carries forward
> a pre-existing toast datum.)

If I'm not mistaken, logical decoding is only allowed to read data
from system catalog tables or tables that are flagged as being "like
system catalog tables for logical decoding purposes only". See
RelationIsAccessibleInLogicalDecoding and
RelationIsUsedAsCatalogTable. As far as the actual table data is
concerned, it has to be reconstructed solely from the WAL.

I am not sure what locking is required here and am not taking a
position on that ... but it's definitely not the case that a logical
decoding plugin can decide to just read from any old table it likes.

--
Robert Haas
EDB: http://www.enterprisedb.com


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Bugs in pgoutput.c
Date: 2022-01-07 03:37:25
Message-ID: CAA4eK1+fbBkCjJoCG-841BKvqVwQnMWJ6TLgkFVaiK6+FdJF-A@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jan 7, 2022 at 1:28 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
>
> > + * *during* a callback if we do any syscache or table access in the
> > + * callback.
>
> > As we don't take locks on tables, can invalidation events be accepted
> > during table access? I could be missing something but I see relation.c
> > accepts invalidation messages only when lock mode is not 'NoLock'.
>
> The core point here is that you're assuming that NO code path taken
> during logical decoding would try to take a lock. I don't believe it,
> at least not unless you can point me to some debugging cross-check that
> guarantees it.
>

AFAIK, there is currently no such debugging cross-check in the locking
APIs, but we could add one to ensure that we don't acquire a lock on
user-defined tables during logical decoding. As Robert pointed out, I
also don't think accessing user tables will work during logical
decoding.

> Given that we're interested in historic not current snapshots, I can
> buy that it might be workable to manage syscache invalidations totally
> differently than the way it's done in normal processing, in which case
> (*if* it's done like that) maybe no invals would need to be recognized
> while an output plugin is executing. But (a) the comment here is
> entirely wrong if that's so, and (b) I don't see anything in inval.c
> that makes it work differently.
>

I think we need invalidations to work in the output plugin to ensure
that the RelationSyncEntry has correct information.

--
With Regards,
Amit Kapila.


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Hou, Zhijie/侯 志杰 <houzj(dot)fnst(at)fujitsu(dot)com>
Subject: Re: Bugs in pgoutput.c
Date: 2022-01-29 03:02:08
Message-ID: CAA4eK1JACZTJqu_pzTu_2Nf-zGAsupqyfk6KBqHe9puVZGQfvw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Commit 6ce16088b caused me to look at pgoutput.c's handling of
> cache invalidations, and I was pretty appalled by what I found.
>
> * rel_sync_cache_relation_cb does the wrong thing when called for
> a cache flush (i.e., relid == 0). Instead of invalidating all
> RelationSyncCache entries as it should, it will do nothing.
>
> * When rel_sync_cache_relation_cb does invalidate an entry,
> it immediately zaps the entry->map structure, even though that
> might still be in use (as per the adjacent comment that carefully
> explains why this isn't safe). I'm not sure if this could lead
> to a dangling-pointer core dump, but it sure seems like it could
> lead to failing to translate tuples that are about to be sent.
>
> * Similarly, rel_sync_cache_publication_cb is way too eager to
> reset the pubactions flags, which would likely lead to failing
> to transmit changes that we should transmit.
>

Are you planning to proceed with this patch? AFAICS, it is good to go.
Yesterday, while debugging/analyzing a cfbot failure [1] for the row
filter patch [2] with one of my colleagues, we saw a problem arise for
exactly the second reason you outlined here. After applying your patch
and adapting the row filter patch atop it, the problem was fixed.

[1] - https://cirrus-ci.com/task/5450648090050560?logs=test_world#L3975
[2] - https://commitfest.postgresql.org/36/2906/

--
With Regards,
Amit Kapila.


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Hou, Zhijie/侯 志杰 <houzj(dot)fnst(at)fujitsu(dot)com>
Subject: Re: Bugs in pgoutput.c
Date: 2022-02-03 02:45:16
Message-ID: CAA4eK1+uJZ6qYcZxp_A19ZCByhdhui3T3PZ2_ipnJDpwuuzo-w@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jan 29, 2022 at 8:32 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >
> > Commit 6ce16088b caused me to look at pgoutput.c's handling of
> > cache invalidations, and I was pretty appalled by what I found.
> >
> > * rel_sync_cache_relation_cb does the wrong thing when called for
> > a cache flush (i.e., relid == 0). Instead of invalidating all
> > RelationSyncCache entries as it should, it will do nothing.
> >
> > * When rel_sync_cache_relation_cb does invalidate an entry,
> > it immediately zaps the entry->map structure, even though that
> > might still be in use (as per the adjacent comment that carefully
> > explains why this isn't safe). I'm not sure if this could lead
> > to a dangling-pointer core dump, but it sure seems like it could
> > lead to failing to translate tuples that are about to be sent.
> >
> > * Similarly, rel_sync_cache_publication_cb is way too eager to
> > reset the pubactions flags, which would likely lead to failing
> > to transmit changes that we should transmit.
> >
>
> Are you planning to proceed with this patch?
>

Tom, is it okay for you if I go ahead with this patch after some testing?

--
With Regards,
Amit Kapila.


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Hou, Zhijie/侯 志杰 <houzj(dot)fnst(at)fujitsu(dot)com>
Subject: Re: Bugs in pgoutput.c
Date: 2022-02-03 02:48:18
Message-ID: 137797.1643856498@sss.pgh.pa.us
Lists: pgsql-hackers

Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
> Tom, is it okay for you if I go ahead with this patch after some testing?

I've been too busy to get back to it, so sure.

regards, tom lane


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Hou, Zhijie/侯 志杰 <houzj(dot)fnst(at)fujitsu(dot)com>
Subject: Re: Bugs in pgoutput.c
Date: 2022-02-03 11:54:35
Message-ID: CAA4eK1KYdLj7yhN858SxLrwbtxtre1d=z94bpibZWwueYuHCnw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 3, 2022 at 8:18 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
> > Tom, is it okay for you if I go ahead with this patch after some testing?
>
> I've been too busy to get back to it, so sure.
>

Thanks. I have tested the patch by generating an invalidation message
for table DDL before accessing the syscache in
logicalrep_write_tuple(). I see that it correctly invalidates the
entry and rebuilds it for the next operation. I couldn't come up with
an automated test for it, so I used the debugger instead. I have
made a minor change in one of the comments. I am planning to push this
tomorrow unless there are comments or suggestions.

--
With Regards,
Amit Kapila.

Attachment Content-Type Size
v2-0001-Improve-invalidation-handling-in-pgoutput.c.patch application/octet-stream 7.8 KB

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Hou, Zhijie/侯 志杰 <houzj(dot)fnst(at)fujitsu(dot)com>
Subject: Re: Bugs in pgoutput.c
Date: 2022-02-04 12:20:46
Message-ID: CAA4eK1+vKM9xUMuWjpGvDCAAdpfpHhoQzqGL8++nfHS4ehp03Q@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 3, 2022 at 5:24 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Thu, Feb 3, 2022 at 8:18 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >
> > Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
> > > Tom, is it okay for you if I go ahead with this patch after some testing?
> >
> > I've been too busy to get back to it, so sure.
> >
>
> Thanks. I have tested the patch by generating an invalidation message
> for table DDL before accessing the syscache in
> logicalrep_write_tuple(). I see that it correctly invalidates the
> entry and rebuilds it for the next operation. I couldn't come up with
> an automated test for it, so I used the debugger instead. I have
> made a minor change in one of the comments. I am planning to push this
> tomorrow unless there are comments or suggestions.
>

Pushed!

--
With Regards,
Amit Kapila.