From: Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>
To: Julien Rouhaud <rjuju123(at)gmail(dot)com>
Cc: Greg Stark <stark(at)mit(dot)edu>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Lukas Fittl <lukas(at)fittl(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-04-01 20:46:47
Message-ID: ae576cac3f451d318374f2a2e494aab1@postgrespro.ru
Lists: pgsql-hackers
Hi, hackers. Thank you for your attention to this topic.
Julien Rouhaud wrote:
> +static void show_loop_info(Instrumentation *instrument, bool isworker,
> +                           ExplainState *es);
>
> I think this should be done as a separate refactoring commit.
Sure. I've split the patch; Justin's refactoring commit is now separate.
I also updated it a bit.
> Most of the comments I have are easy to fix. But I think that the real
> problem is the significant overhead shown by Ekaterina that for now would
> apply even if you don't consume the new stats, for instance if you have
> pg_stat_statements. And I'm still not sure of what is the best way to
> avoid that.
I took your advice about InstrumentOption: there is now an INSTRUMENT_EXTRA
flag, so there is no overhead under a basic load. Operations using
INSTRUMENT_ALL still incur overhead (because INSTRUMENT_EXTRA is part of
INSTRUMENT_ALL), but it is much less significant than before. I attach new
overhead statistics collected by pgbench with auto_explain enabled.
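For context, INSTRUMENT_ALL in instrument.h is defined as an all-bits mask,
which is why the new flag is automatically swept into it. A minimal sketch of
the layout (the exact bit assigned to INSTRUMENT_EXTRA here is illustrative,
not necessarily what the patch uses):

/* src/include/executor/instrument.h (sketch) */
typedef enum InstrumentOption
{
    INSTRUMENT_NONE = 0,
    INSTRUMENT_TIMER = 1 << 0,      /* needs timer (and row counts) */
    INSTRUMENT_BUFFERS = 1 << 1,    /* needs buffer usage */
    INSTRUMENT_ROWS = 1 << 2,       /* needs row count */
    INSTRUMENT_WAL = 1 << 3,        /* needs WAL usage */
    INSTRUMENT_EXTRA = 1 << 4,      /* per-loop min/max statistics */
    INSTRUMENT_ALL = PG_INT32_MAX   /* all-bits mask: includes EXTRA too */
} InstrumentOption;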
> Why do you need to initialize min_t and min_tuples but not max_t and
> max_tuples while both will initially be 0 and possibly updated
> afterwards?
We need this initialization only for the min values; the comment explaining
it is located right above that block of code.
I believe the latest changes have improved the patch. I'll be pleased to hear
your thoughts on them.
--
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment | Content-Type | Size
---|---|---
0001-explain.c-refactor-ExplainNode_v2.patch | text/x-diff | 4.8 KB
0002-extra_statistics_v7.patch | text/x-diff | 19.2 KB
overhead_v1.txt | text/plain | 1.2 KB
From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>
Cc: Greg Stark <stark(at)mit(dot)edu>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Lukas Fittl <lukas(at)fittl(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-04-02 14:43:46
Message-ID: 20220402144346.5eb36risy4iu7tsi@jrouhaud
Lists: pgsql-hackers
Hi,
On Fri, Apr 01, 2022 at 11:46:47PM +0300, Ekaterina Sokolova wrote:
>
> > Most of the comments I have are easy to fix. But I think that the real
> > problem is the significant overhead shown by Ekaterina that for now
> > would apply even if you don't consume the new stats, for instance if
> > you have pg_stat_statements. And I'm still not sure of what is the best
> > way to avoid that.
> I took your advice about InstrumentOption: there is now an INSTRUMENT_EXTRA
> flag, so there is no overhead under a basic load. Operations using
> INSTRUMENT_ALL still incur overhead (because INSTRUMENT_EXTRA is part of
> INSTRUMENT_ALL), but it is much less significant than before. I attach new
> overhead statistics collected by pgbench with auto_explain enabled.
Can you give a bit more details on your bench scenario? I see contradictory
results, where the patched version with more code is sometimes way faster,
sometimes way slower. If you're using the default pgbench queries (including
write queries), I don't think that any of them will hit the loop code, so
it's really a best-case scenario. Also, write queries will make the tests
less stable for no added value wrt. this code.
Ideally you would need a custom scenario with a single read-only query
involving a nested loop or something like that to check how much overhead you
really get when you cumulate those values. I will try to
>
> > Why do you need to initialize min_t and min_tuples but not max_t and
> > max_tuples while both will initially be 0 and possibly updated
> > afterwards?
> We need this initialization only for the min values; the comment explaining
> it is located right above that block of code.
Sure, but if we're going to have a branch for nloops == 0, I think it would be
better to avoid redundant / useless instructions, something like:
if (nloops == 0)
{
    min_t = totaltime;
    min_tuple = tuplecount;
}
else
{
    if (min_t...)
    ...
}
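Filled in, that could look something like this inside InstrEndLoop() (a
sketch only; the min_t / max_t / min_tuples / max_tuples field names follow
this discussion rather than the actual patch):

totaltime = INSTR_TIME_GET_DOUBLE(instr->counter);

if (instr->nloops == 0)
{
    /* first loop: seed the min values, no comparisons needed */
    instr->min_t = totaltime;
    instr->min_tuples = instr->tuplecount;
}
else
{
    if (instr->min_t > totaltime)
        instr->min_t = totaltime;
    if (instr->min_tuples > instr->tuplecount)
        instr->min_tuples = instr->tuplecount;
}

/* the max values start at zero, so plain comparisons suffice */
if (instr->max_t < totaltime)
    instr->max_t = totaltime;
if (instr->max_tuples < instr->tuplecount)
    instr->max_tuples = instr->tuplecount;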
While on that part of the patch: there's an extra newline between the max_t
and min_tuple processing.
From: Greg Stark <stark(at)mit(dot)edu>
To: Julien Rouhaud <rjuju123(at)gmail(dot)com>
Cc: Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Lukas Fittl <lukas(at)fittl(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-04-05 21:14:09
Message-ID: CAM-w4HNOUdVkHt40Rn7Hn-roXv9nXu7d3rSDUcX6LxANbLTg9Q@mail.gmail.com
Lists: pgsql-hackers
This is not passing regression tests due to some details of the plan
output - marking Waiting on Author:
diff -w -U3 c:/cirrus/src/test/regress/expected/partition_prune.out c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out
--- c:/cirrus/src/test/regress/expected/partition_prune.out	2022-04-05 17:00:25.433576100 +0000
+++ c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out	2022-04-05 17:18:30.092203500 +0000
@@ -2251,10 +2251,7 @@
Workers Planned: 2
Workers Launched: N
-> Parallel Seq Scan on public.lprt_b (actual rows=N loops=N)
- Loop Min Rows: N Max Rows: N Total Rows: N
Output: lprt_b.b
- Worker 0: actual rows=N loops=N
- Worker 1: actual rows=N loops=N
-> Materialize (actual rows=N loops=N)
Loop Min Rows: N Max Rows: N Total Rows: N
Output: lprt_a.a
@@ -2263,10 +2260,8 @@
Workers Planned: 1
Workers Launched: N
-> Parallel Seq Scan on public.lprt_a (actual rows=N loops=N)
- Loop Min Rows: N Max Rows: N Total Rows: N
Output: lprt_a.a
- Worker 0: actual rows=N loops=N
-(24 rows)
+(19 rows)
drop table lprt_b;
delete from lprt_a where a = 1;
From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Julien Rouhaud <rjuju123(at)gmail(dot)com>, Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>, Lukas Fittl <lukas(at)fittl(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-04-11 12:34:56
Message-ID: 20220411123456.GE26620@telsasoft.com
Lists: pgsql-hackers
On Tue, Apr 05, 2022 at 05:14:09PM -0400, Greg Stark wrote:
> This is not passing regression tests due to some details of the plan
> output - marking Waiting on Author:
It's unstable due to parallel workers.
I'm not sure what the usual workarounds are here.
Maybe set parallel_leader_participation=no for this test.
> diff -w -U3 c:/cirrus/src/test/regress/expected/partition_prune.out c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out
> --- c:/cirrus/src/test/regress/expected/partition_prune.out	2022-04-05 17:00:25.433576100 +0000
> +++ c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out	2022-04-05 17:18:30.092203500 +0000
> @@ -2251,10 +2251,7 @@
> Workers Planned: 2
> Workers Launched: N
> -> Parallel Seq Scan on public.lprt_b (actual rows=N loops=N)
> - Loop Min Rows: N Max Rows: N Total Rows: N
> Output: lprt_b.b
> - Worker 0: actual rows=N loops=N
> - Worker 1: actual rows=N loops=N
> -> Materialize (actual rows=N loops=N)
> Loop Min Rows: N Max Rows: N Total Rows: N
> Output: lprt_a.a
> @@ -2263,10 +2260,8 @@
> Workers Planned: 1
> Workers Launched: N
> -> Parallel Seq Scan on public.lprt_a (actual rows=N loops=N)
> - Loop Min Rows: N Max Rows: N Total Rows: N
> Output: lprt_a.a
> - Worker 0: actual rows=N loops=N
> -(24 rows)
> +(19 rows)
From: Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Cc: Greg Stark <stark(at)mit(dot)edu>, Julien Rouhaud <rjuju123(at)gmail(dot)com>, Lukas Fittl <lukas(at)fittl(dot)com>, pryzby(at)telsasoft(dot)com
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-06-24 17:16:06
Message-ID: 420960372f05563984984f195522ff01@postgrespro.ru
Lists: pgsql-hackers
Hi, hackers!
We started a discussion about the overhead and how to measure it correctly.
Julien Rouhaud wrote:
> Can you give a bit more details on your bench scenario? I see contradictory
> results, where the patched version with more code is sometimes way faster,
> sometimes way slower. If you're using the default pgbench queries (including
> write queries), I don't think that any of them will hit the loop code, so
> it's really a best-case scenario. Also, write queries will make the tests
> less stable for no added value wrt. this code.
>
> Ideally you would need a custom scenario with a single read-only query
> involving a nested loop or something like that to check how much overhead
> you really get when you cumulate those values.
I created two custom scenarios. The first one uses the VERBOSE flag, so it
exercises the extra statistics; the second one doesn't use the new feature
but doesn't disable it either (so the data is still collected).
I attach the pgbench scripts to this letter.
The main conclusions are:
1) using the additional statistics adds no more than 4.5% overhead;
2) collecting the data adds no more than 1.5% overhead.
I think testing on another machine would be very helpful, so if you get a
chance, I'd be happy to hear your observations.
Some fixes:
> Sure, but if we're going to have a branch for nloops == 0, I think it would
> be better to avoid redundant / useless instructions
Right. I've done it.
Justin Pryzby wrote:
> Maybe set parallel_leader_participation=no for this test.
Thanks for reporting the issue and for the advice. I set
parallel_leader_participation = off; I hope this solves the problem of
inconsistent outputs.
If you have any comments on this topic or want to share your
impressions, please write to me.
Thank you very much for your contribution to the development of this
patch.
--
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment | Content-Type | Size
---|---|---
0001-explain.c-refactor-ExplainNode_v3.patch | text/x-diff | 4.8 KB
0002-extra_statistics_v8.patch | text/x-diff | 18.6 KB
pgbench_loop | text/plain | 907 bytes
pgbench_loop_without_verbose | text/plain | 920 bytes
overhead_v3.txt | text/plain | 155 bytes
From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Greg Stark <stark(at)mit(dot)edu>, Lukas Fittl <lukas(at)fittl(dot)com>, pryzby(at)telsasoft(dot)com
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-07-30 12:54:33
Message-ID: 20220730125433.b6eebuwa2l5vpfam@jrouhaud
Lists: pgsql-hackers
Hi,
On Fri, Jun 24, 2022 at 08:16:06PM +0300, Ekaterina Sokolova wrote:
>
> We started a discussion about the overhead and how to measure it correctly.
>
> Julien Rouhaud wrote:
> > Can you give a bit more details on your bench scenario?
> > [...]
> > Ideally you would need a custom scenario with a single read-only query
> > involving a nested loop or something like that to check how much overhead
> > you really get when you cumulate those values.
> I created two custom scenarios. The first one uses the VERBOSE flag, so it
> exercises the extra statistics; the second one doesn't use the new feature
> but doesn't disable it either (so the data is still collected).
> I attach the pgbench scripts to this letter.
I don't think that this scenario is really representative of the problem I
was mentioning, as you're only testing the overhead of the EXPLAIN (ANALYZE)
command itself, which doesn't say much about normal query execution.
I did a simple benchmark using a scale-50 pgbench on an instance with
pg_stat_statements enabled, and the following scenario:
SET enable_mergejoin = off;
SELECT count(*) FROM pgbench_accounts JOIN pgbench_tellers on aid = tid;
(which forces a nested loop), and compared the results between this patch
as-is and a version of pg_stat_statements fixed to not request
INSTRUMENT_EXTRA, something like:
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 049da9fe6d..9a2177e438 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -985,7 +985,7 @@ pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
 		MemoryContext oldcxt;
 
 		oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
-		queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL, false);
+		queryDesc->totaltime = InstrAlloc(1, (INSTRUMENT_ALL & ~INSTRUMENT_EXTRA), false);
 		MemoryContextSwitchTo(oldcxt);
 	}
 }
It turns out that having pg_stat_statements with INSTRUMENT_EXTRA indirectly
requested by INSTRUMENT_ALL adds a ~27% overhead.
I'm not sure that I actually believe these results, but they're really
consistent, so maybe that's real.
Anyway, even if the overhead was only 1.5% as in your own benchmark, that
still wouldn't be acceptable. Such a feature is in my opinion very welcome,
but it shouldn't add *any* overhead outside of EXPLAIN (ANALYZE, VERBOSE).
Note that this was done using a "production build" (so with -O2, without
asserts and such). Doing the same on a debug build (and a scale-20 pgbench),
the overhead is about 1.75%, which is closer to your result. What were the
configure options you used for your benchmark?
Also, I don't think it's acceptable to ask every single extension that
currently relies on INSTRUMENT_ALL to be patched to drop some random
INSTRUMENT_XXX flags just to avoid this overhead. So, as I mentioned
previously, I think we should keep INSTRUMENT_ALL to mean something like "all
instrumentation that gives metrics at the statement level", and have
INSTRUMENT_EXTRA be outside of INSTRUMENT_ALL. Maybe this new category should
have a global flag to request all of it, and maybe there should be an
additional alias to grab all categories.
While at it, INSTRUMENT_EXTRA doesn't really seem like a nice name either,
since there's no guarantee that the next person adding a new instrument
option for per-node information will want to combine it with this one. Maybe
INSTRUMENT_MINMAX_LOOPS or something like that?
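To make that concrete, one possible layout (a sketch only; every name besides
the existing flags is a placeholder, not a committed design):

typedef enum InstrumentOption
{
    INSTRUMENT_NONE = 0,
    INSTRUMENT_TIMER = 1 << 0,
    INSTRUMENT_BUFFERS = 1 << 1,
    INSTRUMENT_ROWS = 1 << 2,
    INSTRUMENT_WAL = 1 << 3,
    /* statement-level metrics only: an explicit mask instead of all-bits,
     * so future per-node flags stay out of it */
    INSTRUMENT_ALL = INSTRUMENT_TIMER | INSTRUMENT_BUFFERS |
                     INSTRUMENT_ROWS | INSTRUMENT_WAL,
    /* per-node extras, deliberately outside INSTRUMENT_ALL */
    INSTRUMENT_MINMAX_LOOPS = 1 << 4,
    INSTRUMENT_ALL_EXTRA = INSTRUMENT_MINMAX_LOOPS,
    /* alias grabbing every category */
    INSTRUMENT_EVERYTHING = INSTRUMENT_ALL | INSTRUMENT_ALL_EXTRA
} InstrumentOption;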
From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Greg Stark <stark(at)mit(dot)edu>, Lukas Fittl <lukas(at)fittl(dot)com>, pryzby(at)telsasoft(dot)com
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2022-07-31 03:49:39
Message-ID: 20220731034939.4ctdylvy72bl5ozy@jrouhaud
Lists: pgsql-hackers
On Sat, Jul 30, 2022 at 08:54:33PM +0800, Julien Rouhaud wrote:
>
> It turns out that having pg_stat_statements with INSTRUMENT_EXTRA indirectly
> requested by INSTRUMENT_ALL adds a ~27% overhead.
>
> I'm not sure that I actually believe these results, but they're really
> consistent, so maybe that's real.
>
> Anyway, even if the overhead was only 1.5% as in your own benchmark, that
> still wouldn't be acceptable. Such a feature is in my opinion very welcome,
> but it shouldn't add *any* overhead outside of EXPLAIN (ANALYZE, VERBOSE).
I did the same benchmark this morning, this time trying to stop all
background jobs and anything else on my machine that could interfere with the
results, and using longer and more numerous runs. I now get a reproducible
~1% overhead, which is far more believable. I'm not sure what happened
yesterday, as I got reproducible numbers running the same benchmark twice; I
guess that's the fun of doing performance tests on a development machine.
Anyway, 1% is in my opinion still too much overhead for extensions that won't
get any extra information.
From: Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Julien Rouhaud <rjuju123(at)gmail(dot)com>, Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Greg Stark <stark(at)mit(dot)edu>, Lukas Fittl <lukas(at)fittl(dot)com>, pryzby(at)telsasoft(dot)com
Subject: Re: [PATCH] Add extra statistics to explain for Nested Loop
Date: 2023-09-22 08:14:43
Message-ID: f45c24bb-d60d-48b1-a4db-fbdb94ff8f3c@postgrespro.ru
Lists: pgsql-hackers
On 31/7/2022 10:49, Julien Rouhaud wrote:
> On Sat, Jul 30, 2022 at 08:54:33PM +0800, Julien Rouhaud wrote:
> Anyway, 1% is in my opinion still too much overhead for extensions that won't
> get any extra information.
I have read the whole thread and still can't understand something: what
valuable data can I find with these extra statistics if there is no
parameterized node in the plan?
Also, thinking about min/max time in the explain output, I guess it would be
needed only in rare cases. Usually the execution time will correlate with the
number of tuples scanned, won't it? So maybe skip the time boundaries in the
instrument structure?
In my experience, knowing the total number of tuples bubbled up from a
parameterized node is enough to decide on further optimizations. Maybe
simplify this feature down to a single total_rows field for the case of
nloops > 1 in the presence of parameters?
And finally: if someone wants a lot of additional statistics, why not provide
them through an extension? That would only require adding a hook at the point
where a node is explained, plus some effort to make instrumentation
extensible. But honestly, I don't have code or concrete ideas for that so
far; a rough sketch of such a hook is below.
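Just to sketch the idea (everything below is hypothetical, modeled on the
existing ExplainOneQuery_hook; no such node-level hook exists today):

/* hypothetical declaration, e.g. in commands/explain.h */
typedef void (*ExplainNode_extra_hook_type) (PlanState *planstate,
                                             ExplainState *es);
extern PGDLLIMPORT ExplainNode_extra_hook_type ExplainNode_extra_hook;

/* hypothetical call site at the end of ExplainNode() in explain.c,
 * where an extension could print its own per-node statistics */
if (ExplainNode_extra_hook)
    (*ExplainNode_extra_hook) (planstate, es);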
--
regards,
Andrey Lepikhov
Postgres Professional