From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Parallel Seq Scan |
Date: | 2014-12-04 06:35:18 |
Message-ID: | CAA4eK1KTv73uD9_W5wR4HiMZ_hgi8oseWncxbC53XwZDp-8aEg@mail.gmail.com |
Lists: | pgsql-hackers |
As per the discussion on another thread about using
custom scan nodes for a prototype of parallel sequential scan,
I have developed the same, but directly by adding
new nodes for parallel sequential scan. There might be
some advantages to developing this as a contrib
module using custom scan nodes; however, I think
we might get stuck at some point due to the limitations
of custom scan nodes, as pointed out by Andres.
The basic idea is that while evaluating the cheapest
path for a scan, the optimizer will also evaluate whether it can use
a parallel seq scan path. Currently I have kept a very simple
model to calculate the cost of a parallel seq scan path:
divide the CPU and disk cost by the available number
of worker backends. (We can enhance this based on further
experiments and discussion; we also need to consider the worker
startup and dynamic shared memory setup costs.) The work, i.e. the
scan of blocks, is divided equally among all workers (except for
corner cases where the blocks can't be divided evenly among workers,
in which case the last worker is responsible for scanning the
remaining blocks).
The number of worker backends that can be used for
parallel seq scan can be configured by using a new GUC
parallel_seqscan_degree, the default value of which is zero
and it means parallel seq scan will not be considered unless
user configures this value.
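For illustration only, here is a minimal sketch of this naive costing
rule (standalone C, not the actual costsize.c code; the function and
field names are hypothetical):

    /*
     * Hypothetical sketch of the naive costing described above: divide the
     * CPU and disk cost of a plain seq scan by the number of workers
     * (parallel_seqscan_degree).  A real model would also charge for worker
     * startup and dynamic shared memory setup.  Assumes nworkers >= 1.
     */
    typedef struct ParallelCost
    {
        double startup_cost;
        double run_cost;
    } ParallelCost;

    static ParallelCost
    naive_parallel_seqscan_cost(double seq_cpu_cost, double seq_disk_cost,
                                int nworkers)
    {
        ParallelCost c;

        c.startup_cost = 0.0;   /* startup/DSM setup cost ignored for now */
        c.run_cost = (seq_cpu_cost + seq_disk_cost) / nworkers;
        return c;
    }
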
In the ExecutorStart phase, we initiate the required number of workers
as per the parallel seq scan plan, set up dynamic shared memory, and
share the information required for the workers to execute the scan.
Currently I have just shared the relId, targetlist and the number
of blocks to be scanned by each worker; however, I think we might want
to generate a plan for each of the workers in the master backend and
then share it with the individual workers.
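To make that concrete, a rough sketch of the kind of per-worker state
that could be placed in dynamic shared memory (the structure and field
names are illustrative, not the ones used in the patch):

    /*
     * Illustrative layout of the scan information handed to each worker
     * through dynamic shared memory; all names here are hypothetical.
     */
    typedef unsigned int BlockNum;   /* stands in for PostgreSQL's BlockNumber */

    typedef struct ParallelScanInfo
    {
        unsigned int relid;          /* OID of the relation to scan */
        BlockNum     start_block;    /* first block this worker scans */
        BlockNum     num_blocks;     /* how many blocks it should scan */
        int          natts;          /* number of target-list entries */
        /* a serialized target list would follow in the DSM segment */
    } ParallelScanInfo;
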
Now, to fetch the data from the multiple queues corresponding to the
workers, a simple mechanism is used: fetch from the first queue
until all of its data is consumed, then fetch from the second
queue, and so on. Here the master backend is responsible only for
getting the data from the workers and passing it back to the client.
I am sure we can improve this strategy in many ways,
for example by making the master backend also scan some
of the blocks rather than just collecting data from the workers, or
by using a better strategy to fetch the data from the multiple queues.
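A simplified standalone model of that draining strategy (this is not
the actual shm_mq code; queue_recv and send_to_client are hypothetical
stand-ins for reading one tuple from a worker's queue and forwarding
it to the client):

    /*
     * Sketch of the "drain queue 0 fully, then queue 1, ..." strategy used
     * by the master backend.  queue_recv() returns false once a worker's
     * queue is exhausted.
     */
    #include <stdbool.h>

    typedef struct Tuple Tuple;                  /* opaque tuple placeholder */
    typedef bool (*queue_recv_fn) (int queue_no, Tuple **tup);

    static void
    drain_worker_queues(int nqueues, queue_recv_fn queue_recv,
                        void (*send_to_client) (Tuple *))
    {
        for (int q = 0; q < nqueues; q++)
        {
            Tuple *tup;

            /* consume everything from this queue before moving to the next */
            while (queue_recv(q, &tup))
                send_to_client(tup);
        }
    }
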
A worker backend will receive the scan-related information
from the master backend, generate a plan from it, and
execute that plan, so the work to scan the data after
generating the plan is very similar to exec_simple_query()
(i.e. create the portal and run it based on the planned statement),
except that each worker backend will initialize the block range it
wants to scan in the executor initialization phase (ExecInitSeqScan()).
Workers will exit after sending their data to the master backend,
which essentially means that for each execution we need
to start the workers again. I think we can improve this by giving
control of the workers to the postmaster so that we don't need to
initialize them each time during execution; however, this can
be a totally separate optimization which is better done
independently of this patch.
As we currently don't have a mechanism to share transaction
state, I have used a separate transaction in the worker backends to
execute the plan.
Any error in the master backend, whether reported by a backend worker
or due to some other issue in the master backend itself, should
terminate all the workers before aborting the transaction.
We can't do this with the error context callback mechanism
(error_context_stack) which we use at other places in the code, as
for this case we need it from the time the workers are started until
the execution is complete (error_context_stack can get reset
once control goes out of the function which has set it).
One way could be to maintain the callback information in
TransactionState and use it to kill the workers before aborting the
transaction in the main backend. Another could be to have another
variable similar to error_context_stack (which will be used
specifically for storing the workers' state), and kill the workers
in errfinish via a callback. Currently I have handled it at the time
of detaching from shared memory.
Another point that needs to be taken care of in the worker backend is
that if any error occurs, we should *not* abort the transaction, as
the transaction state is shared across all workers.
Currently parallel seq scan will not be considered
for statements other than SELECT, or if there is a join in
the statement, or if the statement contains quals, or if the target
list contains non-Var fields. We can definitely support
simple quals and target lists containing more than plain Vars. By
simple, I mean that they should not contain functions or other
constructs which can't be pushed down to the worker backends.
The behaviour of some simple statements with the patch is shown below:
postgres=# create table t1(c1 int, c2 char(500)) with (fillfactor=10);
CREATE TABLE
postgres=# insert into t1 values(generate_series(1,100),'amit');
INSERT 0 100
postgres=# explain select c1 from t1;
QUERY PLAN
------------------------------------------------------
Seq Scan on t1 (cost=0.00..101.00 rows=100 width=4)
(1 row)
postgres=# set parallel_seqscan_degree=4;
SET
postgres=# explain select c1 from t1;
QUERY PLAN
--------------------------------------------------------------
Parallel Seq Scan on t1 (cost=0.00..25.25 rows=100 width=4)
Number of Workers: 4
Number of Blocks Per Workers: 25
(3 rows)
postgres=# explain select Distinct(c1) from t1;
QUERY PLAN
--------------------------------------------------------------------
HashAggregate (cost=25.50..26.50 rows=100 width=4)
Group Key: c1
-> Parallel Seq Scan on t1 (cost=0.00..25.25 rows=100 width=4)
Number of Workers: 4
Number of Blocks Per Workers: 25
(5 rows)
The attached patch is just to facilitate discussion about the
parallel seq scan and maybe some other dependent tasks, like
sharing various states (combocid, snapshot, etc.) with parallel
workers. It is by no means ready for any complex testing; of course
I will work towards making it more robust, both in terms of adding
more stuff and doing performance optimizations.
Thoughts/Suggestions?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
parallel_seqscan_v1.patch | application/octet-stream | 80.4 KB |
From: | José Luis Tallón <jltallon(at)adv-solutions(dot)net> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-05 15:08:34 |
Message-ID: | 5481CA72.1070404@adv-solutions.net |
Lists: | pgsql-hackers |
On 12/04/2014 07:35 AM, Amit Kapila wrote:
> [snip]
>
> The number of worker backends that can be used for
> parallel seq scan can be configured by using a new GUC
> parallel_seqscan_degree, the default value of which is zero
> and it means parallel seq scan will not be considered unless
> user configures this value.
The number of parallel workers should be capped (of course!) at the
maximum number of "processors" (cores/vCores, threads/hyperthreads)
available.
Moreover, when load goes up, the relative cost of parallel working
should go up as well.
Something like:
p = number of cores
l = 1min-load
additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(c-1)
(for c>1, of course)
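As a rough illustration of the proposed adjustment (a sketch only; the
names mirror the formula above, and whether the divisor should be the
core count or the worker count is left open, as discussed further down
the thread):

    /*
     * Sketch of the proposed load-aware penalty: scale the per-tuple CPU
     * cost by (load + 1) / (c - 1).  'c' is taken here as the number of
     * cores and 'one_min_load' as the 1-minute load average; both are only
     * illustrative names, not existing GUCs.
     */
    static double
    parallel_load_penalty(double tuple_estimate, double cpu_tuple_cost,
                          double one_min_load, int c)
    {
        if (c <= 1)
            return 0.0;          /* the formula is only defined for c > 1 */
        return tuple_estimate * cpu_tuple_cost * (one_min_load + 1.0) / (c - 1);
    }
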
> In ExecutorStart phase, initiate the required number of workers
> as per parallel seq scan plan and setup dynamic shared memory and
> share the information required for worker to execute the scan.
> Currently I have just shared the relId, targetlist and number
> of blocks to be scanned by worker, however I think we might want
> to generate a plan for each of the workers in master backend and
> then share the same to individual worker.
[snip]
> Attached patch is just to facilitate the discussion about the
> parallel seq scan and may be some other dependent tasks like
> sharing of various states like combocid, snapshot with parallel
> workers. It is by no means ready to do any complex test, ofcourse
> I will work towards making it more robust both in terms of adding
> more stuff and doing performance optimizations.
>
> Thoughts/Suggestions?
Not directly (I haven't had the time to read the code yet), but I'm
thinking about the ability to simply *replace* executor methods from an
extension.
This could be an alternative to providing additional nodes that the
planner can include in the final plan tree, ready to be executed.
The parallel seq scan nodes are definitively the best approach for
"parallel query", since the planner can optimize them based on cost.
I'm wondering about the ability to modify the implementation of some
methods themselves once at execution time: given a previously planned
query, chances are that, at execution time (I'm specifically thinking
about prepared statements here), a different implementation of the same
"node" might be more suitable and could be used instead while the
condition holds.
If this latter line of thinking is too off-topic within this thread and
there is any interest, we can move the comments to another thread and
I'd begin work on a PoC patch. It might as well make sense to implement
the executor overloading mechanism alongside the custom plan API, though.
Any comments appreciated.
Thank you for your work, Amit
Regards,
/ J.L.
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | José Luis Tallón <jltallon(at)adv-solutions(dot)net> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-05 15:13:40 |
Message-ID: | 20141205151340.GO25679@tamriel.snowman.net |
Lists: | pgsql-hackers |
José,
* José Luis Tallón (jltallon(at)adv-solutions(dot)net) wrote:
> On 12/04/2014 07:35 AM, Amit Kapila wrote:
> >The number of worker backends that can be used for
> >parallel seq scan can be configured by using a new GUC
> >parallel_seqscan_degree, the default value of which is zero
> >and it means parallel seq scan will not be considered unless
> >user configures this value.
>
> The number of parallel workers should be capped (of course!) at the
> maximum amount of "processors" (cores/vCores, threads/hyperthreads)
> available.
>
> More over, when load goes up, the relative cost of parallel working
> should go up as well.
> Something like:
> p = number of cores
> l = 1min-load
>
> additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(c-1)
>
> (for c>1, of course)
While I agree in general that we'll need to come up with appropriate
acceptance criteria, etc, I don't think we want to complicate this patch
with that initially. A SUSET GUC which caps the parallel GUC would be
enough for an initial implementation, imv.
> Not directly (I haven't had the time to read the code yet), but I'm
> thinking about the ability to simply *replace* executor methods from
> an extension.
You probably want to look at the CustomScan thread+patch directly then..
Thanks,
Stephen
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-05 15:16:10 |
Message-ID: | 20141205151610.GP25679@tamriel.snowman.net |
Lists: | pgsql-hackers |
Amit,
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> postgres=# explain select c1 from t1;
> QUERY PLAN
> ------------------------------------------------------
> Seq Scan on t1 (cost=0.00..101.00 rows=100 width=4)
> (1 row)
>
>
> postgres=# set parallel_seqscan_degree=4;
> SET
> postgres=# explain select c1 from t1;
> QUERY PLAN
> --------------------------------------------------------------
> Parallel Seq Scan on t1 (cost=0.00..25.25 rows=100 width=4)
> Number of Workers: 4
> Number of Blocks Per Workers: 25
> (3 rows)
This is all great and interesting, but I feel like folks might be
waiting to see just what kind of performance results come from this (and
what kind of hardware is needed to see gains..). There's likely to be
situations where this change is an improvement while also being cases
where it makes things worse.
One really interesting case would be parallel seq scans which are
executing against foreign tables/FDWs..
Thanks!
Stephen
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | José Luis Tallón <jltallon(at)adv-solutions(dot)net>, <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-05 18:57:34 |
Message-ID: | 5482001E.20001@BlueTreble.com |
Lists: | pgsql-hackers |
On 12/5/14, 9:08 AM, José Luis Tallón wrote:
>
> More over, when load goes up, the relative cost of parallel working should go up as well.
> Something like:
> p = number of cores
> l = 1min-load
>
> additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(c-1)
>
> (for c>1, of course)
...
> The parallel seq scan nodes are definitively the best approach for "parallel query", since the planner can optimize them based on cost.
> I'm wondering about the ability to modify the implementation of some methods themselves once at execution time: given a previously planned query, chances are that, at execution time (I'm specifically thinking about prepared statements here), a different implementation of the same "node" might be more suitable and could be used instead while the condition holds.
These comments got me wondering... would it be better to decide on parallelism during execution instead of at plan time? That would allow us to dynamically scale parallelism based on system load. If we don't even consider parallelism until we've pulled some number of tuples/pages from a relation, this would also eliminate all parallel overhead on small relations.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | José Luis Tallón <jltallon(at)adv-solutions(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 05:10:13 |
Message-ID: | CAA4eK1K31AhrswmLHUufRyvgwDjajdKp6MdPWcjJnJkvXSB5xQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Fri, Dec 5, 2014 at 8:38 PM, José Luis Tallón <jltallon(at)adv-solutions(dot)net>
wrote:
>
> On 12/04/2014 07:35 AM, Amit Kapila wrote:
>>
>> [snip]
>>
>> The number of worker backends that can be used for
>> parallel seq scan can be configured by using a new GUC
>> parallel_seqscan_degree, the default value of which is zero
>> and it means parallel seq scan will not be considered unless
>> user configures this value.
>
>
> The number of parallel workers should be capped (of course!) at the
maximum amount of "processors" (cores/vCores, threads/hyperthreads)
available.
>
Also, it should consider the MaxConnections configured by the user.
> More over, when load goes up, the relative cost of parallel working
should go up as well.
> Something like:
> p = number of cores
> l = 1min-load
>
> additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(c-1)
>
> (for c>1, of course)
>
How will you identify the load in the above formula, and what exactly
is 'c' (is it the number of parallel workers involved)?
For now, I have managed this simply by having a configuration
variable, and it seems to me that should be good
enough for a first version. We can definitely enhance it in a future
version by dynamically allocating the number of workers based
on their availability and the needs of the query, but I think let's
leave that for another day.
>
>> In ExecutorStart phase, initiate the required number of workers
>> as per parallel seq scan plan and setup dynamic shared memory and
>> share the information required for worker to execute the scan.
>> Currently I have just shared the relId, targetlist and number
>> of blocks to be scanned by worker, however I think we might want
>> to generate a plan for each of the workers in master backend and
>> then share the same to individual worker.
>
> [snip]
>>
>> Attached patch is just to facilitate the discussion about the
>> parallel seq scan and may be some other dependent tasks like
>> sharing of various states like combocid, snapshot with parallel
>> workers. It is by no means ready to do any complex test, ofcourse
>> I will work towards making it more robust both in terms of adding
>> more stuff and doing performance optimizations.
>>
>> Thoughts/Suggestions?
>
>
> Not directly (I haven't had the time to read the code yet), but I'm
thinking about the ability to simply *replace* executor methods from an
extension.
> This could be an alternative to providing additional nodes that the
planner can include in the final plan tree, ready to be executed.
>
> The parallel seq scan nodes are definitively the best approach for
"parallel query", since the planner can optimize them based on cost.
> I'm wondering about the ability to modify the implementation of some
methods themselves once at execution time: given a previously planned
query, chances are that, at execution time (I'm specifically thinking about
prepared statements here), a different implementation of the same "node"
might be more suitable and could be used instead while the condition holds.
>
The idea sounds interesting, and I think a different implementation of
the same node might well help in some cases, but maybe
at this stage, if we focus on one kind of implementation (which is
a win for a reasonable number of cases) and make it successful,
then doing alternative implementations will be comparatively
easier and have a better chance of success.
> If this latter line of thinking is too off-topic within this thread and
there is any interest, we can move the comments to another thread and I'd
begin work on a PoC patch. It might as well make sense to implement the
executor overloading mechanism alongide the custom plan API, though.
>
Sure, please go ahead whichever way you would like to proceed.
If you want to contribute in this area/patch, then you are
welcome.
> Any comments appreciated.
>
>
> Thank you for your work, Amit
Many thanks to you as well for showing interest.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | David Rowley <dgrowleyml(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 05:13:18 |
Message-ID: | CAApHDvrZG5Q9rNxU4WOga8AgvAwQ83bF83CFvMbOQcCg8vk=Zw@mail.gmail.com |
Lists: | pgsql-hackers |
On 4 December 2014 at 19:35, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> Attached patch is just to facilitate the discussion about the
> parallel seq scan and may be some other dependent tasks like
> sharing of various states like combocid, snapshot with parallel
> workers. It is by no means ready to do any complex test, ofcourse
> I will work towards making it more robust both in terms of adding
> more stuff and doing performance optimizations.
>
> Thoughts/Suggestions?
>
>
This is good news!
I've not gotten to look at the patch yet, but I thought you may be able to
make use of the attached at some point.
It's bare-bones core support for allowing aggregate states to be merged
together with another aggregate state. I would imagine that if a query such
as:
SELECT MAX(value) FROM bigtable;
was run, then a series of parallel workers could go off and each find the
max value from their portion of the table and then perhaps some other node
type would then take all the intermediate results from the workers, once
they're finished, and join all of the aggregate states into one and return
that. Naturally, you'd need to check that all aggregates used in the
targetlist had a merge function first.
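To make the idea concrete, here is a tiny standalone sketch (not code
from the attached patch) of how per-worker partial states for MAX could
be folded together by a final merge step:

    /*
     * Standalone illustration of merging per-worker aggregate states for
     * MAX(value): each worker produces a partial maximum, and the master
     * combines the partial states.  This mirrors the point below that MAX
     * can simply reuse its existing transition logic as its merge function.
     */
    #include <stdbool.h>

    typedef struct MaxState
    {
        bool has_value;          /* false until at least one row was seen */
        int  value;              /* running maximum */
    } MaxState;

    static MaxState
    merge_max_states(MaxState a, MaxState b)
    {
        if (!a.has_value)
            return b;
        if (!b.has_value)
            return a;
        a.value = (a.value > b.value) ? a.value : b.value;
        return a;
    }
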
This is just a few hours of work. I've not really tested the pg_dump
support or anything yet. I've also not added any new functions to allow
AVG() or COUNT() to work, I've really just re-used existing functions where
I could, as things like MAX() and BOOL_OR() can just make use of the
existing transition function. I thought that this might be enough for early
tests.
I'd imagine such a workload, ignoring IO overhead, should scale pretty much
linearly with the number of worker processes. Of course, if there was a
GROUP BY clause then the merger code would have to perform more work.
If you think you might be able to make use of this, then I'm willing to go
off and write all the other merge functions required for the other
aggregates.
Regards
David Rowley
Attachment | Content-Type | Size |
---|---|---|
merge_aggregate_state_v1.patch | application/octet-stream | 47.6 KB |
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 06:22:17 |
Message-ID: | CAA4eK1+mn=OB1xpw8st_9vN9jw0UAkWfmMNCmy1THrVxzKFFvg@mail.gmail.com |
Lists: | pgsql-hackers |
On Fri, Dec 5, 2014 at 8:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
> Amit,
>
> * Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> > postgres=# explain select c1 from t1;
> > QUERY PLAN
> > ------------------------------------------------------
> > Seq Scan on t1 (cost=0.00..101.00 rows=100 width=4)
> > (1 row)
> >
> >
> > postgres=# set parallel_seqscan_degree=4;
> > SET
> > postgres=# explain select c1 from t1;
> > QUERY PLAN
> > --------------------------------------------------------------
> > Parallel Seq Scan on t1 (cost=0.00..25.25 rows=100 width=4)
> > Number of Workers: 4
> > Number of Blocks Per Workers: 25
> > (3 rows)
>
> This is all great and interesting, but I feel like folks might be
> waiting to see just what kind of performance results come from this (and
> what kind of hardware is needed to see gains..).
Initially I was thinking that first we should discuss whether the
design and idea used in the patch are sane, but now that you have
asked, and even Robert has asked the same off-list, I will collect the
performance data next week. (Another reason why I have not
collected any data yet is that the work to push qualifications down
to the workers is still left, which I feel is quite important.)
However, I still think it would be good if I could get some feedback
on some of the basic things below.
1. As the patch currently stands, it just shares the relevant
data (like relid, target list, the block range each worker should
work on, etc.) with the worker; the worker then receives that
data, forms the planned statement which it will execute, and
sends the results back to the master backend. So the question
here is: do you think that is reasonable, or should we try to form
the complete plan for each worker and then share that,
and maybe other information as well, like the range table entries
which are required? My personal gut feeling in this matter
is that in the long term it might be better to form the complete
plan of each worker in the master and share that; however,
I think the current way as done in the patch (okay, it needs
some improvement) is also not bad and quite a bit easier to implement.
2. The next question, related to the above, is what the
output of EXPLAIN should be. As each worker is currently responsible
for forming its own plan, EXPLAIN is not able to show
the detailed plan for each worker; is that okay?
3. Some places where optimizations are possible:
- Currently, after getting a tuple from the heap, it is deformed by the
worker and sent via the message queue to the master backend; the master
backend then forms the tuple and sends it to the upper layer, which
deforms it again via slot_getallattrs(slot) before sending it to the
frontend.
- The master backend currently receives the data from multiple workers
serially. We can optimize this so that it checks the other queues
if there is no data in the current queue.
- The master backend is just responsible for coordination among the
workers: it shares the required information with the workers and then
fetches the data processed by each worker. With some more logic, we
might be able to make the master backend also fetch data from the heap
rather than doing just coordination among the workers.
I think we can do some optimisation in all of the above places;
however, we can do that later as well, unless it hits performance
badly for the cases which people care about most.
4. Should the parallel_seqscan_degree value be dependent on other
backend process settings like MaxConnections, max_worker_processes,
and autovacuum_max_workers, or should it be independent like
max_wal_senders?
I think it is better to keep it dependent on the other backend process
settings; however, for simplicity, I have kept it similar to
max_wal_senders for now.
> There's likely to be
> situations where this change is an improvement while also being cases
> where it makes things worse.
Agreed, and I think that will become clearer after doing some
performance tests.
> One really interesting case would be parallel seq scans which are
> executing against foreign tables/FDWs..
>
Sure.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | José Luis Tallón <jltallon(at)adv-solutions(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 06:41:53 |
Message-ID: | CAA4eK1Lr6JxwfBufaJSuHm1PpYYE9oM-U0e1tpk7itmmowh+zA@mail.gmail.com |
Lists: | pgsql-hackers |
On Fri, Dec 5, 2014 at 8:43 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
> José,
>
> * José Luis Tallón (jltallon(at)adv-solutions(dot)net) wrote:
> > On 12/04/2014 07:35 AM, Amit Kapila wrote:
> > >The number of worker backends that can be used for
> > >parallel seq scan can be configured by using a new GUC
> > >parallel_seqscan_degree, the default value of which is zero
> > >and it means parallel seq scan will not be considered unless
> > >user configures this value.
> >
> > The number of parallel workers should be capped (of course!) at the
> > maximum amount of "processors" (cores/vCores, threads/hyperthreads)
> > available.
> >
> > More over, when load goes up, the relative cost of parallel working
> > should go up as well.
> > Something like:
> > p = number of cores
> > l = 1min-load
> >
> > additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(c-1)
> >
> > (for c>1, of course)
>
> While I agree in general that we'll need to come up with appropriate
> acceptance criteria, etc, I don't think we want to complicate this patch
> with that initially.
>
>A SUSET GUC which caps the parallel GUC would be
> enough for an initial implementation, imv.
>
This is exactly what I have done in the patch.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> |
Cc: | José Luis Tallón <jltallon(at)adv-solutions(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 06:50:15 |
Message-ID: | CAA4eK1LbimDkcZUkoJqi3yeWCdosccOyWA96_XOCJwUQvJnX-w@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Dec 6, 2014 at 12:27 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
> On 12/5/14, 9:08 AM, José Luis Tallón wrote:
>>
>>
>> More over, when load goes up, the relative cost of parallel working
should go up as well.
>> Something like:
>> p = number of cores
>> l = 1min-load
>>
>> additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(c-1)
>>
>> (for c>1, of course)
>
>
> ...
>
>> The parallel seq scan nodes are definitively the best approach for
"parallel query", since the planner can optimize them based on cost.
>> I'm wondering about the ability to modify the implementation of some
methods themselves once at execution time: given a previously planned
query, chances are that, at execution time (I'm specifically thinking about
prepared statements here), a different implementation of the same "node"
might be more suitable and could be used instead while the condition holds.
>
>
> These comments got me wondering... would it be better to decide on
parallelism during execution instead of at plan time? That would allow us
to dynamically scale parallelism based on system load. If we don't even
consider parallelism until we've pulled some number of tuples/pages from a
relation,
>
>this would also eliminate all parallel overhead on small relations.
> --
I think we have access to this information in the planner
(RelOptInfo->pages); if we want, we can use that to exclude small
relations from parallelism. But the question is how big a relation we
want to consider for parallelism. One way is to check via tests, which
I am planning to follow; do you think we have any heuristic which we
can use to decide how big a relation should be to be considered for
parallelism?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | David Rowley <dgrowleyml(at)gmail(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 07:06:17 |
Message-ID: | CAA4eK1JfOcyEpdg_-Q+x9hVhVrsj85F74NNq_ns9hSOYN9eWLA@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Dec 6, 2014 at 10:43 AM, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> On 4 December 2014 at 19:35, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>> Attached patch is just to facilitate the discussion about the
>> parallel seq scan and may be some other dependent tasks like
>> sharing of various states like combocid, snapshot with parallel
>> workers. It is by no means ready to do any complex test, ofcourse
>> I will work towards making it more robust both in terms of adding
>> more stuff and doing performance optimizations.
>>
>> Thoughts/Suggestions?
>>
>
> This is good news!
Thanks.
> I've not gotten to look at the patch yet, but I thought you may be able
to make use of the attached at some point.
>
I also think so; it can be used in the near future to enhance
and provide more value to the parallel scan feature. Thanks
for taking the initiative to do the leg-work for supporting
aggregates.
> It's bare-bones core support for allowing aggregate states to be merged
together with another aggregate state. I would imagine that if a query such
as:
>
> SELECT MAX(value) FROM bigtable;
>
> was run, then a series of parallel workers could go off and each find the
max value from their portion of the table and then perhaps some other node
type would then take all the intermediate results from the workers, once
they're finished, and join all of the aggregate states into one and return
that. Naturally, you'd need to check that all aggregates used in the
targetlist had a merge function first.
>
Direction sounds to be right.
> This is just a few hours of work. I've not really tested the pg_dump
support or anything yet. I've also not added any new functions to allow
AVG() or COUNT() to work, I've really just re-used existing functions where
I could, as things like MAX() and BOOL_OR() can just make use of the
existing transition function. I thought that this might be enough for early
tests.
>
> I'd imagine such a workload, ignoring IO overhead, should scale pretty
much linearly with the number of worker processes. Of course, if there was
a GROUP BY clause then the merger code would have to perform more work.
>
Agreed.
> If you think you might be able to make use of this, then I'm willing to
go off and write all the other merge functions required for the other
aggregates.
>
Don't you think that first we should stabilize the basic parallel scan
(with a target list and quals that can be independently evaluated by
workers) and then jump to such enhancements?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-06 12:07:17 |
Message-ID: | 20141206120717.GZ25679@tamriel.snowman.net |
Lists: | pgsql-hackers |
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> 1. As the patch currently stands, it just shares the relevant
> data (like relid, target list, block range each worker should
> perform on etc.) to the worker and then worker receives that
> data and form the planned statement which it will execute and
> send the results back to master backend. So the question
> here is do you think it is reasonable or should we try to form
> the complete plan for each worker and then share the same
> and may be other information as well like range table entries
> which are required. My personal gut feeling in this matter
> is that for long term it might be better to form the complete
> plan of each worker in master and share the same, however
> I think the current way as done in patch (okay that needs
> some improvement) is also not bad and quite easier to implement.
For my 2c, I'd like to see it support exactly what the SeqScan node
supports and then also what Foreign Scan supports. That would mean we'd
then be able to push filtering down to the workers which would be great.
Even better would be figuring out how to parallelize an Append node
(perhaps only possible when the nodes underneath are all SeqScan or
ForeignScan nodes) since that would allow us to then parallelize the
work across multiple tables and remote servers.
One of the big reasons why I was asking about performance data is that,
today, we can't easily split a single relation across multiple i/o
channels. Sure, we can use RAID and get the i/o channel that the table
sits on faster than a single disk and possibly fast enough that a single
CPU can't keep up, but that's not quite the same. The historical
recommendations for Hadoop nodes is around one CPU per drive (of course,
it'll depend on workload, etc, etc, but still) and while there's still a
lot of testing, etc, to be done before we can be sure about the 'right'
answer for PG (and it'll also vary based on workload, etc), that strikes
me as a pretty reasonable rule-of-thumb to go on.
Of course, I'm aware that this won't be as easy to implement..
> 2. Next question related to above is what should be the
> output of ExplainPlan, as currently worker is responsible
> for forming its own plan, Explain Plan is not able to show
> the detailed plan for each worker, is that okay?
I'm not entirely following this. How can the worker be responsible for
its own "plan" when the information passed to it (per the above
paragraph..) is pretty minimal? In general, I don't think we need to
have specifics like "this worker is going to do exactly X" because we
will eventually need some communication to happen between the worker and
the master process where the worker can ask for more work because it's
finished what it was tasked with and the master will need to give it
another chunk of work to do. I don't think we want exactly what each
worker process will do to be fully formed at the outset because, even
with the best information available, given concurrent load on the
system, it's not going to be perfect and we'll end up starving workers.
The plan, as formed by the master, should be more along the lines of
"this is what I'm gonna have my workers do" along w/ how many workers,
etc, and then it goes and does it. Perhaps for an 'explain analyze' we
return information about what workers actually *did* what, but that's a
whole different discussion.
> 3. Some places where optimizations are possible:
> - Currently after getting the tuple from heap, it is deformed by
> worker and sent via message queue to master backend, master
> backend then forms the tuple and send it to upper layer which
> before sending it to frontend again deforms it via slot_getallattrs(slot).
If this is done as I was proposing above, we might be able to avoid
this, but I don't know that it's a huge issue either way.. The bigger
issue is getting the filtering pushed down.
> - Master backend currently receives the data from multiple workers
> serially. We can optimize in a way that it can check other queues,
> if there is no data in current queue.
Yes, this is pretty critical. In fact, it's one of the recommendations
I made previously about how to change the Append node to parallelize
Foreign Scan node work.
> - Master backend is just responsible for coordination among workers
> It shares the required information to workers and then fetch the
> data processed by each worker, by using some more logic, we might
> be able to make master backend also fetch data from heap rather than
> doing just co-ordination among workers.
I don't think this is really necessary...
> I think in all above places we can do some optimisation, however
> we can do that later as well, unless they hit the performance badly for
> cases which people care most.
I agree that we can improve the performance through various
optimizations later, but it's important to get the general structure and
design right or we'll end up having to reimplement a lot of it.
> 4. Should parallel_seqscan_degree value be dependent on other
> backend processes like MaxConnections, max_worker_processes,
> autovacuum_max_workers do or should it be independent like
> max_wal_senders?
Well, we're not going to be able to spin off more workers than we have
process slots, but I'm not sure we need anything more than that? In any
case, this is definitely an area we can work on improving later and I
don't think it really impacts the rest of the design.
Thanks,
Stephen
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-08 05:10:09 |
Message-ID: | CAA4eK1KAOUVM-GqroDfzDCfbEn_RzKDLzF+UPkV2tHGAw1wrDQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
> * Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> > 1. As the patch currently stands, it just shares the relevant
> > data (like relid, target list, block range each worker should
> > perform on etc.) to the worker and then worker receives that
> > data and form the planned statement which it will execute and
> > send the results back to master backend. So the question
> > here is do you think it is reasonable or should we try to form
> > the complete plan for each worker and then share the same
> > and may be other information as well like range table entries
> > which are required. My personal gut feeling in this matter
> > is that for long term it might be better to form the complete
> > plan of each worker in master and share the same, however
> > I think the current way as done in patch (okay that needs
> > some improvement) is also not bad and quite easier to implement.
>
> For my 2c, I'd like to see it support exactly what the SeqScan node
> supports and then also what Foreign Scan supports. That would mean we'd
> then be able to push filtering down to the workers which would be great.
> Even better would be figuring out how to parallelize an Append node
> (perhaps only possible when the nodes underneath are all SeqScan or
> ForeignScan nodes) since that would allow us to then parallelize the
> work across multiple tables and remote servers.
>
> One of the big reasons why I was asking about performance data is that,
> today, we can't easily split a single relation across multiple i/o
> channels. Sure, we can use RAID and get the i/o channel that the table
> sits on faster than a single disk and possibly fast enough that a single
> CPU can't keep up, but that's not quite the same. The historical
> recommendations for Hadoop nodes is around one CPU per drive (of course,
> it'll depend on workload, etc, etc, but still) and while there's still a
> lot of testing, etc, to be done before we can be sure about the 'right'
> answer for PG (and it'll also vary based on workload, etc), that strikes
> me as a pretty reasonable rule-of-thumb to go on.
>
> Of course, I'm aware that this won't be as easy to implement..
>
> > 2. Next question related to above is what should be the
> > output of ExplainPlan, as currently worker is responsible
> > for forming its own plan, Explain Plan is not able to show
> > the detailed plan for each worker, is that okay?
>
> I'm not entirely following this. How can the worker be responsible for
> its own "plan" when the information passed to it (per the above
> paragraph..) is pretty minimal?
Because for a simple sequential scan that much information is
sufficient: basically, if we have the scanrelid, target list, quals
and the RTE (primarily the relation OID), then the worker can form and
perform the sequential scan.
> In general, I don't think we need to
> have specifics like "this worker is going to do exactly X" because we
> will eventually need some communication to happen between the worker and
> the master process where the worker can ask for more work because it's
> finished what it was tasked with and the master will need to give it
> another chunk of work to do. I don't think we want exactly what each
> worker process will do to be fully formed at the outset because, even
> with the best information available, given concurrent load on the
> system, it's not going to be perfect and we'll end up starving workers.
> The plan, as formed by the master, should be more along the lines of
> "this is what I'm gonna have my workers do" along w/ how many workers,
> etc, and then it goes and does it.
I think what you are saying here is that work allocation for the
workers should be dynamic rather than fixed, which I think makes
sense; however, we can try such an optimization after some initial
performance data.
> Perhaps for an 'explain analyze' we
> return information about what workers actually *did* what, but that's a
> whole different discussion.
>
Agreed.
> > 3. Some places where optimizations are possible:
> > - Currently after getting the tuple from heap, it is deformed by
> > worker and sent via message queue to master backend, master
> > backend then forms the tuple and send it to upper layer which
> > before sending it to frontend again deforms it via
slot_getallattrs(slot).
>
> If this is done as I was proposing above, we might be able to avoid
> this, but I don't know that it's a huge issue either way.. The bigger
> issue is getting the filtering pushed down.
>
> > - Master backend currently receives the data from multiple workers
> > serially. We can optimize in a way that it can check other queues,
> > if there is no data in current queue.
>
> Yes, this is pretty critical. In fact, it's one of the recommendations
> I made previously about how to change the Append node to parallelize
> Foreign Scan node work.
>
> > - Master backend is just responsible for coordination among workers
> > It shares the required information to workers and then fetch the
> > data processed by each worker, by using some more logic, we might
> > be able to make master backend also fetch data from heap rather than
> > doing just co-ordination among workers.
>
> I don't think this is really necessary...
>
> > I think in all above places we can do some optimisation, however
> > we can do that later as well, unless they hit the performance badly for
> > cases which people care most.
>
> I agree that we can improve the performance through various
> optimizations later, but it's important to get the general structure and
> design right or we'll end up having to reimplement a lot of it.
>
So to summarize my understanding, below is the set of things
I should work on, in the order listed.
1. Push down qualification
2. Performance data
3. Improve the way the information related to the worker is pushed down.
4. Dynamic allocation of work to workers.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | David Rowley <dgrowleyml(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-08 17:46:12 |
Message-ID: | CA+TgmoaM+Vr_P6nkj+hRb8VxRDUsL6Ch-aXE5q4Z3ZNJtxAJdg@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Dec 6, 2014 at 12:13 AM, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> It's bare-bones core support for allowing aggregate states to be merged
> together with another aggregate state. I would imagine that if a query such
> as:
>
> SELECT MAX(value) FROM bigtable;
>
> was run, then a series of parallel workers could go off and each find the
> max value from their portion of the table and then perhaps some other node
> type would then take all the intermediate results from the workers, once
> they're finished, and join all of the aggregate states into one and return
> that. Naturally, you'd need to check that all aggregates used in the
> targetlist had a merge function first.
I think this is great infrastructure and could also be useful for
pushing down aggregates in cases involving foreign data wrappers. But
I suggest we discuss it on a separate thread because it's not related
to parallel seq scan per se.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, José Luis Tallón <jltallon(at)adv-solutions(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-08 17:51:33 |
Message-ID: | CA+TgmobkqnWh=GwjpBtyoP0U9MKMr2TAge-ST4yoEkN=ZqH6WA@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Dec 6, 2014 at 1:50 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> I think we have access to this information in planner (RelOptInfo -> pages),
> if we want, we can use that to eliminate the small relations from
> parallelism, but question is how big relations do we want to consider
> for parallelism, one way is to check via tests which I am planning to
> follow, do you think we have any heuristic which we can use to decide
> how big relations should be consider for parallelism?
Surely the Path machinery needs to decide this in particular cases
based on cost. We should assign some cost to starting a parallel
worker via some new GUC, like parallel_startup_cost = 100,000. And
then we should also assign a cost to the act of relaying a tuple from
the parallel worker to the master, maybe cpu_tuple_cost (or some new
GUC). For a small relation, or a query with a LIMIT clause, the
parallel startup cost will make starting a lot of workers look
unattractive, but for bigger relations it will make sense from a cost
perspective, which is exactly what we want.
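A sketch of how those two knobs might enter the cost comparison
(parallel_startup_cost is the proposed, not an existing, GUC, and
tuple_relay_cost stands in for either cpu_tuple_cost or a new
parameter; the exact shape of the formula is an assumption for
illustration):

    /*
     * Illustrative total cost for a parallel seq scan: a fixed startup
     * charge per worker plus a per-tuple charge for relaying tuples to the
     * master, on top of the scan work split across workers.  All names and
     * the per-worker treatment of the startup cost are assumptions.
     */
    static double
    parallel_seqscan_total_cost(double seq_run_cost, double ntuples,
                                int nworkers,
                                double parallel_startup_cost,
                                double tuple_relay_cost)
    {
        double startup = nworkers * parallel_startup_cost;
        double run = seq_run_cost / nworkers        /* work split across workers */
                   + ntuples * tuple_relay_cost;    /* relaying tuples to master */

        return startup + run;
    }
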
There are probably other important considerations based on goals for
overall resource utilization, and also because at a certain point
adding more workers won't help because the disk will be saturated. I
don't know exactly what we should do about those issues yet, but the
steps described in the previous paragraph seem like a good place to
start anyway.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-08 17:57:38 |
Message-ID: | CA+TgmoYWK4ePQdNFZY2PU-w=SypyxnnpYx6_B+48O2jQ4QhZAA@mail.gmail.com |
Lists: | pgsql-hackers |
On Sat, Dec 6, 2014 at 7:07 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> For my 2c, I'd like to see it support exactly what the SeqScan node
> supports and then also what Foreign Scan supports. That would mean we'd
> then be able to push filtering down to the workers which would be great.
> Even better would be figuring out how to parallelize an Append node
> (perhaps only possible when the nodes underneath are all SeqScan or
> ForeignScan nodes) since that would allow us to then parallelize the
> work across multiple tables and remote servers.
I don't see how we can support the stuff ForeignScan does; presumably
any parallelism there is up to the FDW to implement, using whatever
in-core tools we provide. I do agree that parallelizing Append nodes
is useful; but let's get one thing done first before we start trying
to do thing #2.
> I'm not entirely following this. How can the worker be responsible for
> its own "plan" when the information passed to it (per the above
> paragraph..) is pretty minimal? In general, I don't think we need to
> have specifics like "this worker is going to do exactly X" because we
> will eventually need some communication to happen between the worker and
> the master process where the worker can ask for more work because it's
> finished what it was tasked with and the master will need to give it
> another chunk of work to do. I don't think we want exactly what each
> worker process will do to be fully formed at the outset because, even
> with the best information available, given concurrent load on the
> system, it's not going to be perfect and we'll end up starving workers.
> The plan, as formed by the master, should be more along the lines of
> "this is what I'm gonna have my workers do" along w/ how many workers,
> etc, and then it goes and does it. Perhaps for an 'explain analyze' we
> return information about what workers actually *did* what, but that's a
> whole different discussion.
I agree with this. For a first version, I think it's OK to start a
worker up for a particular sequential scan and have it help with that
sequential scan until the scan is completed, and then exit. It should
not, as the present version of the patch does, assign a fixed block
range to each worker; instead, workers should allocate a block or
chunk of blocks to work on until no blocks remain. That way, even if
every worker but one gets stuck, the rest of the scan can still
finish.
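A minimal standalone model of that block-at-a-time (or chunk-at-a-time)
allocation, using a shared atomic counter; this only sketches the idea
and is not the patch's implementation:

    /*
     * Sketch of dynamic chunk allocation: workers repeatedly claim the next
     * chunk of blocks from a shared counter until the relation is
     * exhausted, instead of being handed a fixed block range up front.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct ScanChunkState
    {
        atomic_uint  next_block;    /* next unclaimed block */
        unsigned int nblocks;       /* total blocks in the relation */
        unsigned int chunk_size;    /* blocks claimed per request */
    } ScanChunkState;

    /* Returns false when no blocks remain; otherwise fills [start, end). */
    static bool
    claim_next_chunk(ScanChunkState *st, unsigned int *start, unsigned int *end)
    {
        unsigned int first = atomic_fetch_add(&st->next_block, st->chunk_size);

        if (first >= st->nblocks)
            return false;
        *start = first;
        *end = first + st->chunk_size;
        if (*end > st->nblocks)
            *end = st->nblocks;
        return true;
    }
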
Eventually, we will want to be smarter about sharing workers between
multiple parts of the plan, but I think it is just fine to leave that
as a future enhancement for now.
>> - Master backend is just responsible for coordination among workers
>> It shares the required information to workers and then fetch the
>> data processed by each worker, by using some more logic, we might
>> be able to make master backend also fetch data from heap rather than
>> doing just co-ordination among workers.
>
> I don't think this is really necessary...
I think it would be an awfully good idea to make this work. The
master thread may be significantly faster than any of the others
because it has no IPC costs. We don't want to leave our best resource
sitting on the bench.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, José Luis Tallón <jltallon(at)adv-solutions(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-09 05:34:19 |
Message-ID: | CAA4eK1J=X=0gxD8-Zn2Z-hVzBE035F64X3b1G2FQaruUOOoVFw@mail.gmail.com |
Lists: | pgsql-hackers |
On Mon, Dec 8, 2014 at 11:21 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sat, Dec 6, 2014 at 1:50 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> > I think we have access to this information in planner (RelOptInfo ->
pages),
> > if we want, we can use that to eliminate the small relations from
> > parallelism, but question is how big relations do we want to consider
> > for parallelism, one way is to check via tests which I am planning to
> > follow, do you think we have any heuristic which we can use to decide
> > how big relations should be consider for parallelism?
>
> Surely the Path machinery needs to decide this in particular cases
> based on cost. We should assign some cost to starting a parallel
> worker via some new GUC, like parallel_startup_cost = 100,000. And
> then we should also assign a cost to the act of relaying a tuple from
> the parallel worker to the master, maybe cpu_tuple_cost (or some new
> GUC). For a small relation, or a query with a LIMIT clause, the
> parallel startup cost will make starting a lot of workers look
> unattractive, but for bigger relations it will make sense from a cost
> perspective, which is exactly what we want.
>
Sounds sensible. cpu_tuple_cost is already used for another
purpose, so I am not sure it is the right thing to overload that
parameter; how about cpu_tuple_communication_cost or
cpu_tuple_comm_cost?
> There are probably other important considerations based on goals for
> overall resource utilization, and also because at a certain point
> adding more workers won't help because the disk will be saturated. I
> don't know exactly what we should do about those issues yet, but the
> steps described in the previous paragraph seem like a good place to
> start anyway.
>
Agreed.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-09 05:46:55 |
Message-ID: | CAA4eK1K=P4cqpv_CU-0DNd93wv1KBKgvJ5rds3ZCDsQRpJPyrA@mail.gmail.com |
Lists: | pgsql-hackers |
On Mon, Dec 8, 2014 at 11:27 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sat, Dec 6, 2014 at 7:07 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > For my 2c, I'd like to see it support exactly what the SeqScan node
> > supports and then also what Foreign Scan supports. That would mean we'd
> > then be able to push filtering down to the workers which would be great.
> > Even better would be figuring out how to parallelize an Append node
> > (perhaps only possible when the nodes underneath are all SeqScan or
> > ForeignScan nodes) since that would allow us to then parallelize the
> > work across multiple tables and remote servers.
>
> I don't see how we can support the stuff ForeignScan does; presumably
> any parallelism there is up to the FDW to implement, using whatever
> in-core tools we provide. I do agree that parallelizing Append nodes
> is useful; but let's get one thing done first before we start trying
> to do thing #2.
>
> > I'm not entirely following this. How can the worker be responsible for
> > its own "plan" when the information passed to it (per the above
> > paragraph..) is pretty minimal? In general, I don't think we need to
> > have specifics like "this worker is going to do exactly X" because we
> > will eventually need some communication to happen between the worker and
> > the master process where the worker can ask for more work because it's
> > finished what it was tasked with and the master will need to give it
> > another chunk of work to do. I don't think we want exactly what each
> > worker process will do to be fully formed at the outset because, even
> > with the best information available, given concurrent load on the
> > system, it's not going to be perfect and we'll end up starving workers.
> > The plan, as formed by the master, should be more along the lines of
> > "this is what I'm gonna have my workers do" along w/ how many workers,
> > etc, and then it goes and does it. Perhaps for an 'explain analyze' we
> > return information about what workers actually *did* what, but that's a
> > whole different discussion.
>
> I agree with this. For a first version, I think it's OK to start a
> worker up for a particular sequential scan and have it help with that
> sequential scan until the scan is completed, and then exit. It should
> not, as the present version of the patch does, assign a fixed block
> range to each worker; instead, workers should allocate a block or
> chunk of blocks to work on until no blocks remain. That way, even if
> every worker but one gets stuck, the rest of the scan can still
> finish.
>
I will check on this point and see if it is feasible to do something along
those lines. Basically, currently at the Executor initialization phase we
set the scan limits, and then during the Executor run phase we use
heap_getnext to fetch the tuples accordingly; doing it dynamically
means that at the ExecutorRun phase we need to reset the scan limit for
which page/pages to scan. I still have to check if there is any problem
with such an idea. Do you have any different idea in mind?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-09 14:45:04 |
Message-ID: | CA+TgmoZ-41y6ZXv0C8txBRCMn_BSDi27q69-nuS0K1PGt98QWA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Dec 9, 2014 at 12:46 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> I agree with this. For a first version, I think it's OK to start a
>> worker up for a particular sequential scan and have it help with that
>> sequential scan until the scan is completed, and then exit. It should
>> not, as the present version of the patch does, assign a fixed block
>> range to each worker; instead, workers should allocate a block or
>> chunk of blocks to work on until no blocks remain. That way, even if
>> every worker but one gets stuck, the rest of the scan can still
>> finish.
>>
> I will check on this point and see if it is feasible to do something along
> those lines. Basically, currently at the Executor initialization phase we
> set the scan limits, and then during the Executor run phase we use
> heap_getnext to fetch the tuples accordingly; doing it dynamically
> means that at the ExecutorRun phase we need to reset the scan limit for
> which page/pages to scan. I still have to check if there is any problem
> with such an idea. Do you have any different idea in mind?
Hmm. Well, it looks like there are basically two choices: you can
either (as you propose) deal with this above the level of the
heap_beginscan/heap_getnext API by scanning one or a few pages at a
time and then resetting the scan to a new starting page via
heap_setscanlimits; or alternatively, you can add a callback to
HeapScanDescData that, if non-NULL, will be invoked to get the next
block number to scan. I'm not entirely sure which is better.
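To sketch the second option (only HeapScanDescData, the atomics API and
InvalidBlockNumber exist today; the callback typedef, the shared state and
the field name are hypothetical):

/*
 * Illustrative sketch only: a block-allocation callback that hands out
 * one block at a time from a counter kept in dynamic shared memory.  A
 * field like rs_nextblock_cb in HeapScanDescData would invoke this
 * instead of advancing rs_cblock itself.
 */
#include "postgres.h"
#include "port/atomics.h"
#include "storage/block.h"

typedef BlockNumber (*NextBlockCallback) (void *arg);

typedef struct ParallelScanShared
{
    pg_atomic_uint32 next_block;    /* next block to hand out */
    BlockNumber      nblocks;       /* total blocks in the relation */
} ParallelScanShared;

static BlockNumber
parallel_next_block(void *arg)
{
    ParallelScanShared *shared = (ParallelScanShared *) arg;
    uint32      blkno = pg_atomic_fetch_add_u32(&shared->next_block, 1);

    /* InvalidBlockNumber tells the caller the scan is finished. */
    return (blkno < shared->nblocks) ? (BlockNumber) blkno : InvalidBlockNumber;
}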
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-18 15:52:26 |
Message-ID: | CAA4eK1+O2fZe88NpxmMne94bwBKUEcx-2yuAe-MTYpbuoJ0NCw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>
> On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >
>
> So to summarize my understanding, below are the set of things
> which I should work on and in the order they are listed.
>
> 1. Push down qualification
> 2. Performance Data
> 3. Improve the way to push down the information related to worker.
> 4. Dynamic allocation of work for workers.
>
>
I have worked on the patch to accomplish above mentioned points
1, 2 and partly 3 and would like to share the progress with community.
If the statement contains quals that don't have volatile functions, then
they will be pushed down and the parallel scan will be considered for
cost evaluation. I think eventually we might need some better way
to decide which kinds of functions are okay to push down.
I have also unified the way information is passed from the master backend
to worker backends: each node that has to be passed is converted to a
string, and the workers later convert the string back to a node. This has
simplified the related code.
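For reference, the check and the serialization are roughly of this shape
(contain_volatile_functions(), nodeToString() and stringToNode() are the
existing primitives; the two wrapper functions here are only illustrative):

/*
 * Illustrative sketch only: decide whether quals can be pushed to workers
 * and flatten them for the shared-memory handoff.
 */
#include "postgres.h"
#include "nodes/nodes.h"
#include "nodes/pg_list.h"
#include "optimizer/clauses.h"

static bool
quals_are_pushdown_safe(List *quals)
{
    /* Volatile functions must not be evaluated in a worker backend. */
    return !contain_volatile_functions((Node *) quals);
}

static char *
serialize_node_for_worker(Node *node)
{
    /* The worker reconstructs the node tree with stringToNode(). */
    return nodeToString(node);
}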
I have taken performance data for different selectivities and complexities
of qual expressions. I understand that there will be other kinds of scenarios
which we need to consider; however, I think the current set of tests is a good
place to start. Please feel free to comment on the kinds of scenarios you
want me to check.
Performance Data
------------------------------
*m/c details*
IBM POWER-8 24 cores, 192 hardware threads
RAM = 492GB
*non-default settings in postgresql.conf*
max_connections=300
shared_buffers = 8GB
checkpoint_segments = 300
checkpoint_timeout = 30min
max_worker_processes=100
create table tbl_perf(c1 int, c2 char(1000));
30 million rows
------------------------
insert into tbl_perf values(generate_series(1,10000000),'aaaaa');
insert into tbl_perf values(generate_series(10000000,30000000),'aaaaa');
Function used in quals
-----------------------------------
A simple function which performs some calculation and returns
the value passed, so that it can be used in a qual condition.
create or replace function calc_factorial(a integer, fact_val integer)
returns integer
as $$
begin
perform (fact_val)!;
return a;
end;
$$ language plpgsql STABLE;
In the data below,
num_workers - number of parallel workers configured using
parallel_seqscan_degree; 0 means a plain sequential scan and a value
greater than 0 means a parallel sequential scan.
exec_time - execution time reported by EXPLAIN ANALYZE.
*Tests having quals containing function evaluation in qual expressions.*
*Test-1*
*Query -* Explain analyze select c1 from tbl_perf where
c1 > calc_factorial(29700000,10) and c2 like '%aa%';
*Selection_criteria – *1% of rows will be selected
num_workers   exec_time (ms)
0             229534
2             121741
4             67051
8             35607
16            24743
*Test-2*
*Query - *Explain analyze select c1 from tbl_perf where
c1 > calc_factorial(27000000,10) and c2 like '%aa%';
*Selection_criteria – *10% of rows will be selected
num_workers   exec_time (ms)
0             226671
2             151587
4             93648
8             70540
16            55466
*Test-3*
*Query -* Explain analyze select c1 from tbl_perf
where c1 > calc_factorial(22500000,10) and c2 like '%aa%';
*Selection_criteria –* 25% of rows will be selected
num_workers   exec_time (ms)
0             232673
2             197609
4             142686
8             111664
16            98097
*Tests having quals containing simple expressions in qual.*
*Test-4*
*Query - *Explain analyze select c1 from tbl_perf
where c1 > 29700000 and c2 like '%aa%';
*Selection_criteria –* 1% of rows will be selected
num_workers   exec_time (ms)
0             15505
2             9155
4             6030
8             4523
16            4459
32            8259
64            13388
*Test-5*
*Query - *Explain analyze select c1 from tbl_perf
where c1 > 28500000 and c2 like '%aa%';
*Selection_criteria –* 5% of rows will be selected
num_workers   exec_time (ms)
0             18906
2             13446
4             8970
8             7887
16            10403
*Test-6*
*Query -* Explain analyze select c1 from tbl_perf
where c1 > 27000000 and c2 like '%aa%';
*Selection_criteria – *10% of rows will be selected
num_workers   exec_time (ms)
0             16132
2             23780
4             20275
8             11390
16            11418
Conclusion
------------------
1. Parallel workers help a lot when there is an expensive qualification
to evaluate; the more expensive the qualification, the better the
results.
2. It works well for low selectivity quals, and as the selectivity increases,
the benefit tends to go down due to the additional tuple communication cost
between workers and the master backend.
3. After a certain point, having more workers won't help and rather
has a negative impact; refer to Test-4.
I think, as discussed previously, we need to introduce 2 additional cost
variables (parallel_startup_cost, cpu_tuple_communication_cost) to
estimate the parallel seq scan cost so that when the tables are small
or selectivity is high, it increases the cost of the parallel plan.
Thoughts and feedback for the current state of patch is welcome.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-18 16:03:34 |
Message-ID: | CAA4eK1KHWha5_NqxFZFBZ=ZFkkBSwf+2Z6htJFV+YVY_LW9cQA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Dec 18, 2014 at 9:22 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>
> On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> >
> > On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost <sfrost(at)snowman(dot)net>
wrote:
> > >
> >
> > So to summarize my understanding, below are the set of things
> > which I should work on and in the order they are listed.
> >
> > 1. Push down qualification
> > 2. Performance Data
> > 3. Improve the way to push down the information related to worker.
> > 4. Dynamic allocation of work for workers.
> >
> >
>
> I have worked on the patch to accomplish above mentioned points
> 1, 2 and partly 3 and would like to share the progress with community.
Sorry forgot to attach updated patch in last mail, attaching it now.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
parallel_seqscan_v2.patch | application/octet-stream | 78.6 KB |
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 12:51:01 |
Message-ID: | 20141219125101.GH3510@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Amit,
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> 1. Parallel workers help a lot when there is an expensive qualification
> to evaluate; the more expensive the qualification, the better the
> results.
I'd certainly hope so. ;)
> 2. It works well for low selectivity quals, and as the selectivity increases,
> the benefit tends to go down due to the additional tuple communication cost
> between workers and the master backend.
I'm a bit sad to hear that the communication between workers and the
master backend is already being a bottleneck. Now, that said, the box
you're playing with looks to be pretty beefy and therefore the i/o
subsystem might be particularly good, but generally speaking, it's a lot
faster to move data in memory than it is to pull it off disk, and so I
wouldn't expect the tuple communication between processes to really be
the bottleneck...
> 3. After a certain point, having more workers won't help and rather
> has a negative impact; refer to Test-4.
Yes, I see that too and it's also interesting- have you been able to
identify why? What is the overhead (specifically) which is causing
that?
> I think, as discussed previously, we need to introduce 2 additional cost
> variables (parallel_startup_cost, cpu_tuple_communication_cost) to
> estimate the parallel seq scan cost so that when the tables are small
> or selectivity is high, it increases the cost of the parallel plan.
I agree that we need to figure out a way to cost out parallel plans, but
I have doubts about these being the right way to do that. There has
been quite a bit of literature regarding parallel execution and
planning- have you had a chance to review anything along those lines?
We certainly like to draw on previous experiences and analysis rather
than trying to pave our own way.
With these additional costs comes the consideration that we're looking
for a wall-clock runtime proxy and therefore, while we need to add costs
for parallel startup and tuple communication, we have to reduce the
overall cost because of the parallelism or we'd never end up choosing a
parallel plan. Is the thought to simply add up all the costs and then
divide? Or perhaps to divide the cost of the actual plan but then add
in the parallel startup cost and the tuple communication cost?
Perhaps there has been prior discussion on these points but I'm thinking
we need a README or similar which discusses all of this and includes any
references out to academic papers or similar as appropriate.
Thanks!
Stephen
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 13:15:07 |
Message-ID: | CA+TgmoZwzh=S9izgVvAyQNFgv8V5J+y0bDV-ZFPn+JpSEuy5kQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Dec 19, 2014 at 7:51 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> 3. After a certain point, having more workers won't help and rather
>> has a negative impact; refer to Test-4.
>
> Yes, I see that too and it's also interesting- have you been able to
> identify why? What is the overhead (specifically) which is causing
> that?
Let's rewind. Amit's results show that, with a naive algorithm
(pre-distributing equal-sized chunks of the relation to every worker)
and a fairly-naive first cut at how to pass tuples around (I believe
largely from what I did in pg_background) he can sequential-scan a
table with 8 workers at 6.4 times the speed of a single process, and
you're complaining because it's not efficient enough? It's a first
draft! Be happy we got 6.4x, for crying out loud!
The barrier to getting parallel sequential scan (or any parallel
feature at all) committed is not going to be whether an 8-way scan is
6.4 times faster or 7.1 times faster or 7.8 times faster. It's going
to be whether it's robust and won't break things. We should be
focusing most of our effort here on identifying and fixing robustness
problems. I'd vote to commit a feature like this with a 3x
performance speedup if I thought it was robust enough.
I'm not saying we shouldn't try to improve the performance here - we
definitely should. But I don't think we should say, oh, an 8-way scan
isn't good enough, we need a 16-way or 32-way scan in order for this
to be efficient. That is getting your priorities quite mixed up.
>> I think, as discussed previously, we need to introduce 2 additional cost
>> variables (parallel_startup_cost, cpu_tuple_communication_cost) to
>> estimate the parallel seq scan cost so that when the tables are small
>> or selectivity is high, it increases the cost of the parallel plan.
>
> I agree that we need to figure out a way to cost out parallel plans, but
> I have doubts about these being the right way to do that. There has
> been quite a bit of literature regarding parallel execution and
> planning- have you had a chance to review anything along those lines?
> We certainly like to draw on previous experiences and analysis rather
> than trying to pave our own way.
I agree that it would be good to review the literature, but am not
aware of anything relevant. Could you (or can anyone) provide some
links?
> With these additional costs comes the consideration that we're looking
> for a wall-clock runtime proxy and therefore, while we need to add costs
> for parallel startup and tuple communication, we have to reduce the
> overall cost because of the parallelism or we'd never end up choosing a
> parallel plan. Is the thought to simply add up all the costs and then
> divide? Or perhaps to divide the cost of the actual plan but then add
> in the parallel startup cost and the tuple communication cost?
This has been discussed, on this thread.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 14:27:10 |
Message-ID: | 20141219142710.GA29570@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Robert,
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Fri, Dec 19, 2014 at 7:51 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >> 3. After a certain point, having more workers won't help and rather
> >> has a negative impact; refer to Test-4.
> >
> > Yes, I see that too and it's also interesting- have you been able to
> > identify why? What is the overhead (specifically) which is causing
> > that?
>
> Let's rewind. Amit's results show that, with a naive algorithm
> (pre-distributing equal-sized chunks of the relation to every worker)
> and a fairly-naive first cut at how to pass tuples around (I believe
> largely from what I did in pg_background) he can sequential-scan a
> table with 8 workers at 6.4 times the speed of a single process, and
> you're complaining because it's not efficient enough? It's a first
> draft! Be happy we got 6.4x, for crying out loud!
He also showed cases where parallelizing a query even with just two
workers caused a serious increase in the total runtime (Test 6). Even
having four workers was slower in that case; a modest performance
improvement was reached at eight, but no further improvement was
seen when running with 16.
Being able to understand what's happening will inform how we cost this
to, hopefully, achieve the 6.4x gains where we can and avoid the
pitfalls of performing worse than a single thread in cases where
parallelism doesn't help. What would likely be very helpful in the
analysis would be CPU time information: when running with eight workers,
were we using 800% CPU (8x 100%), or something less (perhaps due to
locking, I/O, or other processes)?
Perhaps it's my fault for not being surprised that a naive first cut
gives us such gains as my experience with parallel operations and PG has
generally been very good (through the use of multiple connections to the
DB and therefore independent transactions, of course). I'm very excited
that we're making such great progress towards having parallel execution
in the DB as I've often used PG in data warehouse use-cases.
> The barrier to getting parallel sequential scan (or any parallel
> feature at all) committed is not going to be whether an 8-way scan is
> 6.4 times faster or 7.1 times faster or 7.8 times faster. It's going
> to be whether it's robust and won't break things. We should be
> focusing most of our effort here on identifying and fixing robustness
> problems. I'd vote to commit a feature like this with a 3x
> performance speedup if I thought it was robust enough.
I don't have any problem if an 8-way scan is 6.4x faster or if it's 7.1
times faster, but what if that 3x performance speedup is only achieved
when running with 8 CPUs at 100%? We'd have to coach our users to
constantly be tweaking the enable_parallel_query (or whatever) option
for the queries where it helps and turning it off for others. I'm not
so excited about that.
> I'm not saying we shouldn't try to improve the performance here - we
> definitely should. But I don't think we should say, oh, an 8-way scan
> isn't good enough, we need a 16-way or 32-way scan in order for this
> to be efficient. That is getting your priorities quite mixed up.
I don't think I said that. What I was getting at is that we need a cost
system which accounts for the costs accurately enough that we don't end
up with worse performance than single-threaded operation. In general, I
don't expect that to be very difficult and we can be conservative in the
initial releases to hopefully avoid regressions, but it absolutely needs
consideration.
> >> I think, as discussed previously, we need to introduce 2 additional cost
> >> variables (parallel_startup_cost, cpu_tuple_communication_cost) to
> >> estimate the parallel seq scan cost so that when the tables are small
> >> or selectivity is high, it increases the cost of the parallel plan.
> >
> > I agree that we need to figure out a way to cost out parallel plans, but
> > I have doubts about these being the right way to do that. There has
> > been quite a bit of literature regarding parallel execution and
> > planning- have you had a chance to review anything along those lines?
> > We certainly like to draw on previous experiences and analysis rather
> > than trying to pave our own way.
>
> I agree that it would be good to review the literature, but am not
> aware of anything relevant. Could you (or can anyone) provide some
> links?
There's certainly documentation available from the other RDBMS' which
already support parallel query, as one source. Other academic papers
exist (and once you've linked into one, the references and prior work
helps bring in others). Sadly, I don't currently have ACM access (might
have to change that..), but there are publicly available papers also,
such as:
http://i.stanford.edu/pub/cstr/reports/cs/tr/96/1570/CS-TR-96-1570.pdf
http://www.vldb.org/conf/1998/p251.pdf
http://www.cs.uiuc.edu/class/fa05/cs591han/sigmodpods04/sigmod/pdf/I-001c.pdf
> > With these additional costs comes the consideration that we're looking
> > for a wall-clock runtime proxy and therefore, while we need to add costs
> > for parallel startup and tuple communication, we have to reduce the
> > overall cost because of the parallelism or we'd never end up choosing a
> > parallel plan. Is the thought to simply add up all the costs and then
> > divide? Or perhaps to divide the cost of the actual plan but then add
> > in the parallel startup cost and the tuple communication cost?
>
> This has been discussed, on this thread.
Fantastic. What I found in the patch was:
+ /*
+ * We simply assume that cost will be equally shared by parallel
+ * workers which might not be true especially for doing disk access.
+ * XXX - We would like to change these values based on some concrete
+ * tests.
+ */
What I asked for was:
----
I'm thinking we need a README or similar which discusses all of this and
includes any references out to academic papers or similar as appropriate.
----
Perhaps it doesn't deserve its own README, but we clearly need more.
Thanks!
Stephen
From: | Marko Tiikkaja <marko(at)joh(dot)to> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 14:32:25 |
Message-ID: | 549436F9.5050504@joh.to |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 12/19/14 3:27 PM, Stephen Frost wrote:
> We'd have to coach our users to
> constantly be tweaking the enable_parallel_query (or whatever) option
> for the queries where it helps and turning it off for others. I'm not
> so excited about that.
I'd be perfectly (that means 100%) happy if it just defaulted to off,
but I could turn it up to 11 whenever I needed it. I don't believe to
be the only one with this opinion, either.
.marko
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Marko Tiikkaja <marko(at)joh(dot)to> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 14:39:57 |
Message-ID: | 20141219143957.GB29570@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Marko Tiikkaja (marko(at)joh(dot)to) wrote:
> On 12/19/14 3:27 PM, Stephen Frost wrote:
> >We'd have to coach our users to
> >constantly be tweaking the enable_parallel_query (or whatever) option
> >for the queries where it helps and turning it off for others. I'm not
> >so excited about that.
>
> I'd be perfectly (that means 100%) happy if it just defaulted to
> off, but I could turn it up to 11 whenever I needed it. I don't
> believe to be the only one with this opinion, either.
Perhaps we should reconsider our general position on hints then and
add them so users can define the plan to be used.. For my part, I don't
see this as all that much different.
Consider if we were just adding HashJoin support today as an example.
Would we be happy if we had to default to enable_hashjoin = off? Or if
users had to do that regularly because our costing was horrid? It's bad
enough that we have to resort to those tweaks today in rare cases.
Thanks,
Stephen
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Marko Tiikkaja <marko(at)joh(dot)to>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 14:53:53 |
Message-ID: | CA+Tgmob3qJHYtwRnNWySoFRQJkGHdxpkRmRW3Rz9aLmWG4FzZg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Dec 19, 2014 at 9:39 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> Perhaps we should reconsider our general position on hints then and
> add them so users can define the plan to be used.. For my part, I don't
> see this as all that much different.
>
> Consider if we were just adding HashJoin support today as an example.
> Would we be happy if we had to default to enable_hashjoin = off? Or if
> users had to do that regularly because our costing was horrid? It's bad
> enough that we have to resort to those tweaks today in rare cases.
If you're proposing that it is not reasonable to have a GUC that
limits the degree of parallelism, then I think that's outright crazy:
that is probably the very first GUC we need to add. New query
processing capabilities can entail new controlling GUCs, and
parallelism, being as complex as it is, will probably add several of
them.
But the big picture here is that if you want to ever have parallelism
in PostgreSQL at all, you're going to have to live with the first
version being pretty crude. I think it's quite likely that the first
version of parallel sequential scan will be just as buggy as Hot
Standby was when we first added it, or as buggy as the multi-xact code
was when it went in, and probably subject to an even greater variety
of taxing limitations than any feature we've committed in the 6 years
I've been involved in the project. We get to pick between that and
not having it at all.
I'll take a look at the papers you sent about parallel query
optimization, but personally I think that's putting the cart not only
before the horse but also before the road. For V1, we need a query
optimization model that does not completely suck - no more. The key
criterion here is that this has to WORK. There will be time enough to
improve everything else once we reach that goal.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Marko Tiikkaja <marko(at)joh(dot)to> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 14:54:38 |
Message-ID: | 54943C2E.6010401@vmware.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 12/19/2014 04:39 PM, Stephen Frost wrote:
> * Marko Tiikkaja (marko(at)joh(dot)to) wrote:
>> On 12/19/14 3:27 PM, Stephen Frost wrote:
>>> We'd have to coach our users to
>>> constantly be tweaking the enable_parallel_query (or whatever) option
>>> for the queries where it helps and turning it off for others. I'm not
>>> so excited about that.
>>
>> I'd be perfectly (that means 100%) happy if it just defaulted to
>> off, but I could turn it up to 11 whenever I needed it. I don't
>> believe to be the only one with this opinion, either.
>
> Perhaps we should reconsider our general position on hints then and
> add them so users can define the plan to be used.. For my part, I don't
> see this as all that much different.
>
> Consider if we were just adding HashJoin support today as an example.
> Would we be happy if we had to default to enable_hashjoin = off? Or if
> users had to do that regularly because our costing was horrid? It's bad
> enough that we have to resort to those tweaks today in rare cases.
This is somewhat different. Imagine that we achieve perfect
parallelization, so that when you set enable_parallel_query=8, every
query runs exactly 8x faster on an 8-core system, by using all eight cores.
Now, you might still want to turn parallelization off, or at least set
it to a lower setting, on an OLTP system. You might not want a single
query to hog all CPUs to run one query faster; you'd want to leave some
for other queries. In particular, if you run a mix of short
transactions, and some background-like tasks that run for minutes or
hours, you do not want to starve the short transactions by giving all
eight CPUs to the background task.
Admittedly, this is a rather crude knob to tune for such things,
but it's quite intuitive to a DBA: how many CPU cores is one query
allowed to utilize? And we don't really have anything better.
In real life, there's always some overhead to parallelization, so that
even if you can make one query run faster by doing it, you might hurt
overall throughput. To some extent, it's a latency vs. throughput
tradeoff, and it's quite reasonable to have a GUC for that because
people have different priorities.
- Heikki
From: | Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 19:26:28 |
Message-ID: | 54947BE4.8080900@archidevsys.co.nz |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 20/12/14 03:54, Heikki Linnakangas wrote:
> On 12/19/2014 04:39 PM, Stephen Frost wrote:
>> * Marko Tiikkaja (marko(at)joh(dot)to) wrote:
>>> On 12/19/14 3:27 PM, Stephen Frost wrote:
>>>> We'd have to coach our users to
>>>> constantly be tweaking the enable_parallel_query (or whatever) option
>>>> for the queries where it helps and turning it off for others. I'm not
>>>> so excited about that.
>>>
>>> I'd be perfectly (that means 100%) happy if it just defaulted to
>>> off, but I could turn it up to 11 whenever I needed it. I don't
>>> believe to be the only one with this opinion, either.
>>
>> Perhaps we should reconsider our general position on hints then and
>> add them so users can define the plan to be used.. For my part, I don't
>> see this as all that much different.
>>
>> Consider if we were just adding HashJoin support today as an example.
>> Would we be happy if we had to default to enable_hashjoin = off? Or if
>> users had to do that regularly because our costing was horrid? It's bad
>> enough that we have to resort to those tweaks today in rare cases.
>
> This is somewhat different. Imagine that we achieve perfect
> parallelization, so that when you set enable_parallel_query=8, every
> query runs exactly 8x faster on an 8-core system, by using all eight
> cores.
>
> Now, you might still want to turn parallelization off, or at least set
> it to a lower setting, on an OLTP system. You might not want a single
> query to hog all CPUs to run one query faster; you'd want to leave
> some for other queries. In particular, if you run a mix of short
> transactions, and some background-like tasks that run for minutes or
> hours, you do not want to starve the short transactions by giving all
> eight CPUs to the background task.
>
> Admittedly, this is a rather crude knob to tune for such things,
> but it's quite intuitive to a DBA: how many CPU cores is one query
> allowed to utilize? And we don't really have anything better.
>
> In real life, there's always some overhead to parallelization, so that
> even if you can make one query run faster by doing it, you might hurt
> overall throughput. To some extent, it's a latency vs. throughput
> tradeoff, and it's quite reasonable to have a GUC for that because
> people have different priorities.
>
> - Heikki
>
>
>
How about 3 numbers:
minCPUs # > 0
maxCPUs # >= minCPUs
fractionOfCPUs # rounded up
If you just have the *number* of CPUs, then a setting that is
appropriate for a quad core may be too *small* for an octo-core processor.
If you just have the *fraction* of CPUs, then a setting that is
appropriate for a quad core may be too *large* for an octo-core processor.
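A rough sketch of how the three settings might combine (the names and the
clamping rule here are purely illustrative):

/* Illustrative only: turn the three proposed knobs into a worker count. */
#include <math.h>

static int
choose_parallel_degree(int min_cpus, int max_cpus,
                       double fraction_of_cpus, int available_cpus)
{
    /* fractionOfCPUs, rounded up */
    int         target = (int) ceil(fraction_of_cpus * available_cpus);

    /* clamp into [minCPUs, maxCPUs] */
    if (target < min_cpus)
        target = min_cpus;
    if (target > max_cpus)
        target = max_cpus;

    return target;
}

That way the same configuration scales sensibly from a quad core to an
octo-core box.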
Cheers,
Gavin
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Marko Tiikkaja <marko(at)joh(dot)to>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 19:49:29 |
Message-ID: | 20141219194929.GC29570@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Robert,
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Fri, Dec 19, 2014 at 9:39 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > Perhaps we should reconsider our general position on hints then and
> > add them so users can define the plan to be used.. For my part, I don't
> > see this as all that much different.
> >
> > Consider if we were just adding HashJoin support today as an example.
> > Would we be happy if we had to default to enable_hashjoin = off? Or if
> > users had to do that regularly because our costing was horrid? It's bad
> > enough that we have to resort to those tweaks today in rare cases.
>
> If you're proposing that it is not reasonable to have a GUC that
> limits the degree of parallelism, then I think that's outright crazy:
I'm pretty sure that I didn't say anything along those lines. I'll try
to be clearer.
What I'd like is such a GUC that we can set at a reasonable default of,
say, 4, and trust that our planner will generally do the right thing.
Clearly, this may be something which admins have to tweak but what I
would really like to avoid is users having to set this GUC explicitly
for each of their queries.
> that is probably the very first GUC we need to add. New query
> processing capabilities can entail new controlling GUCs, and
> parallelism, being as complex as it is, will probably add several of
> them.
That's fine if they're intended for debugging issues or dealing with
unexpected bugs or issues, but let's not go into this thinking we should
add GUCs which are geared with the expectation of users tweaking them
regularly.
> But the big picture here is that if you want to ever have parallelism
> in PostgreSQL at all, you're going to have to live with the first
> version being pretty crude. I think it's quite likely that the first
> version of parallel sequential scan will be just as buggy as Hot
> Standby was when we first added it, or as buggy as the multi-xact code
> was when it went in, and probably subject to an even greater variety
> of taxing limitations than any feature we've committed in the 6 years
> I've been involved in the project. We get to pick between that and
> not having it at all.
If it's disabled by default then I'm worried it won't really improve
until it is. Perhaps that's setting a higher bar than you feel is
necessary but, for my part at least, it doesn't feel like a very high
level.
> I'll take a look at the papers you sent about parallel query
> optimization, but personally I think that's putting the cart not only
> before the horse but also before the road. For V1, we need a query
> optimization model that does not completely suck - no more. The key
> criterion here is that this has to WORK. There will be time enough to
> improve everything else once we reach that goal.
I agree that it's got to work, but it also needs to be generally well
designed, and have the expectation of being on by default.
Thanks,
Stephen
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> |
Cc: | Marko Tiikkaja <marko(at)joh(dot)to>, Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-19 20:00:35 |
Message-ID: | 20141219200035.GD29570@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Heikki Linnakangas (hlinnakangas(at)vmware(dot)com) wrote:
> On 12/19/2014 04:39 PM, Stephen Frost wrote:
> >* Marko Tiikkaja (marko(at)joh(dot)to) wrote:
> >>I'd be perfectly (that means 100%) happy if it just defaulted to
> >>off, but I could turn it up to 11 whenever I needed it. I don't
> >>believe to be the only one with this opinion, either.
> >
> >Perhaps we should reconsider our general position on hints then and
> >add them so users can define the plan to be used.. For my part, I don't
> >see this as all that much different.
> >
> >Consider if we were just adding HashJoin support today as an example.
> >Would we be happy if we had to default to enable_hashjoin = off? Or if
> >users had to do that regularly because our costing was horrid? It's bad
> >enough that we have to resort to those tweaks today in rare cases.
>
> This is somewhat different. Imagine that we achieve perfect
> parallelization, so that when you set enable_parallel_query=8, every
> query runs exactly 8x faster on an 8-core system, by using all eight
> cores.
To be clear, as I mentioned to Robert just now, I'm not objecting to a
GUC being added to turn off or control parallelization. I don't want
such a GUC to be a crutch for us to lean on when it comes to questions
about the optimizer though. We need to work through the optimizer
questions of "should this be parallelized?" and, perhaps later, "how
many ways is it sensible to parallelize this?" I'm worried we'll take
such a GUC as a directive along the lines of "we are being told to
parallelize to exactly this level every time and for every query which
can be." The GUC should be an input into the planner/optimizer much the
way enable_hashjoin is, unless it's being done as a *limiting* factor
for the administrator to be able to control, but we've generally avoided
doing that (see: work_mem) and, if we're going to start, we should
probably come up with an approach that addresses the considerations for
other resources too.
Thanks,
Stephen
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-21 06:42:04 |
Message-ID: | CAA4eK1Jf7ORdkDYFTNC6kYe7zcn5-49c_SztVdJ=1HC7KcdjMQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
> Amit,
>
> * Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> > 1. Parallel workers help a lot when there is an expensive qualification
> > to evaluate; the more expensive the qualification, the better the
> > results.
>
> I'd certainly hope so. ;)
>
> > 2. It works well for low selectivity quals, and as the selectivity increases,
> > the benefit tends to go down due to the additional tuple communication cost
> > between workers and the master backend.
>
> I'm a bit sad to hear that the communication between workers and the
> master backend is already being a bottleneck. Now, that said, the box
> you're playing with looks to be pretty beefy and therefore the i/o
> subsystem might be particularly good, but generally speaking, it's a lot
> faster to move data in memory than it is to pull it off disk, and so I
> wouldn't expect the tuple communication between processes to really be
> the bottleneck...
>
The main reason for the higher cost of tuple communication is that at
this moment I have used an approach to pass the tuples which is
comparatively less error prone and can be used as per the existing
FE/BE protocol.
To explain in brief, what happens here is that currently the worker
backend gets the tuple from the page, deforms it, and sends it to the
master backend via the message queue; the master backend then forms the
tuple and sends it to the upper layer, which deforms it again via
slot_getallattrs(slot) before sending it to the frontend. The benefit of
this approach is that it works as per the current protocol message ('D')
and as per our current executor code.
Now there are a couple of ways in which we can reduce the tuple
communication overhead.
a. Instead of passing value array, just pass tuple id, but retain the
buffer pin till master backend reads the tuple based on tupleid.
This has side effect that we have to retain buffer pin for longer
period of time, but again that might not have any problem in
real world usage of parallel query.
b. Instead of passing value array, pass directly the tuple which could
be directly propagated by master backend to upper layer or otherwise
in master backend change some code such that it could propagate the
tuple array received via shared memory queue directly to frontend.
Basically save the one extra cycle of form/deform tuple.
Both these need some new message type and handling for same in
Executor code.
Having said above, I think we can try to optimize this in multiple
ways, however we need additional mechanism and changes in Executor
code which is error prone and doesn't seem to be important at this
stage where we want the basic feature to work.
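As a rough illustration of what option (b) would look like in the worker
(shm_mq_send() and the HeapTuple fields exist; the framing, the message
kind and the error text are hypothetical):

/*
 * Illustrative sketch only: ship the raw tuple bytes through the shared
 * memory queue so the master backend can forward them without an extra
 * form/deform cycle.
 */
#include "postgres.h"
#include "access/htup.h"
#include "storage/shm_mq.h"

static void
worker_send_raw_tuple(shm_mq_handle *mqh, HeapTuple tuple)
{
    /* t_len covers the whole tuple (header plus data). */
    shm_mq_result res = shm_mq_send(mqh, tuple->t_len, tuple->t_data, false);

    if (res != SHM_MQ_SUCCESS)
        ereport(ERROR,
                (errmsg("could not send tuple to master backend")));
}

The receiving side would still need a new message type so the master
knows it is holding an already-formed tuple rather than a datum array.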
> > 3. After a certain point, having more workers won't help and rather
> > has a negative impact; refer to Test-4.
>
> Yes, I see that too and it's also interesting- have you been able to
> identify why? What is the overhead (specifically) which is causing
> that?
>
I think there are mainly two things which can lead to benefit
by employing parallel workers
a. Better use of available I/O bandwidth
b. Better use of available CPU's by doing expression evaluation
by multiple workers.
The simple theory here is that there has to be a certain limit
(in terms of the number of parallel workers) up to which both of
the above points provide a benefit, after which the overhead starts
to dominate: setting up workers that are not really required,
additional waiting by the master backend for non-helping workers
to finish their work, not having enough CPUs available, and perhaps
other effects as well, such as over-using the I/O channel, can all
degrade performance rather than improve it.
In the above tests, it seems to me that the maximum benefit due to
'a' is realized upto 4~8 workers and the maximum benefit due to
'b' depends upon the complexity (time to evaluate) of expression.
That is the reason why we can see benefits in Test-1 ~ Test-3 above
8 parallel workers as well, whereas for Test-4 to Test-6 the benefit
maximizes at 8 workers and after that there is either no improvement or
degradation, due to one or more of the reasons explained in the previous
paragraph.
I think the important point, which you mentioned as well, is
that there should be a reasonably good cost model which can
account for some or all of these things, so that by using parallel
query the user can get the benefit it provides and won't have
to pay the cost in cases where there is little or no benefit.
I am not sure that in the first cut we can come up with a highly
robust cost model, but it should not be so weak that most of the
time the user has to find the right tuning based on the parameters
we are going to add. Based on my understanding and by referring
to the existing literature, I will try to come up with a cost model,
and then we can have a discussion, if required, on whether that is
good enough for the first cut or not.
> > I think, as discussed previously, we need to introduce 2 additional cost
> > variables (parallel_startup_cost, cpu_tuple_communication_cost) to
> > estimate the parallel seq scan cost so that when the tables are small
> > or selectivity is high, it increases the cost of the parallel plan.
>
> I agree that we need to figure out a way to cost out parallel plans, but
> I have doubts about these being the right way to do that. There has
> been quite a bit of literature regarding parallel execution and
> planning- have you had a chance to review anything along those lines?
Not now, but some time back I read quite a few papers on parallelism;
I will refer to some of them again before deciding on the exact cost
model, and might as well discuss them.
> We certainly like to draw on previous experiences and analysis rather
> than trying to pave our own way.
>
> With these additional costs comes the consideration that we're looking
> for a wall-clock runtime proxy and therefore, while we need to add costs
> for parallel startup and tuple communication, we have to reduce the
> overall cost because of the parallelism or we'd never end up choosing a
> parallel plan. Is the thought to simply add up all the costs and then
> divide? Or perhaps to divide the cost of the actual plan but then add
> in the parallel startup cost and the tuple communication cost?
>
> Perhaps there has been prior discussion on these points but I'm thinking
> we need a README or similar which discusses all of this and includes any
> references out to academic papers or similar as appropriate.
>
Got the point; I think we need to mention this somewhere, either in a
README or in some file header.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-22 02:04:56 |
Message-ID: | 54977C48.5020600@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 12/21/14, 12:42 AM, Amit Kapila wrote:
> On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost <sfrost(at)snowman(dot)net <mailto:sfrost(at)snowman(dot)net>> wrote:
> a. Instead of passing value array, just pass tuple id, but retain the
> buffer pin till master backend reads the tuple based on tupleid.
> This has side effect that we have to retain buffer pin for longer
> period of time, but again that might not have any problem in
> real world usage of parallel query.
>
> b. Instead of passing value array, pass directly the tuple which could
> be directly propagated by master backend to upper layer or otherwise
> in master backend change some code such that it could propagate the
> tuple array received via shared memory queue directly to frontend.
> Basically save the one extra cycle of form/deform tuple.
>
> Both these need some new message type and handling for same in
> Executor code.
>
> Having said above, I think we can try to optimize this in multiple
> ways, however we need additional mechanism and changes in Executor
> code which is error prone and doesn't seem to be important at this
> stage where we want the basic feature to work.
Would b require some means of ensuring we didn't try and pass raw tuples to frontends? Other than that potential wrinkle, it seems like less work than a.
...
> I think there are mainly two things which can lead to benefit
> by employing parallel workers
> a. Better use of available I/O bandwidth
> b. Better use of available CPU's by doing expression evaluation
> by multiple workers.
...
> In the above tests, it seems to me that the maximum benefit due to
> 'a' is realized upto 4~8 workers
I'd think a good first estimate here would be to just use effective_io_concurrency.
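Roughly something like this for a first cut (effective_io_concurrency is
the existing GUC; the helper and the clamp are only illustrative):

/* Illustrative only: bound the worker count by the configured I/O concurrency. */
#include "postgres.h"           /* for the Min/Max macros */

static int
initial_parallel_degree(int requested_degree, int io_concurrency)
{
    return Min(requested_degree, Max(io_concurrency, 1));
}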
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-22 03:57:44 |
Message-ID: | CAA4eK1++HD31vgA2-C5btiX+6bLZrtoTwmwzgk4Cp9A0YEgUJw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Dec 22, 2014 at 7:34 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
>
> On 12/21/14, 12:42 AM, Amit Kapila wrote:
>>
>> On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost <sfrost(at)snowman(dot)net
<mailto:sfrost(at)snowman(dot)net>> wrote:
>> a. Instead of passing value array, just pass tuple id, but retain the
>> buffer pin till master backend reads the tuple based on tupleid.
>> This has side effect that we have to retain buffer pin for longer
>> period of time, but again that might not have any problem in
>> real world usage of parallel query.
>>
>> b. Instead of passing value array, pass directly the tuple which could
>> be directly propagated by master backend to upper layer or otherwise
>> in master backend change some code such that it could propagate the
>> tuple array received via shared memory queue directly to frontend.
>> Basically save the one extra cycle of form/deform tuple.
>>
>> Both these need some new message type and handling for same in
>> Executor code.
>>
>> Having said above, I think we can try to optimize this in multiple
>> ways, however we need additional mechanism and changes in Executor
>> code which is error prone and doesn't seem to be important at this
>> stage where we want the basic feature to work.
>
>
> Would b require some means of ensuring we didn't try and pass raw tuples
to frontends?
That seems to be already there; before sending the tuple
to the frontend, we already ensure that it is deformed (refer to
printtup() -> slot_getallattrs()).
>Other than that potential wrinkle, it seems like less work than a.
>
Here, I am assuming that you mean the *pass the tuple directly*
approach. We would also need to devise a new protocol message and a
mechanism to pass the tuple directly via the shared memory queues;
also, I think that currently we can only send via the shared memory
queues the things we can send via the FE/BE protocol, and we don't
send tuples directly to the frontend. Apart from this, I am not sure
how much benefit it can give, because while it will reduce one part
of the tuple communication, the amount of data transferred will be
almost the same.
This is an area of improvement which needs more investigation, and
even without it we can get a benefit in many cases, as shown upthread.
After that, I think we can try to parallelize aggregation (Simon Riggs
and David Rowley have already worked out some infrastructure for the
same), which will surely give us good benefits. So I suggest it's
better to focus on the remaining things needed to get this patch into
a shape (in terms of robustness/stability) where it can be accepted,
rather than trying to optimize tuple communication, which we can do
later as well.
> ...
>
>> I think there are mainly two things which can lead to benefit
>> by employing parallel workers
>> a. Better use of available I/O bandwidth
>> b. Better use of available CPU's by doing expression evaluation
>> by multiple workers.
>
>
> ...
>
>> In the above tests, it seems to me that the maximum benefit due to
>> 'a' is realized upto 4~8 workers
>
>
> I'd think a good first estimate here would be to just use
effective_io_concurrency.
>
One thing we should be cautious about with this parameter is that it
is currently mapped to the number of pages to be prefetched, so using
it to decide the degree of parallelism could be slightly tricky;
however, I will consider it while working on the cost model.
Thanks for your suggestions.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-31 14:20:01 |
Message-ID: | CAA-aLv4abguWP4-NKRcNraCxSxMB5EMhz1GM0E=r8nZ_qb1ONg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 18 December 2014 at 16:03, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
>
> On Thu, Dec 18, 2014 at 9:22 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> >
> > On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> > >
> > > On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost <sfrost(at)snowman(dot)net>
> wrote:
> > > >
> > >
> > > So to summarize my understanding, below are the set of things
> > > which I should work on and in the order they are listed.
> > >
> > > 1. Push down qualification
> > > 2. Performance Data
> > > 3. Improve the way to push down the information related to worker.
> > > 4. Dynamic allocation of work for workers.
> > >
> > >
> >
> > I have worked on the patch to accomplish above mentioned points
> > 1, 2 and partly 3 and would like to share the progress with community.
>
> Sorry forgot to attach updated patch in last mail, attaching it now.
>
When attempting to recreate the plan in your example, I get an error:
➤ psql://thom(at)[local]:5488/pgbench
# create table t1(c1 int, c2 char(500)) with (fillfactor=10);
CREATE TABLE
Time: 13.653 ms
➤ psql://thom(at)[local]:5488/pgbench
# insert into t1 values(generate_series(1,100),'amit');
INSERT 0 100
Time: 4.796 ms
➤ psql://thom(at)[local]:5488/pgbench
# explain select c1 from t1;
ERROR: could not register background process
HINT: You may need to increase max_worker_processes.
Time: 1.659 ms
➤ psql://thom(at)[local]:5488/pgbench
# show max_worker_processes ;
max_worker_processes
----------------------
8
(1 row)
Time: 0.199 ms
# show parallel_seqscan_degree ;
parallel_seqscan_degree
-------------------------
10
(1 row)
Should I really need to increase max_worker_processes to >=
parallel_seqscan_degree? If so, shouldn't there be a hint here along with
the error message pointing this out? And should the error be produced when
only a *plan* is being requested?
Also, I noticed that where a table is partitioned, the plan isn't
parallelised:
# explain select distinct bid from pgbench_accounts;
QUERY
PLAN
----------------------------------------------------------------------------------------
HashAggregate (cost=1446639.00..1446643.99 rows=499 width=4)
Group Key: pgbench_accounts.bid
-> Append (cost=0.00..1321639.00 rows=50000001 width=4)
-> Seq Scan on pgbench_accounts (cost=0.00..0.00 rows=1 width=4)
-> Seq Scan on pgbench_accounts_1 (cost=0.00..4279.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_2 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_3 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_4 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_5 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_6 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_7 (cost=0.00..2640.00
rows=100000 width=4)
...
-> Seq Scan on pgbench_accounts_498 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_499 (cost=0.00..2640.00
rows=100000 width=4)
-> Seq Scan on pgbench_accounts_500 (cost=0.00..2640.00
rows=100000 width=4)
(504 rows)
Is this expected?
Thom
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2014-12-31 16:16:20 |
Message-ID: | CAA-aLv4CRgko6C_KaY1gazS1NwTHY=h-Rq8a-VteGHyDqKHRtg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 31 December 2014 at 14:20, Thom Brown <thom(at)linux(dot)com> wrote:
> On 18 December 2014 at 16:03, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
>>
>>
>> On Thu, Dec 18, 2014 at 9:22 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
>> wrote:
>> >
>> > On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
>> wrote:
>> > >
>> > > On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost <sfrost(at)snowman(dot)net>
>> wrote:
>> > > >
>> > >
>> > > So to summarize my understanding, below are the set of things
>> > > which I should work on and in the order they are listed.
>> > >
>> > > 1. Push down qualification
>> > > 2. Performance Data
>> > > 3. Improve the way to push down the information related to worker.
>> > > 4. Dynamic allocation of work for workers.
>> > >
>> > >
>> >
>> > I have worked on the patch to accomplish above mentioned points
>> > 1, 2 and partly 3 and would like to share the progress with community.
>>
>> Sorry forgot to attach updated patch in last mail, attaching it now.
>>
>
> When attempting to recreate the plan in your example, I get an error:
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # create table t1(c1 int, c2 char(500)) with (fillfactor=10);
> CREATE TABLE
> Time: 13.653 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # insert into t1 values(generate_series(1,100),'amit');
> INSERT 0 100
> Time: 4.796 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # explain select c1 from t1;
> ERROR: could not register background process
> HINT: You may need to increase max_worker_processes.
> Time: 1.659 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # show max_worker_processes ;
> max_worker_processes
> ----------------------
> 8
> (1 row)
>
> Time: 0.199 ms
>
> # show parallel_seqscan_degree ;
> parallel_seqscan_degree
> -------------------------
> 10
> (1 row)
>
>
> Should I really need to increase max_worker_processes to >=
> parallel_seqscan_degree? If so, shouldn't there be a hint here along with
> the error message pointing this out? And should the error be produced when
> only a *plan* is being requested?
>
> Also, I noticed that where a table is partitioned, the plan isn't
> parallelised:
>
> # explain select distinct bid from pgbench_accounts;
>
>
> QUERY
> PLAN
>
> ----------------------------------------------------------------------------------------
> HashAggregate (cost=1446639.00..1446643.99 rows=499 width=4)
> Group Key: pgbench_accounts.bid
> -> Append (cost=0.00..1321639.00 rows=50000001 width=4)
> -> Seq Scan on pgbench_accounts (cost=0.00..0.00 rows=1 width=4)
> -> Seq Scan on pgbench_accounts_1 (cost=0.00..4279.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_2 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_3 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_4 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_5 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_6 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_7 (cost=0.00..2640.00
> rows=100000 width=4)
> ...
> -> Seq Scan on pgbench_accounts_498 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_499 (cost=0.00..2640.00
> rows=100000 width=4)
> -> Seq Scan on pgbench_accounts_500 (cost=0.00..2640.00
> rows=100000 width=4)
> (504 rows)
>
> Is this expected?
>
Another issue (FYI, pgbench2 initialised with: pgbench -i -s 100 -F 10
pgbench2):
➤ psql://thom(at)[local]:5488/pgbench2
# explain select distinct bid from pgbench_accounts;
                                         QUERY PLAN
-------------------------------------------------------------------------------------------
 HashAggregate  (cost=245833.38..245834.38 rows=100 width=4)
   Group Key: bid
   ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..220833.38 rows=10000000 width=4)
         Number of Workers: 8
         Number of Blocks Per Workers: 208333
(5 rows)
Time: 7.476 ms
➤ psql://thom(at)[local]:5488/pgbench2
# explain (analyse, buffers, verbose) select distinct bid from
pgbench_accounts;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Time: 14897.991 ms
The logs say:
2014-12-31 15:21:42 GMT [9164]: [240-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [241-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [242-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [243-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [244-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [245-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [246-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [247-1] user=,db=,client= LOG: registering background worker "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [248-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [249-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [250-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [251-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [252-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [253-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [254-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:42 GMT [9164]: [255-1] user=,db=,client= LOG: starting background worker process "backend_worker"
2014-12-31 15:21:46 GMT [9164]: [256-1] user=,db=,client= LOG: worker process: backend_worker (PID 10887) exited with exit code 1
2014-12-31 15:21:46 GMT [9164]: [257-1] user=,db=,client= LOG: unregistering background worker "backend_worker"
2014-12-31 15:21:50 GMT [9164]: [258-1] user=,db=,client= LOG: worker process: backend_worker (PID 10888) exited with exit code 1
2014-12-31 15:21:50 GMT [9164]: [259-1] user=,db=,client= LOG: unregistering background worker "backend_worker"
2014-12-31 15:21:57 GMT [9164]: [260-1] user=,db=,client= LOG: server process (PID 10869) was terminated by signal 9: Killed
2014-12-31 15:21:57 GMT [9164]: [261-1] user=,db=,client= DETAIL: Failed process was running: explain (analyse, buffers, verbose) select distinct bid from pgbench_accounts;
2014-12-31 15:21:57 GMT [9164]: [262-1] user=,db=,client= LOG: terminating any other active server processes
Running it again, I get the same issue. This is with
parallel_seqscan_degree set to 8, and the crash occurs with 4 and 2 too.
This doesn't happen if I set the pgbench scale to 50. I suspect this is an
OOM issue. My laptop has 16GB RAM, the table is around 13GB at scale 100,
and I don't have swap enabled. But I'm concerned it crashes the whole
instance.
I also notice that requesting BUFFERS in a parallel EXPLAIN output yields
no such information. Is that not possible to report?
--
Thom
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Thom Brown <thom(at)linux(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-01 07:11:09 |
Message-ID: | CAA4eK1LVW909spe7PfHO3EkQaQ8O8qTjQ0NhQ-4BPnrh4YuwPA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Dec 31, 2014 at 7:50 PM, Thom Brown <thom(at)linux(dot)com> wrote:
>
>
> When attempting to recreate the plan in your example, I get an error:
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # create table t1(c1 int, c2 char(500)) with (fillfactor=10);
> CREATE TABLE
> Time: 13.653 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # insert into t1 values(generate_series(1,100),'amit');
> INSERT 0 100
> Time: 4.796 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # explain select c1 from t1;
> ERROR: could not register background process
> HINT: You may need to increase max_worker_processes.
> Time: 1.659 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # show max_worker_processes ;
> max_worker_processes
> ----------------------
> 8
> (1 row)
>
> Time: 0.199 ms
>
> # show parallel_seqscan_degree ;
> parallel_seqscan_degree
> -------------------------
> 10
> (1 row)
>
>
> Should I really need to increase max_worker_processes to >=
parallel_seqscan_degree?
Yes, as the parallel workers are implemented based on dynamic
bgworkers, so it is dependent on max_worker_processes.
> If so, shouldn't there be a hint here along with the error message
pointing this out? And should the error be produced when only a *plan* is
being requested?
>
I think one thing we could do to minimize the chance of such an
error is to set the number of parallel workers to be used for the plan
equal to max_worker_processes if parallel_seqscan_degree is greater
than max_worker_processes. Even if we do this, such an error can
still occur if a user has registered a bgworker before we could
start parallel plan execution.
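To illustrate the idea, here is a rough sketch (the function name is hypothetical
and not taken from the patch; max_worker_processes is the existing GUC variable):

    /*
     * Hypothetical sketch: never plan for more workers than the system
     * can possibly register.
     */
    static int
    clamp_parallel_degree(int requested_degree)
    {
        int         nworkers = requested_degree;

        if (nworkers > max_worker_processes)
            nworkers = max_worker_processes;

        return nworkers;
    }

Even with such a clamp, registration can still fail at execution time if other
bgworkers already occupy the slots, as mentioned above.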
> Also, I noticed that where a table is partitioned, the plan isn't
parallelised:
>
>
> Is this expected?
>
Yes; to keep the initial implementation simple, it only allows a
parallel plan when there is a single table in the query.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Thom Brown <thom(at)linux(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-01 10:34:22 |
Message-ID: | CAA4eK1+eaOx=-WtaZrDtn-9QePb50UaUsH6d7g0nZdHRubAuvw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Dec 31, 2014 at 9:46 PM, Thom Brown <thom(at)linux(dot)com> wrote:
>
> Another issue (FYI, pgbench2 initialised with: pgbench -i -s 100 -F 10
pgbench2):
>
>
> ➤ psql://thom(at)[local]:5488/pgbench2
>
> # explain (analyse, buffers, verbose) select distinct bid from
pgbench_accounts;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> Time: 14897.991 ms
>
> 2014-12-31 15:21:57 GMT [9164]: [260-1] user=,db=,client= LOG: server
process (PID 10869) was terminated by signal 9: Killed
> 2014-12-31 15:21:57 GMT [9164]: [261-1] user=,db=,client= DETAIL: Failed
process was running: explain (analyse, buffers, verbose) select distinct
bid from pgbench_accounts;
> 2014-12-31 15:21:57 GMT [9164]: [262-1] user=,db=,client= LOG:
terminating any other active server processes
>
> Running it again, I get the same issue. This is with
parallel_seqscan_degree set to 8, and the crash occurs with 4 and 2 too.
>
> This doesn't happen if I set the pgbench scale to 50. I suspect this is
a OOM issue. My laptop has 16GB RAM, the table is around 13GB at scale
100, and I don't have swap enabled. But I'm concerned it crashes the whole
instance.
>
Isn't this a backend crash due to OOM?
After that, the server will restart automatically.
> I also notice that requesting BUFFERS in a parallel EXPLAIN output yields
no such information.
> --
Yeah, and the reason is that all the work related to BUFFERS is
done by the worker backends; the master backend doesn't read any
pages, so it is not able to accumulate this information.
> Is that not possible to report?
It is not impossible to report such information; we can develop some
way to share it between the master backend and the workers.
I think we can do this, if required, once the patch is more stabilized.
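One possible shape for such sharing (purely a sketch of the idea, not something
the patch implements; the struct and function below are hypothetical) would be
for each worker to copy its buffer-usage counters into shared memory and for the
master backend to add them up when producing the EXPLAIN output:

    /* Hypothetical sketch: master accumulates per-worker buffer counters. */
    typedef struct WorkerBufferUsage
    {
        BufferUsage buf_usage;      /* copy of the worker's pgBufferUsage */
    } WorkerBufferUsage;

    static void
    accumulate_worker_buffer_usage(BufferUsage *total,
                                   WorkerBufferUsage *workers, int nworkers)
    {
        int         i;

        for (i = 0; i < nworkers; i++)
        {
            total->shared_blks_hit += workers[i].buf_usage.shared_blks_hit;
            total->shared_blks_read += workers[i].buf_usage.shared_blks_read;
            total->shared_blks_dirtied += workers[i].buf_usage.shared_blks_dirtied;
            total->shared_blks_written += workers[i].buf_usage.shared_blks_written;
            /* ... remaining BufferUsage fields accumulated the same way ... */
        }
    }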
Thanks for looking into the patch and reporting the issues.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Fabrízio de Royes Mello <fabriziomello(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-01 17:00:10 |
Message-ID: | CAFcNs+pOUEYC4PN4O_hUctHP=NM0=nJK14dBGnX=KY-HM17shg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
> I think one thing we could do minimize the chance of such an
> error is set the value of parallel workers to be used for plan equal
> to max_worker_processes if parallel_seqscan_degree is greater
> than max_worker_processes. Even if we do this, still such an
> error can come if user has registered bgworker before we could
> start parallel plan execution.
>
>
Can we check the number of free bgworkers slots to set the max workers?
Regards,
Fabrízio Mello
>
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Timbira: http://www.timbira.com.br
>> Blog: http://fabriziomello.github.io
>> Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello
>> Github: http://github.com/fabriziomello
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Fabrízio Mello <fabriziomello(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-01 17:59:57 |
Message-ID: | CA+TgmobKthWPjzFCLxAkpoG+59EC23RHti08byVnaNKaTaYeOA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
<fabriziomello(at)gmail(dot)com> wrote:
> Can we check the number of free bgworkers slots to set the max workers?
The real solution here is that this patch can't throw an error if it's
unable to obtain the desired number of background workers. It needs
to be able to smoothly degrade to a smaller number of background
workers, or none at all. I think a lot of this work will fall out
quite naturally if this patch is reworked to use the parallel
mode/parallel context stuff, the latest version of which includes an
example of how to set up a parallel scan in such a manner that it can
run with any number of workers.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-02 10:00:39 |
Message-ID: | CAA-aLv7Y35NtWxSDT8Mxu0DYix3LVq2v376rNmMmjn+LfCnsog@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1 January 2015 at 17:59, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
> <fabriziomello(at)gmail(dot)com> wrote:
> > Can we check the number of free bgworkers slots to set the max workers?
>
> The real solution here is that this patch can't throw an error if it's
> unable to obtain the desired number of background workers. It needs
> to be able to smoothly degrade to a smaller number of background
> workers, or none at all. I think a lot of this work will fall out
> quite naturally if this patch is reworked to use the parallel
> mode/parallel context stuff, the latest version of which includes an
> example of how to set up a parallel scan in such a manner that it can
> run with any number of workers.
>
+1
That sounds like exactly what's needed.
Thom
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-02 10:36:23 |
Message-ID: | CAA4eK1LvycQtcYem5ZYbceMitMR-ss8kSNCCm7U4DJr=RBgg=Q@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 1, 2015 at 11:29 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
> <fabriziomello(at)gmail(dot)com> wrote:
> > Can we check the number of free bgworkers slots to set the max workers?
>
> The real solution here is that this patch can't throw an error if it's
> unable to obtain the desired number of background workers. It needs
> to be able to smoothly degrade to a smaller number of background
> workers, or none at all.
I think handling it this way can have one side effect: if we degrade
to a smaller number of workers, then the cost of the plan (which the
optimizer decided based on the number of parallel workers) could end
up higher than that of a non-parallel scan.
Ideally, before finalizing the parallel plan we should reserve the
bgworkers required to execute that plan, but I think for now we can
work out a solution without that.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-02 10:39:29 |
Message-ID: | CAA-aLv52KyeSzV+25QsAoCw-CeQMUOFevgCWcUcXodytahYYhg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1 January 2015 at 10:34, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > Running it again, I get the same issue. This is with
> parallel_seqscan_degree set to 8, and the crash occurs with 4 and 2 too.
> >
> > This doesn't happen if I set the pgbench scale to 50. I suspect this is
> a OOM issue. My laptop has 16GB RAM, the table is around 13GB at scale
> 100, and I don't have swap enabled. But I'm concerned it crashes the whole
> instance.
> >
>
> Isn't this a backend crash due to OOM?
> And after that server will restart automatically.
>
Yes, I'm fairly sure it is. I guess what I'm confused about is that 8
parallel sequential scans in separate sessions (1 per session) don't cause
the server to crash, but in a single session (8 in 1 session), they do.
>
> > I also notice that requesting BUFFERS in a parallel EXPLAIN output
> yields no such information.
> > --
>
> Yeah and the reason for same is that all the work done related
> to BUFFERS is done by backend workers, master backend
> doesn't read any pages, so it is not able to accumulate this
> information.
>
> > Is that not possible to report?
>
> It is not impossible to report such information, we can develop some
> way to share such information between master backend and workers.
> I think we can do this if required once the patch is more stablized.
>
Ah great, as I think losing such information to this feature would be
unfortunate.
Will there be a GUC to influence parallel scan cost? Or does it take
effective_io_concurrency into account in the costs?
And will the planner be able to decide whether or not to use
background workers? For example:
# explain (analyse, buffers, verbose) select distinct bid from
pgbench_accounts;
                                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=89584.00..89584.05 rows=5 width=4) (actual time=228.222..228.224 rows=5 loops=1)
   Output: bid
   Group Key: pgbench_accounts.bid
   Buffers: shared hit=83334
   ->  Seq Scan on public.pgbench_accounts  (cost=0.00..88334.00 rows=500000 width=4) (actual time=0.008..136.522 rows=500000 loops=1)
         Output: bid
         Buffers: shared hit=83334
 Planning time: 0.071 ms
 Execution time: 228.265 ms
(9 rows)
This is a quick plan, but if we tell it that it's allowed 8 background
workers:
# set parallel_seqscan_degree = 8;
SET
Time: 0.187 ms
# explain (analyse, buffers, verbose) select distinct bid from
pgbench_accounts;
                                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=12291.75..12291.80 rows=5 width=4) (actual time=603.042..603.042 rows=1 loops=1)
   Output: bid
   Group Key: pgbench_accounts.bid
   ->  Parallel Seq Scan on public.pgbench_accounts  (cost=0.00..11041.75 rows=500000 width=4) (actual time=2.445..529.284 rows=500000 loops=1)
         Output: bid
         Number of Workers: 8
         Number of Blocks Per Workers: 10416
 Planning time: 0.049 ms
 Execution time: 663.103 ms
(9 rows)
Time: 663.437 ms
It's significantly slower. I'd hope the planner would anticipate this and
decide, "I'm just gonna perform a single scan in this instance as it'll be
a lot quicker for this simple case." So at the moment
parallel_seqscan_degree seems to mean "You *must* use this many workers if
you can parallelise." Ideally we'd be saying "can use up to this many if necessary".
Thanks
Thom
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Thom Brown <thom(at)linux(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-02 11:13:54 |
Message-ID: | CAA4eK1JFzayCwReAyv78qp3QKjagU5-w9XmKpCfuvxCLVrP-7Q@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 2, 2015 at 4:09 PM, Thom Brown <thom(at)linux(dot)com> wrote:
>
> On 1 January 2015 at 10:34, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>> > Running it again, I get the same issue. This is with
parallel_seqscan_degree set to 8, and the crash occurs with 4 and 2 too.
>> >
>> > This doesn't happen if I set the pgbench scale to 50. I suspect this
is a OOM issue. My laptop has 16GB RAM, the table is around 13GB at scale
100, and I don't have swap enabled. But I'm concerned it crashes the whole
instance.
>> >
>>
>> Isn't this a backend crash due to OOM?
>> And after that server will restart automatically.
>
>
> Yes, I'm fairly sure it is. I guess what I'm confused about is that 8
parallel sequential scans in separate sessions (1 per session) don't cause
the server to crash, but in a single session (8 in 1 session), they do.
>
It could be that the master backend retains some memory for a
longer period, which causes it to hit the OOM error. By the way,
in your test does the master backend always hit OOM, or is it
random (either master or worker)?
>
> Will there be a GUC to influence parallel scan cost? Or does it take
into account effective_io_concurrency in the costs?
>
> And will the planner be able to decide whether or not it'll choose to use
background workers or not? For example:
>
Yes, we are planning to introduce a cost model for parallel
communication (there is some discussion about it upthread),
but it's still not there, and that's why you are seeing it choose
a parallel plan when it shouldn't. Currently in the patch, if you
set parallel_seqscan_degree, it will most probably choose the
parallel plan.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-02 11:42:45 |
Message-ID: | CAA-aLv7iGpu0gF3HMbKTLMEGEsSJDbi9up2COvsbAjr7Oj=ONQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 2 January 2015 at 11:13, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Fri, Jan 2, 2015 at 4:09 PM, Thom Brown <thom(at)linux(dot)com> wrote:
> >
> > On 1 January 2015 at 10:34, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >>
> >> > Running it again, I get the same issue. This is with
> parallel_seqscan_degree set to 8, and the crash occurs with 4 and 2 too.
> >> >
> >> > This doesn't happen if I set the pgbench scale to 50. I suspect this
> is a OOM issue. My laptop has 16GB RAM, the table is around 13GB at scale
> 100, and I don't have swap enabled. But I'm concerned it crashes the whole
> instance.
> >> >
> >>
> >> Isn't this a backend crash due to OOM?
> >> And after that server will restart automatically.
> >
> >
> > Yes, I'm fairly sure it is. I guess what I'm confused about is that 8
> parallel sequential scans in separate sessions (1 per session) don't cause
> the server to crash, but in a single session (8 in 1 session), they do.
> >
>
> It could be possible that master backend retains some memory
> for longer period which causes it to hit OOM error, by the way
> in your test does always master backend hits OOM or is it
> random (either master or worker)
>
Just ran a few tests, and it appears to always be the master that hits OOM,
or at least I don't seem to be able to get an example of the worker hitting
it.
>
> >
> > Will there be a GUC to influence parallel scan cost? Or does it take
> into account effective_io_concurrency in the costs?
> >
> > And will the planner be able to decide whether or not it'll choose to
> use background workers or not? For example:
> >
>
> Yes, we are planing to introduce cost model for parallel
> communication (there is some discussion about the same
> upthread), but it's still not there and that's why you
> are seeing it to choose parallel plan when it shouldn't.
> Currently in patch, if you set parallel_seqscan_degree, it
> will most probably choose parallel plan only.
>
Ah, okay. Great.
Thanks.
Thom
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-05 15:01:07 |
Message-ID: | CA+TgmoZk+z64-ekff_wncJ0R=7dB_5jN3sMy=0vgnd6mnVaPRQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 2, 2015 at 5:36 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Thu, Jan 1, 2015 at 11:29 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
>> <fabriziomello(at)gmail(dot)com> wrote:
>> > Can we check the number of free bgworkers slots to set the max workers?
>>
>> The real solution here is that this patch can't throw an error if it's
>> unable to obtain the desired number of background workers. It needs
>> to be able to smoothly degrade to a smaller number of background
>> workers, or none at all.
>
> I think handling this way can have one side effect which is that if
> we degrade to smaller number, then the cost of plan (which was
> decided by optimizer based on number of parallel workers) could
> be more than non-parallel scan.
> Ideally before finalizing the parallel plan we should reserve the
> bgworkers required to execute that plan, but I think as of now
> we can workout a solution without it.
I don't think this is very practical. When cached plans are in use,
we can have a bunch of plans sitting around that may or may not get
reused at some point in the future, possibly far in the future. The
current situation, which I think we want to maintain, is that such
plans hold no execution-time resources (e.g. locks) and, generally,
don't interfere with other things people might want to execute on the
system. Nailing down a bunch of background workers just in case we
might want to use them in the future would be pretty unfriendly.
I think it's right to view this in the same way we view work_mem. We
plan on the assumption that an amount of memory equal to work_mem will
be available at execution time, without actually reserving it. If the
plan happens to need that amount of memory and if it actually isn't
available when needed, then performance will suck; conceivably, the
OOM killer might trigger. But it's the user's job to avoid this by
not setting work_mem too high in the first place. Whether this system
is for the best is arguable: one can certainly imagine a system where,
if there's not enough memory at execution time, we consider
alternatives like (a) replanning with a lower memory target, (b)
waiting until more memory is available, or (c) failing outright in
lieu of driving the machine into swap. But devising such a system is
complicated -- for example, replanning with a lower memory target
might latch onto a far more expensive plan, such that we would have
been better off waiting for more memory to be available; yet trying to
wait until more memory is available might result in waiting
forever. And that's why we don't have such a system.
We don't need to do any better here. The GUC should tell us how many
parallel workers we should anticipate being able to obtain. If other
settings on the system, or the overall system load, preclude us from
obtaining that number of parallel workers, then the query will take
longer to execute; and the plan might be sub-optimal. If that happens
frequently, the user should lower the planner GUC to a level that
reflects the resources actually likely to be available at execution
time.
By the way, another area where this kind of effect crops up is with
the presence of particular disk blocks in shared_buffers or the system
buffer cache. Right now, the planner makes no attempt to cost a scan
of a frequently-used, fully-cached relation different than a
rarely-used, probably-not-cached relation; and that sometimes leads to
bad plans. But if it did try to do that, then we'd have the same kind
of problem discussed here -- things might change between planning and
execution, or even after the beginning of execution. Also, we might
get nasty feedback effects: since the relation isn't cached, we view a
plan that would involve reading it in as very expensive, and avoid
such a plan. However, we might be better off picking the "slow" plan
anyway, because it might be that once we've read the data once it will
stay cached and run much more quickly than some plan that seems better
starting from a cold cache.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-05 15:21:07 |
Message-ID: | 20150105152107.GQ3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> I think it's right to view this in the same way we view work_mem. We
> plan on the assumption that an amount of memory equal to work_mem will
> be available at execution time, without actually reserving it.
Agreed- this seems like a good approach for how to address this. We
should still be able to end up with plans which use less than the max
possible parallel workers though, as I pointed out somewhere up-thread.
This is also similar to work_mem- we certainly have plans which don't
expect to use all of work_mem and others that expect to use all of it
(per node, of course).
Thanks,
Stephen
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-08 11:42:59 |
Message-ID: | CAA4eK1KLyPUz9MVz7FubM0W6ANSk+2mnCePLr7AUXW1iN0YNtQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Jan 5, 2015 at 8:31 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Fri, Jan 2, 2015 at 5:36 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> > On Thu, Jan 1, 2015 at 11:29 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
wrote:
> >> On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
> >> <fabriziomello(at)gmail(dot)com> wrote:
> >> > Can we check the number of free bgworkers slots to set the max
workers?
> >>
> >> The real solution here is that this patch can't throw an error if it's
> >> unable to obtain the desired number of background workers. It needs
> >> to be able to smoothly degrade to a smaller number of background
> >> workers, or none at all.
> >
> > I think handling this way can have one side effect which is that if
> > we degrade to smaller number, then the cost of plan (which was
> > decided by optimizer based on number of parallel workers) could
> > be more than non-parallel scan.
> > Ideally before finalizing the parallel plan we should reserve the
> > bgworkers required to execute that plan, but I think as of now
> > we can workout a solution without it.
>
> I don't think this is very practical. When cached plans are in use,
> we can have a bunch of plans sitting around that may or may not get
> reused at some point in the future, possibly far in the future. The
> current situation, which I think we want to maintain, is that such
> plans hold no execution-time resources (e.g. locks) and, generally,
> don't interfere with other things people might want to execute on the
> system. Nailing down a bunch of background workers just in case we
> might want to use them in the future would be pretty unfriendly.
>
> I think it's right to view this in the same way we view work_mem. We
> plan on the assumption that an amount of memory equal to work_mem will
> be available at execution time, without actually reserving it.
Are we sure that in such cases we will consume work_mem during
execution? In the case of parallel workers, we are sure to an extent
that if we reserve the workers then we will use them during execution.
Nonetheless, I have proceeded and integrated the parallel_seqscan
patch with v0.3 of the parallel_mode patch posted by you at the link below:
http://www.postgresql.org/message-id/CA+TgmoYmp_=XcJEhvJZt9P8drBgW-pDpjHxBhZA79+M4o-CZQA@mail.gmail.com
A few things to note about this integrated patch:
1. In this new patch, I have just integrated with Robert's parallel_mode
patch and not done any further development or fixed known open items
like changes in the optimizer, prepared queries, etc. You might notice
that the new patch is smaller than the previous one; the reason is that
there was some duplication between the previous version of the
parallel_seqscan patch and parallel_mode, which I have eliminated.
2. To enable the two types of shared memory queues (error queue and
tuple queue), we need to ensure that we switch to the appropriate queue
while sending the various messages from a parallel worker to the
master backend. There are two ways to do it:
a. Save the information about the error queue during startup of the
parallel worker (ParallelMain()) and then, on error, switch to it
(switch to the error queue in errstart() and switch back to the tuple
queue in errfinish(), and also in errstart() in case errstart() doesn't
need to propagate the error).
b. Do something similar to (a) for the tuple queue, in printtup or
wherever else non-error messages are sent.
I think approach (a) is slightly better than approach (b), as with (b)
we would need to switch many times for the tuple queue (once per tuple)
and there could be multiple places where we need to do so. For now,
I have used approach (a) in the patch, which needs some more work
if we agree on it.
3. The current implementation of parallel_seqscan needs some
information from parallel.c which was not exposed, so I have
exposed it by moving it to parallel.h. The information required is:
ParallelWorkerNumber, FixedParallelState and the shm keys --
these are used to decide the blocks that need to be scanned
(see the sketch after this list). We might change the way parallel
scan/work distribution is done in future, but I don't see any harm
in exposing this information.
4. Sending ReadyForQuery
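To make point 3 a bit more concrete, here is a minimal sketch of how a worker
could derive its block range from its worker number (names other than
ParallelWorkerNumber and BlockNumber are hypothetical, and the actual division
in the patch may differ):

    /*
     * Hypothetical sketch: equal fixed-size block ranges per worker,
     * with the last worker picking up any remainder.
     */
    static void
    worker_block_range(BlockNumber nblocks, int nworkers,
                       BlockNumber *start, BlockNumber *end)
    {
        BlockNumber per_worker = nblocks / nworkers;

        *start = ParallelWorkerNumber * per_worker;
        if (ParallelWorkerNumber == nworkers - 1)
            *end = nblocks;             /* last worker also scans the remainder */
        else
            *end = *start + per_worker;
    }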
> If the
> plan happens to need that amount of memory and if it actually isn't
> available when needed, then performance will suck; conceivably, the
> OOM killer might trigger. But it's the user's job to avoid this by
> not setting work_mem too high in the first place. Whether this system
> is for the best is arguable: one can certainly imagine a system where,
> if there's not enough memory at execution time, we consider
> alternatives like (a) replanning with a lower memory target, (b)
> waiting until more memory is available, or (c) failing outright in
> lieu of driving the machine into swap. But devising such a system is
> complicated -- for example, replanning with a lower memory target
> might be latch onto a far more expensive plan, such that we would have
> been better off waiting for more memory to be available; yet trying to
> waiting until more memory is available might result in waiting
> forever. And that's why we don't have such a system.
>
> We don't need to do any better here. The GUC should tell us how many
> parallel workers we should anticipate being able to obtain. If other
> settings on the system, or the overall system load, preclude us from
> obtaining that number of parallel workers, then the query will take
> longer to execute; and the plan might be sub-optimal. If that happens
> frequently, the user should lower the planner GUC to a level that
> reflects the resources actually likely to be available at execution
> time.
>
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-08 11:47:58 |
Message-ID: | CAA4eK1+hfDXBG2fit8BAd2jDADnGhmnNNfrSpqqin19V1XYeng@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 8, 2015 at 5:12 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Mon, Jan 5, 2015 at 8:31 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> >
Sorry for the incomplete mail sent prior to this; I just hit the send
button by mistake.
4. Sending ReadyForQuery() after all the tuples have been sent, as that
is required to know that all the tuples have been received; I think we
should send it on the tuple queue rather than on the error queue.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
parallel_seqscan_v3.patch | application/octet-stream | 75.2 KB |
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-08 19:32:15 |
Message-ID: | 54AEDB3F.1000806@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/5/15, 9:21 AM, Stephen Frost wrote:
> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>> I think it's right to view this in the same way we view work_mem. We
>> plan on the assumption that an amount of memory equal to work_mem will
>> be available at execution time, without actually reserving it.
>
> Agreed- this seems like a good approach for how to address this. We
> should still be able to end up with plans which use less than the max
> possible parallel workers though, as I pointed out somewhere up-thread.
> This is also similar to work_mem- we certainly have plans which don't
> expect to use all of work_mem and others that expect to use all of it
> (per node, of course).
I agree, but we should try to warn the user if they set parallel_seqscan_degree close to max_worker_processes, or at least give some indication of what's going on. This is something you could end up beating your head against, wondering why it's not working.
Perhaps we could have EXPLAIN throw a warning if a plan is likely to get less than parallel_seqscan_degree number of workers.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-08 19:46:18 |
Message-ID: | 20150108194617.GJ3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Jim Nasby (Jim(dot)Nasby(at)BlueTreble(dot)com) wrote:
> On 1/5/15, 9:21 AM, Stephen Frost wrote:
> >* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> >>I think it's right to view this in the same way we view work_mem. We
> >>plan on the assumption that an amount of memory equal to work_mem will
> >>be available at execution time, without actually reserving it.
> >
> >Agreed- this seems like a good approach for how to address this. We
> >should still be able to end up with plans which use less than the max
> >possible parallel workers though, as I pointed out somewhere up-thread.
> >This is also similar to work_mem- we certainly have plans which don't
> >expect to use all of work_mem and others that expect to use all of it
> >(per node, of course).
>
> I agree, but we should try and warn the user if they set parallel_seqscan_degree close to max_worker_processes, or at least give some indication of what's going on. This is something you could end up beating your head on wondering why it's not working.
>
> Perhaps we could have EXPLAIN throw a warning if a plan is likely to get less than parallel_seqscan_degree number of workers.
Yeah, if we come up with a plan for X workers and end up not being able
to spawn that many then I could see that being worth a warning or notice
or something. Not sure what EXPLAIN has to do with it, though.
Thanks,
Stephen
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-09 14:04:35 |
Message-ID: | CAA4eK1KrEKksUXf5ES_tk9BNbaSMjaHzzP4Vk7=AnQi9mtBvUQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
>
> On 1/5/15, 9:21 AM, Stephen Frost wrote:
>>
>> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>>>
>>> I think it's right to view this in the same way we view work_mem. We
>>> plan on the assumption that an amount of memory equal to work_mem will
>>> be available at execution time, without actually reserving it.
>>
>>
>> Agreed- this seems like a good approach for how to address this. We
>> should still be able to end up with plans which use less than the max
>> possible parallel workers though, as I pointed out somewhere up-thread.
>> This is also similar to work_mem- we certainly have plans which don't
>> expect to use all of work_mem and others that expect to use all of it
>> (per node, of course).
>
>
> I agree, but we should try and warn the user if they set
parallel_seqscan_degree close to max_worker_processes, or at least give
some indication of what's going on. This is something you could end up
beating your head on wondering why it's not working.
>
Yet another way to handle the case when enough workers are not
available is to let the user specify the desired minimum percentage of
requested parallel workers with a parameter like
PARALLEL_QUERY_MIN_PERCENT. For example, if you specify
50 for this parameter, then at least 50% of the parallel workers
requested for any parallel operation must be available in order for
the operation to succeed; otherwise it will give an error. If the value
is not set, then all parallel operations will proceed as long as at
least two parallel workers are available for processing.
This is how some other commercial databases handle such a
situation.
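As a rough sketch of what such a check could look like (the GUC and function
names below are hypothetical; this is only to illustrate the behaviour
described above):

    /* Hypothetical sketch: fail if too few of the requested workers started. */
    static void
    check_parallel_min_percent(int nrequested, int nobtained)
    {
        if (parallel_query_min_percent > 0 &&
            nobtained * 100 < nrequested * parallel_query_min_percent)
            ereport(ERROR,
                    (errmsg("only %d of %d requested parallel workers are available (minimum %d%%)",
                            nobtained, nrequested, parallel_query_min_percent)));
    }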
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-09 14:38:57 |
Message-ID: | CAA4eK1L0dk9D3hARoAb84v2pGvUw4B5YoS4x18ORQREwR+1VCg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Dec 19, 2014 at 7:57 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
>
> There's certainly documentation available from the other RDBMS' which
> already support parallel query, as one source. Other academic papers
> exist (and once you've linked into one, the references and prior work
> helps bring in others). Sadly, I don't currently have ACM access (might
> have to change that..), but there are publicly available papers also,
I have gone through a couple of papers on what some other databases
do for parallel sequential scan; here is a brief summary, along with
how I am planning to handle it in the patch:
Costing:
In one of the papers [1] suggested by you, the findings can be
summarised as:
a. Startup costs are negligible if processes can be reused
rather than created afresh.
b. Communication cost consists of the CPU cost of sending
and receiving messages.
c. Communication costs can exceed the cost of operators such
as scanning, joining or grouping.
These findings lead to the important conclusion that query
optimization should be concerned with communication costs
but not with startup costs.
In our case, as we currently don't have a mechanism to reuse parallel
workers, we need to account for that cost as well. Based on that,
I am planning to add three new parameters: cpu_tuple_comm_cost,
parallel_setup_cost and parallel_startup_cost.
* cpu_tuple_comm_cost - cost of CPU time to pass a tuple from a worker
to the master backend, with default value DEFAULT_CPU_TUPLE_COMM_COST
as 0.1; this will be multiplied by the number of tuples expected to
be selected.
* parallel_setup_cost - cost of setting up shared memory for
parallelism, with default value 100.0.
* parallel_startup_cost - cost of starting up parallel workers, with
default value 1000.0, multiplied by the number of workers decided for
the scan.
I will do some experiments to finalise the default values, but in general,
I feel developing the cost model on the above parameters is good.
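To show how the three parameters above might combine for a single path,
here is a sketch under those assumptions only (the function name is
hypothetical and this is not the patch's actual costing code; the three
parameters are assumed to be double-valued GUC variables):

    /* Hypothetical sketch: total cost of a parallel seq scan path. */
    static Cost
    cost_parallel_seqscan_sketch(Cost serial_run_cost, double output_tuples,
                                 int nworkers)
    {
        Cost        startup_cost;
        Cost        run_cost;

        /* one-time overhead: shared memory setup plus starting each worker */
        startup_cost = parallel_setup_cost + parallel_startup_cost * nworkers;

        /* CPU and disk cost divided among the workers */
        run_cost = serial_run_cost / nworkers;

        /* per-tuple cost of passing results back to the master backend */
        run_cost += cpu_tuple_comm_cost * output_tuples;

        return startup_cost + run_cost;
    }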
Execution:
Most other databases do partition-level scans, with each individual
parallel worker scanning a partition on a different disk. However,
it seems Amazon DynamoDB [2] also works on something similar to
what I have used in the patch, namely fixed blocks. I think this
kind of strategy is better than dividing the blocks at runtime,
because randomly dividing the blocks among workers could turn a
parallel sequential scan into a random scan.
Also, from whatever I have read (Oracle, DynamoDB), most databases
divide the work among workers and the master backend acts as
coordinator; at least that's what I could understand.
Let me know your opinion about the same.
I am planning to proceed with above ideas to strengthen the patch
in absence of any objection or better ideas.
[1] : http://i.stanford.edu/pub/cstr/reports/cs/tr/96/1570/CS-TR-96-1570.pdf
[2] : http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#QueryAndScanParallelScan
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-09 17:24:20 |
Message-ID: | 20150109172420.GB3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Amit,
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> On Fri, Dec 19, 2014 at 7:57 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > There's certainly documentation available from the other RDBMS' which
> > already support parallel query, as one source. Other academic papers
> > exist (and once you've linked into one, the references and prior work
> > helps bring in others). Sadly, I don't currently have ACM access (might
> > have to change that..), but there are publicly available papers also,
>
> I have gone through couple of papers and what some other databases
> do in case of parallel sequential scan and here is brief summarization
> of same and how I am planning to handle in the patch:
Great, thanks!
> Costing:
> In one of the paper's [1] suggested by you, below is the summarisation:
> a. Startup costs are negligible if processes can be reused
> rather than created afresh.
> b. Communication cost consists of the CPU cost of sending
> and receiving messages.
> c. Communication costs can exceed the cost of operators such
> as scanning, joining or grouping
> These findings lead to the important conclusion that
> Query optimization should be concerned with communication costs
> but not with startup costs.
>
> In our case as currently we don't have a mechanism to reuse parallel
> workers, so we need to account for that cost as well. So based on that,
> I am planing to add three new parameters cpu_tuple_comm_cost,
> parallel_setup_cost, parallel_startup_cost
> * cpu_tuple_comm_cost - Cost of CPU time to pass a tuple from worker
> to master backend with default value
> DEFAULT_CPU_TUPLE_COMM_COST as 0.1, this will be multiplied
> with tuples expected to be selected
> * parallel_setup_cost - Cost of setting up shared memory for parallelism
> with default value as 100.0
> * parallel_startup_cost - Cost of starting up parallel workers with
> default
> value as 1000.0 multiplied by number of workers decided for scan.
>
> I will do some experiments to finalise the default values, but in general,
> I feel developing cost model on above parameters is good.
The parameters sound reasonable but I'm a bit worried about the way
you're describing the implementation. Specifically this comment:
"Cost of starting up parallel workers with default value as 1000.0
multiplied by number of workers decided for scan."
That appears to imply that we'll decide on the number of workers, figure
out the cost, and then consider "parallel" as one path and
"not-parallel" as another. I'm worried that if I end up setting the max
parallel workers to 32 for my big, beefy, mostly-single-user system then
I'll actually end up not getting parallel execution because we'll always
be including the full startup cost of 32 threads. For huge queries,
it'll probably be fine, but there's a lot of room to parallelize things
at levels less than 32 which we won't even consider.
What I was advocating for up-thread was to consider multiple "parallel"
paths and to pick whichever ends up being the lowest overall cost. The
flip-side to that is increased planning time. Perhaps we can come up
with an efficient way of working out where the break-point is based on
the non-parallel cost and go at it from that direction instead of
building out whole paths for each increment of parallelism.
I'd really like to be able to set the 'max parallel' high and then have
the optimizer figure out how many workers should actually be spawned for
a given query.
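A minimal sketch of that idea (entirely hypothetical names; it simply reuses
the per-degree cost model proposed upthread as an assumption) would be to loop
over candidate degrees and keep the cheapest, treating zero as "stay serial":

    /* Hypothetical sketch: pick the cheapest degree of parallelism. */
    static int
    choose_parallel_degree(int max_degree, Cost serial_cost, double output_tuples)
    {
        int         best_degree = 0;        /* 0 means stay serial */
        Cost        best_cost = serial_cost;
        int         nworkers;

        for (nworkers = 1; nworkers <= max_degree; nworkers++)
        {
            /* assumed costing following the proposed parallel cost model */
            Cost        c = parallel_setup_cost +
                            parallel_startup_cost * nworkers +
                            serial_cost / nworkers +
                            cpu_tuple_comm_cost * output_tuples;

            if (c < best_cost)
            {
                best_cost = c;
                best_degree = nworkers;
            }
        }

        return best_degree;
    }

Whether the extra planning time is acceptable would of course depend on how
cheap each per-degree cost evaluation can be made.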
> Execution:
> Most other databases does partition level scan for partition on
> different disks by each individual parallel worker. However,
> it seems amazon dynamodb [2] also works on something
> similar to what I have used in patch which means on fixed
> blocks. I think this kind of strategy seems better than dividing
> the blocks at runtime because dividing randomly the blocks
> among workers could lead to random scan for a parallel
> sequential scan.
Yeah, we also need to consider the i/o side of this, which will
definitely be tricky. There are i/o systems out there which are faster
than a single CPU and ones where a single CPU can manage multiple i/o
channels. There are also cases where the i/o system handles sequential
access nearly as fast as random and cases where sequential is much
faster than random. Where we can get an idea of that distinction is
with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
lower random_page_cost from the default to indicate that.
> Also I find in whatever I have read (Oracle, dynamodb) that most
> databases divide work among workers and master backend acts
> as coordinator, atleast that's what I could understand.
Yeah, I agree that's more typical. Robert's point that the master
backend should participate is interesting but, as I recall, it was based
on the idea that the master could finish faster than the workers; but if
that's the case, then we've planned it out wrong from the beginning.
Thanks!
Stephen
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-09 19:01:01 |
Message-ID: | 20150109190101.GD3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Amit,
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
> > I agree, but we should try and warn the user if they set
> > parallel_seqscan_degree close to max_worker_processes, or at least give
> > some indication of what's going on. This is something you could end up
> > beating your head on wondering why it's not working.
>
> Yet another way to handle the case when enough workers are not
> available is to let user specify the desired minimum percentage of
> requested parallel workers with parameter like
> PARALLEL_QUERY_MIN_PERCENT. For example, if you specify
> 50 for this parameter, then at least 50% of the parallel workers
> requested for any parallel operation must be available in order for
> the operation to succeed else it will give error. If the value is set to
> null, then all parallel operations will proceed as long as at least two
> parallel workers are available for processing.
Ugh. I'm not a fan of this. Based on how we're talking about modeling
this, if we decide to parallelize at all, then we expect it to be a win.
I don't like the idea of throwing an error if, at execution time, we end
up not being able to actually get the number of workers we want;
instead, we should degrade gracefully all the way back to serial, if
necessary. Perhaps we should send a NOTICE or something along those
lines to let the user know we weren't able to get the level of
parallelization that the plan originally asked for, but I really don't
like just throwing an error.
Now, for debugging purposes, I could see such a parameter being
available but it should default to 'off/never-fail'.
Thanks,
Stephen
From: | Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-09 21:15:08 |
Message-ID: | 54B044DC.4070104@kaltenbrunner.cc |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 01/09/2015 08:01 PM, Stephen Frost wrote:
> Amit,
>
> * Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
>> On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
>>> I agree, but we should try and warn the user if they set
>>> parallel_seqscan_degree close to max_worker_processes, or at least give
>>> some indication of what's going on. This is something you could end up
>>> beating your head on wondering why it's not working.
>>
>> Yet another way to handle the case when enough workers are not
>> available is to let user specify the desired minimum percentage of
>> requested parallel workers with parameter like
>> PARALLEL_QUERY_MIN_PERCENT. For example, if you specify
>> 50 for this parameter, then at least 50% of the parallel workers
>> requested for any parallel operation must be available in order for
>> the operation to succeed else it will give error. If the value is set to
>> null, then all parallel operations will proceed as long as at least two
>> parallel workers are available for processing.
>
> Ugh. I'm not a fan of this.. Based on how we're talking about modeling
> this, if we decide to parallelize at all, then we expect it to be a win.
> I don't like the idea of throwing an error if, at execution time, we end
> up not being able to actually get the number of workers we want-
> instead, we should degrade gracefully all the way back to serial, if
> necessary. Perhaps we should send a NOTICE or something along those
> lines to let the user know we weren't able to get the level of
> parallelization that the plan originally asked for, but I really don't
> like just throwing an error.
Yeah, this seems like the behaviour I would expect: if we can't get
enough parallel workers we should just use as many as we can get.
Anything else, and especially erroring out, will just cause random
application failures and easy DoS vectors.
I think all we need initially is the ability to specify a "maximum number
of workers per query" as well as a "maximum number of workers in total
for parallel operations".
>
> Now, for debugging purposes, I could see such a parameter being
> available but it should default to 'off/never-fail'.
Not sure what it really would be useful for - if I execute a query I
would truly expect it to get answered - if it can be made faster by
doing it in parallel that's nice, but why would I want it to fail?
Stefan
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-09 21:34:22 |
Message-ID: | 20150109213422.GJ3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Stefan Kaltenbrunner (stefan(at)kaltenbrunner(dot)cc) wrote:
> On 01/09/2015 08:01 PM, Stephen Frost wrote:
> > Now, for debugging purposes, I could see such a parameter being
> > available but it should default to 'off/never-fail'.
>
> Not sure what it really would be useful for - if I execute a query I
> would truly expect it to get answered - if it can be made faster by
> doing it in parallel that's nice, but why would I want it to fail?
I was thinking for debugging only, though I'm not really sure why you'd
need it if you get a NOTICE when you don't end up with all the workers
you expect.
Thanks,
Stephen
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-10 00:14:25 |
Message-ID: | 54B06EE1.2030805@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/9/15, 3:34 PM, Stephen Frost wrote:
> * Stefan Kaltenbrunner (stefan(at)kaltenbrunner(dot)cc) wrote:
>> On 01/09/2015 08:01 PM, Stephen Frost wrote:
>>> Now, for debugging purposes, I could see such a parameter being
>>> available but it should default to 'off/never-fail'.
>>
>> Not sure what it really would be useful for - if I execute a query I
>> would truly expect it to get answered - if it can be made faster by
>> doing it in parallel that's nice, but why would I want it to fail?
>
> I was thinking for debugging only, though I'm not really sure why you'd
> need it if you get a NOTICE when you don't end up with all the workers
> you expect.
Yeah, debugging is my concern as well. You're working on a query, you expect it to be using parallelism, and EXPLAIN is showing it's not. Now you're scratching your head.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-10 00:28:20 |
Message-ID: | 54B07224.6080203@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/9/15, 11:24 AM, Stephen Frost wrote:
> What I was advocating for up-thread was to consider multiple "parallel"
> paths and to pick whichever ends up being the lowest overall cost. The
> flip-side to that is increased planning time. Perhaps we can come up
> with an efficient way of working out where the break-point is based on
> the non-parallel cost and go at it from that direction instead of
> building out whole paths for each increment of parallelism.
I think at some point we'll need the ability to stop planning part-way through for queries producing really small estimates. If the first estimate you get is 1000 units, does it really make sense to do something like try every possible join permutation, or attempt to parallelize?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-10 04:59:02 |
Message-ID: | CAA4eK1KtGfdMSF=UpAqcba43H3PC7w67ea7gEojUmYm68cBMGg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 9, 2015 at 10:54 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> * Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> > In our case as currently we don't have a mechanism to reuse parallel
> > workers, so we need to account for that cost as well. So based on that,
> > I am planing to add three new parameters cpu_tuple_comm_cost,
> > parallel_setup_cost, parallel_startup_cost
> > * cpu_tuple_comm_cost - Cost of CPU time to pass a tuple from worker
> > to master backend with default value
> > DEFAULT_CPU_TUPLE_COMM_COST as 0.1, this will be multiplied
> > with tuples expected to be selected
> > * parallel_setup_cost - Cost of setting up shared memory for parallelism
> > with default value as 100.0
> > * parallel_startup_cost - Cost of starting up parallel workers with
> > default
> > value as 1000.0 multiplied by number of workers decided for scan.
> >
> > I will do some experiments to finalise the default values, but in general,
> > I feel developing cost model on above parameters is good.
>
> The parameters sound reasonable but I'm a bit worried about the way
> you're describing the implementation. Specifically this comment:
>
> "Cost of starting up parallel workers with default value as 1000.0
> multiplied by number of workers decided for scan."
>
> That appears to imply that we'll decide on the number of workers, figure
> out the cost, and then consider "parallel" as one path and
> "not-parallel" as another. I'm worried that if I end up setting the max
> parallel workers to 32 for my big, beefy, mostly-single-user system then
> I'll actually end up not getting parallel execution because we'll always
> be including the full startup cost of 32 threads. For huge queries,
> it'll probably be fine, but there's a lot of room to parallelize things
> at levels less than 32 which we won't even consider.
>
Actually, the main factor in deciding whether a parallel plan gets
selected will be the selectivity and cpu_tuple_comm_cost;
parallel_startup_cost is mainly there to prevent the cases where the user
has set parallel_seqscan_degree but the table is small enough
(let us say 10,000 tuples) that it doesn't need parallelism. If you are
worried about the default values of the cost parameters, then I think those
still need to be decided based on some experiments.
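To make the shape of that trade-off concrete, here is a small standalone sketch (not the patch's actual costing code) that plugs in the parameter names and default values proposed above; the serial formula is reduced to per-page and per-tuple terms, and the per-worker split is assumed to be perfectly even:

```c
#include <stdio.h>

/* Proposed parameters from this thread, with the defaults mentioned above. */
static const double cpu_tuple_comm_cost   = 0.1;    /* per tuple sent to master    */
static const double parallel_setup_cost   = 100.0;  /* dynamic shared memory setup */
static const double parallel_startup_cost = 1000.0; /* per worker started          */

/* Simplified stand-ins for the existing planner constants. */
static const double seq_page_cost  = 1.0;
static const double cpu_tuple_cost = 0.01;

static double
serial_cost(double pages, double tuples)
{
    return pages * seq_page_cost + tuples * cpu_tuple_cost;
}

static double
parallel_cost(double pages, double tuples, double tuples_returned, int workers)
{
    /* Scan work is assumed to divide evenly across the workers. */
    double per_worker = serial_cost(pages, tuples) / workers;

    return per_worker
           + parallel_setup_cost
           + parallel_startup_cost * workers
           + cpu_tuple_comm_cost * tuples_returned;
}

int
main(void)
{
    /* A small table: the startup terms dominate, so serial should win. */
    printf("small table: serial %.1f vs parallel(2) %.1f\n",
           serial_cost(100, 10000), parallel_cost(100, 10000, 100, 2));

    /* A large table with a selective qual: parallel should win. */
    printf("large table: serial %.1f vs parallel(8) %.1f\n",
           serial_cost(1000000, 1e8), parallel_cost(1000000, 1e8, 1000, 8));
    return 0;
}
```

The exact terms would obviously have to come out of the experiments mentioned above; the sketch only shows how the startup and per-tuple communication costs push small scans back to serial.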
> What I was advocating for up-thread was to consider multiple "parallel"
> paths and to pick whichever ends up being the lowest overall cost. The
> flip-side to that is increased planning time.
The main idea behind providing a parameter like parallel_seqscan_degree
is that it will try to use that many workers for a single
parallel operation (intra-node parallelism), and in case we have to perform
inter-node parallelism, having such a parameter means that each
node can use that many parallel workers. For example, if we have
to parallelize the scan as well as the sort (Select * from t1 order by c1) and
parallel_degree is specified as 2, then the scan and the sort can each use
2 parallel workers.
This is somewhat similar to how degree of parallelism (DOP)
works in other databases; refer to the Oracle documentation [1] (Setting
Degree of Parallelism).
I don't deny that making the optimizer smarter about choosing parallel
plans is an idea worth exploring, but it seems to me it is an advanced
topic which will become more valuable when we try to parallelize joins or
other similar operations; most papers discuss it only in that context.
At this moment, ensuring that a parallel plan is not selected
for cases where it will perform poorly is more than enough, considering
we have lots of other work left to make any parallel operation work at all.
> Perhaps we can come up
> with an efficient way of working out where the break-point is based on
> the non-parallel cost and go at it from that direction instead of
> building out whole paths for each increment of parallelism.
>
> I'd really like to be able to set the 'max parallel' high and then have
> the optimizer figure out how many workers should actually be spawned for
> a given query.
>
> > Execution:
> > Most other databases does partition level scan for partition on
> > different disks by each individual parallel worker. However,
> > it seems amazon dynamodb [2] also works on something
> > similar to what I have used in patch which means on fixed
> > blocks. I think this kind of strategy seems better than dividing
> > the blocks at runtime because dividing randomly the blocks
> > among workers could lead to random scan for a parallel
> > sequential scan.
>
> Yeah, we also need to consider the i/o side of this, which will
> definitely be tricky. There are i/o systems out there which are faster
> than a single CPU and ones where a single CPU can manage multiple i/o
> channels. There are also cases where the i/o system handles sequential
> access nearly as fast as random and cases where sequential is much
> faster than random. Where we can get an idea of that distinction is
> with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
> lower random_page_cost from the default to indicate that.
>
I am not clear: do you expect anything different in the execution strategy
from what I have mentioned, or does that sound reasonable to you?
[1]: http://docs.oracle.com/cd/A57673_01/DOC/server/doc/A48506/pqoconce.htm
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-10 05:22:20 |
Message-ID: | CAA4eK1J7opEd_VcCU=mROeMQo8mWMYC-xMV3Ln13YZODSFfqPw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sat, Jan 10, 2015 at 2:45 AM, Stefan Kaltenbrunner
<stefan(at)kaltenbrunner(dot)cc> wrote:
>
> On 01/09/2015 08:01 PM, Stephen Frost wrote:
> > Amit,
> >
> > * Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> >> On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
> >>> I agree, but we should try and warn the user if they set
> >>> parallel_seqscan_degree close to max_worker_processes, or at least give
> >>> some indication of what's going on. This is something you could end up
> >>> beating your head on wondering why it's not working.
> >>
> >> Yet another way to handle the case when enough workers are not
> >> available is to let user specify the desired minimum percentage of
> >> requested parallel workers with parameter like
> >> PARALLEL_QUERY_MIN_PERCENT. For example, if you specify
> >> 50 for this parameter, then at least 50% of the parallel workers
> >> requested for any parallel operation must be available in order for
> >> the operation to succeed else it will give error. If the value is set to
> >> null, then all parallel operations will proceed as long as at least two
> >> parallel workers are available for processing.
> >
>>
> > Now, for debugging purposes, I could see such a parameter being
> > available but it should default to 'off/never-fail'.
>
> Not sure what it really would be useful for - if I execute a query I
> would truly expect it to get answered - if it can be made faster by
> doing it in parallel that's nice, but why would I want it to fail?
>
One use case where I could imagine it being useful is when the
query would take many hours if run sequentially but could
finish in minutes if run with 16 parallel workers. Now let us
say that at execution time less than 30% of the requested parallel
workers are available; that might not be acceptable to the user, who
would rather wait for some time and run the query again. And if he wants
to run the query even if only 2 workers are available, he can choose not
to set such a parameter.
Having said that, I also feel this doesn't seem to be an important enough
case to introduce a new parameter and such a behaviour. I mentioned it
only because I have seen how some other databases handle
such a situation. Let's forget this suggestion if we can't imagine any
use for such a parameter.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-10 18:03:37 |
Message-ID: | 20150110180336.GQ3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> At this moment if we can ensure that parallel plan should not be selected
> for cases where it will perform poorly is more than enough considering
> we have lots of other work left to even make any parallel operation work.
The problem with this approach is that it doesn't consider any options
between 'serial' and 'parallelize by factor X'. If the startup cost is
1000 and the factor is 32, then a seqscan which costs 31000 won't ever
be parallelized, even though a factor of 8 would have parallelized it.
You could forget about the per-process startup cost entirely, in fact,
and simply say "only parallelize if it's more than X".
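As a toy illustration of that point (the numbers are made up to match the example above: per-worker startup cost 1000, serial scan cost 31000, and the optimistic assumption that the scan cost divides evenly across workers), considering only the maximum degree rejects parallelism that a smaller degree would have won:

```c
#include <stdio.h>

int
main(void)
{
    const double startup_per_worker = 1000.0;
    const double serial_cost = 31000.0;

    for (int workers = 2; workers <= 32; workers *= 2)
    {
        double parallel = serial_cost / workers + startup_per_worker * workers;

        printf("%2d workers: cost %.0f -> %s\n", workers, parallel,
               parallel < serial_cost ? "cheaper than serial" : "more expensive");
    }
    return 0;
}
```

With these numbers, 2 through 16 workers all beat the serial cost, while 32 workers do not, which is exactly the case where a fixed maximum degree leaves parallelism on the table.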
Again, I don't like the idea of designing this with the assumption that
the user dictates the right level of parallelization for each and every
query. I'd love to go out and tell users "set the factor to the number
of CPUs you have and we'll just use what makes sense."
The same goes for max number of backends. If we set the parallel level
to the number of CPUs and set the max backends to the same, then we end
up with only one parallel query running at a time, ever. That's
terrible. Now, we could set the parallel level lower or set the max
backends higher, but either way we're going to end up either using less
than we could or over-subscribing, neither of which is good.
I agree that this makes it a bit different from work_mem, but in this
case there's an overall max in the form of the maximum number of
background workers. If we had something similar for work_mem, then we
could set that higher and still trust the system to only use the amount
of memory necessary (eg: a hashjoin doesn't use all available work_mem
and neither does a sort, unless the set is larger than available
memory).
> > > Execution:
> > > Most other databases does partition level scan for partition on
> > > different disks by each individual parallel worker. However,
> > > it seems amazon dynamodb [2] also works on something
> > > similar to what I have used in patch which means on fixed
> > > blocks. I think this kind of strategy seems better than dividing
> > > the blocks at runtime because dividing randomly the blocks
> > > among workers could lead to random scan for a parallel
> > > sequential scan.
> >
> > Yeah, we also need to consider the i/o side of this, which will
> > definitely be tricky. There are i/o systems out there which are faster
> > than a single CPU and ones where a single CPU can manage multiple i/o
> > channels. There are also cases where the i/o system handles sequential
> > access nearly as fast as random and cases where sequential is much
> > faster than random. Where we can get an idea of that distinction is
> > with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
> > lower random_page_cost from the default to indicate that.
> >
> I am not clear, do you expect anything different in execution strategy
> than what I have mentioned or does that sound reasonable to you?
What I'd like is a way to figure out the right amount of CPU for each
tablespace (0.25, 1, 2, 4, etc) and then use that many. Using a single
CPU for each tablespace is likely to starve the CPU or starve the I/O
system and I'm not sure if there's a way to address that.
Note that I intentionally said tablespace there because that's how users
can tell us what the different i/o channels are. I realize this ends up
going beyond the current scope, but the parallel seqscan at the per
relation level will only ever be using one i/o channel. It'd be neat if
we could work out how fast that i/o channel is vs. the CPUs and
determine how many CPUs are necessary to keep up with the i/o channel
and then use more-or-less exactly that many for the scan.
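A back-of-envelope sketch of that idea (all of the numbers below are made up; nothing like this exists in the patch): if we knew roughly how fast the tablespace's i/o channel is and how fast one CPU can process pages, the useful worker count is roughly their ratio, clamped to the configured maximum:

```c
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double io_mb_per_sec       = 800.0;  /* assumed i/o channel throughput */
    double cpu_scan_mb_per_sec = 150.0;  /* assumed per-CPU scan rate      */
    int    max_workers         = 16;     /* configured ceiling             */

    int workers = (int) ceil(io_mb_per_sec / cpu_scan_mb_per_sec);

    if (workers > max_workers)
        workers = max_workers;
    if (workers < 1)
        workers = 1;

    printf("use %d worker(s) to keep up with this i/o channel\n", workers);
    return 0;
}
```

The hard part, of course, is getting trustworthy numbers for either rate; the sketch only shows the shape of the calculation.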
I agree that some of this can come later but I worry that starting out
with a design that expects to always be told exactly how many CPUs to
use when running a parallel query will be difficult to move away from
later.
Thanks,
Stephen
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 03:39:26 |
Message-ID: | CA+TgmobBZ=0n=JcS28hBxVBaSXeZHBQCnxVzCTUSPMe1zsuGdw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 8, 2015 at 6:42 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Are we sure that in such cases we will consume work_mem during
> execution? In cases of parallel_workers we are sure to an extent
> that if we reserve the workers then we will use it during execution.
> Nonetheless, I have proceded and integrated the parallel_seq scan
> patch with v0.3 of parallel_mode patch posted by you at below link:
> http://www.postgresql.org/message-id/CA+TgmoYmp_=XcJEhvJZt9P8drBgW-pDpjHxBhZA79+M4o-CZQA@mail.gmail.com
That depends on the costing model. It makes no sense to do a parallel
sequential scan on a small relation, because the user backend can scan
the whole thing itself faster than the workers can start up. I
suspect it may also be true that the useful amount of parallelism
increases the larger the relation gets (but maybe not).
> 2. To enable two types of shared memory queue's (error queue and
> tuple queue), we need to ensure that we switch to appropriate queue
> during communication of various messages from parallel worker
> to master backend. There are two ways to do it
> a. Save the information about error queue during startup of parallel
> worker (ParallelMain()) and then during error, set the same (switch
> to error queue in errstart() and switch back to tuple queue in
> errfinish() and errstart() in case errstart() doesn't need to
> propagate
> error).
> b. Do something similar as (a) for tuple queue in printtup or other
> place
> if any for non-error messages.
> I think approach (a) is slightly better as compare to approach (b) as
> we need to switch many times for tuple queue (for each tuple) and
> there could be multiple places where we need to do the same. For now,
> I have used approach (a) in Patch which needs some more work if we
> agree on the same.
I don't think you should be "switching" queues. The tuples should be
sent to the tuple queue, and errors and notices to the error queue.
> 3. As per current implementation of Parallel_seqscan, it needs to use
> some information from parallel.c which was not exposed, so I have
> exposed the same by moving it to parallel.h. Information that is required
> is as follows:
> ParallelWorkerNumber, FixedParallelState and shm keys -
> This is used to decide the blocks that needs to be scanned.
> We might change it in future the way parallel scan/work distribution
> is done, but I don't see any harm in exposing this information.
Hmm. I can see why ParallelWorkerNumber might need to be exposed, but
the other stuff seems like it shouldn't be.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 03:40:58 |
Message-ID: | CA+Tgmoa_oVw5FUJoyE_7C5UudKiSL92ZkXpjVzU37wqLANxzKA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> Yeah, if we come up with a plan for X workers and end up not being able
> to spawn that many then I could see that being worth a warning or notice
> or something. Not sure what EXPLAIN has to do anything with it..
That seems mighty odd to me. If there are 8 background worker
processes available, and you allow each session to use at most 4, then
when there are >2 sessions trying to do parallelism at the same time,
they might not all get their workers. Emitting a notice for that
seems like it would be awfully chatty.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 03:46:07 |
Message-ID: | CA+TgmoZxTeVEHm6p96YMsZtWr6J9dgGoaC_ZTKnzLLvfBH9QEw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 9, 2015 at 12:24 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> The parameters sound reasonable but I'm a bit worried about the way
> you're describing the implementation. Specifically this comment:
>
> "Cost of starting up parallel workers with default value as 1000.0
> multiplied by number of workers decided for scan."
>
> That appears to imply that we'll decide on the number of workers, figure
> out the cost, and then consider "parallel" as one path and
> "not-parallel" as another. [...]
> I'd really like to be able to set the 'max parallel' high and then have
> the optimizer figure out how many workers should actually be spawned for
> a given query.
+1.
> Yeah, we also need to consider the i/o side of this, which will
> definitely be tricky. There are i/o systems out there which are faster
> than a single CPU and ones where a single CPU can manage multiple i/o
> channels. There are also cases where the i/o system handles sequential
> access nearly as fast as random and cases where sequential is much
> faster than random. Where we can get an idea of that distinction is
> with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
> lower random_page_cost from the default to indicate that.
On my MacOS X system, I've already seen cases where my parallel_count
module runs incredibly slowly some of the time. I believe that this
is because having multiple workers reading the relation block-by-block
at the same time causes the OS to fail to realize that it needs to do
aggressive readahead. I suspect we're going to need to account for
this somehow.
> Yeah, I agree that's more typical. Robert's point that the master
> backend should participate is interesting but, as I recall, it was based
> on the idea that the master could finish faster than the worker- but if
> that's the case then we've planned it out wrong from the beginning.
So, if the workers have been started but aren't keeping up, the master
should do nothing until they produce tuples rather than participating?
That doesn't seem right.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 04:14:47 |
Message-ID: | CAA4eK1JiPPwaXF3XrSXuTdfzcVEForCKrRo6jnPriFLU8rROJQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sun, Jan 11, 2015 at 9:09 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Thu, Jan 8, 2015 at 6:42 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > 2. To enable two types of shared memory queue's (error queue and
> > tuple queue), we need to ensure that we switch to appropriate queue
> > during communication of various messages from parallel worker
> > to master backend. There are two ways to do it
> > a. Save the information about error queue during startup of parallel
> > worker (ParallelMain()) and then during error, set the same (switch
> > to error queue in errstart() and switch back to tuple queue in
> > errfinish() and errstart() in case errstart() doesn't need to
> > propagate
> > error).
> > b. Do something similar as (a) for tuple queue in printtup or other
> > place
> > if any for non-error messages.
> > I think approach (a) is slightly better as compare to approach (b) as
> > we need to switch many times for tuple queue (for each tuple) and
> > there could be multiple places where we need to do the same. For now,
> > I have used approach (a) in Patch which needs some more work if we
> > agree on the same.
>
> I don't think you should be "switching" queues. The tuples should be
> sent to the tuple queue, and errors and notices to the error queue.
>
To achieve what you said (tuples sent to the tuple queue, and errors
and notices to the error queue), we need to switch the queues.
The difficulty here is that once we set the queue (using
pq_redirect_to_shm_mq()) through which the communication has to
happen, it keeps being used unless we change the queue again
using pq_redirect_to_shm_mq(). For example, assume we have
initially set the error queue (using pq_redirect_to_shm_mq()); then, to
send tuples, we need to call pq_redirect_to_shm_mq() to
make the tuple queue the one used for communication,
and if an error happens we need to do the same again for the error
queue.
Do you have any other idea to achieve the same?
> > 3. As per current implementation of Parallel_seqscan, it needs to use
> > some information from parallel.c which was not exposed, so I have
> > exposed the same by moving it to parallel.h. Information that is required
> > is as follows:
> > ParallelWorkerNumber, FixedParallelState and shm keys -
> > This is used to decide the blocks that needs to be scanned.
> > We might change it in future the way parallel scan/work distribution
> > is done, but I don't see any harm in exposing this information.
>
> Hmm. I can see why ParallelWorkerNumber might need to be exposed, but
> the other stuff seems like it shouldn't be.
>
It depends upon how we decide to divide the scan of blocks
among backend workers. In its current form, the patch needs to know
whether the worker is the last one (and I have used workers_expected
to achieve that; I know that is not the right thing, but I need
something similar if we decide to do it the way I have proposed),
so that it can scan all the remaining blocks.
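For reference, a self-contained sketch of the block-division scheme being described (the function name and arithmetic are illustrative, not the patch's code): blocks are split evenly and the last worker picks up whatever remainder is left:

```c
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int BlockNumber;

/* Compute the contiguous block range a given worker should scan, with the
 * last worker taking the remainder left over after an even split. */
static void
worker_block_range(BlockNumber total_blocks, int num_workers, int worker_number,
                   BlockNumber *start, BlockNumber *nblocks)
{
    BlockNumber per_worker = total_blocks / num_workers;
    bool        is_last    = (worker_number == num_workers - 1);

    *start   = per_worker * (BlockNumber) worker_number;
    *nblocks = is_last ? total_blocks - *start : per_worker;
}

int
main(void)
{
    BlockNumber start, nblocks;

    for (int w = 0; w < 3; w++)
    {
        worker_block_range(1000, 3, w, &start, &nblocks);
        printf("worker %d scans blocks %u..%u\n", w, start, start + nblocks - 1);
    }
    return 0;
}
```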
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 10:27:22 |
Message-ID: | 20150111102722.GR3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > Yeah, if we come up with a plan for X workers and end up not being able
> > to spawn that many then I could see that being worth a warning or notice
> > or something. Not sure what EXPLAIN has to do anything with it..
>
> That seems mighty odd to me. If there are 8 background worker
> processes available, and you allow each session to use at most 4, then
> when there are >2 sessions trying to do parallelism at the same time,
> they might not all get their workers. Emitting a notice for that
> seems like it would be awfully chatty.
Yeah, agreed, it could get quite noisy. Did you have another thought
for how to address the concern raised? Specifically, that you might not
get as many workers as you thought you would?
Thanks,
Stephen
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 11:01:58 |
Message-ID: | 20150111110158.GS3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Fri, Jan 9, 2015 at 12:24 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > Yeah, we also need to consider the i/o side of this, which will
> > definitely be tricky. There are i/o systems out there which are faster
> > than a single CPU and ones where a single CPU can manage multiple i/o
> > channels. There are also cases where the i/o system handles sequential
> > access nearly as fast as random and cases where sequential is much
> > faster than random. Where we can get an idea of that distinction is
> > with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
> > lower random_page_cost from the default to indicate that.
>
> On my MacOS X system, I've already seen cases where my parallel_count
> module runs incredibly slowly some of the time. I believe that this
> is because having multiple workers reading the relation block-by-block
> at the same time causes the OS to fail to realize that it needs to do
> aggressive readahead. I suspect we're going to need to account for
> this somehow.
So, for my 2c, I've long expected us to parallelize at the relation-file
level for these kinds of operations. This goes back to my other
thoughts on how we should be thinking about parallelizing inbound data
for bulk data loads but it seems appropriate to consider it here also.
One of the issues there is that 1G still feels like an awful lot for a
minimum work size for each worker and it would mean we don't parallelize
for relations less than that size.
On a random VM on my personal server, an uncached 1G read takes over
10s. Cached it's less than half that, of course. This is all spinning
rust (and only 7200 RPM at that) and there's a lot of other stuff going
on but that still seems like too much of a chunk to give to one worker
unless the overall data set to go through is really large.
There are other issues in there too, of course: if we're dumping data in
like this then we have to either deal with jagged relation files somehow
or pad the file out to 1G, and that doesn't even get into the issues
around how we'd have to redesign the interfaces for relation access and
how this thinking is an utter violation of the modularity we currently
have there.
> > Yeah, I agree that's more typical. Robert's point that the master
> > backend should participate is interesting but, as I recall, it was based
> > on the idea that the master could finish faster than the worker- but if
> > that's the case then we've planned it out wrong from the beginning.
>
> So, if the workers have been started but aren't keeping up, the master
> should do nothing until they produce tuples rather than participating?
> That doesn't seem right.
Having the master jump in and start working could screw things up also
though. Perhaps we need the master to start working as a fail-safe but
not plan on having things go that way? Having more processes trying to
do X doesn't always result in things getting better and the master needs
to keep up with all the tuples being thrown at it from the workers.
Thanks,
Stephen
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 11:09:31 |
Message-ID: | 20150111110931.GT3062@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Amit,
* Amit Kapila (amit(dot)kapila16(at)gmail(dot)com) wrote:
> On Sun, Jan 11, 2015 at 9:09 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> > I don't think you should be "switching" queues. The tuples should be
> > sent to the tuple queue, and errors and notices to the error queue.
Agreed.
> To achieve what you said (The tuples should be sent to the tuple
> queue, and errors and notices to the error queue.), we need to
> switch the queues.
> The difficulty here is that once we set the queue (using
> pq_redirect_to_shm_mq()) through which the communication has to
> happen, it will use the same unless we change again the queue
> using pq_redirect_to_shm_mq(). For example, assume we have
> initially set error queue (using pq_redirect_to_shm_mq()) then to
> send tuples, we need to call pq_redirect_to_shm_mq() to
> set the tuple queue as the queue that needs to be used for communication
> and again if error happens then we need to do the same for error
> queue.
> Do you have any other idea to achieve the same?
I think what Robert's getting at here is that pq_redirect_to_shm_mq()
might be fine for the normal data heading back, but we need something
separate for errors and notices. Switching everything back and forth
between the normal and error queues definitely doesn't sound right to
me- they need to be independent.
In other words, you need to be able to register a "normal data" queue
and then you need to also register a "error/notice" queue and have
errors and notices sent there directly. Going off of what I recall,
can't this be done by having the callbacks which are registered for
sending data back look at what they're being asked to send and then
decide which queue it's appropriate for out of the set which have been
registered so far?
Thanks,
Stephen
From: | Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 15:47:18 |
Message-ID: | 54B29B06.8010105@kaltenbrunner.cc |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 01/11/2015 11:27 AM, Stephen Frost wrote:
> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>> On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>>> Yeah, if we come up with a plan for X workers and end up not being able
>>> to spawn that many then I could see that being worth a warning or notice
>>> or something. Not sure what EXPLAIN has to do anything with it..
>>
>> That seems mighty odd to me. If there are 8 background worker
>> processes available, and you allow each session to use at most 4, then
>> when there are >2 sessions trying to do parallelism at the same time,
>> they might not all get their workers. Emitting a notice for that
>> seems like it would be awfully chatty.
>
> Yeah, agreed, it could get quite noisy. Did you have another thought
> for how to address the concern raised? Specifically, that you might not
> get as many workers as you thought you would?
Wild idea: what about dealing with it as some sort of statistic - i.e.
track some global counts in the stats collector, or on a per-query basis
in pg_stat_activity and/or through pg_stat_statements?
Not sure why it is that important to get it on a per-query basis; imho it
is simply a configuration limit we have set (similar to work_mem or
when switching to geqo) - we don't report "per query" through a
notice/warning there either (though the effect is kind of visible in explain).
Stefan
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 21:55:54 |
Message-ID: | CA+TgmobFhPjUL-KokTsGmnoHYrqSybO8dn-g6QaOgPvR8SREuA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sat, Jan 10, 2015 at 11:14 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> I don't think you should be "switching" queues. The tuples should be
>> sent to the tuple queue, and errors and notices to the error queue.
> To achieve what you said (The tuples should be sent to the tuple
> queue, and errors and notices to the error queue.), we need to
> switch the queues.
> The difficulty here is that once we set the queue (using
> pq_redirect_to_shm_mq()) through which the communication has to
> happen, it will use the same unless we change again the queue
> using pq_redirect_to_shm_mq(). For example, assume we have
> initially set error queue (using pq_redirect_to_shm_mq()) then to
> send tuples, we need to call pq_redirect_to_shm_mq() to
> set the tuple queue as the queue that needs to be used for communication
> and again if error happens then we need to do the same for error
> queue.
> Do you have any other idea to achieve the same?
Yeah, you need two separate global variables pointing to shm_mq
objects, one of which gets used by pqmq.c for errors and the other of
which gets used by printtup.c for tuples.
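A self-contained sketch of that arrangement (the types and helper names below are invented for illustration; in the real patch the handles would be shm_mq handles used by pqmq.c and printtup.c respectively): each sending path writes to its own queue, so nothing ever needs to be switched back and forth:

```c
#include <stdio.h>

/* Stand-in for a shared memory queue handle; purely illustrative. */
typedef struct MsgQueue
{
    const char *name;
} MsgQueue;

/* Two independent globals: one for the tuple stream, one for errors/notices. */
static MsgQueue *tuple_queue;
static MsgQueue *error_queue;

static void
queue_send(MsgQueue *q, const char *msg)
{
    printf("[%s queue] %s\n", q->name, msg);   /* stand-in for a real send */
}

/* printtup-like path: always writes to the tuple queue. */
static void
send_tuple(const char *tuple)
{
    queue_send(tuple_queue, tuple);
}

/* ereport/notice-like path: always writes to the error queue. */
static void
send_notice(const char *msg)
{
    queue_send(error_queue, msg);
}

int
main(void)
{
    MsgQueue tq = { "tuple" };
    MsgQueue eq = { "error" };

    tuple_queue = &tq;
    error_queue = &eq;

    send_tuple("(1, 'foo')");
    send_notice("NOTICE: worker started");
    send_tuple("(2, 'bar')");
    return 0;
}
```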
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 21:57:05 |
Message-ID: | CA+TgmoY2uksfjivmYwhBpBs=DPTJ0pk1b7A+gfLjoE-Vn8F_Ug@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sun, Jan 11, 2015 at 5:27 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>> On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> > Yeah, if we come up with a plan for X workers and end up not being able
>> > to spawn that many then I could see that being worth a warning or notice
>> > or something. Not sure what EXPLAIN has to do anything with it..
>>
>> That seems mighty odd to me. If there are 8 background worker
>> processes available, and you allow each session to use at most 4, then
>> when there are >2 sessions trying to do parallelism at the same time,
>> they might not all get their workers. Emitting a notice for that
>> seems like it would be awfully chatty.
>
> Yeah, agreed, it could get quite noisy. Did you have another thought
> for how to address the concern raised? Specifically, that you might not
> get as many workers as you thought you would?
I'm not sure why that's a condition in need of special reporting.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 22:00:13 |
Message-ID: | CA+TgmobcE7JiB+q5ZBvRGE=ZLndiVNB7F3=E1We4aN5k6X6m6g@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> So, for my 2c, I've long expected us to parallelize at the relation-file
> level for these kinds of operations. This goes back to my other
> thoughts on how we should be thinking about parallelizing inbound data
> for bulk data loads but it seems appropriate to consider it here also.
> One of the issues there is that 1G still feels like an awful lot for a
> minimum work size for each worker and it would mean we don't parallelize
> for relations less than that size.
Yes, I think that's a killer objection.
> [ .. ] and
> how this thinking is an utter violation of the modularity we currently
> have there.
As is that.
My thinking is more along the lines that we might need to issue
explicit prefetch requests when doing a parallel sequential scan, to
make up for any failure of the OS to do that for us.
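A rough sketch of what an explicit prefetch could look like at the OS level (assuming 8kB blocks and a plain file descriptor; the real patch would go through PostgreSQL's own buffer and storage layers, and the file name here is just a placeholder):

```c
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLCKSZ 8192

/* Ask the kernel to start reading a range of blocks we expect to scan soon. */
static void
prefetch_block_range(int fd, unsigned int start_block, unsigned int nblocks)
{
#ifdef POSIX_FADV_WILLNEED
    int rc = posix_fadvise(fd,
                           (off_t) start_block * BLCKSZ,
                           (off_t) nblocks * BLCKSZ,
                           POSIX_FADV_WILLNEED);

    if (rc != 0)
        fprintf(stderr, "posix_fadvise: error %d\n", rc);
#endif
}

int
main(void)
{
    /* "relation_segment" is a placeholder path for this example. */
    int fd = open("relation_segment", O_RDONLY);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* Hint that this worker's next chunk (blocks 0..127) will be read. */
    prefetch_block_range(fd, 0, 128);

    close(fd);
    return 0;
}
```

Note that posix_fadvise() returns the error number directly rather than setting errno, and that the hint is advisory: the kernel is free to ignore it.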
>> So, if the workers have been started but aren't keeping up, the master
>> should do nothing until they produce tuples rather than participating?
>> That doesn't seem right.
>
> Having the master jump in and start working could screw things up also
> though.
I don't think there's any reason why that should screw things up.
There's no reason why the master's participation should look any
different from one more worker. Look at my parallel_count code on the
other thread to see what I mean: the master and all the workers are
running the same code, and if fewer workers show up than expected, or
run unduly slowly, it's easily tolerated.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-11 22:01:58 |
Message-ID: | CA+TgmobhdWToL6uvGTVEBjD59_scEA0PvnhR4RT_rO9F_1ZAqQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sun, Jan 11, 2015 at 6:09 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> I think what Robert's getting at here is that pq_redirect_to_shm_mq()
> might be fine for the normal data heading back, but we need something
> separate for errors and notices. Switching everything back and forth
> between the normal and error queues definitely doesn't sound right to
> me- they need to be independent.
You've got that backwards. pq_redirect_to_shm_mq() handles errors and
notices, but we need something separate for the tuple stream.
> In other words, you need to be able to register a "normal data" queue
> and then you need to also register a "error/notice" queue and have
> errors and notices sent there directly. Going off of what I recall,
> can't this be done by having the callbacks which are registered for
> sending data back look at what they're being asked to send and then
> decide which queue it's appropriate for out of the set which have been
> registered so far?
It's pretty simple, really. The functions that need to use the tuple
queue are in printtup.c; those, and only those, need to be modified to
write to the other queue.
Or, possibly, we should pass the tuples around in their native format
instead of translating them into binary form and then reconstituting
them on the other end.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-12 03:34:09 |
Message-ID: | CAA4eK1KfG2Yfx8m+OqDGPkHdK8-NWqzYVfcHURwNH0AmfJaQaw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Jan 12, 2015 at 3:30 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >> So, if the workers have been started but aren't keeping up, the master
> >> should do nothing until they produce tuples rather than participating?
> >> That doesn't seem right.
> >
> > Having the master jump in and start working could screw things up also
> > though.
>
> I don't think there's any reason why that should screw things up.
Consider the case of inter-node parallelism: there, the master
backend will have four responsibilities (scan the relation, receive tuples
from other workers, send tuples to other workers, send tuples to the
frontend) if we make it act like a worker.
For example:
Select * from t1 Order By c1;
Here it first needs to perform the parallel sequential scan and then
feed the tuples from the scan to another parallel worker which is doing the sort.
It seems to me that the master backend could starve itself of resources trying
to do all that work in an optimized way. As an example, one case could be:
the master backend reads one page into memory (shared buffers), then
reads one tuple and applies the qualification, and in the meantime the
queues on which it needs to receive tuples fill up and it becomes busy
fetching tuples from those queues; now the page which it has read from
disk will stay pinned in shared buffers for a longer time, and even if we
release such a page, it has to be read again. OTOH, if the master backend
chooses to read all the tuples from a page before checking the status
of the queues, that can lead to a lot of data piling up in the queues.
I think there can be more such scenarios where having the master backend
do many things can turn out to have a negative impact.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-12 03:44:19 |
Message-ID: | CAA4eK1JFmSqjmm6xZTc5XRAywKHZWmKHTGnGVKKoABCJOuhO7g@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Jan 12, 2015 at 3:27 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sun, Jan 11, 2015 at 5:27 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> >> On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >> > Yeah, if we come up with a plan for X workers and end up not being able
> >> > to spawn that many then I could see that being worth a warning or notice
> >> > or something. Not sure what EXPLAIN has to do anything with it..
> >>
> >> That seems mighty odd to me. If there are 8 background worker
> >> processes available, and you allow each session to use at most 4, then
> >> when there are >2 sessions trying to do parallelism at the same time,
> >> they might not all get their workers. Emitting a notice for that
> >> seems like it would be awfully chatty.
> >
> > Yeah, agreed, it could get quite noisy. Did you have another thought
> > for how to address the concern raised? Specifically, that you might not
> > get as many workers as you thought you would?
>
> I'm not sure why that's a condition in need of special reporting.
>
So what should happen if no workers are available?
I don't think we can change the plan to a non-parallel one at that
stage.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-12 08:17:12 |
Message-ID: | 54B38308.9080507@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/11/15 3:57 PM, Robert Haas wrote:
> On Sun, Jan 11, 2015 at 5:27 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>>> On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>>>> Yeah, if we come up with a plan for X workers and end up not being able
>>>> to spawn that many then I could see that being worth a warning or notice
>>>> or something. Not sure what EXPLAIN has to do anything with it..
>>>
>>> That seems mighty odd to me. If there are 8 background worker
>>> processes available, and you allow each session to use at most 4, then
>>> when there are >2 sessions trying to do parallelism at the same time,
>>> they might not all get their workers. Emitting a notice for that
>>> seems like it would be awfully chatty.
>>
>> Yeah, agreed, it could get quite noisy. Did you have another thought
>> for how to address the concern raised? Specifically, that you might not
>> get as many workers as you thought you would?
>
> I'm not sure why that's a condition in need of special reporting.
The case raised before (which I think is valid) is: what if you have a query that is massively parallel? You expect it to get 60 cores on the server and take 10 minutes; instead it gets 10 and takes an hour (or worse, 1 and takes 10 hours).
Maybe it's not worth dealing with that in the first version, but I expect it will come up very quickly. We'd better make sure we're not painting ourselves into a corner.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | John Gorman <johngorman2(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-13 11:25:10 |
Message-ID: | CALkS6B_HBPPzSWuUQsS_S=OD-WtkRc9j2C+LubgDqJ05gigrug@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sun, Jan 11, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > So, for my 2c, I've long expected us to parallelize at the relation-file
> > level for these kinds of operations. This goes back to my other
> > thoughts on how we should be thinking about parallelizing inbound data
> > for bulk data loads but it seems appropriate to consider it here also.
> > One of the issues there is that 1G still feels like an awful lot for a
> > minimum work size for each worker and it would mean we don't parallelize
> > for relations less than that size.
>
> Yes, I think that's a killer objection.
One approach that has worked well for me is to break big jobs into much
smaller bite-size tasks. Each task is small enough to complete quickly.
We add the tasks to a task queue and spawn a generic worker pool which eats
through the task queue items.
This solves a lot of problems.
- Small to medium jobs can be parallelized efficiently.
- No need to split big jobs perfectly.
- We don't get into a situation where we are waiting around for a worker to
finish chugging through a huge task while the other workers sit idle.
- Worker memory footprint is tiny so we can afford many of them.
- Worker pool management is a well known problem.
- Worker spawn time disappears as a cost factor.
- The worker pool becomes a shared resource that can be managed and
reported on and becomes considerably more predictable.
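A toy sketch of that scheme, using plain pthreads and an atomic counter (none of this is PostgreSQL code, and the chunk size and worker count are arbitrary): the relation is cut into small fixed-size chunks of blocks and each worker keeps claiming the next chunk until none are left:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define TOTAL_BLOCKS 1000   /* size of the (imaginary) relation in blocks */
#define CHUNK_BLOCKS 64     /* one bite-size task                         */
#define NUM_WORKERS  4

static atomic_uint next_block = 0;

static void *
worker(void *arg)
{
    int          id = (int) (long) arg;
    unsigned int start;

    /* Keep claiming the next chunk until the whole relation is handed out. */
    while ((start = atomic_fetch_add(&next_block, CHUNK_BLOCKS)) < TOTAL_BLOCKS)
    {
        unsigned int end = start + CHUNK_BLOCKS;

        if (end > TOTAL_BLOCKS)
            end = TOTAL_BLOCKS;
        printf("worker %d scans blocks %u..%u\n", id, start, end - 1);
    }
    return NULL;
}

int
main(void)
{
    pthread_t threads[NUM_WORKERS];

    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *) i);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```

(Build with -pthread.) Because chunks are claimed dynamically, a slow worker simply ends up processing fewer chunks instead of holding everyone else back.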
From: | John Gorman <johngorman2(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-13 12:08:41 |
Message-ID: | CALkS6B8-A8uSG0J9a1fiGS_Q1BnL3aqovdZXYJKSeFLJZQb0Tw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 13, 2015 at 7:25 AM, John Gorman <johngorman2(at)gmail(dot)com> wrote:
>
>
> On Sun, Jan 11, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
>
>> On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost(at)snowman(dot)net>
>> wrote:
>> > So, for my 2c, I've long expected us to parallelize at the relation-file
>> > level for these kinds of operations. This goes back to my other
>> > thoughts on how we should be thinking about parallelizing inbound data
>> > for bulk data loads but it seems appropriate to consider it here also.
>> > One of the issues there is that 1G still feels like an awful lot for a
>> > minimum work size for each worker and it would mean we don't parallelize
>> > for relations less than that size.
>>
>> Yes, I think that's a killer objection.
>
>
> One approach that I has worked well for me is to break big jobs into much
> smaller bite size tasks. Each task is small enough to complete quickly.
>
> We add the tasks to a task queue and spawn a generic worker pool which
> eats through the task queue items.
>
> This solves a lot of problems.
>
> - Small to medium jobs can be parallelized efficiently.
> - No need to split big jobs perfectly.
> - We don't get into a situation where we are waiting around for a worker
> to finish chugging through a huge task while the other workers sit idle.
> - Worker memory footprint is tiny so we can afford many of them.
> - Worker pool management is a well known problem.
> - Worker spawn time disappears as a cost factor.
> - The worker pool becomes a shared resource that can be managed and
> reported on and becomes considerably more predictable.
>
>
I forgot to mention that a running task queue can provide metrics such as
current utilization, current average throughput, current queue length and
estimated queue wait time. These can become dynamic cost factors in
deciding whether to parallelize.
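As a rough illustration (none of these names exist in any posted patch),
the counters such a queue might keep, and how an estimated wait time
could be derived from them, could look something like this:

/* Counters a running task queue could maintain (updated under the
 * queue's existing lock) for use as dynamic cost factors. */
typedef struct TaskQueueStats
{
    uint64      tasks_enqueued;     /* total tasks ever added */
    uint64      tasks_completed;    /* total tasks finished */
    int         workers_busy;       /* workers currently running a task */
    int         workers_total;      /* size of the worker pool */
    double      tasks_per_second;   /* recent average throughput */
} TaskQueueStats;

/* Rough estimate of how long a newly queued task would wait. */
static double
estimated_queue_wait(const TaskQueueStats *stats)
{
    uint64      pending = stats->tasks_enqueued - stats->tasks_completed;

    if (stats->tasks_per_second <= 0.0)
        return 0.0;
    return (double) pending / stats->tasks_per_second;
}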
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | John Gorman <johngorman2(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-14 03:42:57 |
Message-ID: | CAA4eK1LBNGHVEHw_QAzCS-Pjdyxzs+tbUNwBLe7oBJwXojQ4cw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 13, 2015 at 4:55 PM, John Gorman <johngorman2(at)gmail(dot)com> wrote:
>
>
>
> On Sun, Jan 11, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
wrote:
>>
>> On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost(at)snowman(dot)net>
wrote:
>> > So, for my 2c, I've long expected us to parallelize at the
relation-file
>> > level for these kinds of operations. This goes back to my other
>> > thoughts on how we should be thinking about parallelizing inbound data
>> > for bulk data loads but it seems appropriate to consider it here also.
>> > One of the issues there is that 1G still feels like an awful lot for a
>> > minimum work size for each worker and it would mean we don't
parallelize
>> > for relations less than that size.
>>
>> Yes, I think that's a killer objection.
>
>
> One approach that I has worked well for me is to break big jobs into much
smaller bite size tasks. Each task is small enough to complete quickly.
>
Here we have to decide what the strategy should be and how much
each worker should scan. As an example, one strategy could be:
if the table size is X MB and there are 8 workers, then divide
the work as X/8 MB for each worker (which I have currently used
in the patch). Another could be that each worker scans one block
at a time and then checks some global structure to see which block
it needs to scan next; in my view this could lead to a random
scan pattern. I have read that some other databases also divide the
work based on partitions or segments (the size of a segment is not
very clear).
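For clarity, the first strategy (the equal-division one currently in the
patch) amounts to something like the sketch below; the helper name and
signature are invented for illustration:

/* Give each of nworkers a contiguous chunk of roughly nblocks/nworkers
 * blocks; the last worker picks up whatever remainder is left over. */
static void
assign_worker_range(BlockNumber nblocks, int nworkers, int worker_id,
                    BlockNumber *start, BlockNumber *count)
{
    BlockNumber per_worker = nblocks / nworkers;

    *start = per_worker * worker_id;
    *count = per_worker;

    if (worker_id == nworkers - 1)
        *count = nblocks - *start;  /* remainder goes to the last worker */
}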
> We add the tasks to a task queue and spawn a generic worker pool which
eats through the task queue items.
>
> This solves a lot of problems.
>
> - Small to medium jobs can be parallelized efficiently.
> - No need to split big jobs perfectly.
> - We don't get into a situation where we are waiting around for a worker
to finish chugging through a huge task while the other workers sit idle.
> - Worker memory footprint is tiny so we can afford many of them.
> - Worker pool management is a well known problem.
> - Worker spawn time disappears as a cost factor.
> - The worker pool becomes a shared resource that can be managed and
reported on and becomes considerably more predictable.
>
Yeah, it is a good idea to maintain a shared worker pool, but it seems
to me that for the initial version it is still meaningful to make
parallel sequential scan work even if the workers are not shared.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | John Gorman <johngorman2(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-14 04:00:22 |
Message-ID: | CAFjFpRfbW5+1+kDYiEny-5NciL_YJ4TdRf+LRmvPazmLrpSzLA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 14, 2015 at 9:12 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> On Tue, Jan 13, 2015 at 4:55 PM, John Gorman <johngorman2(at)gmail(dot)com>
> wrote:
> >
> >
> >
> > On Sun, Jan 11, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
> >>
> >> On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost(at)snowman(dot)net>
> wrote:
> >> > So, for my 2c, I've long expected us to parallelize at the
> relation-file
> >> > level for these kinds of operations. This goes back to my other
> >> > thoughts on how we should be thinking about parallelizing inbound data
> >> > for bulk data loads but it seems appropriate to consider it here also.
> >> > One of the issues there is that 1G still feels like an awful lot for a
> >> > minimum work size for each worker and it would mean we don't
> parallelize
> >> > for relations less than that size.
> >>
> >> Yes, I think that's a killer objection.
> >
> >
> > One approach that I has worked well for me is to break big jobs into
> much smaller bite size tasks. Each task is small enough to complete quickly.
> >
>
> Here we have to decide what should be the strategy and how much
> each worker should scan. As an example one of the the strategy
> could be if the table size is X MB and there are 8 workers, then
> divide the work as X/8 MB for each worker (which I have currently
> used in patch) and another could be each worker does scan
> 1 block at a time and then check some global structure to see which
> next block it needs to scan, according to me this could lead to random
> scan. I have read that some other databases also divide the work
> based on partitions or segments (size of segment is not very clear).
>
A block can contain useful tuples, i.e. tuples which are visible and
fulfil the quals, as well as useless tuples, i.e. tuples which are dead,
invisible, or do not fulfil the quals. Depending upon the contents of
these blocks, especially the ratio of useful to useless tuples, each
worker may take a different amount of time even though we divide the
relation into equal-sized runs. So, instead of dividing the relation
into as many runs as there are workers, it might be better to divide it
into fixed-size runs with size < (total number of blocks / number of
workers), and let a worker pick up a new run after it finishes the
previous one. The smaller the runs, the better the load balancing, but
the higher the cost of starting each run. So we have to strike a
balance.
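A minimal sketch of that run-based handout (all names here are
illustrative, not from the patch): a shared allocator hands out
fixed-size runs, and a worker comes back for another run when it
finishes the previous one.

typedef struct RunAllocator
{
    slock_t     mutex;
    BlockNumber next_block;     /* first block not yet handed out */
    BlockNumber last_block;     /* number of blocks in the relation */
    BlockNumber run_size;       /* well below nblocks / nworkers */
} RunAllocator;

/* Claim the next run; returns false when the relation is exhausted. */
static bool
claim_next_run(RunAllocator *ra, BlockNumber *start, BlockNumber *nblocks)
{
    bool        found = false;

    SpinLockAcquire(&ra->mutex);
    if (ra->next_block < ra->last_block)
    {
        *start = ra->next_block;
        *nblocks = Min(ra->run_size, ra->last_block - ra->next_block);
        ra->next_block += *nblocks;
        found = true;
    }
    SpinLockRelease(&ra->mutex);
    return found;
}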
>
>
> > We add the tasks to a task queue and spawn a generic worker pool which
> eats through the task queue items.
> >
> > This solves a lot of problems.
> >
> > - Small to medium jobs can be parallelized efficiently.
> > - No need to split big jobs perfectly.
> > - We don't get into a situation where we are waiting around for a worker
> to finish chugging through a huge task while the other workers sit idle.
> > - Worker memory footprint is tiny so we can afford many of them.
> > - Worker pool management is a well known problem.
> > - Worker spawn time disappears as a cost factor.
> > - The worker pool becomes a shared resource that can be managed and
> reported on and becomes considerably more predictable.
> >
>
> Yeah, it is good idea to maintain shared worker pool, but it seems
> to me that for initial version even if the workers are not shared,
> then also it is meaningful to make parallel sequential scan work.
>
>
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>
--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | John Gorman <johngorman2(at)gmail(dot)com> |
Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-14 21:25:52 |
Message-ID: | CA+Tgmoaoj8kf6ft9O1E=T3+XCrRoKr4sWBVfoXdzFaDCH+=M+Q@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 13, 2015 at 6:25 AM, John Gorman <johngorman2(at)gmail(dot)com> wrote:
> One approach that I has worked well for me is to break big jobs into much
> smaller bite size tasks. Each task is small enough to complete quickly.
>
> We add the tasks to a task queue and spawn a generic worker pool which eats
> through the task queue items.
>
> This solves a lot of problems.
>
> - Small to medium jobs can be parallelized efficiently.
> - No need to split big jobs perfectly.
> - We don't get into a situation where we are waiting around for a worker to
> finish chugging through a huge task while the other workers sit idle.
> - Worker memory footprint is tiny so we can afford many of them.
> - Worker pool management is a well known problem.
> - Worker spawn time disappears as a cost factor.
> - The worker pool becomes a shared resource that can be managed and reported
> on and becomes considerably more predictable.
I think this is a good idea, but for now I would like to keep our
goals somewhat more modest: let's see if we can get parallel
sequential scan, and only parallel sequential scan, working and
committed. Ultimately, I think we may need something like what you're
talking about, because if you have a query with three or six or twelve
different parallelizable operations in it, you want the available CPU
resources to switch between those as their respective needs may
dictate. You certainly don't want to spawn a separate pool of workers
for each scan.
But I think getting that all working in the first version is probably
harder than what we should attempt. We have a bunch of problems to
solve here just around parallel sequential scan and the parallel mode
infrastructure: heavyweight locking, prefetching, the cost model, and
so on. Trying to add to that all of the problems that might attend on
a generic task queueing infrastructure fills me with no small amount
of fear.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-15 02:00:45 |
Message-ID: | 54B71F4D.3020303@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/13/15 9:42 PM, Amit Kapila wrote:
> As an example one of the the strategy
> could be if the table size is X MB and there are 8 workers, then
> divide the work as X/8 MB for each worker (which I have currently
> used in patch) and another could be each worker does scan
> 1 block at a time and then check some global structure to see which
> next block it needs to scan, according to me this could lead to random
> scan. I have read that some other databases also divide the work
> based on partitions or segments (size of segment is not very clear).
Long-term I think we'll want a mix between the two approaches. Simply doing something like blkno % num_workers is going to cause imbalances, but trying to do this on a per-block basis seems like too much overhead.
Also long-term, I think we also need to look at a more specialized version of parallelism at the IO layer. For example, during an index scan you'd really like to get IO requests for heap blocks started in the background while the backend is focused on the mechanics of the index scan itself. While this could be done with the stuff Robert has written I have to wonder if it'd be a lot more efficient to use fadvise or AIO. Or perhaps it would just be better to deal with an entire index page (remembering TIDs) and then hit the heap.
But I agree with Robert; there's a lot yet to be done just to get *any* kind of parallel execution working before we start thinking about how to optimize it.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com> |
Cc: | John Gorman <johngorman2(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-15 05:55:08 |
Message-ID: | CAA4eK1Ju_Rn4j=kwgwY4vbfEdSfNTZCGqxUhFdkWF0JXm2pt3w@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 14, 2015 at 9:30 AM, Ashutosh Bapat <
ashutosh(dot)bapat(at)enterprisedb(dot)com> wrote:
>
> On Wed, Jan 14, 2015 at 9:12 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>>
>> Here we have to decide what should be the strategy and how much
>> each worker should scan. As an example one of the the strategy
>> could be if the table size is X MB and there are 8 workers, then
>> divide the work as X/8 MB for each worker (which I have currently
>> used in patch) and another could be each worker does scan
>> 1 block at a time and then check some global structure to see which
>> next block it needs to scan, according to me this could lead to random
>> scan. I have read that some other databases also divide the work
>> based on partitions or segments (size of segment is not very clear).
>
>
> A block can contain useful tuples, i.e tuples which are visible and
fulfil the quals + useless tuples i.e. tuples which are dead, invisible or
that do not fulfil the quals. Depending upon the contents of these blocks,
esp. the ratio of (useful tuples)/(unuseful tuples), even though we divide
the relation into equal sized runs, each worker may take different time.
So, instead of dividing the relation into number of run = number of
workers, it might be better to divide them into fixed sized runs with size
< (total number of blocks/ number of workers), and let a worker pick up a
run after it finishes with the previous one. The smaller the size of runs
the better load balancing but higher cost of starting with the run. So, we
have to strike a balance.
>
I think your suggestion is good and it falls somewhat in line
with what Robert has suggested, but instead of block-by-block
you seem to be suggesting doing it in chunks (where the chunk size
is not clear). The only point against this is that such a
strategy for parallel sequential scan could lead to random scans,
which can hurt the operation badly. Nonetheless, I will think more
along these lines of making the work distribution dynamic so that
we can ensure that all workers are kept busy.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-15 13:27:56 |
Message-ID: | CAA4eK1LQJhF5iMZF6kf4c8_sba+qDDswX2G9vXiEtocc-XE6hw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Jan 12, 2015 at 3:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sat, Jan 10, 2015 at 11:14 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> >> I don't think you should be "switching" queues. The tuples should be
> >> sent to the tuple queue, and errors and notices to the error queue.
> > To achieve what you said (The tuples should be sent to the tuple
> > queue, and errors and notices to the error queue.), we need to
> > switch the queues.
> > The difficulty here is that once we set the queue (using
> > pq_redirect_to_shm_mq()) through which the communication has to
> > happen, it will use the same unless we change again the queue
> > using pq_redirect_to_shm_mq(). For example, assume we have
> > initially set error queue (using pq_redirect_to_shm_mq()) then to
> > send tuples, we need to call pq_redirect_to_shm_mq() to
> > set the tuple queue as the queue that needs to be used for communication
> > and again if error happens then we need to do the same for error
> > queue.
> > Do you have any other idea to achieve the same?
>
> Yeah, you need two separate global variables pointing to shm_mq
> objects, one of which gets used by pqmq.c for errors and the other of
> which gets used by printtup.c for tuples.
>
Okay, I will try to change it in the suggested way, without
switching queues, but then we need to handle the 'T', 'D', and
'C' messages separately.
I have moved this patch to the next CF, as apart from the above I
still have to work on the execution strategy and the optimizer-related
changes discussed in this thread.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-16 18:19:48 |
Message-ID: | CA+TgmobTSjwyQKfPCEDqUUJegsUOaKDwv3h-=WcLo7mhv05xMw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 14, 2015 at 9:00 PM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
> Simply doing
> something like blkno % num_workers is going to cause imbalances,
Yes.
> but trying
> to do this on a per-block basis seems like too much overhead.
...but no. Or at least, I doubt it. The cost of handing out blocks
one at a time is that, for each block, a worker's got to grab a
spinlock, increment and record the block number counter, and release
the spinlock. Or, use an atomic add. Now, it's true that spinlock
cycles and atomic ops can sometimes impose severe overhead, but
you have to look at it as a percentage of the overall work being done.
In this case, the backend has to read, pin, and lock the page and
process every tuple on the page. Processing every tuple on the page
may involve de-TOASTing the tuple (leading to many more page
accesses), or evaluating a complex expression, or hitting CLOG to
check visibility, but even if it doesn't, I think the amount of work
that it takes to process all the tuples on the page will be far larger
than the cost of one atomic increment operation per block.
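For reference, the per-block handout being costed here is about this
much code (a sketch only; the names are invented, and it assumes the
pg_atomic_* API from port/atomics.h):

typedef struct BlockCounter
{
    pg_atomic_uint32    next_block;     /* in dynamic shared memory */
    uint32              nblocks;        /* blocks in the relation */
} BlockCounter;

/* One atomic add per block; returns InvalidBlockNumber when done. */
static BlockNumber
next_block_to_scan(BlockCounter *bc)
{
    uint32      blkno = pg_atomic_fetch_add_u32(&bc->next_block, 1);

    if (blkno >= bc->nblocks)
        return InvalidBlockNumber;
    return (BlockNumber) blkno;
}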
As mentioned downthread, a far bigger consideration is the I/O pattern
we create. A sequential scan is so-called because it reads the
relation sequentially. If we destroy that property, we will be more
than slightly sad. It might be OK to do sequential scans of, say,
each 1GB segment separately, but I'm pretty sure it would be a real
bad idea to read 8kB at a time at blocks 0, 64, 128, 1, 65, 129, ...
What I'm thinking about is that we might have something like this:
struct this_lives_in_dynamic_shared_memory
{
BlockNumber last_block;
Size prefetch_distance;
Size prefetch_increment;
slock_t mutex;
BlockNumber next_prefetch_block;
BlockNumber next_scan_block;
};
Each worker takes the mutex and checks whether next_prefetch_block -
next_scan_block < prefetch_distance and also whether
next_prefetch_block < last_block. If both are true, it prefetches
some number of additional blocks, as specified by prefetch_increment.
Otherwise, it increments next_scan_block and scans the block
corresponding to the old value.
So in this way, the prefetching runs ahead of the scan by a
configurable amount (prefetch_distance), which should be chosen so
that the prefetches have time to complete before the scan actually
reaches those blocks. Right now, of course, we rely on the operating
system to prefetch for sequential scans, but I have a strong hunch
that may not work on all systems if there are multiple processes doing
the reads.
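Spelling that decision out (a sketch only; the helper name, the
scan-side placeholder, and the use of PrefetchBuffer are my own
illustration, not settled design), a single worker step might look
roughly like this:

static void
parallel_scan_step(struct this_lives_in_dynamic_shared_memory *pstate,
                   Relation rel)
{
    bool        do_prefetch = false;
    BlockNumber blkno = InvalidBlockNumber;
    BlockNumber prefetch_start = InvalidBlockNumber;
    Size        i;

    SpinLockAcquire(&pstate->mutex);
    if (pstate->next_prefetch_block - pstate->next_scan_block <
            pstate->prefetch_distance &&
        pstate->next_prefetch_block < pstate->last_block)
    {
        /* run the prefetch pointer ahead by prefetch_increment blocks */
        do_prefetch = true;
        prefetch_start = pstate->next_prefetch_block;
        pstate->next_prefetch_block += pstate->prefetch_increment;
    }
    else if (pstate->next_scan_block < pstate->last_block)
        blkno = pstate->next_scan_block++;      /* scan the old value */
    SpinLockRelease(&pstate->mutex);

    if (do_prefetch)
    {
        for (i = 0; i < pstate->prefetch_increment &&
             prefetch_start + i < pstate->last_block; i++)
            PrefetchBuffer(rel, MAIN_FORKNUM, prefetch_start + i);
    }
    else if (blkno != InvalidBlockNumber)
    {
        /* read, pin, and lock the block; process every tuple on it */
    }
}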
Now, what of other strategies like dividing up the relation into 1GB
chunks and reading each one in a separate process? We could certainly
DO that, but what advantage does it have over this? The only benefit
I can see is that you avoid accessing a data structure of the type
shown above for every block, but that only matters if that cost is
material, and I tend to think it won't be. On the flip side, it means
that the granularity for dividing up work between processes is now
very coarse - when there are less than 6GB of data left in a relation,
at most 6 processes can work on it. That might be OK if the data is
being read in from disk anyway, but it's certainly not the best we can
do when the data is in memory.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-17 04:27:42 |
Message-ID: | CAA4eK1LbM9vqvDS-s4kF1gFfR53M3N7LhetLWcxS7k0Gq2rQQA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 16, 2015 at 11:49 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> As mentioned downthread, a far bigger consideration is the I/O pattern
> we create. A sequential scan is so-called because it reads the
> relation sequentially. If we destroy that property, we will be more
> than slightly sad. It might be OK to do sequential scans of, say,
> each 1GB segment separately, but I'm pretty sure it would be a real
> bad idea to read 8kB at a time at blocks 0, 64, 128, 1, 65, 129, ...
>
> What I'm thinking about is that we might have something like this:
>
> struct this_lives_in_dynamic_shared_memory
> {
> BlockNumber last_block;
> Size prefetch_distance;
> Size prefetch_increment;
> slock_t mutex;
> BlockNumber next_prefetch_block;
> BlockNumber next_scan_block;
> };
>
> Each worker takes the mutex and checks whether next_prefetch_block -
> next_scan_block < prefetch_distance and also whether
> next_prefetch_block < last_block. If both are true, it prefetches
> some number of additional blocks, as specified by prefetch_increment.
> Otherwise, it increments next_scan_block and scans the block
> corresponding to the old value.
>
Assuming we will increment next_prefetch_block only after prefetching
the blocks (equivalent to prefetch_increment), couldn't 2 workers
simultaneously see the same value for next_prefetch_block and try to
prefetch the same blocks?
What will be the value of prefetch_increment?
Will it be equal to prefetch_distance, or prefetch_distance/2, or
prefetch_distance/4, or .. , or will it be totally unrelated
to prefetch_distance?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-17 04:39:51 |
Message-ID: | CA+TgmoZ=U9x+gHCwUH2iajWUfO8jTxQaOk1P9rHR+saCsZ5HaA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 16, 2015 at 11:27 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Fri, Jan 16, 2015 at 11:49 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> As mentioned downthread, a far bigger consideration is the I/O pattern
>> we create. A sequential scan is so-called because it reads the
>> relation sequentially. If we destroy that property, we will be more
>> than slightly sad. It might be OK to do sequential scans of, say,
>> each 1GB segment separately, but I'm pretty sure it would be a real
>> bad idea to read 8kB at a time at blocks 0, 64, 128, 1, 65, 129, ...
>>
>> What I'm thinking about is that we might have something like this:
>>
>> struct this_lives_in_dynamic_shared_memory
>> {
>> BlockNumber last_block;
>> Size prefetch_distance;
>> Size prefetch_increment;
>> slock_t mutex;
>> BlockNumber next_prefetch_block;
>> BlockNumber next_scan_block;
>> };
>>
>> Each worker takes the mutex and checks whether next_prefetch_block -
>> next_scan_block < prefetch_distance and also whether
>> next_prefetch_block < last_block. If both are true, it prefetches
>> some number of additional blocks, as specified by prefetch_increment.
>> Otherwise, it increments next_scan_block and scans the block
>> corresponding to the old value.
>
> Assuming we will increment next_prefetch_block only after prefetching
> blocks (equivalent to prefetch_increment), won't 2 workers can
> simultaneously see the same value for next_prefetch_block and try to
> perform prefetch for same blocks?
The idea is that you can only examine and modify next_prefetch_block
or next_scan_block while holding the mutex.
> What will be value of prefetch_increment?
> Will it be equal to prefetch_distance or prefetch_distance/2 or
> prefetch_distance/4 or .. or will it be totally unrelated to
> prefetch_distance?
I dunno, that might take some experimentation. prefetch_distance/2
doesn't sound stupid.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-19 07:24:08 |
Message-ID: | CAA4eK1+8cpeW3Zvrh-Li8HKTZ=Xf5tP_XqaL+gOW6aL+zqGxRg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sat, Jan 17, 2015 at 10:09 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Fri, Jan 16, 2015 at 11:27 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> > Assuming we will increment next_prefetch_block only after prefetching
> > blocks (equivalent to prefetch_increment), won't 2 workers can
> > simultaneously see the same value for next_prefetch_block and try to
> > perform prefetch for same blocks?
>
> The idea is that you can only examine and modify next_prefetch_block
> or next_scan_block while holding the mutex.
>
> > What will be value of prefetch_increment?
> > Will it be equal to prefetch_distance or prefetch_distance/2 or
> > prefetch_distance/4 or .. or will it be totally unrelated to
> > prefetch_distance?
>
> I dunno, that might take some experimentation. prefetch_distance/2
> doesn't sound stupid.
>
Okay, I think I got the idea of what you want to achieve via
prefetching. So, assuming prefetch_distance = 100 and
prefetch_increment = 50 (prefetch_distance / 2), it seems to me
that as soon as there are fewer than 100 blocks in the prefetch quota,
it will fetch the next 50 blocks, which means the system will always be
approximately 50 blocks ahead. That will ensure that the algorithm
always performs a sequential scan; however, this effectively becomes a
system where one worker reads from disk and the other workers read from
OS buffers into shared buffers and then fetch the tuples. The only
downside I can see in this approach is that there could be times during
execution where some or all workers have to wait on the worker doing
the prefetching; however, I think we should try this approach and see
how it works.
Another thing is that I think prefetching is not supported on all
platforms (Windows), and for such systems the above algorithm needs to
fall back to the block-by-block method.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-19 13:20:36 |
Message-ID: | CA+TgmoZN66-m9+DL_BR4b0Z1tYPi7nZQiJ+Nmtq-d2u9E0H9wQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Jan 19, 2015 at 2:24 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Okay, I think I got the idea what you want to achieve via
> prefetching. So assuming prefetch_distance = 100 and
> prefetch_increment = 50 (prefetch_distance /2), it seems to me
> that as soon as there are less than 100 blocks in prefetch quota,
> it will fetch next 50 blocks which means the system will be always
> approximately 50 blocks ahead, that will ensure that in this algorithm
> it will always perform sequential scan, however eventually this is turning
> to be a system where one worker is reading from disk and then other
> workers are reading from OS buffers to shared buffers and then getting
> the tuple. In this approach only one downside I can see and that is
> there could be times during execution where some/all workers will have
> to wait on the worker doing prefetching, however I think we should try
> this approach and see how it works.
Right. We probably want to make prefetch_distance a GUC. After all,
we currently rely on the operating system for prefetching, and the
operating system has a setting for this, at least on Linux (blockdev
--getra). It's possible, however, that we don't need this at all,
because the OS might be smart enough to figure it out for us. It's
probably worth testing, though.
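For instance, if prefetch_distance did become a GUC, the guc.c entry
might look roughly like the fragment below; the name, group, default,
and limits are placeholders, not a proposal:

/* fragment for the ConfigureNamesInt[] table in guc.c */
{
    {"parallel_prefetch_distance", PGC_USERSET, QUERY_TUNING_OTHER,
        gettext_noop("Number of blocks a parallel sequential scan tries "
                     "to stay ahead of the workers by prefetching."),
        NULL
    },
    &parallel_prefetch_distance,
    256, 0, INT_MAX,
    NULL, NULL, NULL
},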
> Another thing is that I think prefetching is not supported on all platforms
> (Windows) and for such systems as per above algorithm we need to
> rely on block-by-block method.
Well, I think we should try to set up a test to see if this is hurting
us. First, do a sequential scan of a relation at least twice
as large as RAM. Then, do a parallel sequential scan of the same
relation with 2 workers. Repeat these in alternation several times.
If the operating system is accomplishing meaningful readahead, and the
parallel sequential scan is breaking it, then since the test is
I/O-bound I would expect to see the parallel scan actually being
slower than the normal way.
Or perhaps there is some other test that would be better (ideas
welcome) but the point is we may need something like this, but we
should try to figure out whether we need it before spending too much
time on it.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-20 14:29:10 |
Message-ID: | CAA4eK1JMD6En5HE6GymdRaatWZtdtsqmJYjMP2YQ9s2c4QGyTw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 15, 2015 at 6:57 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> On Mon, Jan 12, 2015 at 3:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com>
wrote:
> >
> > Yeah, you need two separate global variables pointing to shm_mq
> > objects, one of which gets used by pqmq.c for errors and the other of
> > which gets used by printtup.c for tuples.
> >
>
> Okay, I will try to change the way as suggested without doing
> switching, but this way we need to do it separately for 'T', 'D', and
> 'C' messages.
>
I have taken care of integrating the parallel sequence scan with the
latest patch posted (parallel-mode-v1.patch) by Robert at below
location:
http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
Changes in this version
-----------------------------------------------
1. As mentioned previously, I have exposed one parameter
ParallelWorkerNumber as used in parallel-mode patch.
2. Enabled tuple queue to be used for passing tuples from
worker backend to master backend along with error queue
as per suggestion by Robert in the mail above.
3. Involved master backend to scan the heap directly when
tuples are not available in any shared memory tuple queue.
4. Introduced 3 new parameters (cpu_tuple_comm_cost,
parallel_setup_cost, parallel_startup_cost) for deciding the cost
of a parallel plan; a rough sketch of the intended cost shape follows
after this list. Currently, I have kept the default values for
parallel_setup_cost and parallel_startup_cost as 0.0, as those
require some experiments.
5. Fixed some issues (related to the memory increase reported
upthread by Thom Brown, and general feature issues found during
testing)
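To make point 4 concrete, the intended shape of the costing is roughly
as sketched below (illustrative only; the function name and exact
arithmetic are not the ones in parallel_seqscan_v4.patch):

static void
cost_parallel_seqscan_sketch(Path *path, double run_cost, double tuples,
                             int nworkers)
{
    double      startup_cost = 0;

    /* CPU and disk cost are divided among the worker backends */
    run_cost /= nworkers;

    /* every tuple is passed back through a shared memory queue */
    run_cost += cpu_tuple_comm_cost * tuples;

    /* one-time cost of setting up DSM and starting the workers
     * (both default to 0.0 for now, pending experiments) */
    startup_cost += parallel_setup_cost + parallel_startup_cost;

    path->startup_cost = startup_cost;
    path->total_cost = startup_cost + run_cost;
}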
Note - I have yet to handle the new node types introduced at some
of the places and need to verify prepared queries and some other
things, however I think it will be good if I can get some feedback
at current stage.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
parallel_seqscan_v4.patch | application/octet-stream | 79.1 KB |
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-20 16:13:57 |
Message-ID: | CAA-aLv6ofF4m6xUUngsrS3-RpvVQruuud68rzGY926+4x0Ctyw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 20 January 2015 at 14:29, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Thu, Jan 15, 2015 at 6:57 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> > On Mon, Jan 12, 2015 at 3:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
> > >
> > > Yeah, you need two separate global variables pointing to shm_mq
> > > objects, one of which gets used by pqmq.c for errors and the other of
> > > which gets used by printtup.c for tuples.
> > >
> >
> > Okay, I will try to change the way as suggested without doing
> > switching, but this way we need to do it separately for 'T', 'D', and
> > 'C' messages.
> >
>
> I have taken care of integrating the parallel sequence scan with the
> latest patch posted (parallel-mode-v1.patch) by Robert at below
> location:
>
> http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
>
> Changes in this version
> -----------------------------------------------
> 1. As mentioned previously, I have exposed one parameter
> ParallelWorkerNumber as used in parallel-mode patch.
> 2. Enabled tuple queue to be used for passing tuples from
> worker backend to master backend along with error queue
> as per suggestion by Robert in the mail above.
> 3. Involved master backend to scan the heap directly when
> tuples are not available in any shared memory tuple queue.
> 4. Introduced 3 new parameters (cpu_tuple_comm_cost,
> parallel_setup_cost, parallel_startup_cost) for deciding the cost
> of parallel plan. Currently, I have kept the default values for
> parallel_setup_cost and parallel_startup_cost as 0.0, as those
> require some experiments.
> 5. Fixed some issues (related to memory increase as reported
> upthread by Thom Brown and general feature issues found during
> test)
>
> Note - I have yet to handle the new node types introduced at some
> of the places and need to verify prepared queries and some other
> things, however I think it will be good if I can get some feedback
> at current stage.
>
Which commit is this based against? I'm getting errors with the latest
master:
thom(at)swift:~/Development/postgresql$ patch -p1 <
~/Downloads/parallel_seqscan_v4.patch
patching file src/backend/access/Makefile
patching file src/backend/access/common/printtup.c
patching file src/backend/access/shmmq/Makefile
patching file src/backend/access/shmmq/shmmqam.c
patching file src/backend/commands/explain.c
Hunk #1 succeeded at 721 (offset 8 lines).
Hunk #2 succeeded at 918 (offset 8 lines).
Hunk #3 succeeded at 1070 (offset 8 lines).
Hunk #4 succeeded at 1337 (offset 8 lines).
Hunk #5 succeeded at 2239 (offset 83 lines).
patching file src/backend/executor/Makefile
patching file src/backend/executor/execProcnode.c
patching file src/backend/executor/execScan.c
patching file src/backend/executor/execTuples.c
patching file src/backend/executor/nodeParallelSeqscan.c
patching file src/backend/executor/nodeSeqscan.c
patching file src/backend/libpq/pqmq.c
Hunk #1 succeeded at 23 with fuzz 2 (offset -3 lines).
Hunk #2 FAILED at 63.
Hunk #3 succeeded at 132 (offset -31 lines).
1 out of 3 hunks FAILED -- saving rejects to file
src/backend/libpq/pqmq.c.rej
patching file src/backend/optimizer/path/Makefile
patching file src/backend/optimizer/path/allpaths.c
patching file src/backend/optimizer/path/costsize.c
patching file src/backend/optimizer/path/parallelpath.c
patching file src/backend/optimizer/plan/createplan.c
patching file src/backend/optimizer/plan/planner.c
patching file src/backend/optimizer/plan/setrefs.c
patching file src/backend/optimizer/util/pathnode.c
patching file src/backend/postmaster/Makefile
patching file src/backend/postmaster/backendworker.c
patching file src/backend/postmaster/postmaster.c
patching file src/backend/tcop/dest.c
patching file src/backend/tcop/postgres.c
Hunk #1 succeeded at 54 (offset -1 lines).
Hunk #2 succeeded at 1132 (offset -1 lines).
patching file src/backend/utils/misc/guc.c
patching file src/backend/utils/misc/postgresql.conf.sample
can't find file to patch at input line 2105
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h
|index 761ba1f..00ad468 100644
|--- a/src/include/access/parallel.h
|+++ b/src/include/access/parallel.h
--------------------------
File to patch:
--
Thom
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Thom Brown <thom(at)linux(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-20 16:55:49 |
Message-ID: | CAA4eK1KcQpmfeqT6Vwc1mBMrADiPWnJqOKcLDOi9MpE5irT1pA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 20, 2015 at 9:43 PM, Thom Brown <thom(at)linux(dot)com> wrote:
>
> On 20 January 2015 at 14:29, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>> On Thu, Jan 15, 2015 at 6:57 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>> > On Mon, Jan 12, 2015 at 3:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com>
wrote:
>> > >
>> > > Yeah, you need two separate global variables pointing to shm_mq
>> > > objects, one of which gets used by pqmq.c for errors and the other of
>> > > which gets used by printtup.c for tuples.
>> > >
>> >
>> > Okay, I will try to change the way as suggested without doing
>> > switching, but this way we need to do it separately for 'T', 'D', and
>> > 'C' messages.
>> >
>>
>> I have taken care of integrating the parallel sequence scan with the
>> latest patch posted (parallel-mode-v1.patch) by Robert at below
>> location:
>>
http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
>>
>> Changes in this version
>> -----------------------------------------------
>> 1. As mentioned previously, I have exposed one parameter
>> ParallelWorkerNumber as used in parallel-mode patch.
>> 2. Enabled tuple queue to be used for passing tuples from
>> worker backend to master backend along with error queue
>> as per suggestion by Robert in the mail above.
>> 3. Involved master backend to scan the heap directly when
>> tuples are not available in any shared memory tuple queue.
>> 4. Introduced 3 new parameters (cpu_tuple_comm_cost,
>> parallel_setup_cost, parallel_startup_cost) for deciding the cost
>> of parallel plan. Currently, I have kept the default values for
>> parallel_setup_cost and parallel_startup_cost as 0.0, as those
>> require some experiments.
>> 5. Fixed some issues (related to memory increase as reported
>> upthread by Thom Brown and general feature issues found during
>> test)
>>
>> Note - I have yet to handle the new node types introduced at some
>> of the places and need to verify prepared queries and some other
>> things, however I think it will be good if I can get some feedback
>> at current stage.
>
>
> Which commit is this based against? I'm getting errors with the latest
master:
>
It seems to me that you have not applied the parallel-mode patch
before applying this patch; can you try once again by first applying
the patch posted by Robert at the link below:
http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
The commit-id used for this patch is 0b49642.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-20 16:58:19 |
Message-ID: | CAA-aLv4MWZEVa1eA2O6apTF-+3oEYXo-RKWvbkq8eHmv6EKvog@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 20 January 2015 at 16:55, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Tue, Jan 20, 2015 at 9:43 PM, Thom Brown <thom(at)linux(dot)com> wrote:
> >
> > On 20 January 2015 at 14:29, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> >>
> >> On Thu, Jan 15, 2015 at 6:57 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> >> > On Mon, Jan 12, 2015 at 3:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
> >> > >
> >> > > Yeah, you need two separate global variables pointing to shm_mq
> >> > > objects, one of which gets used by pqmq.c for errors and the other
> of
> >> > > which gets used by printtup.c for tuples.
> >> > >
> >> >
> >> > Okay, I will try to change the way as suggested without doing
> >> > switching, but this way we need to do it separately for 'T', 'D', and
> >> > 'C' messages.
> >> >
> >>
> >> I have taken care of integrating the parallel sequence scan with the
> >> latest patch posted (parallel-mode-v1.patch) by Robert at below
> >> location:
> >>
> http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
> >>
> >> Changes in this version
> >> -----------------------------------------------
> >> 1. As mentioned previously, I have exposed one parameter
> >> ParallelWorkerNumber as used in parallel-mode patch.
> >> 2. Enabled tuple queue to be used for passing tuples from
> >> worker backend to master backend along with error queue
> >> as per suggestion by Robert in the mail above.
> >> 3. Involved master backend to scan the heap directly when
> >> tuples are not available in any shared memory tuple queue.
> >> 4. Introduced 3 new parameters (cpu_tuple_comm_cost,
> >> parallel_setup_cost, parallel_startup_cost) for deciding the cost
> >> of parallel plan. Currently, I have kept the default values for
> >> parallel_setup_cost and parallel_startup_cost as 0.0, as those
> >> require some experiments.
> >> 5. Fixed some issues (related to memory increase as reported
> >> upthread by Thom Brown and general feature issues found during
> >> test)
> >>
> >> Note - I have yet to handle the new node types introduced at some
> >> of the places and need to verify prepared queries and some other
> >> things, however I think it will be good if I can get some feedback
> >> at current stage.
> >
> >
> > Which commit is this based against? I'm getting errors with the latest
> master:
> >
>
> It seems to me that you have not applied parallel-mode patch
> before applying this patch, can you try once again by first applying
> the patch posted by Robert at below link:
>
> http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
>
> commit-id used for this patch - 0b49642
>
D'oh. Yes, you're completely right. Works fine now.
Thanks.
Thom
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-20 17:29:38 |
Message-ID: | CAA-aLv6Xf-995_c54XqzTT4tKFmSipkgqN5tkywUOfd8jhNU7Q@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 20 January 2015 at 14:29, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Thu, Jan 15, 2015 at 6:57 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> > On Mon, Jan 12, 2015 at 3:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
> > >
> > > Yeah, you need two separate global variables pointing to shm_mq
> > > objects, one of which gets used by pqmq.c for errors and the other of
> > > which gets used by printtup.c for tuples.
> > >
> >
> > Okay, I will try to change the way as suggested without doing
> > switching, but this way we need to do it separately for 'T', 'D', and
> > 'C' messages.
> >
>
> I have taken care of integrating the parallel sequence scan with the
> latest patch posted (parallel-mode-v1.patch) by Robert at below
> location:
>
> http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
>
> Changes in this version
> -----------------------------------------------
> 1. As mentioned previously, I have exposed one parameter
> ParallelWorkerNumber as used in parallel-mode patch.
> 2. Enabled tuple queue to be used for passing tuples from
> worker backend to master backend along with error queue
> as per suggestion by Robert in the mail above.
> 3. Involved master backend to scan the heap directly when
> tuples are not available in any shared memory tuple queue.
> 4. Introduced 3 new parameters (cpu_tuple_comm_cost,
> parallel_setup_cost, parallel_startup_cost) for deciding the cost
> of parallel plan. Currently, I have kept the default values for
> parallel_setup_cost and parallel_startup_cost as 0.0, as those
> require some experiments.
> 5. Fixed some issues (related to memory increase as reported
> upthread by Thom Brown and general feature issues found during
> test)
>
> Note - I have yet to handle the new node types introduced at some
> of the places and need to verify prepared queries and some other
> things, however I think it will be good if I can get some feedback
> at current stage.
>
I'm getting an issue:
➤ psql://thom(at)[local]:5488/pgbench
# set parallel_seqscan_degree = 8;
SET
Time: 0.248 ms
➤ psql://thom(at)[local]:5488/pgbench
# explain select c1 from t1;
QUERY PLAN
--------------------------------------------------------------
Parallel Seq Scan on t1 (cost=0.00..21.22 rows=100 width=4)
Number of Workers: 8
Number of Blocks Per Worker: 11
(3 rows)
Time: 0.322 ms
# explain analyse select c1 from t1;
QUERY
PLAN
-----------------------------------------------------------------------------------------------------------
Parallel Seq Scan on t1 (cost=0.00..21.22 rows=100 width=4) (actual
time=0.024..13.468 rows=100 loops=1)
Number of Workers: 8
Number of Blocks Per Worker: 11
Planning time: 0.040 ms
Execution time: 13.862 ms
(5 rows)
Time: 14.188 ms
➤ psql://thom(at)[local]:5488/pgbench
# set parallel_seqscan_degree = 10;
SET
Time: 0.219 ms
➤ psql://thom(at)[local]:5488/pgbench
# explain select c1 from t1;
QUERY PLAN
--------------------------------------------------------------
Parallel Seq Scan on t1 (cost=0.00..19.18 rows=100 width=4)
Number of Workers: 10
Number of Blocks Per Worker: 9
(3 rows)
Time: 0.375 ms
➤ psql://thom(at)[local]:5488/pgbench
# explain analyse select c1 from t1;
So setting parallel_seqscan_degree above max_worker_processes causes the
CPU to max out, and the query never returns, or at least not after waiting
2 minutes. Shouldn't it have a ceiling of max_worker_processes?
The original test I performed where I was getting OOM errors now appears to
be fine:
# explain (analyse, buffers, timing) select distinct bid from
pgbench_accounts;
QUERY
PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=1400411.11..1400412.11 rows=100 width=4) (actual
time=8504.333..8504.335 rows=13 loops=1)
Group Key: bid
Buffers: shared hit=32 read=18183
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..1375411.11
rows=10000000 width=4) (actual time=0.054..7183.494 rows=10000000 loops=1)
Number of Workers: 8
Number of Blocks Per Worker: 18215
Buffers: shared hit=32 read=18183
Planning time: 0.058 ms
Execution time: 8876.967 ms
(9 rows)
Time: 8877.366 ms
Note that I increased seq_page_cost to force a parallel scan in this case.
Thom
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-20 20:39:23 |
Message-ID: | 54BEBCFB.4000304@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/19/15 7:20 AM, Robert Haas wrote:
>> >Another thing is that I think prefetching is not supported on all platforms
>> >(Windows) and for such systems as per above algorithm we need to
>> >rely on block-by-block method.
> Well, I think we should try to set up a test to see if this is hurting
> us. First, do a sequential-scan of a related too big at least twice
> as large as RAM. Then, do a parallel sequential scan of the same
> relation with 2 workers. Repeat these in alternation several times.
> If the operating system is accomplishing meaningful readahead, and the
> parallel sequential scan is breaking it, then since the test is
> I/O-bound I would expect to see the parallel scan actually being
> slower than the normal way.
>
> Or perhaps there is some other test that would be better (ideas
> welcome) but the point is we may need something like this, but we
> should try to figure out whether we need it before spending too much
> time on it.
I'm guessing that not all supported platforms have prefetching that actually helps us... but it would be good to actually know if that's the case.
Where I think this gets a lot more interesting is if we could apply this to an index scan. My thought is that would result in one worker mostly being responsible for advancing the index scan itself while the other workers were issuing (and waiting on) heap IO. So even if this doesn't turn out to be a win for seqscan, there's other places we might well want to use it.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Thom Brown <thom(at)linux(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-21 06:21:26 |
Message-ID: | CAA4eK1JMRjf4Wr-hJEDoSzM0WuLQ2dRCGShWt4KX5r5Yp_aZ6w@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 20, 2015 at 10:59 PM, Thom Brown <thom(at)linux(dot)com> wrote:
>
> On 20 January 2015 at 14:29, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>> Note - I have yet to handle the new node types introduced at some
>> of the places and need to verify prepared queries and some other
>> things, however I think it will be good if I can get some feedback
>> at current stage.
>
>
> I'm getting an issue:
>
>
>
> # set parallel_seqscan_degree = 10;
> SET
> Time: 0.219 ms
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
>
> ➤ psql://thom(at)[local]:5488/pgbench
>
> # explain analyse select c1 from t1;
>
>
> So setting parallel_seqscan_degree above max_worker_processes causes the
CPU to max out, and the query never returns, or at least not after waiting
2 minutes. Shouldn't it have a ceiling of max_worker_processes?
>
Yes, it should behave that way, but this is not handled in the
patch yet, as we still have to decide on the best execution
strategy (block-by-block or fixed chunks for different workers),
and based on that I can handle this scenario in the patch.
I could return an error for such a scenario or do some work
to handle it seamlessly, but it seems to me that I would have to
rework it if we select a different execution approach than the one
used in the patch, so I am waiting for that to be decided. I am
planning to gather performance data for both approaches, so that we
can decide which is the better way to go ahead.
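Once the execution strategy is decided, one simple way to handle it
(a sketch only, not what the current patch does) would be to clamp the
requested degree when planning:

static int
effective_parallel_degree(void)
{
    int         degree = parallel_seqscan_degree;

    /* never plan for more workers than the system can provide */
    if (degree > max_worker_processes)
        degree = max_worker_processes;
    return degree;
}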
> The original test I performed where I was getting OOM errors now appears
to be fine:
>
Thanks for confirming the same.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-21 07:17:24 |
Message-ID: | 54BF5284.8020009@lab.ntt.co.jp |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 20-01-2015 PM 11:29, Amit Kapila wrote:
>
> I have taken care of integrating the parallel sequence scan with the
> latest patch posted (parallel-mode-v1.patch) by Robert at below
> location:
> http://www.postgresql.org/message-id/CA+TgmoZdUK4K3XHBxc9vM-82khourEZdvQWTfgLhWsd2R2aAGQ@mail.gmail.com
>
> Changes in this version
> -----------------------------------------------
> 1. As mentioned previously, I have exposed one parameter
> ParallelWorkerNumber as used in parallel-mode patch.
> 2. Enabled tuple queue to be used for passing tuples from
> worker backend to master backend along with error queue
> as per suggestion by Robert in the mail above.
> 3. Involved master backend to scan the heap directly when
> tuples are not available in any shared memory tuple queue.
> 4. Introduced 3 new parameters (cpu_tuple_comm_cost,
> parallel_setup_cost, parallel_startup_cost) for deciding the cost
> of parallel plan. Currently, I have kept the default values for
> parallel_setup_cost and parallel_startup_cost as 0.0, as those
> require some experiments.
> 5. Fixed some issues (related to memory increase as reported
> upthread by Thom Brown and general feature issues found during
> test)
>
> Note - I have yet to handle the new node types introduced at some
> of the places and need to verify prepared queries and some other
> things, however I think it will be good if I can get some feedback
> at current stage.
>
I got an assertion failure:
In src/backend/executor/execTuples.c: ExecStoreTuple()
/* passing shouldFree=true for a tuple on a disk page is not sane */
Assert(BufferIsValid(buffer) ? (!shouldFree) : true);
when called from:
In src/backend/executor/nodeParallelSeqscan.c: ParallelSeqNext()
I think something like the following would be necessary (reading from
comments in the code):
--- a/src/backend/executor/nodeParallelSeqscan.c
+++ b/src/backend/executor/nodeParallelSeqscan.c
@@ -85,7 +85,7 @@ ParallelSeqNext(ParallelSeqScanState *node)
if (tuple)
ExecStoreTuple(tuple,
slot,
- scandesc->rs_cbuf,
+ fromheap ? scandesc->rs_cbuf : InvalidBuffer,
!fromheap);
After fixing this, the assertion failure seems to be gone though I
observed the blocked (CPU maxed out) state as reported elsewhere by Thom
Brown.
What I was doing:
CREATE TABLE test(a) AS SELECT generate_series(1, 10000000);
postgres=# SHOW max_worker_processes;
max_worker_processes
----------------------
8
(1 row)
postgres=# SET seq_page_cost TO 100;
SET
postgres=# SET parallel_seqscan_degree TO 4;
SET
postgres=# EXPLAIN SELECT * FROM test;
QUERY PLAN
-------------------------------------------------------------------------
Parallel Seq Scan on test (cost=0.00..1801071.27 rows=8981483 width=4)
Number of Workers: 4
Number of Blocks Per Worker: 8849
(3 rows)
It was EXPLAIN ANALYZE, though, that triggered the problem.
Thanks,
Amit
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-21 10:44:40 |
Message-ID: | CAA4eK1+8h6KEdjZwJO8i7EbVC5w+wFQYZtg66Xj0o1zpTg_Zkw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 21, 2015 at 12:47 PM, Amit Langote <
Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>
> On 20-01-2015 PM 11:29, Amit Kapila wrote:
> > Note - I have yet to handle the new node types introduced at some
> > of the places and need to verify prepared queries and some other
> > things, however I think it will be good if I can get some feedback
> > at current stage.
> >
>
> I got an assertion failure:
>
> In src/backend/executor/execTuples.c: ExecStoreTuple()
>
> /* passing shouldFree=true for a tuple on a disk page is not sane */
> Assert(BufferIsValid(buffer) ? (!shouldFree) : true);
>
Good catch!
The reason is that while the master backend is scanning a heap
page, if it finds another tuple (or tuples) in the shared memory message
queue it will process those tuples first, and in that scenario the scan
descriptor still holds a reference to the buffer it is using for the
heap scan. Your proposed fix will work.
> After fixing this, the assertion failure seems to be gone though I
> observed the blocked (CPU maxed out) state as reported elsewhere by Thom
> Brown.
>
Does it happen only when parallel_seqscan_degree > max_worker_processes?
Thanks for checking the patch.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Langote <amitlangote09(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-21 11:01:12 |
Message-ID: | CA+HiwqFoN+nNxZWj=Nw3mLzUoW3WnLdKxYX52HT2t_vJ1ZUunQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wednesday, January 21, 2015, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Wed, Jan 21, 2015 at 12:47 PM, Amit Langote <
> Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> wrote:
> >
> > On 20-01-2015 PM 11:29, Amit Kapila wrote:
> > > Note - I have yet to handle the new node types introduced at some
> > > of the places and need to verify prepared queries and some other
> > > things, however I think it will be good if I can get some feedback
> > > at current stage.
> > >
> >
> > I got an assertion failure:
> >
> > In src/backend/executor/execTuples.c: ExecStoreTuple()
> >
> > /* passing shouldFree=true for a tuple on a disk page is not sane */
> > Assert(BufferIsValid(buffer) ? (!shouldFree) : true);
> >
>
> Good Catch!
> The reason is that while master backend is scanning from a heap
> page, if it finds another tuple/tuples's from shared memory message
> queue it will process those tuples first and in such a scenario, the scan
> descriptor will still have reference to buffer which it is using from
> scanning
> from heap. Your proposed fix will work.
>
> > After fixing this, the assertion failure seems to be gone though I
> > observed the blocked (CPU maxed out) state as reported elsewhere by Thom
> > Brown.
> >
>
> Does it happen only when parallel_seqscan_degree > max_worker_processes?
>
I have max_worker_processes set to the default of 8 while
parallel_seqscan_degree is 4. So, this may be a case different from Thom's.
Thanks,
Amit
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Amit Langote <amitlangote09(at)gmail(dot)com> |
Cc: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-21 12:43:32 |
Message-ID: | CAA4eK1J4Ja1PYV8ZhfwyQKw=1q=ZbAcf62p9WSp1EpZ_cuOHiQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 21, 2015 at 4:31 PM, Amit Langote <amitlangote09(at)gmail(dot)com>
wrote:
> On Wednesday, January 21, 2015, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>>
>>
>> Does it happen only when parallel_seqscan_degree > max_worker_processes?
>
>
> I have max_worker_processes set to the default of 8 while
parallel_seqscan_degree is 4. So, this may be a case different from Thom's.
>
I think this is because the memory used for forming the
tuple in the master backend is retained for a longer time, which
causes this statement to take much longer than required.
I have also fixed the other issue you reported in the
attached patch.
I think this patch is still not completely ready for general-purpose
testing; however, it would be helpful to run some tests to see in
what kinds of scenarios it gives a benefit. For example, in the test
you are doing, rather than increasing seq_page_cost you could add an
expensive WHERE condition so that the planner selects the
parallel plan automatically. I think it is better
to change one of the new parameters (parallel_setup_cost,
parallel_startup_cost, cpu_tuple_comm_cost) if you want
your statement to use a parallel plan; in your example, reducing
cpu_tuple_comm_cost would have caused the parallel plan to be
selected. That way we can get some feedback about
appropriate default values for the newly added
parameters. I am already planning to do some tests in that regard,
but any feedback from others would be helpful.
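As a rough illustration only, the kind of costing these three GUCs feed into
might look like the sketch below; the exact formula is an assumption for the
sake of the example, not the patch's code:

    /* divide the scan work among the workers plus the master backend */
    run_cost = (cpu_run_cost + disk_run_cost) / (nworkers + 1);

    /* every tuple returned to the master pays a communication cost */
    run_cost += cpu_tuple_comm_cost * output_tuples;

    /* one-time costs of setting up shared memory and starting workers */
    startup_cost += parallel_setup_cost + parallel_startup_cost;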
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
parallel_seqscan_v5.patch | application/octet-stream | 79.2 KB |
From: | Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Amit Langote <amitlangote09(at)gmail(dot)com> |
Cc: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 01:07:16 |
Message-ID: | 9A28C8860F777E439AA12E8AEA7694F8010A6CD9@BPXM15GP.gisp.nec.co.jp |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
> On Wed, Jan 21, 2015 at 4:31 PM, Amit Langote <amitlangote09(at)gmail(dot)com>
> wrote:
> > On Wednesday, January 21, 2015, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> >>
> >>
> >> Does it happen only when parallel_seqscan_degree > max_worker_processes?
> >
> >
> > I have max_worker_processes set to the default of 8 while
> parallel_seqscan_degree is 4. So, this may be a case different from Thom's.
> >
>
> I think this is due to reason that memory for forming tuple in master backend
> is retained for longer time which is causing this statement to take much
> longer time than required. I have fixed the other issue as well reported
> by you in attached patch.
>
> I think this patch is still not completely ready for general purpose testing,
> however it could be helpful if we can run some tests to see in what kind
> of scenario's it gives benefit like in the test you are doing if rather
> than increasing seq_page_cost, you should add an expensive WHERE condition
> so that it should automatically select parallel plan. I think it is better
> to change one of the new parameter's (parallel_setup_cost,
> parallel_startup_cost and cpu_tuple_comm_cost) if you want your statement
> to use parallel plan, like in your example if you would have reduced
> cpu_tuple_comm_cost, it would have selected parallel plan, that way we can
> get some feedback about what should be the appropriate default values for
> the newly added parameters. I am already planing to do some tests in that
> regard, however if I get some feedback from other's that would be helpful.
>
(Please point it out if my understanding is incorrect.)
What happens if a dynamic background worker process tries to reference temporary
tables? Because buffers for temporary table blocks are allocated in the
backend's private address space, their latest state is not visible to other
processes unless it is flushed to storage every time.
Do we need to prohibit create_parallelscan_paths() from generating a path when
the target relation is a temporary one?
Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei <kaigai(at)ak(dot)jp(dot)nec(dot)com>
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com> |
Cc: | Amit Langote <amitlangote09(at)gmail(dot)com>, Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 05:00:50 |
Message-ID: | CAA4eK1+y0yVXJiNmDxkMDs_+HQzKabd8MOWAzZW0Krf_wXXuBQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 6:37 AM, Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com> wrote:
>
> (Please point out me if my understanding is incorrect.)
>
> What happen if dynamic background worker process tries to reference
temporary
> tables? Because buffer of temporary table blocks are allocated on private
> address space, its recent status is not visible to other process unless
it is
> not flushed to the storage every time.
>
> Do we need to prohibit create_parallelscan_paths() to generate a path when
> target relation is temporary one?
>
Yes, we need to prohibit parallel scans on temporary relations. Will fix.
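For what it's worth, a minimal sketch of such a check, assuming the relation
is already open at the point where create_parallelscan_paths() makes its
decision (the surrounding variable names are illustrative):

    /* temp-table buffers live in backend-local memory, so a worker
     * cannot see them; skip generating a parallel path entirely */
    if (RelationUsesLocalBuffers(rel))
        return;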
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Amit Langote <amitlangote09(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 05:14:37 |
Message-ID: | 54C0873D.3070001@lab.ntt.co.jp |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 21-01-2015 PM 09:43, Amit Kapila wrote:
> On Wed, Jan 21, 2015 at 4:31 PM, Amit Langote <amitlangote09(at)gmail(dot)com>
> wrote:
>> On Wednesday, January 21, 2015, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
>>>
>>>
>>> Does it happen only when parallel_seqscan_degree > max_worker_processes?
>>
>>
>> I have max_worker_processes set to the default of 8 while
> parallel_seqscan_degree is 4. So, this may be a case different from Thom's.
>>
>
> I think this is due to reason that memory for forming
> tuple in master backend is retained for longer time which
> is causing this statement to take much longer time than
> required. I have fixed the other issue as well reported by
> you in attached patch.
>
Thanks for fixing.
> I think this patch is still not completely ready for general
> purpose testing, however it could be helpful if we can run
> some tests to see in what kind of scenario's it gives benefit
> like in the test you are doing if rather than increasing
> seq_page_cost, you should add an expensive WHERE condition
> so that it should automatically select parallel plan. I think it is better
> to change one of the new parameter's (parallel_setup_cost,
> parallel_startup_cost and cpu_tuple_comm_cost) if you want
> your statement to use parallel plan, like in your example if
> you would have reduced cpu_tuple_comm_cost, it would have
> selected parallel plan, that way we can get some feedback about
> what should be the appropriate default values for the newly added
> parameters. I am already planing to do some tests in that regard,
> however if I get some feedback from other's that would be helpful.
>
>
Perhaps you are aware or you've postponed working on it, but I see that
a plan executing in a worker does not know about instrumentation. It
results in the EXPLAIN ANALYZE showing incorrect figures. For example
compare the normal seqscan and parallel seqscan below:
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE sqrt(a) < 3456 AND
md5(a::text) LIKE 'ac%';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Seq Scan on test (cost=0.00..310228.52 rows=16120 width=4) (actual
time=0.497..17062.436 rows=39028 loops=1)
Filter: ((sqrt((a)::double precision) < 3456::double precision) AND
(md5((a)::text) ~~ 'ac%'::text))
Rows Removed by Filter: 9960972
Planning time: 0.206 ms
Execution time: 17378.413 ms
(5 rows)
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE sqrt(a) < 3456 AND
md5(a::text) LIKE 'ac%';
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Parallel Seq Scan on test (cost=0.00..255486.08 rows=16120 width=4)
(actual time=7.329..4906.981 rows=39028 loops=1)
Filter: ((sqrt((a)::double precision) < 3456::double precision) AND
(md5((a)::text) ~~ 'ac%'::text))
Rows Removed by Filter: 1992710
Number of Workers: 4
Number of Blocks Per Worker: 8849
Planning time: 0.137 ms
Execution time: 6077.782 ms
(7 rows)
Note the "Rows Removed by Filter". I guess the difference may be
because, all the rows filtered by workers were not accounted for. I'm
not quite sure, but since exec_worker_stmt goes the Portal way,
QueryDesc.instrument_options remains unset and hence no instrumentation
opportunities in a worker backend. One option may be to pass
instrument_options down to worker_stmt?
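A hedged sketch of that option: the worker could build its QueryDesc directly,
passing along the instrument_options value received from the master instead of
letting the Portal path leave it at zero. The plumbing that carries the value
(here master_instrument_options, via worker_stmt) is assumed, not taken from
the patch:

    QueryDesc  *qdesc;

    qdesc = CreateQueryDesc(plannedstmt,
                            "<parallel worker>",        /* source text */
                            GetActiveSnapshot(),
                            InvalidSnapshot,            /* no crosscheck */
                            dest,
                            NULL,                       /* no params */
                            master_instrument_options); /* handed down by master */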
By the way, 17s and 6s compare really well in favor of parallel seqscan
above, :)
Thanks,
Amit
From: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Amit Langote <amitlangote09(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 05:26:50 |
Message-ID: | 54C08A1A.20109@lab.ntt.co.jp |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 22-01-2015 PM 02:30, Amit Kapila wrote:
>> Perhaps you are aware or you've postponed working on it, but I see that
>> a plan executing in a worker does not know about instrumentation.
>
> I have deferred it until other main parts are stabilised/reviewed. Once
> that is done, we can take a call what is best we can do for instrumentation.
> Thom has reported the same as well upthread.
>
Ah, I missed Thom's report.
>> Note the "Rows Removed by Filter". I guess the difference may be
>> because, all the rows filtered by workers were not accounted for. I'm
>> not quite sure, but since exec_worker_stmt goes the Portal way,
>> QueryDesc.instrument_options remains unset and hence no instrumentation
>> opportunities in a worker backend. One option may be to pass
>> instrument_options down to worker_stmt?
>>
>
> I think there is more to it, master backend need to process that information
> as well.
>
I see.
Thanks,
Amit
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> |
Cc: | Amit Langote <amitlangote09(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 05:30:02 |
Message-ID: | CAA4eK1Jf8Bxt2BHL-o-xJqK0RJn75_yrs9NoEKALRYYMaJ9Tng@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 10:44 AM, Amit Langote <
Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>
> On 21-01-2015 PM 09:43, Amit Kapila wrote:
> > On Wed, Jan 21, 2015 at 4:31 PM, Amit Langote <amitlangote09(at)gmail(dot)com>
> > wrote:
> >> On Wednesday, January 21, 2015, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> > wrote:
> >>>
> >>>
> >>> Does it happen only when parallel_seqscan_degree >
max_worker_processes?
> >>
> >>
> >> I have max_worker_processes set to the default of 8 while
> > parallel_seqscan_degree is 4. So, this may be a case different from
Thom's.
> >>
> >
> > I think this is due to reason that memory for forming
> > tuple in master backend is retained for longer time which
> > is causing this statement to take much longer time than
> > required. I have fixed the other issue as well reported by
> > you in attached patch.
> >
>
> Thanks for fixing.
>
> > I think this patch is still not completely ready for general
> > purpose testing, however it could be helpful if we can run
> > some tests to see in what kind of scenario's it gives benefit
> > like in the test you are doing if rather than increasing
> > seq_page_cost, you should add an expensive WHERE condition
> > so that it should automatically select parallel plan. I think it is
better
> > to change one of the new parameter's (parallel_setup_cost,
> > parallel_startup_cost and cpu_tuple_comm_cost) if you want
> > your statement to use parallel plan, like in your example if
> > you would have reduced cpu_tuple_comm_cost, it would have
> > selected parallel plan, that way we can get some feedback about
> > what should be the appropriate default values for the newly added
> > parameters. I am already planing to do some tests in that regard,
> > however if I get some feedback from other's that would be helpful.
> >
> >
>
> Perhaps you are aware or you've postponed working on it, but I see that
> a plan executing in a worker does not know about instrumentation.
I have deferred it until other main parts are stabilised/reviewed. Once
that is done, we can take a call what is best we can do for instrumentation.
Thom has reported the same as well upthread.
> Note the "Rows Removed by Filter". I guess the difference may be
> because, all the rows filtered by workers were not accounted for. I'm
> not quite sure, but since exec_worker_stmt goes the Portal way,
> QueryDesc.instrument_options remains unset and hence no instrumentation
> opportunities in a worker backend. One option may be to pass
> instrument_options down to worker_stmt?
>
I think there is more to it; the master backend needs to process that
information as well.
> By the way, 17s and 6s compare really well in favor of parallel seqscan
> above, :)
>
That sounds interesting.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 10:57:49 |
Message-ID: | CAA4eK1JyVNEBE8KuxKd3bJhkG6tSbpBYX_+ZtP34ZSTCSucA1A@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Mon, Jan 19, 2015 at 6:50 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Mon, Jan 19, 2015 at 2:24 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>
> > Another thing is that I think prefetching is not supported on all
platforms
> > (Windows) and for such systems as per above algorithm we need to
> > rely on block-by-block method.
>
> Well, I think we should try to set up a test to see if this is hurting
> us. First, do a sequential scan of a relation at least twice
> as large as RAM. Then, do a parallel sequential scan of the same
> relation with 2 workers. Repeat these in alternation several times.
> If the operating system is accomplishing meaningful readahead, and the
> parallel sequential scan is breaking it, then since the test is
> I/O-bound I would expect to see the parallel scan actually being
> slower than the normal way.
>
I have taken some performance data as per the above discussion. Basically,
I have used the parallel_count module that is part of the parallel-mode patch,
as that seems closer to a pure test of the I/O pattern (it has no
tuple communication overhead).
Script used to test is attached (parallel_count.sh)
Performance Data
----------------------------
Configuration and Db Details
IBM POWER-7 16 cores, 64 hardware threads
RAM = 64GB
Table Size - 120GB
Used below statements to create table -
create table tbl_perf(c1 int, c2 char(1000));
insert into tbl_perf values(generate_series(1,10000000),'aaaaa');
insert into tbl_perf values(generate_series(10000001,30000000),'aaaaa');
insert into tbl_perf values(generate_series(30000001,110000000),'aaaaa');
Block-By-Block

No. of workers/Time (ms)        0        2
Run-1                      267798   295051
Run-2                      276646   296665
Run-3                      281364   314952
Run-4                      290231   326243
Run-5                      288890   295684
Then I modified the parallel_count module so that it scans in
fixed chunks rather than block-by-block; the patch for that is attached
(parallel_count_fixed_chunk_v1.patch, which is based on the parallel
count module in the parallel-mode patch [1]).
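(For reference, the fixed-chunk division used in this test is conceptually
something like the sketch below; the names are illustrative, not the attached
patch's code.)

    BlockNumber chunk_size = nblocks / nworkers;
    BlockNumber start_blk  = worker_id * chunk_size;
    BlockNumber end_blk    = (worker_id == nworkers - 1)
                             ? nblocks                  /* last worker takes the remainder */
                             : start_blk + chunk_size;
    BlockNumber blk;

    for (blk = start_blk; blk < end_blk; blk++)
        count_tuples_in_block(rel, blk);                /* hypothetical per-block helper */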
Fixed-Chunks

No. of workers/Time (ms)        0        2
Run-1                      286346   234037
Run-2                      250051   215111
Run-3                      255915   254934
Run-4                      263754   242228
Run-5                      251399   202581
Observations
------------------------
1. Scanning block-by-block has a negative impact on performance, and
I think it will degrade further as the parallel count increases, since
that leads to more randomness in the I/O pattern.
2. Scanning in fixed chunks improves performance. Increasing the
parallel count to a very large number might hurt performance,
but I think we can have a lower bound below which we will not allow
multiple processes to scan the relation.
Now I can go ahead and try the prefetching approach you suggested,
but I have a feeling that overall it might not be beneficial, mainly
because prefetching is not supported on all platforms (we could say
we don't care about such platforms, but there is still no mitigation
strategy for them), for the reasons mentioned up-thread.
Thoughts?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
parallel_count.sh | application/x-sh | 1.0 KB |
parallel_count_fixed_chunk_v1.patch | application/octet-stream | 3.0 KB |
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 13:53:16 |
Message-ID: | CA+TgmobVCb7jnCn60j2KY2UQvkQJ2ECbySKY+MwcC2R0qUq8ag@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 5:57 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> 1. Scanning block-by-block has negative impact on performance and
> I thin it will degrade more if we increase parallel count as that can lead
> to more randomness.
>
> 2. Scanning in fixed chunks improves the performance. Increasing
> parallel count to a very large number might impact the performance,
> but I think we can have a lower bound below which we will not allow
> multiple processes to scan the relation.
I'm confused. Your actual test numbers seem to show that the
performance with the block-by-block approach was slightly higher with
parallelism than without, whereas the performance with the
chunk-by-chunk approach was lower with parallelism than without, but
the text quoted above, summarizing those numbers, says the opposite.
Also, I think testing with 2 workers is probably not enough. I think
we should test with 8 or even 16.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 14:02:14 |
Message-ID: | CAA4eK1KnpMv6DHWqQznjVwy4mfBqXZDLzMWXoCCnYvdk-VwvXQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 7:23 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Thu, Jan 22, 2015 at 5:57 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> > 1. Scanning block-by-block has negative impact on performance and
> > I thin it will degrade more if we increase parallel count as that can
lead
> > to more randomness.
> >
> > 2. Scanning in fixed chunks improves the performance. Increasing
> > parallel count to a very large number might impact the performance,
> > but I think we can have a lower bound below which we will not allow
> > multiple processes to scan the relation.
>
> I'm confused. Your actual test numbers seem to show that the
> performance with the block-by-block approach was slightly higher with
> parallelism than without, where as the performance with the
> chunk-by-chunk approach was lower with parallelism than without, but
> the text quoted above, summarizing those numbers, says the opposite.
>
Sorry for causing confusion; I should have been more explicit about
explaining the numbers. Let me try again: the values in the columns are
the time in milliseconds to complete the execution, so higher means it
took more time. If you look at block-by-block, the time taken to complete
the execution with 2 workers is more than with no workers, which means
parallelism degraded the performance.
> Also, I think testing with 2 workers is probably not enough. I think
> we should test with 8 or even 16.
>
Sure, will do this and post the numbers.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-22 14:17:00 |
Message-ID: | CA+TgmobKaYREemD-iyGpgUeEajJ5fsoGejuYk0CgE9LXTWy79A@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 9:02 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> I'm confused. Your actual test numbers seem to show that the
>> performance with the block-by-block approach was slightly higher with
>> parallelism than without, where as the performance with the
>> chunk-by-chunk approach was lower with parallelism than without, but
>> the text quoted above, summarizing those numbers, says the opposite.
>
> Sorry for causing confusion, I should have been more explicit about
> explaining the numbers. Let me try again,
> Values in columns is time in milliseconds to complete the execution,
> so higher means it took more time. If you see in block-by-block, the
> time taken to complete the execution with 2 workers is more than
> no workers which means parallelism has degraded the performance.
*facepalm*
Oh, yeah, right.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Josh Berkus <josh(at)agliodbs(dot)com> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-23 02:48:11 |
Message-ID: | 54C1B66B.1090804@agliodbs.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 01/22/2015 05:53 AM, Robert Haas wrote:
> Also, I think testing with 2 workers is probably not enough. I think
> we should test with 8 or even 16.
FWIW, based on my experience there will also be demand to use parallel
query using 4 workers, particularly on AWS.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-23 11:42:51 |
Message-ID: | CAA4eK1+B=c6rNNTNFcap=QXeCaEeDijqdz6dwdrdcD-T58b7ig@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 7:23 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Thu, Jan 22, 2015 at 5:57 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> > 1. Scanning block-by-block has negative impact on performance and
> > I thin it will degrade more if we increase parallel count as that can
lead
> > to more randomness.
> >
> > 2. Scanning in fixed chunks improves the performance. Increasing
> > parallel count to a very large number might impact the performance,
> > but I think we can have a lower bound below which we will not allow
> > multiple processes to scan the relation.
>
> I'm confused. Your actual test numbers seem to show that the
> performance with the block-by-block approach was slightly higher with
> parallelism than without, where as the performance with the
> chunk-by-chunk approach was lower with parallelism than without, but
> the text quoted above, summarizing those numbers, says the opposite.
>
> Also, I think testing with 2 workers is probably not enough. I think
> we should test with 8 or even 16.
>
Below is the data with a higher number of workers; the amount of data and
the other configuration remain as before, I have only increased the
parallel worker count:
Block-By-Block

No. of workers/Time (ms)        0        2        4        8       16       24       32
Run-1                      257851   287353   350091   330193   284913   338001   295057
Run-2                      263241   314083   342166   347337   378057   351916   348292
Run-3                      315374   334208   389907   340327   328695   330048   330102
Run-4                      301054   312790   314682   352835   323926   324042   302147
Run-5                      304547   314171   349158   350191   350468   341219   281315

Fixed-Chunks

No. of workers/Time (ms)        0        2        4        8       16       24       32
Run-1                      250536   266279   251263   234347    87930    50474    35474
Run-2                      249587   230628   225648   193340    83036    35140     9100
Run-3                      234963   220671   230002   256183   105382    62493    27903
Run-4                      239111   245448   224057   189196   123780    63794    24746
Run-5                      239937   222820   219025   220478   114007    77965    39766
The trend remains the same, although there is some variation.
With the block-by-block approach, performance dips (execution takes
more time) as the number of workers grows, though it stabilizes at
some higher value; I still feel it is erratic, since the approach
effectively leads to random I/O.
With the fixed-chunk approach, performance improves with more
workers, especially at slightly higher worker counts.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-23 18:44:23 |
Message-ID: | 54C29687.9050300@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/23/15 5:42 AM, Amit Kapila wrote:
> *Fixed-Chunks*
> *No. of workers/Time (ms)*
> *0* *2* *4* *8* *16* *24* *32*
> Run-1 250536 266279 251263 234347 87930 50474 35474
> Run-2 249587 230628 225648 193340 83036 35140 9100
> Run-3 234963 220671 230002 256183 105382 62493 27903
> Run-4 239111 245448 224057 189196 123780 63794 24746
> Run-5 239937 222820 219025 220478 114007 77965 39766
>
>
>
> The trend remains same although there is some variation.
> In block-by-block approach, it performance dips (execution takes
> more time) with more number of workers, though it stabilizes at
> some higher value, still I feel it is random as it leads to random
> scan.
> In Fixed-chunk approach, the performance improves with more
> number of workers especially at slightly higher worker count.
Those fixed chunk numbers look pretty screwy. 2, 4 and 8 workers make no difference, then suddenly 16 cuts times by 1/2 to 1/3? Then 32 cuts time by another 1/2 to 1/3?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-23 18:54:45 |
Message-ID: | 54C298F5.6040107@commandprompt.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 01/23/2015 10:44 AM, Jim Nasby wrote:
> number of workers especially at slightly higher worker count.
>
> Those fixed chunk numbers look pretty screwy. 2, 4 and 8 workers make no
> difference, then suddenly 16 cuts times by 1/2 to 1/3? Then 32 cuts time
> by another 1/2 to 1/3?
cached? First couple of runs gets the relations into memory?
JD
--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
"If we send our children to Caesar for their education, we should
not be surprised when they come back as Romans."
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-24 04:16:17 |
Message-ID: | CAA4eK1L42kdf2jMXBc7nCP3CHPUmzm50wv1F8MeC_wW5OgsG8A@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Sat, Jan 24, 2015 at 12:24 AM, Joshua D. Drake <jd(at)commandprompt(dot)com>
wrote:
>
>
> On 01/23/2015 10:44 AM, Jim Nasby wrote:
>>
>> number of workers especially at slightly higher worker count.
>>
>> Those fixed chunk numbers look pretty screwy. 2, 4 and 8 workers make no
>> difference, then suddenly 16 cuts times by 1/2 to 1/3? Then 32 cuts time
>> by another 1/2 to 1/3?
>
There is variation in the tests at different worker counts, but there is
definitely an improvement from 0 to 2 workers (if you refer to my
initial mail on this data, with 2 workers there is a benefit of ~20%),
and I think if we run the tests in a similar way (i.e. compare 0 and 2,
0 and 4, or 0 and 8), the other effects could be minimised and
we might see better consistency; still, the general trend seems to be
that scanning in fixed chunks is better.
I think the real benefit of the current approach/patch will show up
with qualifications (especially costly expression evaluation).
Further, if we want just the benefit of parallel I/O, then
I think we can get that by parallelising a partitioned-table scan where
different table partitions reside on different disk partitions; however,
that is a matter for a separate patch.
>
> cached? First couple of runs gets the relations into memory?
>
Not entirely, as the table size is double the RAM, so each run
still has to perform I/O.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-26 21:48:39 |
Message-ID: | 54C6B637.9050408@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/23/15 10:16 PM, Amit Kapila wrote:
> Further, if we want to just get the benefit of parallel I/O, then
> I think we can get that by parallelising partition scan where different
> table partitions reside on different disk partitions, however that is
> a matter of separate patch.
I don't think we even have to go that far.
My experience with Postgres is that it is *very* sensitive to IO latency (not bandwidth). I believe this is the case because complex queries tend to interleave CPU intensive code in-between IO requests. So we see this pattern:
Wait 5ms on IO
Compute for a few ms
Wait 5ms on IO
Compute for a few ms
...
We blindly assume that the kernel will magically do read-ahead for us, but I've never seen that work so great. It certainly falls apart on something like an index scan.
If we could instead do this:
Wait for first IO, issue second IO request
Compute
Already have second IO request, issue third
...
We'd be a lot less sensitive to IO latency.
I wonder what kind of gains we would see if every SeqScan in a query spawned a worker just to read tuples and shove them in a queue (or shove a pointer to a buffer in the queue). Similarly, have IndexScans have one worker reading the index and another worker taking index tuples and reading heap tuples...
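A hedged sketch of that reader-worker idea using the existing shm_mq API;
queue setup, attach, and the heap scan descriptor are assumed to already be
in place, so this only shows the worker's inner loop:

    HeapTuple   tup;

    /* pull tuples off the heap and push them into the shared queue */
    while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)
        shm_mq_send(mqh, tup->t_len, tup->t_data, false);

    shm_mq_detach(mq);      /* detaching signals EOF to the reader */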
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-26 22:39:28 |
Message-ID: | 17752.1422311968@sss.pgh.pa.us |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> writes:
> On 1/23/15 10:16 PM, Amit Kapila wrote:
>> Further, if we want to just get the benefit of parallel I/O, then
>> I think we can get that by parallelising partition scan where different
>> table partitions reside on different disk partitions, however that is
>> a matter of separate patch.
> I don't think we even have to go that far.
> My experience with Postgres is that it is *very* sensitive to IO latency (not bandwidth). I believe this is the case because complex queries tend to interleave CPU intensive code in-between IO requests. So we see this pattern:
> Wait 5ms on IO
> Compute for a few ms
> Wait 5ms on IO
> Compute for a few ms
> ...
> We blindly assume that the kernel will magically do read-ahead for us, but I've never seen that work so great. It certainly falls apart on something like an index scan.
> If we could instead do this:
> Wait for first IO, issue second IO request
> Compute
> Already have second IO request, issue third
> ...
> We'd be a lot less sensitive to IO latency.
It would take about five minutes of coding to prove or disprove this:
stick a PrefetchBuffer call into heapgetpage() to launch a request for the
next page as soon as we've read the current one, and then see if that
makes any obvious performance difference. I'm not convinced that it will,
but if it did then we could think about how to make it work for real.
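In that spirit, the five-minute version might look roughly like this inside
heapgetpage(), right after block "page" has been read; the guard and exact
placement are illustrative:

    /* ask the OS to start reading the next block while we process this one */
    if (page + 1 < scan->rs_nblocks)
        PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, page + 1);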
regards, tom lane
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> |
Cc: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 05:11:05 |
Message-ID: | CAA4eK1J9B82bn_8n2cHe5O1LuP-QyWuZge2OO9qEW8YHt9xdZg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 27, 2015 at 3:18 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com> wrote:
>
> On 1/23/15 10:16 PM, Amit Kapila wrote:
>>
>> Further, if we want to just get the benefit of parallel I/O, then
>> I think we can get that by parallelising partition scan where different
>> table partitions reside on different disk partitions, however that is
>> a matter of separate patch.
>
>
> I don't think we even have to go that far.
>
>
> We'd be a lot less sensitive to IO latency.
>
> I wonder what kind of gains we would see if every SeqScan in a query
spawned a worker just to read tuples and shove them in a queue (or shove a
pointer to a buffer in the queue).
>
Here, IIUC, you are suggesting that one parallel worker just does the
reads while all expression evaluation (qualification and target list)
happens in the main backend. It seems to me that, done that way, the
benefit of parallelisation would be lost to tuple communication
overhead (maybe the overhead is smaller if we just pass a pointer to
a buffer, but that has problems of its own, like holding buffer pins
for a longer period of time).
I can see the advantage of testing along the lines Tom Lane suggested,
but that seems not directly related to what we want to achieve with
this patch (parallel seq scan); if you think otherwise, let me know.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Daniel Bausch <bausch(at)dvs(dot)tu-darmstadt(dot)de> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 07:02:37 |
Message-ID: | 87d2601zaq.fsf@gelnhausen.dvs.informatik.tu-darmstadt.de |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Hi PG devs!
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
>> Wait for first IO, issue second IO request
>> Compute
>> Already have second IO request, issue third
>> ...
>
>> We'd be a lot less sensitive to IO latency.
>
> It would take about five minutes of coding to prove or disprove this:
> stick a PrefetchBuffer call into heapgetpage() to launch a request for the
> next page as soon as we've read the current one, and then see if that
> makes any obvious performance difference. I'm not convinced that it will,
> but if it did then we could think about how to make it work for real.
Sorry for dropping in so late...
I did all of this two years ago. For TPC-H Q8, Q9, Q17, Q20, and Q21
I see a speedup of ~100% when using IndexScan prefetching + Nested-Loops
Look-Ahead (the outer loop!).
(On SSD with 32 Pages Prefetch/Look-Ahead + Cold Page Cache / Small RAM)
Regards,
Daniel
--
MSc. Daniel Bausch
Research Assistant (Computer Science)
Technische Universität Darmstadt
http://www.dvs.tu-darmstadt.de/staff/dbausch
From: | David Fetter <david(at)fetter(dot)org> |
---|---|
To: | Daniel Bausch <bausch(at)dvs(dot)tu-darmstadt(dot)de> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 14:54:48 |
Message-ID: | 20150127145448.GA3788@fetter.org |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 27, 2015 at 08:02:37AM +0100, Daniel Bausch wrote:
> Hi PG devs!
>
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
>
> >> Wait for first IO, issue second IO request
> >> Compute
> >> Already have second IO request, issue third
> >> ...
> >
> >> We'd be a lot less sensitive to IO latency.
> >
> > It would take about five minutes of coding to prove or disprove this:
> > stick a PrefetchBuffer call into heapgetpage() to launch a request for the
> > next page as soon as we've read the current one, and then see if that
> > makes any obvious performance difference. I'm not convinced that it will,
> > but if it did then we could think about how to make it work for real.
>
> Sorry for dropping in so late...
>
> I have done all this two years ago. For TPC-H Q8, Q9, Q17, Q20, and Q21
> I see a speedup of ~100% when using IndexScan prefetching + Nested-Loops
> Look-Ahead (the outer loop!).
> (On SSD with 32 Pages Prefetch/Look-Ahead + Cold Page Cache / Small RAM)
Would you be so kind as to pass along any patches (ideally applicable
to git master), tests, and specific measurements you made?
Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 21:10:30 |
Message-ID: | CA+Tgmoa7O3szt6UY97z4BWOSGcFeVkErWRPYYu-vYo0h0TpafA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Thu, Jan 22, 2015 at 5:57 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Script used to test is attached (parallel_count.sh)
Why does this use EXPLAIN ANALYZE instead of \timing ?
> IBM POWER-7 16 cores, 64 hardware threads
> RAM = 64GB
>
> Table Size - 120GB
>
> Used below statements to create table -
> create table tbl_perf(c1 int, c2 char(1000));
> insert into tbl_perf values(generate_series(1,10000000),'aaaaa');
> insert into tbl_perf values(generate_series(10000001,30000000),'aaaaa');
> insert into tbl_perf values(generate_series(30000001,110000000),'aaaaa');
I generated this table using this same method and experimented with
copying the whole file to the bit bucket using dd. I did this on
hydra, which I think is the same machine you used.
time for i in `seq 0 119`; do if [ $i -eq 0 ]; then f=16388; else
f=16388.$i; fi; dd if=$f of=/dev/null bs=8k; done
There is a considerable amount of variation in the amount of time this
takes to run based on how much of the relation is cached. Clearly,
there's no way for the system to cache it all, but it can cache a
significant portion, and that affects the results to no small degree.
dd on hydra prints information on the data transfer rate; on uncached
1GB segments, it runs at right around 400 MB/s, but that can soar to
upwards of 3GB/s when the relation is fully cached. I tried flushing
the OS cache via echo 1 > /proc/sys/vm/drop_caches, and found that
immediately after doing that, the above command took 5m21s to run -
i.e. ~321000 ms. Most of your test times are faster than that, which
means they reflect some degree of caching. When I immediately reran
the command a second time, it finished in 4m18s the second time, or
~258000 ms. The rate was the same as the first test - about 400 MB/s
- for most of the files, but 27 of the last 28 files went much faster,
between 1.3 GB/s and 3.7 GB/s.
This tells us that the OS cache on this machine has anti-spoliation
logic in it, probably not dissimilar to what we have in PG. If the
data were cycled through the system cache in strict LRU fashion, any
data that was leftover from the first run would have been flushed out
by the early part of the second run, so that all the results from the
second set of runs would have hit the disk. But in fact, that's not
what happened: the last pages from the first run remained cached even
after reading an amount of new data that exceeds the size of RAM on
that machine. What I think this demonstrates is that we're going to
have to be very careful to control for caching effects, or we may find
that we get misleading results. To make this simpler, I've installed
a setuid binary /usr/bin/drop_caches that you (or anyone who has an
account on that machine) can use to drop the caches; run 'drop_caches
1'.
> Block-By-Block
>
> No. of workers/Time (ms) 0 2
> Run-1 267798 295051
> Run-2 276646 296665
> Run-3 281364 314952
> Run-4 290231 326243
> Run-5 288890 295684
The next thing I did was run test with the block-by-block method after
having dropped the caches. I did this with 0 workers and with 8
workers. I dropped the caches and restarted postgres before each
test, but then ran each test a second time to see the effect of
caching by both the OS and by PostgreSQL. I got these results:
With 0 workers, first run took 883465.352 ms, and second run took 295050.106 ms.
With 8 workers, first run took 340302.250 ms, and second run took 307767.758 ms.
This is a confusing result, because you expect parallelism to help
more when the relation is partly cached, and make little or no
difference when it isn't cached. But that's not what happened.
I've also got a draft of a prefetching implementation here that I'd
like to test out, but I've just discovered that it's buggy, so I'm
going to send these results for now and work on fixing that.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 21:46:44 |
Message-ID: | 20150127214644.GN3854@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Robert, all,
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> There is a considerable amount of variation in the amount of time this
> takes to run based on how much of the relation is cached. Clearly,
> there's no way for the system to cache it all, but it can cache a
> significant portion, and that affects the results to no small degree.
> dd on hydra prints information on the data transfer rate; on uncached
> 1GB segments, it runs at right around 400 MB/s, but that can soar to
> upwards of 3GB/s when the relation is fully cached. I tried flushing
> the OS cache via echo 1 > /proc/sys/vm/drop_caches, and found that
> immediately after doing that, the above command took 5m21s to run -
> i.e. ~321000 ms. Most of your test times are faster than that, which
> means they reflect some degree of caching. When I immediately reran
> the command a second time, it finished in 4m18s the second time, or
> ~258000 ms. The rate was the same as the first test - about 400 MB/s
> - for most of the files, but 27 of the last 28 files went much faster,
> between 1.3 GB/s and 3.7 GB/s.
[...]
> With 0 workers, first run took 883465.352 ms, and second run took 295050.106 ms.
> With 8 workers, first run took 340302.250 ms, and second run took 307767.758 ms.
>
> This is a confusing result, because you expect parallelism to help
> more when the relation is partly cached, and make little or no
> difference when it isn't cached. But that's not what happened.
These numbers seem to indicate that the oddball is the single-threaded
uncached run. If I followed correctly, the uncached 'dd' took 321s,
which is relatively close to the uncached-lots-of-workers and the two
cached runs. What in the world is the uncached single-thread case doing
that it takes an extra 543s, or over twice as long? It's clearly not
disk i/o which is causing the slowdown, based on your dd tests.
One possibility might be round-trip latency. The multi-threaded case is
able to keep the CPUs and the i/o system going, and the cached results
don't have as much latency since things are cached, but the
single-threaded uncached case going i/o -> cpu -> i/o -> cpu, ends up
with a lot of wait time as it switches between being on CPU and waiting
on the i/o.
Just some thoughts.
Thanks,
Stephen
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 23:00:54 |
Message-ID: | CA+Tgmoa-BCTZZQnoadfbwSVVpfdSxcwL1uDXBWoOC3S+zZo-tA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Fri, Jan 23, 2015 at 6:42 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Fixed-Chunks
>
> No. of workers/Time (ms)       0       2       4       8      16      24      32
> Run-1                     250536  266279  251263  234347   87930   50474   35474
> Run-2                     249587  230628  225648  193340   83036   35140    9100
> Run-3                     234963  220671  230002  256183  105382   62493   27903
> Run-4                     239111  245448  224057  189196  123780   63794   24746
> Run-5                     239937  222820  219025  220478  114007   77965   39766
I cannot reproduce these results. I applied your fixed-chunk size
patch and ran SELECT parallel_count('tbl_perf', 32) a few times. The
first thing I notice is that, as I predicted, there's an issue with
different workers finishing at different times. For example, from my
first run:
2015-01-27 22:13:09 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34700) exited with exit code 0
2015-01-27 22:13:09 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34698) exited with exit code 0
2015-01-27 22:13:09 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34701) exited with exit code 0
2015-01-27 22:13:10 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34699) exited with exit code 0
2015-01-27 22:15:00 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34683) exited with exit code 0
2015-01-27 22:15:29 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34673) exited with exit code 0
2015-01-27 22:15:58 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34679) exited with exit code 0
2015-01-27 22:16:38 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34689) exited with exit code 0
2015-01-27 22:16:39 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34671) exited with exit code 0
2015-01-27 22:16:47 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34677) exited with exit code 0
2015-01-27 22:16:47 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34672) exited with exit code 0
2015-01-27 22:16:48 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34680) exited with exit code 0
2015-01-27 22:16:50 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34686) exited with exit code 0
2015-01-27 22:16:51 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34670) exited with exit code 0
2015-01-27 22:16:51 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34690) exited with exit code 0
2015-01-27 22:16:51 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34674) exited with exit code 0
2015-01-27 22:16:52 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34684) exited with exit code 0
2015-01-27 22:16:53 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34675) exited with exit code 0
2015-01-27 22:16:53 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34682) exited with exit code 0
2015-01-27 22:16:53 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34691) exited with exit code 0
2015-01-27 22:16:54 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34676) exited with exit code 0
2015-01-27 22:16:54 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34685) exited with exit code 0
2015-01-27 22:16:55 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34692) exited with exit code 0
2015-01-27 22:16:56 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34687) exited with exit code 0
2015-01-27 22:16:56 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34678) exited with exit code 0
2015-01-27 22:16:57 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34681) exited with exit code 0
2015-01-27 22:16:57 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34688) exited with exit code 0
2015-01-27 22:16:59 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34694) exited with exit code 0
2015-01-27 22:16:59 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34693) exited with exit code 0
2015-01-27 22:17:02 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34695) exited with exit code 0
2015-01-27 22:17:02 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34697) exited with exit code 0
2015-01-27 22:17:02 UTC [34660] LOG: worker process: parallel worker
for PID 34668 (PID 34696) exited with exit code 0
That run started at 22:13:01. Within 4 seconds, 4 workers exited. So
clearly we are not getting the promised 32-way parallelism for the
whole test. Granted, in this instance, *most* of the workers run
until the end, but I think we'll find that there are
uncomfortably-frequent cases where we get significantly less
parallelism than we planned on because the work isn't divided evenly.
But leaving that aside, I've run this test 6 times in a row now, with
a warm cache, and the best time I have is 237310.042 ms and the worst
time I have is 242936.315 ms. So there's very little variation, and
it's reasonably close to the results I got with dd, suggesting that
the system is fairly well I/O bound. At a sequential read speed of
400 MB/s, 240 s = 96 GB of data. Assuming it takes no time at all to
process the cached data (which seems to be not far from wrong judging
by how quickly the first few workers exit), that means we're getting
24 GB of data from cache on a 64 GB machine. That seems a little low,
but if the kernel is refusing to cache the whole relation to avoid
cache-thrashing, it could be right.
Now, when you did what I understand to be the same test on the same
machine, you got times ranging from 9.1 seconds to 35.4 seconds.
Clearly, there is some difference between our test setups. Moreover,
I'm kind of suspicious about whether your results are actually
physically possible. Even in the best case where you somehow had the
maximum possible amount of data - 64 GB on a 64 GB machine - cached,
leaving no space for cache duplication between PG and the OS and no
space for the operating system or postgres itself - the table is 120
GB, so you've got to read *at least* 56 GB from disk. Reading 56 GB
from disk in 9 seconds represents an I/O rate of >6 GB/s. I grant that
there could be some speedup from issuing I/O requests in parallel
instead of serially, but that is a 15x speedup over dd, so I am a
little suspicious that there is some problem with the test setup,
especially because I cannot reproduce the results.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 23:43:32 |
Message-ID: | 54C822A4.7040106@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/26/15 11:11 PM, Amit Kapila wrote:
> On Tue, Jan 27, 2015 at 3:18 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com <mailto:Jim(dot)Nasby(at)bluetreble(dot)com>> wrote:
> >
> > On 1/23/15 10:16 PM, Amit Kapila wrote:
> >>
> >> Further, if we want to just get the benefit of parallel I/O, then
> >> I think we can get that by parallelising partition scan where different
> >> table partitions reside on different disk partitions, however that is
> >> a matter of separate patch.
> >
> >
> > I don't think we even have to go that far.
> >
> >
> > We'd be a lot less sensitive to IO latency.
> >
> > I wonder what kind of gains we would see if every SeqScan in a query spawned a worker just to read tuples and shove them in a queue (or shove a pointer to a buffer in the queue).
> >
>
> Here IIUC, you want to say that just get the read done by one parallel
> worker and then all expression calculation (evaluation of qualification
> and target list) in the main backend, it seems to me that by doing it
> that way, the benefit of parallelisation will be lost due to tuple
> communication overhead (may be the overhead is less if we just
> pass a pointer to buffer but that will have another kind of problems
> like holding buffer pins for a longer period of time).
>
> I could see the advantage of testing on lines as suggested by Tom Lane,
> but that seems to be not directly related to what we want to achieve by
> this patch (parallel seq scan) or if you think otherwise then let me know?
There's some low-hanging fruit when it comes to improving our IO performance (or more specifically, decreasing our sensitivity to IO latency). Perhaps the way to do that is with the parallel infrastructure, perhaps not. But I think it's premature to look at parallelism for increasing IO performance, or to worry about things like how many IO threads we should have, before we at least look at simpler things we could do. We shouldn't assume there's nothing to be gained short of a full parallelization implementation.
That's not to say there's nothing else we could use parallelism for. Sort, merge and hash operations come to mind.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-27 23:52:17 |
Message-ID: | 54C824B1.4020305@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/27/15 3:46 PM, Stephen Frost wrote:
>> With 0 workers, first run took 883465.352 ms, and second run took 295050.106 ms.
>> >With 8 workers, first run took 340302.250 ms, and second run took 307767.758 ms.
>> >
>> >This is a confusing result, because you expect parallelism to help
>> >more when the relation is partly cached, and make little or no
>> >difference when it isn't cached. But that's not what happened.
> These numbers seem to indicate that the oddball is the single-threaded
> uncached run. If I followed correctly, the uncached 'dd' took 321s,
> which is relatively close to the uncached-lots-of-workers and the two
> cached runs. What in the world is the uncached single-thread case doing
> that it takes an extra 543s, or over twice as long? It's clearly not
> disk i/o which is causing the slowdown, based on your dd tests.
>
> One possibility might be round-trip latency. The multi-threaded case is
> able to keep the CPUs and the i/o system going, and the cached results
> don't have as much latency since things are cached, but the
> single-threaded uncached case going i/o -> cpu -> i/o -> cpu, ends up
> with a lot of wait time as it switches between being on CPU and waiting
> on the i/o.
This exactly mirrors what I've seen on production systems. On a single SeqScan I can't get anywhere close to the IO performance I could get with dd. Once I got up to 4-8 SeqScans of different tables running together, I saw iostat numbers that were similar to what a single dd bs=8k would do. I've tested this with iSCSI SAN volumes on both 1Gbit and 10Gbit ethernet.
This is why I think that when it comes to IO performance, before we start worrying about real parallelization we should investigate ways to do some kind of async IO.
I only have my SSD laptop and a really old server to test on, but I'll try Tom's suggestion of adding a PrefetchBuffer call into heapgetpage() unless someone beats me to it. I should be able to do it tomorrow.
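Roughly what I have in mind is something like the following (just a hand-written sketch from memory, not a patch; PREFETCH_DISTANCE is a made-up constant that a real patch would presumably derive from a GUC such as effective_io_concurrency):

#define PREFETCH_DISTANCE 32    /* made-up lookahead distance */

void
heapgetpage(HeapScanDesc scan, BlockNumber page)
{
    BlockNumber prefetch_block = page + PREFETCH_DISTANCE;

    /* Hint the kernel about a block this scan will need shortly. */
    if (prefetch_block < scan->rs_nblocks)
        PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_block);

    /* ... existing heapgetpage() logic: ReadBuffer(), visibility checks ... */
}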
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 02:07:52 |
Message-ID: | CA+TgmoZkuyTn43o3rFpc6gfk==3x_T1gdXShHPXqY5jm-yNx=Q@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 27, 2015 at 4:46 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> With 0 workers, first run took 883465.352 ms, and second run took 295050.106 ms.
>> With 8 workers, first run took 340302.250 ms, and second run took 307767.758 ms.
>>
>> This is a confusing result, because you expect parallelism to help
>> more when the relation is partly cached, and make little or no
>> difference when it isn't cached. But that's not what happened.
>
> These numbers seem to indicate that the oddball is the single-threaded
> uncached run. If I followed correctly, the uncached 'dd' took 321s,
> which is relatively close to the uncached-lots-of-workers and the two
> cached runs. What in the world is the uncached single-thread case doing
> that it takes an extra 543s, or over twice as long? It's clearly not
> disk i/o which is causing the slowdown, based on your dd tests.
Yeah, I'm wondering if the disk just froze up on that run for a long
while, which has been known to occasionally happen on this machine,
because I can't reproduce that crappy number. I did the 0-worker test
a few more times, with the block-by-block method, dropping the caches
and restarting PostgreSQL each time, and got:
322222.968 ms
322873.325 ms
322967.722 ms
321759.273 ms
After that last run, I ran it a few more times without restarting
PostgreSQL or dropping the caches, and got:
257629.348 ms
289668.976 ms
290342.970 ms
258035.226 ms
284237.729 ms
Then I redid the 8-client test. Cold cache, I got 337312.554 ms. On
the rerun, 323423.813 ms. Third run, 324940.785.
There is more variability than I would like here. Clearly, it goes a
bit faster when the cache is warm, but that's about all I can say with
any confidence.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 02:16:41 |
Message-ID: | CA+TgmoZrFZPkJ0DRG81CkND3EDCdiafo9xZDhLTshafAAZPJUQ@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Tue, Jan 27, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Now, when you did what I understand to be the same test on the same
> machine, you got times ranging from 9.1 seconds to 35.4 seconds.
> Clearly, there is some difference between our test setups. Moreover,
> I'm kind of suspicious about whether your results are actually
> physically possible. Even in the best case where you somehow had the
> maximum possible amount of data - 64 GB on a 64 GB machine - cached,
> leaving no space for cache duplication between PG and the OS and no
> space for the operating system or postgres itself - the table is 120
> GB, so you've got to read *at least* 56 GB from disk. Reading 56 GB
> from disk in 9 seconds represents an I/O rate of >6 GB/s. I grant that
> there could be some speedup from issuing I/O requests in parallel
> instead of serially, but that is a 15x speedup over dd, so I am a
> little suspicious that there is some problem with the test setup,
> especially because I cannot reproduce the results.
So I thought about this a little more, and I realized after some
poking around that hydra's disk subsystem is actually six disks
configured in a software RAID5[1]. So one advantage of the
chunk-by-chunk approach you are proposing is that you might be able to
get all of the disks chugging away at once, because the data is
presumably striped across all of them. Reading one block at a time,
you'll never have more than 1 or 2 disks going, but if you do
sequential reads from a bunch of different places in the relation, you
might manage to get all 6. So that's something to think about.
One could imagine an algorithm like this: as long as there are more
1GB segments remaining than there are workers, each worker tries to
chug through a separate 1GB segment. When there are not enough 1GB
segments remaining for that to work, then they start ganging up on the
same segments. That way, you get the benefit of spreading out the I/O
across multiple files (and thus hopefully multiple members of the RAID
group) when the data is coming from disk, but you can still keep
everyone busy until the end, which will be important when the data is
all in-memory and you're just limited by CPU bandwidth.
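To make that concrete, here is a throwaway standalone sketch of the assignment policy (nothing to do with the actual patch; the segment, worker, and block counts are invented just to show the two phases):

#include <stdio.h>

#define SEGMENTS       10   /* number of 1GB segments in the relation */
#define WORKERS         4
#define BLOCKS_PER_SEG  8   /* tiny value just to keep the output short */

int
main(void)
{
    int next_seg = 0;

    /* Phase 1: while more segments remain than workers, each worker
     * chugs through a whole segment of its own. */
    while (SEGMENTS - next_seg > WORKERS)
    {
        printf("worker %d scans all of segment %d\n",
               next_seg % WORKERS, next_seg);
        next_seg++;
    }

    /* Phase 2: too few segments left to go around, so the workers gang
     * up on the remaining segments block by block. */
    for (int seg = next_seg; seg < SEGMENTS; seg++)
        for (int b = 0; b < BLOCKS_PER_SEG; b++)
            printf("worker %d scans segment %d block %d\n",
                   b % WORKERS, seg, b);

    return 0;
}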
All that aside, I still can't account for the numbers you are seeing.
When I run with your patch and what I think is your test case, I get
different (slower) numbers. And even if we've got 6 drives cranking
along at 400MB/s each, that's still only 2.4 GB/s, not >6 GB/s. So
I'm still perplexed.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[1] Not my idea.
From: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 07:08:13 |
Message-ID: | 54C88ADD.5010205@vmware.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 01/28/2015 04:16 AM, Robert Haas wrote:
> On Tue, Jan 27, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Now, when you did what I understand to be the same test on the same
>> machine, you got times ranging from 9.1 seconds to 35.4 seconds.
>> Clearly, there is some difference between our test setups. Moreover,
>> I'm kind of suspicious about whether your results are actually
>> physically possible. Even in the best case where you somehow had the
>> maximum possible amount of data - 64 GB on a 64 GB machine - cached,
>> leaving no space for cache duplication between PG and the OS and no
>> space for the operating system or postgres itself - the table is 120
>> GB, so you've got to read *at least* 56 GB from disk. Reading 56 GB
>> from disk in 9 seconds represents an I/O rate of >6 GB/s. I grant that
>> there could be some speedup from issuing I/O requests in parallel
>> instead of serially, but that is a 15x speedup over dd, so I am a
>> little suspicious that there is some problem with the test setup,
>> especially because I cannot reproduce the results.
>
> So I thought about this a little more, and I realized after some
> poking around that hydra's disk subsystem is actually six disks
> configured in a software RAID5[1]. So one advantage of the
> chunk-by-chunk approach you are proposing is that you might be able to
> get all of the disks chugging away at once, because the data is
> presumably striped across all of them. Reading one block at a time,
> you'll never have more than 1 or 2 disks going, but if you do
> sequential reads from a bunch of different places in the relation, you
> might manage to get all 6. So that's something to think about.
>
> One could imagine an algorithm like this: as long as there are more
> 1GB segments remaining than there are workers, each worker tries to
> chug through a separate 1GB segment. When there are not enough 1GB
> segments remaining for that to work, then they start ganging up on the
> same segments. That way, you get the benefit of spreading out the I/O
> across multiple files (and thus hopefully multiple members of the RAID
> group) when the data is coming from disk, but you can still keep
> everyone busy until the end, which will be important when the data is
> all in-memory and you're just limited by CPU bandwidth.
OTOH, spreading the I/O across multiple files is not a good thing, if
you don't have a RAID setup like that. With a single spindle, you'll
just induce more seeks.
Perhaps the OS is smart enough to read in large-enough chunks that the
occasional seek doesn't hurt much. But then again, why isn't the OS
smart enough to read in large-enough chunks to take advantage of the
RAID even when you read just a single file?
- Heikki
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 07:40:10 |
Message-ID: | CAA4eK1JdhxgwPH+kdq6fKuJkjcNZ0jCjvHRQX+D1PPHX9oMAfg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 12:38 PM, Heikki Linnakangas <
hlinnakangas(at)vmware(dot)com> wrote:
>
> On 01/28/2015 04:16 AM, Robert Haas wrote:
>>
>> On Tue, Jan 27, 2015 at 6:00 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
wrote:
>>>
>>> Now, when you did what I understand to be the same test on the same
>>> machine, you got times ranging from 9.1 seconds to 35.4 seconds.
>>> Clearly, there is some difference between our test setups. Moreover,
>>> I'm kind of suspicious about whether your results are actually
>>> physically possible. Even in the best case where you somehow had the
>>> maximum possible amount of data - 64 GB on a 64 GB machine - cached,
>>> leaving no space for cache duplication between PG and the OS and no
>>> space for the operating system or postgres itself - the table is 120
>>> GB, so you've got to read *at least* 56 GB from disk. Reading 56 GB
>>> from disk in 9 seconds represents an I/O rate of >6 GB/s. I grant that
>>> there could be some speedup from issuing I/O requests in parallel
>>> instead of serially, but that is a 15x speedup over dd, so I am a
>>> little suspicious that there is some problem with the test setup,
>>> especially because I cannot reproduce the results.
>>
>>
>> So I thought about this a little more, and I realized after some
>> poking around that hydra's disk subsystem is actually six disks
>> configured in a software RAID5[1]. So one advantage of the
>> chunk-by-chunk approach you are proposing is that you might be able to
>> get all of the disks chugging away at once, because the data is
>> presumably striped across all of them. Reading one block at a time,
>> you'll never have more than 1 or 2 disks going, but if you do
>> sequential reads from a bunch of different places in the relation, you
>> might manage to get all 6. So that's something to think about.
>>
>> One could imagine an algorithm like this: as long as there are more
>> 1GB segments remaining than there are workers, each worker tries to
>> chug through a separate 1GB segment. When there are not enough 1GB
>> segments remaining for that to work, then they start ganging up on the
>> same segments. That way, you get the benefit of spreading out the I/O
>> across multiple files (and thus hopefully multiple members of the RAID
>> group) when the data is coming from disk, but you can still keep
>> everyone busy until the end, which will be important when the data is
>> all in-memory and you're just limited by CPU bandwidth.
>
>
> OTOH, spreading the I/O across multiple files is not a good thing, if you
> don't have a RAID setup like that. With a single spindle, you'll just
> induce more seeks.
>
Yeah, if such a thing happens then there is less chance that the user
will get any major benefit from parallel sequential scan unless
the qualification expressions or other expressions used in the
statement are costly. So one way could be for the user either to
configure the parallel sequential scan parameters in such a way
that a parallel scan is performed only when it can be beneficial
(something like increasing parallel_tuple_comm_cost, or we can
have some other parameter), or simply not to use parallel sequential
scan (parallel_seqscan_degree = 0).
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 14:03:01 |
Message-ID: | CA+Tgmoa-XCu2Bkkieo=6X3sQNuOfzBoo_osMdeK9gR4NuObj7w@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 2:08 AM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> OTOH, spreading the I/O across multiple files is not a good thing, if you
> don't have a RAID setup like that. With a single spindle, you'll just induce
> more seeks.
>
> Perhaps the OS is smart enough to read in large-enough chunks that the
> occasional seek doesn't hurt much. But then again, why isn't the OS smart
> enough to read in large-enough chunks to take advantage of the RAID even
> when you read just a single file?
Suppose we have N spindles and N worker processes and it just so
happens that the amount of computation is such that each spindle can
keep one CPU busy. Let's suppose the chunk size is 4MB. If you read
from the relation at N staggered offsets, you might be lucky enough
that each one of them keeps a spindle busy, and you might be lucky
enough to have that stay true as the scans advance. You don't need
any particularly large amount of read-ahead; you just need to stay at
least one block ahead of the CPU. But if you read the relation in one
pass from beginning to end, you need at least N*4MB of read-ahead to
have data in cache for all N spindles, and the read-ahead will
certainly fail you at the end of every 1GB segment.
The problem here, as I see it, is that we're flying blind. If there's
just one spindle, I think it's got to be right to read the relation
sequentially. But if there are multiple spindles, it might not be,
but it seems hard to predict what we should do. We don't know what
the RAID chunk size is or how many spindles there are, so any guess as
to how to chunk up the relation and divide up the work between workers
is just a shot in the dark.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Thom Brown <thom(at)linux(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 14:12:49 |
Message-ID: | CAA-aLv4cPHaeGWZz3XW2rhrs=cBkf=F3SaW7acAreuSNeTmE7g@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 28 January 2015 at 14:03, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> The problem here, as I see it, is that we're flying blind. If there's
> just one spindle, I think it's got to be right to read the relation
> sequentially. But if there are multiple spindles, it might not be,
> but it seems hard to predict what we should do. We don't know what
> the RAID chunk size is or how many spindles there are, so any guess as
> to how to chunk up the relation and divide up the work between workers
> is just a shot in the dark.
Can't the planner take effective_io_concurrency into account?
Thom
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 15:29:52 |
Message-ID: | CAA4eK1+ACaW3=J4yYuhYfHqrm0MHWmR6hxQV09vn6thoekx0bw@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 7:46 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
>
> All that aside, I still can't account for the numbers you are seeing.
> When I run with your patch and what I think is your test case, I get
> different (slower) numbers. And even if we've got 6 drives cranking
> along at 400MB/s each, that's still only 2.4 GB/s, not >6 GB/s. So
> I'm still perplexed.
>
I have tried the tests again and found that I had forgotten to increase
max_worker_processes, which is why the data is so different. Basically,
at higher client counts the fixed-chunk approach was just scanning a
smaller number of blocks. So today I tried again after changing
max_worker_processes and found that there is not much difference in
performance at higher client counts. I will take some more data for
both the block-by-block and fixed-chunk approaches and repost it.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Thom Brown <thom(at)linux(dot)com> |
Cc: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 15:38:56 |
Message-ID: | CA+TgmoYM4WT2sQ4W9LVRV30272hetha3-f+XKn1F4hLo63VMJA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 9:12 AM, Thom Brown <thom(at)linux(dot)com> wrote:
> On 28 January 2015 at 14:03, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> The problem here, as I see it, is that we're flying blind. If there's
>> just one spindle, I think it's got to be right to read the relation
>> sequentially. But if there are multiple spindles, it might not be,
>> but it seems hard to predict what we should do. We don't know what
>> the RAID chunk size is or how many spindles there are, so any guess as
>> to how to chunk up the relation and divide up the work between workers
>> is just a shot in the dark.
>
> Can't the planner take effective_io_concurrency into account?
Maybe. It's answering somewhat the right question -- telling us how
many parallel I/O channels we think we've got. But I'm not quite sure
what to do with that information in this case. I mean, if we've
got effective_io_concurrency = 6, does that mean it's right to start
scans in 6 arbitrary places in the relation and hope that keeps all
the drives busy? That seems like throwing darts at the wall. We have
no idea which parts are on which underlying devices. Or maybe it means
we should prefetch 24MB, on the assumption that the RAID stripe is
4MB? That's definitely blind guesswork.
Considering the email Amit just sent, it looks like on this machine,
regardless of what algorithm we used, the scan took between 3 minutes
and 5.5 minutes, and most of them took between 4 minutes and 5.5
minutes. The results aren't very predictable, more workers don't
necessarily help, and it's not really clear that any algorithm we've
tried is clearly better than any other. I experimented with
prefetching a bit yesterday, too, and it was pretty much the same.
Some settings made it slightly faster. Others made it slower. Whee!
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 15:40:47 |
Message-ID: | 30549.1422459647@sss.pgh.pa.us |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> The problem here, as I see it, is that we're flying blind. If there's
> just one spindle, I think it's got to be right to read the relation
> sequentially. But if there are multiple spindles, it might not be,
> but it seems hard to predict what we should do. We don't know what
> the RAID chunk size is or how many spindles there are, so any guess as
> to how to chunk up the relation and divide up the work between workers
> is just a shot in the dark.
I thought the proposal to chunk on the basis of "each worker processes
one 1GB-sized segment" should work all right. The kernel should see that
as sequential reads of different files, issued by different processes;
and if it can't figure out how to process that efficiently then it's a
very sad excuse for a kernel.
You are right that trying to do any detailed I/O scheduling by ourselves
is a doomed exercise. For better or worse, we have kept ourselves at
sufficient remove from the hardware that we can't possibly do that
successfully.
regards, tom lane
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 15:42:27 |
Message-ID: | CA+TgmobkE0wqxjb6-zimo4ZxbGwF36czAu8x1Ema1BAdMhgoLA@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 10:40 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> The problem here, as I see it, is that we're flying blind. If there's
>> just one spindle, I think it's got to be right to read the relation
>> sequentially. But if there are multiple spindles, it might not be,
>> but it seems hard to predict what we should do. We don't know what
>> the RAID chunk size is or how many spindles there are, so any guess as
>> to how to chunk up the relation and divide up the work between workers
>> is just a shot in the dark.
>
> I thought the proposal to chunk on the basis of "each worker processes
> one 1GB-sized segment" should work all right. The kernel should see that
> as sequential reads of different files, issued by different processes;
> and if it can't figure out how to process that efficiently then it's a
> very sad excuse for a kernel.
I agree. But there's only value in doing something like that if we
have evidence that it improves anything. Such evidence is presently a
bit thin on the ground.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 15:49:46 |
Message-ID: | 30774.1422460186@sss.pgh.pa.us |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Wed, Jan 28, 2015 at 10:40 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I thought the proposal to chunk on the basis of "each worker processes
>> one 1GB-sized segment" should work all right. The kernel should see that
>> as sequential reads of different files, issued by different processes;
>> and if it can't figure out how to process that efficiently then it's a
>> very sad excuse for a kernel.
> I agree. But there's only value in doing something like that if we
> have evidence that it improves anything. Such evidence is presently a
> bit thin on the ground.
Well, of course none of this should get committed without convincing
evidence that it's a win. But I think that chunking on relation segment
boundaries is a plausible way of dodging the problem that we can't do
explicitly hardware-aware scheduling.
regards, tom lane
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 15:56:48 |
Message-ID: | 20150128155648.GV3854@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Wed, Jan 28, 2015 at 10:40 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > I thought the proposal to chunk on the basis of "each worker processes
> > one 1GB-sized segment" should work all right. The kernel should see that
> > as sequential reads of different files, issued by different processes;
> > and if it can't figure out how to process that efficiently then it's a
> > very sad excuse for a kernel.
Agreed.
> I agree. But there's only value in doing something like that if we
> have evidence that it improves anything. Such evidence is presently a
> bit thin on the ground.
You need an i/o subsystem that's fast enough to keep a single CPU busy,
otherwise (as you mentioned elsewhere), you're just going to be i/o
bound and having more processes isn't going to help (and could hurt).
Such i/o systems do exist, but a single RAID5 group over spinning rust
with a simple filter isn't going to cut it with a modern CPU- we're just
too darn efficient to end up i/o bound in that case. A more complex
filter might be able to change it over to being more CPU bound than i/o
bound and produce the performance improvements you're looking for.
The caveat to this is if you have multiple i/o *channels* (which it
looks like you don't in this case) where you can parallelize across
those channels by having multiple processes involved. We only support
multiple i/o channels today with tablespaces and we can't span tables
across tablespaces. That's a problem when working with large data sets,
but I'm hopeful that this work will eventually lead to a parallelized
Append node that operates against a partitioned/inherited table to work
across multiple tablespaces.
Thanks,
Stephen
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 16:02:30 |
Message-ID: | 20150128160230.GW3854@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
* Stephen Frost (sfrost(at)snowman(dot)net) wrote:
> Such i/o systems do exist, but a single RAID5 group over spinning rust
> with a simple filter isn't going to cut it with a modern CPU- we're just
> too darn efficient to end up i/o bound in that case.
err, to *not* end up i/o bound.
Thanks,
Stephen
From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-28 20:36:58 |
Message-ID: | 54C9486A.6050101@BlueTreble.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On 1/28/15 9:56 AM, Stephen Frost wrote:
> * Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
>> On Wed, Jan 28, 2015 at 10:40 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> I thought the proposal to chunk on the basis of "each worker processes
>>> one 1GB-sized segment" should work all right. The kernel should see that
>>> as sequential reads of different files, issued by different processes;
>>> and if it can't figure out how to process that efficiently then it's a
>>> very sad excuse for a kernel.
>
> Agreed.
>
>> I agree. But there's only value in doing something like that if we
>> have evidence that it improves anything. Such evidence is presently a
>> bit thin on the ground.
>
> You need an i/o subsystem that's fast enough to keep a single CPU busy,
> otherwise (as you mentioned elsewhere), you're just going to be i/o
> bound and having more processes isn't going to help (and could hurt).
>
> Such i/o systems do exist, but a single RAID5 group over spinning rust
> with a simple filter isn't going to cut it with a modern CPU- we're just
> too darn efficient to end up i/o bound in that case. A more complex
> filter might be able to change it over to being more CPU bound than i/o
> bound and produce the performance improvements you're looking for.
Except we're nowhere near being IO efficient. The vast difference between Postgres IO rates and dd shows this. I suspect that's because we're not giving the OS a list of IO to perform while we're doing our thing, but that's just a guess.
> The caveat to this is if you have multiple i/o *channels* (which it
> looks like you don't in this case) where you can parallelize across
> those channels by having multiple processes involved.
Keep in mind that multiple processes is in no way a requirement for that. Async IO would do that, or even just requesting stuff from the OS before we need it.
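As a standalone illustration of what I mean by requesting stuff from the OS ahead of time from a single process (this is not PostgreSQL code; the file name, block size, and lookahead distance are all made up):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK_SIZE 8192
#define LOOKAHEAD    16     /* blocks to request ahead of where we read */

int
main(void)
{
    int  fd = open("/tmp/bigfile", O_RDONLY);
    char buf[BLOCK_SIZE];

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    for (off_t blk = 0;; blk++)
    {
        /* Tell the kernel which block we will want shortly, then get on
         * with processing the current one while it reads ahead. */
        posix_fadvise(fd, (blk + LOOKAHEAD) * BLOCK_SIZE, BLOCK_SIZE,
                      POSIX_FADV_WILLNEED);

        ssize_t n = pread(fd, buf, BLOCK_SIZE, blk * BLOCK_SIZE);
        if (n <= 0)
            break;          /* EOF (or error) ends the scan */

        /* ... per-block work: qual evaluation, counting, etc. ... */
    }

    close(fd);
    return 0;
}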
> We only support
> multiple i/o channels today with tablespaces and we can't span tables
> across tablespaces. That's a problem when working with large data sets,
> but I'm hopeful that this work will eventually lead to a parallelized
> Append node that operates against a partitioned/inherited table to work
> across multiple tablespaces.
Until we can get a single seqscan close to dd performance, I fear worrying about tablespaces and IO channels is entirely premature.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-29 01:27:21 |
Message-ID: | 20150129012721.GB3854@tamriel.snowman.net |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Jim,
* Jim Nasby (Jim(dot)Nasby(at)BlueTreble(dot)com) wrote:
> On 1/28/15 9:56 AM, Stephen Frost wrote:
> >Such i/o systems do exist, but a single RAID5 group over spinning rust
> >with a simple filter isn't going to cut it with a modern CPU- we're just
> >too darn efficient to end up i/o bound in that case. A more complex
> >filter might be able to change it over to being more CPU bound than i/o
> >bound and produce the performance improvements you're looking for.
>
> Except we're nowhere near being IO efficient. The vast difference between Postgres IO rates and dd shows this. I suspect that's because we're not giving the OS a list of IO to perform while we're doing our thing, but that's just a guess.
Uh, huh? The dd was ~321000 and the slowest uncached PG run from
Robert's latest tests was 337312.554, based on my inbox history at
least. I don't consider ~4-5% difference to be vast.
> >The caveat to this is if you have multiple i/o *channels* (which it
> >looks like you don't in this case) where you can parallelize across
> >those channels by having multiple processes involved.
>
> Keep in mind that multiple processes is in no way a requirement for that. Async IO would do that, or even just requesting stuff from the OS before we need it.
While I agree with this in principle, experience has shown that it
doesn't tend to work out as well as we'd like with a single process.
> > We only support
> >multiple i/o channels today with tablespaces and we can't span tables
> >across tablespaces. That's a problem when working with large data sets,
> >but I'm hopeful that this work will eventually lead to a parallelized
> >Append node that operates against a partitioned/inherited table to work
> >across multiple tablespaces.
>
> Until we can get a single seqscan close to dd performance, I fear worrying about tablespaces and IO channels is entirely premature.
I feel like one of us is misunderstanding the numbers, which is probably
in part because they're a bit piecemeal over email, but the seqscan
speed in this case looks pretty close to dd performance for this
particular test, when things are uncached. Cached numbers are
different, but that's not what we're discussing here, I don't think.
Don't get me wrong- I've definitely seen cases where we're CPU bound
because of complex filters, etc, but that doesn't seem to be the case
here.
Thanks!
Stephen
From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-29 02:59:41 |
Message-ID: | CA+TgmoYh+dRcaHXbPTR40=vX0dguuWtDMK3=4bCG6nVOKDJvVg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 8:27 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> I feel like one of us is misunderstanding the numbers, which is probably
> in part because they're a bit piecemeal over email, but the seqscan
> speed in this case looks pretty close to dd performance for this
> particular test, when things are uncached. Cached numbers are
> different, but that's not what we're discussing here, I don't think.
>
> Don't get me wrong- I've definitely seen cases where we're CPU bound
> because of complex filters, etc, but that doesn't seem to be the case
> here.
To try to clarify a bit: What we've been testing here is a function I wrote
called parallel_count(regclass), which counts all the visible tuples
in a named relation. That runs as fast as dd, and giving it extra
workers or prefetching or the ability to read the relation with
different I/O patterns never seems to speed anything up very much.
The story with parallel sequential scan itself may well be different,
since that has a lot more CPU overhead than a dumb-simple
tuple-counter.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: | Daniel Bausch <bausch(at)dvs(dot)tu-darmstadt(dot)de> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Thom Brown <thom(at)linux(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-29 07:10:10 |
Message-ID: | 8761bqrrjh.fsf@gelnhausen.dvs.informatik.tu-darmstadt.de |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Wed, Jan 28, 2015 at 9:12 AM, Thom Brown <thom(at)linux(dot)com> wrote:
>> On 28 January 2015 at 14:03, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> The problem here, as I see it, is that we're flying blind. If there's
>>> just one spindle, I think it's got to be right to read the relation
>>> sequentially. But if there are multiple spindles, it might not be,
>>> but it seems hard to predict what we should do. We don't know what
>>> the RAID chunk size is or how many spindles there are, so any guess as
>>> to how to chunk up the relation and divide up the work between workers
>>> is just a shot in the dark.
>>
>> Can't the planner take effective_io_concurrency into account?
>
> Maybe. It's answering somewhat the right question -- telling us how
> many parallel I/O channels we think we've got. But I'm not quite sure
> what to do with that information in this case. I mean, if we've
> got effective_io_concurrency = 6, does that mean it's right to start
> scans in 6 arbitrary places in the relation and hope that keeps all
> the drives busy? That seems like throwing darts at the wall. We have
> no idea which parts are on which underlying devices. Or maybe it means
> we should prefetch 24MB, on the assumption that the RAID stripe is
> 4MB? That's definitely blind guesswork.
>
> Considering the email Amit just sent, it looks like on this machine,
> regardless of what algorithm we used, the scan took between 3 minutes
> and 5.5 minutes, and most of them took between 4 minutes and 5.5
> minutes. The results aren't very predictable, more workers don't
> necessarily help, and it's not really clear that any algorithm we've
> tried is clearly better than any other. I experimented with
> prefetching a bit yesterday, too, and it was pretty much the same.
> Some settings made it slightly faster. Others made it slower. Whee!
I researched this topic quite a while ago. One notable fact is
that active prefetching disables the automatic readahead prefetching done
by the Linux kernel, which can occur at larger granularities than 8K.
Automatic readahead prefetching occurs when consecutive addresses are
read, which may happen in a seqscan but also by "accident" through an
indexscan in correlated cases.
My conclusion was to NOT prefetch seqscans, because the OS does well enough
without advice. Prefetching indexscan heap accesses is very valuable,
though; you need to detect the accidental sequential accesses so as
not to hurt your performance in correlated cases.
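The detection itself can stay very simple. A minimal sketch of the kind of heuristic I mean (invented names, not code from any patch): remember the last heap block seen and only issue an explicit prefetch when the next block does not continue a sequential run.

#include <stdbool.h>

typedef unsigned int BlockNumber;

typedef struct PrefetchState
{
    BlockNumber last_block;     /* last heap block fetched by the indexscan */
    int         seq_run;        /* length of the current sequential run     */
} PrefetchState;

/* Return true if the caller should issue an explicit prefetch for
 * next_block; false if kernel readahead can be trusted to handle it. */
static bool
want_prefetch(PrefetchState *st, BlockNumber next_block)
{
    if (next_block == st->last_block + 1)
        st->seq_run++;          /* still sequential: leave it to readahead */
    else
        st->seq_run = 0;        /* random jump: readahead will not help    */

    st->last_block = next_block;
    return (st->seq_run == 0);
}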
In general I can give you the hint to not only focus on HDDs with their
single spindle. A single SATA SSD scales up to 32 (31 on Linux)
requests in parallel (without RAID or anything else). The difference in
throughput is extreme for this type of storage device. While single
spinning HDDs can only gain up to ~20% by NCQ, SATA SSDs can easily gain
up to 700%.
+1 for using effective_io_concurrency to tune for this, since
prefetching random addresses is effectively a type of parallel I/O.
Regards,
Daniel
--
MSc. Daniel Bausch
Research Assistant (Computer Science)
Technische Universität Darmstadt
http://www.dvs.tu-darmstadt.de/staff/dbausch
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, John Gorman <johngorman2(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Parallel Seq Scan |
Date: | 2015-01-29 12:21:54 |
Message-ID: | CAA4eK1JHCmN2X1LjQ4bOmLApt+btOuid5Vqqk5G6dDFV69iyHg@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Lists: | pgsql-hackers |
On Wed, Jan 28, 2015 at 8:59 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
>
> I have tried the tes