From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: karen(dot)talarico(at)swarm64(dot)com
Subject: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2021-03-12 19:03:44
Message-ID: 16925-ec96d83529d0d629@postgresql.org
Lists: pgsql-bugs
The following bug has been logged on the website:
Bug reference: 16925
Logged by: Karen Talarico
Email address: karen(dot)talarico(at)swarm64(dot)com
PostgreSQL version: 12.6
Operating system: CentOS Linux 8 Kernel: Linux 4.18.0-240.1.1.el8_3.
Description:
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
376GB RAM
Using TPC-H benchmark scale-factor 1000. To recreate dataset, see
https://github.com/swarm64/s64da-benchmark-toolkit. Use psql_native
schema.
Log:
2021-03-12 19:45:37.352 CET [316243] ERROR: XX000: invalid DSA memory alloc
request size 1073741824
2021-03-12 19:45:37.352 CET [316243] LOCATION: dsa_allocate_extended,
dsa.c:677
2021-03-12 19:45:37.352 CET [316243] STATEMENT: EXPLAIN (ANALYZE, VERBOSE,
COSTS)
select
o_orderpriority,
count(*) as order_count
from
orders
where
o_orderdate >= date '1994-01-01'
and o_orderdate < date '1994-01-01' + interval '3' month
and exists (
select
*
from
lineitem
where
l_orderkey = o_orderkey
and l_commitdate < l_receiptdate
)
group by
o_orderpriority
order by
o_orderpriority;
2021-03-12 19:45:37.352 CET [316224] ERROR: XX000: invalid DSA memory alloc
request size 1073741824
2021-03-12 19:45:37.352 CET [316224] CONTEXT: parallel worker
2021-03-12 19:45:37.352 CET [316224] LOCATION: dsa_allocate_extended,
dsa.c:677
2021-03-12 19:45:37.352 CET [316224] STATEMENT: EXPLAIN (ANALYZE, VERBOSE,
COSTS)
select
o_orderpriority,
count(*) as order_count
from
orders
where
o_orderdate >= date '1994-01-01'
and o_orderdate < date '1994-01-01' + interval '3' month
and exists (
select
*
from
lineitem
where
l_orderkey = o_orderkey
and l_commitdate < l_receiptdate
)
group by
o_orderpriority
order by
o_orderpriority;
2021-03-12 19:45:37.353 CET [316242] FATAL: 57P01: terminating connection
due to administrator command
2021-03-12 19:45:37.353 CET [316242] LOCATION: ProcessInterrupts,
postgres.c:3023
2021-03-12 19:45:37.353 CET [316242] STATEMENT: EXPLAIN (ANALYZE, VERBOSE,
COSTS)
select
o_orderpriority,
count(*) as order_count
from
orders
where
o_orderdate >= date '1994-01-01'
and o_orderdate < date '1994-01-01' + interval '3' month
and exists (
select
*
from
lineitem
where
l_orderkey = o_orderkey
and l_commitdate < l_receiptdate
)
group by
o_orderpriority
order by
o_orderpriority;
[The same FATAL 57P01 "terminating connection due to administrator command"
message, LOCATION, and STATEMENT were logged for parallel workers 316239,
316241, 316240, 316238, 316237, 316234, 316236, 316232, 316235, 316233,
316230, 316231, 316228, and 316229.]
2021-03-12 19:45:37.909 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316243) exited with exit code 1
2021-03-12 19:45:37.909 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.654 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316231) exited with exit code 1
2021-03-12 19:45:38.654 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.655 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316236) exited with exit code 1
2021-03-12 19:45:38.655 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.656 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316230) exited with exit code 1
2021-03-12 19:45:38.656 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.657 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316234) exited with exit code 1
2021-03-12 19:45:38.657 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.658 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316235) exited with exit code 1
2021-03-12 19:45:38.658 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.659 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316238) exited with exit code 1
2021-03-12 19:45:38.659 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.661 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316239) exited with exit code 1
2021-03-12 19:45:38.661 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.662 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316242) exited with exit code 1
2021-03-12 19:45:38.662 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.674 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316237) exited with exit code 1
2021-03-12 19:45:38.674 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.686 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316229) exited with exit code 1
2021-03-12 19:45:38.686 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.688 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316233) exited with exit code 1
2021-03-12 19:45:38.688 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.689 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316240) exited with exit code 1
2021-03-12 19:45:38.689 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.690 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316241) exited with exit code 1
2021-03-12 19:45:38.690 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:38.692 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316232) exited with exit code 1
2021-03-12 19:45:38.692 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
2021-03-12 19:45:49.644 CET [314918] LOG: 00000: background worker
"parallel worker" (PID 316228) exited with exit code 1
2021-03-12 19:45:49.644 CET [314918] LOCATION: LogChildExit,
postmaster.c:3671
PG_SETTINGS:
name | setting
---------------------------------------------------+--------------------------------------------
allow_system_table_mods | off
application_name | psql
archive_cleanup_command |
archive_command | (disabled)
archive_mode | off
archive_timeout | 0
array_nulls | on
authentication_timeout | 60
autovacuum | on
autovacuum_analyze_scale_factor | 0
autovacuum_analyze_threshold | 50
autovacuum_freeze_max_age | 200000000
autovacuum_max_workers | 15
autovacuum_multixact_freeze_max_age | 400000000
autovacuum_naptime | 1
autovacuum_vacuum_cost_delay | 2
autovacuum_vacuum_cost_limit | -1
autovacuum_vacuum_scale_factor | 0
autovacuum_vacuum_threshold | 50
autovacuum_work_mem | -1
backend_flush_after | 0
backslash_quote | safe_encoding
bgwriter_delay | 200
bgwriter_flush_after | 64
bgwriter_lru_maxpages | 100
bgwriter_lru_multiplier | 2
block_size | 8192
bonjour | off
bonjour_name |
bytea_output | hex
check_function_bodies | on
checkpoint_completion_target | 0.5
checkpoint_flush_after | 32
checkpoint_timeout | 300
checkpoint_warning | 30
client_encoding | UTF8
client_min_messages | notice
cluster_name |
commit_delay | 0
commit_siblings | 5
config_file | /mnt/ssd_storage/pg12-host/postgresql.conf
constraint_exclusion | partition
cpu_index_tuple_cost | 0.005
cpu_operator_cost | 0.0025
cpu_tuple_cost | 0.01
cursor_tuple_fraction | 0.1
data_checksums | off
data_directory | /mnt/ssd_storage/pg12-host
data_directory_mode | 0700
data_sync_retry | off
DateStyle | ISO, MDY
db_user_namespace | off
deadlock_timeout | 1000
debug_assertions | off
debug_pretty_print | on
debug_print_parse | off
debug_print_plan | off
debug_print_rewritten | off
default_statistics_target | 2500
default_table_access_method | heap
default_tablespace |
default_text_search_config | pg_catalog.english
default_transaction_deferrable | off
default_transaction_isolation | read committed
default_transaction_read_only | off
dynamic_library_path | $libdir
dynamic_shared_memory_type | posix
effective_cache_size | 37748736
effective_io_concurrency | 128
enable_bitmapscan | on
enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
enable_indexscan | on
enable_material | off
enable_mergejoin | on
enable_nestloop | on
enable_parallel_append | on
enable_parallel_hash | on
enable_partition_pruning | on
enable_partitionwise_aggregate | off
enable_partitionwise_join | off
enable_seqscan | on
enable_sort | on
enable_tidscan | on
escape_string_warning | on
event_source | PostgreSQL
exit_on_error | off
external_pid_file |
extra_float_digits | 1
force_parallel_mode | off
from_collapse_limit | 8
fsync | on
full_page_writes | on
geqo | on
geqo_effort | 5
geqo_generations | 0
geqo_pool_size | 0
geqo_seed | 0
geqo_selection_bias | 2
geqo_threshold | 12
gin_fuzzy_search_limit | 0
gin_pending_list_limit | 4096
hba_file | /mnt/ssd_storage/pg12-host/pg_hba.conf
hot_standby | on
hot_standby_feedback | off
huge_pages | try
ident_file | /mnt/ssd_storage/pg12-host/pg_ident.conf
idle_in_transaction_session_timeout | 0
ignore_checksum_failure | off
ignore_system_indexes | off
integer_datetimes | on
IntervalStyle | postgres
jit | on
jit_above_cost | 100000
jit_debugging_support | off
jit_dump_bitcode | off
jit_expressions | on
jit_inline_above_cost | 500000
jit_optimize_above_cost | 500000
jit_profiling_support | off
jit_provider | llvmjit
jit_tuple_deforming | on
join_collapse_limit | 8
krb_caseins_users | off
krb_server_keyfile | FILE:/etc/sysconfig/pgsql/krb5.keytab
lc_collate | en_US.utf-8
lc_ctype | en_US.utf-8
lc_messages | en_US.utf-8
lc_monetary | en_US.utf-8
lc_numeric | en_US.utf-8
lc_time | en_US.utf-8
listen_addresses | localhost
lo_compat_privileges | off
local_preload_libraries |
lock_timeout | 0
log_autovacuum_min_duration | -1
log_checkpoints | off
log_connections | off
log_destination | stderr
log_directory | log
log_disconnections | off
log_duration | off
log_error_verbosity | verbose
log_executor_stats | off
log_file_mode | 0600
log_filename | postgresql-%a.log
log_hostname | off
log_line_prefix | %m [%p]
log_lock_waits | off
log_min_duration_statement | -1
log_min_error_statement | error
log_min_messages | warning
log_parser_stats | off
log_planner_stats | off
log_replication_commands | off
log_rotation_age | 1440
log_rotation_size | 0
log_statement | none
log_statement_stats | off
log_temp_files | -1
log_timezone | Europe/Berlin
log_transaction_sample_rate | 0
log_truncate_on_rotation | on
logging_collector | on
maintenance_work_mem | 16777216
max_connections | 100
max_files_per_process | 1000
max_function_args | 100
max_identifier_length | 63
max_index_keys | 32
max_locks_per_transaction | 64
max_logical_replication_workers | 4
max_parallel_maintenance_workers | 32
max_parallel_workers | 1000
max_parallel_workers_per_gather | 52
max_pred_locks_per_page | 2
max_pred_locks_per_relation | -2
max_pred_locks_per_transaction | 64
max_prepared_transactions | 0
max_replication_slots | 10
max_stack_depth | 2048
max_standby_archive_delay | 30000
max_standby_streaming_delay | 30000
max_sync_workers_per_subscription | 2
max_wal_senders | 10
max_wal_size | 102400
max_worker_processes | 1024
min_parallel_index_scan_size | 64
min_parallel_table_scan_size | 0
min_wal_size | 4096
old_snapshot_threshold | -1
operator_precedence_warning | off
parallel_leader_participation | off
parallel_setup_cost | 500
parallel_tuple_cost | 0.01
password_encryption | md5
plan_cache_mode | auto
port | 5432
post_auth_delay | 0
pre_auth_delay | 0
primary_conninfo |
primary_slot_name |
promote_trigger_file |
quote_all_identifiers | off
random_page_cost | 2
recovery_end_command |
recovery_min_apply_delay | 0
recovery_target |
recovery_target_action | pause
recovery_target_inclusive | on
recovery_target_lsn |
recovery_target_name |
recovery_target_time |
recovery_target_timeline | latest
recovery_target_xid |
restart_after_crash | on
restore_command |
row_security | on
search_path | "$user", public
segment_size | 131072
seq_page_cost | 1
server_encoding | UTF8
server_version | 12.6
server_version_num | 120006
session_preload_libraries |
session_replication_role | origin
shared_buffers | 12582912
shared_memory_type | mmap
shared_preload_libraries | swarm64da
ssl | off
ssl_ca_file |
ssl_cert_file | server.crt
ssl_ciphers | HIGH:MEDIUM:+3DES:!aNULL
ssl_crl_file |
ssl_dh_params_file |
ssl_ecdh_curve | prime256v1
ssl_key_file | server.key
ssl_library | OpenSSL
ssl_max_protocol_version |
ssl_min_protocol_version | TLSv1
ssl_passphrase_command |
ssl_passphrase_command_supports_reload | off
ssl_prefer_server_ciphers | on
standard_conforming_strings | on
statement_timeout | 0
stats_temp_directory | pg_stat_tmp
superuser_reserved_connections | 3
swarm64da.auto_analyze_max_age | 604800
swarm64da.columnstore_updater_sleep_time | 60
swarm64da.columnstore_updater_workers | 4
swarm64da.cost_scan_page | 0.01
swarm64da.cost_scan_startup | 100
swarm64da.enable_auto_analyze | on
swarm64da.enable_columnstore | on
swarm64da.enable_columnstore_updater | on
swarm64da.enable_join | on
swarm64da.enable_not_in_to_not_exists | on
swarm64da.enable_notice_not_created_extension | on
swarm64da.enable_outputbuffer | on
swarm64da.enable_planner_improvements | on
swarm64da.enable_seqscan | on
swarm64da.enable_settings_advisor | on
swarm64da.enable_shuffle | on
swarm64da.enable_shuffle_clause_minimization | on
swarm64da.enable_shuffled_aggregate | on
swarm64da.enable_shuffled_distinct | on
swarm64da.enable_unnesting | on
swarm64da.enable_workload_manager | off
swarm64da.maximize_parallel_workers | on
swarm64da.workload_manager_bypass_cost | 10000
swarm64da.workload_manager_max_concurrent_queries | 512
synchronize_seqscans | on
synchronous_commit | on
synchronous_standby_names |
syslog_facility | local0
syslog_ident | postgres
syslog_sequence_numbers | on
syslog_split_messages | on
tcp_keepalives_count | 9
tcp_keepalives_idle | 7200
tcp_keepalives_interval | 75
tcp_user_timeout | 0
temp_buffers | 524288
temp_file_limit | -1
temp_tablespaces |
TimeZone | Europe/Berlin
timezone_abbreviations | Default
trace_notify | off
trace_recovery_messages | log
trace_sort | off
track_activities | on
track_activity_query_size | 1024
track_commit_timestamp | off
track_counts | on
track_functions | none
track_io_timing | off
transaction_deferrable | off
transaction_isolation | read committed
transaction_read_only | off
transform_null_equals | off
unix_socket_directories | /var/run/postgresql, /tmp
unix_socket_group |
unix_socket_permissions | 0777
update_process_title | on
vacuum_cleanup_index_scale_factor | 0.1
vacuum_cost_delay | 0
vacuum_cost_limit | 200
vacuum_cost_page_dirty | 20
vacuum_cost_page_hit | 1
vacuum_cost_page_miss | 10
vacuum_defer_cleanup_age | 0
vacuum_freeze_min_age | 50000000
vacuum_freeze_table_age | 150000000
vacuum_multixact_freeze_min_age | 5000000
vacuum_multixact_freeze_table_age | 150000000
wal_block_size | 8192
wal_buffers | 2048
wal_compression | off
wal_consistency_checking |
wal_init_zero | on
wal_keep_segments | 0
wal_level | replica
wal_log_hints | off
wal_receiver_status_interval | 10
wal_receiver_timeout | 60000
wal_recycle | on
wal_retrieve_retry_interval | 5000
wal_segment_size | 16777216
wal_sender_timeout | 60000
wal_sync_method | fdatasync
wal_writer_delay | 200
wal_writer_flush_after | 128
work_mem | 6291456
xmlbinary | base64
xmloption | content
zero_damaged_pages | off
(338 rows)
OS settings:
fs.aio-max-nr=1048576
fs.file-max=1000000
kernel.numa_balancing=0
vm.dirty_background_bytes=134217728
vm.overcommit_memory=1
vm.swappiness=0
vm.zone_reclaim_mode=0
From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: karen(dot)talarico(at)swarm64(dot)com, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2021-03-13 10:21:38
Message-ID: CA+hUKGKo5N65N2WG7ywO0Wf7p1gBduB-UJO1QzpQTN1xSxoROg@mail.gmail.com
Lists: pgsql-bugs
On Sat, Mar 13, 2021 at 8:12 AM PG Bug reporting form
<noreply(at)postgresql(dot)org> wrote:
> Using TPC-H benchmark scale-factor 1000. To recreate dataset, see
> https://github.com/swarm64/s64da-benchmark-toolkit. Use psql_native
> schema.
I'd like to reproduce this but it may take me some time. Can you
please show the query plan?
> 2021-03-12 19:45:37.352 CET [316243] ERROR: XX000: invalid DSA memory alloc
> request size 1073741824
So, this means we have a call to dsa_allocate_extended() without the
DSA_ALLOC_HUGE flag failing the sanity check that surely no one wants
a GB of memory at once. In hash joins, we deliberately avoid making
our hash table bucket array long enough to hit that, since commit
86a2218e, and all other data is allocated in small chunks. So the
only way to hit this would be with an individual tuple that takes 1GB
to store. Other paths that use DSA include bitmap heap scans, but
they use the DSA_ALLOC_HUGE flag at least in one place.
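For reference, the guard being tripped looks roughly like this (paraphrased
from dsa_allocate_extended() in dsa.c; the exact wording may differ between
versions, and MaxAllocSize is 0x3fffffff, i.e. one byte short of 1GB):

    /* Reject requests above MaxAllocSize unless DSA_ALLOC_HUGE was passed. */
    if (((flags & DSA_ALLOC_HUGE) != 0 && !AllocHugeSizeIsValid(size)) ||
        ((flags & DSA_ALLOC_HUGE) == 0 && !AllocSizeIsValid(size)))
        elog(ERROR, "invalid DSA memory alloc request size %zu", size);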
> max_parallel_workers | 1000
> max_parallel_workers_per_gather | 52
> work_mem | 6291456
6GB * 52 workers = 312GB. I can see how we can get up to some
largish quantities of memory here, but I don't yet see how we try to
make an individual allocation of 1GB.
From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: karen(dot)talarico(at)swarm64(dot)com, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2021-03-13 10:59:34
Message-ID: CA+hUKGKYggD1k4Btd4S7HFfXdGJfmvxhv15LtoXpKAxm-GFYDg@mail.gmail.com
Lists: pgsql-bugs
On Sat, Mar 13, 2021 at 11:21 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> On Sat, Mar 13, 2021 at 8:12 AM PG Bug reporting form
> <noreply(at)postgresql(dot)org> wrote:
> > Using TPC-H benchmark scale-factor 1000. To recreate dataset, see
> > https://github.com/swarm64/s64da-benchmark-toolkit. Use psql_native
> > schema.
>
> I'd like to reproduce this but it may take me some time. Can you
> please show the query plan?
>
> > 2021-03-12 19:45:37.352 CET [316243] ERROR: XX000: invalid DSA memory alloc
> > request size 1073741824
>
> So, this means we have a call to dsa_allocate_extended() without the
> DSA_ALLOC_HUGE flag failing the sanity check that surely no one wants
> a GB of memory at once. In hash joins, we deliberately avoid making
> our hash table bucket array long enough to hit that, since commit
> 86a2218e, and all other data is allocated in small chunks. So the
> only way to hit this would be with an individual tuple that takes 1GB
> to store. Other paths that use DSA include bitmap heap scans, but
> they use the DSA_ALLOC_HUGE flag at least in one place.
>
> > max_parallel_workers | 1000
> > max_parallel_workers_per_gather | 52
Another way to make a very large single allocation is with a parallel
hash join that has a large number of partitions and participants. Are
you in a position to debug and test patched versions? It'd be
interesting to know whether ExecParallelHashJoinSetUpBatches() is the
location; it does:
    pstate->batches =
        dsa_allocate0(hashtable->area,
                      EstimateParallelHashJoinBatch(hashtable) * nbatch);
If so, setting aside the question of whether it's really sane to run
with so many batches, the solution could be to use
dsa_allocate_extended(..., DSA_ALLOC_HUGE | DSA_ALLOC_ZERO).
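In other words, something along these lines (just a sketch of the idea,
assuming the allocation above really is the one overflowing; not a
tested patch):

    pstate->batches =
        dsa_allocate_extended(hashtable->area,
                              EstimateParallelHashJoinBatch(hashtable) * nbatch,
                              DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);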
From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Cc: Karen Talarico <karen(dot)talarico(at)swarm64(dot)com>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2021-03-17 13:06:32
Message-ID: CA+hUKGJWKafbqQ3+uByP-Ydb4poX+0DPJCTh3bbcneJkbLKV1w@mail.gmail.com
Lists: pgsql-bugs
On Sat, Mar 13, 2021 at 11:59 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> > > 2021-03-12 19:45:37.352 CET [316243] ERROR: XX000: invalid DSA memory alloc
> > > request size 1073741824
After an off-list exchange with Karen and a colleague, who ran this with
the ERROR changed to a PANIC and examined the smoldering core, the
problem turns out to be a failure to keep the hash table bucket array
<= MaxAllocSize in one code path. Although commit 86a2218e fixed
another version of that problem a while ago, the limit can still be
exceeded... by one byte... when we expand from one batch to many.
Will propose a fix.
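    /*
     * Context for the excerpt below: apparently the bucket-count
     * calculation in the one-to-many batch expansion path
     * (ExecParallelHashIncreaseNumBatches() in nodeHash.c), with the
     * Min() clamp against MaxAllocSize being the proposed addition.
     */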
    dbuckets = Min(dbuckets,
                   MaxAllocSize / sizeof(dsa_pointer_atomic));
    new_nbuckets = (int) dbuckets;
    new_nbuckets = Max(new_nbuckets, 1024);
    new_nbuckets = 1 << my_log2(new_nbuckets);
From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Cc: Karen Talarico <karen(dot)talarico(at)swarm64(dot)com>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2021-03-18 09:21:34
Message-ID: CA+hUKGLYXpGmbWVKjsCJuWahynGORy1Afwra3ZJyk6YWQ16Vaw@mail.gmail.com
Lists: pgsql-bugs
On Thu, Mar 18, 2021 at 2:06 AM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> On Sat, Mar 13, 2021 at 11:59 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> > > > 2021-03-12 19:45:37.352 CET [316243] ERROR: XX000: invalid DSA memory alloc
> > > > request size 1073741824
>
> After an off-list exchange with Karen and a colleague, who ran this with
> the ERROR changed to a PANIC and examined the smoldering core, the
> problem turns out to be a failure to keep the hash table bucket array
> <= MaxAllocSize in one code path. Although commit 86a2218e fixed
> another version of that problem a while ago, the limit can still be
> exceeded... by one byte... when we expand from one batch to many.
> Will propose a fix.
Here's a standalone reproducer with the right parameters to reach this
error, and a simple fix. (Definitely room for more improvements in
this area of code... but that'll have to be a project for later.)
===8<===
shared_buffers=2GB
fsync=off
max_wal_size=10GB
min_dynamic_shared_memory=2GB
===8<===
create table bigger_than_it_looks as
select generate_series(1, 256000000) as id;
alter table bigger_than_it_looks set (autovacuum_enabled = 'false');
alter table bigger_than_it_looks set (parallel_workers = 1);
analyze bigger_than_it_looks;
update pg_class set reltuples = 5000000 where relname = 'bigger_than_it_looks';
===8<===
postgres=# set work_mem = '4.5GB';
SET
postgres=# explain analyze select count(*) from bigger_than_it_looks
t1 join bigger_than_it_looks t2 using (id);
ERROR: invalid DSA memory alloc request size 1073741824
CONTEXT: parallel worker
===8<===
Attachment: 0001-Fix-oversized-memory-allocation-in-Parallel-Hash-Joi.patch (text/x-patch, 1.9 KB)
From: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Cc: "a(dot)rybakina" <a(dot)rybakina(at)postgrespro(dot)ru>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2023-12-06 04:46:08
Message-ID: 9277414b-dbce-4a32-8aff-642e399e23e5@postgrespro.ru
Lists: pgsql-bugs
On 18/3/2021 16:21, Thomas Munro wrote:
> ===8<===
> shared_buffers=2GB
> fsync=off
> max_wal_size=10GB
> min_dynamic_shared_memory=2GB
> ===8<===
> create table bigger_than_it_looks as
> select generate_series(1, 256000000) as id;
> alter table bigger_than_it_looks set (autovacuum_enabled = 'false');
> alter table bigger_than_it_looks set (parallel_workers = 1);
> analyze bigger_than_it_looks;
> update pg_class set reltuples = 5000000 where relname = 'bigger_than_it_looks';
> ===8<===
> postgres=# set work_mem = '4.5GB';
> SET
> postgres=# explain analyze select count(*) from bigger_than_it_looks
> t1 join bigger_than_it_looks t2 using (id);
> ERROR: invalid DSA memory alloc request size 1073741824
> CONTEXT: parallel worker
> ===8<===
This bug still annoyingly interrupts the queries of some clients. Maybe
complete this work?
It is stable and reproduces on all PG versions. The case:
work_mem = '2GB'
test table:
-----------
CREATE TABLE bigger_than_it_looks AS
SELECT generate_series(1, 512E6) AS id;
ALTER TABLE bigger_than_it_looks SET (autovacuum_enabled = 'false');
ALTER TABLE bigger_than_it_looks SET (parallel_workers = 1);
ANALYZE bigger_than_it_looks;
UPDATE pg_class SET reltuples = 5000000
WHERE relname = 'bigger_than_it_looks';
The number of parallel workers affects how much memory the hash table
is allowed to use, and in that sense correlates with the work_mem value
needed to reproduce the bug (keep in mind also that hash_mem_multiplier
has changed recently).
Query:
SELECT sum(a.id)
FROM bigger_than_it_looks a
JOIN bigger_than_it_looks b ON a.id = b.id
LEFT JOIN bigger_than_it_looks c ON b.id = c.id;
Any query that uses a Parallel Hash Join will do here. The case is as
follows: the first batch contains a lot of tuples (after the batch
increase, about 67 million). We calculate the number of buckets needed,
approximately 134 million (134217728). Since sizeof(dsa_pointer_atomic)
is 8 in my case, the bucket array overflows the maximum DSA request
size that can be allocated (1073741823 bytes).
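To make the arithmetic concrete, here is a tiny standalone illustration
(not PostgreSQL code; it assumes sizeof(dsa_pointer_atomic) == 8 and the
standard MaxAllocSize of 0x3fffffff):

    #include <stdio.h>

    int main(void)
    {
        const unsigned long max_alloc = 0x3fffffff;   /* MaxAllocSize = 1073741823 */
        const unsigned long nbuckets  = 1UL << 27;    /* ~134 million buckets, rounded up to a power of two */
        const unsigned long request   = nbuckets * 8; /* 8 == sizeof(dsa_pointer_atomic), assumed */

        /* Prints: request 1073741824 exceeds MaxAllocSize 1073741823 by 1 byte(s) */
        printf("request %lu exceeds MaxAllocSize %lu by %lu byte(s)\n",
               request, max_alloc, request - max_alloc);
        return 0;
    }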
See the new patch in the attachment.
--
regards,
Andrei Lepikhov
Postgres Professional
Attachment: 0001-Bugfix.-Guard-total-number-of-hash-table-buckets.patch (text/plain, 1.4 KB)
From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>, "a(dot)rybakina" <a(dot)rybakina(at)postgrespro(dot)ru>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2023-12-10 02:41:13
Message-ID: CA+hUKGK6CHrAxgnjt2UM5oMMwMKmzNvqfC8QXWkxo50ag1u0jA@mail.gmail.com
Lists: pgsql-bugs
On Wed, Dec 6, 2023 at 5:46 PM Andrei Lepikhov
<a(dot)lepikhov(at)postgrespro(dot)ru> wrote:
> On 18/3/2021 16:21, Thomas Munro wrote:
> > ===8<===
> > shared_buffers=2GB
> > fsync=off
> > max_wal_size=10GB
> > min_dynamic_shared_memory=2GB
> > ===8<===
> > create table bigger_than_it_looks as
> > select generate_series(1, 256000000) as id;
> > alter table bigger_than_it_looks set (autovacuum_enabled = 'false');
> > alter table bigger_than_it_looks set (parallel_workers = 1);
> > analyze bigger_than_it_looks;
> > update pg_class set reltuples = 5000000 where relname = 'bigger_than_it_looks';
> > ===8<===
> > postgres=# set work_mem = '4.5GB';
> > SET
> > postgres=# explain analyze select count(*) from bigger_than_it_looks
> > t1 join bigger_than_it_looks t2 using (id);
> > ERROR: invalid DSA memory alloc request size 1073741824
> > CONTEXT: parallel worker
> > ===8<===
>
> This bug still annoyingly interrupts the queries of some clients. Maybe
> complete this work?
Ugh, sorry. We had a report, a repro and a candidate patch a couple
of years ago, but I somehow completely forgot about it. I have now
added a CF entry (#4689).
From: Alena Rybakina <a(dot)rybakina(at)postgrespro(dot)ru>
To: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2023-12-10 23:36:26
Message-ID: e62d301f-6c98-43cc-a303-ebcafb1e51d2@postgrespro.ru
Lists: pgsql-bugs
Hi! Thank you for your work on the subject.
On 11.12.2023 02:30, Alena Rybakina wrote:
>
> On 06.12.2023 07:46, Andrei Lepikhov wrote:
>> On 18/3/2021 16:21, Thomas Munro wrote:
>>> ===8<===
>>> shared_buffers=2GB
>>> fsync=off
>>> max_wal_size=10GB
>>> min_dynamic_shared_memory=2GB
>>> ===8<===
>>> create table bigger_than_it_looks as
>>> select generate_series(1, 256000000) as id;
>>> alter table bigger_than_it_looks set (autovacuum_enabled = 'false');
>>> alter table bigger_than_it_looks set (parallel_workers = 1);
>>> analyze bigger_than_it_looks;
>>> update pg_class set reltuples = 5000000 where relname =
>>> 'bigger_than_it_looks';
>>> ===8<===
>>> postgres=# set work_mem = '4.5GB';
>>> SET
>>> postgres=# explain analyze select count(*) from bigger_than_it_looks
>>> t1 join bigger_than_it_looks t2 using (id);
>>> ERROR: invalid DSA memory alloc request size 1073741824
>>> CONTEXT: parallel worker
>>> ===8<===
>>
>> This bug still annoyingly interrupts the queries of some clients.
>> Maybe complete this work?
>> It is stable and reproduces on all PG versions. The case:
>> work_mem = '2GB'
>>
>> test table:
>> -----------
>> CREATE TABLE bigger_than_it_looks AS
>> SELECT generate_series(1, 512E6) AS id;
>> ALTER TABLE bigger_than_it_looks SET (autovacuum_enabled = 'false');
>> ALTER TABLE bigger_than_it_looks SET (parallel_workers = 1);
>> ANALYZE bigger_than_it_looks;
>> UPDATE pg_class SET reltuples = 5000000
>> WHERE relname = 'bigger_than_it_looks';
>>
>> The number of parallel workers affects how much memory the hash table
>> is allowed to use, and in that sense correlates with the work_mem value
>> needed to reproduce the bug (keep in mind also that hash_mem_multiplier
>> has changed recently).
>>
>> Query:
>> SELECT sum(a.id)
>> FROM bigger_than_it_looks a
>> JOIN bigger_than_it_looks b ON a.id = b.id
>> LEFT JOIN bigger_than_it_looks c ON b.id = c.id;
>>
>> Any query that uses a Parallel Hash Join will do here. The case is as
>> follows: the first batch contains a lot of tuples (after the batch
>> increase, about 67 million). We calculate the number of buckets needed,
>> approximately 134 million (134217728). Since sizeof(dsa_pointer_atomic)
>> is 8 in my case, the bucket array overflows the maximum DSA request
>> size that can be allocated (1073741823 bytes).
>> See the new patch in the attachment.
I've looked through your code and haven't seen any errors yet, but I
think we could rewrite these lines of code as follows:
-    dbuckets = ceil(dtuples / NTUP_PER_BUCKET);
-    dbuckets = Min(dbuckets, max_buckets);
-    new_nbuckets = (int) dbuckets;
-    new_nbuckets = Max(new_nbuckets, 1024);
+    dbuckets = Min(ceil(dtuples / NTUP_PER_BUCKET), max_buckets);
+    new_nbuckets = Max((int) dbuckets, 1024);
I have attached a diff file with the proposed changes to this email.
--
Regards,
Alena Rybakina
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment: code_refactoring.diff.txt (text/plain, 808 bytes)
From: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>, "a(dot)rybakina" <a(dot)rybakina(at)postgrespro(dot)ru>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2023-12-11 02:26:33
Message-ID: 5192824e-370f-432e-918f-6b62cead4ae3@postgrespro.ru
Lists: pgsql-bugs
On 10/12/2023 09:41, Thomas Munro wrote:
> On Wed, Dec 6, 2023 at 5:46 PM Andrei Lepikhov
> <a(dot)lepikhov(at)postgrespro(dot)ru> wrote:
>> This bug still annoyingly interrupts the queries of some clients. Maybe
>> complete this work?
>
> Ugh, sorry. We had a report, a repro and a candidate patch a couple
> of years ago, but I somehow completely forgot about it. I have now
> added a CF entry (#4689).
Thanks. I think the Parallel Hash Join code is worth exploring for extreme
cases anyway, but in this case we have quite a clear bug and must fix it.
--
regards,
Andrei Lepikhov
Postgres Professional
From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>, "a(dot)rybakina" <a(dot)rybakina(at)postgrespro(dot)ru>
Subject: Re: BUG #16925: ERROR: invalid DSA memory alloc request size 1073741824 CONTEXT: parallel worker
Date: 2024-01-07 07:31:44
Message-ID: CAPpHfdvVE8d2MynYHHZe2Nw2eoc9env4QVoTHYATsXcAvU3KVQ@mail.gmail.com
Lists: pgsql-bugs
On Mon, Dec 11, 2023 at 4:26 AM Andrei Lepikhov
<a(dot)lepikhov(at)postgrespro(dot)ru> wrote:
> On 10/12/2023 09:41, Thomas Munro wrote:
> > On Wed, Dec 6, 2023 at 5:46 PM Andrei Lepikhov
> > <a(dot)lepikhov(at)postgrespro(dot)ru> wrote:
> >> This bug still annoyingly interrupts the queries of some clients. Maybe
> >> complete this work?
> >
> > Ugh, sorry. We had a report, a repro and a candidate patch a couple
> > of years ago, but I somehow completely forgot about it. I have now
> > added a CF entry (#4689).
>
> Thanks. I think the Parallel Hash Join code is worth discovering extreme
> cases anyway, but in that case, we have quite a clear bug and must fix it.
Thank you. I've pushed this and backpatched it down to PG 12, with some editing of my own.
------
Regards,
Alexander Korotkov