Re: Recommendations on improving the insert on conflict do nothing performance

From: Durgamahesh Manne <maheshpostgres9(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org, pgsql-in-general(at)postgresql(dot)org
Subject: Recommendations on improving the insert on conflict do nothing performance
Date: 2024-09-11 08:53:04
Message-ID: CAJCZkoKi5RqJ0Y+_D_D_yhdLXcCiiCG4uBdui1Mgd0wC=EM4oA@mail.gmail.com
Lists: pgsql-general pgsql-in-general

Hi
insert into
dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing

8 vCPUs and 32 GB RAM
Calls per second: 1600, at 42% CPU utilization
Max: 33.62 ms per call
Avg: 0.17 ms per call
                                               Table "dictionary.dictionary"
     Column     |           Type           | Collation | Nullable | Default  | Storage  | Compression | Stats target | Description
----------------+--------------------------+-----------+----------+----------+----------+-------------+--------------+-------------
 lang           | text                     |           | not null |          | extended |             |              |
 tid            | text                     |           | not null |          | extended |             |              |
 basetid        | text                     |           | not null |          | extended |             |              |
 sportid        | text                     |           |          |          | extended |             |              |
 brandid        | text                     |           | not null |          | extended |             |              |
 translatedtext | text                     |           |          |          | extended |             |              |
 objecttype     | text                     |           |          |          | extended |             |              |
 createdat      | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modified       | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modifiedby     | text                     |           | not null | ''::text | extended |             |              |
 version        | integer                  |           | not null | 0        | plain    |             |              |
Indexes:
    "pk_dictionary" PRIMARY KEY, btree (lang, tid)
    "idx_dictionary_basetid" btree (basetid)
    "idx_dictionary_brandid" btree (brandid)
    "idx_dictionary_objecttype" btree (objecttype)
    "idx_dictionary_sportid" btree (sportid)
Triggers:
    i_dictionary_createdat BEFORE INSERT ON dictionary FOR EACH ROW EXECUTE FUNCTION update_createdat_col()
    i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary FOR EACH ROW EXECUTE FUNCTION update_modified_col()
Access method: heap
How do we improve this query's performance without using more CPU?

Regards,
Durga Mahesh


From: Muhammad Usman Khan <usman(dot)k(at)bitnine(dot)net>
To: Durgamahesh Manne <maheshpostgres9(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org, pgsql-in-general(at)postgresql(dot)org
Subject: Re: Recommendations on improving the insert on conflict do nothing performance
Date: 2024-09-12 04:35:38
Message-ID: CAPnRvGu_Zdf7rq+5EjRdcGZHwd711g=Cg2EpUZk6QwqKWKxzHw@mail.gmail.com
Lists: pgsql-general pgsql-in-general

Hi,
You can use the following approaches for optimization:

- Instead of inserting one row at a time, perform bulk (multi-row) inserts,
  which reduce the overhead of each individual statement and transaction
  (see the sketch after this list)
- Partitioning can improve write performance by splitting the data into
  smaller, more manageable chunks
- Tune the Postgres configuration, for example:
    work_mem = '16MB'
    shared_buffers = '8GB'
    effective_cache_size = '24GB'
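
A minimal sketch of the multi-row form, assuming the application can batch
rows before sending them; the column list matches the original statement, and
the batch size (three rows) is illustrative only:

    -- Hypothetical batched variant of the original statement: one statement
    -- and one round trip carry several rows instead of one.
    INSERT INTO dictionary
        (lang, tid, sportid, brandid, translatedtext, objecttype, basetid)
    VALUES
        ($1,  $2,  $3,  $4,  $5,  $6,  $7),
        ($8,  $9,  $10, $11, $12, $13, $14),
        ($15, $16, $17, $18, $19, $20, $21)
    ON CONFLICT DO NOTHING;

Likewise, a purely illustrative partitioning sketch; the hash key (lang,
chosen because it is part of the primary key) and the modulus of 4 are
assumptions, not recommendations for this workload:

    -- Hypothetical hash-partitioned copy of the table.
    CREATE TABLE dictionary_hash (
        LIKE dictionary INCLUDING DEFAULTS INCLUDING CONSTRAINTS
    ) PARTITION BY HASH (lang);

    CREATE TABLE dictionary_hash_0 PARTITION OF dictionary_hash
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);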

On Wed, 11 Sept 2024 at 13:50, Durgamahesh Manne <maheshpostgres9(at)gmail(dot)com>
wrote:

> [quoted message trimmed]


From: Durgamahesh Manne <maheshpostgres9(at)gmail(dot)com>
To: Muhammad Usman Khan <usman(dot)k(at)bitnine(dot)net>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org, pgsql-in-general(at)postgresql(dot)org
Subject: Re: Recommendations on improving the insert on conflict do nothing performance
Date: 2024-09-12 17:03:10
Message-ID: CAJCZkoLK0i2FDyZgvJiJVJJkhs_Ao4tK2RmEGRTgczaPNZKvcg@mail.gmail.com
Lists: pgsql-general pgsql-in-general

Hi Muhammad Usman Khan,

I have already set the required parameter values. The issue here turned out to
be the triggers, and I have resolved it.
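
A guess at what the trigger fix might look like, assuming (this is an
assumption, not stated in the thread) that update_createdat_col() and
update_modified_col() only stamp the timestamp columns that already have
DEFAULT now():

    -- Hypothetical: the column default already fills createdat on insert,
    -- so the BEFORE INSERT trigger does redundant per-row work.
    DROP TRIGGER i_dictionary_createdat ON dictionary;

    -- Hypothetical: fire the modified-timestamp trigger only on UPDATE,
    -- letting DEFAULT now() cover inserts.
    DROP TRIGGER i_dictionary_modified ON dictionary;
    CREATE TRIGGER i_dictionary_modified
        BEFORE UPDATE ON dictionary
        FOR EACH ROW EXECUTE FUNCTION update_modified_col();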

Regards
Durga Mahesh

On Thu, Sep 12, 2024 at 10:05 AM Muhammad Usman Khan <usman(dot)k(at)bitnine(dot)net>
wrote:

> [quoted message trimmed]