  
=====VACUUMING=====
Vacuuming should be part of routine database maintenance, and there are optimal ways to go about it. Below are some performance tips for PostgreSQL vacuuming.
====Don’t run manual VACUUM or ANALYZE without reason====
Database administrators should refrain from running manual vacuums too often on the entire database, as the autovacuum process might already have optimally vacuumed the target database. As a result, a manual vacuum may not remove any dead tuples, but it will still cause unnecessary I/O load or CPU spikes.
Manual vacuums should be run on a table-by-table basis, and only when necessary: for example, when the ratio of live to dead rows is low, or when there are large gaps between autovacuum operations. They should also be run when user activity is minimal. A query for spotting such tables is sketched below.
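
The cumulative statistics views make such candidates easy to find. A minimal sketch (the 20% dead-row ratio is an illustrative threshold, not an official recommendation):

  -- Tables with a high share of dead tuples, and when autovacuum last ran on them
  SELECT relname,
         n_live_tup,
         n_dead_tup,
         last_autovacuum
  FROM   pg_stat_user_tables
  WHERE  n_dead_tup > 0.2 * n_live_tup
  ORDER  BY n_dead_tup DESC;

A table surfaced by this query can then be vacuumed individually, e.g. with VACUUM (VERBOSE, ANALYZE) <table_name>;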
Autovacuum also keeps a table’s data distribution statistics up-to-date (it doesn’t rebuild them). When run manually, the ANALYZE command rebuilds these statistics instead of updating them. Rebuilding statistics that are already optimally updated by regular autovacuum runs puts unnecessary pressure on system resources.
The time when you must run ANALYZE manually is immediately after bulk loading data into the target table. A large number (even a few hundred) of new rows in an existing table will significantly skew its column data distribution and make any existing column statistics out-of-date. When the query optimizer uses such statistics, query performance can be very poor.
In these cases, running the ANALYZE command immediately after a data load to rebuild the statistics completely is better than waiting for autovacuum to kick in.
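
As a sketch, assuming a hypothetical sales_fact table being bulk-loaded from a CSV file:

  -- Bulk load, then rebuild statistics right away instead of waiting for autovacuum
  COPY sales_fact FROM '/data/sales_fact.csv' WITH (FORMAT csv, HEADER true);
  ANALYZE sales_fact;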
====Select VACUUM FULL only when performance degrades badly====
The autovacuum functionality doesn’t recover the disk space taken up by dead tuples. Running a VACUUM FULL command will do so, but it has performance implications: the target table is exclusively locked during the operation, preventing even reads, and the process makes a full copy of the table, which requires extra disk space while it runs. We recommend running VACUUM FULL only if there is a very high percentage of bloat and queries are suffering badly, and only during periods of lowest database activity.
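
A minimal sketch against a hypothetical badly bloated table:

  -- Exclusively locks the table and rewrites it; run only during a quiet window
  VACUUM (FULL, VERBOSE) bloated_table;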

====Fine-tune Autovacuum Threshold====
It’s essential to check or tune the autovacuum and analyze configuration parameters, either in the postgresql.conf file or in individual table properties, to strike a balance between autovacuum overhead and performance gain.
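The current values can be reviewed from any session via the pg_settings view:

  -- List all autovacuum-related configuration parameters and their current values
  SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';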
PostgreSQL uses two configuration parameters to decide when to kick off an autovacuum:
  * autovacuum_vacuum_threshold: default value of 50
  * autovacuum_vacuum_scale_factor: default value of 0.2

Together, these parameters tell PostgreSQL to start an autovacuum when the number of dead rows in a table exceeds the number of rows in that table multiplied by the scale factor, plus the vacuum threshold. In other words, PostgreSQL will start autovacuum on a table when:
  pg_stat_user_tables.n_dead_tup > (pg_class.reltuples x autovacuum_vacuum_scale_factor) + autovacuum_vacuum_threshold

This may be sufficient for small to medium-sized tables. For example, in a table with 10,000 rows, the number of dead rows has to exceed 2,050 ((10,000 x 0.2) + 50) before an autovacuum kicks off.
Not every table in a database experiences the same rate of data modification. Usually, a few large tables experience frequent data modifications, resulting in a higher number of dead rows. The default values may not work for such tables. For example, with the defaults, a table with 1 million rows must have more than 200,050 dead rows before an autovacuum starts ((1,000,000 x 0.2) + 50). This can mean longer gaps between autovacuums, increasingly long autovacuum times, and, worse, autovacuum not running at all if active transactions on the table are locking it.
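
To see how close each table is to that trigger point, a query along the following lines can help (the defaults 0.2 and 50 are hard-coded here for illustration; per-table overrides are not taken into account):

  -- Dead tuples vs. the default autovacuum trigger point, per table
  SELECT s.relname,
         s.n_dead_tup,
         round(c.reltuples * 0.2 + 50) AS autovacuum_trigger
  FROM   pg_stat_user_tables s
  JOIN   pg_class c ON c.oid = s.relid
  ORDER  BY s.n_dead_tup DESC;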

Therefore, the goal should be to set these thresholds to optimal values so that autovacuums happen at regular intervals and don’t take a long time (affecting user sessions), while keeping the number of dead rows relatively low.

One approach is to rely on just one of the two parameters. If we set autovacuum_vacuum_scale_factor to 0 and instead set autovacuum_vacuum_threshold to, say, 5,000, a table will be autovacuumed whenever its number of dead rows exceeds 5,000, as the sketch below shows.
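
A minimal sketch of that approach, applied to a hypothetical high-churn orders table:

  -- Trigger autovacuum on a fixed dead-row count instead of a fraction of the table
  ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 5000);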

====Fine-tune Autoanalyze Threshold====
Similar to autovacuum, autoanalyze uses two parameters that decide when autovacuum will also trigger an autoanalyze:
  * autovacuum_analyze_threshold: default value of 50
  * autovacuum_analyze_scale_factor: default value of 0.1

Like autovacuum_vacuum_threshold, the autovacuum_analyze_threshold parameter can be set to a value that dictates the number of inserted, deleted, or updated tuples in a table before an autoanalyze starts. We recommend setting this parameter separately on large and high-transaction tables. The table-level configuration will override the postgresql.conf values.
The code snippet below shows the SQL syntax for modifying the autovacuum_analyze_threshold setting for a table.
  ALTER TABLE <table_name> SET (autovacuum_analyze_threshold = <threshold_rows>);
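
For example, on a hypothetical high-transaction orders table, deferring autoanalyze until 10,000 tuples have changed might look like this:

  ALTER TABLE orders SET (autovacuum_analyze_threshold = 10000);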

====Fine-tune Autovacuum Workers====
Another parameter often overlooked is autovacuum_max_workers, which has a default value of 3. Autovacuum is not a single process but a number of individual vacuum threads running in parallel. The reason for specifying multiple workers is to ensure that vacuuming large tables doesn’t hold up the vacuuming of smaller tables and user sessions. The autovacuum_max_workers parameter tells PostgreSQL how many autovacuum worker threads to spin up for the cleanup.
A common practice among PostgreSQL DBAs is to increase the number of maximum worker threads to speed up autovacuum. This doesn’t work, because all the threads share the same autovacuum_vacuum_cost_limit, which has a default value of 200. Each autovacuum thread is assigned a cost limit using the formula shown below:

  individual thread’s cost_limit = autovacuum_vacuum_cost_limit / autovacuum_max_workers
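
For example, with the default autovacuum_vacuum_cost_limit of 200 and the default 3 workers, each thread runs with a budget of roughly 200 / 3 ≈ 66, so raising autovacuum_max_workers alone only shrinks each worker’s share.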

The cost of work done by an autovacuum thread is calculated using three parameters:
  * vacuum_cost_page_hit: default value of 1
  * vacuum_cost_page_miss: default value of 10
  * vacuum_cost_page_dirty: default value of 20

What these parameters mean is this:
  * When a vacuum thread finds the data page it’s supposed to clean in the shared buffers, the cost is 1.
  * If the data page is not in the shared buffers but in the OS cache, the cost is 10.
  * If the page has to be marked dirty because the vacuum thread had to delete dead rows, the cost is 20.
An increased number of worker threads lowers the cost limit for each thread. As each thread is assigned a lower cost limit, it goes to sleep more often because the cost threshold is reached sooner, ultimately causing the whole vacuum process to run slowly. We recommend increasing autovacuum_vacuum_cost_limit to a higher value, like 2000, and then adjusting the maximum number of worker threads.
A better way is to tune these parameters for individual tables, and only when necessary. For example, if the autovacuum of a large transactional table is taking too long, the table may be temporarily configured to use its own vacuum cost limit and cost delay. The cost limit and delay will override the system-wide values set in postgresql.conf.
The code snippet below shows how to configure individual tables.

  ALTER TABLE <table_name> SET (autovacuum_vacuum_cost_limit = <large_value>);
  ALTER TABLE <table_name> SET (autovacuum_vacuum_cost_delay = <lower_cost_delay>);

Using the first parameter will ensure the autovacuum thread assigned to the table performs more work before going to sleep. Lowering autovacuum_vacuum_cost_delay will also mean the thread sleeps for less time.
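
Since this is meant as a temporary measure, a concrete (hypothetical) sequence might set the overrides, let autovacuum catch up, and then reset them:

  -- Temporary per-table overrides for a large transactional table
  ALTER TABLE orders SET (autovacuum_vacuum_cost_limit = 2000, autovacuum_vacuum_cost_delay = 2);
  -- ...later, once bloat is back under control, return to the system-wide settings
  ALTER TABLE orders RESET (autovacuum_vacuum_cost_limit, autovacuum_vacuum_cost_delay);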
  
===Examples===