Performing FULL VACUUM in Greenplum efficiently


Greenplum is an MPP (massively parallel processing) data warehouse platform based on the PostgreSQL database. In this post we discuss one of the most important and most common maintenance tasks that needs to be performed periodically on the platform: removing table bloat.

What causes bloat?

The Greenplum platform is ACID compliant. The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if the transactions were executed serially, i.e., one after the other. Providing isolation is the main goal of concurrency control. Greenplum (PostgreSQL, really) implements isolation with Multi Version Concurrency Control (MVCC). In normal PostgreSQL operation, rows that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it is necessary to VACUUM periodically, especially on frequently updated tables. Failure to do this table maintenance, which allows the space to be reused, causes the table's data files to keep growing, and scans of the table take longer.
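Before deciding how to clean up, it helps to measure the bloat first. One way (a sketch, assuming the gp_toolkit schema that ships with Greenplum is installed in your database) is to query the gp_bloat_diag view, which compares the actual page count of each heap table against the expected page count:

```sql
-- List heap tables that gp_toolkit considers bloated,
-- worst offenders first.
SELECT bdinspname  AS schema_name,
       bdirelname  AS table_name,
       bdirelpages AS actual_pages,
       bdiexppages AS expected_pages,
       bdidiag     AS diagnosis
  FROM gp_toolkit.gp_bloat_diag
 ORDER BY bdirelpages - bdiexppages DESC;
```

Note that the view relies on up-to-date statistics, so run ANALYZE on candidate tables before trusting its numbers.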

Options to clean up the bloat

VACUUM FULL

This is the slowest method of the lot. When the VACUUM FULL command is executed, the rows in the table are physically moved and compacted. If the table contains a large number of rows, this reshuffling of data makes the run time very unpredictable.

A bloated catalog table (i.e., any table under the pg_catalog schema) can only be cleaned with this method, so removing bloat from highly bloated catalog tables can be time-consuming.
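For example, to remove bloat from a catalog table that churns heavily under frequent DDL (the table chosen here is just an illustration), VACUUM FULL is the only option:

```sql
-- Catalog tables cannot be recreated with CTAS or redistribution,
-- so VACUUM FULL is the only way to reclaim their space.
-- Run this in a maintenance window: it takes an exclusive lock
-- on the table and blocks concurrent DDL while it runs.
VACUUM FULL pg_catalog.pg_attribute;
```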

Create Table As Select (CTAS)

CTAS is a quick method to remove bloat. It also avoids the exclusive table lock (ACCESS EXCLUSIVE) that the VACUUM FULL method acquires, allowing end users to keep querying the original table while maintenance is being performed on it.

The disadvantage is that it involves many steps to perform the activity:

  1. Obtain the DDL of the table using the pg_dump -s -t <bloated-table-name> <database> command. This creates a script with all the sub-objects, such as indexes, that are associated with the table. It also provides the list of grants on the table.
  2. Once the DDL is obtained, replace the bloated table name with a new table name using find/replace in any editor of your choice, and then execute the file with psql to create the new objects in the database.
  3. Then follow the steps in the code below:
INSERT INTO <schema-name>.<new-table-name> SELECT * FROM <schema-name>.<bloated-table-name>;
ALTER TABLE <schema-name>.<bloated-table-name> RENAME TO <table-name-to-drop>;
ALTER TABLE <schema-name>.<new-table-name> RENAME TO <bloated-table-name>;
-- Once users confirm everything is good, drop the old (renamed) table.
DROP TABLE <schema-name>.<table-name-to-drop>;
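Step 2 can also be scripted rather than done in an editor. A rough sketch, assuming a hypothetical bloated table sales.orders being rebuilt as sales.orders_new in database mydb (adjust the sed pattern carefully if the table name appears inside other identifiers):

```shell
# Dump the DDL (schema only) of the bloated table.
pg_dump -s -t sales.orders mydb > orders_ddl.sql

# Replace the old table name with the new one (GNU sed word boundaries),
# then create the new objects in the database.
sed 's/\borders\b/orders_new/g' orders_ddl.sql > orders_new_ddl.sql
psql -d mydb -f orders_new_ddl.sql
```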

Backup and Restore

This is another way to clear up the bloat: back up the tables and then restore them. Tools that can be used to achieve this are:

  1. gpcrondump / gpdbrestore
  2. gp_dump / gp_restore
  3. pg_dump / pg_restore
  4. COPY .. TO .. / COPY .. FROM ..
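As a sketch of option 4, assuming a hypothetical table myschema.events and a file path with enough free space on the master host, the data can be unloaded to a flat file and reloaded (TRUNCATE replaces the table's data files, which is what actually discards the bloat):

```sql
-- Unload the table data to a flat file.
COPY myschema.events TO '/data/backup/events.dat';
-- Truncate to drop the bloated data files, then reload.
TRUNCATE myschema.events;
COPY myschema.events FROM '/data/backup/events.dat';
-- Refresh statistics after the reload.
ANALYZE myschema.events;
```

Make sure the unloaded file is intact before truncating, and wrap the TRUNCATE/COPY pair in a transaction if users may be querying the table.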

Redistribution

This is one of the quickest methods of the list. When you execute a redistribute command, the database internally creates a new data file and loads the existing data into it. Once the load is done, it removes the old file, effectively eliminating the bloat. The following script automates this process by building a redistribution script that preserves the distribution key order.

SELECT 'ALTER TABLE '||n.nspname||'.'||c.relname||' SET with (reorganize=false) DISTRIBUTED RANDOMLY;\n'||
       'ALTER TABLE '||n.nspname||'.'||c.relname||' SET with (reorganize=true) DISTRIBUTED'||
       CASE WHEN length(dist_col) > 0 THEN ' BY ('||dist_col||');\n'
            ELSE ' RANDOMLY;\n'
       END||'ANALYZE '||n.nspname||'.'||c.relname||';\n'
  FROM pg_class c, pg_namespace n,
       (SELECT pc.oid,
               string_agg(attname, ', ' ORDER BY colorder) AS dist_col
          FROM pg_class AS pc
          LEFT JOIN (SELECT localoid,
                            unnest(attrnums) AS colnum,
                            generate_series(1, array_upper(attrnums, 1)) AS colorder
                       FROM gp_distribution_policy
                    ) AS d
            ON (d.localoid = pc.oid)
          LEFT JOIN pg_attribute AS a
            ON (d.localoid = a.attrelid AND d.colnum = a.attnum)
         GROUP BY pc.oid
       ) AS keys
 WHERE c.oid = keys.oid
   AND c.relnamespace = n.oid
   AND c.relkind = 'r'
   AND c.relstorage = 'h'
   -- Following filter takes out all partitions, we care just about the master table
   AND c.oid NOT IN (SELECT inhrelid FROM pg_inherits)
   -- Skip system schemas
   AND n.nspname NOT IN ('pg_catalog', 'information_schema', 'gp_toolkit')
   AND n.nspname NOT LIKE 'pg_temp%';

When we execute this script against a database, it produces SQL statements that clean up the bloat when run back against the same database.

psql -t gpadmin -f fullvacuum.sql |sed 's/\s*$//'

ALTER TABLE public.testtable SET with (reorganize=false) DISTRIBUTED RANDOMLY;
ALTER TABLE public.testtable SET with (reorganize=true) DISTRIBUTED BY (n);
ANALYZE public.testtable;
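To apply the generated statements, capture them to a file and run that file back against the same database. A minimal sketch (file names and the gpadmin database are just examples):

```shell
# Generate the cleanup script, trimming trailing whitespace.
psql -t gpadmin -f fullvacuum.sql | sed 's/\s*$//' > cleanup.sql

# Review cleanup.sql, then execute the ALTER/ANALYZE statements.
psql -d gpadmin -f cleanup.sql
```

Running the output through a review step first is worthwhile, since each ALTER rewrites the whole table and should be scheduled accordingly.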
