Functions v1.3
This reference section lists the functions available with Analytics Accelerator.
Refer to PGFS functions for Pipelines for details on how to create storage locations for PGAA.
Table functions
pgaa.lakehouse_table_stats()
Returns storage statistics for a specific analytical table, including the size of the latest active snapshot and the total cumulative size of all data versions (historical data and logs) stored in object storage.
Synopsis
SELECT * FROM pgaa.lakehouse_table_stats('table_name'::regclass);
Parameters
| Parameter | Type | Description |
|---|---|---|
| relation | REGCLASS | The name or OID of the analytical table to investigate. |
Returns
| Column | Type | Description |
|---|---|---|
| latest_snapshot_size | BIGINT | The size in bytes of the latest active snapshot of the table. |
| total_size | BIGINT | The total size in bytes of all files associated with the table in object storage, including metadata, transaction logs, and historical snapshots. |
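For example, to see how much historical and log data a table carries beyond its active snapshot (the table name sales.orders is a hypothetical placeholder):
SELECT latest_snapshot_size,
       total_size,
       total_size - latest_snapshot_size AS historical_and_log_bytes
FROM pgaa.lakehouse_table_stats('sales.orders'::regclass);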
Catalog functions
pgaa.add_catalog()
Registers a new Iceberg catalog with PGAA. This function performs an automated connection check to validate credentials and accessibility before the catalog is registered in the system.
Synopsis
SELECT pgaa.add_catalog('catalog_name', 'catalog_type', 'catalog_options'::json);
Parameters
| Parameter | Type | Description |
|---|---|---|
| catalog_name | VARCHAR | A unique name for the catalog within PGAA. |
| catalog_type | pgaa.catalog_type | The catalog type. Supported values are iceberg-rest (Iceberg REST catalog) and iceberg-s3tables (AWS S3 Tables). |
| catalog_options | JSON | A JSON object containing the connection and authentication parameters. |
The fields required in catalog_options depend on the catalog type.
For REST catalogs
{ "url": "https://your-catalog-endpoint.com", "warehouse_name": "your_warehouse_name", "warehouse": "your_warehouse_id", "token": "your_secret_auth_token" "danger_accept_invalid_certs": "false" }
Where:
- url: The base HTTP(S) endpoint of the REST catalog service.
- warehouse_name: A human-readable alternative to the warehouse ID, supported by some REST providers for easier configuration.
- warehouse: The unique identifier for the specific warehouse within the catalog service.
- token: The secret authentication token used to authorize requests to the catalog service.
- danger_accept_invalid_certs: If set to true, Postgres skips SSL certificate validation. Use this only for internal testing or with self-signed certificates; never use it for sensitive public connections.
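For example, registering a REST catalog with these options (all values are placeholders):
SELECT pgaa.add_catalog(
  'my_rest_catalog',
  'iceberg-rest',
  '{"url": "https://your-catalog-endpoint.com", "warehouse": "your_warehouse_id", "token": "your_secret_auth_token"}'::json
);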
For AWS S3 Tables:
{ "arn": "arn:aws:s3tables:us-east-1:1234567890:bucket/my-bucket", "region": "us-east-1" }
Where:
- arn: Specifies the Amazon Resource Name (ARN), the unique identifier for your S3 Table bucket.
- region: Specifies the physical AWS data center location where your S3 Table bucket resides.
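For example, registering an S3 Tables catalog (the ARN and region are placeholders):
SELECT pgaa.add_catalog(
  'my_s3tables_catalog',
  'iceberg-s3tables',
  '{"arn": "arn:aws:s3tables:us-east-1:1234567890:bucket/my-bucket", "region": "us-east-1"}'::json
);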
Returns
Returns the name of the catalog on success.
pgaa.update_catalog()
Updates the configuration options (the JSON object) for an existing Iceberg catalog. Like pgaa.add_catalog(), this function performs a validation check to ensure the new connection parameters are functional before applying the changes.
Synopsis
SELECT * FROM pgaa.update_catalog('catalog_name', 'new_options'::json);
Parameters
| Parameter | Type | Description |
|---|---|---|
| catalog_name | VARCHAR | The name of the existing catalog to update. |
| new_options | JSON | A JSON object containing the updated connection and authentication parameters. |
See pgaa.add_catalog() for a detailed breakdown of the required JSON fields for each catalog type.
Returns
Returns the name of the catalog upon successful update.
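For example, rotating the authentication token of a REST catalog. Because it's not stated here whether new_options are merged with or replace the existing options, this sketch passes the complete options object (all values are placeholders):
SELECT * FROM pgaa.update_catalog(
  'my_rest_catalog',
  '{"url": "https://your-catalog-endpoint.com", "warehouse": "your_warehouse_id", "token": "my_new_rotated_token"}'::json
);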
pgaa.delete_catalog()
Removes a registered catalog from the database.
Synopsis
SELECT * FROM pgaa.delete_catalog('catalog_name');
Parameters
| Parameter | Type | Description |
|---|---|---|
| catalog_name | VARCHAR | The name of the catalog to be removed. |
Returns
Returns the name of the deleted catalog upon successful completion.
pgaa.list_catalogs()
Returns a list of all registered catalogs in the system, including their connection configuration, metadata synchronization timestamps, and current operational status.
Synopsis
SELECT * FROM pgaa.list_catalogs();
Parameters
None.
Returns
| Column | Type | Description |
|---|---|---|
| name | TEXT | The name of the catalog. |
| type | pgaa.catalog_type | The catalog type. Supported values are iceberg-rest (Iceberg REST catalog) and iceberg-s3tables (AWS S3 Tables). |
| options | JSON | The connection parameters (URL, ARN, etc.) used for this catalog. |
| status | pgaa.catalog_status | The current health of the catalog (detached, attached, refresh_retry, or refresh_failed). |
| created_at | TIMESTAMPTZ | The timestamp when the catalog was first registered. |
| last_refreshed_at | TIMESTAMPTZ | The last time PGAA successfully synced metadata from this catalog. |
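For example, to spot catalogs whose metadata synchronization is failing (assuming at least one catalog is registered):
SELECT name, status, last_refreshed_at
FROM pgaa.list_catalogs()
WHERE status IN ('refresh_retry', 'refresh_failed');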
pgaa.import_catalog()
Performs a one-time scan and import of table definitions from a registered Iceberg catalog into Postgres. This function creates the local metadata required for PGAA to query the remote tables. This is a manual, one-time import; it does not enable automatic, continuous synchronization.
Synopsis
SELECT pgaa.import_catalog('catalog_name');
Parameters
| Parameter | Type | Description |
|---|---|---|
| catalog_name | VARCHAR | The name of the previously registered catalog to import from. |
Returns
None.
pgaa.attach_catalog()
Enables continuous metadata synchronization for a previously registered Iceberg catalog (using pgaa.add_catalog()). Once attached, PGAA automatically monitors the remote catalog for changes and updates the local Postgres metadata accordingly. See Catalog synchronization for polling rate configuration.
Synopsis
SELECT pgaa.attach_catalog('catalog_name');
Parameters
| Parameter | Type | Description |
|---|---|---|
| catalog_name | VARCHAR | The name of the registered catalog to start synchronizing. |
Returns
None.
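A typical sequence registers the catalog once and then attaches it (all values are placeholders):
-- One-time registration
SELECT pgaa.add_catalog(
  'my_rest_catalog',
  'iceberg-rest',
  '{"url": "https://your-catalog-endpoint.com", "warehouse": "your_warehouse_id", "token": "your_secret_auth_token"}'::json
);
-- Enable continuous metadata synchronization
SELECT pgaa.attach_catalog('my_rest_catalog');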
pgaa.detach_catalog()
Stops continuous metadata synchronization for a registered Iceberg catalog and moves it to a detached state.
Synopsis
SELECT * FROM pgaa.detach_catalog('catalog_name');
Parameters
| Parameter | Type | Description |
|---|---|---|
| catalog_name | VARCHAR | The name of the registered catalog to detach. |
Returns
The function returns the row from the pgaa.catalog system table for the catalog being detached.
| Column | Type | Description |
|---|---|---|
| name | TEXT | The name of the detached catalog. |
| type | pgaa.catalog_type | The catalog type. |
| status | pgaa.catalog_status | The new status, which will be detached. |
pgaa.test_catalog()
Tests the connectivity and configuration of a registered Iceberg catalog. This function verifies that the Postgres instance can communicate with the remote catalog endpoint and, optionally, validates that the provided credentials have write permissions.
Synopsis
SELECT pgaa.test_catalog('catalog_name', test_writes:=true);
Parameters
| Parameter | Type | Description |
|---|---|---|
| name | TEXT | The name of the registered catalog to test. |
| test_writes | BOOLEAN | If true, the function attempts a write operation to the catalog metadata service to verify permissions. If false, only read permissions are tested. |
Returns
Returns NULL if the test is successful. Returns a descriptive error message if the test fails.
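For example, a read-only connectivity check (the catalog name is a placeholder):
SELECT pgaa.test_catalog('my_rest_catalog', test_writes := false);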
Spark functions
pgaa.spark_sql()
Executes a Spark SQL query directly on your Postgres cluster via the configured Spark Connect endpoint. This allows you to run Iceberg compaction routines or Spark functions that aren't available in Postgres.
See Spark procedures for a list of the available procedures.
To run this function, you must set the configuration parameter pgaa.spark_connect_url to point to an available Spark Connect service.
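For example, the parameter can be set for the current session (the endpoint is a placeholder; Spark Connect endpoints conventionally use the sc:// scheme):
SET pgaa.spark_connect_url = 'sc://spark-connect-host:15002';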
Synopsis
For a single catalog:
SELECT pgaa.spark_sql('query', 'catalog_name');
For multiple catalogs:
SELECT pgaa.spark_sql('query', ARRAY['catalog1', 'catalog2']);
Parameters
| Parameter | Type | Description |
|---|---|---|
| query | TEXT | The Spark SQL statement to execute. |
| catalog | TEXT or TEXT[] | A single catalog name, or an array of catalog names to use for the query. |
Returns
The result set of the Spark query formatted as a JSON object.
Example
Reduce metadata overhead via the rewrite_data_files Spark procedure:
SELECT pgaa.spark_sql($$
  CALL preexisting.system.rewrite_data_files(
    table => '"preexisting"."ns-1"."table-1"',
    strategy => 'sort',
    sort_order => 'value DESC',
    options => map('rewrite-all', 'true')
  )
$$);
Background task functions
pgaa.launch_task()
Schedules a background maintenance task for an analytical Delta table.
Synopsis
SELECT pgaa.launch_task(
  'table_name'::regclass,
  'task_type',
  'task_options'::jsonb,
  'scheduled_at'::timestamp
);
Parameters
| Parameter | Type | Description |
|---|---|---|
| table_name | REGCLASS | The name or OID of the analytical table to run the task on. |
| task_type | TEXT | The maintenance operation: compaction, zorder, vacuum, or purge. |
| task_options | JSONB | Configuration specific to the task type (see below). |
| scheduled_at | TIMESTAMP | If provided, the task waits until this time to execute. Default is NULL. |
The valid JSONB task_options depend on the task_type. All options are optional, except that zorder requires columns and purge requires both storage_location and path.
compaction: Merges small files into larger ones to speed up analytical scans. The available task_options are:
{
  "target_size": 536870912,
  "preserve_insertion_order": true,
  "max_concurrent_tasks": 10,
  "max_spill_size": 2147483648,
  "min_commit_interval": 60
}
Where:
- target_size: Specifies the size of the output files in bytes.
- preserve_insertion_order: Whether to maintain the existing sort order of rows.
- max_concurrent_tasks: Limits the number of parallel tasks the executor can run.
- max_spill_size: Sets the maximum data size in bytes allowed to spill to disk during the process.
- min_commit_interval: Sets the minimum wait time in seconds between committing updates to the Delta log.
zorder: A clustering technique that reorganizes data across multiple columns to improve "data skipping" for queries with filters on those columns. The available task_options are:
{
  "columns": ["customer_id", "transaction_date"],
  "target_size": 1073741824,
  "preserve_insertion_order": false,
  "max_concurrent_tasks": 4,
  "max_spill_size": 2147483648,
  "min_commit_interval": 30
}
Where:
- columns: (Required) An array of strings representing the columns to be used for Z-ordering.
- target_size: Specifies the size of the output files in bytes.
- preserve_insertion_order: Whether to maintain the existing sort order of rows.
- max_concurrent_tasks: Limits the number of parallel tasks the executor can run.
- max_spill_size: Sets the maximum data size in bytes allowed to spill to disk during the process.
- min_commit_interval: Sets the minimum wait time in seconds between committing updates to the Delta log.
vacuum: Deletes old data files that are no longer referenced by the Delta transaction logs, freeing up space in object storage. The available task_options are:
{
  "retention_period": "168 hours",
  "dry_run": false,
  "enforce_retention_duration": true
}
Where:
- retention_period: Defines the age at which unreferenced files become eligible for deletion.
- dry_run: If true, calculates and logs the files that would be deleted, but performs no deletions.
- enforce_retention_duration: If true, the task validates the retention_period against the system's global minimum safety limit.
purge: Explicitly removes data from a specific storage path. The available task_options are:
{
  "storage_location": "s3_main",
  "path": "archive/2023/temp/"
}
Where:
- storage_location: (Required) The name of the storage location.
- path: The relative directory path or file prefix within the storage location that should be permanently deleted.
Returns
A unique task ID for the scheduled task.
You can check the task status by querying the pgaa.background_task table with the returned task ID.
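For example, to inspect launched tasks (the column layout of pgaa.background_task isn't documented here, so this sketch selects all columns; filter on the returned task ID as appropriate):
SELECT * FROM pgaa.background_task;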
Examples
Delete unreferenced data files older than 7 days:
SELECT pgaa.launch_task(
  'sales.transactions',
  'vacuum',
  '{"retention_period": "7 days", "dry_run": false}'::jsonb
);
Perform compaction:
SELECT pgaa.launch_task(
  'telemetry.logs',
  'compaction',
  '{ "target_size": 536870912, "max_concurrent_tasks": 2 }'::jsonb
);
Reorganize data rows by clustering on columns region and customer_id:
SELECT pgaa.launch_task(
  'crm.customers',
  'zorder',
  '{ "columns": ["region", "customer_id"], "target_size": 1073741824 }'::jsonb
);
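No purge example appears above; here is a minimal sketch reusing the option values from the purge description, with a hypothetical table name and a scheduled start one hour out:
SELECT pgaa.launch_task(
  'archive.events',
  'purge',
  '{"storage_location": "s3_main", "path": "archive/2023/temp/"}'::jsonb,
  (now() + interval '1 hour')::timestamp
);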