Every Data Query Must Use Every Bucket Parameter
Data Queries are used to group data into buckets, so each Data Query must use every bucket parameter.
Why does each Data Query have to use every bucket parameter?
When PowerSync performs incremental replication from your source database, it evaluates every row/document it receives and computes the list of buckets that the row/document belongs to. This allows PowerSync to efficiently update only the specific buckets affected by each change event. PowerSync uses the Data Queries in the Sync Rules bucket definitions to determine which rows/documents belong to which buckets. Therefore, if a bucket parameter could be left out of the WHERE clause of a Data Query, the bucket IDs to which a row/document belongs would be ambiguous: we would have to assume "all possible values" for the unused parameter, and the row/document would have to be exploded into many buckets. To avoid this, PowerSync imposes the constraint that every Data Query must use every parameter defined on the bucket.

Supported SQL
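As a sketch of this constraint, consider a hypothetical bucket definition with two parameters (all bucket, table, and column names below are illustrative, not from the PowerSync docs):

```yaml
bucket_definitions:
  user_org_data:
    # The parameter query defines two bucket parameters: user_id and org_id
    parameters: SELECT request.user_id() as user_id, org_id FROM memberships WHERE user_id = request.user_id()
    data:
      # Valid: the WHERE clause uses both bucket.user_id and bucket.org_id
      - SELECT * FROM todos WHERE owner_id = bucket.user_id AND org_id = bucket.org_id
      # Invalid: this query would use only bucket.org_id, leaving
      # bucket.user_id unused, so PowerSync rejects it:
      # - SELECT * FROM projects WHERE org_id = bucket.org_id
```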
The supported SQL in Data Queries is based on a small subset of standard SQL syntax. Not all SQL constructs are supported. See Supported SQL for full details.

Examples
Grouping by Parameter Query Values
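A minimal sketch of grouping data by values returned from a parameter query (table and column names are illustrative): the parameter query yields one bucket per list owned by the user, and each Data Query groups rows into those buckets via bucket.list_id.

```yaml
bucket_definitions:
  owned_lists:
    # One bucket per list owned by the requesting user
    parameters: SELECT id as list_id FROM lists WHERE owner_id = request.user_id()
    data:
      # Every Data Query uses the single bucket parameter, list_id
      - SELECT * FROM lists WHERE id = bucket.list_id
      - SELECT * FROM todos WHERE list_id = bucket.list_id
```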
Selecting Output Columns/Fields
When specific columns/fields are selected, only those columns/fields are synced to the client. This is good practice: it ensures the synced data does not unintentionally change when new columns are added to the schema (in the case of Postgres) or to the data structure (in the case of MongoDB). Note: An id column must always be present, and must have a text type. If the primary key is different, use a column alias and/or transformations to output a text id column.
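For example, assuming a hypothetical Postgres table todos whose primary key column is todo_id (illustrative names), specific columns can be selected and the primary key aliased to id:

```yaml
bucket_definitions:
  user_todos:
    parameters: SELECT request.user_id() as user_id
    data:
      # Only the listed columns are synced; the primary key is aliased to id
      - SELECT todo_id as id, description, completed FROM todos WHERE owner_id = bucket.user_id
```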
MongoDB uses _id as the name of the ID field in collections. Therefore, PowerSync requires using SELECT _id as id in the data queries when using MongoDB as the backend source database.
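A sketch of the same Data Query against a MongoDB source collection (collection and field names are illustrative), with the required _id alias:

```yaml
bucket_definitions:
  user_todos:
    parameters: SELECT request.user_id() as user_id
    data:
      # MongoDB: alias _id to id, as PowerSync requires
      - SELECT _id as id, description, completed FROM todos WHERE owner_id = bucket.user_id
```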