Overview
To make this trade‑off explicit, you choose whether to keep the existing behavior or turn on newer fixes that slightly change how data is processed. Use the `config` block in your sync config YAML to choose the behavior. There are two ways to turn fixes on:

- Set an `edition` to enable the full set of fixes for that edition. This is the recommended approach for new projects.
- Toggle individual options for more fine‑grained control.
Configuration
For new projects, it is recommended to enable all current fixes by setting `edition: <edition>`:
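As a minimal sketch (assuming `3` is the current edition; substitute the edition number recommended for your Service version):

```yaml
# sync-config.yaml
config:
  edition: 3
```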
Sync Streams Requirement
New Sync Streams configurations should use `edition: 3`, which enables the new compiler with an expanded SQL feature set (including JOIN, CTEs, multiple queries per stream, BETWEEN, CASE, and more):
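For example (nesting `edition` under the `config` block described above):

```yaml
# sync-config.yaml
config:
  edition: 3
```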
Upgrading from alpha: If you have existing Sync Streams using `edition: 2`, upgrade to `edition: 3` to enable the new compiler and its expanded SQL feature set. See Supported SQL for the full list of supported features.

Storage version

The PowerSync Service stores replicated bucket data in bucket storage. That data uses a storage version that can evolve when you deploy new Sync Streams or Sync Rules. This versioning approach avoids large upfront migrations on existing bucket data when the Service introduces bigger storage changes. Each time your sync config is deployed and processed, the bucket data written for that deployment uses a specific storage version.

Optional config.storage_version
You can pin the bucket storage version by setting it under the config block:
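As a sketch, assuming you want to pin to the stable version `2` (even numbers denote stable formats):

```yaml
# sync-config.yaml
config:
  storage_version: 2
```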
When to set storage_version explicitly
In most deployments you can omit storage_version. The PowerSync Service then uses the latest stable storage version it supports. You should only set this field if you need more control, e.g.:
- Service downgrade: If you need to run an older Service version that only supports up to a given storage version, deploy sync config with that `storage_version`, wait until reprocessing for that deployment has finished, then downgrade the Service.
- Experiments: Opt into an odd, unstable storage version in non-production environments.
- Delaying a storage upgrade: Change other sync config while keeping bucket data on an older stable storage version until you are ready for the newer format.
Stable and experimental versions
The service distinguishes stable and experimental storage versions as follows:

- Even numbers (for example `2`, `4`) denote stable formats. Once a stable version is supported, newer Service releases are expected to keep supporting it until it is officially deprecated.
- Odd numbers (for example `3`) denote unstable formats. The layout may change without notice and support may be removed in a future release. Use odd versions only for testing, not production.
Supported fixes
The fixes currently supported are described below.

timestamps_iso8601
PowerSync is supposed to encode timestamps according to the ISO-8601 standard.
Without this fix, the service encoded timestamps from MongoDB and Postgres source databases incorrectly.
To ensure time values from Postgres compare lexicographically, they’re also padded to six fractional digits when encoded.
Since MongoDB only stores values with millisecond accuracy, only three fractional digits are used.
For instance, the value 2025-09-22T14:29:30 would be encoded as follows:
- For Postgres: `2025-09-22 14:29:30` without the fix, `2025-09-22T14:29:30.000000` with the fix applied.
- For MongoDB: `2025-09-22 14:29:30.000` without the fix, `2025-09-22T14:29:30.000` with the fix applied.
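If you are not using an edition that bundles this fix, the individual toggle could look like the following sketch (the boolean-key shape under `config` is an assumption based on the per-option toggles described in the Overview):

```yaml
# sync-config.yaml
config:
  timestamps_iso8601: true
```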
Configurable sub-second datetime precision
When the `timestamps_iso8601` option is enabled, PowerSync will synchronize date and time values with a higher precision depending on the source database.
You can use the timestamp_max_precision option to configure the actual precision to use.
For instance, a Postgres timestamp value would sync as 2025-09-22T14:29:30.000000 by default.
If you don’t want that level of precision, you can set the `timestamp_max_precision` option in your sync config to make values sync as 2025-09-22T14:29:30.000 instead.
Supported values for `timestamp_max_precision` are `seconds`, `milliseconds`, `microseconds`, and `nanoseconds`. When an explicit
value is given, all synced time values will use that precision.
If a source value has a higher precision, it will be truncated (it is not rounded).
If a source value has a lower precision, it will be padded (so setting the option to microseconds with a MongoDB source database
will sync values as 2025-09-22T14:29:30.123000, with the last three sub-second digits always being set to zero).
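Putting this together, capping the precision at milliseconds could look like the following sketch (the option name and values come from the text above; the nesting under `config` is an assumption):

```yaml
# sync-config.yaml
config:
  timestamp_max_precision: milliseconds
```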
If no option is given, the default precision depends on the source database:
| Source database | Default precision | Max precision | Notes |
|---|---|---|---|
| MongoDB | Milliseconds | Milliseconds | |
| Postgres | Microseconds | Microseconds | |
| MySQL | Milliseconds | Microseconds | Defaults to milliseconds, but can be expanded with the option. |
| SQL Server | Nanoseconds | Nanoseconds | SQL Server supports 7 digits of accuracy; the sync service pads values to 9 digits for nanoseconds. |
versioned_bucket_ids
Sync Rules define buckets, to which the rows to sync are then assigned. When you run a full defragmentation or
redeploy Sync Rules, the same bucket identifiers are re-used when processing the data again.
Because the second iteration uses different checksums for the same bucket ids, clients may sync data
twice before realizing that something is off and starting from scratch.
Applying this fix improves client-side progress estimation and is more efficient, since data does not get
downloaded twice.
For how bucket identifiers are represented in bucket storage at the persistence layer (including automatic use of versioned bucket names with newer storage formats), see Storage version.
fixed_json_extract
This fixes the `json_extract` functions as well as the `->` and `->>` operators in Sync Rules to behave similarly
to recent SQLite versions: a path is only split on `.` if it starts with `$.`.
For instance, `json_extract('{"foo.bar": "baz"}', 'foo.bar')` would evaluate to:

- `baz` with the option enabled.
- `null` with the option disabled.
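As an illustration of where this matters in practice, here is a hypothetical Sync Rules query using the `->>` operator (the bucket, table, and column names are invented for this sketch):

```yaml
# sync-rules.yaml
bucket_definitions:
  global:
    data:
      # With fixed_json_extract enabled, the path 'foo.bar' has no leading $.
      # and is therefore treated as a single key rather than being split on the dot.
      - SELECT id, metadata ->> 'foo.bar' AS foo_bar FROM items
```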
custom_postgres_types
If you have custom Postgres types in your backend source database schema, older versions of the PowerSync Service
would not recognize these values and would sync them using the textual wire representation from Postgres.
This is especially noticeable when defining DOMAIN types with e.g. a REAL inner type: The wrapped
DOMAIN type should get synced as a real value as well, but it would actually get synced as a string.
With this fix applied:
- DOMAIN types are synced as their inner type.
- Array types of custom types get parsed correctly, and sync as a JSON array.
- Custom types get parsed and synced as a JSON object containing their members.
- Ranges sync as a JSON object corresponding to the following TypeScript definition:
- Multi-ranges sync as an array of ranges.